Can organisms sense objects without having mental states?



Staph bacteria. Picture courtesy of Janice Carr/CDC via BBC.

Granting that all cellular organisms (including bacteria) have sensory capacities, does it follow that they possess mental states? Are bacteria aware of the world around them? Or is there a distinction to be made between sensation and perception?

The two questions we have to address here are:

(1) Does having a sensation entail having a belief, which needs to be described within an agent-centred intentional stance, or is a goal-centred intentional stance adequate to describe sensation?

(2) Does a sensation need to be described within a first-person intentional stance, or is a third-person stance adequate to describe it?

1. Does sensation imply belief?

Sorabji (1993, pp. 12-14, 35-38) convincingly demonstrates that Aristotle (De Anima 3.3, 428a18-24) steadfastly refused to attribute belief (doxa) to animals, despite their possession of sensory perception (aisthesis). However, Sorabji also points out that Aristotle's usage of both terms differs in important ways from the English terms "sensory perception" and "belief". First, perception for Aristotle has a propositional content: we can have the perception that the sun is a foot across (De Anima 3.3, 428b3-10). In English usage, by contrast, we speak of sensory perception as having an object but not a propositional content. Second, for Aristotle, there can be no meaningful ascription of belief without the possibility of conviction and self-persuasion (De Anima 3.3, 428a18-24), whereas the same cannot be said for our English word "belief": "The nervous examinee who believes that 1066 is the date of the Battle of Hastings may, through nervousness, not be convinced, and need not have been persuaded" (Sorabji, 1993, p. 37).


Aristotle.

As we saw above, Aristotle envisaged sensory perception (aisthesis) as having a propositional content, whereas in English usage, it is typically said to have an object. Aristotle then proposed an ingenious and oft-cited argument against the identification of sensory perception with belief: we can perceive the sun to be quite small, only a foot across, while believing it to be very large (De Anima 3.3, 428b3-10). While the argument certainly shows that the perceptual appearance that P cannot be equated to a belief that P, someone who wished to assimilate perceptions to beliefs could still argue that the appearance equates to some other belief (Sorabji, 1993, p. 36).

However, the methodology I proposed above for investigating mental states suggests a more general argument against envisaging sensory perceptions as primitive beliefs. It was argued that we should use a mind-neutral intentional stance to describe an organism's sensory behaviour, unless there was a scientific advantage in invoking mental states. An organism's ability to discriminate between beneficial and harmful stimuli, which enables it to move towards the former and away from the latter, can be described using a mind-neutral goal-centred intentional stance, which explains the organism's behaviour in terms of its information about its built-in goals. There is no scientific advantage in describing such an organism as having the belief that the object it senses is something it wants, as the hypothesis does no extra explanatory work.
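
To see why the goal-centred stance does all the explanatory work here, consider a minimal sketch (in Python; the class, the valence table and all numerical values are invented for illustration, not drawn from any biological model) of a bacterium-like agent. Its entire behavioural repertoire - advancing up beneficial gradients, retreating down harmful ones - is specified by sensor readings plus built-in goals; nowhere does the model need to represent what the agent believes.

```python
# A toy, mind-neutral, goal-centred description of sensory behaviour.
# The agent has built-in goals (fixed valences) and sensors that encode
# information about its surroundings; no beliefs appear in the model.

from dataclasses import dataclass

@dataclass
class Stimulus:
    name: str        # e.g. "nutrient" or "toxin"
    gradient: float  # positive if intensity increases ahead of the agent

class GoalCentredAgent:
    # Built-in goals: innate valences, not acquired beliefs.
    VALENCE = {"nutrient": +1.0, "toxin": -1.0}

    def sense(self, stimulus: Stimulus) -> float:
        """Encode information about the stimulus (cf. Conclusion S.2)."""
        return stimulus.gradient

    def act(self, stimulus: Stimulus) -> str:
        """Move towards beneficial stimuli and away from harmful ones."""
        signal = self.sense(stimulus) * self.VALENCE.get(stimulus.name, 0.0)
        if signal > 0:
            return "advance"
        if signal < 0:
            return "retreat"
        return "tumble"  # no relevant information: change direction at random

agent = GoalCentredAgent()
print(agent.act(Stimulus("nutrient", gradient=+0.2)))  # advance
print(agent.act(Stimulus("toxin", gradient=+0.1)))     # retreat
```

Since this description already predicts the agent's entire repertoire, adding the hypothesis "the agent believes the nutrient is something it wants" would do no further explanatory work.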

Following the methodology we have adopted, we may conclude:

S.3 The possession by an organism of sensors which encode information about its surroundings is an insufficient warrant for saying that the organism is capable of having beliefs.

2. Does sensation imply conscious awareness?

The other outstanding question is whether having a sensation can be accounted for using a third-person intentional stance or whether a first-person stance is required.

The ambiguity of Aristotle's aisthesis

Although Aristotle did not have a single word like our term "mind" - one covering perception, emotion, imagination, memory, reason and choice, while excluding nutrition and growth - there can be little doubt that he envisaged sensing (aisthesis) in mentalistic terms. This is shown by his remarks that both perceiving (aisthesis) and thought belong to a faculty of discernment (krinein) (De Anima 3.9, 432a16), and that since all animals possess some form of perception (i.e. touch), they must also be capable of pleasure, pain and desire (De Anima 2.3, 414b1ff).

Because Aristotle's usage of the term aisthesis conflates two meanings - a capacity to discriminate between stimuli (which could be described in terms of a mind-neutral third-person intentional stance), and the subjective experience of a stimulus (which needs to be described in terms of a first-person intentional stance) - we may conclude that his term aisthesis is too ambiguous to assist us in determining which intentional stance is most appropriate for describing sensation.

In another famous passage (De Anima 3.2, 425b11-25), Aristotle also argued that since we perceive that we are seeing, it must either be by sight that we do so, or by some other sense. Since the latter alternative involves us in a potential infinite regress, Aristotle suggested that sight is not a simple sense, but comprises both seeing and perceiving that one is seeing - which suggests that he envisaged the activity of sensing as some kind of subjective state. He does not seem to have addressed the issue of whether there could be organisms that can see but cannot see that they are seeing, which is what is being proposed here for bacteria. Additionally, Aristotle's use of the term "perception" to describe my being aware that I am seeing obscures the fact that this "perception" is a high-level cognitive state - perhaps "realising" would be a better word - which requires that I have a self-concept, as well as a concept of my cognitive states (as distinguished from their objects). It is debatable whether any non-human animal (let alone a bacterium) is capable of such feats.

Contemporary distinctions between different kinds of consciousness

Modern philosophers distinguish the following senses of the popular term "conscious", most of which do not require a first-person intentional stance.

The term "consciousness" may be imputed to a creature (e.g. when we say that a bird is conscious), or to one of its mental states (e.g. when we say that its perception of a worm is conscious). Accordingly, philosophers, following Rosenthal (1986), draw a distinction between creature consciousness and state consciousness.

Creature consciousness comes in two varieties: intransitive and transitive. We can say that a bird is conscious simpliciter if it is awake and not asleep or comatose. We can also say that it is conscious of something that it is capable of responding to - e.g. a wriggling worm that looks good to eat. The occurrence of sleep has been verified in animals such as fruit flies (Fox, 2004), although research remains incomplete and the criteria for sleep vary across species. Since all cellular organisms possess sensory capacities (Conclusion S.2), we can formulate the following conclusions:

S.4 The possession by an organism of sensors which encode information about events in its surroundings is a sufficient warrant for saying that the organism has transitive creature consciousness of those events.

S.5 All cellular organisms have transitive (but not necessarily intransitive) creature consciousness.

Additionally, an animal with transitive creature consciousness can be said to be outwardly conscious of an object outside its mind (e.g. a worm), or inwardly conscious of an experience inside its mind (e.g. an unpleasant sensation). At this stage of our enquiry it is not clear which organisms can be said to be inwardly conscious of their experiences.

State consciousness, by contrast, can only be intransitive. As Dretske (1997) puts it:

States ... aren't conscious of anything. They are just conscious (or unconscious) full stop.

Ned Block (1997) criticises the concept of state consciousness as a mongrel concept, and proposes a distinction between two different types of state consciousness: access consciousness and phenomenal consciousness - although some philosophers have queried the explanatory relevance of this distinction (Silby, 1998). A mental state is access-conscious if it is poised to be used for the direct (i.e. ready-to-go) rational control of thought and action. Phenomenally conscious states are states with a subjective feel or phenomenology which, according to Block, we cannot define but can immediately recognise in ourselves.

A creature with phenomenal consciousness is said to be sentient.

Finally, neuroscientists distinguish between primary consciousness (also called "core consciousness" or "feeling consciousness") - a moment-to-moment awareness of sensory experiences and some internal states - and higher-order consciousness, also known as "extended consciousness" or "self-awareness" (Rose, 2002).

The debate about animal consciousness is not a debate about creature consciousness, as all cellular organisms (including bacteria) are capable of responding to events occurring in their surroundings. Rather, what is at stake is state consciousness, and in particular, phenomenal consciousness, which roughly corresponds to what Rose calls primary consciousness. In the discussion that follows, I shall use the term "conscious" to mean "phenomenally conscious".

Dennett's distinction between sensitivity and sentience


Daniel Dennett. Photo courtesy of University of California.

Dennett (1997) allows that the sensitivity exhibited by bacteria can be described using the intentional stance, whereby each sensor is regarded as a micro-agent, awaiting the arrival of an incoming signal. However, he prefers not to call this sensitivity "sentience", principally because animals possess body-maintenance systems with similar functions, which operate even when they are asleep or comatose (1997, pp. 87-88). There is, however, a behavioural disanalogy between bacteria and sleeping animals. Even the humblest bacterium is capable of performing the activities required to protect itself, flourish and reproduce, in accordance with its telos - such as seeking food, avoiding harmful stimuli, and exchanging chemical messages with other individuals - whereas an animal is incapable of performing actions such as feeding or reproducing while asleep, and can only react to a limited range of stimuli. Dennett's exclusion of bacteria from the domain of sentient beings therefore requires further argumentative support.

The modern debate

The contemporary philosophical debate regarding sensory perception is split into several camps, whose conflicting intuitions centre on the following four jointly inconsistent propositions (Lurz, 2003):

1. Conscious mental states are mental states of which one is conscious.
2. To be conscious of one's mental states is to be conscious that one has them.
3. Animals have conscious mental states.
4. Animals are not conscious that they have mental states.

Proponents of so-called higher order representational (HOR) theories of consciousness accept propositions 1 and 2. HOR theorists argue that a mental state (such as a perception) only becomes conscious by virtue of its being an object of creature consciousness. Perceptions, on this account, are not intrinsically conscious; they require higher-order states to make them so. These higher-order states are variously conceived as thoughts (by HOT theorists) or as inner perceptions (by HOP theorists) (Wright, 2003).

Exclusive HOR theorists like Carruthers also accept Lurz's proposition 4 above but reject proposition 3 - that is, they allow that human infants and non-human animals have a variety of first-order mental states such as beliefs, desires and perceptions, but insist (Carruthers, 2000, p. 199) that we can explain their behaviour perfectly well without attributing any conscious beliefs, desires and perceptions to them.

In his earlier work, Carruthers (1992) employed two oft-discussed human analogies to animal behaviour. First, animals were said to be like distracted drivers who can navigate their way from A to B on "autopilot". I shall say more about the "distracted driver" case below.

Second, animals were likened to human beings with blindsight, who (because of damage to their visual cortex) lack phenomenal consciousness of objects that are located in a portion of their visual fields (a "scotoma"), but are able to identify these objects when pressed to do so. This analogy is no longer tenable, following the discovery of monkeys with blindsight. It has been found that blindsighted monkeys, after being trained to signal the presence or absence of a light in their sighted visual fields, signal the (apparent) absence of a light in their blindsighted visual fields, indicating that they cannot access visual information about contrast, intensity, reflectance and wavelength (Stoerig and Cowey, 1997).

However, Carruthers questions the assumption made by some philosophers (Dretske, 1995) that blindsighted monkeys must have lost whatever blindsighted humans have lost - namely, the capacity for phenomenally conscious visual experiences. On the contrary, he argues, the monkeys' discrimination tasks reflect only their first-order judgments, and should not be construed as higher-order comments on their awareness of a light (Carruthers, 2004).

Carruthers has also been criticised (Dretske, 1995) for failing to account for why consciousness evolved in the first place, and what function it serves, if animals get by perfectly well without it. He has recently proposed that it arose as a consequence of humans' acquiring a theory-of-mind mechanism (ToMM), which allows them to think about their own thoughts - and about those of other individuals.

Regarding the usefulness of consciousness, Carruthers (2004) concedes that it has only marginal cognitive relevance, even for adult human beings: it is "almost epiphenomenal". However, it plays a role "whenever our actions manifest higher-order thought about our experiences, or depend on the distinction between is and seems" (2004). Lacking a theory-of-mind mechanism (ToMM), animals are incapable of such behaviour.

Does my cat know that it feels?

By contrast, inclusive HOR theorists, such as Rosenthal, accept Lurz's proposition 3 but reject proposition 4. Rosenthal (1986) argues that animals can have very crude thoughts about their mental states - e.g. the thought that one is having a particular sensation.

However, Rosenthal's acceptance of proposition 2 commits him to the view that an animal's awareness of its mental states is cognitive (awareness that it has them) rather than merely observational. Conscious animals must therefore be able to think about their own mental states - a position which Lurz (2003) regards as implausible, on the grounds that animals lack such mental sophistication.

Does ordinary language offer a solution?


Fred Dretske. Photo courtesy of Stanford University.

Defenders of first-order representational (FOR) accounts of consciousness, such as Dretske, accept Lurz's propositions 2, 3 and 4 but reject proposition 1. Dretske (1995), who professes to employ "entirely standard" language which reflects "how ordinary folk talk about these mental states", claims that a mental state becomes conscious simply by being an act of creature consciousness. This seems to accord well with "folk psychology": in common parlance, sensing an object entails being aware of it.

For the purposes of this discussion and in accordance with most dictionaries I regard "conscious" and "aware" as synonyms. Being conscious of a thing (or fact) is being aware of it (Dretske, 1995).

I assume, furthermore, that seeing, hearing, smelling, tasting and feeling are specific forms - sensory forms - of consciousness. Consciousness is the genus; seeing, hearing, and smelling are species (the traditional five sense modalities are not, of course, the only species of consciousness). Seeing is visual awareness...

However, an animal need not be aware of its mental states for them to be conscious. When an animal senses something it is conscious of it, without necessarily knowing what it is, or being aware that it is sensing:

You may not pay much attention to what you see, smell, or hear, but if you see, smell or hear it, you are conscious of it... (Dretske, 1995).

On Dretske's account, consciousness has a very practical function: to alert an animal to salient objects in its environment - e.g. potential mates, predators or prey:

Let an animal - a gazelle, say - who is aware of prowling lions - where they are and what they are doing - compete with one who is not and the outcome is predictable. The one who is conscious will win hands down. Reproductive prospects, needless to say, are greatly enhanced by being able to see and smell predators. That, surely, is an evolutionary answer to questions about the benefits of creature consciousness. Take away perception - as you do, when you remove consciousness - and you are left with a vegetable. You are left with an eatee, not an eater. That is why the eaters of the world (most of them anyway) are conscious (Dretske, 1995).

On methodological grounds alone, Dretske's common-sensical assertion that consciousness must have a practical function is to be preferred to Carruthers' suggestion that it is almost epiphenomenal. The hypothesis that an organismic trait has a biological value is more fruitful for scientific investigation than the hypothesis that it has none. However, there are several problems with Dretske's proposal.

First, Dretske's account is vulnerable to Chalmers' (1996) zombie argument, as one could conceive of an unconscious zombie with the same discriminatory abilities as a conscious animal. Many philosophers, however, question the relevance of thought experiments based on purely logical possibilities.

Second, there are any number of practical reasons why consciousness might have evolved. Dretske's suggestion that it arose because it enabled animals to sense predators is not entirely convincing. Being able to sense predators is of little use unless you can avoid them - e.g. by running away. Most animals are very good at doing this, at short notice. This is one thing that differentiates them from "vegetables". The ability to react rapidly to sudden, unexpected changes in one's environment is another possible reason for the evolution of consciousness.

Third, Dretske's claim that animals possess "perception" while vegetables lack it is empirically flawed: as we have seen, even bacteria have sensory capacities, and so do plants. From this, one could either conclude that all organisms (with the exception of viruses) have conscious mental states, or that Dretske is wrong in linking sensations to mental states.

Fourth, Dretske's appeal to "how ordinary folk talk" creates problems for his argument that sensing an object entails being conscious of it: in fact, we often speak of artifacts (e.g. motion detectors, heat detectors) as being able to sense objects, yet we do not credit them with awareness or consciousness. Of course, Dretske could claim that for organisms (which are profoundly different from non-living artifacts), sensing an object is a way of being conscious of it, but such a position requires argumentative justification.

Finally, Lurz (2003) argues that it seems counter-intuitive to say (as Dretske does) that an animal could have a conscious experience of which it was not conscious.

Lurz (2003) proposes what he calls a same-order (SO) account, and questions the assumption (shared by HOR and FOR theorists) that to be conscious of one's mental states is to be conscious that one has them. It would be better, Lurz suggests, to say that a creature's experiences are conscious if it is conscious of what (not that) its experiences represent.

I believe that if we are to resolve the current argumentative deadlock about animal consciousness, we might do well to set aside both our philosophical thought experiments (such as Chalmers' zombie) and our grammatical intuitions about the proper usage of the word "conscious", and focus instead on the empirical cases that underlie the arguments. In particular, recent research on attention mechanisms (discussed in Wright, 2003) sheds valuable light on animal consciousness.

Wright (2003) considers the much-discussed case of the distracted driver, who is supposedly able to navigate his car for miles despite being oblivious to his visual states. FOR theorists happily grant that the distracted driver has conscious visual states of which he is not aware; HOR theorists deny this. Wright faults both camps for being too gullible, citing three driving studies which show that driving requires a certain minimum amount of attention to the road. What really happens in "distracted driving" is that the driver pays attention to the road for some of the time, but the other matter that he is thinking about demands a much greater share of his cognitive resources, with the result that the information about the visual scene is quickly bumped from working memory and never encoded in long-term memory. Hence the driver's shock when he comes to the end of his journey.
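
Wright's diagnosis can be pictured with a toy model (a sketch only: the buffer capacity, consolidation threshold and arrival rates below are invented, not taken from the driving studies he cites). Working memory is treated as a bounded buffer; an item reaches long-term memory only if it survives in the buffer long enough to be consolidated, and a resource-hungry train of thought keeps bumping road-scene items out before that can happen.

```python
# Toy model of the "distracted driver": working memory as a bounded buffer.
# An item is encoded in long-term memory (LTM) only if it survives
# CONSOLIDATION_TICKS in the buffer; competing thoughts bump items out
# before consolidation. All numerical values are invented.
from collections import deque

CAPACITY = 4             # working-memory slots (invented)
CONSOLIDATION_TICKS = 3  # dwell time needed for LTM encoding (invented)

def simulate(thought_rate: int, ticks: int = 12) -> list:
    wm, ltm = deque(), []
    for t in range(ticks):
        wm = deque((item, age + 1) for item, age in wm)  # items age each tick
        done = {item for item, age in wm if age >= CONSOLIDATION_TICKS}
        ltm += sorted(done)                              # consolidated into LTM
        wm = deque((i, a) for i, a in wm if i not in done)
        # thought_rate competing thoughts arrive for every glance at the road
        for item in ["thought"] * thought_rate + ["road"]:
            wm.append((f"{item}@{t}", 0))
            if len(wm) > CAPACITY:
                wm.popleft()  # oldest item is bumped and never encoded
    return [i for i in ltm if i.startswith("road")]

print("attentive driver remembers: ", simulate(thought_rate=0))
print("distracted driver remembers:", simulate(thought_rate=3))  # -> []
```

The point of the toy model is only this: "no memory of the drive" is perfectly compatible with intermittent attention to the road, which is exactly Wright's diagnosis.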

Additionally, studies of inattentional blindness (IB) and change blindness (CB) refute the claim made by FOR theorists that subjects can have visual experiences that they are not attending to. Wright (2003) cites research on IB, showing that when subjects are engaged in visual tasks demanding a high degree of attention, they fail to notice unexpected objects in their field of vision, even when they occupy the same point on their visual scene as the objects they are attending to. During CB, subjects fail to notice large-scale changes in a scene directly before their eyes, because their attention is diverted to other features of the scene. The upshot is that "there seems to be no conscious perception without attention" (Wright, 2003). Dretske's (1995) assertion that "You may not pay much attention to what you see, smell, or hear, but if you see, smell or hear it, you are conscious of it" is therefore empirically wrong.

The relevance of the above research to animals should be obvious:

Conclusion E.20 Attending to an object is a necessary condition for being conscious of it.

Conclusion E.21 Only those animals that are capable of attending to objects in their surroundings can be described as having phenomenally conscious states.

Wright does not define exactly what he means by "attention". Scientific models of attention typically involve selectivity: one item is selected in favour of another, either by enhancing the selected item or by suppressing its competitors. Other models describe attention as the assignment of limited processing resources to items. Because attention reduces the amount of data to be processed, models of computer vision sometimes incorporate attentional mechanisms. Attention also allows a perceiver to select task-relevant items and suppress distractors.
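
To make the selectivity idea concrete, here is a minimal sketch (the saliency scores, gain factors and winner-take-all rule are illustrative stand-ins, not a claim about any particular model of attention): one item is selected and enhanced, its competitors are suppressed, and only the selected item is passed on for further processing.

```python
# Minimal sketch of attention as selection: a winner-take-all step that
# enhances the selected item and suppresses its competitors, so later
# processing stages see only a fraction of the sensory input.

ENHANCE, SUPPRESS = 1.5, 0.2  # illustrative gain factors

def attend(saliency: dict) -> tuple:
    """saliency: items mapped to bottom-up saliency scores."""
    winner = max(saliency, key=saliency.get)
    weighted = {item: score * (ENHANCE if item == winner else SUPPRESS)
                for item, score in saliency.items()}
    return winner, weighted

# A bird's visual scene: the wriggling worm wins the competition.
winner, weighted = attend({"worm": 0.9, "leaf": 0.4, "pebble": 0.3})
print(winner)    # 'worm' - the only item passed on for further processing
print(weighted)  # winner enhanced, distractors suppressed
```

On Conclusions E.20 and E.21, an organism would count as a candidate for phenomenal consciousness only if its sensory processing included some such selection stage, rather than passing every stimulus through uniformly.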

Aristotle's two definitions of anger (De Anima 1.1, 403a) - as a desire for revenge (first-person) and as a boiling of blood around the heart (third-person) - suggest that he regarded sensations and perceptions as one and the same thing, described from the inside (first-person stance) and from the outside (third-person stance).

Conclusions

The upshot of this discussion is that sensation and conscious awareness are indeed distinct, because we can describe an organism's sensory capacities using mind-neutral terminology: a third-person intentional stance (or, alternatively, a goal-centred stance) is adequate for the task. Following the methodology that I proposed above, the only valid reason for preferring mentalistic terminology here would be that it allows us to better explain the behaviour of organisms with sensory capacities - i.e. to make new predictions that a neutral account could not make. If we can explain and predict the repertoire of a sensitive organism's behaviour (e.g. the behaviour of a bacterium) just as well by using neutral terminology, then it serves no scientific or philosophical purpose to introduce mentalistic terminology and ascribe perceptions to it.

We can now formulate another conclusion regarding the conditions for ascribing mental states to organisms:

S.6 The possession by an organism of sensors which encode information about its surroundings is an insufficient warrant for saying that the organism is capable of cognitive mental states.

In organisms, sensors are thus a necessary (Conclusion S.2) but not sufficient condition for having mental states.
