4.1.2 Scientific and Philosophical Distinctions regarding Animal Consciousness


Scientists and philosophers have long grappled with the enigma of consciousness - especially the subjective aspect of it, which philosophers refer to as "phenomenal consciousness". Each discipline, I contend, has its own "original sin": scientists have a tendency to assume that consciousness does not exist where it cannot be measured, while philosophers are inclined to believe that rigorous analysis offers the best hope of resolving long-standing questions about the nature of consciousness. Both sins have hampered the investigation of animal consciousness. Scientists have often been reluctant to talk about consciousness outside of controlled settings where it can be measured, and have been slow to come up with measures that are suited to non-human animals. Philosophers have proposed fine-grained analytical distinctions between different "concepts of consciousness" which fail to correspond to real-world distinctions, while ignoring robust, nomic connections between consciousness and physical states in the real world, which promise to shed light on the nature of consciousness.

I argue that philosophers can play a vital role in critically evaluating scientific theories of what consciousness is for, as well as the "litmus tests" used by scientists to identify consciousness in humans and other animals. Where philosophers perform badly is in attempting to formulate their own theories of consciousness. While the distinctions drawn by philosophers between different usages of "consciousness" can do much to remove conceptual confusion and sharpen our thinking, they carry no ontological implications in and of themselves. I argue that when these analytical distinctions are applied to the real world, they often turn out to be under-defined in empirical terms, and usually do not correspond to any known distinctions in awareness between animals. They are therefore of no use in resolving the Distribution Question ("Which animals are conscious?"). Consciousness, I contend, is a subject about which there is radical uncertainty: at the present time, no-one knows how it arose, what it is, or even what it is for. Given this degree of uncertainty, I propose that we should examine the question of animal consciousness with as few theoretical assumptions as possible, focusing on the known neurological data and the most reliable experimental data to resolve the Distribution Question.

Finally, I examine Carruthers' view that phenomenal experiences are unique to human beings, and conclude that while Carruthers' supporting arguments are unconvincing, his sceptical view has not been disproved.

1. Scientific and philosophical distinctions regarding consciousness

The term "consciousness" has various scientific and philosophical usages, which have to be teased apart before we can address the perennial question of which animals have conscious feelings - or, as philosophers would say, "Which animals are phenomenally conscious?"

Scientific usages of "consciousness"

Neuroscientists commonly distinguish between primary consciousness (also called "core consciousness" or "feeling consciousness") - a moment-to-moment awareness of sensory experiences and some internal states - and higher-order consciousness, also known as "extended consciousness" or "self-awareness" (see Rose, 2002, p. 6).

Primary consciousness refers to the moment-to-moment awareness of sensory experiences and some internal states, such as emotions. Higher-order consciousness includes awareness of one's self as an entity that exists separately from other entities; it has an autobiographical dimension, including a memory of past life events; an awareness of facts, such as one's language vocabulary; and a capacity for planning and anticipation of the future. Most discussions about the possible existence of conscious awareness in non-human animals have been concerned with primary consciousness... (Rose, 2002, p. 6).

Although consciousness is an ill-defined concept, scientists are still able to identify its occurrence by objective indicators. Rose (2002) enumerates the "indicators of consciousness" used by clinical neurologists:

From the clinical perspective, primary consciousness is defined by (1) sustained awareness of the environment in a way that is appropriate and meaningful, (2) ability to immediately follow commands to perform novel actions, and (3) exhibiting verbal or nonverbal communication indicating awareness of the ongoing interaction... Thus reflexive or other stereotyped responses to sensory stimuli are excluded by this distinction (2002, p. 6, italics mine).

For philosophers investigating animal consciousness, two vital questions that need to be asked are:

(1) Are the neurological criteria for consciousness well-defined?

(2) Can they be applied to animals?

At first glance, the first and third indicators seem rather vague, but scientists have well-defined methods of measuring them. The standard observational index used here is "accurate, verifiable report" (Baars, 2001, p. 35):

In humans reports do not have to be verbal; pressing a button, or any other voluntary response, is routinely accepted as adequate in research (2001, p. 35).

Baars (2001) maintains that since the criteria for primary consciousness allow for nonverbal communication, they can be applied to at least some non-human animals. For instance, recent experiments by Stoerig and Cowey (1997, p. 552) have shown that a monkey can be trained to respond to a stimulus in its visual field by touching its position on a screen, and to a blank trial (no stimulus) by touching a constantly present square on the screen that indicates "no stimulus". The monkey's ongoing responses certainly fit the requirements for a nonverbal "accurate, verifiable report" (Baars, 2001) indicating "sustained awareness of the environment" (Rose, 2002, p. 6).

Recent research has also shown ways in which an animal could satisfy Rose's second criterion ("ability to immediately follow commands to perform novel actions"). For instance, some dolphins, after having been trained in an artificial language of 40 "words" - actually hand and arm gestures - can respond correctly to novel combinations of words (Hart, 1996, pp. 74-75; Herman, 2002, pp. 278-279). Sea lions can respond to novel instructions with up to seven signs, asking them, for instance, to bring a small black ball to a large white cone (Schusterman et al., 2002). The ability of Alex, the African grey parrot, to correctly "distinguish quantities of objects, including groups of novel items, heterogeneous collections, and sets in which objects are randomly arrayed" (Pepperberg, 1991), also seems to meet the novelty criterion. On the other hand, the impressive ability of honeybees to successfully distinguish "same" and "different" in match-to-sample trials (Giurfa et al., 2001) would not qualify: although the bees were performing a novel action, they were not following a command to do so.

Despite these promising results, there are philosophical problems associated with these criteria for consciousness. While the test of accurate report discussed by Baars (2001) is ideally suited to a manually dextrous animal like a monkey, it is difficult to imagine how it could be administered to a fish, for instance. The validity of any test of accurate report as a measure of animal awareness is, I suggest, limited by the number of species of animals whose ethogram is compatible with the physical response required when the animal is reporting.

The procedure of testing animal awareness by commanding them to perform novel actions is even more philosophically problematic. How novel do the actions have to be? ("Raise your right paw and hold it in front of your nose." - I think Fido would flunk this one.) What if the action is actually a novel combination of simple actions, each of which the animal has rehearsed thousands of times?

The notion of "following a novel command" suffers from another limitation: it is inapplicable in situations where the animal's sensory capacities do not allow it to realise that it is being given a command. One reason why a dog can respond appropriately to its trainer is that it can visually recognise her. Presumably, a very small animal like a fruit fly would be incapable of this feat.

Conclusion 4.1: The concept of primary consciousness is legitimately applicable to some non-human animals.

Conclusion 4.2: The concept of primary consciousness, as currently defined, does not exhaust the notion of animal consciousness.

Before addressing the relationship between primary and phenomenal consciousness, I wish to sound a note of caution. It has to be borne in mind that the ontological question of what primary consciousness is ("moment-to-moment awareness") is quite distinct from the epistemological question of which indicators scientists should use to identify it. The latter cannot define the former. The theoretical concept of primary consciousness appears to be the same as that of subjective awareness or phenomenal consciousness - except for the fact that it explicitly excludes higher-order states, which are also phenomenal. Whether the clinical/experimental concept used by scientists successfully captures the necessary and/or sufficient conditions for subjective awareness is another question altogether.

The occurrence of dreams shows that the ability to give an accurate, ongoing report of one's surroundings (primary consciousness) is not a necessary condition for phenomenal consciousness. But dreams are a derivative form of consciousness, whose content depends on what we experience when awake. We could therefore legitimately set dreams to one side and limit our investigation to non-derivative forms of consciousness. Is there any reason to think that an individual who is awake must be able to give an accurate report of her surroundings before she can qualify as phenomenally conscious? Thought experiments suggest otherwise: we can easily imagine that some animals might have conscious feelings, but permanently lack the neurological wherewithal to learn to press a button to describe what they are experiencing. But thought experiments can mislead, especially when we are talking about a concept (consciousness) that we do not clearly understand. In any case, perhaps an animal may need to have the kind of brain that meets the requirements for primary consciousness before it can have subjective experiences of any kind.

More telling is the argument that some human beings (e.g. newborn babies) are commonly said to have conscious feelings but are incapable of giving an accurate report. However, it may be that babies' brains, while mature enough to have experiences, still lack the degree of motor co-ordination required for giving an accurate report about them. By contrast, many kinds of adult non-human animals have good motor co-ordination, so there is no obvious reason why they could not engage in accurate reporting if they were phenomenally conscious.

To sum up: there may be grounds for doubting that accurate report is a sine qua non of phenomenal consciousness, even in adult animals, but until we can replace it with a better criterion, the case against its necessity - at least for developmentally normal adult individuals - remains unproven. Perhaps we should simply regard it as the best yardstick that has been devised to date.

A much stronger case can be made for the proposition that an animal's ability to give an accurate, verifiable report of its surroundings is a sufficient condition for phenomenal consciousness. This appears to be what Block (2001) is claiming when he remarks that "[i]n order to ascertain empirically whether a phenomenal state is present or absent we require the subject's testimony" (2001, p. 211). According to Block, when a subject testifies about her experience, she manifests not only phenomenal consciousness, but a reflection on it as well. Block doubts whether lizards possess this capacity for reflexivity (2001, p. 216).

The comparative study of blindsight in humans and monkeys provides a good argument that accurate report can serve as an indicator of phenomenal consciousness. Blindsight is a condition in which patients with damage to the visual cortex of the brain lose their subjective awareness of objects in a portion of their visual field, but sometimes retain the ability to make visual discriminations between objects in their blind field. When asked to grasp an object on their blind side - one which they say they cannot see - some patients can nevertheless "guess" the location of the object and cup their hands in an appropriate way (Stoerig and Cowey, 1997, pp. 536-538).

Lack of awareness has also been experimentally verified in studies of monkeys with blindsight. If the region of the visual cortex that is damaged in human blindsight patients is surgically removed from one hemisphere of a monkey's brain, the monkey is still capable of a range of visual discriminations on the blind side of its visual field. However, Stoerig and Cowey (1997) have recently shown that the monkey is unaware of the objects it sees on this side of its visual field. If the monkey is trained to press a "not seen" button following a warning tone when a light does not illuminate a screen presented to its sighted field, and it is then presented with a light in its blind field, it presses the "not seen" button, indicating that it lacks awareness of the stimulus (Stoerig and Cowey, 1997, p. 552).

The performance of these monkeys certainly satisfies the stipulated requirements for primary consciousness, and the inference from the discovery of blindsight in monkeys to the conclusion that normal monkeys are subjectively aware of what they see seems an obvious one. One prominent dissenter is Carruthers (2004b), who cautions against assuming that blindsighted monkeys have lost whatever blindsighted humans have lost. On Carruthers' account, only animals that possess a theory of mind mechanism qualify as having subjective feelings. When a blindsighted monkey presses a "not seen" key, it is not reporting about its subjective lack of awareness, but simply signaling the (perceived) absence of a light. Normal monkeys perceive, but are not subjectively aware of what they perceive. For a perception to count as subjective, Carruthers argues, the percipient must be able to make a distinction between appearance and reality. Only if an individual can understand the difference between "looks green" and "is green" can we be sure that they have the phenomenology of green. To understand the difference, argues Carruthers, an individual must have a "theory of mind" which enables her to grasp that how an object looks to you may not be the same as how it looks to me.

Carruthers' argument merits thoughtful evaluation and I shall return to it below in my discussion of the current philosophical debate on animal consciousness, but it is worth noting its counterintuitive implications: if Carruthers is right, then children under the age of four lack phenomenal awareness, as they lack the "theory of mind" required to make an appearance-reality distinction.

Henceforth, I shall refer to the view that primary consciousness is a sufficient condition for phenomenal consciousness as the standard view, when contrasting it with Carruthers' view.

For the time being, we are unable to conclude that the criteria used to identify primary consciousness constitute either a necessary or a sufficient condition for animals having conscious feelings.

Philosophical usages of "consciousness"

Philosophers draw a different set of distinctions regarding consciousness from those drawn by neuroscientists. We may impute consciousness to a living creature (e.g. a bird), or we may argue about whether its mental states (e.g. its perceptions of a worm) are conscious. Accordingly, philosophers, following Rosenthal (1986), draw a distinction between creature consciousness and state consciousness.

Creature consciousness comes in two varieties: intransitive and transitive. We can say that a bird is conscious simpliciter if it is awake and not asleep or comatose, and we can also say that it is conscious of something - e.g. a wriggling worm that looks good to eat. Furthermore, an animal with transitive creature consciousness might be conscious of an object outside its mind (e.g. a worm) or of an experience inside its mind (e.g. an unpleasant sensation). In the former case, the creature is said to be outwardly conscious of the object; in the latter case, it is said to be inwardly conscious of its experience.

State consciousness, by contrast, can only be intransitive. As Dretske (1995) puts it:

States ... aren't conscious of anything. They are just conscious (or unconscious) full stop.

Ned Block (1997) has criticised the concept of state consciousness as a mongrel concept, and proposed a distinction between two different types of state consciousness: access consciousness and phenomenal consciousness. A mental state is access-conscious if it is poised to be used for the direct (i.e. ready-to-go) rational control of action and speech. Phenomenally conscious states are states with a subjective feel or phenomenology, which, according to Block, we cannot define but we can immediately recognise in ourselves.

Block (1995, 2001) also distinguishes a third kind of consciousness: reflexive consciousness, also known as introspective or monitoring consciousness. A state is conscious in this way if it is the object of another of the subject's states (e.g. when I have a thought that I am having an experience). Alternatively, "a state S is reflexively conscious just in case it is phenomenally presented in a thought about S" (2001, p. 215).

Finally, Block (1995) defines self-consciousness as the possession of the concept of the self and the ability to use this concept in thinking about oneself.

The question of which animals have conscious experiences can now be formulated in philosophical language as:
which animals are phenomenally conscious, as opposed to merely creature-conscious or access-conscious?

2. Creature consciousness and its relationship to phenomenal consciousness in animals

I contend that the philosophical distinction between phenomenal consciousness and creature consciousness, while conceptually valid, is poorly drawn for several reasons.

First, transitive and intransitive creature consciousness are both defined ambiguously in empirical terms. The term "transitive creature consciousness" can have a variety of senses, even for the same individual organism, and the blanket term "intransitive creature consciousness" fails to distinguish between two very different forms of wakefulness (and sleep) in animals.

Second, transitive and intransitive creature consciousness differ greatly in their scope within the animal kingdom. The distinction between them is not a grammatical one but a real one.

Third, distinguishing phenomenal consciousness from both kinds of creature consciousness ignores a massive body of neurological evidence from EEG studies suggesting a nomic connection between wakefulness (i.e. intransitive creature consciousness as defined by brain-related criteria) and primary consciousness: the former seems to guarantee the occurrence of the latter in all human beings studied to date. Additionally, we now know which kinds of animals satisfy the brain-based criteria for wakefulness, and research to date suggests that at least some of these animals possess primary consciousness while awake - which, according to the standard view of primary consciousness, means that they are phenomenally conscious too. Some scientists (Baars, 2001; Cartmill, 2000) have suggested that wakefulness - defined according to brain criteria - is a reliable indicator of phenomenal consciousness across all animal species. The connection between having a brain that is awake and being phenomenally conscious may well turn out to be nomic in animals.

The question of which animals have conscious feelings is a practical one. If philosophical distinctions are to serve a practical purpose, they must carve reality at the joints. There is no guarantee that purely conceptual distinctions can do this. The first question I propose to address is: is there a real (physical) possibility of organisms possessing transitive or intransitive creature consciousness in the absence of phenomenal consciousness?

(i) Transitive creature consciousness

Transitive creature consciousness, in the broadest sense of the term, can be ascribed to those creatures that are capable of sensing things in their environment. It was argued in the second chapter (Conclusion S.2) that all cellular organisms possess sensors. I also discussed Cotterill's argument (2001, pp. 3-4) that the self-initiated probings of bacteria are not true perceptions. If we count only organisms that respond to "unprovoked" stimuli, we are still left with a large class of organisms: the domain of eukaryotes. Cotterill identifies two other evolutionary milestones: the emergence of actual nerve cells, embedded within the organism's membrane (specialised receptors), permitting the development of reflexes; and the later emergence of the sensory processor, which permitted the capture of correlations between sensory inputs.

Conclusion 4.3: Transitive creature consciousness, in its broadest sense, is a property of all cellular organisms.

Conclusion 4.4: The term transitive creature consciousness can be used in several senses. In the broadest sense, it applies to all cellular organisms; in a narrow sense, to all organisms with a nervous system.

The vomeronasal system, which responds to pheromones and affects human behaviour but is devoid of phenomenality (Allen, 2003, p. 13), is a good example of perception, or transitive creature consciousness, occurring in the absence of phenomenal consciousness. (It is hard to see how we can avoid using the word "perception" here, as the vomeronasal system discriminates between individuals and impacts on our behaviour.) Allen concludes:

[T]he fact that the vomeronasal system is devoid of phenomenology shows that there is no guaranteed connection between phenomenal consciousness and any behavior-guiding perceptual system (2003, p. 13).

Another, more controversial example is the phenomenon of blindsight, discussed above. Although blindsight patients insist that they can see nothing in their blind field, some of them, when forced to make a guess about objects on their blind side, prove capable of locating and grasping them with a high degree of accuracy (Stoerig and Cowey, 1997, pp. 536-538). Given the high level of accuracy of the guesses in these "forced choice" experiments, it seems unreasonable not to regard them as bona fide perceptions, yet the subjects are not phenomenally conscious of what they see. The fact that the same dissociation has also been identified in studies of monkeys with blindsight (Stoerig and Cowey, 1997, p. 552) strengthens the case that the distinction between transitive creature consciousness and phenomenality is not merely a conceptual one but a real one.

Conclusion 4.5: Transitive creature consciousness can occur in an animal in the absence of phenomenal consciousness.

An additional point that needs to be made is that the term "perception" remains under-defined even within a single individual. Blindsight varies across patients in its degree of severity, and the specificity of the responses shown by these patients varies accordingly. The most general response is a neuroendocrine reaction, in which patients suppress their levels of melatonin in response to bright light. Some patients have a more specific reaction - a reflexive response of the pupil. Some of these patients are also capable of an indirect response, as occurs when a patient hears a polysemous word (BANK) after being presented with a word related to one of its meanings (RIVER / MONEY) in her blind field, is asked which meaning she is thinking of, and makes a choice that reflects the word she unconsciously processed. Finally, a few patients can locate and grasp an object on their blind side, when forced to guess where it is (Stoerig and Cowey, 1997, pp. 536-538).

Which of these responses one decides to count as instances of transitive creature consciousness depends on how broadly one defines "perception".

Conclusion 4.6: The term transitive creature consciousness can have a variety of senses, even for the same individual organism: it describes a range of responses, of varying degrees of specificity, to the object perceived.

I conclude that the term "perception" needs to be more rigorously defined in the philosophical literature. Certain types of perception may be nomically connected with phenomenal consciousness, but perception per se is not.

(ii) Intransitive creature consciousness

The term intransitive creature consciousness is inadequate as it stands, as it assumes that "wakefulness" and "sleep" have simple, clear-cut definitions. In fact, psychologists use two kinds of criteria for sleep: behavioural and electrophysiological. Behavioural sleep is found in a variety of animals. Animal sleep that also satisfies electrophysiological criteria is called true or brain sleep, and some neuroscientists believe it to be intimately related to phenomenal consciousness (Cartmill, 2000; Baars, 2001; White, 2000).

Conclusion 4.7: The term intransitive creature consciousness has at least two senses.

It turns out that both brain and behavioural sleep have been defined only for animals (Kavanau, 1997, p. 258), which, as we saw in chapter two, comprise only a tiny twig on the tree of life. Since the class of creatures possessing intransitive creature consciousness is characterised by wakefulness as opposed to sleep or coma, organisms that do not sleep can hardly be called conscious in this sense. Thus intransitive consciousness cannot be ascribed to bacteria, protoctista, plants, or fungi - or to those animals (such as alligators) that do not appear to sleep in any way (Kavanau, 1997, p. 258).

Conclusion 4.8: The class of animals possessing intransitive creature consciousness - of either the brain or behavioural variety - is a subset of the class of organisms possessing transitive consciousness - even if we limit ourselves to those organisms with specialised receptor cells ("true" senses).

This turns on its head the commonly held notion (see, for instance, Carruthers, 2004b) that transitive creature consciousness presupposes intransitive consciousness.

Conclusion 4.9: Transitive creature consciousness should be viewed as more fundamental than intransitive creature consciousness.

(ii)(a) Intransitive creature consciousness: Behavioural criteria for wakefulness and sleep

Behavioural sleep is distinguished from mere "restful waking" (also called "rest" or "drowsiness"), which is characterised by "behavioral quiescence (cessation of voluntary activity), unelevated sensory thresholds, characteristic postures, vigilance, and at most, only brief and intermittent occlusion [closing of the eyes - V.J.T.]" (Kavanau, 1997, p. 248, italics mine). Behavioural or primitive sleep is defined by "behavioral quiescence, elevated sensory thresholds, rapid arousability, characteristic postures, and occluded pupils" (Kavanau, 1997, p. 248), as well as "increased rest after prolonged waking (a criterion that indicates that rest is under homeostatic control)" (Shaw et al., 2000, p. 1834). The criterion of rapid arousability (with relatively intense stimulation) distinguishes sleep from other states like anaesthesia and coma (Sleep Research Society, 1997).

Behavioural sleep is found in a variety of animals:

Behavioral sleep, including behavioral quiescence with species-specific stereotypic postures and elevated arousal thresholds, has been observed in bees, wasps, flies, dragonflies, grasshoppers, butterflies, moths and scorpions (Kavanau, 1997, p. 258).

Behavioural sleep is probably also found in most cold-blooded vertebrates, although some (the bullfrog, sea turtle, tortoise, and American alligator) appear not to need to engage in it (Kavanau, 1997, pp. 252, 258).

Behavioural wakefulness can certainly exist in the absence of phenomenal consciousness. As an extreme example, Rose (2002, p. 14) discusses six human patients (first described in Jouvet, 1969), who had suffered the complete loss of their cerebral cortex. Some of these decorticate patients still displayed intermittent wakefulness, manifested by the presence of behavioural sleep-wake cycles, and even exhibited behaviours such as grimacing and cries evoked by noxious stimuli, and pushing at the hands of the examiner. The condition of persistent vegetative state, in which "persons with overwhelming damage to the cerebral hemispheres commonly pass into a chronic state of unconsciousness" (JAMA, 1990), has been defined as "chronic wakefulness without awareness" (JAMA, 1990). Patients exhibit behavioural sleep-wake cycles - in contrast with coma, during which patients are never awake. PVS patients may exhibit behaviours such as grinding their teeth, swallowing, smiling, shedding tears, grunting, moaning, or screaming without any apparent external stimulus. The point that needs to be made here is that all of the wakeful behaviours displayed by these patients are generated by their brain stems and spinal cords. Studies have shown that activity occurring at this level of the brain is not accessible to conscious awareness in human beings (Rose, 2002, pp. 13-15; Roth, 2003, p. 36). (For a more complete discussion of PVS, see JAMA, 1990; Multi-Society Task Force on PVS, 1994; Laureys, 2002; Baars, 2003; National Health and Medical Research Council, 2003. Borthwick, 1996, critiques the medical criteria used to define PVS, and argues that misdiagnoses are common and that the condition should not be viewed as irreversible.)

Conclusion 4.10: If we define wakefulness according to behavioural criteria, then its occurrence in an animal is an insufficient reason for ascribing phenomenally conscious states to it.

The point I am making here is a purely negative one. Let me state clearly that I am not proposing that the behaviour of PVS patients, who require assisted feeding in order to stay alive, is a model for that of behaviourally wakeful animals lacking a cortex. On the contrary: whereas humans and other mammals are very much dependent on their cerebral hemispheres for functionally effective behaviour, other animals exhibit much less dependence or none at all (Rose, 2002, pp. 9, 10, 13).

(ii)(b) Intransitive creature consciousness: Brain-based criteria for wakefulness and sleep

True or brain sleep is defined by additional electrophysiological criteria, including: EEG patterns that distinguish it from wakefulness; a lack of or decrease in awareness of environmental stimuli; and the maintenance of core body temperature (in warm-blooded creatures) (White, 2000).

Baars (2001) describes the massive contrast between the scalp electrical activity (EEG) of human patients in states of global unconsciousness (deep unconscious sleep, coma, PVS, general anaesthesia and epileptic states of absence) and the EEG of patients in a state of waking consciousness, as verified by their ability to give "accurate, verifiable report" (2001, p. 35). All unconscious states are associated with slow-wave, synchronised EEG patterns caused by co-ordinated firing of neurons in the thalamus, while both waking consciousness and REM sleep are characterised by fast, irregular, low-voltage activity in the thalamus and cortex (Kavanau, 1997, p. 247). Even our phenomenally conscious dreams, which occur during REM sleep, conform to the general rule that consciousness is associated with fast-wave activity in the brain.

If we look at animals, "[a]ll mammalian species studied so far show the same massive contrast in the electrical activity between waking and deep sleep ... so much so that animal EEG studies are routinely assumed to apply to humans" (Baars, 2001, p. 37).

It turns out that all mammals and birds engage in true sleep, but no other animals (Shaw et al., 2000). EEG patterns in sleeping reptiles show arrhythmic spiking that resembles non-REM sleep, but lack the slow-wave patterns that characterise sleep in mammals and birds. In reptiles, sleep is regulated by the limbic system instead of the cerebrum (Kavanau, 1997; Backer, 1998).

Conclusion 4.11: True sleep, defined according to brain-based criteria, is only found in mammals and birds.

There appears to be a nomic connection between brain-related criteria for wakefulness and phenomenal consciousness in all human beings. The nomic connection I have in mind here is one of the form: in human brains that have not suffered massive cerebral damage, wakefulness (as defined by brain-based criteria rather than behavioural criteria) is a sufficient condition for primary consciousness.

I base my claim on abundant evidence from clinical studies of a robust connection between brain-wakefulness and primary consciousness in all human beings studied to date: patients in states of global unconsciousness (e.g. deep sleep, coma and epileptic states of absence) do not report the occurrence of events in their surroundings that they report while they are awake (Baars, 2001).

On the standard view of primary consciousness, this implies that there is a nomic connection between wakefulness and phenomenal consciousness in all human beings.

An additional reason for suggesting that the connection between brain wakefulness and phenomenal consciousness reflects an underlying law of nature rather than a mere regularity can be found in a fact of everyday life: no matter how hard we try, we cannot rouse a sleeping person to brain wakefulness without thereby making her (a) alert to her surroundings (primary-conscious) and (b) phenomenally conscious. Also, the only way to keep someone who is currently awake conscious in these two senses is to prevent her from falling asleep.

Conclusion 4.12: There is a prima facie case for a law of nature linking wakefulness (defined according to brain-related criteria) with phenomenal consciousness: the former is a sufficient condition for the latter.

Another possible reason for postulating a law of nature is that it might explain the role of brain sleep. Cartmill (2000) points out that true sleep - especially the REM sleep found in most mammals and birds - is biologically very costly, as it puts an animal in "a limp, helpless trance state that leaves [it] unable to detect or react to danger" (Cartmill, 2000). He hypothesises that sleep serves to repair damage to the brain caused by the demands of phenomenal consciousness, as other payoffs proposed to date (energy conservation, defence against predators) are implausible. If this is the case, he argues, it seems reasonable to think that animals that have to sleep as we do are conscious when they are awake. The fact that animals that we suspect are probably unconscious lack true sleep, while true sleep is compulsory for conscious human beings and for animals that behave as if they were conscious (mammals and birds), appears to confirm this hypothesis. Of course, Cartmill's hypothesis about the function of sleep is but one of many theories (for a summary, see Sleep Research Society, 1997; see Kavanau, 1997, for an alternative explanation of how sleep restores the brain). And while Cartmill's suggestion that consciousness is unique to mammals and birds may be correct, we have to consider the alternative possibility that other animals possess a more rudimentary consciousness that is less taxing on the brain, removing the need for brain sleep.

3. Access consciousness, phenomenal experience and other varieties of state consciousness in animals

The motivation for Block's (1995) distinction between two varieties of state consciousness (phenomenal consciousness and access consciousness) was to alert philosophers to the conceptual difference between the experiential aspect of consciousness and its function in rationally guiding speech and action. Failure to make this distinction, insists Block, can lead to mistaken philosophical conclusions about the function of phenomenal consciousness. Block contends that blindsight patients, who can make correct "guesses" about what is in their blind field but cannot harness this information to guide their actions, lack both forms of consciousness. Failure to appreciate this point leads some philosophers to argue wrongly that a function of phenomenal consciousness is to enable information encoded in the brain to guide action.

The issue I wish to explore here is the relevance of Block's distinction to animal consciousness: does it reflect real differences between groups of animals, and does it shed light on the question of which animals have (phenomenally) conscious feelings?

Before I discuss Block's work in detail, I would like to make two general observations.

First, although Block has succeeded admirably in keeping abreast of the current scientific literature on consciousness, the distinction he makes between his two concepts of consciousness is an a priori conceptual distinction, which may or may not correspond to the real-world division between conscious and unconscious systems in the brain.

Second, Block is perhaps a little too hard on those philosophers and scientists (e.g. Searle, 1999; Baars, 2001) who have conflated the function of phenomenal consciousness with the benefits of access consciousness. This would be a philosophically defensible strategy if there were a law of nature that access consciousness in animals - or at least, in some restricted class of animals - is always accompanied by phenomenal consciousness. As we shall see, it is unlikely that there is such a law for all animals.

In this section, I also contend that Block fails to distinguish between different, empirically grounded senses of phenomenality (Rosenthal, 2002), and that this oversight can lead to philosophical confusion as philosophers debate purported cases of access without phenomenality and vice versa, as well as obscuring the vital issue of which (if either) of the two concepts of consciousness is more fundamental. I argue that the requirements for phenomenality in Rosenthal's "thick" sense (which corresponds to what people normally mean by conscious experience) are more stringent than those for access consciousness, while the requirements for phenomenality in its "thin", derivative sense are weaker because the element of attention is absent. This explains the apparent contradictions in the philosophical literature.

The upshot is that even if not all animals possessing access consciousness turn out to be phenomenally conscious, we can be sure that every phenomenally conscious animal belongs to a species whose members are normally access-conscious.

(i) Access consciousness

Block (1995, 1998) gives various definitions of access consciousness, some of which are more easily applicable to animals than others. It seems fairly clear from Block's own writings that he envisages a large number of animals as having access consciousness. Block expressly states that he wishes to credit non-linguistic animals (e.g. chimps) with access-conscious states (1995, p. 238), and that "very much lower animals" are access-conscious too (1995, p. 257).

On the other hand, Block's criteria for access consciousness seem rather daunting. Block (1995) mentions three: an access-conscious state must be (i) poised to be used as a premise in reasoning, (ii) poised for rational control of action, and (iii) poised for rational control of speech. To be sure, Block exempts chimpanzees from the last condition. However, we are still left with two conditions that mention rationality.

In another passage, Block (1995) describes epileptic patients, who lose access-consciousness during their seizures, as having deficits in thinking, planning and decision making. If this is what Block means by "rationality", then it certainly applies to some non-human animals. A subsequent definition in Block (1998) is also fairly broad:

Phenomenal consciousness is just experience; access consciousness is a kind of control. More exactly, a representation is access-conscious if it is actively poised for direct control of reasoning, reporting and action (1998, p. 3).

If we read Block's "and" in a permissive sense, then on this definition, access consciousness may control action without necessarily controlling reasoning. Defined broadly, there is no reason why access consciousness could not be ascribed to any animal capable of intentional action. On a narrower alternative reading, the term could be restricted to animals capable of planning - such as lions that hunt in packs.

Two terms in the above definition require further elucidation: "actively poised" and "direct control". For a state to be access-conscious, it is not enough for that state to be available for use whenever needed, like our knowledge that the earth is round. The state itself has to be somehow re-activated: it must be poised and ready to control an individual's behaviour (Silby, 1998).

"Direct control", according to Block, occurs "when a representation is poised for free use as a premise in reasoning and can be freely reported" (1998), which suggests a fairly sophisticated higher-order awareness of one's mental states, but in the same paragraph, he suggests a much simpler criterion - attention - that animals might easily satisfy.

This proposal is made in Block's discussion of a plausible case of phenomenal consciousness without access consciousness: the everyday phenomenon in which you notice that the refrigerator has just gone off. Often, in a situation like this, "one has the feeling that one has been hearing the noise all along, but without noticing it until it went off" (1998). Block explains this case as follows: before the refrigerator went off, you may have been phenomenally conscious of the noise, but you were not access-conscious of it. Even though you may have exerted indirect control over your actions (e.g. by raising your voice to compensate for the volume of the refrigerator), "there was insufficient attention directed towards it to allow the direct control of reasoning, speech or action" (1998, italics mine).

The critical ingredient here, Block seems to be suggesting, is attention: when you attend to something, it impacts on your beliefs and desires, which directly control your intentional actions. This accords well with his remark that "[i]nattentiveness just is lack of A[access]-consciousness" (1995, p. 265).

This is a reasonable proposal. It is therefore a great pity that Block spoils his case by clearly implying that somebody can be access-conscious of something, without attending to it. Block (1998) describes the strange case of a "Reverse Anton's Syndrome" patient suffering from parietal brain damage, who regards himself as blind and is unable to tell if a room is illuminated or dark, despite the fact that he is able to recognise faces presented to the upper right of his visual field. Commenting on the case, Block writes:

Milner and Goodale have proposed that phenomenal consciousness requires ventral stream activity plus attention, and that the requisite attention can be blocked by parietal lesions. So perhaps this is a case of visual access without visual phenomenal consciousness (1998, p. 4, italics mine).

The implication here seems to be that the patient's attention was blocked by the parietal lesion, but that he still possessed access consciousness. However, I shall ignore this singular inconsistency, as it does not accord with the general tenor of Block's writings.

Because I believe that Block's notion of access consciousness is a useful one for discussing consciousness in non-human animals, I would like to propose the following simplified definition of access consciousness:

Definition: An animal's state is access-conscious if (i) the animal is attending to what the state represents, and (ii) the state is poised to control the animal's intentional actions.

Conclusion 4.13: Attention is a pre-requisite for access consciousness.

This, it seems to me, retains the essence of Block's original ground for distinguishing access from phenomenal consciousness: that the action-guiding function of consciousness is separable from the phenomenal component.

It is a robust conclusion of recent research that attention (and hence access consciousness) is required for the performance of complex motor behaviours such as driving. This is relevant to the question of animal consciousness, because some philosophers (e.g. Carruthers, 1992, pp. 170 ff.) have claimed that animal behaviour is similar to certain non-conscious, inattentive activities of human beings, such as driving "on autopilot" while distracted, or the behaviour of blindsight patients. (Carruthers no longer defends this claim.)

The analogy between animal behaviour and blindsight can be easily refuted if it can be shown that most animals possess access consciousness, while blindsight patients lack it. The reason why Block considers that blindsight patients fail to qualify for access consciousness, despite the limited visual information available to them in their blind field, is that they cannot "harness this information in the service of action" (1995, p. 227). The empirical fact that animals with blindsight cannot navigate their surroundings as efficiently as their sighted peers is another good reason for rejecting the analogy of animal behaviour with blindsight.

Distracted drivers turn out to be a different kind of case, as recent evidence (Wright, 2003) has conclusively established that they do not drive "on autopilot", but navigate by paying intermittent attention to the road. These drivers therefore possess access-consciousness, like animals. Research also suggests that they enjoy fleeting phenomenal experiences, which they rapidly forget. Because they are actually intermittently conscious of their surroundings, so-called "distracted drivers" cannot be invoked as a non-conscious model for animal behaviour.

Evidence for access consciousness in animals

In chapter two, I developed an account of a "minimal mind", and concluded that a fairly large class of animals fulfilled the requirements for intentional agency, irrespective of whether they were phenomenally conscious. Most animals therefore meet the second condition for access consciousness, under my revised definition. What about the first condition: attention?

There appears to be a conceptual nexus between intentional agency and attention: the performance of an intentional act whose target is X presupposes that one is attending to X. Additionally, it is hard to see how an animal could be said to have beliefs or desires if it were incapable of attending to anything in its environment. We should therefore expect the class of animals capable of attending to objects to include all animals with "minimal minds" - even insects.

Scientists have recently identified attentional mechanisms in insects. Van Swinderen and Greenspan (2003) have reported the discovery of "a physiological signature of object salience" by measuring local field potentials in the brain of the fruit fly Drosophila melanogaster. Although the authors prefer to avoid the word "attention" because of its controversial associations with consciousness (2003, p. 585), they describe some impressive correlations with the brain mechanisms of attention in monkeys and humans: "amplitude increases with salience, salience can be increased either by an unconditioned stimulus or by novelty, selection suppresses the response to simultaneous unattended stimuli, and coherence increases with selective attention" (2003, p. 585, italics mine).

I see no problems with the use of the word "attention" here, as long as we make it clear that we are confining our discussion to access consciousness rather than phenomenal consciousness. In chapter two, I proposed Drosophila melanogaster as a candidate for a minimal mind. The discovery of attention mechanisms in this insect makes it eligible for access consciousness as well.

Thus both philosophical considerations and scientific findings suggest the following conclusion:

Conclusion 4.14: The class of animal species whose members are access conscious is co-extensive with the class of animal species whose members have minimal minds.

The case of the distracted driver

Block (1995, 1998) makes some pertinent observations regarding the much-discussed case of the distracted driver, who is supposedly able to navigate his car home despite being oblivious to his visual states. Different philosophers have conflicting intuitions regarding whether the driver is phenomenally conscious while driving home. But according to Block, this is irrelevant: to drive home, what you need is access consciousness, not phenomenal consciousness. Access consciousness, Block suggests, comes in degrees: the inattentive driver has a diminished level of access consciousness, but if he had none at all, the car would crash. An alternative considered by Block (1995) is that the driver's access consciousness is normal, but his poor memory of the trip is due to a failure to store the contents of the scene in his memory. (As we shall see, this turns out to be the case.) Likewise, discussing a case (originally cited from Penfield (1975) and discussed by Searle (1992)) of an epileptic driver who has a petit mal seizure rendering him totally unconscious but is still able to drive home, Block maintains that the individual "still has sufficient access-consciousness to drive" (1998, p. 5).

Recent research (Wright, 2003) has borne out Block's contention that attention is required for driving. Wright cites three driving studies which show that driving requires a certain minimum amount of attention to the road. As Wright (2003) puts it: "Without sufficient attention being paid to one's visual experience and driving behavior, one will quickly find one's car quite mangled." What really happens in "distracted driving" is that the driver pays attention to the road for some of the time, but the other matter that he is thinking about demands a much greater share of his cognitive resources, with the result that the information about the visual scene is quickly bumped from working memory and never encoded in long-term memory. Hence the driver's surprise when he comes to the end of his journey.

In the light of the research cited by Wright, I therefore have to express scepticism about the solitary case of Penfield's (1975) "unconscious driver" cited by Searle (1992) and discussed by Block (1995, 1998). The proposal that a person having a petit mal seizure could drive home appears implausible in the light of the following medical description:

A petit mal seizure is a temporary disturbance of brain function caused by abnormal electrical activity in the brain and characterized by abrupt, short-term lack of conscious activity ("absence") or other abnormal change in behavior.

Petit mal seizures occur most commonly in people under age 20, usually in children ages 6 to 12.

Typical petit mal seizures last only a few seconds, with full recovery occurring rapidly and no lingering confusion. Such seizures usually manifest themselves as staring episodes or "absence spells" during which the child's activity or speech ceases.

The child may stop talking in mid-sentence or cease walking. One to several seconds later, speech or activity resume. If standing or walking, a child seldom falls during one of these episodes...

There is usually no memory of the seizure (Campellone, 2002).

There are thus no grounds for believing that there are any real-life cases of drivers who possess access consciousness but have lost their phenomenal consciousness, as Block hypothesises (1998, p. 5). Rather, what happens is that inattentive drivers fail to encode the contents of their phenomenal consciousness in their long-term memory (Wright, 2003).

Conclusion 4.15: "Distracted driver" cases cannot be legitimately used to argue against phenomenal consciousness in animals.

Are animals like sleepwalkers?

Even more implausible is the claim, sometimes found in the literature on animal consciousness (Cartmill, 2000) that sleepwalkers can drive. Regrettably, this myth is perpetuated by people who ought to know better. Jiva and Masoodi (2003) repeat this claim in a medical journal of sleep research, but the reference they cite (Cruchet R. 1905. Tics et sommeil. Presse Med. 1905; 13:33-36) is 100 years old. (Incidentally, Jiva did not respond to an email query of mine, requesting evidence for driving by sleepwalkers.)

It is true that sleepwalkers can engage in a range of non-reflex complex behaviours (autonomous automatisms) that are performed without conscious volition, such as dressing, eating, and bathing (Sleepdisorderchannel, 2003). However, two important points need to be made here. First, sleepwalkers do not pay attention to their surroundings, for the simple reason that they cannot. Sleepwalking episodes take place during delta sleep, a slow-wave phase that scientists associate with the absence of primary consciousness. "During sleepwalking, coordination is poor, speech is incoherent, clumsiness is common" (Jiva and Masoodi, 2003). Some sleepwalkers bruise or injure themselves through collisions with furniture and walls (Sleepdisorderchannel, 2003). We may conclude that access consciousness is absent.

Second, sleepwalkers do not acquire new skills; they simply use their existing repertoire of automatisms. Any motor skills that sleepwalkers show are parasitic upon those they acquired during the waking state, while phenomenally conscious. Sleepwalkers do not learn any "new tricks".

By contrast, it has already been argued that many animals (even insects) possess access consciousness. Moreover, it was shown in chapter two that most phyla of animals are capable of true learning (classical conditioning). A more advanced kind of learning (operant conditioning) was also proposed for insects and cephalopods, as well as vertebrates.

Conclusion 4.16: The behaviour of sleepwalkers has no relevance to the question of which animals are conscious.

Is the fallacy identified by Block a genuine fallacy?

Block (1995) has described consciousness as a "mongrel concept" that combines two notions: the phenomenally conscious feeling of what it is like to be in a state and the availability of that state for rationally guiding action and reporting - which is what happens in access consciousness. While the biological functions served by access consciousness are perfectly obvious, Block contends that it is illicit to transfer these functions to phenomenal consciousness and argue that it evolved to fulfil these functions.

On analytical grounds, Block's point is unassailable, and Block is right to criticise previous philosophers and scientists for overlooking the distinction between the two concepts of consciousness. But if there should prove to be a law of nature linking the two, then the alleged slide in reasoning criticised by Block would turn out to yield an empirically informative biological counterfactual rather than a fallacy.

In his earlier writings, Block himself (1995) acknowledged the possibility of a resilient connection between access and phenomenal consciousness:

[P]erhaps P[henomenal]-consciousness and A[ccess]-consciousness ... amount to much the same thing empirically even though they differ conceptually, in which case P-consciousness would also have the aforementioned function. Perhaps the two are so inter-twined that there is no empirical sense to the idea of one without the other (1995, p. 268, italics mine).

In a similar vein, Chalmers (1996, pp. 242-246) has proposed the existence of a psychophysical law that (phenomenal) consciousness and awareness can never be found apart. Chalmers' concept of awareness is somewhat broader than Block's notion of access-consciousness, insofar as it includes states that are directly available for access even if they are not currently being accessed.

If there is a law that access consciousness entails phenomenal consciousness, then the following counterfactual is true, even if phenomenal consciousness, per se, serves no biological function: if sentient animals were not phenomenally conscious, they could not perform certain crucial biological functions. ("Could not" here refers to nomic impossibility.) This, I believe, is precisely the point made by some of the philosophers and scientists accused by Block of confusing the function of phenomenal consciousness with the benefits of access consciousness.

Searle's (1999) argument that consciousness is required for the survival of the human species clearly relates to the nomic rather than the conceptual impossibility of human beings performing all of the functions they need to survive in the absence of phenomenal consciousness:

[I]n real life much of the behavior that enables us to survive is conscious behavior. In real life you cannot subtract the consciousness and keep the behavior (1999, pp. 63-64).

Likewise, Baars (2001) invokes nomic rather than conceptual relations in order to explain the role of phenomenal consciousness in mammals. In philosophical jargon, his proposal is that the brain activity (measured by EEG readings) which keeps a mammal awake (i.e. creature-conscious in the intransitive sense) automatically enables it to execute vital goal-directed behaviour (as an access-conscious creature would do), as well as making it phenomenally conscious:

It therefore appears that brain activity that supports consciousness is a pre-condition for all goal-directed survival and reproductive behavior in humans and other mammals (2001).

The so-called fallacy identified by Block appears to reflect nothing more than the fact that other philosophers are attempting something less ambitious than he is: not to explain the function of phenomenal consciousness as such, but to tie its occurrence to some natural process that is known to confer a biological advantage on certain kinds of animals. It then follows that if (counterfactually) these animals lacked consciousness, their survival prospects would be reduced.

What are the implications of all this for the philosophical debate over animal consciousness? If it is a law of nature that any animal possessing access consciousness is also phenomenally conscious, then a large number of animals will be phenomenally conscious - perhaps even insects, if we use the broad definition of "access consciousness" that I proposed above. Or it may be the case that different senses of "access consciousness" have to be distinguished (see Rosenthal, 2002, p. 654), and that phenomenal consciousness only accompanies certain kinds of access.

Alternatively, access consciousness alone may not suffice to guarantee phenomenal consciousness: an animal's brain may need to satisfy certain extra requirements before it can be subjectively aware of anything. Identifying these requirements in humans and other animals then becomes an empirical matter.

Another possibility is that both access consciousness and phenomenal consciousness are nomically generated by a more basic form of consciousness - e.g. the brain wakefulness discussed by Baars (2001). EEG readings could then be used to establish the presence of consciousness in an animal.

It would be helpful if the "test cases" regularly cited in the philosophical literature on consciousness gave us some way of adjudicating between these theories. Unfortunately, the cases commonly discussed by philosophers are too inconclusive to definitively rule out any of the above theories. The irony is that the literature on animal cognition contains an abundance of experimental findings, which could shed light on this discussion. Until now, this literature has been generally overlooked by philosophers. I shall give a couple of examples in part (ii) in my discussion of animal consciousness.

Can access consciousness occur in animals in the absence of phenomenal consciousness?

Block has recently expressed misgivings about the suitability of the term "access consciousness", a phrase which he has since replaced with the neutral term "global access":

Since this concept of [access] consciousness does not require phenomenality, there is some doubt as to whether it is a full-fledged concept of consciousness (2001, p. 215, italics mine).

However, Block is well aware that thought experiments alone cannot decide whether access consciousness may exist without phenomenal experience in the real world, so he has endeavoured to find real-life human cases in which the two come apart. As it happens, such cases are either extremely rare or non-existent. Blindsight has sometimes been proposed as an instance of access without phenomenality. However, subjects with blindsight appear to lack the right sort of access to visual information on their blind side:

Their access is curiously indirect, as witnessed by the fact that it is not available for verbal report, and in the deliberate control of behavior. The information ... can be made available to other processes, but only by unusual methods such as prompting and forced choice. So this information does not qualify as directly available for global control (Chalmers, 1996, p. 227).

Block's (1995) hypothetical case of "super-blindsight" makes a testable empirical claim, but there is no evidence for its occurrence in human or non-human animals.

Block (1998, p. 4) also discusses one possible case of "Reverse Anton's Syndrome", but its interpretation is by no means certain. Because the condition was caused by brain injury, it cannot be invoked as evidence that access consciousness could have evolved in animals independently of phenomenal consciousness.

Rosenthal (2002) cites experimental results by Libet et al. (1983), in which a rational human agent's (access-conscious) decision to act occurs some time before she is consciously aware of it, as evidence that "global access" can occur independently of phenomenal consciousness. But an alternative interpretation is possible: the subject forms a conscious intention at the beginning of the experiment, when receiving instructions. The subsequent decision to move reported by the subject is not a voluntary action in the conventional sense, but a perceived effective urge to move, induced by specific experimental instructions (Zhu, 2003).

Among the cases discussed in the philosophical literature, the strongest evidence that access consciousness can exist in the absence of phenomenal consciousness comes from recent studies of the mammalian visual system:

According to Milner and Goodale (1995), the human mind / brain contains two visual systems that are functionally and anatomically distinct; and indeed, there is now a wealth of evidence that this is so (Jacob and Jeannerod, 2003). The dorsal system is located in the parietal lobes and is concerned with the on-line detailed guidance of movement. The ventral system is located in the temporal lobes and serves to underpin conceptual thought and planning in relation to the perceived environment. Each receives its primary input from area V1 at the posterior of the cortex, although the dorsal system also receives significant projections from other sites. The dorsal system operates with a set of body-centered or limb-centered spatial co-ordinates, it is fast, and it has a memory window of just two seconds. The ventral system uses allocentric or object-centered spatial co-ordinates, it is slower, and it gives rise to both medium and long-term memories. Importantly for our purposes, the outputs of the dorsal system are unconscious, while those of the ventral system are phenomenally conscious (in humans). Finally, homologous systems are widespread in the animal kingdom, being common to all mammals, at least. On this account, the phenomenally conscious experiences that I enjoy when acting are not the percepts that guide the details of my movements on-line. Rather, the phenomenally conscious percepts produced by the ventral system are the ones that give rise to my beliefs about my immediate environment, that ground my desires for perceived items ("I want that one") and that figure in my plans in respect of my environment ("I'll go that way and pick up that one"). But my planning only guides my actions indirectly, by selecting from amongst a data-base of action schemata. The latter then directly cause my movements, with the detailed execution of those movements being guided by the percepts generated by the dorsal system (Carruthers, 2004b).

The research by Milner and Goodale (1995) suggests that each human brain has two visual systems: a phenomenally conscious (ventral) system that allows the subject to select a course of action, but which she cannot attend to when actually executing her movements, and an access-conscious (dorsal) system that guides her detailed movements but whose outputs are not phenomenally conscious. Care should be taken not to exaggerate the significance of these findings, as they relate to just one sensory modality (sight) and have only been established for a limited class of animals (mammals). Nevertheless, they are significant insofar as they reveal a distinction at the physical level between access consciousness and phenomenal consciousness.

This leads me to formulate the following tentative conclusions:

Conclusion 4.17: The occurrence of access consciousness in animals does not, by itself, warrant the ascription of phenomenally conscious states to them.

Conclusion 4.18: The set of animal species whose members are phenomenally conscious probably does not coincide with the set of animal species whose members possess access consciousness.

(ii) Phenomenal consciousness in animals

There are potentially many kinds of animals whose members are access-conscious but not phenomenally conscious; conversely, there may also be individual animals that are phenomenally conscious but incapable of access consciousness. According to the definition of animal access-consciousness proposed above, an access-conscious state has to be poised for the direct control of intentional action. Block (1995, p. 245) envisages a phenomenally conscious animal whose centres of reasoning and rational control of action have been destroyed by brain damage. According to Block's definition (and my revised one), such an animal would be incapable of access-consciousness. Animals with severe frontal brain damage would be a case in point.

Could there be entire species of animals possessing phenomenal consciousness but lacking access consciousness? Even if the idea is imaginable, it does not appear to be intelligible. According to the revised definition of access consciousness that I have proposed for animals, the two critical ingredients are attention and a capacity for intentional action. It was argued in the previous chapter that one could never be justified in ascribing emotions - whether conscious or not - to those kinds of animals that are incapable of intentional agency. What about the possibility that some kinds of animals could be phenomenally conscious but incapable of paying attention to their experiences? For instance, we might suppose that these animals had some kind of built-in action selection system like that described for flatworms (Prescott, 2000), enabling them to move around their environment without needing to pay attention (unlike insects, which apparently do pay attention), and that they also possessed a rudimentary awareness of their world.

While it makes sense to suppose that an individual animal may be phenomenally conscious but incapable of paying attention to its surroundings (e.g. due to severe brain damage), it is philosophically incoherent to suggest that all members of a phenomenally conscious species might spend their entire lives not paying attention to anything. For if these animals never paid any attention to the objects they experienced, it would be more appropriate to re-describe their "experiences" as perceptions - which, as we saw above, can occur in the absence of phenomenal consciousness. First-person language serves no useful purpose here.

On the other hand, there appears to be nothing incoherent, from a purely philosophical standpoint, in the notion that some animals may be access-conscious (in a limited sense at least) without being phenomenally conscious. This suggests that access consciousness in animals evolved before phenomenal consciousness. The supposition that the two forms of consciousness emerged independently in the course of animal evolution and were subsequently integrated seems highly unlikely and scientifically unparsimonious. As a default hypothesis, we have to suppose that one form of consciousness emerged first in animal evolution, and that it served as a foundation for the other.

These arguments point to the following conclusions:

Conclusion 4.19: There are no species of animals whose members possess phenomenal consciousness but lack access consciousness. However, certain phenomenally conscious individuals within a species may lack access consciousness.

Conclusion 4.20: Access consciousness is biologically prior to phenomenal consciousness.

The conclusions presented here are based solely on theoretical arguments, but I could also have invoked scientific findings on the neural pre-requisites for consciousness to support my conclusion that having a brain that supports agency and attention states (i.e. access consciousness) is insufficient for phenomenal experience. (I will discuss these findings later in this chapter.) The latter conclusion is bound to be philosophically controversial, as the philosophical literature contains far more plausible instances of phenomenality without access than of the reverse. Before I address these cases, I would like to discuss a couple of interesting behavioural phenomena, drawn from the scientific literature on animal cognition, which might shed light on the question of how access and phenomenal consciousness are related.

According to Grandin (1998), a snake does not have a centralised internal representation of its prey:

It seems to live in a world where a mouse [its prey] is many different things... [S]triking the mouse is controlled by vision; following the mouse after striking is controlled by smell; and swallowing the mouse is controlled strictly by touch. There is no integration of information from all the senses. Each sensory channel operates independently of the others. When a snake has a mouse held in its coils, it may still search for the mouse as if the information from its body which is holding the prey did not exist. It appears that the snake has no ability to transfer information between sensory channels (Grandin, 1998).

A snake presumably uses a rudimentary form of access consciousness to hunt its prey, but does not possess integrative consciousness (which gives an animal access to multiple sensory channels and can integrate information from all of them). Nor does it appear to have what might be called object consciousness: a snake, unlike a predatory mammal, has no ability to anticipate that a mouse running behind a rock will reappear (Grandin, 1998). These concepts of consciousness may prove to be coextensive with, and even nomically connected to, phenomenal consciousness.

A second significant finding in recent animal research is that certain animals possess a form of declarative memory and can use it to make robust inferences about the past and future. This suggests strongly that they have an additional form of consciousness, over and above access consciousness:

Current psychological theory distinguishes between so-called "procedural" and "declarative" memory, or "remembering how" and "remembering that". Remembering how to ride a bicycle is "procedural"; remembering that a two-wheeled vehicle with saddle and handlebar is called a bicycle is declarative (Rose, 1993).

Declarative memory can be split into "semantic" and "episodic" memory:

In humans, episodic memory, memory for one's personal past, is generally distinguished from semantic memory, memory for facts and ideas. Episodic memory is an integrated memory of a unique event that includes what took place, where and when (Shettleworth, 2001).

Episodic memory is philosophically significant for two reasons. First, phenomenality is part and parcel of episodic memory: "It is generally agreed ... that episodic memory is concerned with the conscious recall of specific past experiences" (Clayton et al., 2000, p. 274). Second, as we shall see below, episodic memory supports flexible inference about the past and planning for the future.

Episodic-like memory has been identified in rats (Bunsey and Eichenbaum, 1996; Eichenbaum, 2004) and food-caching birds (Shettleworth, 2001; Clayton et al., 2003). Rats demonstrate two forms of flexible memory expression: transitivity, the ability to judge inferentially across stimulus pairs that share a common element (e.g. having learned that A goes with B and that B goes with C, choosing C when presented with A), and symmetry, the ability to associate paired elements presented in the reverse of the training order. Likewise, western scrub-jays form flexible memories of what food they hid, where and when, and can adjust their caching behaviour to anticipate future needs (planning). The ability of these animals to make a robust set of inferences based on the relative ordering of events in their past, and to plan for their future, suggests that they enjoy a richer kind of temporal consciousness than access consciousness, one which may or may not be nomically connected with phenomenal consciousness.

Other animals' episodic memories are not likely to be as rich as ours. Even if rats and birds can access their past experiences, we need not suppose that they are engaging in some kind of "mental time-travel" every time they recollect something. An animal may consciously recall an event in its past without possessing the refined awareness that a particular episode was its own experience (see Shettleworth, 2001; Suddendorf and Busby, 2003). For this reason, researchers prefer to use the term "episodic-like" memory for non-human animals.

It remains to be seen whether these two concepts of consciousness - integrative and episodic - can account for phenomenal experience in animals. All I am claiming here is that philosophers would do well not to overlook them.

Can normal animals have phenomenal experiences without access consciousness?

Block's (1995) assertion that brain-damaged animals could enjoy phenomenal experiences without access-consciousness in no way undermines my conclusion that access consciousness is biologically prior to phenomenal consciousness, as only abnormal individuals are involved. On the other hand, the occurrence of phenomenal experiences without access-consciousness in normal, mature animals would appear to contradict my claim that the emergence of phenomenal consciousness in animals post-dated that of access consciousness.

In his discussion of the refrigerator that suddenly goes off, Block cites "the feeling that one has been hearing the noise all along" as evidence for inattentive phenomenality (1998, p. 4). Although other interpretations are possible, I would agree with Block that the most straightforward way of explaining this case is the hypothesis that "there is a period in which one has phenomenal consciousness of the noise without access consciousness of it" (1998, p. 4).

If we return to the case of the distracted driver, we saw that the driver had intermittent access-consciousness as well as phenomenal consciousness of the scene, but because he was distracted, the contents of his awareness were never encoded in his long-term memory. What we did not discuss above was the question: during which portion of his trip home is the driver phenomenally conscious? Is it only during the brief moments when he is paying attention to the road (and hence access-conscious), or is it during the whole trip? (We can construct a parallel animal case of a sentient animal returning home by a familiar route.)

Some philosophers (Wright, 2003) insist that visual experiences cannot take place unless one is paying attention to what one sees. They base their arguments on studies of inattentional blindness, which show that when subjects are engaged in visual tasks demanding a high degree of attention, they fail to notice unexpected objects in their field of vision, even when these objects occupy the same point in the visual scene as the objects being attended to. In change blindness experiments, subjects fail to notice large-scale changes in a scene directly before their eyes, because their attention is diverted to other features of the scene. The upshot is that "there seems to be no conscious perception without attention" (Wright, 2003). If this is so, then for physically normal animals, there can be no phenomenal consciousness without access consciousness.

While Wright's (2003) account accords well with my conclusion regarding the priority of access consciousness, it is inconsistent with our ordinary usage of the word "conscious", as it entails that I am not conscious of any of the objects in my field of vision except those I am attending to. This is a very odd claim.

On the other hand, Hardcastle (1997) offers a contrary interpretation of the studies cited by Wright (2003) which is equally consistent with the results. Drawing on well-supported findings which show that human subjects often see more than they can report, and that direct questioning does not elicit the full story, she argues that attention is importantly different from consciousness. In the change and inattentional blindness experiments, she suggests, subjects might well be conscious of the changes, but forget them very rapidly because they are not attending to them.

Hardcastle's (1997) account seems to be a more natural way of explaining the data than Wright's (2003), as it manages to avoid the counter-intuitive conclusion that we are not conscious of peripheral stimuli and accords better with our ordinary usage of the word "conscious".

There is nothing to prevent us from applying Hardcastle's account to non-human animals as well as people.

Conclusion 4.21: The scope of phenomenal consciousness is larger than that of attention.

Conclusion 4.22: Normal animals are capable of having phenomenally conscious experiences without access consciousness, due to lack of attention or rapid memory loss.

Is access consciousness prior to phenomenal consciousness?

We may opt for a broad usage of the term "phenomenally conscious", but if we do so, we need to make it clear in what sense we are employing the term, especially when describing cases of access consciousness occurring in the absence of phenomenal consciousness and vice versa. I suggest that this lack of precision has prevented philosophers from resolving the question of which concept of consciousness is prior to which. This has practical relevance for the question of animal consciousness.

It was argued above that there are no species of animals whose members possess phenomenal consciousness but lack access consciousness: even if we could imagine a species of animal whose phenomenally conscious experiences were devoid of attention, there would be no point in ascribing phenomenal consciousness to the animal, as we could describe its experiences more parsimoniously as unconscious perceptions. If this philosophical argument is correct, access consciousness must have emerged first in animal evolution.

But this leaves us with a philosophical conundrum. We also argued above that animals can have phenomenally conscious experiences even when they are not paying attention. What is the difference between phenomenal experiences that animals are not paying attention to and unconscious perceptions?

The question becomes especially acute if we consider what might be called a "limiting case" of phenomenality, cited by Block (2001). Human subjects may be exposed for very brief intervals to subliminal or masked stimuli which they subsequently disavow awareness of, yet these stimuli have a measurable influence on their behaviour (Block, 2001, p. 203; see also Berridge and Winkielman, 2003). Block (2001) is willing to allow that in such a case, the subjects "really do have the phenomenal experience of the stimuli without knowing it" (2001, p. 203, italics mine). But as Rosenthal (2002) points out, this way of talking is completely at odds with Block's (1995) original definition of "phenomenal consciousness": it was meant to describe "what it is like" to be in a conscious state, but if the subject never realises that she is in a certain state, then it cannot possess any "what-it's-likeness" or subjectivity.

Rosenthal (2002) argues that before we can answer this question, we have to distinguish between two kinds of phenomenal experience: "thin phenomenality", or unconscious perceiving, and "thick phenomenality" or "the subjective awareness of thin phenomenality" (2002, p. 661).

This is a helpful move, yet I would suggest that it does not go far enough. There are at least three reasons why a human subject may not be conscious of a phenomenal experience: the subject may forget the experience very quickly (as in change blindness, and in the intermittent periods of attention during so-called "distracted driving"); the subject may not be paying attention (as in peripheral vision, or in the long intervals between attentive episodes in distracted driving); or the experience may be so brief that it never enters the subject's conscious awareness (as in subliminal perception). What I am proposing is that all of these forms of phenomenal experience should be classified as "thin" or derivative cases. Inattention occurs because the subject's mind is directed at something else; forgetting occurs because something more important pushes the experience out of the subject's working memory; and subliminal perception is only recognised because it impacts on our behaviour towards things we are fully conscious of. These "thin" forms of phenomenality, I suggest, can only be regarded as phenomenal by virtue of their connection to "thick" or paradigm phenomenal experiences. We can express these insights as follows:

Conclusion 4.23: The following features are necessary for "thick" phenomenality: (i) long-term memory; (ii) an attention mechanism; (iii) exposure to the stimulus for a sufficient duration (100 milliseconds in human subjects).

These requirements for "thick" phenomenality should not be regarded as exhaustive. On the contrary: our earlier conclusion that access consciousness is biologically prior to phenomenal consciousness entails that some additional criteria must be met that would distinguish a "thick" phenomenal experience from attentiveness to a stimulus. What these extra criteria may be will be discussed below.

Conclusion 4.24: While it is possible that certain phenomenally conscious individuals within a species may never have "thick" phenomenal experiences, there are no species of animals whose members only have "thin" phenomenal experiences.

This conclusion entails that in each animal species whose members are phenomenally conscious, a typical individual will have at least some "thick" phenomenal experiences.

We can now address the paradoxes discussed above regarding phenomenality. If access consciousness is biologically prior to phenomenal consciousness, then the criteria for "thick" phenomenality must be more stringent than those for access consciousness; on the other hand, the requirements for phenomenality in its "thin", derivative sense are less so, because the element of attention is absent. However, "thick" phenomenality is the paradigm case, and the various "thin" forms of phenomenality are derivative cases which we impute to animals only insofar as they (or their conspecifics) sometimes undergo "thick" phenomenal experiences. If all of the members of a species lacked "thick" phenomenality, their perceptions would indeed be completely unconscious. For phenomenality to have evolved, I suggest, the element of attention must have been present: therefore, access consciousness came first.

The minimal criteria I have listed above for "thick" phenomenality are satisfied by a large number of animals. For instance, the fruit fly Drosophila melanogaster has a long-term memory as well as attentional states. Does that make it phenomenally conscious?

Rosenthal does not think so. His "pretheoretic notion" of a conscious state is a state which one is conscious of being in (2002, p. 657). A fully-fledged conscious state therefore requires a higher-order thought that one is in that mental state. Rosenthal doubts whether even lizards (let alone flies) possess "thick" phenomenality. Although they are awake and responsive to sensory stimuli, that does not show that their mental states are conscious:

Lizards when awake presumably have thin phenomenality, but that is not a kind of consciousness at all, at least as we theoretically distinguish conscious from nonconscious states (2002, p. 662).

What makes phenomenal states subjective? An outline of the current philosophical positions

Rosenthal's account of conscious states is an example of a higher order representational (HOR) theory of phenomenal consciousness. HOR theorists argue that a mental state (such as a perception) is not intrinsically conscious, but only becomes conscious as the object of a higher-order state. Higher-order states are variously conceived as thoughts (by HOT theorists) or as inner perceptions (by HOP theorists). First order representational (FOR) theorists, by contrast, believe that if a perception has the appropriate relations to other first-order cognitive states, it is phenomenally conscious, regardless of whether the perceiver forms a higher-order representation of it (Wright, 2003).

The contemporary philosophical debate about animal consciousness is split into several camps, with conflicting intuitions regarding the following four inconsistent propositions (Lurz, 2003):

1. Conscious mental states are mental states of which one is conscious.
2. To be conscious of one's mental states is to be conscious that one has them.
3. Animals have conscious mental states.
4. Animals are not conscious that they have mental states.
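To see why these four propositions form a genuine tetralemma, it may help to set out the inconsistency schematically. The following formalization is only a rough sketch, and the predicate letters are my own illustrative shorthand rather than Lurz's notation:

\begin{align*}
(1)&\quad \forall a\,\forall m\,\bigl(Cs(a,m) \leftrightarrow CO(a,m)\bigr)\\
(2)&\quad \forall a\,\forall m\,\bigl(CO(a,m) \leftrightarrow CT(a,m)\bigr)\\
(3)&\quad \exists a\,\exists m\,\bigl(\mathit{Animal}(a) \wedge Cs(a,m)\bigr)\\
(4)&\quad \forall a\,\forall m\,\bigl(\mathit{Animal}(a) \rightarrow \neg CT(a,m)\bigr)
\end{align*}

Here $Cs(a,m)$ reads "m is a conscious mental state of creature a", $CO(a,m)$ reads "a is conscious of its mental state m", and $CT(a,m)$ reads "a is conscious that it has mental state m". From (3), some animal has a conscious mental state; (1) then entails that it is conscious of that state, (2) that it is conscious that it has it, contradicting (4). Each of the camps described below escapes the contradiction by rejecting a different member of the set.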

Proponents of so-called higher order representational (HOR) theories of consciousness accept propositions 1 and 2.

Exclusive HOR theorists like Carruthers also accept 4 but reject 3 - that is, they allow that human infants and non-human animals have beliefs, desires and perceptions, but insist (Carruthers, 2000, p. 199) that we can explain their behaviour perfectly well without attributing conscious beliefs, desires and perceptions to them.

Inclusive HOR theorists, such as Rosenthal, accept 3 but reject 4. Rosenthal (2002) construes an animal as having a thought that it is in some state. Such a thought requires a minimal concept of self, but "any creature with even the most rudimentary intentional states will presumably be able to distinguish between itself and everything else" (2002, p. 661).

Defenders of first-order representational (FOR) accounts of consciousness, such as Dretske, accept 2, 3 and 4 but reject 1. For example, Dretske argues that a mental state becomes conscious simply by being an act of creature consciousness. Thus an animal need not be aware of its states for them to be conscious. On this account, consciousness has a very practical function: to alert an animal to salient objects in its environment - e.g. potential mates, predators or prey.

Lurz's (2003) same-order (SO) account presents itself as a via media between HOR and FOR. Lurz grants the premise that to have conscious mental states is to have mental states that one is conscious of, but queries the assumption (shared by HOR and FOR theorists) that to be conscious of one's mental states is to be conscious that one has them. Lurz suggests that a creature's experiences are conscious if it is conscious of what (not that) its experiences represent. By "what its experiences represent" he means their intentional object.

An evaluation of the current positions

Before clarifying my own position, I would like to make three general comments. First, we should recognise that each of the four propositions in Lurz's tetralemma has some intuitive appeal when taken alone. Linguistic criteria alone will not resolve the issue of animal consciousness.

Second, the tetralemma assumes that the question "What makes a mental state conscious?" has a single answer. But as we have already seen, the word "conscious" has a variety of different senses, some stronger than others, which need to be carefully distinguished.

Third, we should heed Carruthers' (2001) warning against pitching the standards for explaining phenomenal consciousness too high:

[A] reductive explanation of something - and of phenomenal consciousness in particular - doesn't have to be such that we cannot conceive of the explanandum (that which is being explained) in the absence of the explanans (that which does the explaining). Rather, we just need to have good reason to think that the explained properties are constituted by the explaining ones, in such a way that nothing else needed to be added to the world once the explaining properties were present, in order for the world to contain the target phenomenon (Carruthers, 2001).

Let us begin with Dretske's account of phenomenal consciousness. The kernel of truth that it contains is that many of an animal's first-order perceptions are indeed phenomenally conscious. Lurz's (2003) linguistic argument against Dretske - that it is counter-intuitive to say that an animal could have a conscious experience of which it was not conscious - overlooks the variety of "thin" usages of "phenomenally conscious" that I discussed above. I argued that one's perceptions, even while not paying attention, are "phenomenally conscious" in this "thin" sense. In that sense, Dretske's (1995) assertion that "You may not pay much attention to what you see, smell, or hear, but if you see, smell or hear it, you are conscious of it" is perfectly correct. But I also argued that if all of an animal's perceptions were "thin" phenomenal experiences, there would be no grounds for calling any of them phenomenally conscious: a neutral third-person account would do the job just as well. At least some of an animal's perceptions must be phenomenal in a stronger, "thick" sense which exceeds Dretske's requirements. I conclude:

Conclusion 4.25: Dretske's FOR account cannot explain the full range of an animal's phenomenally conscious experiences.

If the first-order account is false, then at least some of an animal's phenomenal experiences require an added ingredient to make them "conscious" in a stronger sense of the word. What might this ingredient be?

Wright's (2003) proposal is perhaps the most straightforward and least cognitively demanding on animals. Although Wright (2003) describes his higher-order attention (HOA) theory as a form of HOR that is grounded in attention, his view does not require animals to have any thoughts about their perceptions. Wright proposes that "for a state to be phenomenally conscious, it must be poised to impact the belief/desire system and it must have attentional resources dedicated to it" (2003). These attentional resources "have to do with access to and control over information by the perceiving subject" (2003). In other words, access consciousness is what makes for phenomenal consciousness.

However, Wright's account is incompatible with my earlier conclusion that access consciousness is biologically prior to phenomenal consciousness: if attention sufficed for phenomenality, access consciousness could never occur without phenomenal consciousness, and so could not have preceded it in evolution.

Conclusion 4.26: A HOA theory of animal phenomenal consciousness is inadequate.

Lurz's (2003) same-order (SO) account is similar to Wright's: he argues that "many animals seem to attend to certain features of what they are perceiving ... which suggests that they are, to some degree, conscious of what they are perceiving in perceiving those features" (italics mine).

What distinguishes Lurz's account from Wright's is his insistence that animals are conscious of what they are perceiving - even if they are not conscious that they are perceiving. Animals' consciousness of what they are perceiving implies that they are conscious of their mental states:

My cat, for instance, upon espying movement in the bushes, behaves in such a way that it seems quite appropriate to say of her that she is paying attention to what she is seeing - namely, the movement in the bushes. But surely, if she were completely unaware of what she was seeing, she would not be able to attend to what she was seeing. So, since it is plausible to say that my cat is paying attention to what she is seeing, it is plausible to say that she is (to some degree at least) conscious of what she is seeing. However, it is rather implausible that my cat is conscious that she sees movement in the bushes, since it is rather implausible to suppose, as we saw above, that my cat has thoughts about her own mental states. Nevertheless, in being conscious of what she is seeing, my cat is conscious of what a token visual state of hers represents. And, again, it is hard to understand how my cat could be conscious of what a token mental state of hers represents if she were not, in some way, conscious of the mental state itself (Lurz, 2003).

The cognitive requirements that Lurz is imposing on animals are hardly exacting, as they seem to require nothing more than a capacity for object recognition, which is found even in honeybees:

A cat who sees movement in the bushes, for instance, but who is not conscious of what she is perceiving - perhaps, as a result of being momentarily distracted by a loud noise - is less likely to catch the mouse in the bushes than the cat who sees the movement and is conscious of what she is perceiving (Lurz, 2003).

Let us recall Lurz's (2003) formulation: a creature's experiences are conscious if it is conscious of what its experiences represent. However, one problem with Lurz's account is that it appears too "outward-looking" to explain why creatures have an inner, experiential life. Why do I need to be subjectively aware of my experiences in order to be conscious of what they represent? It seems that something more is needed:

Conclusion 4.27: Lurz's SO theory of phenomenal consciousness in animals appears inadequate to account for subjectivity.

This cannot be treated as a firm conclusion, as it was arrived at on purely analytical grounds. There is, however, a theory that provides the "something more" that first-order and same-order theories lack. Rosenthal's (2002) HOT theory requires an animal to have the higher-order thought that it is in a certain state, before the state can qualify as conscious. This is a very strong requirement. According to HOT theorists, mental states do not become conscious merely by being observed; they become conscious by being thought about by their subject. This means that animals must have non-observational access to their mental states. As Lurz (2003) remarks, this is an implausible supposition for any non-human animal.

Conclusion 4.28: A HOT theory of phenomenal consciousness would preclude non-human animals from having subjective experiences.

We appear to have arrived at a philosophical impasse. All of the available philosophical theories of phenomenal consciousness impose requirements that are either so minimal that far too many animals qualify, or so restrictive that only human beings appear to. First-order, same-order and attentional theories allow even insects to qualify as phenomenally conscious, but fail to account for the "inner" feel of an experience. Higher-order theories account for subjectivity quite well, but restrict it to human beings. Both kinds of theory are scientifically implausible: the overwhelming consensus of neuroscientists is that primary consciousness requires a brain much more complex than an insect's, while the view that only human beings are conscious is a minority one (Rose, 2002).

What has gone wrong here? I suggest that the "original sin" of philosophers who have formulated theories of phenomenal consciousness was to suppose that the requirements for subjectivity could be elucidated on an a priori basis, through careful analysis. Now, an analytical approach might work if we had a good idea of what consciousness is, or how it arose in the first place, or what it is for. In fact, we know none of these things, though theories abound. A selection of these theories is listed in the table below.

Table 4.1 - Theories of what consciousness is for: a brief overview

1. Consciousness is an epiphenomenon (Huxley).
2. Conscious feelings arose because they motivate an animal to seek what is pleasant and avoid what is painful (Aristotle).
3. Consciousness arose because it enabled its possessors to unify or integrate their perceptions into a single "scene" that cannot be decomposed into independent components (Edelman and Tononi, 2000).
4. Consciousness arose because it was more efficient than programming an organism with instructions enabling it to meet every contingency (Griffin, 1992).
5. Consciousness arose to enable organisms to meet the demands of a complex environment. However, environmental complexity is multi-dimensional; it cannot be measured on a scale (Godfrey-Smith, 2002).
6. Consciousness evolved to enable animals to deal with various kinds of environmental challenges their ancestors faced (Panksepp, 1998b).
7. Consciousness arose so as to enable animals to cope with immediate threats to their survival such as suffocation and thirst (Denton et al., 1996; Liotti et al., 1999; Parsons et al., 2001).
8. Consciousness gives its possessors the advantage of being able to guess what other individuals are thinking about and how they are feeling (Whiten, 1997; Cartmill, 2000).
9. Consciousness arises as a spin-off from such a theory-of-mind mechanism (Carruthers, 2000).
10. Brain activity (as defined by EEG patterns) that supports consciousness in mammals is a precondition for all their goal-directed survival and reproductive behaviour (e.g. locomotion, hunting, evading predators, mating, attending, learning and so on) (Baars, 2001).
11. Activities that are essential to the survival of our species - e.g. eating, raising children - require consciousness (Searle, 1999). It must therefore have a biological role.
12. Animals receive continual bodily feedback from their muscular movements when navigating their environment. Conscious animals have a very short real-time "muscular memory" which alerts them to any unexpected bodily feedback when probing their surroundings. A core circuit in their brains then enables them to cancel, at the last second, a movement they may have been planning, if an unexpected situation arises. This real-time veto-on-the-fly may save their lives (Cotterill, 1997).

Of the theories of consciousness listed above, nearly all remain tenable at the present time - although Aristotle's account appears to be discredited by the findings that (a) all cellular organisms - even bacteria - will seek out attractive stimuli and avoid noxious stimuli, and (b) there are unconscious sensory modalities even in human beings (e.g. the vomeronasal sense). We simply do not know which one of the remaining views is correct. Perhaps several of them are. Consciousness, for all we know, may have evolved for some very mundane purpose, such as enabling predators that attack moving prey to "lead" them by anticipating their trajectories - something fish and amphibia cannot do, but mammals can - apparently because their frontal cortex (used for spatial planning) and visual cortex are relatively larger (Kavanau, 1997, p. 255).

Consciousness, then, is a subject about which there is radical uncertainty. Nevertheless, our ignorance is not total. After more than a century of research, scientists now have a pretty good idea what the neural requirements of consciousness are, and what role the different parts of the brain play in the creation of conscious experience. These are the pieces of the jigsaw puzzle we have to work with. Philosophical analysis alone cannot identify the missing "X" that makes the difference between "thick" phenomenality and access consciousness.

This, then, is how we might identify consciousness in animals, using a combined neurobiological-experimental approach. First, we might draw up a list of the neural states and processes required to support consciousness in human beings. Next, we would examine animals whose brain has a similar structural layout to ours, and identify those whose brains have the same features. (We could cross-check our findings by verifying experimentally that these animals were able to give accurate reports.) Finally, for those animals whose brain structure is very different from ours, we would (a) look for homologous structures - or, failing that, analogous structures - in their brains with the same functionality as those that support consciousness in human beings, and (b) try to elicit non-verbal reports from them in experimental settings. If the standard view of primary consciousness is correct, this would suffice to warrant the ascription of consciousness to them.
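The proposed methodology can be summarised as a rough decision procedure. The sketch below is purely illustrative: every name in it (NEURAL_REQUIREMENTS, Animal, its attributes) is a hypothetical placeholder of my own devising, not an established criterion or an existing data set.

# Illustrative sketch only: the combined neurobiological-experimental
# methodology recast as a decision procedure. All names are
# hypothetical placeholders.

# Stand-ins for the listed neural states and processes taken to
# support consciousness in human beings (step 1).
NEURAL_REQUIREMENTS = {"feature_A", "feature_B"}

class Animal:
    def __init__(self, brain_like_ours, shared_features,
                 analogous_features, gives_reports):
        self.brain_like_ours = brain_like_ours        # structural layout similar to the human brain?
        self.shared_features = set(shared_features)   # human-type consciousness-supporting features found
        self.analogous_features = set(analogous_features)  # homologous/analogous structures with same functionality
        self.gives_reports = gives_reports            # accurate verbal or non-verbal reports in experiments?

def warrants_ascription(animal):
    """Return True if the methodology warrants ascribing primary consciousness."""
    if animal.brain_like_ours:
        # Step 2: animals with similar brains must exhibit the same neural
        # features, cross-checked against accurate reports.
        return NEURAL_REQUIREMENTS <= animal.shared_features and animal.gives_reports
    # Step 3: animals with very different brains need homologous or analogous
    # structures with the same functionality, plus non-verbal reports.
    return NEURAL_REQUIREMENTS <= animal.analogous_features and animal.gives_reports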

4. Carruthers' denial of consciousness to non-human animals

The methodology proposed here would not impress Carruthers, who has consistently upheld the view that phenomenal consciousness is the peculiar preserve of human beings - though he allows that chimpanzees may also have it. Carruthers also rejects the ability to give accurate reports as a criterion for identifying phenomenal consciousness in animals. I propose to discuss his views under two headings: first, do his arguments against animal phenomenality work, and second, is it possible to prove his views wrong?

Carruthers' argument against phenomenal consciousness in animals

The essence of Carruthers' case against phenomenal consciousness in non-human animals can be summarised as follows:

(i) phenomenal consciousness requires the ability to think about one's own thoughts;
(ii) the ability to conceptualise one's thoughts requires one to possess a theory of mind and attribute mental states to other individuals;
(iii) there is little evidence that non-human animals (except possibly chimpanzees) possess this ability; so
(iv) there is no reason to ascribe phenomenal consciousness to most other animals.

The first premise expresses the HOT theory of phenomenal consciousness which both Carruthers and Rosenthal endorse. There is evidence for a rudimentary theory of mind in chimpanzees, dogs and elephants (Horowitz, 2002; Nissani, 2004), but let us grant Carruthers' third premise for argument's sake. The critical step in his argument is the second, which has been critiqued by Allen (2003).

The interesting thing about Carruthers' theory of the origin of phenomenal consciousness is that it treats phenomenal consciousness as a by-product that was not directly selected for: it arose as a consequence of animals acquiring a "mind-reading faculty" that enabled them to interpret other animals' behaviour and attribute mental states to them. According to Carruthers (2000), this mind-reading faculty may have arisen in response to the need to interpret early hominid attempts at speech. Since the human senses of touch, taste, smell, hearing and sight all have a phenomenal feel to them, Carruthers needs to explain why his mind-reading faculty needed to have access to the full range of perceptual representations:

It would have needed to have access to auditory input in order to play a role in generating interpretations of heard speech, and it would have needed to have access to visual input in order to represent and interpret people's movements and gestures, as well as to generate representations of the form, "A sees that P" or "A sees that [demonstrated object/event]" (Carruthers, 2000, p. 231).

Allen (2003, p. 12) finds this argument unconvincing, as it only explains sight and hearing:

The way others look to us, sound to us, and the sensations they produce when they touch us are all possible targets of interpretation. In contrast, there seems little to interpret regarding others' mental states in the way they smell and taste to us, nor in the way our stomachs feel when they have not eaten for a while. I conclude that the mind-reading faculty has no need for access to smell and taste, nor to many somatosensory sensations, for interpretative purposes.

In any case, Carruthers' claim that our "mind-reading faculty" has access to the full range of perceptual systems is mistaken: the vomeronasal system, which responds to pheromones and affects human behaviour, is devoid of phenomenality (Allen, 2003, p. 13).

Conclusion 4.29: Carruthers' argument fails to explain the range of our phenomenal consciousness and is unsuccessful in undermining the case for phenomenal consciousness in non-human animals.

Can there be a proof of phenomenal consciousness in animals?

According to Carruthers, most human behaviour can be explained in terms of first-order states which we share with animals. Only those behaviours which require explanation in terms of higher-order states can be described as phenomenally conscious. In particular, "phenomenal consciousness is implicated whenever we draw a distinction between the way things are and the way they seem or appear" (Carruthers, 2004).

Recent experiments with binocular rivalry have demonstrated that humans and other animals make identical reports about what they see when conflicting stimuli are presented to their two eyes:

If two different stimuli - e.g. horizontal and vertical stripes - are presented to each of one's eyes, one does not see a blend, but rather first horizontal stripes that fill the whole visual field and then vertical stripes, that fill the whole field. Logothetis and his colleagues... trained monkeys to pull different levers for different patterns. They then presented different patterns to the monkeys' two eyes, and observed that with monkeys as with people, the monkeys switched back and forth between the two levers even though the sensory input remained the same (Block, 2003, italics mine).

The most obvious way to explain these results is to say that human and monkey brains handle the conflict of data in the same way, and that humans and monkeys experience the same inconstancy in their conscious perceptions. Carruthers could, however, reply that there is no need to postulate higher-order states here: the monkeys simply have fluctuating first-order perceptions, which they have been conditioned to respond to by pulling a lever.

This suggests one way of testing for phenomenal consciousness in animals: any animals that can learn to correct their perceptual errors are phenomenally conscious (Allen, 2002). On this point, the only findings that I have been able to uncover are negative:

The possibility of differentiating between the phenomenal field and objective, "meaningful" images evidently is a property only of human consciousness; owing to it, man is liberated from the slavery of sensory impressions when they are distorted by incidental conditions of perception. In this connection experiments with monkeys fitted with glasses inverting the retinal image are interesting; it developed that as distinct from man, in the monkeys this completely disrupted their behavior, and they entered a long period of inactivity (Leontev, 1978).

Why were the monkeys unable to adjust to their new view of the world? I would suggest that Carruthers' (2004) distinction between the way things are and the way they seem can only be drawn by those able to formulate the concepts of appearance versus reality. These concepts require abstract language, which monkeys (and some human beings) lack. Since positive proof of consciousness requires this distinction, we are forced to the following pessimistic conclusion:

Conclusion 4.30: Carruthers' claim that non-human animals are not phenomenally conscious remains, for the time being, impervious to disproof.

This might seem an unsatisfactory conclusion to this brief philosophical enquiry into animal consciousness, but it should not surprise anyone. Quantum theory, which also carries strong metaphysical implications, is consistent with a number of wildly differing world-views, but scientists have learned to live with that fact. We may never achieve absolute certainty about animal consciousness, but what I have argued is that we can get well-supported results regarding consciousness if we attend to the wealth of neurological and behavioural data that scientists have uncovered in the last few decades, and then subject it to incisive philosophical analysis. It is to this data that I now turn.