The Conscious Experience

Throughout history, humankind has sought to set itself apart from other animals. In a world where dualisms prevailed and the physical human body was plainly animal, philosophers and theologians turned to the mind and the soul for ways of distinguishing our species. In these more secular times, the idea of the soul as a spiritual substance unique to human beings has lost favour in academic, if not popular, circles, but the mystery of the mind continues to be a source of academic intrigue. As scientists and philosophers make further inroads, deepening our understanding of the brain and its functions, prejudices of the past remain passengers in their vehicles. The aim of this chapter is to challenge the widely held belief that consciousness is the exclusive property of human beings.

It has been said that we all have a rough idea of what we mean by consciousness. But do we? It would appear that, at best, our notions of consciousness are vague, and even among the experts there is little consensus on what constitutes consciousness, much less how we might explain it. This being so, it seems ironic that many humans are quick to deny consciousness in other animal species. Consciousness continues to be a jealously guarded bastion of what it means to be human, and yet there are many philosophers and cognitive scientists who would deny consciousness any real role in human [or non-human] activity. In epiphenomenalism, for example, the conscious experience is rendered impotent.

In this chapter I will look at different understandings of consciousness including some of the ways in which it is divided into component features. Following on from this I will set out reasons why I believe consciousness is not an impotent side-product of neuronal activity and why it needs to be taken seriously in both human and non-human animals.

I then plan to explore a few of the many approaches to understanding consciousness with a view to exposing and challenging some of the assumptions that many cognitive scientists and philosophers of mind make with regard to the consciousness of ‘lower’ animal species. Through the examination of some of the more prevalent contemporary philosophical theories of consciousness, namely functionalism, property dualism and higher order monitoring, I hope to show that the notion of consciousness in non-human animals is wholly compatible with our current understandings of the mind.

As a later part of this discussion on animal consciousness, I wish to challenge aspects of this assumed hierarchy by looking at the biological continuity between human and non-human animals, particularly in relation to brain structure and neurophysiology. This will also entail a brief discussion on the relationship between consciousness, intelligence, learning and memory. My argument is not that all animals are equal, but that consciousness is not the exclusive domain of the human species; therefore we cannot posit consciousness as the threshold of ontological difference between human beings and other animals.

What is Consciousness?

One of the difficulties with understanding consciousness is that we are so intimately familiar with our own conscious states and yet we have no way of knowing the consciousness of anyone other than ourselves. If we cannot truly know the conscious states of other humans, how can we talk with any certainty about the conscious states of non-human animals? Moreover, what does it mean to say an animal is conscious or, conversely, to say an animal is not conscious?

Consciousness means many things to many people. The word is part of our everyday language and we use it in a variety of ways. We might say that a person is conscious when she is aware of what is happening in her immediate environment, whilst a person in a coma is unconscious, but what about a person who is asleep? Someone can speak to me while I am sleeping and I may answer intelligibly, yet when I wake, I have no recall of the conversation. To what degree was I conscious, or unconscious, while being spoken to? And what is the connection between consciousness and memory?

When one drives a car, it is generally agreed that one needs to be conscious; we might say an unconscious person is incapable of driving. Yet, when setting out for a particular destination, one may ‘automatically’ make a wrong turn, perhaps turning in a direction one frequently travels instead of heading in the direction necessary for the intended destination. When one ‘wakes up’ to the mistake and becomes conscious of taking the wrong road, does this imply that the driver was not conscious at the time of making the wrong turn? Using similar examples, John Searle asserts that the difference is not one between consciousness and unconsciousness; rather, it is one between central and peripheral consciousness, or what he prefers to label as ‘attention.’ By his definition, the driver taking the wrong turn is not unconscious; rather, he is driving with peripheral consciousness, meaning that the driver is fully conscious but driving with a lesser degree of attention.

Searle’s notion of central and peripheral consciousness as degrees of attention is an important one in the consideration of animal consciousness. Many people have suggested that [‘higher’] animal consciousness resides at a level similar to that of our driver on ‘autopilot’. That is to say, animals are aware of their surroundings and have some degree of intention, but they do not have the fullness of consciousness that we humans enjoy in our fully wakened states. However, if we talk in terms of attention, we can readily discern different degrees of attention in non-human animals, just as we can in human persons. Animal trainers have long understood the benefits of motivational techniques for gaining an animal’s attention when training. There is a qualitative difference in performance between an animal who is ‘going through the motions’ of an exercise and one who gives the same exercise her full attention, just as there is a qualitative difference in driving performance for our driver. (A fully attentive driver would not have taken the wrong turn out of habit.)

The line between consciousness and unconsciousness, or, in Searle’s terminology, between central and peripheral consciousness, is not sharp, if it exists at all. I can be very ‘conscious’ of what I am writing and ‘unconscious’ of the bruise on my knee until I knock my knee against the desk, at which point I become acutely conscious of my bruising and only vaguely conscious of my work. Our attention is continually shifting in response to our ever-changing environment. The same can be said of even the simplest of creatures.

Consciousness is a slippery concept. We can experience consciousness only by being conscious of something. As David Hume observes, “I can never catch myself at any time without a perception, and can never observe anything but the perception.” Philosophers, in trying to address the issue of consciousness, have sought various ways to define the term more clearly. As already mentioned, one approach is to separate consciousness from intentionality, whereby ‘consciousness’ refers more specifically to the qualitative [or phenomenal] experiences of the mind (qualia), and intentionality refers to rationality and will, and may also include beliefs, desires, and intentions. Proponents of functionalism often favour this division, and it is easy to see the attraction, but such a division often leads to an incomplete theory of consciousness.

At a neurophysiological level we might describe consciousness as the synchronised processing of multiple sensory impressions from external sources and the brain’s memory. Edward O. Wilson describes consciousness as a virtual world of scenarios composed of dense and finely differentiated patterns in the neural circuits of the brain. We are told,

There is no single stream of consciousness in which all information is brought together by an executive ego. There are instead multiple streams of activity, some of which contribute to conscious thought and then phase out. Consciousness is the massive coupled aggregates of such participating circuits. The mind is a self-organising republic of scenarios that individually germinate, grow, evolve, disappear, and occasionally linger to spawn additional thought and physical activity.

In a way, Wilson’s definition epitomises the difficulties of defining and explaining consciousness. While there cannot be consciousness without significant neural activity, the presence of neural activity neither guarantees consciousness nor explains it. We are left with a circular definition in which consciousness is the aggregate of neural activity that contributes to conscious thought.

Taking Consciousness Seriously

Debates on consciousness continue to rage throughout the numerous disciplines connected with cognitive science, and there is little sign of any solution waiting in the wings. Not only is there no agreement on what consciousness is, there is no agreement as to what is required to give a complete understanding of consciousness. If there is no consensus on what consciousness is for humans, what can it mean to talk about the presence or absence of consciousness in non-human animals?

When Stephen Jay Gould claims,
We are the possessors of one extraordinary evolutionary invention called consciousness – the factor that permits us, rather than any other species, to [cogitate]…
what is the consciousness he speaks of that belongs to the human species alone?

David Chalmers suggests that the word ‘consciousness’ be reserved for the phenomena of experience, whilst the term ‘awareness’ be adopted for the more straightforward, psychological aspects of consciousness, but the difficulty remains. What does it mean to say that only human beings ‘experience’ the experiential aspect of consciousness?

The problems of animal consciousness, however, go far beyond the difficulties of definition. The widely held belief that consciousness is exclusive to humans predetermines the questions that are asked and the studies that are done. Furthermore, the belief that only humans are conscious sets the human mind quite apart from the minds of non-human animals in an evolutionary sense. It also raises serious questions about the causal role of consciousness and human free will. If we believe that non-human animals are not conscious, we must assume that their behaviour is, in some sense, automated, i.e. driven purely by environmental and internal stimuli. However, many aspects of animal behaviour are shared by the human species. If, in a given situation, human behaviour is the same as, or very similar to, non-human behaviour, we are led to conclude that the human behaviour, in that situation, is as automatic as that of the non-human, and thus that consciousness plays no significant causal role.

Of course, many cognitive scientists would claim exactly that. They regard consciousness as epiphenomenal, with no causal role to play, and offering no evolutionary advantage for survival. According to this view, consciousness is merely a contingent fluke of nature with no particular benefits to the human species. For example, Sir John Eccles states:

We can, in principle, explain all our input-output performance in terms of activity of neuronal circuits; and consequently, consciousness seems to be absolutely unnecessary! …as neurophysiologists we simply have no use for consciousness in our attempts to explain how the nervous system works.

Eliminativists also reject any causal role for consciousness, believing consciousness to be merely an illusion.

The most obvious difficulty with the claim that consciousness is non-causal is that it goes against our most basic intuitions. Nothing appears more certain to us than our own subjective experiences. But the difficulty goes beyond the contradiction of our intuition. Denial of the efficacy of consciousness seems to fly in the face of phenomena such as biofeedback and studies that develop conscious muscle control. It also belies the most basic processes human beings engage in when they decide, for example, to practise their piano-playing more diligently.

Biofeedback is a technique used by medical practitioners, psychologists and sports trainers around the globe. It operates by making the subjects aware of physiological processes in their bodies that are normally unconscious. Once these processes are brought into consciousness by biofeedback technology, the subject can then learn to control them.

An example of the use of biofeedback is given by T. Druckman and A. Minevich, who pioneered electroencephalogram (EEG) neurofeedback as a treatment for Attention Deficit Disorder. Sensors attached to the subject’s scalp carry the brain’s electrical activity to the EEG – a device that records and classifies the bandwidths of that activity – which converts it into auditory and visual feedback. The subject is thereby made conscious of the brain’s reaction to different situations and can learn, with practice, to control brain activity by attending to the brain wave patterns.
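For readers who find a schematic helpful, the short sketch below shows the bare structure of such a feedback session: a measurement is taken, compared with a target, and turned into feedback the subject can perceive. It is purely illustrative – the signal is synthetic random data, the function names are my own invention, and no clinical protocol is implied.

# Purely illustrative neurofeedback loop with synthetic data (not a clinical protocol).
# read_band_power stands in for an EEG-derived measurement; in a real system it
# would come from scalp sensors and band-power analysis of the recorded signal.
import random

def read_band_power():
    """Return a simulated measurement of brain activity (arbitrary units)."""
    return random.gauss(10.0, 2.0)

def give_feedback(value, target):
    """Turn the measurement into simple feedback the subject can perceive."""
    if value >= target:
        print(f"{value:5.1f} -> tone ON  (target reached)")
    else:
        print(f"{value:5.1f} -> tone off (below target)")

def session(samples=10, target=11.0):
    """Run a short session: measure, compare with the target, give feedback."""
    for _ in range(samples):
        give_feedback(read_band_power(), target)

if __name__ == "__main__":
    session()

The point of the loop is simply that information normally hidden from the subject is returned to her in a perceptible form, so that she can consciously practise altering it.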

Biofeedback techniques are used to help treat numerous disorders. In addition, athletes may use the techniques to improve their sporting performance. “Gold Medal performances start and end in the mind,” says Lawrence Klein. Indeed, we are all familiar with our national sporting coaches talking about the need for a positive psychological approach to any game. The Olympic Sport Psychologists and the Coaching Association of Canada go a step further with their “Mind Over Muscle” program, which uses biofeedback to enhance training.

Biofeedback works because it is only when a subject is conscious of specific physiological activities in her body that she is able, with practice, to develop ways of controlling those processes. Without biofeedback, without conscious awareness, attempting to control the processes would be akin to trying to read a map blindfolded.

Identity theorists, epiphenomenalists and property dualists may argue that what is operative in biofeedback is psychological consciousness rather than phenomenological (or true) consciousness. In other words, the controlling neural pathways are set up on the basis of the neurophysiological effects of the audio-visual data, and phenomenological consciousness (the what-is-it-like-ness) does not come into play. However, this is to accept that the two forms of consciousness are separate and discrete – a concept that I shall discuss at more length later in this chapter. If we are committed to a more integrated notion of consciousness, as I intend to argue for, we seem drawn to the conclusion that consciousness somehow plays a causal role.

Biofeedback may well point us towards the evolutionary advantage of consciousness. Feedback mechanisms operate in most, if not all, animals and plants. Basically, a feedback system is one in which a stimulus brings about a response that alters that stimulus. For example, a change in the carbon dioxide level of the blood stimulates the respiratory centre in the brain, which brings about a change in the rate and depth of breathing. The altered respiration then affects the carbon dioxide level of the blood; thus the initial stimulus is altered. Of course, many feedback mechanisms, even in human beings, operate subconsciously or unconsciously. For most of the time we are unconscious of our breathing and, in healthy states, the feedback mechanisms allow our bodies to maintain optimum levels of carbon dioxide without any conscious effort.
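As an aside for the technically minded, the toy simulation below illustrates the bare structure of such a negative feedback loop, loosely modelled on the carbon dioxide example. Every constant is arbitrary and purely illustrative; the point is only that the response (breathing rate) continually alters the stimulus (carbon dioxide level) that drives it.

# Toy negative-feedback loop, loosely modelled on the carbon dioxide example above.
# All constants are arbitrary; only the structure of the loop matters here:
# the stimulus (CO2) drives a response (breathing rate) that in turn reduces it.

def simulate(steps=10, co2=50.0, setpoint=40.0):
    for t in range(steps):
        error = co2 - setpoint                # how far the stimulus is from the setpoint
        breathing_rate = 12.0 + 0.8 * error   # respiratory centre raises the rate as CO2 rises
        co2 -= 0.3 * breathing_rate           # faster, deeper breathing clears more CO2
        co2 += 4.0                            # metabolism keeps producing CO2
        print(f"t={t:2d}  CO2={co2:5.1f}  breathing rate={breathing_rate:4.1f}")

if __name__ == "__main__":
    simulate()   # CO2 settles near the setpoint without any 'conscious' intervention

Run for a few steps, the simulated carbon dioxide level settles close to its setpoint entirely automatically, which is precisely the sense in which such loops require no consciousness at all.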

In complex animals consciousness can also play a role in the respiratory system. For example, human beings can, if we choose, hold our breath for a limited period. The ability to regulate breathing consciously is an important one for human culture. As well as allowing us to hold our breath for limited periods, conscious regulation of breathing enables us to speak, sing and play wind instruments, and it enhances our performance in most physical exercises. By choosing to hold our breath, we consciously inhibit the natural response of the respiratory system to rising carbon dioxide levels in our blood. Many other animals also have this ability to hold their breath intentionally. None of us is able to hold our breath indefinitely, however, because when the carbon dioxide level rises beyond a certain point, the respiratory centre will override any neurological inhibitions effected by conscious control.

Related to biofeedback techniques, but using technological bypass mechanisms to circumvent neurological breakdown, is the concept of ‘thought phones’. Recent studies at the Huntington Memorial Research Institute in Pasadena are opening the way for the limbs of paralysed people to be moved by conscious thought, using electrodes implanted in the brain and in the paralysed muscles. These electrodes transmit signals in much the same way as a cellphone, and it is conscious thought that initiates the signal. If these techniques prove successful, the case for epiphenomenalism may be further eroded.

My argument is that not only does consciousness play a causal role in human behaviour, but it also plays a role in the behaviour of non-human animals. It is at this point that those who are committed to the ontological supremacy of the human species will often argue that self-consciousness is the defining differential. They will accept that non-human animals experience pain and other sensations, and recognise that non-human animals are aware, but insist that non-human animals are not self-conscious: they are not aware that they are aware. To address these objections, I will look at three of the prevalent theories of consciousness: functionalism, property dualism, and higher order monitoring.

Theories of Consciousness

Functionalism
Functionalism provides a theory of mind that characterises mental states in terms of their causal and relational properties. Functionalists claim to be neither dualist nor materialist, preferring, instead, to define mental processes in terms of their function, and thus, in principle, eliminating many of the terms we commonly use to describe our conscious lives. A popular feature of many functionalist views is the endorsement of a computer metaphor as a means of understanding the mind-brain problem. At an abstract level, functionalism…
"recognises the possibility that systems as diverse as human beings, calculating machines, and disembodied spirits could all have mental states. In the functionalist view the psychology of a system depends not on the stuff it is made of (living cells, metal or spiritual energy) but on how the stuff is put together."

Although, in reality, most functionalists believe that particular functional states are embodied as particular neural states (and, in this respect, their position resembles that of the identity theorists), at a theoretical level functionalism remains neutral in the materialism-dualism debate that pervades cognitive science. Indeed, Jerry Fodor, one of the major advocates of functionalism, acknowledges that functionalism “allows for the possibility of disembodied Coke machines in exactly the same way and to the same extent that it allows for the possibility of disembodied minds.”

Because, according to functionalism, mental events are not restricted to the matter of human brains, it is perfectly feasible in theory for beliefs, feelings, desires, and other psychological states to be found in non-humans, be they carbon-based or silicon-based creatures. In other words, if the relevant functional relations are present in the system, it does not matter what kind of brain one is dealing with. In the same way that an artificial heart or a pig’s heart may function just as well as a human heart in pumping blood around a body, so too an artificial brain or an animal brain may, in the right context, be just as capable as a human brain of producing a full range of psychological states.

Not all functionalists subscribe to the possibility of artificial intelligence, however. Some, like Daniel Dennett, argue that the brain material does matter, because only a biological system can provide the speed and chemical compatibility that the control systems require. This is not to deny functionalism, but to argue that silicon-based systems are incapable of producing the functional relationships necessary for producing mental events. While functionalism of this kind may deny the capacity of silicon-based systems to produce a conscious mind, there seems to be no intrinsic reason for the denial of non-human animal consciousness.

The separation of consciousness into intentionality and qualia is claimed to have the advantage of successfully avoiding the traditional philosophical mind-body debate. The functionalists then focus on intentionality, but, it has been argued, they largely ignore those aspects they label ‘consciousness’ (i.e. qualia). Ultimately, functionalism reveals very little about the nature of consciousness or the relationship between mind and body. However, the difficulty with the separation of phenomenal consciousness from intentionality goes beyond the sidelining of consciousness; the separation itself is considered erroneous by many working within the field of cognitive science.

As John Searle asserts,
Only a being that could have conscious intentional states could have intentional states at all, and every unconscious intentional state is at least potentially conscious.

In other words, intentional states only make sense in a creature that can [at least potentially] consciously access those intentional states. Thus, to talk about intentional states without considering their conscious aspects is to fail to treat intentional states in their entirety. Mary Midgley states it well when she says, “People have insides as well as outsides; they are subjects as well as objects. And the two aspects operate together. We need views on both to make sense of either.” Of course, the same arguments also apply to non-human animals. No one denies that at least some, if not all, non-human animals have intentional states; it can therefore be argued that those animals with intentional states must also [at least potentially] be conscious.

David Armstrong proposes another functionalist approach. He defines different types of consciousness, beginning with a description of the unconscious state and endeavouring to discern what minimal consciousness might be. He then argues that an unconscious person can have a mind, knowledge and beliefs but cannot perceive or think. In other words, to be unconscious is to be without accessible mental activity. Minimal consciousness might include dreams, or the solving of a mathematical problem in one’s sleep. But minimal consciousness, in itself, is problematic. Even in sleep, we have some degree of perception, albeit muted. Sounds that reach our ears are often incorporated into our dreams; we may dream of running but be aware, in our dream state, that our legs are hampered, though we may not recognise the reason why.

Moving on, Armstrong suggests that the next level of consciousness is perceptual. The perceptions to which Armstrong refers are perceptions of environment and bodily state. He argues that one can have minimal consciousness without perceiving, but to perceive entails at least minimal consciousness. He also claims that if one is in the state of perceptual consciousness, one is living entirely in the present, and any events that occur in this state of consciousness (if no higher states of consciousness are also active) are not stored in one’s event memory. This account of perceptual consciousness would appear to be analogous to Searle’s concept of peripheral consciousness, and, just as it was noted that many people see non-human consciousness as predominantly peripheral in Searle’s schema, so it is a popular perception that non-human animals live only in the present. But, as we shall see further on in this chapter, non-human animals do remember and learn.

Thirdly, Armstrong raises the notion of introspective consciousness. He describes introspective consciousness as like perception, but instead of being directed to one’s environment or bodily state, it is directed towards one’s mental activity. In other words, it is an awareness of one’s mental activity, and thus can also be directed at itself (as introspection is also a mental activity). Using our example of the car driver who ‘unconsciously’ takes the wrong turn (above), Armstrong might say that before ‘waking up’, the driver had perceptual consciousness (i.e. could perceive the road in a sensory way and perform complex routines such as co-ordinating the turning of the car) but [temporarily] lacked introspective consciousness. A tentative claim Armstrong makes for introspective awareness is that it is necessary (although not sufficient) for the formation of event-memories, and “essential or nearly essential for memory of the past of the self.”

Armstrong further divides introspective consciousness into reflex introspection, and introspection ‘proper’. In this schema, reflex introspection is awareness of one’s perceptions whilst introspection ‘proper’ is turned upon the reflex introspection, thus one has an introspective awareness of one’s introspective awareness. He then suggests that this heightened awareness is ‘presumably’ not present in lower animals. As is so common with claims of this type, Armstrong makes no attempt to define what he classes as ‘lower animals’, nor does he make any suggestion as to why this introspective awareness proper might be presumed absent in those lower animals.

In Armstrong’s schema, reflex introspection is akin to a perceptual consciousness of one’s state of mind, whereas introspection proper is a careful scrutiny of one’s state of mind. On Armstrong’s reading of ‘lower’ animal consciousness, an irritated animal may feel anger without being aware that it feels angry.

The idea that non-human animals lack introspective awareness seems to have a popular appeal. Nevertheless, Armstrong does provide for the possibility that non-human animals are equipped with a reflex introspection, so we could argue that he allows non-human animals a greater degree of consciousness than do those who would limit animal consciousness to perceptual consciousness. I contend, however, that there are no grounds to limit all non-human consciousness to reflex introspection, and there is strong cause to argue that many non-human animals have what Armstrong terms “introspection proper”.

It is believed that human introspection is strongly connected to our sense of ‘self’. As Armstrong puts it, “we learn to organise what we introspect as being states of, and activities in, a single continuing entity: our self.” He then claims that the function of introspection “is fairly obvious[ly] … to sophisticate our mental processes in the interests of more sophisticated action.” It is, he states, the instrument of mental integration.

If mental integration is the function of introspection, then introspection must clearly be present in non-human animals. All animals have some sense of self, for without a sense of self there is nothing to stop an animal from attacking and/or eating itself. An animal must be able to distinguish its own body from that of other creatures, and, just as it must be able to organise its proprioceptions in such a way that they provide it with a perception of itself as a unitary being, so it must be able to integrate its perceptions and intentional states into those of a single creature. Similarly, if introspection is necessary for event memory, then animals must have introspective awareness. Any animal needs a sense of self, however limited that sense of self might be. Therefore, if, as Armstrong declares, “[i]ntrospective consciousness is consciousness of self,” then we must conclude that non-human animals as well as human beings have introspective consciousness.

Armstrong’s argument does not, however, appear to address the full range of conscious experiences. He confines his discussion to what Ned Block refers to as access consciousness: the type of consciousness that aids us in reasoning and rational action. By focusing on intentional consciousness, Armstrong ignores the experiential aspect of consciousness, qualia. It is not at all clear how our experience of red helps us to integrate our sensory perceptions. The answer would appear to be that it does not. Indeed, some who argue that qualia are epiphenomenal would claim that qualia serve no purpose and, as such, are totally irrelevant. And yet it could be argued that qualia are our supreme motivators.

At a basic level we eat to appease hunger and fuel our bodies. At a more human level (when food is not scarce) we spend hours modifying and blending different foods to provide ourselves with heightened taste sensations, even to the detriment of our bodily needs. We actively seek out greater sensory experiences through music, art, literature, etc. I have called this a more ‘human level’ because the activities to which I refer are normally associated with human behaviours. But we do not have to look far to see sensory preferences exhibited in non-human animals. Just as these preferences are not necessarily needs-driven in humans (few people prefer nutritious Brussels sprouts to ice cream), neither are they in non-human animals. Chocolate is poisonous to dogs and, eaten in sufficient quantities, can be fatal, but given the choice between a chocolate biscuit and a healthy dog biscuit, most dogs will choose the chocolate. Quite simply, it tastes better!

To exhibit such preferences, an animal must have an awareness of what it is like to experience each of the choices offered, and an appreciation of the experiential quality that each of those choices affords. We may also include event-memory here, for whilst, in the case of food choices, an initial choice may be made according to immediate olfactory sensations, later choices may be modified according to previous experiences. Different individuals exhibit different preferences, and an animal trainer must know what motivates the particular animals he or she is working with. For example, some dogs prefer liver, others cheese; many will not be motivated by food at all but will eagerly respond to the promise of a game of ball or a walk in the park.

But while qualia appear to be omitted from the functionalist account of consciousness, Armstrong alludes to a further state of introspection, what he terms an “introspective awareness of that introspective awareness”; however, he does not elaborate on this theme. We are left to ponder an infinite regress and to consider how many degrees of introspective awareness might be part of the human condition, and to what extent other animals might vary.

Despite the limitations of functionalism in explaining phenomenal consciousness, various forms of functionalism continue to attract numerous followers. Whether we endorse functionalism or not, it is important to see that functionalism itself does not intrinsically deny animal (or machine) consciousness. Further, if we were to adopt Armstrong’s functional theory of consciousness, we would have to assume that a degree of introspective awareness is probably common to all animals, and certainly to all animals that have a sufficiently complex neural system, because of the evolutionary advantages that such a feature bestows.

Property Dualism
Someone who claims to take phenomenal consciousness more seriously is David Chalmers. Like the functionalists, Chalmers divides consciousness into psychological and phenomenological elements, allowing a physical basis for the former but claiming the phenomenal (or “what it is like”) element of a conscious experience to be inexplicable in terms of physical reduction. Functionalism fails, he argues, because any functional account of human cognition must be accompanied by the question, “Why is this kind of functioning accompanied by consciousness?” I would extend his point beyond human cognition.

Indeed, it is the “what is it like?” aspect of consciousness that drives Chalmers to defend consciousness against the eliminativists who would have us believe that conscious experience is merely an illusion. I shall argue that his division of psychological and phenomenological consciousness is unsustainable and show how the failure of this division contributes to the argument for animal consciousness.

Rather than ignoring qualia as the functionalists do, Chalmers argues for a property dualism on the grounds that phenomenological consciousness is not logically supervenient on the physical. In other words, phenomenological consciousness is not reductively explainable in terms of lower-level properties, i.e. neurophysiology. Chalmers bases his argument on the idea that one cannot have phenomenal consciousness without psychological consciousness (awareness), but that one can have psychological consciousness (awareness) without phenomenological consciousness (‘consciousness’ proper).

In framing his argument, Chalmers conceives of a parallel world which is physically identical to this world except that it totally lacks the phenomenal aspect of consciousness. Our human-like twins (called zombies by Chalmers) in the parallel world do not have the sense of “what it is like” that is a significant part of our conscious experience in this world. The basic thrust of Chalmers’ argument, in his own words, is:

1. In our world there are conscious experiences.
2. There is a logically possible world physically identical to ours, in which the positive facts about consciousness in our world do not hold.
3. Therefore, facts about consciousness are further facts about our world, over and above the physical facts.
4. So materialism is false.

The concept of a non-conscious human-like creature is not a difficult one to imagine. However, I would argue, it is impossible to conceive of such a world being identical to ours. Given that Chalmers cites the experience of hearing music as grounds for establishing the reality of the phenomenal aspect of consciousness, it is difficult to see how any non-conscious zombie could revere music in the way that we do in this world. Indeed, it is difficult to see how anything in a zombie world could be developed for aesthetic pleasure if the zombies are never conscious of aesthetic pleasures. Yet if we stretch our imagination sufficiently to follow Chalmers along his path, and assume that the functional attributes of our aesthetic pursuits are sufficient for them to be equally significant in the zombie world, we find we are led into a bottomless pit.

Chalmers sees his zombie twin as behaviourally identical to himself. If this is so, presumably this zombie twin has also just completed a book defending the reality of consciousness against the eliminativists within the zombie world. The only difference, according to Chalmers, is that in the zombie world, the eliminativists are right, while the zombie Chalmers’ sense of consciousness is merely an illusion. Unlike the Chalmers of this world, Chalmers’ zombie twin is deluded in believing he is conscious.

Given that the zombie twin is physically and behaviourally identical to Chalmers, doing and saying everything that the Chalmers of this world says and does, we can assume that in his book on consciousness the zombie twin has also proposed a parallel world with deluded zombies, and we might project the same for them, ad infinitum. It seems to me that, in this argument, Chalmers contradicts his initial evidence for the reality of phenomenal consciousness. If his zombie twin (and each subsequent zombie twin) makes exactly the same claims using the same arguments and yet, by Chalmers’ definition, is wrong, what reason do we have to accept Chalmers’ argument for the reality of the phenomenal content of our consciousness as being any more valid than that of the zombie? By Chalmers’ reasoning, we could merely be zombie twins of a parallel world that really does have consciousness; we do not really know “what it is like” to have consciousness, but in our deluded state we believe that we do. Indeed Chalmers, at one point, acknowledges this possible conclusion but argues that our immediate evidence rules out such a possibility. Unfortunately, his zombie twin makes exactly the same claim.

Chalmers further discredits his own argument when he discusses “plausibility constraints”. He suggests that “people’s reports concerning their experiences by and large accurately reflect the contents of their experiences.” He then concedes that, as a principle, this cannot be proven; however, he argues, “it is antecedently much more plausible than the alternative.” This would seem perfectly reasonable and yet, manifestly, his plausibility constraints are not applicable to his physically identical, parallel world of the zombies.

Clearly, the difficulty with Chalmers’ thesis stems from the division of psychological and phenomenological consciousness. If we allow the premise of zombies in a parallel world, we are accepting that the absence of consciousness makes no appreciable difference, which suggests that consciousness is epiphenomenal and not a causative agent. However, the moment we accept an epiphenomenal account of consciousness, it could again be argued that we are failing to take consciousness seriously. Chalmers tries to insist that his theory only looks like epiphenomenalism, appealing to the mysterious nature of causation and suggesting some form of idealism in which protophenomenal properties are intrinsic to the physical. He concludes, however, “that this metaphysical speculation may need to be taken with a pinch of salt.”

Chalmers’ argument is an interesting one in the context of non-human consciousness. For example, it has been suggested, by those who adopt a Cartesian perspective on animal consciousness, that animals do not feel pain: what look to us like pain reactions are simply physiological reflexes. In other words, animals have psychological consciousness but not phenomenal consciousness. They are like the zombies in Chalmers’ parallel world, who react to pain in a functional sense but do not experience pain as qualia.

We have already seen, however, that the separation of psychological and phenomenal consciousness is not sustainable unless we are prepared to allow that what human beings regard as pain is just as illusory as the pain we observe in those (zombies or animals) who supposedly lack phenomenal consciousness. At an abstract level there may be some who would want to argue that human pain is illusory (Theravada Buddhism comes to mind), but illusory or not, most human beings will concede that pain hurts. Therefore, to whatever degree human pain hurts, the pain of the zombie or the conscious animal must also hurt.

Higher Order Monitoring
Self-consciousness has been variously described as the ability to think about oneself, introspective self-awareness, an ability to ascribe thoughts and experiences to oneself, and an awareness of one’s own awareness. It is what Armstrong referred to as introspection proper and what Rosenthal explains as a third-order thought. It is the degree of consciousness described by higher order monitoring that people are most reluctant to ascribe to non-human animals.

Central to Rosenthal’s theory is the concept that not all mental states are conscious. Further, we can be conscious of mental states (using ‘conscious’ in a transitive sense) and mental states can be conscious (using ‘conscious’ intransitively), but not all states that we are conscious of in the transitive sense are necessarily conscious in an intransitive sense. Here Rosenthal gives the example of anger. We may suffer from repressed anger and therefore our anger is not conscious. However, someone may point out that we are angry, and we may respect and accept that person’s judgement without getting in touch with that repressed anger. In such a case we would be conscious of being angry without our anger being conscious. Rosenthal then goes on to define a conscious state as a mental state which we are transitively conscious of without benefit of inference or observation. In other words, for our anger to be conscious in an intransitive sense we must be directly conscious of that anger (transitively) without inferring it from our circumstances or having an observer tell us that we are angry.

Rosenthal’s theory of consciousness is that, to be conscious, mental states must be accompanied by a higher-order thought (HOT). “We are conscious of something…when we have a thought about it.” Further, the content of that HOT must be that we are in that particular mental state. HOTs give us a transitive consciousness of particular mental states, rendering those mental states conscious in an intransitive sense. It follows, then, that any mental state not accompanied by a HOT is not conscious; we remain unaware of being in that mental state. Rosenthal asks the question, “[I]s language necessary for a creature to have HOTs with the content required by [this] theory?” In answering, he claims that a minimal sense of self is required, such that it would allow the creature to distinguish between itself and another, and denies that rich conceptual resources are necessary for thoughts to refer to a creature’s own mental states. In other words, even the simplest of animals is capable of some degree of consciousness. He modifies this claim, however, by contending that the richer one’s conceptual resources, the greater the range of conscious states one can experience. This allows for some animals to have wider-ranging experiences of consciousness than others. Using wine and music as examples, Rosenthal claims, “conceptual sophistication seems to generate experiences with more finely differentiated sensory qualities.”

In non-introspective consciousness, HOTs are themselves unconscious, but Rosenthal proposes that in an introspective state, the HOT becomes conscious through the accompaniment of a third-order thought. Thus we have an unconscious mental state, which, if accompanied by a HOT, becomes a conscious mental state, and if the HOT is accompanied by a third-order thought, the HOT itself becomes conscious, so we become conscious of our consciousness.

In his final argument, “the argument from reporting and expressing”, Rosenthal limits his discussion to “creatures with the relevant linguistic ability” and claims that a mental state is only conscious if one can report being in that state. Many people would want to broaden Rosenthal’s statement to a claim that only those creatures who can report being conscious are actually conscious. In other words, animals without language are not capable of consciousness, and by ‘language’ they invariably mean human language. We will be taking this discussion further in the following chapter.

Interestingly, however, Rosenthal is careful not to make such an over-generalisation. His claim is a much narrower one: if a creature has sufficient language capabilities but cannot express consciousness with those language capabilities, it is not conscious. Hence, at an accident site we might test a human’s consciousness by asking them to respond to a question. How an animal without sufficient language skills might report its consciousness remains an area of contention, but Rosenthal clearly allows room for the possibility that non-linguistic creatures have HOTs and their accompanying third-order thoughts.

There are a number of problems associated with Rosenthal’s theory of consciousness. If a mental state is not conscious, it is not at all clear how a further mental state about that mental state can make it conscious. Higher order reporting would not seem to be enough. But if we are to assume that higher order reporting on internal states is sufficient to produce consciousness, then any machines capable of reporting on their internal states could be deemed conscious. There is also the question of awareness of our introspection. Not only are we aware that we are aware, but we can be aware that we are aware that we are aware. Does this mean a fourth-order thought is required? Alvin Goldman argues that the theory generates an infinite regress.

Indeed, there is an uncomfortable circularity in the concept of ‘awareness of one’s own awareness’. If I am aware of pain, I might say “I can feel pain,” but what does it mean to say ‘I know I can feel pain’? Wittgenstein would say it is meaningless. Nevertheless, for our purposes it is important to see that the theory of higher order thoughts as a means of explaining consciousness does not, in itself, preclude non-human consciousness.

The Matter of Mind

So far we have looked at a few of the major philosophical theories of mind which are primarily aimed at explaining consciousness in the human species. In these theories true consciousness is frequently regarded as the phenomenological and/or introspective aspects of the conscious mind. John Searle, however, takes a very different stance and insists that introspection has “nothing to do with the essential features of consciousness.” For him, consciousness is a biological feature of brain tissue much like digestion is a biological feature of the digestive tract. If this is so, then we must assume that all animals with sufficiently complex brains experience consciousness, and, indeed, Searle acknowledges this. Not all people would agree with Searle. A popular approach taken by those who wish to posit an ontological difference between humankind and other animals is to take a biological look at the brain, which is held to be the organ of consciousness, for evidence of human superiority.

Intelligence and consciousness are not the same thing. Although difficult to define, intelligence does not have the same elusive quality that consciousness has. Whereas we can only truly observe our own consciousness, it is possible to perceive, and to a limited extent measure, intelligence in others. Nevertheless, consciousness and intelligence are both considered to be functions of the brain, and the two are very closely linked. Further on I shall discuss their relationship more closely, but for the moment I will turn my attention to the physical organ of consciousness and intelligence – the brain.

In years gone by, all sorts of claims have been made for the superiority of the human brain over the brains of non-human animals, and indeed for the superiority of some human brains (notably those belonging to white males) over other human brains (those belonging to females and/or non-whites). However, as we shall see in this chapter, there is no significant physiological difference between the human brain and that of many other mammals. Whatever aspect of the physical brain we consider, there is no reason to believe that the human brain, on its own, is inherently superior to that of many other animals.

Human beings are unique in many ways, and human intelligence has certainly advantaged our species. But I wish to argue that whilst human intelligence has made the human being one of the most adaptable, and therefore one of the most successful, of the mammals (second perhaps only to the rat), it gives us no grounds for claiming an ontological difference from other species. Our advantage as a species comes not from our biological features per se, because in many respects human beings are remarkably unimpressive in comparison with other species; rather, our advantage comes from our collective experience.

Brain Size

One of the ideas that has been touted in years gone by is a link between brain size and intelligence. The idea that ‘bigger means better’ tends to be all-pervasive. Generally speaking, it was thought that larger brains meant greater intelligence, but it was quickly noted that elephants have larger brains than human beings. However, the brain weight to body weight ratio is higher for human beings than for elephants, so the theory soon became one of brain size in proportion to body weight (i.e. the greater the brain weight to body weight ratio, the more intelligent the animal). Of all animals, human beings are said to have the largest brain weight to body weight ratio, and many people consider this to explain our supposedly greater intelligence and superior state of consciousness.
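To see how the ratio reverses the naive ordering, the brief sketch below compares the two cases using rough, commonly cited figures (around 1.4 kg of brain in a 65 kg human and around 5 kg of brain in a 4,500 kg elephant); the numbers are approximations chosen for illustration only and are not drawn from the sources discussed in this chapter.

# Rough illustration of the brain weight to body weight comparison discussed above.
# The figures are approximate, commonly cited values chosen for illustration only.
animals = {
    "human":    {"brain_kg": 1.4, "body_kg": 65.0},
    "elephant": {"brain_kg": 5.0, "body_kg": 4500.0},
}

for name, a in animals.items():
    ratio = a["brain_kg"] / a["body_kg"]
    print(f"{name:8s} brain {a['brain_kg']:.1f} kg, brain-to-body ratio roughly 1:{1 / ratio:.0f}")

# The elephant's brain is larger in absolute terms, but the human ratio
# (about 1:46) is far higher than the elephant's (about 1:900).

On these figures the elephant wins on absolute brain size but loses decisively on the ratio, which is precisely the move that kept the ‘bigger means better’ intuition alive.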

On the basis of this theory, some scientists went a step further and proposed that the same rule applies within a species. It was accordingly postulated that, in the human species, women and blacks have lower brain weight to body weight ratios than white males. In 1879, the social psychologist Gustave Le Bon wrote:

"In the most intelligent races, as among the Parisians, there are a large number of women whose brains are closer in size to those of gorillas than to the most developed male brains. This inferiority is so obvious that no one can contest it for a moment; only its degree is worth discussion. All psychologists who have studied the intelligence of women, as well as poets and novelists, recognise today that they represent the most inferior forms of human evolution and that they are closer to children and savages than to an adult, civilised man. They excel in fickleness, inconsistency, absence of thought and logic, and incapacity to reason. Without doubt there exist some distinguished women, very superior to the average man, but they are as exceptional as the birth of any monstrosity, as, for example, of a gorilla with two heads; consequently we may neglect them entirely."

Basing his study on relative brain size, Le Bon equates women’s brains to those of apes and regards both with disdain. But while it was initially thought that small brains caused low intellect, neurologist Pierre Broca (1861) acknowledged that the causal relationship might work in reverse. Thus he suggested,

"But we must not forget that women are, on the average, a little less intelligent than men, a difference which we should not exaggerate but which is, nevertheless, real. We are therefore permitted to suppose that the relatively small size of the female brain depends in part upon her physical inferiority and in part upon her intellectual inferiority."

It would thus appear Broca was willing to concede that, within a species, smaller brain size might bear some relationship to overall physical structure, and he hypothesised that low intellect might also be responsible for a reduced brain weight. The idea of a larger brain weight to body weight ratio eventually lost favour, however, when the brain weights of some eminent men (measured after their deaths) proved to be “embarrassingly small.” Nevertheless, we have not altogether escaped the idea that “bigger is better”.

Whilst Le Bon recognised intelligent women, he dismissed them as aberrations of nature; others saw intelligence attributed to anyone other than white, adult males as delusional. For example, David Hume believed that humankind comprised four or five different species, the non-whites all “naturally inferior” to the whites. On hearing of an intelligent black man, Hume writes:

"In Jamaica indeed they talk of one negroe (sic) as a man of parts and learning; but 'tis likely he is admired for very slender accomplishments like a parrot who speaks a few words plainly."

A child of his time, not willing to let the theory of white [male] supremacy be challenged, Hume found it easier to dismiss the evidence.

While views such as those of Le Bon and Hume would be regarded as scurrilous, unsubstantiated and politically incorrect if espoused in contemporary society, it is true to say they were widely held in the nineteenth century. A wealth of scientific literature at that time drew comparisons between Negroid cranial structures and those of apes in an attempt to demonstrate that blacks are biologically closer to apes than they are to European whites.

Brain Structure

Although there is now less emphasis on brain weight to body weight ratios, study of the brain continues to be at the centre of the scientific quest for the essence of humanity. Thus Sherwin Nuland claims,

"much…of our brain’s structure and function is unique to our species, and is the source of our humanness. The brain is therefore the ultimate key to that innermost sanctum of understanding, wherein we may find the secret of the human spirit."

In other words, as Nuland goes on to say, “Without the distinctive qualities of his brain, Homo sapiens would be merely another of [the] higher animals.”

One of the supposedly distinctive features later claimed to differentiate humans from other animals was the asymmetry of the brain. Lateralisation of the brain was, for a long time, considered unique to human brains, and this lateralisation was associated with such features as the use of tools, language and consciousness. [Non-human] animal brains, it was suggested, are basically symmetrical whereas human brains are non-symmetrical.

Unfortunately for its proponents, this theory is not supported by the evidence. Quoting Hellige’s research as recorded in “Behavioural and Brain Asymmetries in Nonhuman Species,” James Ashbrook and Carol Rausch Albright write,

"Accumulating evidence since the early 1980s “has made it clear that behavioural and biological asymmetries are ubiquitous in nonhumans.” Asymmetries have been observed in motor performance, the production and perception of vocalizations, visuospatial processes, and motivation and emotion. Some of these “bear a striking resemblance to asymmetries seen in humans,” though the search for such asymmetries “has been guided by what is already known about humans.”"

Ashbrook and Albright go on to cite further studies which conclude that “human beings are at most quantitatively, not qualitatively, different from other vertebrate genera in nearly all behaviours, except just possibly art and aesthetics; nor, indeed, are they unique in their lateral asymmetries”.

More recently, attention has turned to the neocortex. The neocortex (or isocortex) comprises 90% of the human cerebral cortex, and is a significant feature in all mammalian brains. The neocortex is the area of the brain that receives general sensory data plus vision, hearing and taste sensations. In addition, it has some motor areas but perhaps its most important function is provided by its complex intracortical circuits. It is not surprising then that the idea that ‘bigger means better’ re-emerges in the context of the neocortex.

The anatomist Murray Barr regards the neocortex as “the principal contributor to the intellectual capabilities of man,” and goes on to propose,

"Human society and culture, together with the complexity and individuality of behaviour, are largely the consequences of man’s possessing a significantly larger volume of neocortex than any other species."

Histologically, the neocortex is similar in all mammals, but as brain size increases, the relative size of the neocortex expands. Because of the convolutions in its structure, expansion is in surface area rather than thickness. Human beings have the largest neocortex relative to brain size, and currently it is widely assumed that the evolutionary expansion of the neocortex is somehow responsible for the development of intelligence and consciousness.

Yet evidence is mounting that, whilst the neocortex may be required for intelligence and consciousness in mammals, it cannot be the whole answer. Birds have cognitive abilities similar (and in some instances superior) to those of the primates, yet they do not have a neocortex. Thus the behavioural scientist Theodore Barber talks of the “small-cortex fallacy.” According to the small-cortex fallacy, the cerebral cortex is the seat of intelligence, and therefore, because birds have very little cerebral cortex, they are regarded by many as having very little intelligence.

“The fallacy in this logic is that the cerebral cortex is the seat of some of the human specialised intelligences, but not necessarily other kinds of intelligence. Humans have developed the cerebral cortex to carry out such specializations as hand-tool manipulation and symbolizing. Birds have developed a different part of the brain, the hypostriatum, to carry out their specialities, such as navigating without instruments. The hypostriatum is just as hypertrophied in birds as the cerebral cortex is in humans.”

Recent studies suggest that brain expansion occurs in different regions of the brain in conjunction with specific specialised behaviours. For example, birds and mammals that store food in times of abundance in readiness for times of scarcity have enlarged hippocampal regions of their brains.

The hippocampal formation is an area of the brain that helps form the limbic system, which, among other functions, is associated with memory. Just as the hippocampus is enlarged in animals that store food, animals that use their hands show a corresponding hypertrophy of the primary somatosensory area of the neocortex. For each specialised activity an animal performs, there appear to be corresponding expansions within specific regions of the brain.

The avian brain is particularly interesting. Although many birds are highly intelligent animals, evolutionary ‘scales’ frequently place them low on the ‘ladder’, just above reptiles. Such scales are most commonly based on brain-weight to body-weight ratios, which are very low in birds. It is this ‘evolutionary positioning’ which promotes the use of the term “bird-brain” in a derogatory sense. However, whilst it is true that birds have very low brain-to-body weight ratios, experiments have shown they can perform problem-solving tasks as well as primates. Indeed, studies have shown that pigeons are even able to out-perform humans in the mental rotation of symbols – an exercise frequently seen on human IQ tests.

Yet the avian brain is quite different to the mammalian brain. (This is not surprising, because a heavy brain like that found in mammals would not be conducive to flight.) What is particularly remarkable about avian brains is that, unlike mammalian brains, they can produce additional neurons to increase the size of brain nuclei when necessary. For example, in spring some songbirds develop more neurons in the nuclei responsible for song production. It would seem there is some correlation between the size of a particular nucleus (e.g. the Higher Vocal Centre) and an individual bird’s repertoire, thus a seasonal increase in the size of the relevant nuclei enables a bird to produce a variety of songs in its breeding season without having to carry the cumbersome weight of the extra neurons throughout the year.

Brain Capacity

Although there is no clear correlation between brain size and intelligence, brain size continues to generate enormous interest. Barbara Finlay and Richard Darlington hypothesise that acquiring one special ability not only increases capacity in the corresponding region of the brain but, through the multiple connections that arise, enhances capacity in other regions too. It is thereby suggested that specific human behaviours such as walking upright and using the hands to make tools led to an exponential growth in the human neocortex, giving us increased capacity for consciousness. In a similar vein, John Eccles talks of “the dominant role of language and the associated imagery and conceptual abilities in bringing about the extraordinarily rapid expansion of the neocortex.”

If this hypothesis is correct then it must be reasonable to entertain the idea that non-human animals with special abilities have also acquired increased brain capacities, including the capacity for consciousness and intelligent thought. Human beings do not have a monopoly on special abilities.

Indeed, studies have shown that it is not just special abilities that increase brain capacity. Within a species, individuals provided with richer environments (i.e. environments in which they receive mental stimulation through exploration and interaction with other animals) develop a thicker cerebral cortex than individuals who are raised in isolation with limited stimulation. There appears to be a positive feedback mechanism in place. Our ability to appreciate the richness of our surroundings may be attributable to the complexity of our brains, but, at least in part, the complexity of our brains is attributable to the richness of our surroundings. The more an animal does, the more it is able to do, and the greater its potential capacity to develop a conscious appreciation of its activities.

Of course, biological restrictions apply. No matter how rich its environment, a pigeon cannot learn to sing like a canary any more than we humans can learn to track with our noses. But within a sufficiently rich environment, a pigeon can develop its remarkable navigational skills to even greater levels, just as we human beings can develop our language skills within a sufficiently rich environment.

At a cellular level, the nervous systems of all multicellular animals, from simple invertebrates with 10,000 to 100,000 nerve cells through to the most complex of mammals with many billions of neurons, are strikingly similar in their cellular organisation and biochemistry. Eric Kandel claims there are “no fundamental differences in the structure, chemistry or function between the neurons and synapses” of human beings and those of a squid, a snail or a leech. Where the different species do differ neurologically is in the patterns of neuronal interconnectedness, and it is within these patterns of interconnectedness that most neurobiologists expect to find a predisposition to consciousness.

If, as suggested, a predisposition to consciousness is to be found in the patterns of interconnectedness it is difficult to see that consciousness could be the preserve of humans only. Even if one argues that different degrees of interconnectedness result in different degrees of consciousness, this is still suggestive of some level of consciousness in the simplest of animals with neurological systems.

But patterns of interconnectedness are not the only contender for a theory of consciousness at a biological level. Maxine Sheets-Johnstone contends that consciousness originates, not in the patterns of interconnectedness, but in the nature of animation. She argues that the quest of the philosophers to understand how consciousness arises in matter is spurious. “Consciousness does not arise in matter; it arises in organic forms, forms that are animate.”

Any animal, no matter how simple or how basic its normal behaviour, “is necessarily sensitive in a proprioceptive sense to the present moment; it begins crawling, undulating, flying, stepping, elongating, contracting, or whatever, in the context of a present circumstance. It is kinetically spontaneous. Elucidation of this further truth about the nature of animate form will show in the most concrete way how animate form is the generative source of consciousness — and how consciousness cannot reasonably be claimed to be the privileged faculty of humans.”

Intelligence and Consciousness

The definitions of intelligence are many and varied, but intelligence most commonly denotes an ability to solve new problems. Generally we regard as intelligent someone who learns quickly and is good at problem-solving and lateral thinking; we might also regard a particularly skilful use of language, or a creative spark, as a sign of intelligence. If we say someone plays an intelligent innings in cricket, we mean that the batsman has played thoughtfully, picking the gaps in the field, calling well, and planning the runs to keep the in-form batsman on strike. These forms of intelligence involve conscious performances that demonstrate strategy and creativity.

But not all philosophers would see a necessary link between consciousness and intelligence. Richard Gregory tries to avoid a definition of intelligence that involves consciousness (because any such definition would automatically exclude the intelligence of machines), and therefore defines intelligence as the “creation and understanding of successful novelty.” He goes on to postulate that intelligence is, to a large measure, made up of knowledge, which he calls Potential Intelligence. In addition to this Potential Intelligence, we have what he calls Kinetic Intelligence: the intelligence required to make the creative leap necessary to bridge the knowledge gap when addressing a new problem.

It is difficult to see, however, how the mind might access Potential Intelligence, or indeed, recognise the knowledge gap without some form of consciousness. To ‘understand successful novelty’ suggests some awareness of that novelty. Whilst a machine might be able to access information without consciousness, one has to question whether it could ‘create and understand novelty’. Ultimately the question of intelligence comes down to definition.

In the study of artificial intelligence a favourite example of a machine showing intelligent behaviour is the chess-playing computer, Deep Blue. In 1997 Deep Blue found its way into the history books when it beat world chess champion Garry Kasparov in a match by two games to one, with three drawn. We might say that Deep Blue shows intelligent behaviour in playing its game, but no one suggests that the computer is conscious of its game or enjoying its success. It seems intelligent, but is that to say that it is intelligent?

The dual aspect theory of mind alluded to in our critique of functionalism suggests that there are two ways to address the question. From the outside looking in (FOLI), we would have to say that the chess-playing behaviour of Deep Blue gives all the signs of being intelligent. The second aspect of the issue is from the inside looking out (FILO): is Deep Blue aware of its behaviour as being intelligent? Although we can never be certain, the probability is no. So we might say that Deep Blue is intelligent from the outside looking in, but not from the inside looking out.

The difficulty is that if we require an affirmative response to the FILO aspect of our question before we deem Deep Blue intelligent we are, by definition, inextricably linking intelligence with consciousness and setting up a tautology.

Nevertheless, I would argue that while it would seem possible for a machine to learn without consciousness, to show genuine intelligence by solving a new problem, hitherto unknown to it, requires that it make a creative leap, and genuine creativity requires the insight associated with consciousness. In the case of Deep Blue, I believe the creative leap and the genuine creativity came from the conscious efforts of its creators and operators; we can only credit Deep Blue (or its successors) with intelligence and creativity if it, of its own volition, moves away from number- and calculation-based processes to tackle new problems in different, perhaps more intuitive, ways.
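To make clear what is meant here by ‘number- and calculation-based processes’, the following minimal sketch shows, in outline, the kind of exhaustive game-tree search on which chess computers rely. It is an illustrative toy only, not Deep Blue’s actual program (which ran on specialised hardware with a far more sophisticated search); the helper functions evaluate, legal_moves and apply_move are hypothetical stand-ins supplied by the caller.

# A minimal, illustrative sketch of evaluation-driven game-tree search.
# Not Deep Blue's actual code; the helpers passed in are hypothetical.

def minimax(position, depth, maximising, evaluate, legal_moves, apply_move):
    """Return the best achievable numerical score from `position`.

    `evaluate` reduces a position to a number; `legal_moves` lists the
    available moves; `apply_move` returns the resulting position.
    Nothing here involves insight or awareness, only systematic
    calculation over possible continuations.
    """
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)  # reduce the position to a single number
    scores = (
        minimax(apply_move(position, m), depth - 1,
                not maximising, evaluate, legal_moves, apply_move)
        for m in moves
    )
    return max(scores) if maximising else min(scores)

The point of the sketch is that every ‘decision’ reduces to comparing numbers; nothing in the procedure requires any awareness of what those numbers mean.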

The analogy with animals is an important one. Animal intelligence is generally not disputed (although it is all too frequently underestimated). But I contend that, unlike Deep Blue, non-human animals are intelligent from both the FOLI aspect and the FILO aspect. That is to say that as well as displaying observably intelligent behaviour, animals are conscious and are at least potentially aware of their intelligent behaviour.

Learning and Memory

As well as being linked to each other, consciousness and intelligence would seem inextricably linked to memory and learning. Intelligence needs knowledge, and knowledge requires the facilities of learning and memory. Significant headway has been made in understanding the processes of learning and memory in several fields, from sociology and psychology through to biology and biochemistry. At the cellular and biochemical level, Eric Kandel has led research into memory and learning through his experiments with the sea mollusc, Aplysia.

The Aplysia is a simple sea slug that eats seaweed on the seabed, close to shore. The largest of the sea slugs can grow to 30 centimetres and weigh over two kilograms, yet their central nervous systems have only 20,000 neurons. The advantage of the Aplysia for neurobiologists is the relatively large size of its neurons combined with the simplicity of its nervous system. This enables researchers to identify individual neural cells which are found to be replicated in each Aplysia. By tracing the connection of these neurons, it has been possible to work out the “wiring” of various behavioural circuits, allowing a study to be made of the causal relations of particular neurons to certain behaviours.

Whilst the behaviour of the Aplysia is mediated by genetically determined, invariant cells interconnecting in precise and invariant ways, Kandel soon discovered that the behaviour of the mollusc is by no means fixed. Even this simple invertebrate is able to modify its behaviour through learning, this learning being effected through the processes of habituation and sensitisation, which act by regulating the amount of neurotransmitter released from the presynaptic terminals of its sensory neurons.
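As a rough illustration of what such regulation amounts to, the following toy sketch (my own simplification, not Kandel’s model) treats the strength of transmitter release at a presynaptic terminal as a single number that is weakened by repeated harmless stimulation (habituation) and boosted by a noxious stimulus (sensitisation). All names and values are illustrative assumptions.

# A toy numerical sketch of habituation and sensitisation as changes in
# the amount of transmitter released at a sensory neuron's presynaptic
# terminal. Illustrative only; the parameters are arbitrary assumptions.

release_strength = 1.0  # arbitrary starting level of transmitter release

def habituate(strength, decay=0.7):
    """Repeated, harmless stimulation weakens release (habituation)."""
    return strength * decay

def sensitise(strength, boost=1.5, ceiling=3.0):
    """A noxious stimulus enhances release (sensitisation), up to a ceiling."""
    return min(strength * boost, ceiling)

# Ten gentle touches: the reflex response fades.
for _ in range(10):
    release_strength = habituate(release_strength)
print(f"after habituation: {release_strength:.2f}")

# One noxious shock: responses to subsequent touches are amplified.
release_strength = sensitise(release_strength)
print(f"after sensitisation: {release_strength:.2f}")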

Experiments like those of Kandel have shown that learning is not the exclusive domain of complex mammals. Even the octopus, a member of the mollusc family, can learn to ‘read’ symbols. And bees, with only 950,000 neurons, can learn to recognise colours, textures and smells. Further, “when set appropriate tasks, [bees] can show most of the features of conditioning, associative and non-associative learning and relatively long-lasting memory found by mammalian psychologists in organisms with many-fold larger brains.” It would appear that even invertebrates are not the simple automata we are sometimes led to believe.

Learning implies that memories are being created and recalled, and experiments suggest that learning is achieved through biochemical processes. Research has shown that specific cellular processes are necessary for memory formation and, further, that where such processes are inhibited, learning cannot occur. However, we cannot know whether these processes are sufficient. What is clear is that even some of the ‘lowest’ animals can learn.

While learning mechanisms can be traced in simple creatures, there is little understanding of the processes involved in memory recall. The brain is much more than an information-processing machine, and while learning can be traced to specific regions of the brain, memories cannot. Rather, memories appear to be represented by “fluctuating dynamic patterns of electrical activity across the entire brain region.” Indeed, studies on the human brain have shown that while vivid memories may be elicited by stimulation of particular brain sites, re-stimulation of the same kind, at the same site, may elicit quite different memories.

Clearly, understanding the biochemical changes at the cellular level of the brain is not sufficient to fully explain our capacity to remember, and some would argue that no matter how complete our understanding of neurophysiology, we will never be able to fully explain memory at a biological level. It has been claimed that whatever biochemical processes are shared by the neurological systems of different animal species, these processes say nothing about the meaning of the memories that are stored; that meaning can only be understood through broader [higher level] disciplines such as psychology, sociology, etc. Others counter that the meaning or content of a memory cannot be divorced from the processes that embody it. Indeed, Stephen Rose claims arguments that seek to separate content from process are promoting the “modern version of the Cartesian split” by dividing the biological from the personal. However, it would be folly to ignore objections to the biological reductionism that often tends to accompany neurological and biochemical explanations of memory, and even Rose is keen to avoid “crassly reductionist biology”.

We have seen that even simple animals learn, and more complex animals have a similar brain structure to that of human beings. And although humankind is undoubtedly one of the most intelligent of animal species, we are on shaky ground if we claim absolute superiority on the grounds of intelligence, because we design intelligence tests to privilege our own species. A more objective assessment, if it were possible, would see that while we may outperform other species in many mental activities, other species can (and do) outperform us in others.

Conclusion

Studies in the learning of simple animals have given great insights into the neurological pathways of more complex creatures, including the human species. Those who posit an ontological difference between the human mind and non-human minds will sometimes argue that one cannot extrapolate the findings of non-human experimentation and apply them to the human brain, but at a cellular level, neurons from the human nervous system are virtually indistinguishable from those of other vertebrates and operate through the same biochemical mechanisms. The increased complexity of the human brain would seem to come from its multiplicity of interneural connections rather than from any significant histological or physiological differences. Given that learning and intelligence appear to be associated with consciousness, and that at a fundamental physiological level the human nervous system is essentially the same as that of most other living creatures, it seems churlish to deny at least the possibility of consciousness in all animals.

Consciousness is something that needs to be taken seriously, in non-human animals as well as in the human species. Although we have addressed only a few of the many theories of consciousness that are part of the ongoing struggle to understand cognition, it has been sufficient to show that, at least amongst some of the predominant theories, there is no inherent reason why non-human animals cannot be conscious.

Whether we choose to see human beings as created or evolved, we function according to the same natural laws as all other creatures on our planet. Our biological nature is like that of many other animal species, and we are subject to the same laws of physiology and biochemistry. We experience similar drives, instincts and emotions. This being the case, our claims to an ontological difference and/or an inherent superiority to other animals simply reflect an inherent prejudice for our own species.

It is natural and perhaps justifiable that we privilege our own species, just as within our species we privilege our own family or clan. What is not justifiable is the tendency to claim exclusivity for the biological attributes that we value most. We must not forget the lessons of the past when men privileged their own gender and race. If, only last century, some of the greatest western thinkers were so quick to dismiss intelligence in blacks, women and children, it is not altogether surprising that many still dismiss intelligence observed in non-human animal species.

I do not claim that consciousness is the same for all animal species. Levels of consciousness and intellect may vary widely across different species and within each species. What I do insist is that we must not dismiss the consciousness or intellect of non-human animals as a means of setting the human race apart from (and above) the natural world. To do so is a pretension of holiness.
