It is almost universally accepted that at least some animals have mental states, which allow them to take an interest in certain things (e.g. food, sex or the avoidance of pain). So far, I have made no assumptions about the mental capacities of living things, and have attempted to ground their interests in the mere fact of their being alive. I will begin this discussion of different levels of cognition in organisms by addressing the question of whether all or most living things possess rudimentary mental states, which could enable them to take an interest in something, as opposed to merely having an interest in it.
I should acknowledge at the outset that the quest for "mental states" comes with some philosophical baggage. As my investigation eschews pre-conceived notions of what "the mind" is, I shall simply set forth these views, and refrain from adjudicating between them until our investigation is complete.
Our modern terminology of mental states owes much to Descartes, who distinguished between activities or states requiring our attention and processes which can be performed absent-mindedly or while asleep. Descartes characterised processes of the former kind as "cogitative", or relating to thought. Descartes' conception of "thought" was meant to encompass all mental states, as he explained in the Principles of Philosophy (1644): "By thought I understand all that of which we are conscious as operating in us. And that is why not alone understanding, willing, imagining, but also feeling, are here the same thing as thought" (Haldane and Ross, 1970, I.222). Elsewhere, he wrote that "there are ... characteristics which we call mental [literally cogitative, or relating to thought] such as understanding, willing, imagining, sensing, and so on. All these are united by having in common the essential principle of thought, or perception, or consciousness [conscientia, a state of being aware]" (Descartes' Reply to Hobbes' Second Objection, translation and footnotes by Ross, 1975-1979). By contrast, processes which can be performed absent-mindedly or while asleep were excluded from the sphere of mental states, and were deemed to be "automatic".
This way of carving up the activities of organisms would have seemed highly unusual to Aristotle. Indeed, there was no term in his lexicon for what we would call "mental states". The term psuche (soul) will not do, as plants, which are said to lack perception, have a psuche because they are capable of being nourished (De Anima 2.4, 415a24-25, 415b27-28). Animals are characterised by virtue of their faculty of perception (aisthesis) (De Sensu 1, 436b10-12), but non-human animals are said to lack reason (logos) (De Anima 3.3, 428a4; Eudemian Ethics 2.8, 1224a27; Politics 7.13, 1332b5; Nicomachean Ethics 1.7, 1098a3-4), reasoning (logismos) (De Anima 3.10, 433a12), thought (dianoia) (Parts of Animals 1.1, 641b7), belief (doxa) (De Anima 3.3, 428a19-24; On Memory 450a16) and intellect (nous - also translated as "mind") (De Anima 1.2, 404b4-6; all references cited in Sorabji, 1993, p. 14). Aristotle described nous (translated as "mind", but also rendered as "intellect" or "reason") as "the part of the soul by which it knows and understands" (De Anima 3.4, 429a9-10; cf. 3.3, 428a5; 3.9, 432b26; 3.12, 434b3). "[J]ust as the having of sensory faculties is essential to being an animal, so the having of a mind is essential to being a human" (Shields, 2003; see also Metaphysics 1.1, 980a21; De Anima 2.3, 414b18; 3.3, 429a6-8). Aristotle does not seem to have regarded perception and thought as even belonging to a common category (e.g. "knowledge", "cognition", "awareness" or "consciousness"). On the contrary, he sharply distinguished knowledge or cognition (gnosis) from perception (De Anima 3.8, 431b24), and apart from his discussion (De Anima 3.2) of how it is that we can perceive that we are seeing or hearing, seems to have said very little about what we would call "consciousness". The only term which Aristotle does apply to both perception and thought is krinein (De Anima 3.9, 432a16), which according to Ebert (1983) is best translated as "discrimination", or a discerning activity.
According to the Cartesian schema, then, there is a fundamental divide between beings that have minds and those lacking them, whereas on Aristotle's view, there are three basic categories of organisms: those that are capable of being nourished, those that can discriminate between objects in their surroundings, and those that can know and understand. The modern conception of mental events is somewhat broader than Descartes': it is now acceptable to speak of unconscious as well as conscious mental processes. Some philosophers (e.g. Searle, 1999, p. 88) differentiate between nonconscious and subconscious brain states, recognising only the latter as mental, because they are at least potentially conscious. Others (e.g. Lakoff and Johnson, 1999, p. 10) insist that "most of our thought is unconscious, not in the Freudian sense of being repressed, but in the sense that it operates beneath the level of cognitive awareness, inaccessible to consciousness and operating too quickly to be focussed on".
There is also a considerable diversity of opinion about the existence and location of a boundary between entities that have minds and those that do not. One school of thought rejects the idea of a sharp boundary between organisms that have mental states and those that lack them. According to this school, all living things display some degree of flexibility in response to their environment, and there is a continuum of adaptability, from the humblest microbes to the most advanced animals. Where one chooses to draw the line is quite arbitrary; there is no fundamental cognitive distinction between conscious animals and other organisms. The most extreme version of this idea is panexperientialism, the view that all individuals (including simple individuals such as electrons, and compound individuals such as atoms or cells, but excluding mere aggregates such as rocks, tables and desktop PCs) act and feel as a unit, to some degree. As a contemporary exponent, Charles Birch, puts it:
Where then, is a line to be drawn between the sentient and the non-sentient? Descartes drew a line between the human soul and the rest of nature. But drawing a line anywhere is quite arbitrary, be it between humans and other creatures, between fish and frogs or between a cell and a virus. It is more logical to argue that no line exists... (2001, p. 6).
Writing from a naturalistic property dualist perspective, David Chalmers acknowledges that "a conscious experience is a realization of an information state" (1996, p. 292), leading him to espouse a form of panpsychism, which attributes experiences to any system that realises information (including, famously, a thermostat).
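Chalmers' thermostat example can be made concrete. The sketch below is a toy illustration in Python; the state names, setpoint and tolerance are my own assumptions, not Chalmers' specification. It shows a thermostat as a system whose behaviour realises just three information states, the sort of minimal state space to which Chalmers is prepared to attribute a correspondingly minimal experience.

```python
# A minimal sketch of the thermostat as an information-realising system.
# The state names and thresholds are illustrative assumptions only.

def thermostat_state(temperature: float, setpoint: float = 20.0,
                     tolerance: float = 1.0) -> str:
    """Map a temperature reading onto one of three information states."""
    if temperature < setpoint - tolerance:
        return "too_cold"      # state realised -> switch heating on
    if temperature > setpoint + tolerance:
        return "too_hot"       # state realised -> switch cooling on
    return "comfortable"       # state realised -> do nothing

# Every possible reading collapses into one of only three states; on an
# information-based view such as Chalmers', it is this very coarse state
# space that would carry a correspondingly simple experience.
for reading in (15.0, 20.5, 24.0):
    print(reading, "->", thermostat_state(reading))
```

The point of the sketch is how little is required to "realise an information state": three discriminable states suffice, which is why the thermostat has become the test case for how far down the scale of complexity Chalmers' attribution of experience must extend.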
At the other extreme, some philosophers argue for a clear-cut divide between animals that possess consciousness and other animals and organisms that lack it. Nicholas Humphrey is a strong proponent of this view:
One thing of which we can be sure is that whenever and wherever in the animal kingdom consciousness has in fact emerged, it will not have been a gradual process... [T]here must have been a threshold where consciousness quite suddenly emerged... (1993, pp. 195-196).
A common reaction among scientists to the philosophical debate over cognition is to shun the terminology that generates it. Many scientists reject any division of animal behaviour into cognitive and non-cognitive, or conscious and unconscious, as a methodological blind alley, preferring to resort to other concepts to explain animal behaviour. The following remarks by a scientist who has published papers on associative learning in fruit flies and snails exemplify this attitude:
You may have noticed that I try to avoid the use of the word 'cognitive'. For my purpose, the distinction into cognitive and non-cognitive has no heuristic value... I personally keep a tally of tasks (in my head) of what different animals have or haven't shown to be able to successfully complete. Eventually, I want to find out how the brain solves these tasks. The question of what parts of the brain are contributing how will be answered then and the question how 'cognitive' the involved processes are, will be redundant... The more evolved and complex the brain is, the more computing power it has, not surprisingly. This is a heuristically much more valuable concept and hypothesis, than to classify certain brain functions as 'cognitive' or not. In my construction of the world, I see no use of the word 'cognitive' (yet?) (Bjorn Brembs, personal e-mail communication, 22 December 2002).
One way to resolve the philosophical conundrums relating to mental states would be to propose a definition of "mental state", argue its merits and use it to distinguish genuine from spurious candidates for having mental states. This a priori approach will not be employed here, as its limitations have been exposed in the Introduction. Instead, a constructive, empirically based approach will be employed: the capacities of different kinds of organisms will be examined, starting from the simplest and most widely shared abilities, in an attempt to identify those capacities that may be relevant to the possession of "mental states", whatever they turn out to be. A philosophical "winnowing process" will be applied at each stage in our investigation, in order to determine if a capacity really indicates the existence of bona fide mental states in organisms possessing it. If a reported phenomenon fails to pass muster as "mental", our search will focus on more promising but less general cases, until a suitable candidate is found. My proposed definition of "mind" and "mental states" will thus emerge in the course of this chapter.
Very well then; but how are we to decide what counts as a mental state? Rules of evidence were addressed in the Introduction, where I decided to err on the side of caution and not give credence to experiments and scientific observations that have not been replicated by other researchers. For instance, Abramson, Garrido, Lawson, Browne and Thomas (2002) discuss research by Cleve Backster in the 1960s, which purported to show that plants could read people's thoughts and feelings, and dryly note:
While studies on the emotional and telepathic capacities of plants were greeted with great interest, attempts to replicate these studies have not been successful (2002, p. 177).
Another rule of evidence which I adopted was to reject studies whose follow-up has produced conflicting results. Abramson, Garrido, Lawson, Browne and Thomas (2002, p. 175) report that this is precisely what has happened with classical conditioning studies conducted on Mimosa, a small shrub whose leaves are sensitive to stimulation. While there is a large body of evidence suggesting that Mimosa plants are capable of habituation (the simplest form of learning), it would be premature to conclude that they are capable of being conditioned.
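Since habituation will recur throughout this investigation, it is worth seeing how simple a phenomenon it is. The toy model below is purely illustrative: the decay rate is an arbitrary assumption, not a parameter fitted to the Mimosa studies, and spontaneous recovery after a rest period is omitted for brevity.

```python
# A toy model of habituation, the simplest form of learning: the
# response to a repeated, inconsequential stimulus weakens with each
# presentation. The decay rate is an arbitrary illustrative assumption.

def habituate(n_trials: int, decay: float = 0.7) -> list:
    """Return response strengths across repeated identical stimulations."""
    response = 1.0
    history = []
    for _ in range(n_trials):
        history.append(round(response, 3))
        response *= decay  # each repetition weakens the response
    return history

print(habituate(6))  # [1.0, 0.7, 0.49, 0.343, 0.24, 0.168]
```

A mere decrement of this kind can be produced by very simple machinery, which is one reason why habituation, unlike conditioning, is so widely shared and so weak a candidate for indicating mental states.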
Appeals to logical possibility were also rejected as a method of deciding whether or not something has mental states. To show that a state of affairs is logically possible (or not obviously logically impossible) does not establish that it is physically possible. While Chalmers (1996, p. 94 ff.) argues for the logical possibility of zombies who look and act like us, but have no subjective experiences, he is to be commended for drawing no sceptical conclusions from this example, regarding the (real-world) problem of other minds.
Thought experiments have also been used to undermine the relevance of mental states (in particular, conscious states) in the daily lives of their bearers. It has been suggested that because we can imagine beings lacking mental states who evolved in such a way that they look and behave like us (or like other animals), mental states therefore have no role in human (or animal) behaviour, making their presence unknowable. Searle argues that the methodology used to support this sceptical conclusion is flawed and contains a hidden commitment to dualism:
The normal way we have of inquiring into the role of some phenotypical trait is to imagine the absence of that trait, while holding the rest of nature constant, and then see what happens... Now try it with consciousness. Imagine that we all fall into a coma and lie around prostrate and helpless... You cannot eat, copulate, raise your young, hunt for food, raise crops, speak a language, organize social groups, or heal the sick if you are in a coma... You see that we would soon become extinct, but that is not the way the skeptic imagines it. He imagines that our behavior remains the same, only minus consciousness. But that is precisely not holding the rest of nature constant, because in real life much of the behavior that enables us to survive is conscious behavior. In real life you cannot subtract the consciousness and keep the behavior. To suppose that you can is to suppose ... a dualistic account of consciousness (1999, pp. 63-64).
While Searle's argument exposes the defects of using thought experiments to cast doubt on the relevance or even the presence of mental states, his own methodology, while certainly valid for human beings, is problematic when applied to other organisms, for three reasons. First, their physiology differs from ours, raising the possibility that their behaviour may have different underlying (non-mentalistic) causes, even when it resembles ours. The fact that we cannot eat without being conscious does not entail that bacteria, which also eat, are conscious. Second, the methodology presupposes a distinction between sleeping and waking states. While this distinction can be made for humans and most other vertebrates, it remains unclear which invertebrates sleep, and even how we should define "sleep". Finally, there are other mental states besides conscious ones: subconscious or unconscious states are a feature of everyday life, as dreamers and absent-minded drivers know. The possibility that there are animals whose entire mental lives are played out below the level of conscious awareness needs to be examined.
Other criteria for identifying mental states were rejected because of their empirical inadequacies. The widely shared notion that programmable behaviour is mindless was rejected as false on experimental grounds. The question of whether behaviour is conscious is independent of whether it is genetically programmed; any combination is possible (Griffin, 1992, p. 254). Accordingly, the fact that the transfer of information in bacterial cells is regulated by molecules and ions (Kilian and Muller, 2001) does not, per se, render it mindless.
In the previous chapter, we examined the argument that a computational explanation for a certain kind of behaviour in an organism should preclude it from being regarded as mentalistic, on methodological grounds: we do not need to ascribe minds to computational devices. While allowing that organisms could be described as computational devices, we rejected this argument, on the grounds that there is a fundamental difference between living systems and other computational devices. A living thing is characterised by intrinsic relations, dedicated functionality and a nested hierarchy of parts, which give it an intrinsic end and make it a true individual - something we can call a body. Other computational devices lack this finality and are nothing more than assemblages. A piece of behaviour that we would regard as mindless if performed by a human-built computer (which is a mere aggregate of parts) may be more appropriately explained in terms of mental states, if it occurs in an organism that acts for its own intrinsic ends. Thus it would be wrong to infer from the fact that a bacterium has the formal computing power of a Turing machine (Muller, Di Primio and Lengeler, 2001, p. 93) that it has no more of a mind than a human-built Turing machine.
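To unpack what "the formal computing power of a Turing machine" amounts to, here is a minimal sketch of the architecture in question: a finite rule table operating on an unbounded tape. The example machine and its rule table are my own illustrative inventions, with no connection to bacterial chemistry; the point is only how austere the formal model is.

```python
# A minimal Turing-machine interpreter: a finite table of rules of the
# form (state, symbol) -> (new_state, new_symbol, move), plus a tape.
# The example machine below simply inverts a binary string; it is an
# arbitrary illustration, not anything specific to bacteria.

def run_tm(rules, tape, state="start", blank="_"):
    """Run a Turing machine until it reaches the 'halt' state."""
    cells = dict(enumerate(tape))   # sparse tape, extended as needed
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        state, cells[head], move = rules[(state, symbol)]
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

flip_bits = {
    ("start", "0"): ("start", "1", "R"),  # 0 -> 1, move right
    ("start", "1"): ("start", "0", "R"),  # 1 -> 0, move right
    ("start", "_"): ("halt",  "_", "R"),  # blank reached: done
}

print(run_tm(flip_bits, "10110"))  # -> 01001
```

Ascribing this formal power to a bacterium is therefore a claim about what its molecular signalling can in principle compute, not about any resemblance to silicon hardware; our argument was that the inference from "Turing-equivalent" to "mindless" fails precisely because the bacterium, unlike the interpreter above, is a body with intrinsic ends.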
In the Introduction, simplicity was discussed as an explanatory virtue, but found to be a double-edged sword. Occam's razor is invoked both by minimalists (e.g. Kennedy, 1992, p. 121), to dispense with mentalistic explanations for animal behaviour as redundant, and by maximalists, either to argue that it is simpler to suppose that animals whose neuroanatomy is similar to ours have mental capacities like ours (e.g. Griffin, 1992, p. 4), or to argue that imputing to an animal the ability to think in terms of basic concepts when confronted with novel situations is a simpler explanation of its adaptive behaviour than hypothesising that it operates according to some complex program for dealing with different environmental conditions (Griffin, 1992, p. 115). The uncertainty about which kind of explanation is simplest reflects our current state of ignorance: we simply do not know enough about the role played by each part of an animal's brain in generating its mental states, or about the internal action selection programs that regulate animal behaviour.
Morgan's Canon was also found to be an unsatisfactory guide. Even leaving aside worries about its terminology of "higher" and "lower" psychological faculties, its key insight, that nature must be parsimonious (Bavidge and Ground, 1994, p. 26), contains a hidden assumption: that it is more complicated for nature to generate adaptive behaviour by means of mental states than by other means. Wolfram (2002, p. 721) posits that almost all systems, even those with simple underlying rules, are equally complex, in that they can be used to generate computations of equivalent sophistication. A mindless system that generates adaptive behaviour may be no less complex than an intelligent one.
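Wolfram's claim is motivated by results such as the computational universality of Rule 110, an elementary cellular automaton whose entire rule fits in an eight-entry lookup table. The sketch below is a standard textbook-style implementation (not Wolfram's own code), offered only to show how little machinery such a system requires.

```python
# Rule 110: each cell's next value depends only on itself and its two
# neighbours, via an 8-entry lookup table. Despite this trivially
# simple rule, Rule 110 is known to be computationally universal,
# which is the kind of result behind Wolfram's claim that very simple
# systems can match the sophistication of far more elaborate ones.

RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells: list) -> list:
    """One synchronous update of the whole row (fixed zero boundaries)."""
    padded = [0] + cells + [0]
    return [RULE_110[tuple(padded[i - 1:i + 2])]
            for i in range(1, len(padded) - 1)]

# Start from a single live cell and watch complex structure unfold.
row = [0] * 30 + [1]
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

If an eight-line rule table can support universal computation, then the parsimony argument cuts no ice: there is no a priori reason to think that a "mindless" mechanism producing adaptive behaviour is any simpler for nature to build than a mentalistic one.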
The methodology finally proposed in the Introduction for evaluating a claim that a certain kind of behaviour is indicative of a mental state was to proceed by asking: what is the most appropriate way of describing that behaviour? We should use mental states to explain the behaviour of an organism if and only if doing so allows us to describe, model and predict it more comprehensively than other modes of explanation, and with at least as great a degree of empirical accuracy.
Earlier, I rejected an a priori approach to mental states as philosophically limiting: such an investigation runs the risk of omitting important evidence that may fall outside the narrow bounds of the investigator's definition. Nevertheless, we have to start looking somewhere in our quest for mental states. Where should we look for minds? And if we are investigating the minds of organisms, which aspects of an organism's behaviour should we investigate? I shall discuss two recently proposed "answers" to this question - those given by Stephen Wolfram and Daniel Dennett - before commencing my investigation of the "mind-like" behaviour of different kinds of organisms.
Conclusions reached - a note to the reader
In the course of my investigation, I shall list and number my conclusions for ease of reference. I shall formulate conclusions of four kinds: