Chapter 2 - What does it take to possess a Minimal Mind?


Preliminaries

In the second chapter, entitled "What does it take to possess a Minimal Mind?", I address the issue of which organisms can be said to possess mental states, and what kind of features the most basic kind of mind would have to possess.

The "minimal mind" which I shall describe in this chapter is not one which experiences qualia ("raw feels", such as the quality of redness that one experiences when one looks at ripe tomatoes), let alone phenomenal consciousness - a richer concept, which "covers all the various kinds of order and structure found within the domain of ... the world as it appears to us" (Van Gulick, 2004). The answer to the lay-person's question, "Does a minimal mind have subjective feelings?" is "No, but there are good reasons for calling it a mind nonetheless".

As I explain in this chapter, a minimal mind shares a number of impressive abilities with phenomenally conscious minds. In particular, it can (i) sense objects in its environment, (ii) remember new skills, (iii) flexibly update its own internal programs, which regulate its behaviour, (iv) learn to associate actions with their consequences, (v) control its own bodily movements by fine-tuning them, (vi) represent its current state, its goal and the pathway it needs to follow to get to its goal, and (vii) correct any mistakes it makes in its movements towards its goal, and correct factual mis-representations of its environment in the light of new information. I argue that an organism with these capacities - and a suitable anatomy - can be said to have beliefs, desires and intentions, making it a bona fide agent.

In particular, I argue that animals with minimal minds can be said to have beliefs in the true sense of the word, because they create their own internal representations of their movements towards their goals, and these representations: (i) track the truth, insofar as they correct their own mistakes; (ii) possess map-like features, as typical beliefs do (which is why I call them minimal maps); and (iii) incorporate both means and ends, making agency possible. Any animal which can form such a representation through a process which is under its control is a bona fide agent. Its representations can be described as beliefs, and its goals can be described as its objects of desire.

After discussing the methodologies that have been proposed for identifying mental states, I critically examine two approaches - the computational approach of Stephen Wolfram and the intentional stance developed by Daniel Dennett - but argue that both offer important philosophical insights.

Drawing upon my conclusions in chapter 1, I argue that when we look at Dennett's intentional systems, the most fundamental division is that between living and non-living entities, which underlies Dennett's distinction between bona fide agents and pseudo-agents. In other words, non-living entities are mindless. I also argue that Dennett's intentional stance can be divorced from the terminology of "beliefs" and "desires" which he uses to explain it, and that at least two different kinds of intentional stance can be adopted to explain the behaviour of organisms, according to two different classifications. First, we can distinguish between a mind-neutral "goal-centred" stance, which describes the organism's behaviour in terms of its goals and the information available to it, and a mentalistic "agent-centred" stance which views the organism as an intentional agent and explains its behaviour in terms of its desires and the beliefs it entertains. We can also distinguish between a "third-person" and a "first-person" account of an entity's behaviour. A third-person account need not imply mindlessness, however: some philosophers have proposed that animals are capable of having beliefs and desires without subjective states.

I argue that being an organism is a necessary but not sufficient condition for having mental states. When investigating the behaviour of organisms, I propose to adopt a mind-neutral "goal-centred" intentional stance. After clarifying the linguistic conventions I shall observe, I outline the major groupings of living things that are recognised by scientists, and the connections between these groupings. I then proceed to discuss the various behavioural and biological features of living things that have been proposed as relevant to the definition of mental states - in particular, sensory capacities, memory, flexible behaviour, the ability to learn, self-directed movement, representational capacity, the ability to correct one's mistakes and possession of a central nervous system.

I examine the various ways in which these features are realized in different kinds of living things. My enquiry draws on "case studies" from a wide range of organisms - viruses; bacteria; protoctista (e.g. amoebae, paramecia and slime moulds); plants; animal cells; the simplest animals (sponges); cnidaria (coelenterates, such as jellyfish); worms (flatworms, roundworms and earthworms); insects; cephalopods; and vertebrates.

I argue that operant conditioning, spatial learning, tool use and social learning can only be explained by adopting an agent-centred intentional stance. Any organism that is capable of one of these kinds of learning qualifies as having beliefs, desires and intentions, and therefore has what I call a "minimal mind". There are thus four possible kinds of minimal mind. For each of the four kinds of learning described, I propose a set of necessary and sufficient conditions that must be met before they can be regarded as manifestations of bona fide intentional agency.

Finally, I formulate tentative conclusions regarding which animals should be regarded as intentional agents.

How do we decide which entities possess mental states?

It is almost universally accepted that at least some animals have mental states, which allow them to take an interest in certain things (e.g. food, sex or the avoidance of pain). So far, I have made no assumptions about the mental capacities of living things, and have attempted to ground their interests in the mere fact of their being alive. I will begin this discussion of different levels of cognition in organisms by addressing the question of whether all or most living things possess rudimentary mental states, which could enable them to take an interest in something, as opposed to merely having an interest in it.

I should acknowledge at the outset that the quest for "mental states" comes with some philosophical baggage. As my investigation eschews pre-conceived notions of what "the mind" is, I shall simply set forth these views, and refrain from adjudicating between them until our investigation is complete.

Our modern terminology of mental states owes much to Descartes, who distinguished between activities or states requiring our attention and processes which can be performed absent-mindedly or while asleep. Descartes characterised processes of the former kind as "cogitative", or relating to thought. Descartes' conception of "thought" was meant to encompass all mental states, as he explained in the Principles of Philosophy (1644): "By thought I understand all that of which we are conscious as operating in us. And that is why not alone understanding, willing, imagining, but also feeling, are here the same thing as thought" (Haldane and Ross, 1970, I.222). Elsewhere, he wrote that "there are ... characteristics which we call mental [literally cogitative, or relating to thought] such as understanding, willing, imagining, sensing, and so on. All these are united by having in common the essential principle of thought, or perception, or consciousness [conscientia, a state of being aware]" (Descartes' Reply to Hobbes' Second Objection, translation and footnotes by Ross, 1975-1979). By contrast, processes which can be performed absent-mindedly or while asleep were excluded from the sphere of mental states, and were deemed to be "automatic".

This way of carving up the activities of organisms would have seemed highly unusual to Aristotle. Indeed, there was no term in his lexicon for what we would call "mental states". The term psuche (soul) will not do, as plants, which are said to lack perception, have a psuche because they are capable of being nourished (De Anima 2.4, 415a24-25, 415b27-28). Animals are characterised by virtue of their faculty of perception (aisthesis) (De Sensu 1, 436b10-12), but non-human animals are said to lack reason (logos) (De Anima 3.3, 428a4; Eudemian Ethics 2.8, 1224a27; Politics 7.13, 1332b5; Nicomachean Ethics 1.7, 1098a3-4), reasoning (logismos) (De Anima 3.10, 433a12), thought (dianoia) (Parts of Animals 1.1, 641b7), belief (doxa) (De Anima 3.3, 428a19-24; On Memory 450a16) and intellect (nous - also translated as "mind") (De Anima 1.2, 404b4-6; all references cited in Sorabji, 1993, p. 14). Aristotle described nous (translated as "mind", but also rendered as "intellect" or "reason") as "the part of the soul by which it knows and understands" (De Anima 3.4, 429a9-10; cf. 3.3, 428a5; 3.9, 432b26; 3.12, 434b3). "[J]ust as the having of sensory faculties is essential to being an animal, so the having of a mind is essential to being a human" (Shields, 2003; see also Metaphysics 1.1, 980a21; De Anima 2.3, 414b18; 3.3, 429a6-8). Aristotle does not seem to have regarded perception and thought as even belonging to a common category (e.g. "knowledge", "cognition", "awareness" or "consciousness"). On the contrary, he sharply distinguished knowledge or cognition (gnosis) from perception (De Anima 3.8, 431b24), and apart from his discussion (De Anima 3.2) of how it is that we can perceive that we are seeing or hearing, seems to have said very little about what we would call "consciousness". The only term which Aristotle does apply to both perception and thought is krinein (De Anima 3.9, 432a16), which according to Ebert (1983) is best translated as discrimination, or a discerning activity.

According to the Cartesian schema, then, there is a fundamental divide between beings that have minds and those lacking them, whereas on Aristotle's view, there are three basic categories of organisms: those that are capable of being nourished, those that can discriminate between objects in their surroundings, and those that can know and understand. The modern conception of mental events is somewhat broader than Descartes': it is now acceptable to speak of unconscious as well as conscious mental processes. Some philosophers (e.g. Searle, 1999, p. 88) differentiate between nonconscious and subconscious brain states, recognising only the latter as mental, because they are at least potentially conscious. Others (e.g. Lakoff and Johnson, 1999, p. 10) insist that "most of our thought is unconscious, not in the Freudian sense of being repressed, but in the sense that it operates beneath the level of cognitive awareness, inaccessible to consciousness and operating too quickly to be focussed on".

There is also a considerable diversity of opinion about the existence and location of a boundary between entities that have minds and those that do not. One school of thought rejects the idea of a sharp boundary between organisms that have mental states and those that lack them. According to this school, all living things display some degree of flexibility in response to their environment, and there is a continuum of adaptability, from the humblest microbes to the most advanced animals. Where one chooses to draw the line is quite arbitrary; there is no fundamental cognitive distinction between conscious animals and other organisms. The most extreme version of this idea is panexperientialism, the view that all individuals (including simple individuals such as electrons, and compound individuals such as atoms or cells, but excluding mere aggregates such as rocks, tables and desktop PCs) act and feel as a unit, to some degree. As a contemporary exponent, Charles Birch, puts it:

Where then, is a line to be drawn between the sentient and the non-sentient? Descartes drew a line between the human soul and the rest of nature. But drawing a line anywhere is quite arbitrary, be it between humans and other creatures, between fish and frogs or between a cell and a virus. It is more logical to argue that no line exists... (2001, p. 6).

Writing from a naturalistic property dualist perspective, David Chalmers acknowledges that "a conscious experience is a realization of an information state" (1996, p. 292), leading him to espouse a form of panpsychism, which attributes experiences to any system that realises information (including, famously, a thermostat).

At the other extreme, some philosophers argue for a clear-cut divide between animals that possess consciousness and other animals and organisms that lack it. Nicholas Humphrey is a strong proponent of this view:

One thing of which we can be sure is that whenever and wherever in the animal kingdom consciousness has in fact emerged, it will not have been a gradual process... [T]here must have been a threshold where consciousness quite suddenly emerged... (1993, pp. 195 - 196).

A common reaction among scientists to the philosophical debate over cognition is to shun the terminology that generates the debate. Many scientists reject any division of animal behaviour into cognitive and non-cognitive, or conscious and unconscious, as a methodological blind alley, preferring to resort to other concepts to explain animal behaviour. The following remarks by a scientist, who has published papers on associative learning in fruit flies and snails, exemplify this attitude:

You may have noticed that I try to avoid the use of the word 'cognitive'. For my purpose, the distinction into cognitive and non-cognitive has no heuristic value... I personally keep a tally of tasks ... of what different animals have or haven't shown to be able to successfully complete. Eventually, I want to find out how the brain solves these tasks. The question of what parts of the brain are contributing how will be answered then and the question how 'cognitive' the involved processes are, will be redundant... The more evolved and complex the brain is, the more computing power it has, not surprisingly. This is a heuristically much more valuable concept and hypothesis, than to classify certain brain functions as 'cognitive' or not. In my construction of the world, I see no use of the word 'cognitive' (yet?) (Bjorn Brembs, personal e-mail communication, 22 December 2002).

One way to resolve the philosophical conundrums relating to mental states would be to propose a definition of "mental state", argue its merits and use it to distinguish genuine from spurious candidates for having mental states. This a priori approach will not be employed here, as its limitations have been exposed in the Introduction. Instead, a constructive, empirically based approach will be employed: the capacities of different kinds of organisms will be examined, starting from the simplest and most widely shared abilities, in an attempt to identify those capacities that may be relevant to the possession of "mental states", whatever they turn out to be. A philosophical "winnowing process" will be applied at each stage in our investigation, in order to determine if a capacity really indicates the existence of bona fide mental states in organisms possessing it. If a reported phenomenon fails to pass muster as "mental", our search will focus on more promising but less general cases, until a suitable candidate is found. My proposed definition of "mind" and "mental states" will thus emerge in the course of this chapter.

Very well then; but how are we to decide what counts as a mental state? Rules of evidence were addressed in the Introduction, where I decided to err on the side of caution and not give credence to experiments and scientific observations that have not been replicated by other researchers. For instance, Abramson, Garrido, Lawson, Browne and Thomas (2002) discuss research by Cleve Backster in the 1960s, which purported to show that plants could read people's thoughts and feelings, and dryly note:

While studies on the emotional and telepathic capacities of plants were greeted with great interest, attempts to replicate these studies have not been successful (2002, p. 177).

Another rule of evidence which I adopted was to reject studies whose follow-up has produced conflicting results. Abramson, Garrido, Lawson, Browne and Thomas (2002, p. 175) report that this is precisely what has happened with classical conditioning studies conducted on Mimosa, a small shrub whose leaves are sensitive to stimulation. While there is a large body of evidence suggesting that Mimosa plants are capable of habituation (the simplest form of learning), it would be premature to conclude that they are capable of being conditioned.

Appeals to logical possibility were also rejected as a method of deciding whether or not something has mental states. To show that a state of affairs is logically possible (or not obviously logically impossible) does not establish that it is physically possible. While Chalmers (1996, p. 94 ff.) argues for the logical possibility of zombies who look and act like us, but have no subjective experiences, he is to be commended for drawing no sceptical conclusions from this example, regarding the (real-world) problem of other minds.

Thought experiments have also been used to undermine the relevance of mental states (in particular, conscious states) in the daily lives of their bearers. It has been suggested that because we can imagine beings lacking mental states who evolved in such a way that they look and behave like us (or like other animals), mental states therefore have no role in human (or animal) behaviour, making their presence unknowable. Searle argues that the methodology used to support this sceptical conclusion is flawed and contains a hidden commitment to dualism:

The normal way we have of inquiring into the role of some phenotypical trait is to imagine the absence of that trait, while holding the rest of nature constant, and then see what happens... Now try it with consciousness. Imagine that we all fall into a coma and lie around prostrate and helpless... You cannot eat, copulate, raise your young, hunt for food, raise crops, speak a language, organize social groups, or heal the sick if you are in a coma... You see that we would soon become extinct, but that is not the way the skeptic imagines it. He imagines that our behavior remains the same, only minus consciousness. But that is precisely not holding the rest of nature constant, because in real life much of the behavior that enables us to survive is conscious behavior. In real life you cannot subtract the consciousness and keep the behaviour. To suppose that you can is to suppose ... a dualistic account of consciousness (1999, pp. 63-64).

While Searle's argument exposes the defects of using thought experiments to cast doubt on the relevance or even the presence of mental states, his own methodology, while certainly valid for human beings, is problematic when applied to other organisms, for three reasons. First, their physiology differs from ours, raising the possibility that their behaviour may have different underlying (non-mentalistic) causes, even when it resembles ours. The fact that we cannot eat without being conscious does not entail that bacteria, which also eat, are conscious. Another problem with the methodology is that it pre-supposes a distinction between sleeping and waking states. While this distinction can be made for humans and most other vertebrates, it remains unclear which invertebrates sleep, and even how we should define "sleep". Finally, there are other mental states besides conscious ones: subconscious or unconscious states are a feature of everyday life, as dreamers and absent-minded drivers know. The possibility that there are animals whose entire mental lives are played out below the level of conscious awareness needs to be examined.

Other criteria for identifying mental states were rejected because of their empirical inadequacies. The widely shared notion that behaviour which is programmable is mindless was rejected as false on experimental grounds. The question of whether behaviour is conscious is independent of whether it is genetically programmed; any combination is possible (Griffin, 1992, p. 254). Accordingly, the fact that the transfer of information in bacterial cells is regulated by molecules and ions (Kilian and Muller, 2001) does not, per se, render it mindless.

In the previous chapter, we examined the argument that a computational explanation for a certain kind of behaviour in an organism should preclude it from being regarded as mentalistic, on methodological grounds: we do not need to ascribe minds to computational devices. While allowing that organisms could be described as computational devices, we rejected this argument, on the grounds that there is a fundamental difference between living systems and other computational devices. A living thing is characterised by intrinsic relations, dedicated functionality and a nested hierarchy of parts, which give it an intrinsic end and make it a true individual - something we can call a body. Other computational devices lack this finality and are nothing more than assemblages. A piece of behaviour that we would regard as mindless if performed by a human-built computer (which is a mere aggregate of parts) may be more appropriately explained in terms of mental states, if it occurs in an organism that acts for its own intrinsic ends. Thus it would be wrong to infer from the fact that a bacterium has the formal computing power of a Turing machine (Muller, Di Primio and Lengeler, 2001, p. 93) that it has no more of a mind than a human-built Turing machine.

In the Introduction, simplicity was discussed as an explanatory virtue, but found to be a double-edged sword. Occam's razor is invoked, both by minimalists (e.g. Kennedy, 1992, p. 121) to dispense with mentalistic explanations for animal behaviour as redundant, and by maximalists, either to argue that it is simpler to suppose that animals whose neuroanatomy is similar to ours have mental capacities like ours (e.g. Griffin, 1992, p. 4) or to argue that imputing to an animal the ability to think in terms of basic concepts when confronted with novel situations is a simpler explanation of its adaptive behaviour than hypothesising that it operates according to some complex program for dealing with different environmental conditions (Griffin, 1992, p. 115). The uncertainty about which kind of explanation is simplest reflects our current state of ignorance: we simply do not know enough about the role played by each part of an animal's brain in generating its mental states, or about the internal action selection programs that regulate animal behaviour.

Morgan's Canon was also found to be an unsatisfactory guide. Even leaving aside worries about its terminology of "higher" and "lower" psychological faculties, the key insight, that nature must be parsimonious (Bavidge and Ground, 1994, p. 26), contains a hidden assumption that it is more complicated for nature to generate adaptive behaviour by means of mental states than by other means. Wolfram (2002, p. 721) posits that almost all systems, even those with simple underlying rules, are equally complex, in that they can be used to generate computations of equivalent sophistication. A mindless system that generates adaptive behaviour may be no less complex than an intelligent one.

The methodology finally proposed in the Introduction for evaluating a claim that a certain kind of behaviour is indicative of a mental state was to proceed by asking: what is the most appropriate way of describing it? We should use mental states to explain the behaviour of an organism if and only if doing so allows us to describe, model and predict it more comprehensively, and with as great or a greater degree of empirical accuracy, than invoking other modes of explanation.

Earlier, I rejected an a priori approach to mental states as philosophically limiting: such an investigation runs the risk of omitting important evidence that may fall outside the narrow bounds of the investigator's definition. Nevertheless, we have to start looking somewhere in our quest for mental states. Where should we look for minds? And if we are investigating the minds of organisms, which aspects of an organism's behaviour should we investigate? I shall discuss two recently proposed "answers" to this question - those given by Stephen Wolfram and Daniel Dennett - before commencing my investigation of the "mind-like" behaviour of different kinds of organisms.

Conclusions reached - a note to the reader

In the course of my investigation, I shall list and number my conclusions for ease of reference. I shall formulate conclusions of several different kinds:

Some of the conclusions reached will identify necessary and/or sufficient conditions for us to be able to ascribe cognitive mental states to an entity. Other conclusions describe the range of entities possessing properties relevant to the possession of mental states (e.g. all X's are capable of learning).

Wolfram's neo-animism: Are minds nothing more than computational devices?

Stephen Wolfram (2002) espouses what I would call a "neo-animist" position with regard to the occurrence of mind (or "intelligence", to use his preferred terminology). He argues that although the idea of animism - which he defines as the view "that systems with complex behavior in nature must be driven by the same kind of essential spirit as humans" - "has been seen as naive and counter to progress in science", this idea is actually "crucial" to science (2002, p. 845).

Wolfram's argument can be expressed in six steps. First, if anything can be said to be the distinguishing hallmark of intelligence, it has to be complex behaviour. Wolfram explicitly equates intelligence with complexity when he writes:

Yet in Western thought there is still a strong belief that there must be something fundamentally special about us [human beings]. And nowadays the most common assumption is that it must have to do with the level of intelligence or complexity that we exhibit (2002, p. 844, italics mine).

Second, complex behaviour can be defined as the ability to perform sophisticated calculations. Hence, "intelligence is associated with the ability to do sophisticated calculations" (2002, p. 822). Here, "calculation" is meant to be a general term. It does not matter whether the calculation is performed with numbers, the black and white cells in a cellular automaton, text, images or anything else. For example, the particles in a fluid could be used to perform a calculation. In fact, "it is possible to think of any process that follows definite rules as being a computation - regardless of the elements it involves" (2002, p. 716). This implies that we can think of natural processes as computations, where the rules are defined by the laws of nature, instead of programs written by human beings (2002, p. 716). The rules can be described as mappings or functions that take a system from one state to another. In other words, "all processes, whether they are produced by human effort or occur spontaneously in nature, can be viewed as computations" (2002, p. 715).
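
To make this notion of a rule-governed transformation concrete, here is a minimal sketch in Python (my own illustration, not code taken from Wolfram). It implements an elementary cellular automaton: a single, very simple rule maps each state of the system - a row of black and white cells - onto the next state. Rule 30, used here, is Wolfram's celebrated example of a simple program that generates highly complex behaviour; rule 110, obtained by changing the RULE constant, is one he argues is computationally universal.

    # Minimal sketch of Wolfram-style computation: an elementary cellular automaton.
    # The "rule" is a mapping from each three-cell neighbourhood to a new cell value;
    # applying it repeatedly takes the system from one state (input) to later states (outputs).

    RULE = 30  # rule 30: a simple rule with complex behaviour; try 110 for the rule Wolfram argues is universal

    def step(cells):
        """Apply the rule once: each cell's next value depends on itself and its two neighbours."""
        n = len(cells)
        new_cells = []
        for i in range(n):
            left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
            neighbourhood = (left << 2) | (centre << 1) | right   # a number from 0 to 7
            new_cells.append((RULE >> neighbourhood) & 1)         # read off the rule's output bit
        return new_cells

    state = [0] * 41
    state[20] = 1                       # start from a single black cell
    for _ in range(20):
        print("".join("#" if c else "." for c in state))
        state = step(state)

Started from a single black cell, even this trivially simple rule produces a pattern with no evident regularity, which is the kind of behaviour Wolfram has in mind when he claims that sophisticated computation does not require a sophisticated underlying mechanism.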

This invites the question: how sophisticated do the calculations performed by a system have to be in order for it to be called intelligent? The third step in Wolfram's argument is contained in his Principle of Computational Equivalence (or P.C.E.), which states that there is in fact an upper limit to complex behaviour in our universe, and that anything that achieves this upper limit can be considered intelligent. This upper limit of complexity is found in universal systems. A universal system is one that can be used to perform any calculation - that is, one that can be programmed to follow any rule - so long as the function described by the rule only applies to a finite number of states (2002, pp. 642 - 644, 721). (There could conceivably be systems that can exist in any one of an infinite number of different states, but Wolfram argues that there is no reason to suppose such systems actually occur in nature.) So, anything that can be considered as a universal system qualifies as intelligent.

Fourth, Wolfram's Principle of Computational Equivalence also implies that any entity possessing intelligence - i.e. any universal system - is as smart as any other: "once one has a universal system such a system can emulate any of the kinds of systems that we considered - even ones whose construction is more complicated than its own" (2002, p. 720). Some universal systems may require more time and resources to complete their calculations than others, but any of them can eventually solve any problem.

Fifth, the same principle implies that universal systems are surprisingly commonplace: "a vast range of systems - even ones with very simple underlying rules - should be equivalent [to universal systems] in sophistication of the calculations they perform" (2002, p. 822). More precisely, it says that "unless it is obviously simple essentially any behavior that one sees should correspond to a computation of equivalent sophistication" (2002, p. 726, italics mine). In other words, "any piece of complex behavior that we see ... is at some level equivalent" (2002, p. 726). Not only do some artificial non-biological systems (e.g. computers) exhibit this kind of complexity, but many kinds of natural phenomena, such as the weather, or the flow of sand in a sand pile, or the motion of a turbulent fluid (2002, p. 822), also do so.

Sixth, it follows that intelligence can be found wherever there are systems with the ability to perform complex calculations. Since such systems are commonplace in nature, it follows that intelligence is ubiquitous in the cosmos. Wolfram approves of the primitive, animist notion that the weather has a mind of its own: when we say this, "we are in effect attributing intelligence to the motion of a fluid" (2002, p. 822).

Before taking issue with Wolfram, I should acknowledge that computation, which Wolfram defines broadly as behaviour that can be described by a rule, is a useful starting point for any discussion of mental states. We cannot discern meaning (let alone intelligence) in an entity's behaviour unless we can first recognise a pattern in it. This leads me to propose my first conclusion regarding computational criteria for cognitive mental states:

C.1 Our identification of computations in an entity, or rule-governed transformations that take it from one state to another, is a necessary condition for our being able to ascribe cognitive mental states to it.

The term "entity" is employed very loosely here, to cover individuals, their parts, aggregates or systems in general. The initial and final states can be regarded as the "input" and "output" of the computation.

The above conclusion describes a condition for our being able to recognise intelligence. What it says is that we should never impute mental states to an entity whose behaviour is, from our standpoint, totally devoid of any underlying pattern. (There may well be entities whose behaviour is too complex for us to discern the rules underlying it. Wolfram's P.C.E. entails that our brains, being universal systems, should eventually be able to discover the rules, but "eventually" may be a lot longer than a human lifespan!)

Since (as Wolfram remarks) computations are ubiquitous in nature, we can also formulate a second conclusion regarding the range of entities performing computations:

C.2 All natural entities and natural processes can be described according to Wolfram's computational stance: that is, the set of natural entities which perform computations is universal.

Evaluation of Wolfram's arguments

Some critics might take issue with the second and third steps in Wolfram's argument - the equation of intelligence with the ability to calculate, and his denial that any systems exist that are capable of occupying any one of an infinite number of different states. For example, mathematicians sometimes make intuitive generalisations that defy reduction to concrete calculations. (Wolfram's own conjecture that almost all the systems in our world are universal systems is a case in point. Is this utterance a computation?) And yet, surely these generalisations qualify as intelligent utterances when made by their originators. Wolfram's response is that intelligence has to manifest itself in a concrete, physical process in order to generate results (2002, p. 721). In other words, a purely "general" intelligence would be utterly unrecognisable. Do we have any grounds for believing that every system in existence is finite, as Wolfram believes? No, but since scientists have hitherto been able to describe phenomena in the cosmos without having to posit systems capable of occupying an infinite number of states, we can set such systems aside on methodological grounds (Occam's razor).

My own comment on Wolfram's remarkable tour de force is that the first step in the argument - the equation of intelligence with complexity - is the most questionable, because it excludes any notion of purpose. With regard to any intelligent behaviour, it is always legitimate to ask what it is for. What is the intelligent agent trying to do? And in fact, the reason why we tend not to regard phenomena such as the wind as intelligent is that there is no discernible purpose behind them.

To his credit, Wolfram is quite explicit about excluding purpose from his definition of intelligence, on the grounds that it is too hard to discern, even when we are dealing with the behaviour of other human beings. For instance, do we have any reliable means of distinguishing the utterances of someone speaking a foreign tongue from those of someone babbling in gobbledegook (2002, p. 825)? And what about bird song? It is very complex, but no-one can be sure if it really means anything (2002, pp. 826 - 827). Again, if an alien civilisation wished to send us a message, why should they not use the wind or any other medium to encode it? We can take Wolfram's scepticism a step further and ask whether aliens themselves could be embodied in the wind.

Let us start with human language first. As Wittgenstein argued (Philosophical Investigations I. 19, 23), the meaning of linguistic utterances can only be understood with reference to their users' form of life. A tape in Sanskrit may sound like gobbledegook, but in practice, the way we learn Sanskrit, or any other language, is to see what its users do with it: greet, command, offer, request, challenge, describe, narrate and so on. In order to discern whether one of these "language games" is being played, we have to thoroughly familiarise ourselves with the way of life of the people speaking the language. "Gobbledegook" cannot be a language for the very simple reason that nobody does anything with it.

Wittgenstein's notion of a "form of life" also suggests we cannot decide whether bird song means anything until we have familiarised ourselves with how birds live in their natural environment, as ethologists have attempted to do. We shall return to this question in chapter 4.

As for alien messages, it is conceivable that they might literally be blowing in the wind, but if Wittgenstein's proposal is correct, we would have to meet the aliens first and immerse ourselves in their way of life before we could recognise their messages, let alone understand them.

But, it may be asked, what if the aliens are right under our nose: what if they, too, are blowing in the wind? The correct response to this proposal is to ask what kind of systems could embody intelligence - as opposed to merely serving as a medium for conveying a message by an intelligence? Dennett (1997) has offered some insights that help to address this question. According to Dennett, embodied minds can be regarded as agents, and the best way to discover agency at work is to look for what he calls intentional systems.

Dennett's intentional stance: Is mind a property of intentional systems?



Dennett (1997, pp. 34 - 49) argues that we can regard all organisms - and, for that matter, many human artifacts - as what he calls intentional systems: entities whose behaviour can be predicted from an intentional stance, where the entities are treated as if they were agents who choose to behave in a certain way, because of their underlying beliefs about their environment, and their desires. As Dennett puts it, intentional systems exhibit the philosophical property of aboutness: for instance, beliefs and desires have to be about something. I may believe that the food in front of me is delicious: I have a belief about the food, and a desire relating to it (a desire to eat it). The food is the intentional object of my belief and desire - even if it turns out that the object I had presumed to exist, does not (e.g. if the "food" is really plastic that has been molded, painted and sprayed with volatile chemicals, in order to make it look and smell like delicious food).

Dennett suggests that we can usefully regard living things and their components from an intentional stance, because their behaviour is "produced by information-modulated, goal-seeking systems" (p. 34):

It is as if these cells and cell assemblies were tiny, simple-minded agents, specialized servants rationally furthering their particular obsessive causes by acting in the ways their perception of circumstances dictated. The world is teeming with such entities, ranging from the molecular to the continental in size and including not only "natural" objects, such as plants, animals and their parts (and the parts of their parts), but also many human artifacts. Thermostats, for instance, are a familiar example of such simple pseudoagents (1997, pp. 34 - 35).

Elsewhere, Dennett elaborates his reasons for regarding a thermostat as an intentional system:

...it has a rudimentary goal or desire (which is set, dictatorially, by the thermostat's owner, of course), which it acts on appropriately whenever it believes (thanks to a sensor of one sort or another) that its desire is unfulfilled. Of course you don't have to describe a thermostat in these terms. You can describe it in mechanical terms, or even molecular terms. But what is theoretically interesting is that if you want to describe the set of all thermostats ... you have to rise to this intentional level... [W]hat ... thermostats ... all have in common is a systemic property that is captured only at a level that invokes belief-talk and desire-talk (or their less colorful but equally intentional alternatives; semantic information-talk and goal-registration-talk, for instance) (1995).

The chief advantage of the intentional stance, as Dennett sees it, is its predictive convenience. There are two other methods of predicting an entity's behaviour: what Dennett calls the physical stance (using scientific laws to predict the outcome - e.g. the trajectory of a bullet fired from a gun), and the design stance (assuming that the entity has been designed to function in a certain way, and that it is working properly - e.g. that a digital camera will take a picture when I press the button). The latter stance saves time and worry if the inner workings of the entity in question are too complex for behaviour to be rapidly predicted from a physical stance. Sometimes, however, even an entity's functions may be bafflingly complicated, and we may try to predict its behaviour by asking: what does it know (or at least, believe) and what does it want? The example Dennett employs is that of a chess-playing computer. I may not understand its program functions, but if I assume that it wants to win and knows where the pieces are on the board, how to move them and what the consequences of each possible move will be (up to a certain number of moves ahead), then I can make a good guess (perhaps a wrong one, given the limits of my memory and imagination) as to what it will do next in a game.
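
The contrast between the stances can be made concrete with a toy sketch in Python (my own illustration, not Dennett's; the class and its values are invented for the purpose). The same thermostat is described twice: once from the design stance, as a mechanical rule relating a sensor reading to a switch, and once from the intentional stance, as a system that turns the heater on whenever it "believes" the room is colder than it "desires". Both descriptions yield the same prediction; the intentional idiom simply redescribes the sensor reading and the set-point.

    # A toy illustration (mine, not Dennett's) of one thermostat viewed from two stances.
    from dataclasses import dataclass

    @dataclass
    class Thermostat:
        set_point: float    # intentional stance: the "desire" (set, dictatorially, by the owner)
        sensed_temp: float  # intentional stance: the "belief" about the room, supplied by a sensor

        def design_stance_output(self) -> str:
            # Design stance: a mechanical rule relating sensor input to switch output.
            return "heater ON" if self.sensed_temp < self.set_point else "heater OFF"

        def intentional_stance_prediction(self) -> str:
            # Intentional stance: the device acts so as to satisfy its "desire", given its "belief".
            believes_room_too_cold = self.sensed_temp < self.set_point
            desire_unfulfilled = believes_room_too_cold
            return "heater ON" if desire_unfulfilled else "heater OFF"

    t = Thermostat(set_point=21.0, sensed_temp=18.5)
    assert t.design_stance_output() == t.intentional_stance_prediction()
    print(t.intentional_stance_prediction())  # -> heater ON

For a single thermostat the intentional description adds little, though Dennett notes that what all thermostats have in common is captured only at this level; its predictive pay-off is clearest when, as with the chess-playing computer, the underlying mechanism is too complicated to write out, yet the "wants" and "believes" description still lets us guess the next move.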

Regarding minds in general, the thesis of Dennett's book, Kinds of Minds, can be summarised as follows:

Dennett's third thesis has been hotly contested, and I will discuss it below.

I shall evaluate Dennett's intentional stance by addressing three relevant issues. First, has Dennett mis-described intentionality? Second, is his intentional stance a global theory of mental states? Third, is it tied to any philosophically contentious theories - in particular, reductionism - or can it be used by philosophers of all persuasions?

Later, I shall argue that Dennett's intentional stance, while philosophically fruitful, does not adequately describe the necessary conditions for the occurrence of mental states, as it overlooks the crucial distinction between living and non-living systems: the latter, I contend, are ineligible for possessing mental states. Additionally, I propose that Dennett's intentional stance can be described in two ways, and that this suggests a rough program for distinguishing mental states from other states - and hence, distinguishing entities which possess minds from those that lack them.

(a) Has Dennett mis-described intentionality?



Beisecker (1999) has challenged the generally accepted account of intentionality:

The intentionality thought to be so definitive of mental states is typically glossed in terms of aboutness or directedness toward objects. The term 'intentionality' derives from a Latin word meaning roughly "to aim" - as one might do with a bow...

But then again, things we're not prepared to credit with thought - for example, heat-seeking missiles and sunflowers - also exhibit directedness towards objects. The challenge then is to find a way to distinguish the special sort of directedness possessed by bona fide thinkers from the more primitive kinds exhibited by these simpler systems (1999, p. 282).

Beisecker offers his own suggestion: "the hallmark of intentional states is their normativity, or susceptibility to evaluation" (1999, p. 283). However, Beisecker is forced to admit that "there is a sense in which artifacts are susceptible to evaluation, and thus possess a certain sort of intentionality" (1999, p. 288): they can fail to fulfill the purpose for which they were designed. For Beisecker, this kind of "intentionality" is purely derivative and hence "second-class", but the same point could be made using Dennett's version of the intentional stance: the "beliefs" we metaphorically ascribe to thermostats are derivative upon their design specifications. Thus Beisecker's account is vulnerable to the same kinds of criticisms he directs at the notion of "aboutness": it includes not only mental states, but other phenomena as well.

Thus intentionality is not definitive of mental states, according to either Dennett's account or Beisecker's: other things also possess it. However, at this stage of our investigation, I would regard it as prejudicial to even attempt an a priori definition of mental states, before we have looked at organisms and their capacities. Rather, we should cast our net wide and attempt to describe a class of phenomena which contains all mental states, even if it includes much else besides.

Since, as Beisecker himself acknowledges, intentionality is etymologically related to "aboutness" and has historically been defined in those terms, I propose to retain the notion of "aboutness" as a useful starting point for discussing intentionality, without endorsing Dennett's philosophy of mind as such. The traditional notion of intentionality is employed by Dennett's philosophical friends and foes alike.

I shall, however, re-visit Beisecker's normativity criterion at a later stage in this chapter, since Beisecker applies it to the vital question of whether animals possess genuine intentionality.

But before we can apply the traditional notion of intentionality to mental states, we have to ask: does it apply to all mental states, or are there some that lack the property of "aboutness"?

(b) Is Dennett's intentional stance a global theory of mental states?

Dennett has performed a valuable service, by providing a perspective within which we can situate mental states, and telling us where to start looking for them: on his theory, we should start by looking for behaviour that can be described by the intentional stance.

Of course, if there are some mental states that cannot be described by the intentional stance, then Dennett's thesis is in trouble. One might argue that there are mental states, such as perceptions and drives, which are too primitive to be characterised in the terms of beliefs and desires, which Dennett uses to characterise this stance. However, such a criticism misses the point. As Dennett's example of the thermostat shows, even a mechanical sensor can be described using the intentional stance: it switches on whenever it believes that the room is too hot or cold. In fact, Dennett (1995) is famous for allowing that thermostats do indeed have "beliefs", because he construes "beliefs" in a "maximally permissive" sense as "information-structures" that are "sufficient to permit the sort of intelligent choice of behavior that is well-predicted from the intentional stance". Moreover, as Dennett argues, perceptual states (such as recognising a horse) exhibit aboutness, even if they are involuntary or automatic. A perception is always a perception of something. In other words, perceptions exhibit the property of aboutness or intentionality (1997, pp. 48 - 49). The same could be said for drives: they are towards something.

Emotions may sometimes lack the property of aboutness: one may feel depressed or elated for no particular reason. However, as de Sousa (2003) points out, these feelings cannot serve as paradigm cases, as the different kinds of emotions can only be distinguished by specifying their formal objects:

A formal object is a property implicitly ascribed by the emotion to its target, focus or propositional object, in virtue of which the emotion can be seen as intelligible. My fear of a dog, for example, construes a number of the dog's features (its salivating maw, its ferocious bark) as being frightening, and it is my perception of the dog as frightening that makes my emotion fear, rather than some other emotion. The formal object associated with a given emotion is essential to the definition of that particular emotion (de Sousa, 2003).

It is worth noting that even Dennett's severest critics, such as Searle (1999), do not dispute his contention that the intentional stance is applicable to all kinds of minds. Is it also applicable to systems which lack minds? Searle and Dennett differ here: Searle does not ascribe intentionality to these systems, because for him, intentionality is "the general term for all the various forms by which the mind can be directed at, or be about, or of, objects and states of affairs in the world" (1999, p. 85, italics mine), while for Dennett, intentionality refers to the simple property of being about something else, whether the entity exhibiting intentionality is a mind or not (1997, pp. 46-47). Even opioid receptors in the brain, to use one of Dennett's examples, are "about" something else: they have been "designed" to accept the brain's natural pain-killers, endorphins. Anything that can "embody information" possesses intentionality (1997, p. 48).

The difference here between the two positions appears to be mainly terminological. Searle concedes that mindless systems may exhibit what he calls "as-if intentionality": they behave as if they had genuine (i.e. mindful) intentionality, and can be metaphorically described as such (1999, p. 93). The real point at issue between Searle and Dennett (to be discussed in part (c) below) is whether the intentionality of our mental states is a basic, intrinsic feature of the world, or whether it can be reduced to something else.

In any case, Dennett's intentional stance certainly opens up a fruitful approach to the investigation of other minds - be they human, alien or animal ones - and it also seems to be a useful tool for describing the mind-like behaviour of "pseudo-agents".

Being an intentional system, then, is a necessary but not sufficient condition for having a mind. It is not a sufficient condition, because there are many things - such as thermostats and biological macromolecules - which are capable of being described by this stance, but are not agents. Dennett refers to such entities as "pseudoagents" (1997, p. 35). In our quest for mental states, we should start by looking for "effects produced by information-modulated, goal-seeking systems" (1997, p. 34), which may either be minds or "as-if" minds.

(c) Is Dennett's intentional stance tied to reductionism?



At the outset of my quest for mental states in animals and (possibly) other organisms, I committed myself to an open-ended investigation, which avoided making philosophical assumptions about the nature of "mind" or "mental states". If Dennett's intentional stance turned out to be wedded to a particular, contentious account of "the mind", then its legitimacy would be open to challenge from the outset.

Certainly, Dennett does make one highly contentious reductionist claim (1997, pp. 27, 30-31): that intentional agency in human beings is grounded in the pseudo-agency of the macromolecules in their bodies. This claim has been contested by Searle, who argues (1999, pp. 90-91) that it is vulnerable to the homunculus fallacy. In its crudest version, the homunculus fallacy attempts to account for the intentional "aboutness" of our mental states by postulating some "little man" or "spectator" in the brain who deems them to be about something. Although Dennett does not account for the intentional "aboutness" of our mental states in this way, he does attempt to solve the problem by taking it down to a lower biological level, where the problem of "aboutness" is said to disappear: the intentionality of our mental states is the outcome of the mini-agency of the macromolecules in our bodies, and the intelligent homunculus is replaced by a horde of "dumb homunculi", each with its own specialised mini-task that it strives to accomplish (Dennett, 1997, pp. 30-31). Searle (1999, pp. 90-91) argues that this move merely postpones the problem: what gives our macromolecular states the intentional property of "aboutness"? Nor does Searle think much of causal accounts of "aboutness", where the intentionality of our symbols is said to be due to their being caused by objects in the world. The fatal objection to causal accounts is that the same causal chains may generate non-intentional states as well (1999, p. 91).

I would like to add that while Dennett's use of the intentional stance to describe the behaviour of the macromolecules in our bodies is pedagogically useful, it overlooks an important feature of rationality: he pictures them as "specialized servants rationally furthering their obsessive causes" (1997, p. 35). The picture contains an inherent contradiction: obsession is a mark of irrational rather than rational behaviour. The obsessive "mini-goals" of the parts of an intentional system derive their significance from the goals which the system, considered as a whole, is "trying" to achieve (e.g. food or sex). The metaphor of rational agency, I would suggest, is properly applied to the organism as a whole, as the good of the parts subserves that of the whole. If we use the intentional stance in our quest for mindful behaviour, then, it is not sufficient to identify body parts in which this behaviour is manifested. It must also be shown that the entity behaves as a whole (i.e. as a body) whose parts are integrated in a fashion that can be described by the intentional stance.

The fundamental divide between Dennett and Searle on intentionality concerns whether there is such a thing as "intrinsic intentionality" (whereby our mental states have a basic property of "aboutness"), as distinct from "derived intentionality" (whereby "words, sentences, books, maps, pictures, computer programs", and other "representational artifacts" (Dennett, 1997, pp. 66, 69) are endowed with an agreed meaning by their creators, who intend them to be "about" something). For Dennett, the distinction is redundant because the brain is itself an artifact of natural selection, and the "aboutness" of our brain states (read: mental states) has already been determined by their "creator, Mother Nature", who "designed" them (1997, p. 70). This move by Dennett is something of a fudge: "Mother Nature" (to borrow Dennett's anthropomorphism) does not "design" or "intend" anything; it merely causes things to happen, and as Searle has pointed out, causation is insufficient to explain intentionality. Searle (1999, pp. 89-98), while agreeing with Dennett that intrinsic intentionality is a natural, biological phenomenon, insists that there is an irreducible distinction between constructs such as the sentences of a language, whose meaning depends on what other people (language users) think, and conscious mental states such as thirst, whose significance does not depend on what other people think. Mental states, and not human constructs, are the paradigm cases of intentionality, and it is just a brute fact about the natural world that these conscious states (which are realised as high-level brain processes), refer intrinsically. An animal's conscious, intentional desire to drink, to use one of Searle's examples, is a biologically primitive example of intrinsic intentionality, with a natural cause: increased neuronal firing in the animal's hypothalamus. "That is how nature works" (1999, p. 95). Searle thus eschews both mysterian (dualist) and eliminative (reductionist) accounts of intentionality.

Despite the fierce controversy that rages over the roots of intentionality and the reducibility of mental states, it is admitted on all sides of the debate that a wide variety of entities can be treated as if they were agents in order to predict their behaviour. This, to my mind, is what makes Dennett's intentional stance a fruitful starting point in our quest for bearers of mental states. The issue of whether mental states can be reduced to mindless, lower-level processes is independent of the question of whether the intentional stance can be used to search for mental states.

Conclusions reached

If the foregoing arguments are correct, then we may conclude that behaving according to the intentional stance is a necessary condition for possessing mental states that are identifiable by us:

I.1 Our ability to describe an entity's behaviour according to Dennett's intentional stance is a necessary condition for our being able to ascribe cognitive mental states to it.

The intentional stance may well describe a considerably smaller class of entities than Wolfram's "computational stance", as I shall call it. Computations, broadly construed, are ubiquitous in nature, but the stipulation of a rule that describes an entity's information processing behaviour need not imply that the behaviour has a goal as such. It simply means that the entity can transform some initial states (inputs) into final states (outputs). Our final conclusion on Wolfram's computational stance is a negative one:

C.3 Our ability to describe an entity's behaviour in terms of rules which transform inputs into outputs (as per Wolfram's computational stance) is not a sufficient warrant for our being able to ascribe cognitive mental states to that entity.

On the other hand, Dennett's claim that the behaviour of all organisms can be described according to the intentional stance appears uncontroversial, in the light of our discussion of intrinsic finality in the previous chapter:

I.2 The set of entities which can be described by Dennett's intentional stance is not universal in scope, but includes all organisms (and their parts).

Why only living things can possess minds. Implications for artificial intelligence.



Before we embark on a quest for minds in living organisms, we need to examine the issue of artificial intelligence. I contend that while Dennett's intentional stance is a fruitful starting point in our search for minds, it overlooks one very important condition which an entity must satisfy before it can be said to possess mental states: the entity in question must be alive.

While Dennett has narrowed the search for embodied minds, his use of the intentional stance to describe the behaviour of some non-living artifacts blurs the philosophically important distinction (argued for in the previous chapter) between living and non-living things. On Dennett's account, there is no reason in principle why non-living artifacts could not exhibit genuine agency, as opposed to the pseudo-agency of a thermostat. I would argue that Dennett has overlooked the notion of intrinsic finality, and that an entity lacking this kind of finality cannot be said to embody mental states, let alone agency. It has been argued in the previous chapter that there are profound differences between a living and a non-living system: only a living system has internal relations, dedicated functionality and a nested hierarchy of parts, which give it an intrinsic end and make it a true individual - something we can call a body, instead of an assemblage.

I contend that the attribution of a mind to a system that lacks intrinsic finality makes no sense. If we accept Dennett's notion of the intentional stance, then mental states can be appropriately regarded as manifestations of (genuine or pseudo-)agency, insofar as they exhibit the property of aboutness or intentionality (Dennett, 1997, pp. 48-49): they are directed at something. Now, agents are not free-floating entities, but are located in, and individuated with reference to, bodies. The fact that we can tie agency to a body is what enables us to ascribe different actions to the same agent and to distinguish the pursuits of one agent from those of another. In chapter 1, it was argued that non-living systems are not bodies, but aggregates of parts which lack intrinsic unity. It is meaningless to describe the behaviour of such systems as the pursuits of an individual agent - although one might still imagine that a basic component of a non-living system, such as a molecule, could possess enough internal unity to manifest agency. (Such a molecule would at least possess internal relations, as described in chapter 1. Regarding the possibility of a living molecule, we have already concluded that a virus, which is little more than a DNA or RNA molecule wrapped in a protein coat, qualifies as being alive.)

There is, however, a deeper reason for scepticism regarding the notion of non-living agents. Before we can describe an entity as an agent with intentions of its own, it is always proper to ask: what are the entity's ends or goals? In the absence of identifiable ends, one might as well suppose that a cup of coffee is an agent. (Of course, the process by which we identify an agent's ends or goals may not be an infallible one - spies, for instance, are very good at concealing their ends from investigators.) And if the entity had a maker or master, the entity's ends would have to be (at least potentially) separable from those of its maker or master, before it could be called an agent. Without ends of its own, the entity would be nothing more than a tool. To qualify as an agent, an entity has to have some capacity for "self"-ish behaviour - i.e. behaviour that serves its own internal ends.

The last point is crucial: even if we could interrogate an exotic non-living agent about its goals, how would we know that its answers were indeed its own? Consider the following thought experiment. Suppose that you stumbled across a talking coffee cup, and (once you had recovered from the shock) asked it about its goals. Suppose that the coffee cup's stated goal turned out to be a very altruistic one - peace in the Middle East. The question I wish to pose is: how could you know that you were talking to it and not some agent controlling it - via a microphone cleverly embedded in the cup, for instance? Unless the cup could be shown to possess at least some intrinsic or "selfish" ends, and could benefit from satisfying these ends, there would be no reason to regard it as a bona fide agent. And in order to identify those ends, one would have to look for formal features like internal relations, a nested hierarchy of organisation and dedicated functionality, which enable us to identify an individual as an organism with a telos. Additionally, one would have to identify the organism's basic needs or essential conditions for its flourishing. An agent has to be the sort of thing that can be said to benefit from what it does - in other words, possess a telos - even if it also has unselfish ends (like peace in the Middle East) that have nothing to do with its telos.

Finally, for a system to cohere as a true individual with an internal rather than a merely superficial unity, it must possess a structure that is regulated and held together from within - by a master program of the sort described in chapter 1, which regulates the internal structure of an organism and the internal interactions between its components.

Man-made robots (such as AIBO, pictured above) and supercomputers are therefore doomed to remain mindless until and unless they acquire the following:

  • built-in dedicated functionality with a nested hierarchy of parts;
  • intrinsic or selfish ends; and
  • a self-directed internal program that can assemble itself as a piece of hardware (using only external raw materials, without the need for any outside information), subsequently maintain its functionality (again, without any further input of information), and run any software programs that it learns about.

The distinction between living and non-living systems is therefore presupposed by the distinction Dennett makes between intentional agents and pseudo-agents. Within the "family tree" of intentional systems, the most fundamental division is not between "agent" and "pseudo-agent", but between "alive" and "not alive". This is an important point to grasp, as it may seem that some non-living systems (e.g. chess-playing computers) are "cleverer" than many living systems (e.g. trees) and hence more like genuine agents. The point, however, is that trees are at least bona fide individuals with their own "selfish" ends like nutrition, whereas present-day human-built computers are assemblages without intrinsic ends, which can never exhibit agency, however well they may be programmed to mimic it.

If I am right, then we should restrict the search for mental states to organisms. Being alive is at least a necessary condition for having a mind. We can thus formulate a negative conclusion about Dennett's intentional stance, as well as a biological criterion for intelligence:

I.3 Our ability to describe an entity in terms of Dennett's intentional stance is not a sufficient condition for our being able to ascribe cognitive mental states to that entity.

B.1 An entity must be alive in order to qualify as having cognitive mental states.

This conclusion, unlike Conclusion C.1, is couched in absolute terms, rather than in terms of the limits of our knowledge. The point is that we can, most of the time, be certain that something is or is not alive, whereas the identification of all of an entity's computations is far less straightforward. Things that look simple may turn out to be complex.

However, stipulating "being alive" as a necessary condition for having mental states is methodologically vague. The following guidelines (based on chapter 1) serve to identify living things:

B.2 A necessary condition for our being able to ascribe cognitive mental states to an entity is that we can identify the following features:

(a) built-in biological needs, essential to its flourishing;

(b) a master program that regulates the internal structure of an organism and the internal interactions between its components;

(c) internal relations between the parts (i.e. new physical properties which appear when they are assembled together);

(d) a nested hierarchy of organisation of the parts;

(e) dedicated functionality, where the parts' repertoire of functionality is dedicated to supporting that of the unit they comprise;

(f) stability - the parts are able to work together for some time to maintain the entity in existence as a whole.

These conditions enable us to impute both a formal cause and a final cause or telos to the entity, and identify its "selfish" or intrinsic ends.

The following corollary of conclusion B.2 highlights the essential condition for the attribution of mental states, which was absent in our case of the talking coffee cup:

B.3 The presence in an individual of biologically "selfish" behaviour, which is directed at satisfying its own built-in biological needs, is an essential condition for the meaningful ascription of mental states to it.

If being alive is a necessary condition for having a mind (Conclusion B.1), then the following argument is undercut at once: that because a non-living intentional system (such as a thermostat) is a mere "pseudo-agent", an organism with similar abilities need be nothing more. The mere fact that an organism's actions are properly explained with reference to its intrinsic ends, or telos (which a thermostat lacks), is reason enough to treat the actions of the organism (but not the thermostat) as at least potential candidates for mental acts.

If my line of reasoning is correct, then any information-modulated, goal-seeking behaviour of an organism which is directed at the satisfaction of its biological needs is at least a prima facie candidate for being a manifestation of mental states. However, there may turn out to be valid philosophical reasons for concluding that only a subset of this behaviour warrants a mentalistic description. (These reasons will be discussed later in this chapter.)

Another corollary of conclusion B.2 is that mental states cannot be meaningfully imputed to a lineage of organisms, but only to individual organisms. Conclusion B.2 stipulates that we must be able to identify internal relations, a master program regulating the interactions between the parts, a nested hierarchy of organisation, and dedicated functionality, before ascribing cognitive mental states to an entity. An evolutionary lineage, unlike an individual organism, lacks all of these features.

B.4 An entity must be an individual biological organism in order to qualify as having cognitive mental states. An evolutionary lineage of organisms cannot be meaningfully described as having cognitive mental states.

We can now address Wolfram's sceptical question of whether the wind could be said to embody an alien intelligence. Because the wind does not possess a nested hierarchy of organisation and lacks dedicated functionality, it cannot meaningfully be said to have intrinsic ends and qualify as a living individual - i.e. a body. Without a body, it cannot be said to possess mental states (Conclusion B.1).

Different kinds of intentional stance? Narrowing the search for mental states in organisms

I have argued that Dennett's intentional stance is a fruitful starting point in our quest for bearers of mental states. However, not all intentional systems have mental states. It has already been argued that non-living systems cannot meaningfully be credited with mental states, and there may be some organisms which also lack these states. It was suggested above that we should use mental states to explain the behaviour of an organism if and only if doing so allows us to describe, model and predict that behaviour more comprehensively, and with at least as great a degree of empirical accuracy as other modes of explanation. If we can explain the behaviour of an intentional system just as well without recourse to talk of mental states such as "beliefs" and "desires", then the ascription of mental states is scientifically unhelpful.

It is my contention that our intentional discourse comes in different "flavours", some richer (i.e. more mentalistic) than others, and that Dennett's intentional stance can be divorced from the use of terms such as "beliefs" and "desires". It is important, when describing the behaviour of an organism, to choose the right "flavour" of discourse - that is, language that is just rich enough to do justice to the behaviour, and allow scientists to explain it as fully as possible.

Two intentional stances?

Dennett's use of terms such as "information" (1997, p. 34) and "goals or needs" (1997, pp. 34, 46) to describe the workings of thermostats (1997, p. 35) shows that intentional systems do not always have to be described using the mentalistic terminology of "beliefs", "desires" and "intentions" in order for their behaviour to be successfully predicted. An alternative "language game" is available. There are thus at least two kinds of intentional stance that we can adopt: we can describe an entity as having information, or ascribe beliefs to it; and we can describe it as having goals, or ascribe desires and intentions to it.
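
By way of illustration, here is a minimal sketch (my own, in Python; the names Thermostat, setpoint and reading are merely illustrative) of the first, mind-neutral language game applied to a thermostat. Its "information" is a temperature reading and its "goal" is a setpoint, and its behaviour is captured exhaustively without a single mentalistic term:

    # Illustrative sketch of the "information-goal" description of a thermostat.
    class Thermostat:
        def __init__(self, setpoint, tolerance=0.5):
            self.setpoint = setpoint    # the "goal": the target temperature
            self.tolerance = tolerance  # how far the reading may drift

        def act(self, reading):
            """Given the thermostat's only 'information' (a reading),
            return the appropriate switching action."""
            if reading < self.setpoint - self.tolerance:
                return "heater on"
            if reading > self.setpoint + self.tolerance:
                return "heater off"
            return "no change"

    stat = Thermostat(setpoint=20.0)
    for temperature in (18.0, 19.8, 21.2):
        print(temperature, "->", stat.act(temperature))

Nothing would be added to our ability to predict this device's behaviour by saying that it "believes" the room is cold or "desires" to warm it; the question to be pursued below is whether, for some organisms, the richer vocabulary earns its keep.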

What is the difference between these two intentional stances? According to Dennett, not much: talk of beliefs and desires can be replaced by "less colorful but equally intentional" talk of semantic information and goal-registration (1995). Pace Dennett, I would maintain that there are some important differences between the "information-goal" description of the intentional stance and the "belief-desire" description.

A goal-centred versus an agent-centred intentional stance

One difference between the two stances is that the former focuses on the goals of the action being described (i.e. what is being sought), while the latter focuses on the agent - in particular, what the agent is trying to do (its intentions). The distinction is important: often, an agent's goal (e.g. food) can be viewed as extrinsic to it, and specified without referring to its mental states. All the agent needs to attain such a goal is relevant information. A goal-centred intentional stance (which explains an entity's behaviour in terms of its goals and the information it has about them) adequately describes this kind of behaviour. Other goals (e.g. improving one's character, becoming more popular, or avoiding past mistakes) cannot be specified without reference to the agent's (or other agents') intentions. An agent-centred intentional stance (which regards the entity as an agent who decides what it will do, on the basis of its beliefs and desires) is required to characterise this kind of behaviour.

According to this classification of intentional stances, the task at hand in our search for entities having minds can be summarised as follows. Having identified "mind-like" behaviour, using the goal-centred intentional stance, our next question should be: what kinds of mind-like behaviour, by which entities, are most appropriately described using an agent-centred intentional stance? The search for mind, on this account, is a search for intentional acts, which can only be explained by reference to the agent's beliefs and desires.


Is this lion capable of believing that the ox it is about to eat is near?
Photo courtesy of Oxford University Development Programme, Wildlife Conservation Research Unit.

Some philosophers have attacked the very idea of attributing beliefs to animals as absurd. Sorabji (1993, pp. 12-14, 35-38) convincingly demonstrates that Aristotle himself (De Anima 3.3, 428a18-24) steadfastly refused to attribute belief (doxa) to animals, despite acknowledging their possession of sensory perception (aisthesis). However, Sorabji also points out that Aristotle used both terms in ways that differ importantly from the English terms "sensory perception" and "belief".

First, although Aristotle denied beliefs to animals, he allowed that they could have perceptions with a propositional content - e.g. the lion in Aristotle's Nicomachean Ethics (3.10) perceives that the ox it is about to eat is near - whereas in modern usage perceptions are typically regarded as simply having an object.

Second, for Aristotle, there can be no meaningful ascription of belief without the possibility of conviction and self-persuasion (De Anima 3.3, 428a18-24), whereas the same cannot be said for our English word belief: "The nervous examinee who believes that 1066 is the date of the Battle of Hastings may, through nervousness, not be convinced, and need not have been persuaded" (Sorabji, 1993, p. 37). The ascription of beliefs to non-human animals reflects contemporary linguistic norms that are far removed from those of Aristotle, who, were he alive today, might use a different term (such as personal convictions) to denote the states with a propositional content that only humans are capable of.

Some contemporary philosophers, on the other hand, have a more deep-seated objection to animal belief than Aristotle's: the ascription of any mental state with a propositional content (such as a belief) to a non-human animal is absurd, either because (i) the object of a belief is always that some sentence S is true, and lacking language, an animal cannot believe that any sentence is true (Frey, 1980), or because (ii) nothing in an animal's behaviour allows us to specify the content of its belief and determine the boundaries of its concepts (Stich, 1979, refers to this as the "dilemma of animal belief", p. 26), or because (iii) none of our human concepts can adequately express the content of an animal's belief, given its lack of appropriate linguistic behaviour that would confirm that our ascription was correct (Davidson, 1975).

An example from Dennett (1997, p. 56) illustrates this point. What does a dog think, just as it is about to eat? Does it think the thought that "My dish is full of beef", or the thought that "My plate is full of calves' liver", or even the thought that "The red, tasty stuff in the thing that I usually eat from is not the usual dry stuff they feed me"?

The common assumption underlying the above objections is that the content of a thought must be expressible by a that-clause, in some human language. Carruthers (2004) rejects this assumption on the grounds that it amounts to a co-thinking constraint on genuine thoughthood: "In order for another creature (whether human or animal) to be thinking a thought, it would have to be the case that someone else should also be capable of entertaining that very thought, in such a way that it can be formulated into a that-clause." This is a dubious proposition at best: as Carruthers points out, some of Einstein's more obscure thoughts may have been thinkable only by him.

A more reasonable position, urges Carruthers, is that an individual's thoughts can be characterised just as well from the outside (by an indirect description) as from the inside (by a that-clause which allows me to think what the individual is thinking):

In the case of an ape dipping for termites, for example, most of us would ... say something like this: I don't know how much the ape knows about termites, nor how exactly she conceptualizes them, but I do know that she believes of the termites in that mound that they are there, and I know she wants to eat them (Carruthers, 2004).

Dennett makes a similar point:

The idea that a dog's "thought" might be inexpressible (in human language) for the simple reason that expression in a human language cuts too fine is often ignored, along with its corollary: the idea that we may nevertheless exhaustively describe what we can't express, leaving no mysterious residue at all (1997, p. 56).

The point I wish to make here is not that animals are capable of having beliefs, but that the arguments that they are in principle incapable of doing so are open to reasonable doubt, and that the attempt to identify forms of animal behaviour that warrant description in terms of an agent-centred intentional stance is not a fool's errand.

Conclusion B.3 above highlighted behaviour that satisfies an individual's biological needs as an essential condition of our being able to attribute mental states to it. The agent-centred version of Dennett's intentional stance suggests a way of re-phrasing this conclusion which allows us to narrow our search for individuals with mental states:

I.4 Before we can attribute beliefs and desires to an organism, it must be capable of exhibiting behaviour which manifests its desires for its own built-in biological ends, as well as its beliefs about those ends.

A third-person versus a first-person intentional stance

The difference between Dennett's two language games for explaining what thermostats do from an intentional standpoint (1997, pp. 34, 35, 46) can be characterised in another way. Instead of saying that the former focuses on the goals of the action being described while the latter focuses on the agent's beliefs and desires, we could say that the former describes an entity's behaviour objectively, in the third person, while the latter uses subjective, first-person terminology. Typically, an entity's "information" and "goals" (or "needs") are completely described from an objective, third-person perspective, while its "beliefs" and "desires" are described using subjective, first-person terminology. To say "X appears desirable to A" is different from saying "X is A's goal" or "X is what A needs": the first statement implicitly describes X from A's perspective, while the latter two employ an external standpoint.

It should be noted, however, that the goal-centred vs. agent-centred division may not coincide precisely with the third-person vs. first-person division.

According to one commonly accepted view (e.g. Searle, 1999), the first-person perspective is definitive of "being a mind" or "having mental states". Any entity or event which can be exhaustively described using third-person terminology, without invoking a first-person perspective, is considered to be unworthy of being called a "mind" or "mental state". On this view, to have a mind is to be, in some way, a subject.

To equate "mental states" with "first-person states", is not the same as equating "mental states" with "conscious" (or "aware") states. There are two good reasons for resisting a simplistic equation of "mental" with "conscious" or "aware". First, philosophers distinguish several meanings of the word "conscious" (discussed below), and there are rival accounts of what constitutes a first-person conscious state. Second, it is generally acknowledged that many of our perceptions, desires, beliefs and intentional acts are not conscious but subconscious occurrences. Nevertheless, we use a first-person perspective when describing these events: my subconscious beliefs are still mine. Conscious mental states may prove to be the tip of the mental iceberg. For this reason, I would criticise Dennett for subtitling his book Kinds of Minds with the words: Towards an Understanding of Consciousness. This, I think, prejudices the issue.

We can thus distinguish between what I will call a third-person intentional stance (which employs objective terms such as "information" and "goals" to describe an entity's behaviour without any commitments to its having a mind) and a first-person intentional stance (which commits itself to a mentalistic stance towards an entity, by invoking subjective terminology to explain its behaviour).

According to this classification of intentional stances, the task at hand in our search for entities having minds can be summarised as follows. Having identified "mind-like" behaviour, using the neutral, objective third-person intentional stance, our next question should be: what kinds of mind-like behaviour, by which entities, are most appropriately described using a first-person intentional stance? The search for mind, on this account, is a search for subjectivity.

Are other intentional stances possible?


Aristotle.

The two ways of classifying intentional stances (goal- versus agent-centred; third- versus first-person) employ different criteria to define "mental states", and also make conflicting claims: for instance, an animal that had subjective perceptions but was incapable of entertaining beliefs about them would be a candidate for having a mind according to the second classification but not the first. Other classifications of intentional stances may also be possible. Aristotle, for instance, seems to have favoured a three-way classification: plants have a telos, because they possess a nutritive soul; animals have perceptions, pleasure and pain, desires and memory (On Sense and the Sensible, part 1, Section 1) but lack beliefs (De Anima 3.3, 428a19-24; On Memory 450a16); and human beings are capable of rationally deliberating about, and voluntarily acting on, their beliefs. I shall not commit myself to any particular classification before examining animals' mental capacities, as I wish to avoid preconceived notions of what a mental state is, and let the research results set the philosophical agenda. I shall invoke Dennett's intentional stance to identify behaviour that may indicate mental states in organisms, and then attempt to elucidate relevant distinctions that may enable us to draw a line between mental and non-mental states, or between organisms with minds and those without.

Narrowing the search for mental states: the quest for the right kind of intentional stance

The principle guiding our quest for mental states, which is that we should use mental states to explain the behaviour of an organism if and only if doing so is scientifically more productive than other modes of explanation, can now be recast more precisely. As a default position, we could attempt to describe an organism's behaviour from a mind-neutral intentional stance (e.g. an objective third-person stance, or a goal-centred stance), switching to a mentalistic account (e.g. a subjective first-person stance, or an agent-centred stance) if and only if we conclude that it gives scientists a richer understanding of, and enables them to make better predictions about, the organism's behaviour.

Case study: viral replication

Image of influenza virus. Copyright Linda M. Stannard, Department of Medical Microbiology, University of Cape Town, 1995.

A mind-neutral intentional stance can be applied to the behaviour of viruses as they invade cells:

Viruses ... have evolved defenses to help them evade the immune system. Viruses that cause infection in humans hold a "key" that allows them to unlock normal molecules (called viral receptors) on a human cell surface and slip inside.

Once in, viruses commandeer the cell's nucleic acid and protein-making machinery, so that more copies of the virus can be made (Emerson, 1998).

The ability of viruses to evade cell defences can be described using Dennett's intentional stance: they possess information (a "key") that enables them to enter and control their host, thereby achieving their goal (replication). But it has been argued above (see Conclusion I.3) that our ability to describe an entity using the intentional stance is, by itself, not a sufficient reason for imputing cognitive mental states to it. A mind-neutral goal-centred intentional stance suffices here to explain the behaviour of a virus in terms of its information and goals. An agent-centred mentalistic stance should not be adopted unless it enables us to make better predictions about viruses' behaviour.

The foregoing example allows us to strengthen Conclusion I.3 and formulate a further negative conclusion regarding Dennett's intentional stance:

I.5 Our ability to identify behaviour in an organism that can be described using the intentional stance is not a sufficient warrant for ascribing mental states to it.

The possibility of applying a mind-neutral intentional stance to the characteristic behaviour of organisms also has biological implications:

B.5 Being an organism is not a sufficient condition for having mental states.

Linguistic constraints I shall observe when talking about organisms' mental states

Finally, I would like to suggest an additional methodological constraint on the search for mental states: respect for the conventions of language. There are some terms in the English language that are peculiarly reserved for mental states. The choice of these terms may change over time: before Descartes, the suggestion that something mindless could sense an object, store a memory of it or have a goal would have seemed odd, but today we have no problem in talking about the sensor in a thermostat, the memory of a computer (or of a piece of deformed metal), or the goal of a computer game. However, for terms that currently retain a mentalistic connotation, special care will be taken to make sure that they are not employed in a way that robs them of their mental content. Some minimal mental content must be specified.

In the discussion of organisms' alleged mental capacities below, I shall treat the verbs "feel", "believe", "desire", "try" and "intend", and the nouns "feeling", "belief", "desire", "attempt" and "intention" as mentalistic terms. In ordinary parlance, these intentional terms are currently used to characterise either states of a subject ("feel", "feeling"), proposed or attempted actions by an agent ("intend", "intention", "try", "attempt"), or explanations for an agent's actions ("believe", "belief", "desire").

The verb "sense" and the noun "sensor", on the other hand, are often applied to inanimate artifacts (e.g. motion detectors), although no-one speaks of these artifacts as having "sensations". The words "perceive" and "perception", on the other hand, have a more mentalistic flavour. Modern usage draws a distinction between "sensation" and "perception" in an organism: the former is usually said to arise from immediate bodily stimulation, while the latter usually refers to the organism's awareness of a sensation (Merriam-Webster On-line, 2004, definition (1a) of "sensation"). Philosophers, however, do not always adhere to this pattern of usage. It would be prejudicial to endorse these distinctions at this stage, but we should allow for the possibility that there may be organisms that can be appropriately described as having sensations while lacking perceptions.

While contemporary usage allows us to speak of artifacts as having a "memory", from which they retrieve stored information, the verb "remember" retains a distinctly mentalistic connotation: it refers not only to stored information being retrieved but also to its coming to one's mind, as a subjective state. In popular parlance, machines are never said to "remember" anything. Accordingly, in our investigation, the verb "remember" will be reserved for organisms that are said to possess mental states. The verbs "recollect" and "recall" are even more strongly mentalistic, as they signify the intentional act of bringing something back to mind.

I shall also treat "learn" and "learning" as mentalistic terms, wherever possible. This mentalistic usage is challenged by Wolfram (2002, p. 823), but I believe there is currently no verb in common use that could serve as a non-mentalistic substitute for "learn" in English. The word "learn" usually means "to gain knowledge or understanding of or skill in by study, instruction, or experience" (Merriam-Webster on-line dictionary, 2003). However, we should keep an open mind. According to the above definition, gaining a "skill" by "experience" counts as learning. In our examination of organisms' abilities, we may find that some living things, despite lacking minds, are capable of feats that can be described as the acquisition of skills through experience. In that case, we would have to call this "learning", simply because it would be a violation of our existing linguistic conventions not to do so.

I shall, for the purposes of this enquiry, treat the words "know" and "cognition" as mentalistic, unless indicated otherwise.

Mental states are sometimes divided into two categories: cognitive and affective. In this chapter, when I use the term "cognitive mental states", I mean beliefs in particular, as well as any higher-order judgements that are founded upon those beliefs.

Although certain verbs related to action, such as "intend", "try", "attempt" and "pursue", have mentalistic overtones in contemporary usage, other verbs - e.g. "seek" and "search" - have a neutral flavour, and will be used to describe goal-oriented behaviour by organisms, without any mentalistic connotations.

An important feature of our language is that we describe things metaphorically as well as literally. This is precisely what Dennett's intentional stance allows us to do. Dennett approvingly cites the example of a logger who told him: "Pines like to keep their feet wet" (1997, p. 45). Adopting such a stance, he argues, is not only natural, but necessary to scientific progress:

When biologists discover that a plant has some rudimentary discriminatory organ, they immediately ask themselves what the organ is for - what devious project does the plant have that requires it to obtain information from its environment on this topic? Very often the answer is an important scientific discovery (1997, p. 45).

While I wholeheartedly approve of the use of metaphors such as "devious project" to describe things in the natural world, I would argue that if adopting a strong, agent-centred intentional stance is not only useful but essential to our understanding of how an entity functions, then it must be more than a mere metaphor for that entity: it must be literally true. At this stage, I do not wish to pre-judge the issue of whether plants really are the devious plotters that biologists like to pretend they are. The point I wish to make is that if they are not, then we need to find the most appropriate language game to describe them. I would invite the reader to consider the following three statements:

1. Water is conducive to the growth of pines.
2. Pines thrive on moisture.
3. Pines like to keep their feet wet.

The first sentence is factually accurate, but it says too little. It describes pine trees from a purely chemical standpoint. Water is also conducive to the growth of crystals, but a pine tree, unlike a crystal, is an organism, with a telos or good of its own. This is the truth captured by the second statement. Because a pine tree is an organism, it thrives when its need for water is met. However, unless there turn out to be good scientific reasons for ascribing mental states to pines, the third sentence belongs to the realm of poetry. While botanists may like to picture pines as having likes and dislikes, it does not follow that they have to use these metaphors to describe or investigate what pine trees thrive on, and what harms them. For instance, they may be able to carry out equally productive research by adopting a goal-centred intentional stance and attempting to identify pine trees' built-in goals.