Different kinds of intentional stance? Narrowing the search for mental states in organisms



Daniel Dennett. Photo courtesy of University of California.

I have argued that Dennett's intentional stance is a fruitful starting point in our quest for bearers of mental states. However, not all intentional systems have mental states. It has already been argued that non-living systems cannot meaningfully be credited with mental states, and there may be some organisms which also lack these states. It was suggested above that we should use mental states to explain the behaviour of an organism if and only if doing so allows us to describe, model and predict it more comprehensively, and with at least as great a degree of empirical accuracy as other modes of explanation. If we can explain the behaviour of an intentional system just as well without recourse to talk of mental states such as "beliefs" and "desires", then the ascription of mental states is scientifically unhelpful.

It is my contention that our intentional discourse comes in different "flavours", some richer (i.e. more mentalistic) than others, and that Dennett's intentional stance can be divorced from the use of terms such as "beliefs" and "desires". It is important, when describing the behaviour of an organism, to choose the right "flavour" of discourse - that is, language that is just rich enough to do justice to the behaviour and to allow scientists to explain it as fully as possible.

Two intentional stances?

Dennett's use of terms such as "information" (1997, p. 34) and "goals or needs" (1997, pp. 34, 46) to describe the workings of thermostats (1997, p. 35) shows that intentional systems do not always have to be described using the mentalistic terminology of "beliefs", "desires" and "intentions" in order to predict their behaviour successfully. An alternative "language game" is available. There are thus at least two kinds of intentional stance that we can adopt: we can describe an entity as having information, or ascribe beliefs to it; and we can describe it as having goals, or ascribe desires and intentions to it.

What is the difference between these two intentional stances? According to Dennett, not much: talk of beliefs and desires can be replaced by "less colorful but equally intentional" talk of semantic information and goal-registration (1995). Pace Dennett, I would maintain that there are some important differences between the "information-goal" description of the intentional stance and the "belief-desire" description.

A goal-centred versus an agent-centred intentional stance

One difference between the two stances is that the former focuses on the goals of the action being described (i.e. what is being sought), while the latter focuses on the agent - in particular, what the agent is trying to do (its intentions). The distinction is important: often, an agent's goal (e.g. food) can be viewed as extrinsic to it, and specified without referring to its mental states. All the agent needs to attain such a goal is relevant information. A goal-centred intentional stance (which explains an entity's behaviour in terms of its goals and the information it has about them) adequately describes this kind of behaviour. Other goals (e.g. improving one's character, becoming more popular, or avoiding past mistakes) cannot be specified without reference to the agent's (or other agents') intentions. An agent-centred intentional stance (which regards the entity as an agent who decides what it will do, on the basis of its beliefs and desires) is required to characterise this kind of behaviour.

According to this classification of intentional stances, the task at hand in our search for entities having minds can be summarised as follows. Having identified "mind-like" behaviour, using the goal-centred intentional stance, our next question should be: what kinds of mind-like behaviour, by which entities, are most appropriately described using an agent-centred intentional stance? The search for mind, on this account, is a search for intentional acts, which can only be explained by reference to the agent's beliefs and desires.


Aristotle.

Some philosophers have attacked the very idea of attributing beliefs to animals, dismissing it as absurd. Sorabji (1993, pp. 12-14, 35-38) convincingly demonstrates that Aristotle himself (De Anima 3.3, 428a18-24) steadfastly refused to attribute belief (doxa) to animals, despite acknowledging their possession of sensory perception (aisthesis). However, Sorabji also points out that Aristotle's use of both terms differed in important ways from the modern usage of the English terms "sensory perception" and "belief".

First, although Aristotle denied beliefs to animals, he allowed that they could have perceptions with a propositional content - e.g. the lion in Aristotle's Nicomachean Ethics (3.10) perceives that the ox it is about to eat is near - whereas in modern usage perceptions are typically regarded as simply having an object.

Second, for Aristotle, there can be no meaningful ascription of belief without the possibility of conviction and self-persuasion (De Anima 3.3, 428a18-24), whereas the same cannot be said for our English word belief: "The nervous examinee who believes that 1066 is the date of the Battle of Hastings may, through nervousness, not be convinced, and need not have been persuaded" (Sorabji, 1993, p. 37). The ascription of beliefs to non-human animals reflects contemporary linguistic norms that are removed from those of Aristotle, who, were he alive today, might use a different term (such as convictions) to denote the states with a propositional content that only humans are capable of.

Some contemporary philosophers, on the other hand, have a more deep-seated objection to animal belief than Aristotle's: the ascription of any mental state with a propositional content (such as a belief) to a non-human animal is absurd, either because (i) the object of a belief is always that some sentence S is true, and lacking language, an animal cannot believe that any sentence is true (Frey, 1980), or because (ii) nothing in an animal's behaviour allows us to specify the content of its belief and determine the boundaries of its concepts (Stich, 1979, p. 26, refers to this as the "dilemma of animal belief"), or because (iii) none of our human concepts can adequately express the content of an animal's belief, given its lack of appropriate linguistic behaviour that would confirm that our ascription was correct (Davidson, 1975).

The common assumption underlying these objections is that the content of a thought must be expressible by a that-clause, in some human language. Carruthers (2004) rejects this assumption on the grounds that it amounts to a co-thinking constraint on genuine thoughthood: "In order for another creature (whether human or animal) to be thinking a thought, it would have to be the case that someone else should also be capable of entertaining that very thought, in such a way that it can be formulated into a that-clause." This is a dubious proposition at best: as Carruthers points out, some of Einstein's more obscure thoughts may have been thinkable only by him.

A more reasonable position, urges Carruthers, is that an individual's thoughts can be characterised just as well from the outside (by an indirect description) as from the inside (by a that-clause which allows me to think what the individual is thinking):

In the case of an ape dipping for termites, for example, most of us would ... say something like this: I don't know how much the ape knows about termites, nor how exactly he conceptualizes them, but I do know that he believes of the termites in that mound that they are there, and I know he wants to eat them (Carruthers, 2004).

The point I wish to make here is not that animals are capable of having beliefs, but that the arguments that they are in principle incapable of doing so are open to reasonable doubt, and that the attempt to identify forms of animal behaviour that warrant description in terms of an agent-centred intentional stance is not a fool's errand.

A third-person versus a first-person intentional stance

The difference between Dennett's two language games for explaining what thermostats do from an intentional standpoint (1997, pp. 34, 35, 46) can be explained in another way. Instead of saying that the former focuses on the goals of the action being described while the latter focuses on the agent's beliefs and desires, we could say that the former describes an entity's behaviour objectively, in the third person, while the latter uses subjective, first-person terminology. Typically, an entity's "information" and "goals" (or "needs") are completely described from an objective, third-person perspective, while its "beliefs" and "desires" are described using subjective, first-person terminology. To say "X appears desirable to A" is different from saying "X is A's goal" or "X is what A needs": the former statement implicitly describes X from A's perspective, while the latter employs an external standpoint.

It should be noted, however, that the goal-centred vs. agent-centred division may not coincide precisely with the third-person vs. first-person division.

According to one commonly accepted view (e.g. Searle, 1999), the first-person perspective is definitive of "being a mind" or "having mental states". Any entity or event which can be exhaustively described using third-person terminology, without invoking a first-person perspective, is considered to be unworthy of being called a "mind" or "mental state". On this view, to have a mind is to be, in some way, a subject.

To equate "mental states" with "first-person states", is not the same as equating "mental states" with "conscious" (or "aware") states. There are two good reasons for resisting a simplistic equation of "mental" with "conscious" or "aware". First, philosophers distinguish several meanings of the word "conscious" (discussed below), and there are rival accounts of what constitutes a first-person conscious state. Second, it is generally acknowledged that many of our perceptions, desires, beliefs and intentional acts are not conscious but subconscious occurrences. Nevertheless, we use a first-person perspective when describing these events: my subconscious beliefs are still mine. Conscious mental states may prove to be the tip of the mental iceberg. For this reason, I would criticise Dennett for subtitling his book Kinds of Minds with the words: Towards an Understanding of Consciousness. This, I think, prejudices the issue.

We can thus distinguish between what I will call a third-person intentional stance (which employs objective terms such as "information" and "goals" to describe an entity's behaviour without any commitments to its having a mind) and a first-person intentional stance (which commits itself to a mentalistic stance towards an entity, by invoking subjective terminology to explain its behaviour).

According to this classification of intentional stances, the task at hand in our search for entities having minds can be summarised as follows. Having identified "mind-like" behaviour, using the neutral, objective third-person intentional stance, our next question should be: what kinds of mind-like behaviour, by which entities, are most appropriately described using a first-person intentional stance? The search for mind, on this account, is a search for subjectivity.

Are other intentional stances possible?


Aristotle.

The two ways of classifying intentional stances (goal- versus agent-centred; third- versus first-person) employ different criteria to define "mental states", and also make conflicting claims: for instance, an animal that had subjective perceptions but was incapable of entertaining beliefs about them would be a candidate for having a mind according to the second classification but not the first. Other classifications of intentional stances may also be possible: Aristotle, for instance, seems to have favoured a three-way classification: plants have a telos, because they possess a nutritive soul; animals have perceptions, pleasure and pain, desires and memory (On Sense and the Sensible, part 1, Section 1) but lack beliefs (De Anima 3.3, 428a19-24; On Memory 450a16); and human beings are capable of rationally deliberating about, and voluntarily acting on, their beliefs. I shall not commit myself to any particular classification before examining animals' mental capacities, as I wish to avoid preconceived notions of what a mental state is, and let the research results set the philosophical agenda. I shall invoke Dennett's intentional stance to identify behaviour that may indicate mental states in organisms, and then attempt to elucidate relevant distinctions that may enable us to draw a line between mental and non-mental states, or between organisms with minds and those without.

Narrowing the search for mental states: the quest for the right kind of intentional stance

The principle guiding our quest for mental states, which is that we should use mental states to explain the behaviour of an organism if and only if doing so is scientifically more productive than other modes of explanation, can now be recast more precisely. As a default position, we could attempt to describe an organism's behaviour from a mind-neutral intentional stance (e.g. an objective third-person stance, or a goal-centred stance), switching to a mentalistic account (e.g. a subjective first-person stance, or an agent-centred stance) if and only if we conclude that it gives scientists a richer understanding of, and enables them to make better predictions about, the organism's behaviour.

Case study: viral replication

Image of influenza virus. Copyright Linda M. Stannard, Department of Medical Microbiology, University of Cape Town, 1995.

A mind-neutral intentional stance can be applied to the behaviour of viruses when they invade cells:

Viruses ... have evolved defenses to help them evade the immune system. Viruses that cause infection in humans hold a "key" that allows them to unlock normal molecules (called viral receptors) on a human cell surface and slip inside.

Once in, viruses commandeer the cell's nucleic acid and protein-making machinery, so that more copies of the virus can be made (Emerson, 1998).

The ability of viruses to evade cell defences can be described using Dennett's intentional stance: they possess information (a "key") that enables them to enter and control their host, thereby achieving their goal (replication). But it has been argued above (see Conclusion I.3) that our ability to describe an entity using the intentional stance is, by itself, not a sufficient reason for imputing cognitive mental states to it. A mind-neutral goal-centred intentional stance suffices here to explain the behaviour of a virus in terms of its information and goals. An agent-centred mentalistic stance should not be adopted unless it enables us to make better predictions about viruses' behaviour.

The foregoing example allows us to strengthen Conclusion I.3 and formulate a further negative conclusion regarding Dennett's intentional stance:

I.4 Our ability to identify behaviour in an organism that can be described using the intentional stance is not a sufficient warrant for ascribing mental states to it.

The possibility of applying a mind-neutral intentional stance to the characteristic behaviour of organisms also has biological implications:

B.4 Being an organism is not a sufficient condition for having mental states.
