I have argued that Dennett's intentional stance is a fruitful starting point in our quest for bearers of mental states. However, not all intentional systems have mental states. It has already been argued that non-living systems cannot meaningfully be credited with mental states, and there may be some organisms which also lack these states. It was suggested above that we should use mental states to explain the behaviour of an organism if and only if doing so allows us to describe, model and predict it more comprehensively, and with at least as great a degree of empirical accuracy as other modes of explanation. If we can explain the behaviour of an intentional system just as well without recourse to talk of mental states such as "beliefs" and "desires", then the ascription of mental states is scientifically unhelpful.
It is my contention that our intentional discourse comes in different "flavours", some richer (i.e. more mentalistic) than others, and that Dennett's intentional stance can be divorced from the use of terms such as "beliefs" and "desires". It is important, when describing the behaviour of an organism, to choose the right "flavour" of discourse - that is, language that is just rich enough to do justice to the behaviour and to allow scientists to explain it as fully as possible.
Two intentional stances?
Dennett's use of terms such as "information" (1997, p. 34) and "goals or needs" (1997, pp. 34, 46) to describe the workings of thermostats (1997, p. 35) shows that intentional systems do not always have to be described using the mentalistic terminology of "beliefs", "desires" and "intentions" in order for their behaviour to be successfully predicted. An alternative "language game" is available. There are thus at least two kinds of intentional stances that we can adopt: we can describe an entity as having information, or ascribe beliefs to it; and we can describe it as having goals, or ascribe desires and intentions to it.
What is the difference between these two intentional stances? According to Dennett, not much: talk of beliefs and desires can be replaced by "less colorful but equally intentional" talk of semantic information and goal-registration (1995). Pace Dennett, I would maintain that there are some important differences between the "information-goal" description of the intentional stance and the "belief-desire" description.
A goal-centred versus an agent-centred intentional stance
One difference between the two stances is that the former focuses on the goals of the action being described (i.e. what is being sought), while the latter focuses on the agent - in particular, what the agent is trying to do (its intentions). The distinction is important: often, an agent's goal (e.g. food) can be viewed as extrinsic to it, and specified without referring to its mental states. All the agent needs to attain such a goal is relevant information. A goal-centred intentional stance (which explains an entity's behaviour in terms of its goals and the information it has about them) adequately describes this kind of behaviour. Other goals (e.g. improving one's character, becoming more popular, or avoiding past mistakes) cannot be specified without reference to the agent's (or other agents') intentions. An agent-centred intentional stance (which regards the entity as an agent who decides what it will do, on the basis of its beliefs and desires) is required to characterise this kind of behaviour.
According to this classification of intentional stances, the task at hand in our search for entities having minds can be summarised as follows. Having identified "mind-like" behaviour, using the goal-centred intentional stance, our next question should be: what kinds of mind-like behaviour, by which entities, are most appropriately described using an agent-centred intentional stance? The search for mind, on this account, is a search for intentional acts that can be explained only by reference to the agent's beliefs and desires.
A third-person versus a first-person intentional stance
However, there is another way of describing the difference between the two intentional stances: the former describes an entity's behaviour objectively, in the third person, while the latter uses subjective, first-person terminology. An entity's "information" and "goals" (or "needs") can be completely described from an objective, third-person perspective, whereas in ordinary parlance, the ascription of "beliefs" and "desires" to an entity entails a commitment to a subjective, first-person terminology when describing its behaviour. The statement "X is A's goal" or "X is what A needs" is a different kind of statement from "A wants X": the latter statement implicitly describes X from A's perspective (i.e. X appears desirable to A), while the former does not.
According to one commonly accepted view, the first-person perspective is definitive of "being a mind" or "having mental states". Any entity or event which can be exhaustively described using third-person terminology, without invoking a first-person perspective, is considered to be unworthy of being called a "mind" or "mental state". On this view, to have a mind is to be, in some way, a subject.
To equate "mental states" with "first-person states" is not the same as equating "mental states" with "conscious" (or "aware") states. There are two good reasons for resisting a simplistic equation of "mental" with "conscious" or "aware". First, there is no general agreement on the meaning of "consciousness". Second, many of our perceptions, desires, beliefs and intentional acts are not conscious but subconscious occurrences. (Driving absent-mindedly along a familiar road is a case in point.) Nevertheless, we use a first-person perspective when describing these events: my subconscious beliefs are still mine. Conscious mental states may prove to be the tip of the mental iceberg. For this reason, I would criticise Dennett for subtitling his book Kinds of Minds with the words: Towards an Understanding of Consciousness. This, I think, prejudices the issue.
We can thus distinguish between what I will call a third-person intentional stance (which employs objective terms such as "information" and "goals" to describe an entity's behaviour without any commitments to its having a mind) and a first-person intentional stance (which commits itself to a mentalistic stance towards an entity, by invoking subjective terminology to explain its behaviour).
According to this classification of intentional stances, the task at hand in our search for entities having minds can be summarised as follows. Having identified "mind-like" behaviour, using the neutral, objective third-person intentional stance, our next question should be: what kinds of mind-like behaviour, by which entities, are most appropriately described using a first-person intentional stance? The search for mind, on this account, is a search for subjectivity.
Are other intentional stances possible?
The two ways of classifying intentional stances (goal- versus agent-centred; third- versus first-person) employ different criteria to define "mental states", and also make conflicting claims: for instance, an animal that had subjective perceptions but was incapable of entertaining beliefs about them would be a candidate for having a mind according to the second classification but not the first. Other classifications of intentional stances may also be possible: Aristotle, for instance, seems to have favoured a three-way classification: plants have a telos, because they possess a nutritive soul; animals have perceptions, pleasure and pain, desires and memory (On Sense and the Sensible, part 1, section 1) but lack beliefs (De Anima 3.3, 428a19-24; On Memory 450a16); and human beings are capable of rationally deliberating about, and voluntarily acting on, their beliefs. I shall not commit myself to any particular classification before examining animals' mental capacities, as I wish to avoid preconceived notions of what a mental state is, and let the research results set the philosophical agenda. I shall invoke Dennett's intentional stance to identify behaviour that may indicate mental states in organisms, and then attempt to elucidate relevant distinctions that may enable us to draw a line between mental and non-mental states, or between organisms with minds and those without.
Narrowing the field: the quest for the right kind of intentional stance
The principle guiding our quest for mental states, which is that we should use mental states to explain the behaviour of an organism if and only if doing so is scientifically more productive than other modes of explanation, can now be recast more precisely. As a default position, we could attempt to describe an organism's behaviour from a neutral intentional stance (e.g. an objective third-person stance, or a goal-centred stance), switching to a mentalistic account (e.g. a subjective first-person stance, or an agent-centred stance) if and only if we conclude that it gives scientists a richer understanding of, and enables them to make better predictions about, the organism's behaviour.