[Photo: Daniel Dennett. Courtesy of the University of California.]


Dennett's intentional stance: Is mind a property of intentional systems?

Dennett (1997, pp. 34 - 49) argues that we can regard all organisms - and, for that matter, many human artifacts - as what he calls intentional systems: entities whose behaviour can be predicted from an intentional stance, in which they are treated as if they were agents that choose to behave in a certain way because of their underlying beliefs about their environment and their desires. As Dennett puts it, intentional systems exhibit the philosophical property of aboutness: beliefs and desires have to be about something. I may believe that the food in front of me is delicious: I have a belief about the food, and a desire relating to it (a desire to eat it). The food is the intentional object of my belief and desire - even if it turns out that the object I had presumed to exist does not (e.g. if the "food" is really plastic that has been molded, painted and sprayed with volatile chemicals to make it look and smell like delicious food).

Dennett suggests that we can usefully regard living things and their components from an intentional stance, because their behaviour is "produced by information-modulated, goal-seeking systems" (p. 34):

It is as if these cells and cell assemblies were tiny, simple-minded agents, specialized servants rationally furthering their particular obsessive causes by acting in the ways their perception of circumstances dictated. The world is teeming with such entities, ranging from the molecular to the continental in size and including not only "natural" objects, such as plants, animals and their parts (and the parts of their parts), but also many human artifacts. Thermostats, for instance, are a familiar example of such simple pseudoagents (1997, pp. 34 - 35).

Elsewhere, Dennett elaborates his reasons for regarding a thermostat as an intentional system:

...it has a rudimentary goal or desire (which is set, dictatorially, by the thermostat's owner, of course), which it acts on appropriately whenever it believes (thanks to a sensor of one sort or another) that its desire is unfulfilled. Of course you don't have to describe a thermostat in these terms. You can describe it in mechanical terms, or even molecular terms. But what is theoretically interesting is that if you want to describe the set of all thermostats ... you have to rise to this intentional level...[W]hat ... thermostats ... all have in common is a systemic property that is captured only at a level that invokes belief-talk and desire-talk (or their less colorful but equally intentional alternatives; semantic information-talk and goal-registration-talk, for instance) (1995).
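
To make Dennett's description concrete, here is a minimal sketch, in Python, of a thermostat modelled as a pseudo-agent. The class and method names (Thermostat, believes_too_cold, act) are my own illustrative inventions, not anything drawn from Dennett's text; the point is simply that the very same device can be redescribed at the intentional level, in terms of a "desire" (the setpoint) and a "belief" (an information state supplied by a sensor):

    class Thermostat:
        """A pseudo-agent in Dennett's sense: belief-talk and desire-talk
        applied, metaphorically, to a simple information-modulated device."""

        def __init__(self, setpoint: float):
            # The "desire" - set dictatorially by the thermostat's owner.
            self.setpoint = setpoint

        def believes_too_cold(self, sensed_temp: float) -> bool:
            # The "belief" - an information state supplied by a sensor.
            return sensed_temp < self.setpoint

        def act(self, sensed_temp: float) -> str:
            # The thermostat acts whenever it "believes" its "desire" to be
            # unfulfilled; mechanically, a comparator switching a relay.
            return "heater on" if self.believes_too_cold(sensed_temp) else "heater off"

    t = Thermostat(setpoint=20.0)
    print(t.act(17.5))  # "heater on" - predicted from the intentional stance
    print(t.act(22.0))  # "heater off"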

The chief advantage of the intentional stance, as Dennett sees it, is its predictive convenience. There are two other methods of predicting an entity's behaviour: what Dennett calls the physical stance (using scientific laws to predict the outcome - e.g. the trajectory of a bullet fired from a gun), and the design stance (assuming that the entity has been designed to function in a certain way, and that it is working properly - e.g. that a digital camera will take a picture when I press the button). The latter stance saves time and worry if the inner workings of the entity in question are too complex for its behaviour to be rapidly predicted from a physical stance. Sometimes, however, even an entity's functions may be bafflingly complicated, and we may try to predict its behaviour by asking: what does it know (or at least believe), and what does it want? The example Dennett employs is that of a chess-playing computer. I may not understand its program functions, but if I assume that it wants to win and knows where the pieces are on the board, how to move them and what the consequences of each possible move will be (up to a certain number of moves ahead), then I can make a good guess (perhaps a wrong one, given the limits of my memory and imagination) as to what it will do next in a game.
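
The economy of this stance can be shown with a toy predictor. In the sketch below (my own illustration; the candidate moves and their numerical scores are invented placeholders, not the output of any real chess engine), we treat the program as an opaque agent, attribute to it the desire to maximise some utility, and predict that it will choose accordingly:

    def predict_choice(options, assumed_utility):
        """Predict an opaque agent's behaviour from the intentional stance:
        assume it "wants" to maximise utility and "knows" its options."""
        return max(options, key=assumed_utility)

    # Suppose we attribute to the chess program the desire to win material.
    candidate_moves = {"Qxf7+": 9, "Nc3": 0, "a3": -1}  # hypothetical evaluations
    best = predict_choice(candidate_moves, assumed_utility=candidate_moves.get)
    print(best)  # "Qxf7+" - a good guess, though a possibly wrong one if the
                 # program evaluates positions differently than we assume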

Regarding minds in general, the thesis of Dennett's book, Kinds of Minds, can be summarised as follows: first, mental states are not something free-floating and abstract, but have to be located in some kind of body; second, mental states are properly regarded as manifestations of agency; third, human agency, in which people mind what they do, is grounded in the mindless quasi-agency of the macromolecules that constitute their bodies, which are capable of being described by the intentional stance - "Their sort of agency is the only possible ground from which the seeds of our kind of agency could grow" (1997, p. 27, italics mine); and finally, the intentional stance is a universal theory, which is "the key to unraveling the mysteries of the mind - all kinds of minds" (1997, p. 36). Dennett's third thesis has been hotly contested, and I will discuss it below.

I shall evaluate Dennett's intentional stance by addressing three relevant issues. First, has Dennett mis-described intentionality? Second, is his intentional stance a global theory of mental states? Third, is it tied to any philosophically contentious theories - in particular, reductionism - or can it be used by philosophers of all persuasions?

Later, I shall argue that Dennett's intentional stance, while philosophically fruitful, does not adequately describe the necessary conditions for the occurrence of mental states, as it overlooks the crucial distinction between living and non-living systems: the latter, I contend, are ineligible for possessing mental states. Additionally, I propose that Dennett's intentional stance can be described in two ways, and that this suggests a rough program for distinguishing mental states from other states - and hence, distinguishing entities which possess minds from those that lack them.

(a) Has Dennett mis-described intentionality?


[Photo: David Beisecker. Courtesy of the University of Nevada, Las Vegas.]

Beisecker (1999) has challenged the generally accepted account of intentionality:

The intentionality thought to be so definitive of mental states is typically glossed in terms of aboutness or directedness toward objects. The term 'intentionality' derives from a Latin word meaning roughly "to aim" - as one might do with a bow...

But then again, things we're not prepared to credit with thought - for example, heat-seeking missiles and sunflowers - also exhibit directedness towards objects. The challenge then is to find a way to distinguish the special sort of directedness possessed by bona fide thinkers from the more primitive kinds exhibited by these simpler systems (1999, p. 282).

Beisecker offers his own suggestion: "the hallmark of intentional states is their normativity, or susceptibility to evaluation" (1999, p. 283). However, Beisecker is forced to admit that "there is a sense in which artifacts are susceptible to evaluation, and thus possess a certain sort of intentionality" (1999, p. 288): they can fail to fulfill the purpose for which they were designed. For Beisecker, this kind of "intentionality" is purely derivative and hence "second-class", but the point I wish to make here is that the same observation holds on Dennett's version of the intentional stance: the "beliefs" we metaphorically ascribe to thermostats are derivative upon their design specifications.

Intentionality, as such, is not definitive of mental states, according to either Dennett's account or Beisecker's. At this stage of our investigation, I would regard it as prejudicial to attempt an a priori definition of mental states before we have even looked at organisms and their capacities. Rather, we should cast our net wide and attempt to describe a class of phenomena which contains all mental states, even if it includes much else besides.

Since, as Beisecker himself acknowledges, intentionality is etymologically related to "aboutness" and has historically been defined in those terms, I propose to retain the notion of "aboutness" as a useful starting point for discussing intentionality, without endorsing Dennett's philosophy of mind as such. The traditional notion of intentionality is employed by Dennett's philosophical friends and foes alike.

I shall, however, re-visit Beisecker's normativity criterion at a later stage in this chapter, since Beisecker applies it to the vital question of whether animals possess genuine intentionality.

But before we can apply the traditional notion of intentionality to mental states, we have to ask: does it apply to all mental states, or are there some that lack the property of "aboutness"?

(b) Is Dennett's intentional stance a global theory of mental states?

Dennett has performed a valuable service by providing a perspective within which we can situate mental states, and by telling us where to start looking for them: on his theory, we should start by looking for behaviour that can be described by the intentional stance.

Of course, if there are some mental states that cannot be described by the intentional stance, then Dennett's thesis is in trouble. One might argue that there are mental states, such as perceptions and drives, which are too primitive to be characterised in terms of the beliefs and desires Dennett uses to characterise this stance. However, such a criticism misses the point. As Dennett's example of the thermostat shows, even a mechanical sensor can be described using the intentional stance: it switches on whenever it believes that the room is too hot or too cold. In fact, Dennett (1995) is famous for allowing that thermostats do indeed have "beliefs", because he construes "beliefs" in a "maximally permissive" sense as "information-structures" that are "sufficient to permit the sort of intelligent choice of behavior that is well-predicted from the intentional stance". Moreover, as Dennett argues, perceptual states (such as recognising a horse) exhibit aboutness, even if they are involuntary or automatic. A perception is always a perception of something; in other words, perceptions exhibit the property of aboutness or intentionality (1997, pp. 48 - 49). The same could be said for drives: they are towards something.

It is worth noting that even Dennett's severest critics, such as Searle (1999), do not dispute his contention that the intentional stance is applicable to all kinds of minds. Is it also applicable to systems which lack minds? Searle and Dennett differ here: Searle does not ascribe intentionality to these systems, because for him, intentionality is "the general term for all the various forms by which the mind can be directed at, or be about, or of, objects and states of affairs in the world" (1999, p. 85, italics mine), while for Dennett, intentionality refers to the simple property of being about something else, whether the entity exhibiting intentionality is a mind or not (1997, pp. 46-47). Even opioid receptors in the brain, to use one of Dennett's examples, are "about" something else: they have been "designed" to accept the brain's natural pain-killers, endorphins. Anything that can "embody information" possesses intentionality (1997, p. 48).

The difference here between the two positions appears to be mainly terminological. Searle concedes that mindless systems may exhibit what he calls "as-if intentionality": they behave as if they had genuine (i.e. mindful) intentionality, and can be metaphorically described as such (1999, p. 93). The real point at issue between Searle and Dennett (to be discussed in part (d) below) is whether the intentionality of our mental states is a basic, intrinsic feature of the world, or whether it can be reduced to something else.

In any case, Dennett's intentional stance certainly opens up a fruitful approach to the investigation of other minds - be they human, alien or animal ones - and it also seems to be a useful tool for describing the mind-like behaviour of "pseudo-agents".

Being an intentional system, then, is a necessary but not sufficient condition for having a mind. It is not a sufficient condition, because there are many things - such as thermostats and biological macromolecules - which are capable of being described by this stance, but are not agents. Dennett refers to such entities as "pseudoagents" (1997, p. 35). In our quest for mental states, we should start by looking for "effects produced by information-modulated, goal-seeking systems" (1997, p. 34), which may either be minds or "as-if" minds.

(c) Is Dennett's intentional stance tied to reductionism?


[Photo: John Searle. Courtesy of the University of California, Berkeley.]

At the outset of my quest for mental states in animals and (possibly) other organisms, I committed myself to an open-ended investigation, which avoided making philosophical assumptions about the nature of "mind" or "mental states". If Dennett's intentional stance turned out to be wedded to a particular, contentious account of "the mind", then its legitimacy would be open to challenge from the outset.

Certainly, Dennett does make one highly contentious reductionist claim: he claims (1997, pp. 27, 30-31) that intentional agency in human beings is grounded in the pseudo-agency of the macromolecules in their bodies. This claim has been contested by Searle, who argues (1999, pp. 90-91) that it is vulnerable to the homunculus fallacy. In its crudest version, the homunculus fallacy attempts to account for the intentional "aboutness" of our mental states by postulating some "little man" or "spectator" in the brain who deems them to be about something. Although Dennett does not account for the intentional "aboutness" of our mental states in this way, he does attempt to solve the problem by taking it down to a lower biological level, where the problem of "aboutness" is said to disappear: the intentionality of our mental states is the outcome of the mini-agency of the macromolecules in our bodies, and the intelligent homunculus is replaced by a horde of "dumb homunculi", each with its own specialised mini-task that it strives to accomplish (Dennett, 1997, pp. 30-31). Searle (1999, pp. 90-91) argues that this move merely postpones the problem: what gives our macromolecular states the intentional property of "aboutness"? Nor does Searle think much of causal accounts of "aboutness", where the intentionality of our symbols is said to be due to their being caused by objects in the world. The fatal objection to causal accounts is that the same causal chains may generate non-intentional states as well (1999, p. 91).

I would like to add that while Dennett's use of the intentional stance to describe the behaviour of the macromolecules in our bodies is pedagogically useful, it overlooks an important feature of rationality: he pictures them as "specialized servants rationally furthering their obsessive causes" (1997, p. 35). The picture contains an inherent contradiction: obsession is a mark of irrational rather than rational behaviour. The obsessive "mini-goals" of the parts of an intentional system derive their significance from the goals which the system, considered as a whole, is "trying" to achieve (e.g. food or sex). The metaphor of rational agency, I would suggest, is properly applied to the organism as a whole, as the good of the parts subserves that of the whole. If we use the intentional stance in our quest for mindful behaviour, then, it is not sufficient to identify body parts in which this behaviour is manifested. It must also be shown that the entity behaves as a whole (i.e. as a body) whose parts are integrated in a fashion that can be described by the intentional stance.

The fundamental divide between Dennett and Searle on intentionality concerns whether there is such a thing as "intrinsic intentionality" (whereby our mental states have a basic property of "aboutness"), as distinct from "derived intentionality" (whereby "words, sentences, books, maps, pictures, computer programs", and other "representational artifacts" (Dennett, 1997, pp. 66, 69) are endowed with an agreed meaning by their creators, who intend them to be "about" something). For Dennett, the distinction is redundant because the brain is itself an artifact of natural selection, and the "aboutness" of our brain states (read: mental states) has already been determined by their "creator, Mother Nature", who "designed" them (1997, p. 70). This move by Dennett is something of a fudge: "Mother Nature" (to borrow Dennett's anthropomorphism) does not "design" or "intend" anything; it merely causes things to happen, and as Searle has pointed out, causation is insufficient to explain intentionality. Searle (1999, pp. 89-98), while agreeing with Dennett that intrinsic intentionality is a natural, biological phenomenon, insists that there is an irreducible distinction between constructs such as the sentences of a language, whose meaning depends on what other people (language users) think, and conscious mental states such as thirst, whose significance does not depend on what other people think. Mental states, and not human constructs, are the paradigm cases of intentionality, and it is just a brute fact about the natural world that these conscious states (which are realised as high-level brain processes), refer intrinsically. An animal's conscious, intentional desire to drink, to use one of Searle's examples, is a biologically primitive example of intrinsic intentionality, with a natural cause: increased neuronal firing in the animal's hypothalamus. "That is how nature works" (1999, p. 95). Searle thus eschews both mysterian (dualist) and eliminative (reductionist) accounts of intentionality.

Despite the fierce controversy that rages over the roots of intentionality and the reducibility of mental states, it is admitted on all sides of the debate that a wide variety of entities can be treated as if they were agents in order to predict their behaviour. This, to my mind, is what makes Dennett's intentional stance a fruitful starting point in our quest for bearers of mental states. The issue of whether mental states can be reduced to mindless, lower-level processes is independent of the question of whether the intentional stance can be used to search for mental states.

Conclusions reached

If the foregoing arguments are correct, then behaving according to the intentional stance is a necessary condition for possessing mental states that are identifiable by us:

N.2 Our ability to describe an entity's behaviour according to Dennett's intentional stance is a necessary condition for our being able to ascribe cognitive mental states to it.

The intentional stance may well describe a considerably smaller class of entities than Wolfram's "computational stance", as I shall call it. Computations, broadly construed, are ubiquitous in nature, but the stipulation of a rule that describes an entity's information-processing behaviour need not imply that the behaviour has a goal as such. It implies only that the entity can transform some initial states (inputs) into final states (outputs), as the toy illustration following S.1 below shows.

S.1 Our ability to describe an entity's behaviour in terms of rules which transform inputs into outputs (as per Wolfram's computational stance) is not a sufficient warrant for our being able to ascribe cognitive mental states to that entity.
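
The gap between computation and goal-seeking can be illustrated with one of Wolfram's elementary cellular automata. The sketch below (my own illustration, not code taken from Wolfram) applies rule 110, which deterministically transforms an input row of cells into an output row; this is a computation on the broad construal, yet describing it requires no belief-talk or goal-talk at all:

    RULE = 110  # Wolfram's rule number for this elementary cellular automaton

    def step(cells):
        """Transform an input state (a list of 0/1 cells) into an output
        state, with wrap-around at the edges. A rule, but no goal."""
        n = len(cells)
        out = []
        for i in range(n):
            left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
            neighbourhood = (left << 2) | (centre << 1) | right  # a value from 0 to 7
            out.append((RULE >> neighbourhood) & 1)  # look up the matching rule bit
        return out

    state = [0, 0, 0, 1, 0, 0, 0]
    for _ in range(3):
        state = step(state)
        print(state)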

On the other hand, Dennett's claim that the behaviour of all organisms can be described according to the intentional stance appears uncontroversial, in the light of our discussion of intrinsic finality in the previous chapter:

R.2 The set of entities which can be described by Dennett's intentional stance is not universal in scope, but includes all organisms (and their parts).
