Before we embark on a quest for minds in living organisms, we need to examine the issue of artificial intelligence. I contend that while Dennett's intentional stance is a fruitful starting point in our search for minds, it overlooks one very important condition which an entity must satisfy before it can be said to possess mental states: the entity in question must be alive.
While Dennett has narrowed the search for embodied minds, his use of the intentional stance to describe the behaviour of some non-living artifacts blurs the philosophically important distinction (argued for in the previous chapter) between living and non-living things. On Dennett's account, there is no reason in principle why non-living artifacts could not exhibit genuine agency, as opposed to the pseudo-agency of a thermostat. I would argue that Dennett has overlooked the notion of intrinsic finality, and that an entity lacking this kind of finality cannot be said to embody mental states, let alone agency. As argued in the previous chapter, there are profound differences between a living and a non-living system: only a living system has internal relations, dedicated functionality and a nested hierarchy of parts, which give it an intrinsic end and make it a true individual - something we can call a body, rather than a mere assemblage.
I contend that the attribution of a mind to a system that lacks intrinsic finality makes no sense. If we accept Dennett's notion of the intentional stance, then mental states can be appropriately regarded as manifestations of (genuine or pseudo-)agency, insofar as they exhibit the property of aboutness or intentionality (Dennett, 1997, pp. 48-49): they are directed at something. Now, agents are not free-floating entities, but are located in, and individuated with reference to, bodies. The fact that we can tie agency to a body is what enables us to ascribe different actions to the same agent and to distinguish the pursuits of one agent from those of another. In chapter 1, it was argued that non-living systems are not bodies, but aggregates of parts which lack intrinsic unity. It is meaningless to describe the behaviour of such systems as the pursuits of an individual agent - although one might still imagine that a basic component of a non-living system, such as a molecule, could possess enough internal unity to manifest agency. (Such a molecule would at least possess internal relations, as described in chapter 1. Regarding the possibility of a living molecule, we have already concluded that a virus, which is little more than a DNA molecule wrapped in a protein coat, qualifies as being alive.)
There is, however, a deeper reason for scepticism regarding the notion of non-living agents. Before we can describe an entity as an agent with intentions of its own, it is always proper to ask: what are the entity's ends or goals? In the absence of identifiable ends, one might as well suppose that a cup of coffee is an agent. (Of course, the process by which we identify an agent's ends or goals may not be an infallible one - spies, for instance, are very good at concealing their ends from investigators.) And if the entity had a maker or master, the entity's ends would have to be (at least potentially) separable from those of its maker or master before it could be called an agent. Without ends of its own, the entity would be nothing more than a tool. To qualify as an agent, an entity has to have some capacity for "self"-ish behaviour - i.e. behaviour that serves its own internal ends.
The last point is crucial: even if we could interrogate an exotic non-living agent about its goals, how would we know that its answers were indeed its own? Consider the following thought experiment. Suppose that you stumbled across a talking coffee cup and (once you had recovered from the shock) asked it about its goals. Suppose that the coffee cup's stated goal turned out to be a very altruistic one - peace in the Middle East. How would you know that you were talking to the cup itself, and not to some other agent controlling it - via a loudspeaker cleverly embedded in the cup, for instance? Unless the cup could be shown to possess at least some intrinsic or "selfish" ends, and could benefit from satisfying those ends, there would be no reason to regard it as a bona fide agent. And in order to identify those ends, one would have to look for features like internal relations, dedicated functionality and a nested hierarchy of parts, which define an organism as having a telos. One would also have to identify the organism's basic needs, or the essential conditions for its flourishing. An agent has to be the sort of thing that can be said to benefit from what it does - in other words, to possess a telos - even if it also has unselfish ends (like peace in the Middle East) that have nothing to do with that telos.
Finally, for a system to cohere as a true individual with an internal rather than a merely superficial unity, it must possess a structure that is regulated and held together from within - in other words, what we called a formal cause in chapter 1. We should therefore look for a master program that regulates the internal structure of an organism and the internal interactions between its components.
Man-made robots (such as Sony's AIBO) and supercomputers are therefore doomed to remain mindless until and unless they acquire the features described above: basic needs essential to their flourishing, internal relations between their parts, dedicated functionality, a nested hierarchy of parts - which together define an intrinsic end, or telos - and a formal cause that regulates their internal structure from within.
The distinction between living and non-living systems is therefore presupposed by the distinction Dennett makes between intentional agents and pseudo-agents. Within the "family tree" of intentional systems, the most fundamental division is not between "agent" and "pseudo-agent", but between "alive" and "not alive". This is an important point to grasp, as it may seem that some non-living systems (e.g. chess-playing computers) are "cleverer" than many living systems (e.g. trees) and hence more like genuine agents. The point, however, is that trees are at least bona fide individuals with their own "selfish" ends like nutrition, whereas present-day human-built computers are assemblages without intrinsic ends, which can never exhibit agency, however well they may be programmed to mimic it.
If I am right, then we should restrict the search for mental states to organisms. Being alive is at least a necessary condition for having a mind. We can thus formulate a negative conclusion about Dennett's intentional stance, as well as a biological criterion for intelligence:
I.3 Our ability to describe an entity in terms of Dennett's intentional stance is not a sufficient condition for our being able to ascribe cognitive mental states to that entity.
B.1 An entity must be alive in order to qualify as having cognitive mental states.
This conclusion (B.1), unlike C.1, is couched in absolute terms, rather than in terms of the limits of our knowledge. The point is that we can, most of the time, be certain that something is or is not alive, whereas the identification of all of an entity's computations is far less straightforward: things that look simple may turn out to be complex.
Stipulating "being alive" as a necessary condition for having mental states is methodologically vague. The following guidelines (based on chapter 1) serve to identify living things:
B.2 A necessary condition for our being able to ascribe cognitive mental states to an entity is that we can identify the following features: (a) basic needs, essential to its flourishing; (b) internal relations between the parts (i.e. new physical properties which appear when they are assembled together); (c) dedicated functionality, where the parts' repertoire of functionality is dedicated to supporting that of the unit they comprise; (d) a nested hierarchy, where the parts are hierarchically ordered in a nested sequence of functionality; and (e) stability - the parts are able to work together for some time to maintain the entity in existence as a whole. These conditions enable us to impute a final cause or telos to the entity, and identify its various "selfish" or intrinsic ends. We must also be able to identify a master program that regulates the internal structure of an organism and the internal interactions between its components - i.e. a formal cause.
If being alive is a necessary condition for having a mind, then the following argument is undercut at once: that because a non-living intentional system (such as a thermostat) is a mere "pseudo-agent", an organism with similar abilities need be nothing more. The mere fact that an organism's actions are properly explained with reference to its intrinsic end, or telos (which a thermostat lacks), is reason enough to treat the actions of the organism (but not those of the thermostat) as at least potential candidates for mental acts.
If my line of reasoning is correct, then any information-modulated, goal-seeking behaviour of an organism is a prima facie candidate for a manifestation of mental states. Alternatively, there may turn out to be valid philosophical reasons for concluding that only a subset of this behaviour warrants a mentalistic description. (These reasons will be discussed later in this chapter.)
We can now address the sceptical question of whether the wind could be said to embody an alien intelligence. Because the wind possesses no internal relations (except for those between the atoms that make up the air's free-floating molecules), lacks dedicated functionality, and has no nested hierarchy of parts, it cannot meaningfully be said to have intrinsic ends, and so it does not qualify as a living individual - i.e. a body. Without a body, it cannot be said to possess mental states (Conclusion B.1).