
Why does the distinction between living and non-living matter?

The question of whether the life versus non-life distinction is fundamental has a bearing on my thesis for three reasons. First, how we answer this question could affect the way in which animals are defined, which is one of the aims of this thesis. If the answer is "no", then there is no a priori reason to exclude robotic animals (such as robotic bees, or AIBO, the robotic dog) from the scope of our definition of animals, which means that the domain of the animal kingdom may need to be enlarged.

Second, how we answer the question has profound implications for two related, widely shared assumptions regarding the nature of "mind". The two assumptions are subtly different, but what they have in common is the notion that causal explanations render mentalistic explanations redundant. The first assumption was explicitly formulated by Leibniz; the second, by C. Lloyd Morgan.

The first assumption is that if you can give a complete causal explanation of how the parts of a thing operate, then that is sufficient to preclude it from having a mind. The argument was first formulated by Leibniz (Monadology, section 17):

One is obliged to admit that perception and what depends upon it is inexplicable on mechanical principles, that is, by figures and motions. In imagining that there is a machine whose construction would enable it to think, to sense, and to have perception, one could conceive it enlarged while retaining the same proportions, so that one could enter into it, just like into a windmill. Supposing this, one should, when visiting within it, find only parts pushing one another, and never anything by which to explain a perception.

Recently, Nicholas Humphrey has attempted to give a physicalist account of consciousness in terms of the inner workings of our brains. The kind of explanation he offers is a mechanistic (neurophysiological) one - precisely the sort that, according to Leibniz, cannot explain mental events. Yet for Humphrey, the fact that we can construct a causal explanation of the mental phenomenon of consciousness in no way undermines its reality. However, his account has been dismissed by some critics because, in their eyes, it de-mystifies consciousness. Humphrey derisively formulates their objection thus: "Is that all?" (1993, pp. 207-208). Nevertheless, the intuitive appeal of "Is that all?" is widely shared, and even Humphrey confesses that he has "sometimes been subject to the 'Is that all?' malaise" (1993, p. 210).

One consequence of the "Is that all?" principle is that if we can explain how an individual performs some task in "mechanical" terms (e.g. in terms of processes going on in the individual's brain, whose parts and inner workings can be understood in the same way as Leibniz's windmill), then it would be inappropriate to offer an alternative explanation of the same task, couched in "mentalistic" terms. For many people, the fact that a neurophysiological explanation can be given of some aspect of animal behaviour automatically renders it non-mental - it is dismissed as a mere "instinct", or, worse yet, a "reflex".

The "Is that all?" principle has few philosophical defenders today. After all, there is no reason why a physical process should be mindless simply because a causal explanation of its inner workings can be given. (The fact that neurologists can give a full account of how my patellar reflex works does not prevent me from being aware of a sharp pain when my knee is tapped with a hammer - even if the reflex is activated before I feel the pain.) However, there is a second, related assumption, shared by certain philosophers and many behavioural scientists, that assimilates animals to machines and denies them mental states.

Whereas the first assumption denies mental states to animals if we can understand the causal interactions between their constituent parts, the second focuses on animal behaviour. The assumption states that if we can give a complete causal explanation of an individual's behaviour without resorting to mentalistic terminology, then we should do so. Its original formulation can be found in Morgan's Canon:

In no case may we interpret an action as the outcome of the exercise of a higher faculty, if it can be interpreted as the outcome of one which stands lower in the psychological scale (cited in Bavidge & Ground, 1994).

More recently, the behavioural scientist J. S. Kennedy has written:

It might seem necessary to suppose that some animals have minds if we had no other explanation for their flexible, adaptive behaviour. But there is of course another explanation, namely the power of natural selection to optimize behaviour along with other features of organisms (1992, p. 121, italics mine).

For Kennedy, the error of mentalistic explanations of animal behaviour is that they confuse two kinds of explanations: proximate causal explanations, which answer the question of how behaviour occurs, and ultimate functional explanations, which explain its survival value (the ultimate "Why?"). Mentalistic explanations confuse these categories by making an ultimate cause appear proximate: the end of an action (its survival value) is seen as its subjective purpose, and hence its proximate cause. To say that an animal hunts because it experiences the subjective feeling of hunger is as anthropomorphic as saying that what makes a train go is "locomotive force" (1992, p. 51).

Kennedy takes pains to assure his readers that animals are not "automata making only fixed reflex responses to stimuli" (1992, p. 63), but nevertheless approves of Descartes' characterisation of animals as machines, albeit complex and unpredictable ones (1992, pp. 2-4). More recently, Wolfram (2002) has demonstrated that even complex, unpredictable behaviour can be generated by simple algorithms, thereby rendering Kennedy's insistence that animals are not "simple machines" (1992, p. 63, my italics) redundant.
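
Wolfram's point can be made concrete with a one-dimensional cellular automaton such as his Rule 30, one of his stock examples of complexity arising from simplicity. The sketch below is a minimal illustration of my own (the grid width, the wrap-around boundary and the output format are arbitrary choices): each cell is updated from its own state and that of its two neighbours, yet the pattern that unfolds from a single live cell looks complex and unpredictable.

```python
# A minimal sketch of Wolfram's Rule 30: each cell's next state depends
# only on itself and its two neighbours, yet a single live cell unfolds
# into a complex, seemingly random pattern.

WIDTH, STEPS = 63, 32  # arbitrary grid size and run length

# Rule 30's lookup table: the new state for each of the 8 possible
# (left, centre, right) neighbourhoods, read as a 3-bit number.
RULE_30 = {n: (30 >> n) & 1 for n in range(8)}

def step(cells):
    """Apply Rule 30 once, treating the row as a ring."""
    return [
        RULE_30[(cells[i - 1] << 2) | (cells[i] << 1)
                | cells[(i + 1) % len(cells)]]
        for i in range(len(cells))
    ]

row = [0] * WIDTH
row[WIDTH // 2] = 1  # a single "live" cell in the middle

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Wolfram calls such behaviour "computationally irreducible": so far as anyone knows, there is no shortcut to predicting the pattern other than running the rule itself.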

One consequence of Morgan's Canon is that if an individual's complete repertoire of behaviour, which was formerly explained in terms of its mental states, turns out to be explicable in terms of low-level, non-mentalistic processes, then that individual thereby ceases to be a candidate for having a mind. For this reason, many people, upon hearing about some feat of animal cognition, are apt to object: "Yes, but even a computer could do that." (The unstated premise here is that if a computer, designed by human beings, can do X mindlessly, then we should assume that an animal that does X, does so using processes that are equally simple.) If all of an animal's so-called "cognitive" feats can be duplicated by a human-built computer, then there is no need to impute minds to animals either.

Now, if there is some fundamental difference between living and non-living systems, then the whole analogy between animals and mechanical devices, which is shared by the two foregoing assumptions, is undercut. In that case, the fact that a computer designed by human beings can do X mindlessly does not imply that an animal that can do X, does so mindlessly. Indeed, one of my aims in this chapter is to show that there is some fundamental difference between living and non-living systems. Using Aristotelian terminology, we can describe this difference as "intrinsic finality", a notion I hope to elaborate with the help of some insights from modern biology. An animal, like a computer programmed by human beings, can receive information from its surroundings, store it in its memory, retrieve it and adaptively manipulate it when circumstances change. In both cases, we have to resort to some sort of cognitive terminology to understand what is happening. (The semantic meaning of a step in a computer program cannot be grasped merely by knowing how the bits, or 1s and 0s, are manipulated.) The difference is that the ultimate end of an animal's actions is the realisation of its telos (which is built into its nature), whereas the "ends" or goals of a human-built computer are extrinsically determined (by its programmer) and have nothing to do with its "well-being". Despite the parallelism between the two sets of processes, it could be argued that an action by an animal deserves to be described as cognitive if (a) the action can be explained by its telos - which is something a human-built computer lacks; and (b) the significance of the action cannot be properly understood without resort to some sort of cognitive terminology.
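
The parenthetical point about bits can be illustrated with a toy example (the byte values below are arbitrary): one and the same bit pattern reads as an integer, as a floating-point number or as text, depending entirely on the interpretation imposed on it, so the semantic meaning of a step in a program is not contained in the bit manipulations themselves.

```python
import struct

# The same 32 bits, read under three different interpretations.
bits = b"\x42\x48\x96\x00"

as_int = struct.unpack(">I", bits)[0]    # as a big-endian unsigned integer
as_float = struct.unpack(">f", bits)[0]  # as an IEEE 754 float
as_text = bits[:2].decode("ascii")       # first two bytes as ASCII text

print(as_int)    # 1112053248
print(as_float)  # 50.146484375
print(as_text)   # BH
```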

If, on the other hand, there is no basic difference between living and non-living systems, then we have two choices in the debate on animal minds. We can retain the notion that causal explanations render the attribution of mental states redundant, and ascribe mental states only to those animals whose feats cannot be completely duplicated by a human-built computer. In effect, this means that only humans, whose creativity enables them to stay one step ahead of their own computers by designing ever newer and better models, can be confidently credited with having minds. Alternatively, we can reject this assumption and say that having a mind is simply being able to store and adaptively manipulate information - which means that computers built by human beings have minds too.

There is a third reason why the question of whether there is any fundamental difference between living and non-living systems matters: it alters the scope of our ethical concerns. If we allow that living animals are not fundamentally different from robotic ones, then depending on how we answer the question of whether animals have interests, we can choose to enlarge or restrict the scope of our ethical concerns. If we allow that living animals have interests, then we have to consider the possibility that robots, too, may have interests. Alternatively, if we find this idea ridiculous, then we may have to backtrack and entertain the notion that only people have interests.

Either choice has revolutionary implications. Practically everyone believes that we have a prima facie duty not to harm animals, but few people would consider it morally wrong to dismantle a Cray supercomputer that can defeat Garry Kasparov - or to pull the plug on a HAL-9000! Is this attitude a mere prejudice on our part - a case of bio-snobbery? Or are we, as J. S. Kennedy avers, guilty of anthropomorphism when we project some of our concerns onto animals?
