The title of this e-book is "Animals and other organisms: their interests, mental capacities and moral entitlements".
My research for this e-book has convinced me that not only the mental capacities, but also the interests of living creatures (which constitute the basis of any rights they may have or any duties we may have towards them) are very much a function of their biology. Accordingly, the first chapter of my e-book deals with two issues: what it means for something to be alive, and whether we can legitimately impute interests to an organism, simply because it is alive.
A large part of my e-book is devoted to a discussion of animal minds. The bulk of my work here relates to what I call a "minimal mind" - the simplest kind of mind that could exist. I attempt to identify the requirements that an animal would have to satisfy, before it could be credited with possessing a mind of any sort, however rudimentary. I then attempt to develop a detailed model of a minimal mind. The question of what kinds of emotions a "minimal mind" could be said to possess is discussed in chapter three.
The first part of chapter four is devoted to the topic of animal consciousness. I review the philosophical literature in the light of current scientific findings.
In the second part of chapter four, I discuss so-called higher-order mental states, and address the question of whether any non-human animals qualify as (a) rational agents, or (b) moral agents.
Chapters five and six of my e-book deal with the moral entitlements of animals and other organisms. A philosophical discussion of "entitlements" can be couched in terms of "rights", but other terminologies are available too. Although I will be reviewing different theories of rights, the ethical component of my e-book will be principally focused on: what grounds the interests that animals and other living things have; how these interests can entail obligations on our part towards animals and other organisms, corresponding to moral entitlements on their part; and finally, what human beings are morally entitled to do to animals and other life-forms, in order to advance their own good.
Two books influenced my decision to write about living things before discussing animal minds.
First, the recent publication of Gary Varner's book "In Nature's Interest?" (1998) convinced me that the ethical divide drawn by some philosophers between sentient beings (which are said to be morally significant in their own right) and non-sentient beings (said to possess no moral significance whatsoever) was far too simplistic, and that the ascription of interests to living things, simply because they are alive, was philosophically legitimate.
Second, Stephen Wolfram's ground-breaking work, "A New Kind of Science" (2002) alerted me to a major philosophical crisis regarding the definition of life. Recent developments in computer science have led to the emergence of what has been called "artificial life". Briefly, Wolfram (2002) contends that all of the general behavioural characteristics of living things, including their much-vaunted complexity, can be mimicked by a variety of computational systems - even systems with very simple rules. Not only complex systems, but practically any natural or man-made system, he argues, can be programmed to solve the same range of computational problems - given enough time and memory - as a universal Turing machine. Wolfram concludes that all of these natural and artificial systems must be alive in some way - a position he describes as "animism" (2002, p. 845).
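Wolfram's claim is easiest to appreciate with a concrete example. The sketch below is my own illustration in Python, not drawn from "A New Kind of Science" itself: it implements one of Wolfram's elementary cellular automata, Rule 110, whose entire update rule fits in a single byte and yet which has been proven capable of universal computation.

```python
# Illustrative sketch: an elementary cellular automaton with a very
# simple rule (Rule 110) generating complex behaviour from a trivial
# update table. The grid width and number of steps are arbitrary.

def step(cells, rule=110):
    """Apply one update of an elementary cellular automaton.

    Each cell's next state depends only on itself and its two
    neighbours; the 8 possible neighbourhoods index into the bits
    of the rule number (here 110, i.e. binary 01101110).
    """
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single "on" cell and watch structure emerge.
row = [0] * 63
row[31] = 1
for _ in range(30):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```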
Varner's and Wolfram's philosophical arguments can be combined to generate a second crisis in the field of environmental ethics. For if, as Varner argues, all living things have interests, then Wolfram's arguments imply that machines and most natural systems - such as the wind or the flames of a fire - have interests too. Most people would reject such an enlargement of our "moral universe" as ethically counter-intuitive. Accordingly, one of my aims in this e-book is to formulate a definition of what it means to be "alive", which is scientifically and philosophically defensible, as well as more manageable in its ethical scope.
I decided to write about the mental capacities, interests and moral entitlements of animals for several reasons. First, the success of artificial intelligence has led many people to ask whether non-living things can have minds too. And if (as many people believe) having a mind is a morally significant property, then do we have duties towards "artificial animals" as well as real ones?
Another reason which prompted me to write about animals was the disturbing lack of terminological rigour in the philosophical and scientific literature relating to animals' mental capacities. It is bad enough that there is no agreed definition for overtly mentalistic terms such as "mental state", "belief" or "desire", but the lack of agreed norms of usage even for lower-level terms such as "flexible", "learning" and "sense" - which might be used to define mental states - is a severe impediment to philosophical progress in the search for animal minds.
Even more alarming than the lack of definitional rigour is the lack of philosophical agreement on the appropriate methodology for investigating animals' alleged mental states.
An additional reason for writing about animal minds is that until very recently, most philosophers have been largely unaware of the vast scientific literature relating to the mental capacities of different kinds of animals. Although some recent philosophers (e.g. Dretske, 1999; Beisecker, 2000; Allen, 2002, 2004a; Carruthers, 2004) have clearly familiarised themselves with this literature, no-one (to the best of my knowledge) has attempted a systematic philosophical overview of what we know - and do not know - about animals' mental capacities. One of my aims in this e-book is to provide such an overview - especially of the extensive scientific literature, stretching back to the 1930s, on the necessary neurophysiological conditions for consciousness in humans and other animals. In this e-book, I summarise what is currently known, drawing freely upon scientific overviews (e.g. Rose, 2002) of key findings in the field.
It is a remarkable yet much-overlooked fact that nearly all of the philosophers who have debated issues relating to animal minds and the moral significance of animals in recent years have done so without even attempting to define their subject - animals - at the outset. Leahy (1994) is the only recent philosopher I know of who offers his readers some sort of definition of animals, which is independent of his ethical perspective. He describes them as "primitive beings... spanning the continuum between plants and human beings" who "exhibit the pre-linguistic sensations of pain and the ancestral tokens of human attributes such as deliberative intent, rational planning, choice, desire, fear, anger, and some beliefs, where our guiding criteria are the close similarity of their behavioural patterns, in like circumstances, to our own" (1994, pp. 165-166, italics Leahy's). Even this description fails to tell us what animals are, how we can identify them, or even how we can distinguish them from plants - questions I answer in the appendix to part A of chapter two.
In contrast to Leahy, most other philosophers have attempted to argue for the moral significance of some mental characteristic, and then restrict the scope of their ethical concerns to the animals that possess it. For Singer, it is animals that are capable of feeling pain that matter; for Regan, only subjects-of-a-life, or animals which "have beliefs and desires, possess memory and expectations about ... the future, and are able to act intentionally in seeking to fulfill their desires and purposes" (1988, p. 76), qualify as bearers of rights, although he acknowledges that there are moral constraints on our dealings with other animals. My concern with this approach is that those who proceed in this way may be unintentionally blinded by their own philosophical pre-conceptions, leading them to overlook properties that may be relevant either to having a mind or being morally significant.
A more sensible way of investigating this question might be to follow an a posteriori approach, by first examining the biological properties that define animals, and then attempting to identify those properties that may be relevant to having a mind or having interests. One would start with a large group - all animals, or even better, all organisms - and look for mind-relevant characteristics, gradually narrowing one's focus to smaller groups until one found a set of physical characteristics that was sufficient to warrant the ascription of mental states to a creature. This is the approach I shall endeavour to follow in this e-book.
Animals have received a lot of attention in philosophical circles, especially since the publication of Peter Singer's "Animal Liberation" in 1975. However, there is a great diversity of opinion regarding which creatures qualify as morally significant in their own right, making them the object of duties on our part. Some philosophers (Aquinas; Kant; Carruthers, 1992; Leahy, 1994) argue that we have no moral duties to animals as such, and therefore cannot wrong them. Others, such as Singer (1977), contend that our duties towards animals arise from the simple fact that they can suffer, which gives them an interest (no less valid than our own) in avoiding suffering and creates an obligation on our part not to hurt them, except when competing utilitarian interests take precedence. Still others (Taylor, 1986; Varner, 1998) have defended a biocentric ethic, which ascribes interests to all living things. Finally, some philosophers have argued that the holistic good of the community takes precedence over the interests of any individual organism (Callicott, 1989).
One of my aims in writing this e-book is to put forward a clear, coherent account of the kinds of interests that can be meaningfully ascribed to animals and other organisms. I argue in chapters five and six that identifying the interests of humans and other kinds of living things facilitates resolution of outstanding questions relating to our duties towards creatures, their rights, and our own moral entitlements as human beings. I discuss the question of what kinds of rights creatures have (which I consider to be a consequence of their having interests) in an appendix to chapter six.
Table 1 - A summary of methodological questions pertaining to my e-book, by category
Life | What does it mean for something to be alive? Can "life" be defined in a way that is scientifically and philosophically defensible, as well as manageable in its ethical scope? |
Mind | Is there a set of necessary and sufficient conditions for having a mind? What is the relationship between being alive and having a mind? What are the most primitive mental states, and how do we identify their occurrence? How generous should we be in assessing claims for mental states, and what sources of evidence are appropriate? |
Interests of Living Organisms | Can interests be legitimately imputed to an organism simply because it is alive? How do we define and identify interests, and how do we derive "ought" statements from them? How do we resolve conflicts of interests between organisms? |
At the beginning of chapter one, I identify five axes or dimensions along which all current and historical definitions of life can be classified: Aristotle's four dimensions of causality, plus the arrow of time, which features prominently in thermodynamic and evolutionary definitions of life. Every definition proposed to date invokes one or more of these dimensions.
I propose that at a minimum, any adequate definition of "life" must be sophisticated enough to withstand current philosophical and scientific attacks on the validity of the distinction between "living" and "non-living" - including that formulated by Wolfram (2002).
The project of attempting to enumerate necessary and sufficient conditions for being alive has fallen into disfavour among scientists, many of whom prefer to define "life" in terms of a loose cluster of properties. However, it has been noted that these cluster definitions turn "life" into an artificial category, as they provide no explanation of why these properties, and no others, form part of the definition of life (Bedau, 1996, 1998; Cameron, 2000). A unified account of life is philosophically preferable.
The unified account which I put forward is a teleological account which owes much to the writings of Aristotle, who described the soul as the final and formal cause of a living body. My work draws upon Cameron (2000), who has recently argued for a teleological account of life. However, my definition offers some additional features which Cameron's does not. In particular, it provides scientists with empirical criteria for deciding whether something is alive; gives scientists a framework for deciding which of the properties shared by terrestrial organisms are truly necessary properties of life; and allows us to re-define the Aristotelian concept of nature - which I invoke in chapters five and six - in a way that is fully compatible with Darwinism.
There are many different kinds of evidence for mental states that merit serious philosophical consideration, but there is one kind of "evidence" that should, I believe, never be appealed to. Arguments or thought experiments pertaining to mental states which are based on mere logical possibility are philosophically illegitimate. To show that a state of affairs is logically possible (or not obviously logically impossible) does not establish that it is physically possible. We can imagine organisms that look and even act like us, but have no subjective experiences, as in Chalmers' "zombie world" (1996, pp. 94-99); we can also imagine entities such as gas clouds, force fields or ghosts having mental states. All this proves is that mental states are not logically supervenient on their underlying physical states. However, as Chalmers himself points out (1996, p. 161), they may still be physically supervenient on these states.
0.B.2(a) Is there a set of necessary and sufficient conditions for having a mind?
One of my provisional objectives in this e-book is to list the necessary and sufficient conditions for possessing mental states. Such an attempt may well fail. That in itself would be a philosophically significant result. We should not expect to find neat definitions for every concept, and the concept of "mind" may prove too elusive for such a definition.
Then again, it may not. My aim is not to define "mind" in all its possible varieties, but to define the conditions an individual would have to satisfy before it could be said to possess the most primitive kind of mind there could be - a "minimal mind", as I call it.
It might be argued that the concept of mind, like that of a game (discussed by Wittgenstein), is incapable of definition, because it is inherently open-ended. But even though the concept of "mind" appears to be open-ended, there is no reason why the concept of a minimal mind should be. A minimal mind may well turn out to be definable in terms of a small, finite set of properties.
However, I refrain from assuming that there is a unique set of sufficient conditions for having a mind. On the contrary, there may well be several varieties of "minimal minds".
Finally, I do not assume that subjectivity is a defining (and hence necessary) property of mind. It may turn out to be the case that for creatures with minimal minds, the element of phenomenal consciousness is wholly lacking from their mental states.
I conclude that the enterprise of defining the conditions for a minimal mind remains a viable one.
0.B.2(b) What is the relationship between being alive and having a mind?
It is not the aim of this e-book to address the grand metaphysical claim (see Birch, 2001, p. 4 ff.; Chalmers, 1996, pp. 293-299) that any individual (even an electron), or any system that can register information, is capable of experiencing rudimentary mental states (panpsychism). Instead, I propose to focus on three narrower questions. First, given that all natural systems - including living things - can be viewed as computational devices (Wolfram, 2002), is there any difference between living things and artificial computational devices, which would preclude the latter from having minds? In other words, is being alive a necessary condition for having a mind?
Second, is being alive a sufficient condition for having a mind, as some researchers have argued?
Third, if being alive is not a sufficient condition for having a mind, then what is? What kinds of creatures have mental states?
I make four broad assumptions about mental states as they occur in living organisms. First, mental states don't just "pop up" in any entity, for no reason. Any creature that possesses mental states must have some innate capacity for having these states. (The same requirement would apply to any artificial device that was found to possess these states.)
Second, a living creature's capacity for mental states is grounded in its biological characteristics. I am not here equating mental states with biological properties; rather, I simply assume that differences in organisms' mental capacities can be explained in terms of differences in their physical characteristics. This in no way commits me to the much stronger (and more speculative) supervenience thesis, which states that all mental properties and facts supervene on physical properties and facts.
Third, I assume that the mental capacities of animals supervene upon (or are grounded in) states of their brains and nervous systems. I am not, however, assuming that every organism with a mind must have a brain, or even a nervous system; indeed, I intend to examine alleged instances of mental states in organisms lacking nervous systems. In short, what I attempt to do in chapter two is to identify the set of biological capacities that warrant the ascription of mental states - however rudimentary they may be - to an organism.
Finally, I make the extremely modest assumption that at least some non-human animals possess the requisite capacities for a minimal mind, even if (as a few philosophers and scientists argue) they are lacking in phenomenal consciousness. The assumption that some animals have mental states is woven into our own language to such a degree that animals often serve as primary exemplars of these states. (This is especially true for words used to describe desires and feelings.) To deny mentality to all non-human animals would thus render much of our mentalistic terminology meaningless.
0.B.2(c) What are the most primitive mental states?
After discussing the methodologies that have been proposed for identifying mental states, I critically examine two approaches - the computational approach of Stephen Wolfram and the intentional stance developed by Daniel Dennett.
Although I identify some serious problems with both approaches, I argue in chapter two that the terminology behind Wolfram's and Dennett's approaches can be usefully applied to a wide range of natural and artificial systems, including all entities with minds. I make particular use of Dennett's intentional stance in my quest for animals with mental states.
Since even mindless artifacts such as thermostats (to use an example of Dennett's) can be described using the intentional stance, I propose that we should ascribe mental states to an organism if and only if doing so allows us to describe, model and predict its behaviour more comprehensively, and with a degree of empirical accuracy at least as great as that of alternative, non-mentalistic accounts.
Philosophers and scientists should therefore take care to avoid using inappropriately mentalistic language when describing the behaviour of organisms lacking minds. Some verbs in the English language are peculiarly reserved for mental states. The choice of these verbs may change over time: at one time, the suggestion that an individual could sense an object or remember it mindlessly would have seemed odd, but today, we have no problems in talking about the sensor in a thermostat, or the memory of a computer (or even a piece of deformed metal). The table below, which I believe reflects contemporary usage, sorts some everyday terms into mentalistic and non-mentalistic categories.
Table 2: Some key terms that will be used in this e-book, classified as either "mentalistic" or "non-mentalistic", in accordance with current norms of popular and scientific usage
A. Terms that will be treated as mentalistic, unless clearly indicated otherwise | |
Terms arranged by category | Comments |
1. The phrase "act intentionally". | In common usage, intentional agency presupposes the occurrence of mental states (see 2. below). |
2. The verbs "feel", "believe", "desire", "try" and "intend", and the associated nouns "feeling", "belief", "desire", "attempt" and "intention". | In ordinary parlance, these intentional terms are currently used to characterise either states of a subject ("feel", "feeling"), proposed or attempted actions by an agent ("intend", "intention", "try", "attempt"), or explanations for an agent's actions ("believe", "belief", "desire"). |
3. The words "perceive" and "perception", as opposed to "sensation". | Modern usage draws a distinction between "sensation" and "perception" in an organism: the former is usually said to arise from immediate bodily stimulation, while the latter usually refers to the organism's awareness of a sensation (Merriam-Webster On-line, 2004, definition (1a) of "sensation"). Philosophers, however, do not always adhere to this pattern of usage. It would be prejudicial to endorse these distinctions at this stage, but we should allow for the possibility that there may be organisms that can be appropriately described as having sensations while lacking perceptions. |
4. The verbs "remember", "recall" and "recollect". | The verb "remember" retains a distinctly mentalistic connotation in ordinary usage: it refers not only to stored information being retrieved but also to the subjective feeling of its coming into one's mind. In popular parlance, machines are never said to "remember" anything. The verbs "recollect" and "recall" are even more strongly mentalistic, as they signify the intentional act of bringing something back to mind. |
5. The words "learn" and "learning" will generally be treated as mentalistic, unless indicated otherwise.(N.B. In chapter 2, I address the possibility of non-mentalistic learning in worms.) | This mentalistic usage is challenged by Wolfram (2002, p. 823), but I believe there is currently no verb in common use that can replace the peculiarly mentalistic flavour of "learn" in English. The word "learn" usually means "to gain knowledge or understanding of or skill in by study, instruction, or experience" (Merriam-Webster on-line dictionary, 2003). However, we should keep an open mind. According to the above definition, gaining a "skill" by "experience" is learning. In our examination of organisms' abilities, we may find that some living things, despite lacking minds, are capable of feats that can be described as the acquisition of skills through experience. In that case, we would have to call this "learning", simply because it would be a violation of our existing linguistic conventions not to do so. |
6. The words "know" and "cognition". | Some philosophers and AI researchers use the terms "know" and "cognitive" in a more general, mind-neutral sense. At present, popular usage treats these terms as mentalistic. |
B. Terms that will be treated as mind-neutral or non-mentalistic in this e-book | |
Terms arranged by category | Comments |
1. The general-purpose verbs "act" (unless followed by "intentionally") and "react". | These verbs should not be regarded as mentalistic, as they are routinely used by chemists and biologists without any mentalistic connotations. In popular parlance, too, these verbs may be used in a neutral sense. |
2. The verbs "seek", "search", "pursue", "attack", "avoid" and "compete". | These verbs simply describe goal-oriented behaviour by entities, without any mentalistic connotations. |
3. The verbs "attract" and "repel". | These verbs simply describe the state of being or not being a goal. |
4. The verbs "communicate", "signal" and "respond", as well as the noun "message". | Scientific usage has appropriated these words, and popular usage has followed the trend. For instance, bacteria are commonly said to communicate with each other, without any mentalistic overtones. |
5. The verb "sense" and the noun "sensor". | These terms are often applied to inanimate artifacts (e.g. motion detectors), although no-one speaks of these artifacts as having "sensations". |
6. The noun "memory" (but not the verb "remember"). | Contemporary usage allows us to speak of artifacts as having a "memory", from which they retrieve stored information. |
Mental states are sometimes divided into two categories: cognitive and affective. In this chapter, when I use the term "cognitive mental states", I mean beliefs in particular, as well as any higher-order judgements that are founded upon those beliefs.
However, I do not wish to prejudice my enquiry by assuming that animals with beliefs and/or desires necessarily have subjective, "phenomenally conscious" mental states. The issue of which animals are phenomenally conscious is deferred until chapter four.
0.B.2(d) How do we identify the occurrence of primitive mental states?
In this e-book, I propose to adopt an a posteriori, biological, "bottom-up" approach to the philosophical problem of animal minds. Instead of first attempting to define what a minimal mind is and then seeking to determine which animals fall within the scope of my definition, I shall begin by trying to define what an animal is. This is not merely a scientific matter: while a zoologist may be able to tell us how animals differ from closely related organisms such as plants and fungi, it is the task of philosophy to untangle questions such as what it means to be an organism (i.e. "alive") or whether a robotic bee should be classified as an animal (and if not, why not).
Leaving aside the question of whether any non-living entities can be said to have minds (a question I discuss in chapter two), one sensible way of identifying mental states in animals and other organisms might be to first examine the biological properties that define living things, and attempt to identify those properties that may be relevant to having a mind. One would start with a large group, such as the set of all living organisms - which I shall refer to as L for convenience - and carefully examine the definition of "organism", as well as the general properties of organisms, for anything that may be relevant to having a mind. A philosophical "winnowing process" could then be applied to these features, to ascertain whether singly or in combination, they sufficed to define the conditions for having mental states. If these features proved to be insufficient, one would then narrow one's focus to a smaller set of organisms - such as the set of all animals (call it A) - and once again critically examine the definition, as well as those universal traits of animals that might be relevant to having a mind. One could review successively smaller sets of organisms in the same way - the set of all animals with nervous systems, the set of animals with a brain, and so on - until one found a set of physical and/or behavioural characteristics that was sufficient to warrant the ascription of mental states to a creature. These characteristics can be said to define a set M of all creatures with mental states.
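To make the shape of this winnowing process concrete, here is a purely schematic sketch in Python. The organisms, the trait assignments and the stopping point are invented placeholders; the actual criteria are argued for, philosophically, in chapter two.

```python
# Schematic sketch of the proposed "winnowing" process: start from the
# set L of all organisms and narrow through successively smaller sets
# until the accumulated conditions plausibly suffice for mentality.
# Toy data only - the trait assignments below are simplified placeholders.

traits = {
    "bacterium": {"alive"},
    "oak":       {"alive"},
    "jellyfish": {"alive", "animal", "nervous_system"},
    "honeybee":  {"alive", "animal", "nervous_system", "brain"},
    "octopus":   {"alive", "animal", "nervous_system", "brain"},
}

def narrow(candidates, required_trait):
    """One winnowing step: keep only organisms with the required trait."""
    return {o for o in candidates if required_trait in traits[o]}

L = set(traits)                  # all living organisms
A = narrow(L, "animal")          # the set of all animals
N = narrow(A, "nervous_system")  # animals with nervous systems
B = narrow(N, "brain")           # animals with brains

# At each stage one would ask (philosophically, not computationally)
# whether the conditions accumulated so far suffice for mentality.
for label, s in [("L", L), ("A", A), ("N", N), ("B", B)]:
    print(label, sorted(s))
```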
This is the strategy I propose to adopt in chapter two of this e-book. In the process of converging from L to M, I hope to build up a set of conditions that may be relevant to the possession of a mind by an individual. As each new condition is added, the question of whether the set of conditions is necessary and/or sufficient for having a mind will be re-visited.
Henceforth, I shall generally focus on organisms which are developmentally mature and physically normal, as my primary concern is to identify the species whose members can be said to have minds, rather than ascertain which individuals have minds.
If M turns out to be a subset of L, then how should we construct a sufficient set of conditions for a species' being a member of M? What I propose to do is narrow down my search by examining several broad categories of behavioural and biological properties that have been proposed in the philosophical and scientific literature as relevant to having a mind, and sift through them, all the while attempting to put together a constructive definition of "minimal mind". In particular, I discuss sensory capacities, memory, flexible behaviour, the ability to learn, self-directed movement, representational capacity, the ability to correct one's mistakes and possession of a central nervous system. Within each category of "mind-relevant" properties, I examine the different ways in which these properties are realised by different kinds of organisms. The biological case studies which I invoke range from the relatively simple (viruses) to the most complex (vertebrates). In other words, I propose to converge from L towards M within each category of "mind-relevant" properties.
It should be borne in mind that the step-by-step accumulation of necessary and/or sufficient conditions for having a mind may not simply converge towards a single set of animals. M may turn out to be defined by more than one set. There may turn out to be separate "islands of mentality" in the animal kingdom. Nor should it be assumed that animals which are phylogenetically closer to members of M are necessarily smarter.
0.B.2(e) Emotions
A philosophical consensus has emerged regarding the general features of emotions in human beings (DeSousa, 2003). However, not all of these features are applicable to animals that lack rationality or the use of language. Since there are powerful linguistic and psychological reasons for regarding the occurrence of emotions in animals as a "given", it follows that any properties of human emotions which presuppose the use of reason or language are inessential features of emotion as such.
I argue that any adequate theory of animal emotions should be able to account for all of the remaining features of human emotions, especially the intentionality or "aboutness" of emotions. A good theory of animal emotions should also tell us how to identify and distinguish different kinds of emotions in animals, in a way that allows us to ascertain which animals have them. I propose one such theory in chapter three (Panksepp, 1998), but defer the question of animal consciousness until chapter four.
0.B.2(f) Phenomenally conscious mental states
In the fourth chapter, I discuss the issue of animal consciousness, in three parts. Part A is a summary in tabular form of my extensive findings relating to phenomenal consciousness (which I intend to publish at greater length in a philosophical journal). Parts B and C deal with rational and moral agency in animals.
Phenomenal consciousness (or subjective awareness) remains a subject concerning which philosophers have spectacularly failed to reach agreement - even regarding the most fundamental questions. The source of this disagreement, I suggest, is an over-reliance on philosophical analysis. Analysis alone, I contend, is of little use so long as we are still ignorant of what phenomenal consciousness is, how it arose, and what it is for.
I then address the ways in which scientists measure the occurrence of what they call primary consciousness - which, I argue, can be regarded as a low-grade form of phenomenal consciousness. Although it receives far less philosophical attention than it merits, the scientific literature relating to what neurologists term "primary consciousness" in humans and other animals is massive, and the behavioural criteria for identifying it have been carefully refined over the last few decades. The standard observational criterion used by scientists is accurate report, which assumes that any individual who is able to report accurately on events going on in her surroundings is conscious. Since this "reporting" does not have to be verbal - one could press a button to report what one sees - consciousness in non-human animals is a legitimate object of scientific study. I review other behavioural criteria that have been proposed as measures of consciousness.
Finally, I summarise the neurophysiological criteria developed by scientists for identifying conscious states, and discuss the justification for ascribing phenomenal (primary) consciousness to animals, either on the basis of anatomical arguments from homology or on the basis of arguments from analogy.
I argue that the various philosophical usages of the word "consciousness" do not reflect natural divisions in the animal world, and propose an alternative set of categories, based on animal studies.
0.B.2(g) Higher-order mental states
Owing to limitations of space, I attempt no more than a brief overview of the current literature regarding "higher-order" states in animals (e.g. Griffin, 1994; Leahy, 1994; Zhang, Srinivasan and Collett, 1995; Hart, 1996; Zhang, Bartsch and Srinivasan, 1996; Whiten and Byrne, 1997; Savage-Rumbaugh et al., 1998; Budiansky, 1998a, 1998b; Pepperberg, 1999; Brannon and Terrace, 2000; Giurfa et al., 2001; Huber, 2001; Reiss and Marino, 2001; Young and Wassermann, 2001; Gallup, 2002; Horowitz, 2002; Zhang, 2002; Weir, Chappell and Kacelnik, 2002; Bekoff, Allen and Burghardt (eds.), 2002; Nissani, 2004), in Appendix B of chapter 4.
The focus of my inquiry in parts B and C of chapter four is: which (if any) non-human animals are capable of rational agency and moral agency? It would be out of keeping with the conservative methodology in this e-book to cite research showing that animals behave in ways that are strongly reminiscent of rational and moral agency in humans, and suggest that the difference between them and us is one of degree. Accordingly, I shall refrain from characterising a piece of animal behaviour as rational unless doing so helps us to explain and model the behaviour better than not doing so.
I propose to answer the question of whether non-human animals are capable of rational agency by assessing the merits of what I consider to be the most compelling philosophical argument to the contrary (Kenny, 1975).
When investigating claims of moral agency in other animals, I propose to examine general features of the moral life led by typical human moral agents: in particular, the way in which they typically acquire moral virtues, critically evaluate their progress along the path to virtue, and inculcate the virtues in their children. The question I ask is whether non-human animals could achieve anything that parallels these general traits.
0.B.2(h) Drawing the Line
While some scientists and philosophers (Godfrey-Smith, 2001; Birch, 2001; Chalmers, 1996, p. 292) have argued that mental states occupy a continuum from human beings to the smallest cell, others (Humphrey, 1993, pp. 195-196) maintain that there is a clear-cut divide between organisms that have minds and those that do not.
Is there anything in my proposed methodology that commits me to either view? On the approach I am advocating, before we decide to explain a certain kind of behaviour in an organism in terms of its mental states, we should ask: "What is the most appropriate way of describing this behaviour?" In other words, is there some scientific advantage in explaining the behaviour in mentalistic terms - e.g. can it be described more completely or predicted more accurately? Either mental states do or do not further our understanding of the behaviour in question. The decision to impute these states is not one that admits of degree, although the grounds for making the decision might be much stronger for some animals than for others. On methodological grounds, then, I am committed to looking for "on-off" criteria for ascribing these states to organisms. Failure to find them would lend support to the continuum hypothesis.
0.B.2(i) How generous should we be in assessing claims for mental states?
Simplicity is generally regarded as an explanatory virtue, and many philosophers (beginning with the Spanish philosopher Gomez Pereira, whose claim that animals are true machines predated Descartes' by eighty years) have invoked Occam's razor, which tells us "never to multiply entities beyond necessity", to dispense with the attribution of minds to animals, on the grounds that this is the simplest reading of the available evidence.
Other philosophers have used Occam's razor in a contrary sense, arguing that the most parsimonious explanation of the pervasive neurophysiological and behavioural resemblances between human beings (who can certainly feel) and animals is that animals also have feelings (e.g. Griffin, 1976, p. 20). However, one problem with this argument is that similarity comes in degrees. How similar does an animal's brain have to be to ours before we can be sure it has mental states? Alternatively, if having a mind depends on possessing a "critical mass" of neural organisation, even animals with brains like ours may miss out, if they fall below the cut-off point.
Morgan's Canon is also used to dispense with mentalistic explanations:
In no case may we interpret an action as the outcome of the exercise of a higher faculty, if it can be interpreted as the outcome of one which stands lower in the psychological scale (cited in Bavidge & Ground, 1994).
Even leaving aside worries about its terminology of "higher" and "lower" psychological faculties, Morgan's Canon rests on a hidden assumption. Its key insight - that nature must be parsimonious in the way it "designs" (i.e. selects for) organisms that can adapt to their environment (Bavidge and Ground, 1994, p. 26) - presupposes, as Griffin (1994, p. 115) points out in disputing it, that it is more complicated for nature to generate adaptive behaviour by means of mental states than by other means.
The demand for "simplicity" has proven to be a double-edged sword, leaving us unsure how to wield it.
The methodology which I would like to propose here, for evaluating a claim that a certain kind of behaviour in an organism is indicative of a mental state, is to proceed by asking: "What is the most appropriate way of describing this behaviour?", rather than "What is the simplest way of describing it?" We should use mental states to explain the behaviour of an organism if and only if doing so allows us to describe, model and predict it more comprehensively, and with a degree of empirical accuracy at least as great as that of other modes of explanation.
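The criterion can be pictured as a model-comparison exercise. The following Python sketch is my own toy illustration (the behavioural record and both "models" are invented for the purpose): a mentalistic account earns its keep only if it predicts the recorded behaviour at least as accurately as a non-mentalistic, stimulus-response account.

```python
# Toy illustration of the proposed criterion: compare the predictive
# accuracy of a mentalistic and a non-mentalistic account of the same
# (invented) behavioural record.

def predictive_accuracy(model, observations):
    """Fraction of observed behaviours the model predicts correctly."""
    hits = sum(1 for situation, behaviour in observations
               if model(situation) == behaviour)
    return hits / len(observations)

# Hypothetical behavioural record: (situation, observed response) pairs.
observations = [("food_left", "turn_left"), ("food_right", "turn_right"),
                ("food_left", "turn_left"), ("shock_left", "turn_right")]

# Non-mentalistic model: a fixed stimulus-response lookup table.
def reflex_model(situation):
    return {"food_left": "turn_left",
            "food_right": "turn_right"}.get(situation, "turn_left")

# "Mentalistic" model: explains behaviour via an imputed desire
# (seek food, avoid harm) rather than a fixed reflex table.
def desire_model(situation):
    stimulus, side = situation.split("_")
    wanted = (stimulus == "food")
    return "turn_" + (side if wanted
                      else ("right" if side == "left" else "left"))

for name, model in [("reflex", reflex_model), ("desire", desire_model)]:
    print(name, predictive_accuracy(model, observations))
```

On this toy record the desire-based model out-predicts the reflex model, which is precisely the kind of scientific advantage the criterion demands before mental states may be imputed.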
According to the criterion I have proposed, there is nothing wrong with using mentalistic explanations of an animal's behaviour to complement genetic, neurophysiological or evolutionary ones, so long as there is some scientific advantage in doing so: a more comprehensive description or better predictions of the behaviour.
0.B.2(j) Appropriate Sources of Evidence
(i) Singular versus replicable observations
Animal anecdotes have been in disrepute since the days of Darwin and Romanes, who were prepared to rely on second-hand accounts of observations from naturalists and pet-owners who wrote to them. However, I would suggest that the insistence by Thorndike and Morgan on controlled, replicable laboratory experiments, while commendable for its scientific rigour, misses the point. From a scientific perspective, the key question to be asked when assessing an observation is not "Is it replicable?" but "Is it reliable?" Laboratory experiments which have been replicated will score highly on an index of reliability, as the risk of error is low. But the risk of error is also low when a singular observation is made by an acknowledged expert in the field. I conclude that there is no good scientific reason for excluding such a singular observation. What scientists should then proceed to do is further investigate this observation and endeavour to explain it within some general framework.
As regards controlled experiments, I have decided to err on the side of caution and not give credence to experimental observations that other researchers have tried but failed to replicate. Recent research, which has not yet been replicated, will be admitted, if published in a reputable scientific journal, but any new claims made will be treated with caution. I also reject studies whose follow-up has produced conflicting results.
Lastly, I will also avoid laboratory studies that rely on observations of a single individual animal.
(ii) Laboratory versus natural observations
There is something to be said for observing animals in their natural state, as cognitive ethologists do, simply because such observations maintain the network of relationships between an organism and its environment. An organism in a laboratory is an organism uprooted: the nexus of connections is severed, leaving us with less information about the interactions which characterise its lifestyle. Rigour is secured, but at a high price.
On the other hand, if the research is designed to measure the relation between a small number of variables, laboratory controls eliminate contamination by external factors.
In other words, the methodologies of behavioural science and ethology should be seen as complementary, rather than contradictory. Observations of animals in the wild will therefore be admitted if they are reliably attested by an acknowledged expert in the field.
0.B.3(a) Is my methodology ethically biased?
I wish to declare at the outset that I have no intention of deciding in favour of any particular theory of ethics. There are, however, certain theories of ethics that are at odds with the approach I will be putting forward in this e-book, so I shall briefly state here why I reject these theories.
The three theories which I wish to exclude on purely methodological grounds are prescriptivism, subjectivism and relativism. Prescriptivist theories define goodness in terms of obedience to the arbitrary decrees of some higher authority (e.g. God or the state). (The term "arbitrary" is crucial here: if the decrees require justification on rational grounds, then that justification, and not the decree, becomes the basis for defining goodness.) Subjectivist and relativist theories define goodness in terms of either individual or social preferences. None of these theories even attempts to derive, on rational grounds, moral norms that can be used to handle conflicts of interests. One may choose to play a "language game" in which the word "goodness" is defined as what a whimsical deity allegedly likes, or what I happen to like, or what the society I live in currently decrees that it likes, but if one elects to play any of these games, then any rational argumentation about whether something is good or not becomes impossible. There can be no argument about matters of taste.
There is a deeper problem with playing this game: why should what I like, or what society likes, be called "good"? Why not call it "bad" instead? What if I prefer to define myself as evil, and "good" as what I don't like? This language game could be played - but there seem to be very few people who wish to play it. Why? I suggest that the simple identification of "what I like" with the term "good" (rather than "bad") is intuitively appealing, precisely because most of the things I happen to like are in fact good for me, in some way that is evident to everyone, and hence objectively grounded. (Likewise, the identification of "good" with what society wants yields correct answers in the majority of cases, because for any viable society, most of the things it wants - such as public sanitation or defence against hostile enemies - will be good of necessity.) In other words, the superficial attraction of playing a "language game" whereby good is identified with the likes and dislikes of some entity (such as myself) is in fact a borrowed one: it presupposes a certain ontological background, in which interested parties generally pursue what is objectively good for them.
Another way of diagnosing the failure of the three theories I criticise is that none of them provides a systematic account of goodness. Their moral "norms" (if one can call them that) are ultimately based on someone's whims - be they one's own, society's, or those of a capricious deity who supposedly makes rules for no particular reason. (I am of course aware that there are other, more rational varieties of theistic belief. However, these forms of theism typically envisage moral norms not as prescriptions, but as somehow grounded in the nature of agents and of things, and hence amenable to rational enquiry.)
What of the remaining accounts of ethics? One of the points I argue for in this e-book is that one and the same moral norm can often be justified within the framework of different ethical theories. Rather than attempting to decide which of these theories is "right", I shall limit myself to discussing to what degree each of the major theories can accommodate the conclusions I reach regarding our duties and entitlements vis-a-vis other organisms. Obviously some theories, such as contractualism, have in-built limitations of scope as regards the focus of their moral concern, but I shall argue that even these theories can generate more powerful ethical conclusions than is commonly believed.
0.B.3(b) What constraints should we impose on an ethical theory based on interests?
Although it could be argued that any approach to ethics should accord with our common sense intuitions, specifying these intuitions in advance is easier said than done, and I have chosen to avoid this path. Instead, I propose a universality requirement: any interest-based theory should be broad enough to cover the gamut of interests, no matter whose they may be. A theory that failed to meet this requirement would be an unreliable one, as its conclusions could in principle be overturned by the invocation of interests lying outside its scope.
0.B.3(c) How do we define and identify interests?
In chapter one, I argue that a sentient individual's interests cannot be plausibly identified with the totality of either its actual desires or its counterfactual desires (i.e. what an individual would want if he/she were fully informed), and I discuss Gary Varner's (1998) proposal that something may also be in an individual's interests if it serves some biologically based need that the individual has. Once we grant that the satisfaction of a sentient being's biological needs is in its interests, it is much easier to argue that the satisfaction of a non-sentient organism's biological needs is also in its interests.
On the other hand, the ascription of interests to non-living artifacts such as cars is generally considered to be a reductio ad absurdum for any theory of ethics. Any theory of ethics which posits that organisms not only have psychological interests (desires) but biological ones as well must therefore explain how these differ from the needs of non-living artifacts.
0.B.3(d) The "is-ought" gap
How can we derive "ought" statements from factual "is"-statements regarding organisms' biological interests? The derivation I put forward in this e-book is not an analytic one. Instead, I simply assume that moral oughts are a fact of life, and that they have some sort of rational basis. (The alternative, which I considered and rejected above, is to accept some version of prescriptivism, subjectivism or relativism.)
I then proceed by asking what kind of facts about the world could plausibly be said to ground moral oughts. I argue in chapters one and five that if anything, biological interests supply a firmer basis for these "oughts" than actual or even hypothetical (fully informed) desires could.
0.B.3(e) How do we resolve conflicts of interests between organisms?
Our next task is to find some way of deciding what to do when the interests of different organisms are in conflict. Accordingly, the ethical focus of this e-book will address two major questions. First, what duties are imposed on us by the recognition of these interests? (Putting it another way, what are living things entitled to from us?) Second, what are we, as human organisms, morally entitled to do to other organisms, in order to promote our own interests?
0.B.3(f) A general account of goodness
The general definition of goodness which I use in this e-book - and which, I suggest, is common to competing theories of ethics - is as follows: goodness is that which is in some party's interests. Where ethical theories differ is in their answers to the questions of what kinds of interests are paradigm cases and/or of paramount importance.