BY DAVID J. GUNKEL
Ethics is an exclusive undertaking. In confronting and dealing with others—whether another human person, a non-human animal, or an artifact—we make a decision between who is worthy of consideration and respect and what remains a mere thing that can be used and even abused as we see fit. This decision matters because it divides the world of entities into other persons who count versus mere things that do not.
Further complicating matters is the fact that this distinction is neither fixed nor stable. The boundary separating who is a person from what is a thing has been flexible, dynamic, and alterable. This is actually a good thing. Ethics and law both innovate and advance by critically questioning their own limitations and accommodating many previously excluded or marginalized others, recognizing as persons what had previously been considered things.
The question that now confronts us at the beginning of the 21st century is the machine question.1 That is: Can we—or should we—recognize AI, robots, and other seemingly intelligent artifacts as socially significant others with some claim on us, or are they nothing more than mere things, i.e., instruments, tools, or pieces of property? Responses to this question tend to pull in two opposite directions.
On the one side, those opposing any form of social status for artifacts assert that these technologies are just things or objects that do not possess and will not come to possess the necessary conditions or capabilities to be considered something more. On the other side, there are those who favor extending some aspect of social status to AI and robots by arguing that these technological things either have or will soon be able to possess one or more of the necessary and essential properties to be something other than a mere thing.
What is interesting about this debate is not what makes the one side different from the other; what is interesting is what both sides already agree upon and share in order to come into conflict in the first place. And the real problem is not that this shared philosophical scaffolding has somehow failed to work in the face (or the faceplate) of AI and robots. The problem is that it has and continues to work all-too-well, exerting its influence and operations almost invisibly and without question.
This essay is designed to respond to this problem. It will proceed in four discrete steps or movements.
1) We will begin by first identifying and critically examining three seemingly intractable philosophical difficulties with the standard method for deciding questions of moral status.
2) In response to these demonstrated difficulties, the second part will introduce and describe an alternative model, one which shifts the emphasis from internal properties of the individual entity to extrinsic social circumstances and relationships.
3) In the third part, we will then take up and consider one important objection to this relational turn and provide a response to this criticism.
4) Finally, the essay concludes by explaining how the goal in all of this is not to complicate things but to introduce and formulate a meta-ethical theory that is more agile in its response to the unique opportunities and challenges of the 21st century.
1) SOP—The Properties Approach
In responding to others (and doing so responsibly), we typically need to distinguish between what is a thing and who is another person. As Roberto Esposito, who arguably wrote the book on this subject, explains: “If there is one assumption that seems to have organized human experience from its very beginnings it is that of a division between persons and things. No other principle is so deeply rooted in our perception and in our moral conscience…”2 What really matters here is not this difference, but how this differentiation comes to be decided and justified. In order for something to have anything like moral or legal status, it would need to be recognized as another person and not just a thing.
Standard approaches to addressing and resolving this typically proceed by following a rather simple and straightforward decision-making process or what could be called a moral status algorithm. In this transaction, we first make a determination as to what property or set of properties are sufficient for something to have a particular claim to moral recognition and respect. We then investigate whether an entity actually possesses that property or not. Finally, and by applying the criteria decided in step one to the entity identified in step two, it is possible to “objectively” determine whether the entity in question either can have a claim to moral status or is to be regarded as a mere thing.
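Purely for illustration, the three-step procedure can be sketched as a simple decision function. The property name ("sentience"), the entity representation, and the detection test below are hypothetical placeholders, not anything the essay endorses; indeed, the essay's point is that each of the three steps is philosophically contested.

```python
# Illustrative sketch of the three-step "moral status algorithm":
#   Step 1: stipulate the qualifying property or set of properties.
#   Step 2: supply some test for whether an entity possesses each property.
#   Step 3: apply the criteria to the entity in question.

def moral_status(entity, qualifying_properties, has_property):
    """Return True if the entity possesses every qualifying property."""
    return all(has_property(entity, p) for p in qualifying_properties)

# Hypothetical usage: a toy "detector" that merely reads a declared
# attribute -- standing in for the (unsolved) problem of detection.
robot = {"name": "example-robot", "declared": {"sentience": False}}
result = moral_status(
    robot,
    ["sentience"],  # Step 1: the stipulated criterion (contested)
    lambda e, p: e["declared"].get(p, False),  # Step 2: the "test" (contested)
)
print(result)  # the Step 3 verdict
```

The sketch makes the essay's diagnosis concrete: the function is trivial, but each of its inputs (which properties count, how to define them, how to detect them) is exactly where the difficulties of determination, definition, and detection arise.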
This way of proceeding sounds intuitively correct and natural. On this account, questions regarding moral status are firmly anchored in and justified by the essential nature or being of the entity that is determined to possess them. In this transaction, what something is determines how it ought to be treated. Or to put it in more formalistic terminology: ontology precedes and determines social, moral, and even legal status. But there are three problems with the approach—determination, definition, and detection.
Determination – How does one determine which exact property or set of properties are necessary and sufficient for something to be a moral subject? In other words, which one, or ones, count? The history of moral philosophy can, in fact, be read as something of an on-going debate and struggle over this matter with different properties vying for attention at different times. And in this process many properties—that at one time seemed both necessary and sufficient—have turned out to be either spurious, prejudicial, or both.
Definition – Irrespective of which property (or set of properties) is selected, they each have problems with definition. Take, for example, the property of consciousness, which is often utilized in the discussions and debates regarding moral status for intelligent machines and artifacts. Unfortunately, there is no univocal and widely accepted definition. The problem, as Max Velmans points out in his book on the subject, is that the term unfortunately “means many different things to many different people, and no universally agreed core meaning exists.”3 In fact, if there is any general agreement among philosophers, psychologists, cognitive scientists, neurobiologists, AI researchers, and robotics engineers regarding the property of consciousness, it is that there is little or no agreement when it comes to defining and characterizing the concept.
Detection – Third, there are epistemological difficulties with detection. Most (if not all) of the properties that are considered morally relevant, like consciousness, sentience or the experience of pain are internal mental states or capabilities that are not immediately accessible or directly observable. This epistemological barrier is what philosophers commonly call “the problem of other minds.” Here is how Paul Churchland describes it: “How does one determine whether something other than oneself—an alien creature, a sophisticated robot, a socially active computer, or even another human—is really a thinking, feeling, conscious being; rather than, for example, an unconscious automaton whose behavior arises from something other than genuine mental states.”4
Although philosophers, psychologists, cognitive scientists, and neuroscientists have thrown an impressive amount of argumentative and experimental effort at the problem, so far it has not been resolved in any way approaching definitive evidence. In other words, no matter what property is identified, it is always possible to sow reasonable doubt concerning its actual presence. Even if the problem of other minds is not the intractable philosophical dilemma it is often advertised to be, it is sufficient for sowing doubt about the presence or absence of the qualifying criteria and, by extension, rendering decisions about moral status tentative, indeterminate, and uncertain.
Perhaps the best example of the problems with the properties approach can be seen in recent events surrounding former Google engineer Blake Lemoine and the LaMDA large language model. In June of 2022, Lemoine claimed that the LaMDA system was conscious and therefore was a person deserving of moral respect and consideration. Google shot back, not only arguing that LaMDA, like any computer application, was not conscious but suspending and then eventually firing Lemoine. Both sides in this debate asserted and sought to justify their positions by mobilizing the properties approach. And each side struggled with the problems of determination, definition, and detection. In fact, the debate itself circulated around an inability to resolve these issues.
2) The Relational Turn
In response to these problems, philosophers—especially in the continental and feminist STS traditions—have advanced other methods for resolving the question of moral status that can be characterized as a relational turn in ethics. This alternative has three pivotal characteristics:
Relational – Moral status is decided and conferred not on the basis of subjective or individuated internal properties determined in advance but according to objectively observable, extrinsic social relationships. As we encounter and interact with others—whether they be another human person, a non-human animal, or a seemingly intelligent machine—the other is first and foremost experienced in relationship to us. Consequently, the question of moral status does not depend on what the other is in its essence but on how it stands in relationship to us and how we decide to respond and take responsibility for our mode of responding. In this transaction relations are prior to the things related. Or as Karen Barad has argued, the relationship comes first—in both temporal sequence and status—and it takes precedence over the individual relata.5
This change in perspective is not just a theoretical proposal; it has been experimentally confirmed in numerous social science investigations. The media equation studies undertaken by Byron Reeves and Clifford Nass, for example, demonstrated that human users will accord computers and other technological artifacts social standing similar to that of another human person and that this occurs as a product of the extrinsic social interaction, irrespective of the intrinsic properties (actually known or not) of the individual entities involved.6 Social standing, in other words, is a mindless operation. In two senses: it does not require resolution of the problem of other minds and it is something that we do automatically and often without thinking. And these results have been verified in “robot abuse studies,” where HRI (human robot interaction) researchers have found that human subjects respond emotionally to robots and express empathic concern for the machines irrespective of the cognitive properties or inner workings of the device.
Phenomenological – This alternative is phenomenological or (if you prefer) radically empirical in its epistemological commitments. Because moral status is dependent upon extrinsic social circumstances and not internal properties, the seemingly irreducible problem of other minds is not some fundamental epistemological limitation that must be addressed and resolved prior to decision making. Instead of being derailed by the epistemological problems and complications of other minds, the relational turn immediately affirms and acknowledges this difficulty as the basic condition of possibility for ethics as such.
Consequently, “the ethical relationship,” as the French philosopher Emmanuel Levinas explained, “is not grafted on to an antecedent relationship of cognition; it is a foundation and not a superstructure…It is then more cognitive than cognition itself, and all objectivity must participate in it.”7 Ethics, then, not only transpires prior to and in advance of resolving these epistemological questions; it provides the foundation for addressing and responding to these questions in the first place.
This means that the order of precedence in moral decision making should be reversed. Internal properties do not come first and then moral respect follows from this ontological fact. We have things backwards. We project the morally relevant properties onto or into those others who we have already decided to treat as being socially and morally significant. In social situations, then, we always and already decide between who counts as morally significant and what does not and then retroactively justify these actions by “finding” the essential properties that we believe motivated this decision-making in the first place. Properties, therefore, are not the intrinsic prior condition for moral status. They are products of extrinsic social interactions with and in the face of others.
Diverse – Finally, making moral status dependent on consciousness or other psychological capabilities belonging to the individual is thoroughly Cartesian. Other cultures, distributed across time and space, do not divide up and make sense of the diversity of being in this arguably binary fashion. They perform decisive cuts separating the who from the what according to other ways of seeing, valuing, and acting.
And we can identify alternative ways of organizing social relationships by considering cosmologies that are not part of the Western philosophical lineage. As Archer Pechawis explains in the essay Making Kin with the Machines:
nēhiyawēwin (the Plains Cree language) divides everything into two primary categories: animate and inanimate. One is not ‘better’ than the other, they are merely different states of being. These categories are flexible: certain toys are inanimate until a child is playing with them, during which time they are animate. A record player is considered animate while a record, radio, or television set is inanimate. But animate or inanimate, all things have a place in our circle of kinship or wahkohtowin.8
This alternative formulation runs counter to the dominant ways of thinking—seeing the boundary between what Western ontologies call “person” and “thing” as being endlessly flexible, permeable, and more of a continuum than an exclusive opposition.
Similar opportunities/challenges are available by way of other non-Western religious and philosophical traditions. In her investigation of the social position of robots in Japan, Jennifer Robertson finds a remarkably different way of organizing the difference between living persons and artificially designed/manufactured things:
Inochi, the Japanese word for ‘life,’ encompasses three basic, seemingly contradictory but inter-articulated meanings: a power that infuses sentient beings from generation to generation; a period between birth and death; and, most relevant to robots, the most essential quality of something, whether organic (natural) or manufactured. Thus robots are experienced as ‘living’ things. The important point to remember here is that there is no ontological pressure to make distinctions between organic/inorganic, animate/inanimate, human/nonhuman forms. On the contrary, all of these forms are linked to form a continuous network of beings.9
These are not the only available alternatives, and, by citing these two instances, the intention is not to suggest that these different ways of thinking difference differently are somehow “better” than those developed in Western philosophical and religious traditions. These alternatives are just different and, in being different, offer the opportunity for critically questioning what is assumed to be true and often goes by without saying. Gesturing in the direction of other ways of thinking and being can have the effect of shaking one’s often unquestioned confidence in cultural constructs that are already not natural, universal, or eternally true.
3) Critical Recoil and Reply
The relational turn introduces an alternative that supplies other ways of responding to and taking responsibility for others and other forms of otherness. But it is by no means a panacea or some kind of moral theory of everything. It just arranges for other kinds of questions and modes of inquiry that are seemingly more attentive to the exigencies of life as it is encountered here and now at the beginning of the 21st century. Having said that, it is important to recognize that relational ethics is not without challenges.
For all its opportunities for thinking things otherwise, the relational turn risks exposure to the charge of moral relativism. And there have been a number of recent publications that develop this line of criticism, like this one from Kęstutis Mosakas in the book Smart Technologies and Fundamental Rights:
As Simon Kirchin explains, ‘the key relativistic thought is that the something that acts as a standard will be different for different people, and that all such standards are equally authoritative.’ Particularly problematic is the extreme version, which denies there being any moral judgments or standards that could be objectively true or false. Given the apparent rejection of any such standard by Coeckelbergh and Gunkel, they seem to be hard-pressed to explain how the radically relational ethics that they are advocating avoids the extreme version.10
The perceived problem with relativism (especially the extreme version of it) is that it encourages and supports a situation where—it seems—anything goes and all things are permitted. But this particular understanding of “relative” is itself limited and the product of a culturally specific understanding of and expectation for ethics.
Robert Scott (1967), for instance, understands “relativism” entirely otherwise—as a positive rather than negative term: “Relativism, supposedly, means a standardless society, or at least a maze of differing standards, and thus a cacophony of disparate, and likely selfish, interests. Rather than a standardless society, which is the same as saying no society at all, relativism indicates circumstances in which standards have to be established cooperatively and renewed repeatedly.”11 This means that one can remain critical of “moral relativism,” in the usual dogmatic sense of the phrase, while being open and receptive to the fact that moral standards—like many social conventions and legal statutes—are socially constructed formations that are subject to and the subject of difference.
Charles Ess calls this alternative “ethical pluralism.” “Pluralism,” he writes, “stands as a third possibility—one that is something of a middle ground between absolutism and relativism... Ethical pluralism requires us to think in a ‘both/and’ sort of way, as it conjoins both shared norms and their diverse interpretations and applications in different cultures, times, and places.”12 Others, like Rosi Braidotti, call upon and mobilize “a form of non-Western perspectivism,” which exceeds the grasp of Western epistemology. “Perspectivism,” as Eduardo Viveiros de Castro explains in his work with Amerindian traditions, “is not relativism, that is the affirmation of the relativity of truth, but relationalism, through which one can affirm that the truth of the relative is the relation.”13
For Braidotti, then, perspectivism is not just different from but is “the antidote to relativism.” “This methodology,” as she explains, “respects different viewpoints from equally materially embedded and embodied locations that express the degree of power and quality of experience of different subjects.”14 Braidotti therefore recognizes that what is called “truth” is always formulated and operationalized from a particular subject position, which is dynamic, different, and diverse. The task is not to escape from these differences in order to occupy some fantastic transcendental vantage point but to learn how to take responsibility for these inescapable alterations in perspective and their diverse social, moral, and material consequences. The relational turn, therefore, does not endorse relativism (as it is typically defined) but embodies and operationalizes an ethical pluralism, relationalism, or perspectivism that complicates the simple binary logic that defines relativism in opposition to moral absolutism.
4) Summary and Conclusions
Ultimately the question concerning the moral status of others and other forms of otherness, like AI and robots, is not really about the artifact. It is about us and the limits of who is included in and what comes to be excluded from that first-person plural pronoun, “we.” It is about how we decide—together and across differences—to respond to and take responsibility for our shared social reality. It is, then, in responding to the moral opportunities and challenges posed by seemingly intelligent and social artifacts that we are called to take responsibility for ourselves, for our world, and for those others who are encountered here.
In devising responses to these challenges, we can obviously deploy the standard properties approach. This method has the weight of history behind it and therefore constitutes what can be called the default setting for addressing questions concerning social status. But this approach, for all its advantages, also has demonstrated difficulties with the determination, definition, and detection of the qualifying essential properties. This does not mean, it is important to point out, that the properties approach is somehow wrong, misguided, or refuted on this account. It just means that this way of thinking—despite its almost unquestioned acceptance within Western traditions—has limitations and that these limitations are becoming increasingly evident in the face or the faceplate of AI and robots—in the face of others who are and remain otherwise.
As an alternative, the relational turn formulates an approach to addressing the question of moral status that is situated and oriented otherwise. This alternative circumvents many of the problems encountered in the properties approach by arranging for an ethics that is relational, phenomenological, and diverse. Whether this alternative ultimately provides a better way to formulate moral decision-making is something that will need to be determined and decided in the face of others and other kinds of otherness.
1. Gunkel, D. J. (2012). The Machine Question: Critical Perspectives on AI, Robots and Ethics. Cambridge, MA: MIT Press.
2. Esposito, R. (2015). Persons and Things. Trans. Zakiya Hanafi. Cambridge: Polity, 1.
3. Velmans, M. (2000). Understanding Consciousness. New York: Routledge, 5.
4. Churchland, P. (1999). Matter and Consciousness. Cambridge, MA: MIT Press, 67.
5. Barad, K. (2007). Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham, NC: Duke University Press, 136-137.
6. Reeves, B. and C. Nass. (1996). The Media Equation: How People Treat Computers, Television and New Media Like Real People and Places. Cambridge: Cambridge University Press.
7. Levinas, E. (1987). Collected Philosophical Papers. Trans. Alphonso Lingis. Dordrecht: Martinus Nijhoff, 56.
8. Lewis, J. E., N. Arista, A. Pechawis, and S. Kite. (2018). “Making Kin with the Machines.” Journal of Design and Science 16. https://doi.org/10.21428/bfafd97b
9. Robertson, J. (2014). “Human Rights vs. Robot Rights: Forecasts from Japan.” Critical Asian Studies 46(4), 576. http://dx.doi.org/10.1080/14672715.2014.960707
10. Mosakas, K. (2021). “Machine Moral Standing: In Defense of the Standard Properties-Based View.” In John-Stewart Gordon (ed.), Smart Technologies and Fundamental Rights. Leiden and Boston: Brill Rodopi, 95.
11. Scott, R. L. (1967). “On Viewing Rhetoric as Epistemic.” Central States Speech Journal 18, 264. https://doi.org/10.1080/10510976709362856
12. Ess, C. (2009). Digital Media Ethics, Cambridge: Polity Press, 21.
13. Viveiros de Castro, E. (2015). The Relative Native: Essays on Indigenous Conceptual Worlds. Trans. Martin Holbraad, David Rodgers and Julia Sauma, Chicago: HAU Press, 24.
14. Braidotti, R. (2019). Posthuman Knowledge. Cambridge: Polity Press, 90.