Schwitzgebel & Garza September 15, 2015 AI Rights, p. 1
A Defense of the Rights of Artificial Intelligences
Eric Schwitzgebel and Mara Garza
Department of Philosophy
University of California at Riverside
Riverside, CA 92521-0201
eschwitz at domain: ucr.edu
September 15, 2015
Abstract:
There are possible artificially intelligent beings who do not differ in any morally relevant respect
from human beings. Such possible beings would deserve moral consideration similar to that of
human beings. Our duties to them would not be appreciably reduced by the fact that they are
non-human, nor by the fact that they owe their existence to us. Indeed, if they owe their
existence to us, we would likely have additional moral obligations to them that we don’t
ordinarily owe to human strangers – obligations similar to those of parent to child or god to
creature. Given our moral obligations to such AIs, two principles for ethical AI design
recommend themselves: (1) design AIs that tend to provoke reactions from users that accurately
reflect the AIs’ real moral status, and (2) avoid designing AIs whose moral status is unclear.
Since human moral intuition and moral theory evolved and developed in contexts without AI,
those intuitions and theories might break down or become destabilized when confronted with the
wide range of weird minds that AI design might make possible.
Word count: approx 11,000 (including notes and references), plus one figure
“I am thy creature, and I will be even mild and docile to my natural lord and king if thou
wilt also perform thy part, the which thou owest me. Oh, Frankenstein, be not equitable
to every other and trample upon me alone, to whom thy justice, and even thy clemency
and affection, is most due. Remember that I am thy creature; I ought to be thy Adam…”
(Frankenstein’s monster to his creator, Victor Frankenstein, in Shelley 1818/1965, p. 95).
We might someday create entities with human-grade artificial intelligence. Human-grade
artificial intelligence – hereafter, just AI, leaving human-grade implicit – in our intended sense of
the term, requires both intellectual and emotional similarity to human beings, that is, both
human-like general theoretical and practical reasoning and a human-like capacity for joy and
suffering. Science fiction authors, artificial intelligence researchers, and the (relatively few)
academic philosophers who have written on the topic tend to think that such AIs would deserve
moral consideration, or “rights”, similar to the moral consideration we owe to human beings.[1]
Below we provide a positive argument for AI rights, defend AI rights against four
objections, recommend two principles of ethical AI design, and draw two further conclusions:
first, that we would probably owe more moral consideration to human-grade artificial
[1] Classic examples in science fiction include Isaac Asimov’s robot stories (esp.
1954/1962, 1982) and Star Trek: The Next Generation, especially the episode “The Measure of a
Man” (Snodgrass and Scheerer 1989). Academic treatments include Basl 2013; Bryson 2013;
Bostrom and Yudkowsky 2014; Gunkel and Bryson, eds., 2014. See also Coeckelbergh 2012
and Gunkel 2012 for critical treatments of the question as typically posed.
We use the term “rights” here to refer broadly to moral considerability, moral patiency, or
the capacity to make legitimate ethical claims upon us.
intelligences than we owe to human strangers, and second, that the development of AI might
destabilize ethics as an intellectual enterprise.
1. The No-Relevant-Difference Argument.
Our main argument for AI rights is:
Premise 1. If Entity A deserves some particular degree of moral consideration and Entity
B does not deserve that same degree of moral consideration, there must be some
relevant difference between the two entities that grounds this difference in moral
status.
Premise 2. There are possible AIs who do not differ in any such relevant respects from
human beings.
Conclusion. Therefore, there are possible AIs who deserve a degree of moral
consideration similar to that of human beings.
A weaker version of this argument, which we will not focus on here, substitutes “mammals” or
some other term from the animal rights literature for “human beings” in Premise 2 and the
Conclusion.[2]
The argument is valid: The conclusion plainly follows from the premises. We hope that
most readers will also find both premises plausible and thus accept the argument as sound. To
deny Premise 1 renders ethics implausibly arbitrary. All four of the objections we consider
below are challenges to Premise 2.
The argument is intentionally abstract. It does not commit to any one account of what
constitutes a “relevant” difference. We believe that the argument can succeed on a variety of
[2] On sub-human AI and animal rights, see especially Basl 2013, 2014.
plausible accounts. On a broadly Kantian view, rational capacities would be the most relevant.
On a broadly utilitarian view, capacity for pain and pleasure would be most relevant. Also
plausible are nuanced or mixed accounts or accounts that require entering certain types of social
relationships. In Section 2, we will argue that only psychological and social properties should be
considered directly relevant to moral status.
The argument’s conclusion is intentionally weak: there are possible AIs who deserve a
degree of moral consideration similar to that of human beings. This weakness avoids burdening
our argument with technological optimism or commitment to any particular type of AI
architecture. The argument leaves room for strengthening. For example, an enthusiast for strong
“classical” versions of AI could strengthen Premise 2 to “There are possible AIs designed along
classical lines who…” and similarly strengthen the Conclusion. Someone who thought that
human beings might differ in no relevant respect from silicon-based entities, or from distributed
computational networks, or from beings who live entirely in simulated worlds (Egan 1997,
Bostrom 2003), could also strengthen Premise 2 and the Conclusion accordingly.
One might thus regard the No-Relevant-Difference Argument as a template that permits
at least two dimensions of further specification: specification of what qualifies as a relevant
difference and specification of what types of AI possibly lack any relevant difference.
The No-Relevant-Difference Argument is humanocentric in that it takes humanity as a
standard. This is desirable because we assume it is less contentious among our interlocutors that
human beings have rights (at least “normal” human beings, setting aside what is sometimes
called the problem of “marginal cases”) than it is that rights have any specific basis such as
rationality or capacity for pleasure. If a broader moral community someday emerges, it might be
desirable to recast the No-Relevant-Difference Argument in correspondingly broader terms.