Journal ArticleDOI

A Defense of the Rights of Artificial Intelligences

01 Sep 2015-Midwest Studies in Philosophy (John Wiley & Sons, Ltd (10.1111))-Vol. 39, Iss: 1, pp 98-119
TL;DR: In this paper, two principles for ethical AI design recommend themselves: (1) design AIs that tend to provoke reactions from users that accurately reflect the AIs' real moral status, and (2) avoid designing AIs whose moral status is unclear.
Abstract: There are possible artificially intelligent beings who do not differ in any morally relevant respect from human beings. Such possible beings would deserve moral consideration similar to that of human beings. Our duties to them would not be appreciably reduced by the fact that they are non-human, nor by the fact that they owe their existence to us. Indeed, if they owe their existence to us, we would likely have additional moral obligations to them that we don’t ordinarily owe to human strangers – obligations similar to those of parent to child or god to creature. Given our moral obligations to such AIs, two principles for ethical AI design recommend themselves: (1) design AIs that tend to provoke reactions from users that accurately reflect the AIs’ real moral status, and (2) avoid designing AIs whose moral status is unclear. Since human moral intuition and moral theory evolved and developed in contexts without AI, those intuitions and theories might break down or become destabilized when confronted with the wide range of weird minds that AI design might make possible. Word count: approx 10,000 (including notes and references), plus one figure

Summary (1 min read)


Summary

  • There are possible artificially intelligent beings who do not differ in any morally relevant respect from human beings.
  • Such possible beings would deserve moral consideration similar to that of human beings.
  • The authors' duties to them would not be appreciably reduced by the fact that they are non-human, nor by the fact that they owe their existence to the authors.
  • Given their moral obligations to such AIs, two principles for ethical AI design recommend themselves: (1) design AIs that tend to provoke reactions from users that accurately reflect the AIs’ real moral status, and (2) avoid designing AIs whose moral status is unclear.
  • Since human moral intuition and moral theory evolved and developed in contexts without AI, those intuitions and theories might break down or become destabilized when confronted with the wide range of weird minds that AI design might make possible.
  • Word count: approx. 11,000 (including notes and references), plus one figure.


Schwitzgebel & Garza September 15, 2015 AI Rights, p. 1
A Defense of the Rights of Artificial Intelligences
Eric Schwitzgebel and Mara Garza
Department of Philosophy
University of California at Riverside
Riverside, CA 92521-0201
eschwitz at domain: ucr.edu
September 15, 2015

A Defense of the Rights of Artificial Intelligences
Abstract:
There are possible artificially intelligent beings who do not differ in any morally relevant respect
from human beings. Such possible beings would deserve moral consideration similar to that of
human beings. Our duties to them would not be appreciably reduced by the fact that they are
non-human, nor by the fact that they owe their existence to us. Indeed, if they owe their
existence to us, we would likely have additional moral obligations to them that we don’t
ordinarily owe to human strangers – obligations similar to those of parent to child or god to
creature. Given our moral obligations to such AIs, two principles for ethical AI design
recommend themselves: (1) design AIs that tend to provoke reactions from users that accurately
reflect the AIs’ real moral status, and (2) avoid designing AIs whose moral status is unclear.
Since human moral intuition and moral theory evolved and developed in contexts without AI,
those intuitions and theories might break down or become destabilized when confronted with the
wide range of weird minds that AI design might make possible.
Word count: approx 11,000 (including notes and references), plus one figure

A Defense of the Rights of Artificial Intelligences
“I am thy creature, and I will be even mild and docile to my natural lord and king if thou
wilt also perform thy part, the which thou owest me. Oh, Frankenstein, be not equitable
to every other and trample upon me alone, to whom thy justice, and even thy clemency
and affection, is most due. Remember that I am thy creature; I ought to be thy Adam…”
(Frankenstein’s monster to his creator, Victor Frankenstein, in Shelley 1818/1965, p. 95).
We might someday create entities with human-grade artificial intelligence. Human-grade
artificial intelligence (hereafter simply “AI”, leaving “human-grade” implicit in our intended sense of the term) requires both intellectual and emotional similarity to human beings, that is, both
human-like general theoretical and practical reasoning and a human-like capacity for joy and
suffering. Science fiction authors, artificial intelligence researchers, and the (relatively few)
academic philosophers who have written on the topic tend to think that such AIs would deserve
moral consideration, or “rights”, similar to the moral consideration we owe to human beings.[1]
Below we provide a positive argument for AI rights, defend AI rights against four
objections, recommend two principles of ethical AI design, and draw two further conclusions:
first, that we would probably owe more moral consideration to human-grade artificial
[1] Classic examples in science fiction include Isaac Asimov’s robot stories (esp.
1954/1962, 1982) and Star Trek: The Next Generation, especially the episode “The Measure of a
Man” (Snodgrass and Scheerer 1989). Academic treatments include Basl 2013; Bryson 2013;
Bostrom and Yudkowsky 2014; Gunkel and Bryson, eds., 2014. See also Coeckelbergh 2012
and Gunkel 2012 for critical treatments of the question as typically posed.
We use the term “rights” here to refer broadly to moral considerability, moral patiency, or
the capacity to make legitimate ethical claims upon us.

intelligences than we owe to human strangers, and second, that the development of AI might
destabilize ethics as an intellectual enterprise.
1. The No-Relevant-Difference Argument.
Our main argument for AI rights is:
Premise 1. If Entity A deserves some particular degree of moral consideration and Entity
B does not deserve that same degree of moral consideration, there must be some
relevant difference between the two entities that grounds this difference in moral
status.
Premise 2. There are possible AIs who do not differ in any such relevant respects from
human beings.
Conclusion. Therefore, there are possible AIs who deserve a degree of moral
consideration similar to that of human beings.
A weaker version of this argument, which we will not focus on here, substitutes “mammals” or
some other term from the animal rights literature for “human beings” in Premise 2 and the
Conclusion.[2]
The argument is valid: The conclusion plainly follows from the premises. We hope that
most readers will also find both premises plausible and thus accept the argument as sound. To
deny Premise 1 renders ethics implausibly arbitrary. All four of the objections we consider
below are challenges to Premise 2.
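The validity claim can in fact be checked mechanically. The following is a sketch of the argument in Lean 4 (our own formalization, not the authors’; `Entity`, `MoralStatus`, `status`, `RelevantDiff`, and `human` are placeholder names), with Premise 1 taken in its contrapositive form: if there is no relevant difference between two entities, they share the same degree of moral consideration.

```lean
theorem no_relevant_difference
    {Entity MoralStatus : Type}
    (status : Entity → MoralStatus)
    (RelevantDiff : Entity → Entity → Prop)
    (human : Entity)
    -- Premise 1 (contrapositive form): absent any relevant difference,
    -- two entities deserve the same degree of moral consideration.
    (premise1 : ∀ a b : Entity, ¬ RelevantDiff a b → status a = status b)
    -- Premise 2: some possible AI differs in no relevant respect from a human.
    (premise2 : ∃ ai : Entity, ¬ RelevantDiff ai human) :
    -- Conclusion: some possible AI deserves the same degree of moral
    -- consideration as a human being.
    ∃ ai : Entity, status ai = status human :=
  match premise2 with
  | ⟨ai, h⟩ => ⟨ai, premise1 ai human h⟩
```

Nothing in the proof depends on what counts as a relevant difference or on how degrees of moral consideration are measured, which mirrors the point that the argument is a template awaiting further specification.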
The argument is intentionally abstract. It does not commit to any one account of what
constitutes a “relevant” difference. We believe that the argument can succeed on a variety of
[2] On sub-human AI and animal rights, see especially Basl 2013, 2014.

plausible accounts. On a broadly Kantian view, rational capacities would be the most relevant.
On a broadly utilitarian view, capacity for pain and pleasure would be most relevant. Also
plausible are nuanced or mixed accounts or accounts that require entering certain types of social
relationships. In Section 2, we will argue that only psychological and social properties should be
considered directly relevant to moral status.
The argument’s conclusion is intentionally weak: merely that there are possible AIs who deserve a degree of moral consideration similar to that of human beings. This weakness avoids burdening
our argument with technological optimism or commitment to any particular type of AI
architecture. The argument leaves room for strengthening. For example, an enthusiast for strong
“classical” versions of AI could strengthen Premise 2 to “There are possible AIs designed along
classical lines who…” and similarly strengthen the Conclusion. Someone who thought that
human beings might differ in no relevant respect from silicon-based entities, or from distributed
computational networks, or from beings who live entirely in simulated worlds (Egan 1997,
Bostrom 2003), could also strengthen Premise 2 and the Conclusion accordingly.
One might thus regard the No-Relevant-Difference Argument as a template that permits
at least two dimensions of further specification: specification of what qualifies as a relevant
difference and specification of what types of AI possibly lack any relevant difference.
The No-Relevant-Difference Argument is humanocentric in that it takes humanity as a
standard. This is desirable because we assume it is less contentious among our interlocutors that
human beings have rights (at least “normal” human beings, setting aside what is sometimes
called the problem of “marginal cases”) than it is that rights have any specific basis such as
rationality or capacity for pleasure. If a broader moral community someday emerges, it might be
desirable to recast the No-Relevant-Difference Argument in correspondingly broader terms.

Citations
More filters
01 Jan 2016
Supersizing the Mind: Embodiment, Action, and Cognitive Extension (Andy Clark).

355 citations

Book ChapterDOI
01 Jan 2007
TL;DR: Some arguments for the subjectivity of value hold that objective values would have to form part of what John Mackie called “the fabric of the world”, representable from an absolute conception or “view from nowhere”; since values can only be made sense of relative to the perspective of beings who value some things over others, some form of subjectivism about value must be true.
Abstract: Some arguments for the subjectivity of value are premised on the absence of any reference to values in the objective world as described by the natural sciences. According to these arguments, it is a necessary condition for the existence of objective values that values form part of what John Mackie called ‘the fabric of the world’ (Mackie 1977, 15). On this view, objectivity entails mind independence: the domain of objectivity is a domain of existence independent of all thought and experience of it. It is a domain that could form the content of a representation that presupposes no particular perspective on the world — what has variously been called ‘the absolute conception’, or a View from nowhere’ (c.f. Williams 1985; Nagel 1986). Given these assumptions, the subjectivity of value follows from the further claim that values are not mind independent entities with a non-eliminable place in an absolute conception of reality.1 Given that we can only make sense of values with reference to a perspective on the world of beings disposed to value some things over others, there are no objective values, and some form of subjectivism about value must be true.2

226 citations

Book
01 Jan 1995

97 citations

Journal ArticleDOI
TL;DR: It is argued that the performative threshold that robots need to cross in order to be afforded significant moral status may not be that high, that they may soon cross it, and that we may need to take seriously a duty of ‘procreative beneficence’ towards robots.
Abstract: Can robots have significant moral status? This is an emerging topic of debate among roboticists and ethicists. This paper makes three contributions to this debate. First, it presents a theory-'ethical behaviourism'-which holds that robots can have significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. This theory is then defended from seven objections. Second, taking this theoretical position onboard, it is argued that the performative threshold that robots need to cross in order to be afforded significant moral status may not be that high and that they may soon cross it (if they haven't done so already). Finally, the implications of this for our procreative duties to robots are considered, and it is argued that we may need to take seriously a duty of 'procreative beneficence' towards robots.

97 citations

References
More filters
Journal ArticleDOI
Computing Machinery and Intelligence (A. M. Turing)
01 Oct 1950 - Mind

7,266 citations

Book
01 Jan 1950
TL;DR: If the meaning of the words “machine” and “think” are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, “Can machines think?” is to be sought in a statistical survey such as a Gallup poll.
Abstract: I propose to consider the question, “Can machines think?”♣ This should begin with definitions of the meaning of the terms “machine” and “think”. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words “machine” and “think” are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, “Can machines think?” is to be sought in a statistical survey such as a Gallup poll.

6,137 citations

Book
Reasons and Persons (Derek Parfit)
01 Jan 1984
TL;DR: The author claims that we have a false view of our own nature, that it is often rational to act against our own best interests, that most of us have moral views that are directly self-defeating, and that when we consider future generations the conclusions will often be disturbing.
Abstract: This book challenges, with several powerful arguments, some of our deepest beliefs about rationality, morality, and personal identity. The author claims that we have a false view of our own nature; that it is often rational to act against our own best interests; that most of us have moral views that are directly self-defeating; and that, when we consider future generations the conclusions will often be disturbing. He concludes that non-religious moral philosophy is a young subject, with a promising but unpredictable future.

4,518 citations

Journal ArticleDOI
Minds, Brains, and Programs (John R. Searle)
TL;DR: Only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains, and no program by itself is sufficient for thinking.
Abstract: This article can be viewed as an attempt to explore the consequences of two propositions. (1) Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality. (2) Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim. The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality. These two propositions have the following consequences: (3) The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. This is a strict logical consequence of 1 and 2. (4) Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of 1. (5) Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. This follows from 2 and 4.“Could a machine think?” On the argument advanced here only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains. And that is why strong AI has little to tell us about thinking, since it is not about machines but about programs, and no program by itself is sufficient for thinking.

4,111 citations

Frequently Asked Questions (1)
Q1. What have the authors contributed in "A defense of the rights of artificial intelligences" ?

In this paper, two principles for ethical AI design recommend themselves: (1) design AIs that tend to provoke reactions from users that accurately reflect the AIs’ real moral status, and (2) avoid designing AIs whose moral status is unclear.