
Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation


Sandra Wachter,¹,² Brent Mittelstadt,²,³,¹ Luciano Floridi¹,²

¹ Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, United Kingdom; ² The Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB, United Kingdom; ³ Department of Science and Technology Studies, University College London, 22 Gordon Square, London, WC1E 6BT, United Kingdom.

Correspondence: Sandra Wachter, sandra.wachter@oii.ox.ac.uk
Abstract
Since approval of the EU General Data Protection Regulation (GDPR) in 2016, it has been
widely and repeatedly claimed that the GDPR will legally mandate a ‘right to explanation’ of
all decisions made by automated or artificially intelligent algorithmic systems. This right to
explanation is viewed as an ideal mechanism to enhance the accountability and transparency
of automated decision-making. However, there are several reasons to doubt both the legal
existence and the feasibility of such a right. In contrast to the right to explanation of specific
automated decisions claimed elsewhere, the GDPR only mandates that data subjects receive
meaningful, but properly limited, information (Articles 13-15) about the logic involved, as well
as the significance and the envisaged consequences of automated decision-making systems,
which we term a ‘right to be informed’. Further, the ambiguity and limited scope of the ‘right
not to be subject to automated decision-making’ contained in Article 22 (from which the
alleged ‘right to explanation’ stems) raises questions over the protection actually afforded to
data subjects. These problems show that the GDPR lacks precise language as well as explicit
and well-defined rights and safeguards against automated decision-making, and therefore runs
the risk of being toothless. We propose a number of legislative and policy steps that, if taken,
may improve the transparency and accountability of automated decision-making when the
GDPR comes into force in 2018.
Keywords
accountability; artificial intelligence; algorithms; automated decision-making; data protection;
right to explanation; right of access; transparency.

Funding
This study was funded by the Alan Turing Institute (Luciano Floridi and Sandra Wachter), the PETRAS IoT Hub, an EPSRC project (Sandra Wachter, Luciano Floridi and Brent Mittelstadt), and a research grant from the University of Oxford’s John Fell Fund (Brent Mittelstadt).

1 Introduction¹

In recent months, researchers,² government bodies,³ and the media⁴ have claimed that a ‘right to explanation’ of decisions made by automated and artificially intelligent algorithmic systems is legally mandated by the forthcoming EU General Data Protection Regulation⁵ 2016/679 (GDPR).
1. We are deeply indebted to Prof. Peggy Valcke, Prof. Massimo Durante, Prof. Ugo Pagallo, Dr. Natascha Scherzer and Mag. Priska Lueger for their invaluable comments and insightful feedback, from which the paper greatly benefitted. We especially thank Dr. Alessandro Spina, whose intensive review and in-depth comments strengthened the arguments in the paper. We are also grateful to Dr. Joris van Hoboken for the inspiring conversation, as well as for written feedback on the draft that significantly improved the quality of the paper. Further, we thank Prof. Tal Zarsky and Prof. Lee Bygrave not only for their pioneering and ground-breaking work that inspired this paper, but also for their positive feedback, in-depth review and invaluable comments. Last but not least, we thank the anonymous reviewer for the time spent reading and commenting so thoroughly on the paper.

2. See for example: Bryce Goodman and Seth Flaxman, ‘EU Regulations on Algorithmic Decision-Making and a “Right to Explanation”’ [2016] arXiv:1606.08813 [cs, stat] <http://arxiv.org/abs/1606.08813> accessed 30 June 2016; Francesca Rossi, ‘Artificial Intelligence: Potential Benefits and Ethical Considerations’ (European Parliament: Policy Department C: Citizens’ Rights and Constitutional Affairs 2016) Briefing PE 571.380 <http://www.europarl.europa.eu/RegData/etudes/BRIE/2016/571380/IPOL_BRI(2016)571380_EN.pdf>; Mireille Hildebrandt, ‘The New Imbroglio: Living with Machine Algorithms’, The Art of Ethics in the Information Society (2016) <https://works.bepress.com/mireille_hildebrandt/75/> accessed 28 December 2016; IEEE Global Initiative, ‘Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems’ (IEEE 2016) Version 1 <http://standards.ieee.org/develop/indconn/ec/ead_v1.pdf> accessed 19 January 2017; Ben Wagner, ‘Efficiency vs. Accountability? Algorithms, Big Data and Public Administration’ <https://cihr.eu/efficiency-vs-accountability-algorithms-big-data-and-public-administration/> accessed 14 January 2017; Fusion, ‘EU Introduces “Right to Explanation” on Algorithms’ (2016) <http://fusion.net/story/321178/european-union-right-to-algorithmic-explanation/> accessed 10 November 2016, quoting Ryan Calo.

3. See for example: Information Commissioner’s Office, ‘Overview of the General Data Protection Regulation (GDPR)’ (Information Commissioner’s Office 2016) 1.1.1 <https://ico.org.uk/for-organisations/data-protection-reform/overview-of-the-gdpr/individuals-rights/rights-related-to-automated-decision-making-and-profiling/> accessed 10 November 2016; House of Commons Science and Technology Committee, ‘Robotics and Artificial Intelligence’ (House of Commons 2016) HC 145 <http://www.publications.parliament.uk/pa/cm201617/cmselect/cmsctech/145/145.pdf> accessed 10 November 2016; European Parliament Committee on Legal Affairs, ‘Report with Recommendations to the Commission on Civil Law Rules on Robotics’ (European Parliament 2017) 2015/2103(INL) <http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML+REPORT+A8-2017-0005+0+DOC+PDF+V0//EN> accessed 11 November 2016.

4. See for example: Joon Ian Wong, ‘The UK Could Become a Leader in AI Ethics, If This EU Data Law Survives Brexit’ <http://qz.com/807303/uk-parliament-ai-and-robotics-report-brexit-could-affect-eu-gdpr-right-to-explanation-law/> accessed 10 November 2016; Cade Metz, ‘Artificial Intelligence Is Setting Up the Internet for a Huge Clash With Europe’ (WIRED, 2016) <https://www.wired.com/2016/07/artificial-intelligence-setting-internet-huge-clash-europe/> accessed 10 November 2016; Fusion (n 2); Bernard Marr, ‘New Report: Revealing The Secrets Of AI Or Killing Machine Learning?’ <http://www.forbes.com/sites/bernardmarr/2017/01/12/new-report-revealing-the-secrets-of-ai-or-killing-machine-learning/#258189058e56> accessed 14 January 2017; Liisa Jaakonsaari, ‘Who Sets the Agenda on Algorithmic Accountability?’ (EURACTIV.com, 26 October 2016) <https://www.euractiv.com/section/digital/opinion/who-sets-the-agenda-on-algorithmic-accountability/> accessed 3 March 2017; Nick Wallace, ‘EU’s Right to Explanation: A Harmful Restriction on Artificial Intelligence’ <https://www.datainnovation.org/2017/01/eus-right-to-explanation-a-harmful-restriction-on-artificial-intelligence/> accessed 3 March 2017.

5. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) 2016.

The right to explanation is viewed as a promising mechanism in the broader pursuit by government and industry of accountability and transparency in algorithms, artificial intelligence, robotics, and other automated systems.⁶ Automated systems can have many unintended and unexpected effects.⁷ Public assessment of the extent and source of these problems is often difficult,⁸ owing to the use of complex and opaque algorithmic mechanisms.⁹ The alleged right to explanation would require data controllers to explain how such mechanisms reach decisions. Significant hype has been mounting over the empowering effects of such a legally enforceable right for data subjects, and over the disruption of data-intensive industries, which would be forced to explain how complex and perhaps inscrutable automated methods work in practice.
However, there are several reasons to doubt the existence, scope, and feasibility of a ‘right to explanation’ of automated decisions. In this paper, we examine the legal status of the ‘right to explanation’ in the GDPR, and identify several barriers undermining its implementation. We argue that the GDPR does not, in its current form, implement a right to explanation, but rather what we term a limited ‘right to be informed’. Here is a quick overview.

In Section 2, we disentangle the types and timing of explanations that can be offered of automated decision-making. The right to explanation, as popularly proposed, is thought to grant an explanation of specific automated decisions, after such a decision has been made.¹⁰
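To make this distinction concrete, the following is a minimal sketch of our own, not drawn from the paper: it contrasts an ex ante description of a system’s general functionality with an ex post account of one specific decision, using a toy scikit-learn ‘credit-scoring’ model whose feature names and data are entirely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy, hypothetical credit-scoring data: names and values are illustrative only.
rng = np.random.default_rng(0)
feature_names = ["income", "debt", "years_employed"]
X = rng.normal(size=(200, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Ex ante explanation of system functionality ('the logic involved'):
# global weights, describable before any individual decision is taken.
print("System functionality:", dict(zip(feature_names, model.coef_[0].round(2))))

# Ex post explanation of a specific decision: per-feature contributions to one
# applicant's score, which exist only after that decision has been made.
applicant = X[0]
contributions = model.coef_[0] * applicant
print("Specific decision:", dict(zip(feature_names, contributions.round(2))))
print("Outcome:", "approve" if model.predict(applicant.reshape(1, -1))[0] else "reject")
```

On the paper’s terminology, the first printout corresponds to the kind of information about system functionality covered by a ‘right to be informed’, while the second is the decision-specific, after-the-fact explanation that the popular ‘right to explanation’ claim envisages.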
6. The proliferation of unaccountable and inscrutable automated systems has proven a major concern among government bodies, as reflected in numerous recent reports on the future ethical and social impacts of automated systems. See for instance: Catherine Stupp, ‘Commission to Open Probe into Tech Companies’ Algorithms next Year’ (EurActiv.com, 8 November 2016) <http://www.euractiv.com/section/digital/news/commission-to-open-probe-into-tech-companies-algorithms-next-year/> accessed 11 November 2016; Partnership on AI, ‘Partnership on Artificial Intelligence to Benefit People and Society’ (Partnership on Artificial Intelligence to Benefit People and Society, 2016) <https://www.partnershiponai.org/> accessed 11 November 2016; National Science and Technology Council, ‘Preparing for the Future of Artificial Intelligence’ (Executive Office of the President 2016) <https://www.whitehouse.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf> accessed 11 November 2016; European Parliament Committee on Legal Affairs (n 3); House of Commons Science and Technology Committee (n 3); Government Office for Science, ‘Artificial Intelligence: An Overview for Policy-Makers’ (Government Office for Science 2016) <https://www.gov.uk/government/publications/artificial-intelligence-an-overview-for-policy-makers> accessed 11 November 2016.

7. Brent Mittelstadt and others, ‘The Ethics of Algorithms: Mapping the Debate’ (2016) 3 Big Data & Society 2.

8. Christian Sandvig and others, ‘Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms’ [2014] Data and Discrimination: Converting Critical Concerns into Productive Inquiry <http://social.cs.uiuc.edu/papers/pdfs/ICA2014-Sandvig.pdf> accessed 13 February 2016.

9. Mike Ananny, ‘Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness’ (2016) 41 Science, Technology & Human Values 93.

10. This is the type of explanation of automated decision-making imagined in Recital 71 GDPR, which states: “In any case, such processing should be subject to suitable safeguards, which should include specific information to the data subject and the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision.”

In Section 3, we assess three possible legal bases for a right to explanation in the GDPR:
1) the right not to be subject to automated decision-making, and the safeguards enacted thereunder (Article 22 and Recital 71);
2) the notification duties of data controllers (Articles 13-14 and Recitals 60-62); and
3) the right of access (Article 15 and Recital 63).
The aforementioned claim for a right to explanation¹¹ muddles the first and second legal bases. It conflates (1) the legally binding requirements of Article 22 and the non-binding provisions of Recital 71, and (2) the notification duties (Articles 13-14) that require data subjects to be provided with information about “the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject” [italics added].
Having challenged the legal basis for a right to explanation, we then consider whether the right of access in Article 15 provides a stronger legal basis. Following our analysis of the implementation and jurisprudence of the 1995 Data Protection Directive (95/46/EC), we argue that the GDPR’s right of access allows for a limited right to explanation of the functionality of automated decision-making systems, which we refer to as the right to be informed. However, the right of access does not establish a right to explanation of specific automated decisions of the type currently imagined elsewhere in public discourse. Not only is a right to explanation of specific decisions not granted by the GDPR; it also appears to have been intentionally excluded from the final text of the GDPR after appearing in an earlier draft.
In Section 4, we consider the limitations of scope and applicability that would constrain a right to explanation, were it to exist. We show that a ‘general’ right to explanation, applicable to all automated decisions, would not exist even if Recital 71 were legally binding. A right to explanation, derived from the right of access (Article 15) or from the safeguards described in Article 22(3), would apply only to a narrow range of decisions “solely based on automated processing” and with legal or similarly significant effects for the data subject (Article 22(1) GDPR). We examine the limited cases in which the right would apply, including the impact of a critical
11. Goodman and Flaxman (n 2).
