Journal ArticleDOI

Ethical Implications and Accountability of Algorithms

01 Dec 2019-Journal of Business Ethics (Springer Netherlands)-Vol. 160, Iss: 4, pp 835-850
TL;DR: In this paper, the authors conceptualize algorithms as value-laden, rather than neutral, in that algorithms create moral consequences, reinforce or undercut ethical principles, and enable or diminish stakeholder rights and dignity.
Abstract: Algorithms silently structure our lives. Algorithms can determine whether someone is hired, promoted, offered a loan, or provided housing, as well as determine which political ads and news articles consumers see. Yet, the responsibility for algorithms in these important decisions is not clear. This article identifies whether developers have a responsibility for their algorithms later in use, what those firms are responsible for, and the normative grounding for that responsibility. I conceptualize algorithms as value-laden, rather than neutral, in that algorithms create moral consequences, reinforce or undercut ethical principles, and enable or diminish stakeholder rights and dignity. In addition, algorithms are an important actor in ethical decisions and influence the delegation of roles and responsibilities within these decisions. As such, firms should be responsible not only for the value-laden-ness of an algorithm but also for designing who-does-what within the algorithmic decision. Firms developing algorithms are therefore accountable for designing how large a role individuals will be permitted to take in the subsequent algorithmic decision. Counter to current arguments, I find that if an algorithm is designed to preclude individuals from taking responsibility within a decision, then the designer of the algorithm should be held accountable for the ethical implications of the algorithm in use.


Citations
Journal ArticleDOI
TL;DR: The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative concerns, and to offer actionable guidance for the governance of the design, development and deployment of algorithms.
Abstract: Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016 (Mittelstadt et al. Big Data Soc 3(2), 2016). The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative concerns, and to offer actionable guidance for the governance of the design, development and deployment of algorithms.

137 citations

Proceedings ArticleDOI
27 Jan 2020
TL;DR: A definition of algorithmic accountability based on accountability theory and the algorithmic accountability literature is provided, paying extra attention to accountability risks in algorithmic systems.
Abstract: As research on algorithms and their impact proliferates, so do calls for scrutiny and accountability of algorithms. A systematic review of the work that has been done in the field of 'algorithmic accountability' has so far been lacking. This contribution puts forth such a systematic review, following the PRISMA statement. 242 English articles from the period 2008 up to and including 2018 were collected and extracted from Web of Science and SCOPUS, using a recursive query design coupled with computational methods. The 242 articles were prioritized and ordered using affinity mapping, resulting in 93 'core articles' which are presented in this contribution. The recursive search strategy made it possible to look beyond the term 'algorithmic accountability'. That is, the query also included terms closely connected to the theme (e.g. ethics and AI, regulation of algorithms). This approach allows for a perspective not just from critical algorithm studies, but an interdisciplinary overview drawing on material from data studies to law, and from computer science to governance studies. To structure the material, Bovens's widely accepted definition of accountability serves as a focal point. The material is analyzed on the five points Bovens identified as integral to accountability: its arguments on (1) the actor, (2) the forum, (3) the relationship between the two, (4) the content and criteria of the account, and finally (5) the consequences which may result from the account. The review makes three contributions. First, an integration of accountability theory into the algorithmic accountability discussion. Second, a cross-sectoral overview of that same discussion viewed in light of accountability theory, which pays extra attention to accountability risks in algorithmic systems. Lastly, it provides a definition of algorithmic accountability based on accountability theory and the algorithmic accountability literature.

136 citations


Cites background from "Ethical Implications and Accountabi..."

  • ...549], and other design decisions [101] such as the weighting of factors [40, p....


  • ...Martin [101] argues that in such situations, these third party organizations become a voluntary part of the...


  • ...This willful membership creates ‘an obligation to respect the norms of the community as a member’ [101]....


  • ...This is even more crucial when a third party is developing the system, as they will need to respect the norms and values of the context in which the system will eventually be deployed [69, 101, 124]....


  • ...This is for instance the case in instances where we speak of the ‘data controller’ [135] or the ‘developing firm’ [101]....


Journal ArticleDOI
TL;DR: It is suggested that critical data literacy, ethical awareness, the use of participatory design methods, and private regulatory regimes within civil society can help overcome challenges from the efficiency-driven logic of algorithm-based HR decision-making.
Abstract: Organizations increasingly rely on algorithm-based HR decision-making to monitor their employees. This trend is reinforced by the technology industry claiming that its decision-making tools are efficient and objective, downplaying their potential biases. In our manuscript, we identify an important challenge arising from the efficiency-driven logic of algorithm-based HR decision-making, namely that it may shift the delicate balance between employees’ personal integrity and compliance more in the direction of compliance. We suggest that critical data literacy, ethical awareness, the use of participatory design methods, and private regulatory regimes within civil society can help overcome these challenges. Our paper contributes to literature on workplace monitoring, critical data studies, personal integrity, and literature at the intersection between HR management and corporate responsibility.

128 citations


Cites background or methods from "Ethical Implications and Accountabi..."

  • ...This lack of transparency makes it difficult for HR managers to uncover biases either in the code of an algorithm itself or in the data with which the algorithm was trained (Martin 2018)....


  • ...…monitoring (Ball 2001), and management (Bernstein 2017) have discussed the use of algorithm-based decision-making, problematizing issues regarding privacy (Martin and Nissenbaum 2016), accountability (Diakopoulos 2016; Neyland 2015), transparency (Ananny and Crawford 2018; Martin 2018; Stohl et al....


  • ...As these data might be biased according to an external reference point, the training has the great capacity to be faulty (Barocas and Selbst 2016; Martin 2018)....


Posted Content
TL;DR: Big Data is analyzed as an industry, not a technology, and the ethical issues it faces, which arise from reselling consumers' data on the secondary market for Big Data, are identified.
Abstract: Big Data combines information from diverse sources to create knowledge, make better predictions and tailor services. This article analyzes Big Data as an industry, not a technology, and identifies the ethical issues it faces. These issues arise from reselling consumers’ data to the secondary market for Big Data. Remedies for the issues are proposed, with the goal of fostering a sustainable Big Data Industry.

119 citations

Journal ArticleDOI
TL;DR: In this article, the authors bring together 43 contributions from experts in fields such as computer science, marketing, information systems, education, policy, hospitality and tourism, management, publishing, and nursing to identify questions requiring further research across three thematic areas: knowledge, transparency, and ethics; digital transformation of organisations and societies; and teaching, learning, and scholarly research.

103 citations

References
Book
01 Jan 1990
TL;DR: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures and presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers.
Abstract: From the Publisher: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures. Like the first edition, this text can also be used for self-study by technical professionals since it discusses engineering issues in algorithm design as well as the mathematical aspects. In its new edition, Introduction to Algorithms continues to provide a comprehensive introduction to the modern study of algorithms. The revision has been updated to reflect changes in the years since the book's original publication. New chapters on the role of algorithms in computing and on probabilistic analysis and randomized algorithms have been included. Sections throughout the book have been rewritten for increased clarity, and material has been added wherever a fuller explanation has seemed useful or new information warrants expanded coverage. As in the classic first edition, this new edition of Introduction to Algorithms presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers. Further, the algorithms are presented in pseudocode to make the book easily accessible to students from all programming language backgrounds. Each chapter presents an algorithm, a design technique, an application area, or a related topic. The chapters are not dependent on one another, so the instructor can organize his or her use of the book in the way that best suits the course's needs. Additionally, the new edition offers a 25% increase over the first edition in the number of problems, giving the book 155 problems and over 900 exercises that reinforce the concepts the students are learning.

21,651 citations


Additional excerpts

  • ...As perhaps best defined by the most cited textbook on algorithms, an algorithm is a sequence of computational steps that transform inputs into outputs—similar to a recipe (Cormen 2009)....

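The recipe analogy quoted above can be made concrete with a small sketch. Insertion sort is the worked example Cormen et al.'s own introduction uses; the code below is an illustrative implementation, not drawn from the article itself.

```python
# Illustration of the textbook definition: an algorithm is a finite
# sequence of computational steps that transforms an input into an
# output (Cormen et al.). Insertion sort is the canonical first example.

def insertion_sort(values):
    """Return a sorted copy of `values`: input in, output out."""
    result = list(values)              # copy the input (the "ingredients")
    for i in range(1, len(result)):
        key = result[i]                # take the next unsorted item
        j = i - 1
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]  # shift larger items one slot right
            j -= 1
        result[j + 1] = key            # insert the item where it belongs
    return result

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```

Each loop iteration is one "step of the recipe": the input list goes in, a deterministic sequence of comparisons and moves runs, and the sorted output comes out.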

01 Jan 2005

19,250 citations

Journal ArticleDOI
TL;DR: This study explores the dimensionality of organizational justice and provides evidence of construct validity for a new justice measure and demonstrated predictive validity for the justice dimensions on important outcomes, including leader evaluation, rule compliance, commitment, and helping behavior.
Abstract: This study explores the dimensionality of organizational justice and provides evidence of construct validity for a new justice measure. Items for this measure were generated by strictly following the seminal works in the justice literature. The measure was then validated in 2 separate studies. Study 1 occurred in a university setting, and Study 2 occurred in a field setting using employees in an automobile parts manufacturing company. Confirmatory factor analyses supported a 4-factor structure to the measure, with distributive, procedural, interpersonal, and informational justice as distinct dimensions. This solution fit the data significantly better than a 2- or 3-factor solution using larger interactional or procedural dimensions. Structural equation modeling also demonstrated predictive validity for the justice dimensions on important outcomes, including leader evaluation, rule compliance, commitment, and helping behavior.

4,482 citations


"Ethical Implications and Accountabi..." refers background in this paper


  • ...COMPAS is a prime example of disparate impact by an algorithm (Barocas and Selbst 2016): where one group receives differential outcome outside the implicit norms of allocation (Colquitt 2001; Feldman et al. 2015)....

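The disparate-impact notion in the excerpt above has a simple quantitative form, the "80% rule" formalized by Feldman et al. (2015): compare the favorable-outcome rate of a protected group with that of the reference group. The sketch below is a hypothetical illustration; the function name and toy data are invented, not taken from any of the cited papers.

```python
# Hypothetical sketch of the disparate-impact measure (the "80% rule"
# formalized by Feldman et al. 2015). A ratio below 0.8 is the
# conventional flag for disparate impact. All names and data invented.

def disparate_impact_ratio(outcomes_protected, outcomes_reference):
    """Ratio of favorable-outcome rates between the two groups."""
    rate_protected = sum(outcomes_protected) / len(outcomes_protected)
    rate_reference = sum(outcomes_reference) / len(outcomes_reference)
    return rate_protected / rate_reference

# Toy data: 1 = favorable outcome (e.g. classified low-risk), 0 = not.
protected = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]   # 30% favorable
reference = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # 70% favorable

ratio = disparate_impact_ratio(protected, reference)
print(round(ratio, 2))  # 0.43, well below the 0.8 threshold
```

The point of the COMPAS example is exactly this pattern: one group receives the favorable outcome at a markedly lower rate than the implicit norms of allocation would predict.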

Journal ArticleDOI
TL;DR: In the absence of a demonstrable intent to discriminate, the best doctrinal hope for data mining's victims lies in disparate impact doctrine; yet case law holds that a practice can be justified as a business necessity when its outcomes are predictive of future employment outcomes, and data mining is specifically designed to find such statistical correlations.
Abstract: Advocates of algorithmic techniques like data mining argue that these techniques eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with. Data is frequently imperfect in ways that allow these algorithms to inherit the prejudices of prior decision makers. In other cases, data may simply reflect the widespread biases that persist in society at large. In still others, data mining can discover surprisingly useful regularities that are really just preexisting patterns of exclusion and inequality. Unthinking reliance on data mining can deny historically disadvantaged and vulnerable groups full participation in society. Worse still, because the resulting discrimination is almost always an unintentional emergent property of the algorithm’s use rather than a conscious choice by its programmers, it can be unusually hard to identify the source of the problem or to explain it to a court.This Essay examines these concerns through the lens of American antidiscrimination law — more particularly, through Title VII’s prohibition of discrimination in employment. In the absence of a demonstrable intent to discriminate, the best doctrinal hope for data mining’s victims would seem to lie in disparate impact doctrine. Case law and the Equal Employment Opportunity Commission’s Uniform Guidelines, though, hold that a practice can be justified as a business necessity when its outcomes are predictive of future employment outcomes, and data mining is specifically designed to find such statistical correlations. 
Unless there is a reasonably practical way to demonstrate that these discoveries are spurious, Title VII would appear to bless its use, even though the correlations it discovers will often reflect historic patterns of prejudice, others' discrimination against members of protected groups, or flaws in the underlying data. Addressing the sources of this unintentional discrimination and remedying the corresponding deficiencies in the law will be difficult technically, difficult legally, and difficult politically. There are a number of practical limits to what can be accomplished computationally. For example, when discrimination occurs because the data being mined is itself a result of past intentional discrimination, there is frequently no obvious method to adjust historical data to rid it of this taint. Corrective measures that alter the results of the data mining after it is complete would tread on legally and politically disputed terrain. These challenges for reform throw into stark relief the tension between the two major theories underlying antidiscrimination law: anticlassification and antisubordination. Finding a solution to big data's disparate impact will require more than best efforts to stamp out prejudice and bias; it will require a wholesale reexamination of the meanings of "discrimination" and "fairness."

1,504 citations


"Ethical Implications and Accountabi..." refers background in this paper

  • ...COMPAS is a prime example of disparate impact by an algorithm (Barocas and Selbst 2016): where one group receives differential outcome outside the implicit norms of allocation (Colquitt 2001; Feldman et al....


Journal ArticleDOI
TL;DR: In this article's epigraph, Alice asks the Cheshire Cat which way she ought to go from here; the Cat replies that that depends a good deal on where she wants to get to, and that if she does not much care, then it doesn't matter which way she goes.
Abstract: “Would you tell me, please, which way I ought to go from here?” Alice asked the Cheshire Cat. “That depends a good deal on where you want to get to,” said the Cat. “I don't much care where...” said Alice. “Then it doesn't matter which way you go,” said the Cat. (Carroll, 1983: 72)

1,476 citations


"Ethical Implications and Accountabi..." refers background in this paper

  • ...In social contract terms, firms that develop algorithms are members of the community to which they sell the algorithm—e.g., criminal justice, medicine, education, human resources, military, etc.—and create an obligation to respect the norms of the community as a member (Donaldson and Dunfee 1994)....

