Author

Reuben Binns

Bio: Reuben Binns is an academic researcher at the University of Oxford. He has contributed to research on topics including Data Protection Act 1998 and Economic Justice, has an h-index of 23, and has co-authored 75 publications receiving 2,171 citations. His previous affiliations include the University of Southampton and Katholieke Universiteit Leuven.

Papers published on a yearly basis

Papers
21 Jan 2018
TL;DR: This paper draws on existing work in moral and political philosophy to elucidate emerging debates about fair machine learning (a toy operationalisation of two competing fairness definitions is sketched below).
Abstract: What does it mean for a machine learning model to be ‘fair’, in terms which can be operationalised? Should fairness consist of ensuring everyone has an equal probability of obtaining some benefit, or should we aim instead to minimise the harms to the least advantaged? Can the relevant ideal be determined by reference to some alternative state of affairs in which a particular social pattern of discrimination does not exist? Various definitions proposed in recent literature make different assumptions about what terms like discrimination and fairness mean and how they can be defined in mathematical terms. Questions of discrimination, egalitarianism and justice are of significant interest to moral and political philosophers, who have expended significant efforts in formalising and defending these central concepts. It is therefore unsurprising that attempts to formalise ‘fairness’ in machine learning contain echoes of these old philosophical debates. This paper draws on existing work in moral and political philosophy in order to elucidate emerging debates about fair machine learning.

287 citations
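The abstract above contrasts two operationalisations of fairness: giving everyone an equal probability of obtaining some benefit, versus minimising the harms to the least advantaged. As a toy illustration only (the paper contains no code, and the data, function names and group encoding below are hypothetical), the two readings might be computed like this:

```python
# Toy sketch, not from the paper: two competing operationalisations of
# 'fairness' computed on synthetic data. Names and data are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)    # binary protected attribute
y_pred = rng.integers(0, 2, size=1000)   # model decisions (1 = benefit granted)
y_true = rng.integers(0, 2, size=1000)   # actual outcomes

def demographic_parity_gap(y_pred, group):
    # 'Equal probability of obtaining some benefit': difference in benefit rates.
    rates = [y_pred[group == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])

def worst_group_accuracy(y_true, y_pred, group):
    # Rawlsian-style reading: how well does the model serve the worst-off group?
    accs = [(y_pred[group == g] == y_true[group == g]).mean() for g in (0, 1)]
    return min(accs)

print(demographic_parity_gap(y_pred, group))
print(worst_group_accuracy(y_true, y_pred, group))
```

The same model can score well on one measure and badly on the other, which is one way the definitional disagreements the paper traces back to moral and political philosophy become concrete.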

Journal ArticleDOI
TL;DR: It is shown that even ‘well optimized’ moderation systems could exacerbate, rather than relieve, many existing problems with content policy as enacted by platforms, since these systems remain opaque, unaccountable and poorly understood.
Abstract: As government pressure on major technology companies builds, both firms and legislators are searching for technical solutions to difficult platform governance puzzles such as hate speech and misinf...

271 citations

Proceedings ArticleDOI
21 Apr 2018
TL;DR: In this article, the authors interviewed 27 public sector machine learning practitioners across 5 OECD countries regarding the challenges of understanding and imbuing public values into their work. The results suggest a disconnect between organisational and institutional realities, constraints and needs and those addressed by current research into usable, transparent and 'discrimination-aware' machine learning, absences likely to undermine practical initiatives unless addressed (a toy sketch of the concept-drift tracking mentioned in the abstract follows below).
Abstract: Calls for heightened consideration of fairness and accountability in algorithmically-informed public decisions, like taxation, justice, and child protection, are now commonplace. How might designers support such human values? We interviewed 27 public sector machine learning practitioners across 5 OECD countries regarding challenges understanding and imbuing public values into their work. The results suggest a disconnect between organisational and institutional realities, constraints and needs, and those addressed by current research into usable, transparent and 'discrimination-aware' machine learning, absences likely to undermine practical initiatives unless addressed. We see design opportunities in this disconnect, such as in supporting the tracking of concept drift in secondary data sources, and in building usable transparency tools to identify risks and incorporate domain knowledge, aimed both at managers and at the 'street-level bureaucrats' on the frontlines of public service. We conclude by outlining ethical challenges and future directions for collaboration in these high-stakes applications.

231 citations
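One design opportunity named above is 'supporting the tracking of concept drift in secondary data sources'. The paper proposes this as an opportunity rather than implementing it; purely as a hypothetical sketch, drift in a single feature could be flagged by comparing its training-time distribution against newer records with a two-sample Kolmogorov-Smirnov test:

```python
# Hypothetical sketch of concept-drift tracking; not code from the paper.
# Feature values, sample sizes and the alert threshold are all invented.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
reference = rng.normal(0.0, 1.0, size=500)  # a feature as seen at training time
incoming = rng.normal(0.4, 1.0, size=500)   # the same feature in newer records

stat, p_value = ks_2samp(reference, incoming)
if p_value < 0.01:  # invented alert threshold
    print(f"possible drift: KS statistic={stat:.3f}, p={p_value:.4f}")
```

A real deployment would run such checks per feature on a schedule; this sketch only shows the core comparison.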

Posted Content
TL;DR: This paper suggests that there may be no 'best' approach to explaining algorithmic decisions, and that reflection on their automated nature both implicates and mitigates justice dimensions.
Abstract: Data-driven decision-making consequential to individuals raises important questions of accountability and justice. Indeed, European law provides individuals limited rights to 'meaningful information about the logic' behind significant, autonomous decisions such as loan approvals, insurance quotes, and CV filtering. We undertake three experimental studies examining people's perceptions of justice in algorithmic decision-making under different scenarios and explanation styles. Dimensions of justice previously observed in response to human decision-making appear similarly engaged in response to algorithmic decisions. Qualitative analysis identified several concerns and heuristics involved in justice perceptions including arbitrariness, generalisation, and (in)dignity. Quantitative analysis indicates that explanation styles primarily matter to justice perceptions only when subjects are exposed to multiple different styles; under repeated exposure to one style, scenario effects obscure any explanation effects. Our results suggest there may be no 'best' approach to explaining algorithmic decisions, and that reflection on their automated nature both implicates and mitigates justice dimensions.

231 citations

Journal ArticleDOI
TL;DR: Three potential approaches to deal with knowledge and information deficits in fairness issues that are emergent properties of complex sociotechnical systems are presented and discussed.
Abstract: Decisions based on algorithmic, machine learning models can be unfair, reproducing biases in historical data used to train them. While computational techniques are emerging to address aspects of th...

227 citations


Cited by
Journal ArticleDOI
TL;DR: It is impossible that the rulers now on earth should make any benefit, or derive any the least shadow of authority from that, which is held to be the fountain of all power, Adam's private dominion and paternal jurisdiction.
Abstract: All these premises having, as I think, been clearly made out, it is impossible that the rulers now on earth should make any benefit, or derive any the least shadow of authority from that, which is held to be the fountain of all power, Adam's private dominion and paternal jurisdiction; so that he that will not give just occasion to think that all government in the world is the product only of force and violence, and that men live together by no other rules but that of beasts, where the strongest carries it, and so lay a foundation for perpetual disorder and mischief, tumult, sedition and rebellion, (things that the followers of that hypothesis so loudly cry out against) must of necessity find out another rise of government, another original of political power, and another way of designing and knowing the persons that have it, than what Sir Robert Filmer hath taught us.

3,076 citations

01 Jan 2008
TL;DR: In this article, the authors argue that rational actors make their organizations increasingly similar as they try to change them, and describe three isomorphic processes: coercive, mimetic, and normative.
Abstract: What makes organizations so similar? We contend that the engine of rationalization and bureaucratization has moved from the competitive marketplace to the state and the professions. Once a set of organizations emerges as a field, a paradox arises: rational actors make their organizations increasingly similar as they try to change them. We describe three isomorphic processes (coercive, mimetic, and normative) leading to this outcome. We then specify hypotheses about the impact of resource centralization and dependency, goal ambiguity and technical uncertainty, and professionalization and structuration on isomorphic change. Finally, we suggest implications for theories of organizations and social change.

2,134 citations

Journal ArticleDOI
G. W. Smith

1,991 citations

Posted Content
TL;DR: This survey investigates real-world applications that have exhibited biases in various ways and creates a taxonomy of the fairness definitions machine learning researchers have proposed to avoid existing bias in AI systems; one commonly catalogued definition is sketched below.
Abstract: With the widespread use of AI systems and applications in our everyday lives, it is important to take fairness issues into consideration while designing and engineering these types of systems. Such systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that the decisions do not reflect discriminatory behavior toward certain groups or populations. We have recently seen work in machine learning, natural language processing, and deep learning that addresses such challenges in different subdomains. With the commercialization of these systems, researchers are becoming aware of the biases that these applications can contain and have attempted to address them. In this survey we investigated different real-world applications that have shown biases in various ways, and we listed different sources of biases that can affect AI applications. We then created a taxonomy for fairness definitions that machine learning researchers have defined in order to avoid the existing bias in AI systems. In addition to that, we examined different domains and subdomains in AI showing what researchers have observed with regard to unfair outcomes in the state-of-the-art methods and how they have tried to address them. There are still many future directions and solutions that can be taken to mitigate the problem of bias in AI systems. We are hoping that this survey will motivate researchers to tackle these issues in the near future by observing existing work in their respective fields.

1,571 citations
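The survey's taxonomy collects fairness definitions rather than code. As a hypothetical illustration (none of the data, names or numbers below come from the survey), one definition such taxonomies commonly include is the disparate impact ratio, often paired with the conventional '80% rule' threshold:

```python
# Hypothetical sketch of the disparate impact ratio; not from the survey.
# Decisions, group labels and the 0.8 rule-of-thumb threshold are illustrative.
def disparate_impact(decisions, groups, privileged=1):
    # Ratio of favourable-outcome rates: unprivileged group / privileged group.
    def rate(g):
        members = [d for d, a in zip(decisions, groups) if a == g]
        return sum(members) / len(members)
    unprivileged = 1 - privileged
    return rate(unprivileged) / rate(privileged)

decisions = [1, 0, 1, 1, 1, 1, 0, 0, 0, 1]  # 1 = favourable outcome
groups    = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]  # 1 = privileged group
ratio = disparate_impact(decisions, groups)
print(f"disparate impact ratio: {ratio:.2f} (values below 0.8 are commonly flagged)")
```

Here the privileged group receives the favourable outcome 4 times in 5 versus 2 in 5 for the other group, giving a ratio of 0.5, below the 0.8 rule of thumb.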

Journal ArticleDOI
TL;DR: In this article, the authors map the current corpus of principles and guidelines for ethical AI and find a global convergence around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy), with substantive divergence in how these principles are interpreted, why they are deemed important, what issue, domain or actors they pertain to, and how they should be implemented.
Abstract: In the past five years, private companies, research institutions and public sector organizations have issued principles and guidelines for ethical artificial intelligence (AI). However, despite an apparent agreement that AI should be ‘ethical’, there is debate about both what constitutes ‘ethical AI’ and which ethical requirements, technical standards and best practices are needed for its realization. To investigate whether a global agreement on these questions is emerging, we mapped and analysed the current corpus of principles and guidelines on ethical AI. Our results reveal a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy), with substantive divergence in relation to how these principles are interpreted, why they are deemed important, what issue, domain or actors they pertain to, and how they should be implemented. Our findings highlight the importance of integrating guideline-development efforts with substantive ethical analysis and adequate implementation strategies.

1,419 citations