scispace - formally typeset
Posted Content

Multisided Fairness for Recommendation.

TL;DR: It is shown that in some recommendation contexts, fairness may be a multisided concept, in which fair outcomes for multiple individuals need to be considered, and a taxonomy of classes of fairness-aware recommender systems is presented.
Abstract: Recent work on machine learning has begun to consider issues of fairness. In this paper, we extend the concept of fairness to recommendation. In particular, we show that in some recommendation contexts, fairness may be a multisided concept, in which fair outcomes for multiple individuals need to be considered. Based on these considerations, we present a taxonomy of classes of fairness-aware recommender systems and suggest possible fairness-aware recommendation architectures.
Citations
Posted Content
TL;DR: This paper provides a taxonomy to position and organize the existing work on recommendation debiasing, identifies some open challenges, and envisions some future directions on this important yet under-investigated topic.
Abstract: While recent years have witnessed a rapid growth of research papers on recommender systems (RS), most of the papers focus on inventing machine learning models to better fit user behavior data. However, user behavior data is observational rather than experimental. This means various biases exist widely in the data, including but not limited to selection bias, position bias, exposure bias, and popularity bias. Blindly fitting the data without considering the inherent biases will result in many serious issues, e.g., a discrepancy between offline evaluation and online metrics, or damage to user satisfaction and trust in the recommendation service. To transform the large volume of research models into practical improvements, it is urgent to explore the impacts of these biases and perform debiasing when necessary. When reviewing the papers that consider biases in RS, we find that, to our surprise, the studies are rather fragmented and lack a systematic organization. The terminology "bias" is widely used in the literature, but its definition is usually vague and even inconsistent across papers. This motivates us to provide a systematic survey of existing work on RS biases. In this paper, we first summarize seven types of biases in recommendation, along with their definitions and characteristics. We then provide a taxonomy to position and organize the existing work on recommendation debiasing. Finally, we identify some open challenges and envision some future directions, with the hope of inspiring more research on this important yet under-investigated topic.

286 citations


Cites background from "Multisided Fairness for Recommendat..."

  • ...user attributes, the concept of fairness has been generalized to multiple dimensions in recommender systems [178], spanning from fairness-aware ranking [114], [115], [116], supplier fairness in two-sided marketplace platforms [179], provider-side fairness to make items from different providers have a fair chance of being recommended [108], [180], fairness in group recommendation to minimize the unfairness between group members [129]....


Posted Content
TL;DR: An overview of the different schools of thought and approaches to mitigating (social) biases and increase fairness in the Machine Learning literature is provided, organises approaches into the widely accepted framework of pre-processing, in- processing, and post-processing methods, subcategorizing into a further 11 method areas.
Abstract: As Machine Learning technologies become increasingly used in contexts that affect citizens, companies as well as researchers need to be confident that their application of these methods will not have unexpected social implications, such as bias towards gender, ethnicity, and/or people with disabilities. There is significant literature on approaches to mitigating bias and promoting fairness, yet the area is complex and hard to penetrate for newcomers to the domain. This article seeks to provide an overview of the different schools of thought and approaches to mitigating (social) biases and increasing fairness in the Machine Learning literature. It organises approaches into the widely accepted framework of pre-processing, in-processing, and post-processing methods, subcategorized into a further 11 method areas. Although much of the literature emphasizes binary classification, a discussion of fairness in regression, recommender systems, unsupervised learning, and natural language processing is also provided, along with a selection of currently available open source libraries. The article concludes by summarising open challenges, articulated as four dilemmas for fairness research.

240 citations

Proceedings ArticleDOI
17 Oct 2018
TL;DR: This work proposes a number of recommendation policies which jointly optimize relevance and fairness, thereby achieving substantial improvement in supplier fairness without noticeable decline in user satisfaction, and considers user disposition towards fair content.
Abstract: Two-sided marketplaces are platforms that have customers not only on the demand side (e.g. users), but also on the supply side (e.g. retailers, artists). While traditional recommender systems focused specifically on increasing consumer satisfaction by providing relevant content to consumers, two-sided marketplaces face the problem of additionally optimizing for supplier preferences and visibility. Indeed, suppliers want a fair opportunity to be presented to users. Blindly optimizing for consumer relevance may have a detrimental impact on supplier fairness. Motivated by this problem, we focus on the trade-off between the objectives of consumers and suppliers in the case of music streaming services: we consider the trade-off between relevance of recommendations to the consumer (i.e. user) and fairness of representation of suppliers (i.e. artists), and measure their impact on consumer satisfaction. We propose a conceptual and computational framework using counterfactual estimation techniques to understand and evaluate different recommendation policies, specifically around the trade-off between relevance and fairness, without the need for running many costly A/B tests. We propose a number of recommendation policies which jointly optimize relevance and fairness, thereby achieving substantial improvement in supplier fairness without noticeable decline in user satisfaction. Additionally, we consider user disposition towards fair content, and propose a personalized recommendation policy which takes into account the consumer's tolerance towards fair content. Our findings could guide the design of algorithms powering two-sided marketplaces, as well as future research on sophisticated algorithms for joint optimization of user relevance, satisfaction and fairness.
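The joint optimization of relevance and fairness described above can be sketched as a simple score-interpolation re-ranking policy. Everything below (the function names, the exposure-based fairness boost, the trade-off weight `lam`) is an illustrative assumption, not the paper's counterfactual framework:

```python
def rerank(items, relevance, supplier, exposure, lam=0.3):
    """Score = (1 - lam) * relevance + lam * fairness boost.

    `exposure` maps supplier -> fraction of past recommendations;
    suppliers with low exposure get a larger boost. All names here
    are illustrative, not from the paper.
    """
    def score(i):
        fairness = 1.0 - exposure.get(supplier[i], 0.0)
        return (1 - lam) * relevance[i] + lam * fairness
    return sorted(items, key=score, reverse=True)

items = ["a", "b", "c"]
relevance = {"a": 0.9, "b": 0.8, "c": 0.5}
supplier = {"a": "big_label", "b": "big_label", "c": "indie"}
exposure = {"big_label": 0.95, "indie": 0.05}

print(rerank(items, relevance, supplier, exposure, lam=0.0))  # pure relevance
print(rerank(items, relevance, supplier, exposure, lam=0.5))  # fairness-weighted
```

With `lam=0` the ranking is purely by relevance; raising `lam` promotes the under-exposed supplier's item, mirroring the relevance-fairness trade-off the abstract discusses.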

236 citations


Cites background from "Multisided Fairness for Recommendat..."

  • ...The concept of multiple stakeholders in recommender systems is also suggested in prior research [1], including a previous attempt on considering multi-sided fairness in marketplaces [5]....


Posted Content
TL;DR: An overview of the main concepts of identifying, measuring and improving algorithmic fairness when using AI algorithms is presented and the most commonly used fairness-related datasets in this field are described.
Abstract: An increasing number of decisions regarding the daily lives of human beings are being controlled by artificial intelligence (AI) algorithms in spheres ranging from healthcare, transportation, and education to college admissions, recruitment, provision of loans and many more realms. Since they now touch on many aspects of our lives, it is crucial to develop AI algorithms that are not only accurate but also objective and fair. Recent studies have shown that algorithmic decision-making may be inherently prone to unfairness, even when there is no intention for it. This paper presents an overview of the main concepts of identifying, measuring and improving algorithmic fairness when using AI algorithms. The paper begins by discussing the causes of algorithmic bias and unfairness and the common definitions and measures for fairness. Fairness-enhancing mechanisms are then reviewed and divided into pre-process, in-process and post-process mechanisms. A comprehensive comparison of the mechanisms is then conducted, towards a better understanding of which mechanisms should be used in different scenarios. The paper then describes the most commonly used fairness-related datasets in this field. Finally, the paper ends by reviewing several emerging research sub-fields of algorithmic fairness.

189 citations


Cites background from "Multisided Fairness for Recommendat..."

  • ...[26] notes that many recommender system applications involve multiple stakeholders and may therefore give rise to fairness issues for more than one group of participants simultaneously, as well as achieving fairness at a regulatory level or the level of the entire system (referred to as multisided fairness)....


  • ...Moreover, note that an individual definition of P-fairness, rather than group-fairness, may be somewhat similar to the definition of coverage in recommender systems, requiring that each item be recommended fairly [26]....


  • ...A recent paper, [26], notes that extending the notion of fairness from general classification tasks to recommender systems should take personalization into account....


Journal ArticleDOI
TL;DR: This article presents the first, systematic analysis of the ethical challenges posed by recommender systems through a literature review, and identifies six areas of concern, and maps them onto a proposed taxonomy of different kinds of ethical impact.
Abstract: This article presents the first systematic analysis of the ethical challenges posed by recommender systems through a literature review. The article identifies six areas of concern and maps them onto a proposed taxonomy of different kinds of ethical impact. The analysis uncovers a gap in the literature: current user-centred approaches do not consider the interests of a variety of other stakeholders, as opposed to just the receivers of a recommendation, in assessing the ethical impacts of a recommender system.

173 citations

References
Proceedings ArticleDOI
08 Jan 2012
TL;DR: A framework for fair classification is presented, comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.
Abstract: We study fairness in classification, where individuals are classified, e.g., admitted to a university, and the goal is to prevent discrimination against individuals based on their membership in some group, while maintaining utility for the classifier (the university). The main conceptual contribution of this paper is a framework for fair classification comprising (1) a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand; (2) an algorithm for maximizing utility subject to the fairness constraint, that similar individuals are treated similarly. We also present an adaptation of our approach to achieve the complementary goal of "fair affirmative action," which guarantees statistical parity (i.e., the demographics of the set of individuals receiving any classification are the same as the demographics of the underlying population), while treating similar individuals as similarly as possible. Finally, we discuss the relationship of fairness to privacy: when fairness implies privacy, and how tools developed in the context of differential privacy may be applied to fairness.
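The statistical-parity guarantee mentioned in the abstract (the demographics of those receiving a positive classification match the demographics of the population) is easy to state in code. Below is a minimal sketch of a two-group parity check, not Dwork et al.'s metric-based algorithm:

```python
def statistical_parity_gap(labels, groups):
    """Absolute difference in positive-classification rate between
    two groups; 0.0 means exact statistical parity."""
    rates = {}
    for g in set(groups):
        members = [y for y, gg in zip(labels, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    a, b = rates.values()
    return abs(a - b)

# Group 0 receives 2/4 positive classifications; group 1 receives 1/4.
labels = [1, 0, 1, 0, 1, 0, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_gap(labels, groups))  # 0.25
```

A post-processing fairness mechanism would adjust decisions until this gap falls below some tolerance; the paper's individual-fairness constraint is stricter, also requiring that similar individuals be treated similarly.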

2,027 citations


"Multisided Fairness for Recommendat..." refers background in this paper

  • ...In the motivating example from [6], a credit card company is recommending consumer credit offers....


  • ...group fairness in fairness-aware classification [6]....


  • ...Bias and fairness in machine learning are topics of considerable recent research interest [4, 6, 17]....


Posted Content
TL;DR: In this article, the authors proposed a framework for fair classification comprising a task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.
Abstract: We study fairness in classification, where individuals are classified, e.g., admitted to a university, and the goal is to prevent discrimination against individuals based on their membership in some group, while maintaining utility for the classifier (the university). The main conceptual contribution of this paper is a framework for fair classification comprising (1) a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand; (2) an algorithm for maximizing utility subject to the fairness constraint, that similar individuals are treated similarly. We also present an adaptation of our approach to achieve the complementary goal of "fair affirmative action," which guarantees statistical parity (i.e., the demographics of the set of individuals receiving any classification are the same as the demographics of the underlying population), while treating similar individuals as similarly as possible. Finally, we discuss the relationship of fairness to privacy: when fairness implies privacy, and how tools developed in the context of differential privacy may be applied to fairness.

2,003 citations

Proceedings Article
Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, Cynthia Dwork
16 Jun 2013
TL;DR: A learning algorithm for fair classification is proposed that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole) and individual fairness (similar individuals should be treated similarly).
Abstract: We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole), and individual fairness (similar individuals should be treated similarly). We formulate fairness as an optimization problem of finding a good representation of the data with two competing goals: to encode the data as well as possible, while simultaneously obfuscating any information about membership in the protected group. We show positive results of our algorithm relative to other known techniques, on three datasets. Moreover, we demonstrate several advantages to our approach. First, our intermediate representation can be used for other classification tasks (i.e., transfer learning is possible); secondly, we take a step toward learning a distance metric which can find important dimensions of the data for classification.
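The two competing goals in this abstract (encode the data well while obfuscating protected-group membership, plus predicting labels) can be illustrated numerically. The following is a loose sketch of the three loss terms of such a learned representation, with random prototypes standing in for learned ones; all names and the unit weighting are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))              # data points
y = rng.integers(0, 2, size=8)           # binary labels
s = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected-group membership
Z = rng.normal(size=(4, 3))              # k=4 prototype vectors (random stand-ins)

# Soft assignment of each point to prototypes (softmax of negative distance)
d = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
M = np.exp(-d) / np.exp(-d).sum(1, keepdims=True)

# Group fairness: mean prototype usage should match across groups
L_z = np.abs(M[s == 0].mean(0) - M[s == 1].mean(0)).sum()
# Reconstruction: the representation should encode the data well
L_x = ((X - M @ Z) ** 2).mean()
# Prediction: label information should survive (w = per-prototype label prob.)
w = rng.random(4)
L_y = -np.mean(y * np.log(M @ w) + (1 - y) * np.log(1 - M @ w))

loss = L_z + L_x + L_y  # the paper weights these terms; unit weights here
```

Training would adjust the prototypes (and weights) to minimize this combined loss, trading reconstruction quality against obfuscation of group membership.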

1,444 citations


"Multisided Fairness for Recommendat..." refers background in this paper

  • ...One intriguing possibility is to design a recommender system following the approach of [23] in generating fair classification....


Journal ArticleDOI
TL;DR: In this article, the authors investigate the generalized second-price (GSP) auction, a new mechanism used by search engines to sell online advertising, describe the generalized English auction that corresponds to GSP, and show that it has a unique equilibrium, with the same payoffs to all players as the dominant-strategy equilibrium of VCG.
Abstract: We investigate the "generalized second-price" (GSP) auction, a new mechanism used by search engines to sell online advertising. Although GSP looks similar to the Vickrey-Clarke-Groves (VCG) mechanism, its properties are very different. Unlike the VCG mechanism, GSP generally does not have an equilibrium in dominant strategies, and truth-telling is not an equilibrium of GSP. To analyze the properties of GSP, we describe the generalized English auction that corresponds to GSP and show that it has a unique equilibrium. This is an ex post equilibrium, with the same payoffs to all players as the dominant strategy equilibrium of VCG.
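The contrast between GSP and VCG payments described in the abstract can be made concrete. The sketch below assumes bids are already sorted in descending order and each slot has a known click-through rate; GSP charges each slot the next-highest bid per click, while the VCG totals follow the standard externality recursion. A hedged illustration, not the article's equilibrium analysis:

```python
def gsp_payments(bids, ctrs):
    """Per-click GSP payments: the bidder in slot i pays the (i+1)-th
    highest bid. `bids` sorted descending; one slot per entry of `ctrs`."""
    return [bids[i + 1] for i in range(len(ctrs))]

def vcg_payments(bids, ctrs):
    """Total (per-impression) VCG payment for each slot, via the
    recursion p[last] = c_last * b_{K+1};
    p[i] = (c_i - c_{i+1}) * b_{i+1} + p[i+1]."""
    K = len(ctrs)
    p = [0.0] * K
    p[K - 1] = ctrs[K - 1] * bids[K]
    for i in range(K - 2, -1, -1):
        p[i] = (ctrs[i] - ctrs[i + 1]) * bids[i + 1] + p[i + 1]
    return p

bids = [10, 6, 4, 2]   # descending per-click bids; 3 slots, 4 bidders
ctrs = [0.3, 0.2, 0.1]  # click-through rate of each slot
print(gsp_payments(bids, ctrs))  # [6, 4, 2] per click
print(vcg_payments(bids, ctrs))  # [1.2, 0.6, 0.2] totals
```

Note that under truthful bids the top slot's expected GSP payment (6 per click x 0.3 = 1.8) exceeds its VCG total (1.2), one reason truth-telling is not an equilibrium of GSP.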

1,406 citations

Book ChapterDOI
01 Jan 2011
TL;DR: In this chapter, the authors survey recent progress in the field of collaborative filtering, describe several extensions that bring competitive accuracy to neighborhood methods, which used to dominate the field, and demonstrate how temporal models and implicit feedback can be utilized to extend model accuracy.
Abstract: The collaborative filtering (CF) approach to recommenders has recently enjoyed much interest and progress. The fact that it played a central role within the recently completed Netflix competition has contributed to its popularity. This chapter surveys the recent progress in the field. Matrix factorization techniques, which became a first choice for implementing CF, are described together with recent innovations. We also describe several extensions that bring competitive accuracy to neighborhood methods, which used to dominate the field. The chapter demonstrates how to utilize temporal models and implicit feedback to extend model accuracy. In passing, we include detailed descriptions of some of the central methods developed for tackling the challenge of the Netflix Prize competition.
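Matrix factorization, which the chapter describes as a first choice for implementing CF, can be sketched in a few lines: approximate each rating as a dot product of user and item factor vectors, fit by stochastic gradient descent on the observed ratings. A minimal illustration (hyperparameters are arbitrary) without the bias, temporal, or implicit-feedback terms the chapter covers:

```python
import random

def mf_sgd(ratings, n_users, n_items, k=2, lr=0.05, reg=0.02, epochs=500):
    """Plain matrix factorization r_ui ~ p_u . q_i, trained with SGD
    on observed (user, item, rating) triples."""
    random.seed(0)
    P = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(P[u][f] * Q[i][f] for f in range(k))
            e = r - pred  # prediction error on this observed rating
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (e * qi - reg * pu)  # regularized SGD step
                Q[i][f] += lr * (e * pu - reg * qi)
    return P, Q

ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (1, 1, 1.0)]
P, Q = mf_sgd(ratings, n_users=2, n_items=2)
pred = sum(P[0][f] * Q[0][f] for f in range(2))
print(round(pred, 2))  # should approach the observed rating of 5.0
```

Demographic and item attributes never enter this model, only the behavior triples, which is exactly the property the citing paper highlights below.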

1,094 citations


"Multisided Fairness for Recommendat..." refers background in this paper

  • ...The dominant recommendation paradigm, collaborative filtering [13], uses user behavior as its input, ignoring user demographics and item attributes....
