
Showing papers by "Eric Gilbert published in 2021"


Proceedings ArticleDOI
06 May 2021
TL;DR: Affirmative consent is the idea that someone must ask for, and earn, enthusiastic approval before interacting with someone else; it has long been used by feminist activists and scholars to theorize and prevent sexual assault, and the authors apply it to online interaction.
Abstract: Affirmative consent is the idea that someone must ask for, and earn, enthusiastic approval before interacting with someone else. For decades, feminist activists and scholars have used affirmative consent to theorize and prevent sexual assault. In this paper, we ask: Can affirmative consent help to theorize online interaction? Drawing from feminist, legal, and HCI literature, we introduce the feminist theory of affirmative consent and use it to analyze social computing systems. We present affirmative consent’s five core concepts: it is voluntary, informed, revertible, specific, and unburdensome. Using these principles, this paper argues that affirmative consent is both an explanatory and generative theoretical framework. First, affirmative consent is a theoretical abstraction for explaining various problematic phenomena in social platforms—including mass online harassment, revenge porn, and problems with content feeds. Second, we argue that affirmative consent is a generative theoretical foundation from which to imagine new design ideas for consentful socio-technical systems.

38 citations


Posted Content
TL;DR: The authors assembled pools of both liberal and conservative crowd raters and tested three ways of asking them to judge 374 articles: a no research condition, in which raters simply viewed the article and rendered a judgment; an individual research condition, in which raters were also asked to search for corroborating evidence and provide a link to the best evidence they found; and a collective research condition, in which raters did not search but instead reviewed links collected from workers in the individual research condition.
Abstract: Can crowd workers be trusted to judge whether news-like articles circulating on the Internet are wildly misleading, or does partisanship and inexperience get in the way? We assembled pools of both liberal and conservative crowd raters and tested three ways of asking them to make judgments about 374 articles. In a no research condition, they were just asked to view the article and then render a judgment. In an individual research condition, they were also asked to search for corroborating evidence and provide a link to the best evidence they found. In a collective research condition, they were not asked to search, but instead to look at links collected from workers in the individual research condition. The individual research condition reduced the partisanship of judgments. Moreover, the judgments of a panel of sixteen or more crowd workers were better than that of a panel of three expert journalists, as measured by alignment with a held out journalist's ratings. Without research, the crowd judgments were better than those of a single journalist, but not as good as the average of two journalists.

1 citation