
Michael S. Bernstein

Researcher at Stanford University

Publications: 207
Citations: 59,397

Michael S. Bernstein is an academic researcher at Stanford University. He has contributed to research topics including Crowdsourcing and Computer science, has an h-index of 52, and has co-authored 191 publications receiving 42,744 citations. His previous affiliations include the Association for Computing Machinery and the Massachusetts Institute of Technology.

Papers
Journal Article

Ink: Increasing Worker Agency to Reduce Friction in Hiring Crowd Workers

TL;DR: Ink, a system that crowd workers can use to showcase their services by embedding tasks inside web tutorials, is proposed; it frames hiring expert crowd workers within users' well-established information-seeking habits and gives workers more control over their work.
Proceedings Article

A Glimpse Far into the Future: Understanding Long-term Crowd Worker Quality

Abstract: Microtask crowdsourcing is increasingly critical to the creation of extremely large datasets. As a result, crowd workers spend weeks or months repeating the exact same tasks, making it necessary to understand their behavior over these long periods of time. We utilize three large, longitudinal datasets of nine million annotations collected from Amazon Mechanical Turk to examine claims that workers fatigue or satisfice over these long periods, producing lower-quality work. We find that, contrary to these claims, workers are extremely stable in their quality over the entire period. To understand whether workers set their quality based on the task's requirements for acceptance, we then perform an experiment where we vary the required quality for a large crowdsourcing task. Workers did not adjust their quality based on the acceptance threshold: workers who were above the threshold continued working at their usual quality level, and workers below the threshold self-selected out of the task. Capitalizing on this consistency, we demonstrate that it is possible to predict workers' long-term quality using just a glimpse of their quality on the first five tasks.
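Below is a minimal, hypothetical sketch (not the authors' code; the worker pool and annotations are synthetic) of the abstract's closing idea: estimating a worker's long-term annotation quality from a glimpse of their first five tasks, which is viable precisely because quality is stable over time.

```python
# Minimal sketch of "glimpse" prediction: estimate long-term worker
# quality from the first five tasks. All data here is synthetic; a real
# study would use gold-standard judgments from a platform such as
# Amazon Mechanical Turk.
import random

random.seed(0)

# Simulate workers whose per-task accuracy is stable over time,
# mirroring the paper's finding of stable quality.
true_quality = {f"w{i}": random.uniform(0.6, 0.98) for i in range(200)}
history = {
    w: [random.random() < p for _ in range(500)]  # True = correct annotation
    for w, p in true_quality.items()
}

def glimpse_estimate(annotations, k=5):
    """Estimate long-term quality from the first k tasks only."""
    first = annotations[:k]
    return sum(first) / len(first)

# Compare the five-task glimpse with each worker's full-history quality.
errors = [
    abs(glimpse_estimate(a) - sum(a) / len(a))
    for a in history.values()
]
print(f"mean absolute error of the 5-task glimpse: {sum(errors) / len(errors):.3f}")
```

With five binary observations the estimate is coarse (it resolves in steps of 0.2), so the predictive power rests on the stability the paper documents: any early window is a representative sample of all later work.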
Journal Article

My Team Will Go On: Differentiating High and Low Viability Teams through Team Interaction

TL;DR: This study aggregates features drawn from the organizational behavior literature to train a viability classification model over a dataset of 669 ten-minute text conversations of online teams, and identifies the use of exclusive language such as 'but' and 'except', along with second-person pronouns, as the most predictive features for detecting the most viable teams.
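A minimal sketch, under loose assumptions, of how such a viability classifier might be built from the two feature families the summary highlights. The conversations, labels, and tiny feature set are invented for illustration; they do not reproduce the study's 669-conversation dataset, its full feature aggregation, or its findings about which direction each feature points.

```python
# Hypothetical sketch: classify team viability from conversation text
# using per-token rates of exclusive words and second-person pronouns.
from sklearn.linear_model import LogisticRegression

EXCLUSIVE = {"but", "except"}             # exclusive-language markers
SECOND_PERSON = {"you", "your", "yours"}  # second-person pronouns

def featurize(conversation):
    """Two per-token rates: exclusive words and second-person pronouns."""
    tokens = conversation.lower().split()
    n = max(len(tokens), 1)
    return [
        sum(t in EXCLUSIVE for t in tokens) / n,
        sum(t in SECOND_PERSON for t in tokens) / n,
    ]

# Invented toy conversations: 1 = viable team, 0 = not viable.
conversations = [
    ("you raised a good point and your plan covers what we need", 1),
    ("that could work except the deadline makes it impossible", 0),
    ("great idea, you should lead and your draft can be our base", 1),
    ("fine but the budget is wrong and except for slide one it fails", 0),
]
X = [featurize(text) for text, _ in conversations]
y = [label for _, label in conversations]

model = LogisticRegression().fit(X, y)
print(model.predict([featurize("you and your team handled that well")]))
```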
Proceedings Article

Crowd-powered interfaces

TL;DR: This work investigates crowd-powered interfaces: interfaces that embed human activity to support high-level conceptual activities such as writing, editing, and question-answering, and maps out the design space of interfaces that depend on outsourced, friendsourced, and data-mined resources.
Proceedings Article

Not Now, Ask Later: Users Weaken Their Behavior Change Regimen Over Time, But Expect To Re-Strengthen It Imminently

TL;DR: In this paper, the authors investigate how effectively users adhere to behavior-change interventions that help them control their online browsing habits, and find that, while users typically begin with high-challenge interventions, over time they allow themselves to slip into easier and easier interventions.