Showing papers by "Michael S. Bernstein published in 2013"


Proceedings ArticleDOI
23 Feb 2013
TL;DR: This paper outlines a framework that will enable crowd work that is complex, collaborative, and sustainable, and lays out research challenges in twelve major areas: workflow, task assignment, hierarchy, real-time response, synchronous collaboration, quality control, crowds guiding AIs, AIs guiding crowds, platforms, job design, reputation, and motivation.
Abstract: Paid crowd work offers remarkable opportunities for improving productivity, social mobility, and the global economy by engaging a geographically distributed workforce to complete complex tasks on demand and at scale. But it is also possible that crowd work will fail to achieve its potential, focusing on assembly-line piecework. Can we foresee a future crowd workplace in which we would want our children to participate? This paper frames the major challenges that stand in the way of this goal. Drawing on theory from organizational behavior and distributed computing, as well as direct feedback from workers, we outline a framework that will enable crowd work that is complex, collaborative, and sustainable. The framework lays out research challenges in twelve major areas: workflow, task assignment, hierarchy, real-time response, synchronous collaboration, quality control, crowds guiding AIs, AIs guiding crowds, platforms, job design, reputation, and motivation.

836 citations


Proceedings ArticleDOI
27 Apr 2013
TL;DR: This paper combines survey and large-scale log data to examine how well users' perceptions of their audience match their actual audience on Facebook, and finds that social media users consistently underestimate their audience size for their posts.
Abstract: When you share content in an online social network, who is listening? Users have scarce information about who actually sees their content, making their audience seem invisible and difficult to estimate. However, understanding this invisible audience can impact both science and design, since perceived audiences influence content production and self-presentation online. In this paper, we combine survey and large-scale log data to examine how well users' perceptions of their audience match their actual audience on Facebook. We find that social media users consistently underestimate their audience size for their posts, guessing that their audience is just 27% of its true size. Qualitative coding of survey responses reveals folk theories that attempt to reverse-engineer audience size using feedback and friend count, though none of these approaches are particularly accurate. We analyze audience logs for 222,000 Facebook users' posts over the course of one month and find that publicly visible signals --- friend count, likes, and comments --- vary widely and do not strongly indicate the audience of a single post. Despite the variation, users typically reach 61% of their friends each month. Together, our results begin to reveal the invisible undercurrents of audience attention and behavior in online social networks.

343 citations


Proceedings ArticleDOI
23 Feb 2013
TL;DR: EmailValet is an email client that recruits remote assistants from an expert crowdsourcing marketplace; it is an example of a valet approach to crowdsourcing, which aims for parsimony and transparency in access control for the crowd.
Abstract: This paper introduces privacy and accountability techniques for crowd-powered systems. We focus on email task management: tasks are an implicit part of every inbox, but the overwhelming volume of incoming email can bury important requests. We present EmailValet, an email client that recruits remote assistants from an expert crowdsourcing marketplace. By annotating each email with its implicit tasks, EmailValet's assistants create a task list that is automatically populated from emails in the user's inbox. The system is an example of a valet approach to crowdsourcing, which aims for parsimony and transparency in access control for the crowd. To maintain privacy, users specify rules that define a sliding-window subset of their inbox that they are willing to share with assistants. To support accountability, EmailValet displays the actions that the assistant has taken on each email. In a weeklong field study, participants completed twice as many of their email-based tasks when they had access to crowdsourced assistants, and they became increasingly comfortable sharing their inbox with assistants over time.
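
The sliding-window sharing rule described above is easy to picture in code. The sketch below is a minimal illustration, assuming a rule keyed on message age with an optional sender whitelist; the class names, fields, and matching logic are assumptions for exposition, not EmailValet's actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Email:
    sender: str
    subject: str
    received: datetime


@dataclass
class SharingRule:
    """Hypothetical sliding-window rule: only recent mail is visible to assistants."""
    window_days: int            # messages older than this stay private
    allowed_senders: set[str]   # optional sender whitelist (assumption, not in the paper)

    def is_shareable(self, email: Email, now: datetime) -> bool:
        in_window = now - email.received <= timedelta(days=self.window_days)
        sender_ok = not self.allowed_senders or email.sender in self.allowed_senders
        return in_window and sender_ok


def shareable_subset(inbox: list[Email], rule: SharingRule, now: datetime) -> list[Email]:
    """Return the sliding-window subset of the inbox that assistants may see."""
    return [m for m in inbox if rule.is_shareable(m, now)]
```

In this reading, only messages that fall inside the window (and, optionally, come from approved senders) ever become visible to assistants, which is what keeps the shared subset parsimonious.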

49 citations


Proceedings ArticleDOI
08 Oct 2013
TL;DR: This paper presents DeduceIt, a system for creating, grading, and analyzing derivation assignments in any formal domain, and suggests that automated reasoning can extend online assignments and large-scale education to many new domains.
Abstract: Large online courses often assign problems that are easy to grade because they have a fixed set of solutions (such as multiple choice), but grading and guiding students is more difficult in problem domains that have an unbounded number of correct answers. One such domain is derivations: sequences of logical steps commonly used in assignments for technical, mathematical and scientific subjects. We present DeduceIt, a system for creating, grading, and analyzing derivation assignments in any formal domain. DeduceIt supports assignments in any logical formalism, provides students with incremental feedback, and aggregates student paths through each proof to produce instructor analytics. DeduceIt benefits from checking thousands of derivations on the web: it introduces a proof cache, a novel data structure which leverages a crowd of students to decrease the cost of checking derivations and providing real-time, constructive feedback. We evaluate DeduceIt with 990 students in an online compilers course, finding students take advantage of its incremental feedback and instructors benefit from its structured insights into course topics. Our work suggests that automated reasoning can extend online assignments and large-scale education to many new domains.
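
The "proof cache" is described above only at a high level. The sketch below is a rough illustration of the underlying idea, assuming a memoization table over individual derivation steps keyed by (premises, rule, conclusion); the interface and names are assumptions for illustration, not DeduceIt's actual data structure.

```python
from typing import Callable

# A derivation step, represented here as (premises, inference rule, conclusion),
# all encoded as strings. This encoding is an assumption for the sketch.
Step = tuple[str, str, str]


class ProofCache:
    """Memoize verified derivation steps so each unique step is checked only once."""

    def __init__(self, checker: Callable[[Step], bool]):
        self._checker = checker               # expensive step-level derivation checker
        self._verified: dict[Step, bool] = {}

    def check_step(self, step: Step) -> bool:
        if step not in self._verified:        # pay the checking cost once per unique step
            self._verified[step] = self._checker(step)
        return self._verified[step]

    def check_derivation(self, steps: list[Step]) -> bool:
        # A derivation is accepted only if every one of its steps checks out.
        return all(self.check_step(s) for s in steps)
```

Because many students submit the same intermediate steps, a later derivation that reuses an already-verified step can be accepted without re-running the expensive check, which is how a crowd of students can drive down the cost of checking and keep feedback real-time.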

18 citations


Proceedings ArticleDOI
23 Feb 2013
TL;DR: This panel brings together researchers from the crowdsourcing/development space and social entrepreneurs to discuss the pros and cons of micro-volunteering for non-profits and to identify the missing building blocks for replicating this concept in developing regions worldwide.
Abstract: Finding and retaining volunteers is a challenge for most NGOs (non-governmental organizations) and non-profit organizations worldwide. Quite often, volunteers want to help but hesitate to make time commitments because of busy lives or demanding schedules. Micro-volunteering, or crowdsourced volunteering, has taken off in the last few years: a task is divided into fragments and accomplished collectively by the crowd, so individuals need only work on small chunks of a task during short stretches of free time in their day. This panel brings together a mix of researchers from the crowdsourcing and development space and social entrepreneurs to discuss the pros and cons of micro-volunteering for non-profits and to identify the missing building blocks for replicating this concept in developing regions worldwide.

14 citations