
Showing papers by "Michael S. Bernstein published in 2010"


Proceedings ArticleDOI
03 Oct 2010
TL;DR: Soylent, a word processing interface that enables writers to call on Mechanical Turk workers to shorten, proofread, and otherwise edit parts of their documents on demand, and the Find-Fix-Verify crowd programming pattern, which splits tasks into a series of generation and review stages.
Abstract: This paper introduces architectural and interaction patterns for integrating crowdsourced human contributions directly into user interfaces. We focus on writing and editing, complex endeavors that span many levels of conceptual and pragmatic activity. Authoring tools offer help with pragmatics, but for higher-level help, writers commonly turn to other people. We thus present Soylent, a word processing interface that enables writers to call on Mechanical Turk workers to shorten, proofread, and otherwise edit parts of their documents on demand. To improve worker quality, we introduce the Find-Fix-Verify crowd programming pattern, which splits tasks into a series of generation and review stages. Evaluation studies demonstrate the feasibility of crowdsourced editing and investigate questions of reliability, cost, wait time, and work time for edits.
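The abstract's Find-Fix-Verify pattern (independent workers flag problems, other workers propose fixes, a third group votes out bad fixes) can be sketched as a three-stage pipeline. The worker-facing callables below are stubs standing in for Mechanical Turk tasks, and the agreement threshold and function names are illustrative assumptions, not the paper's implementation.

```python
from collections import Counter

def find_fix_verify(paragraph, find_workers, fix_workers, verify_workers,
                    agreement=2):
    """Sketch of the Find-Fix-Verify crowd pattern.

    Stage 1 (Find): independent workers flag problem spans.
    Stage 2 (Fix): other workers propose rewrites for flagged spans.
    Stage 3 (Verify): a third group votes among the candidate rewrites.
    """
    # Find: keep only spans flagged by at least `agreement` workers.
    flagged = Counter()
    for worker in find_workers:
        for span in worker(paragraph):
            flagged[span] += 1
    spans = [s for s, votes in flagged.items() if votes >= agreement]

    # Fix: gather candidate rewrites for each agreed-upon span.
    for span in spans:
        candidates = [fix(paragraph, span) for fix in fix_workers]
        # Verify: each verifier votes for one candidate; the plurality
        # winner is applied to the paragraph.
        votes = Counter(vote(span, candidates) for vote in verify_workers)
        winner = votes.most_common(1)[0][0]
        paragraph = paragraph.replace(span, winner)
    return paragraph
```

The decomposition matters because asking one worker to "fix this paragraph" invites lazy or eager edits; splitting generation from review lets cheap redundant judgments filter each other.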

814 citations


Proceedings ArticleDOI
10 Apr 2010
TL;DR: This paper studied content recommendation on Twitter to better direct user attention and explored three separate dimensions in designing such a recommender: content sources, topic interest models for users, and social voting.
Abstract: More and more web users keep up with newest information through information streams such as the popular micro-blogging website Twitter. In this paper we studied content recommendation on Twitter to better direct user attention. In a modular approach, we explored three separate dimensions in designing such a recommender: content sources, topic interest models for users, and social voting. We implemented 12 recommendation engines in the design space we formulated, and deployed them to a recommender service on the web to gather feedback from real Twitter users. The best performing algorithm improved the percentage of interesting content to 72% from a baseline of 33%. We conclude this work by discussing the implications of our recommender design and how our design can generalize to other information streams.
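The modular design space above (content sources x topic interest models x social voting) yields the paper's 12 engines as a cross product. The dimension values below are invented placeholders for illustration; the abstract does not enumerate the actual choices.

```python
from itertools import product

# Hypothetical values for each design dimension; the paper's actual
# choices are not enumerated in the abstract above.
content_sources = ["followee-of-followee", "popular"]
topic_models = ["self-topics", "followee-topics", "none"]
social_voting = ["on", "off"]

# Each combination defines one candidate recommendation engine.
engines = list(product(content_sources, topic_models, social_voting))
print(len(engines))  # 2 * 3 * 2 = 12 engine configurations
```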

461 citations


Proceedings ArticleDOI
03 Oct 2010
TL;DR: The Twitter client Eddi groups tweets in a user's feed into topics mentioned explicitly or implicitly, which users can then browse for items of interest; an algorithm evaluation reveals that search engine callouts outperform other approaches when they employ simple syntactic transformation and backoff strategies.
Abstract: Twitter streams are on overload: active users receive hundreds of items per day, and existing interfaces force us to march through a chronologically-ordered morass to find tweets of interest. We present an approach to organizing a user's own feed into coherently clustered trending topics for more directed exploration. Our Twitter client, called Eddi, groups tweets in a user's feed into topics mentioned explicitly or implicitly, which users can then browse for items of interest. To implement this topic clustering, we have developed a novel algorithm for discovering topics in short status updates powered by linguistic syntactic transformation and callouts to a search engine. An algorithm evaluation reveals that search engine callouts outperform other approaches when they employ simple syntactic transformation and backoff strategies. Active Twitter users evaluated Eddi and found it to be a more efficient and enjoyable way to browse an overwhelming status update feed than the standard chronological interface.
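The abstract's "syntactic transformation and callouts to a search engine" with backoff can be illustrated roughly as follows. This is not Eddi's actual algorithm: the stopword list, cleanup rules, and the stubbed `search_engine` callable are assumptions standing in for the real transformation and search service.

```python
import re

STOPWORDS = {"the", "a", "an", "is", "are", "to", "of", "and", "i", "my", "rt"}

def tweet_to_queries(tweet):
    """Syntactic transformation with backoff: strip Twitter syntax,
    drop stopwords, then yield progressively shorter queries."""
    # Remove @mentions and URLs; keep hashtag words as plain terms.
    text = re.sub(r"@\w+|https?://\S+", "", tweet).replace("#", "")
    terms = [t.lower() for t in re.findall(r"[A-Za-z']+", text)
             if t.lower() not in STOPWORDS]
    # Backoff: try the full term list first, then drop trailing terms.
    for end in range(len(terms), 0, -1):
        yield " ".join(terms[:end])

def label_tweet(tweet, search_engine):
    """Return the first topic label the search engine produces,
    backing off to shorter queries when a callout yields nothing."""
    for query in tweet_to_queries(tweet):
        label = search_engine(query)  # stub for a real search callout
        if label is not None:
            return label
    return None
```

The backoff step is what makes callouts robust on short, noisy status updates: if the full cleaned tweet returns nothing useful, progressively shorter queries still have a chance of hitting a recognizable topic.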

191 citations


Proceedings ArticleDOI
10 Apr 2010
TL;DR: FeedMe, a plug-in for Google Reader, recommends friends who may be interested in seeing content the user is viewing, provides information on what each recipient has seen and how many emails they have received recently, and gives recipients the opportunity to provide lightweight feedback when they appreciate shared content.
Abstract: To find interesting, personally relevant web content, people rely on friends and colleagues to pass links along as they encounter them. In this paper, we study and augment link-sharing via e-mail, the most popular means of sharing web content today. Armed with survey data indicating that active sharers of novel web content are often those that actively seek it out, we developed FeedMe, a plug-in for Google Reader that makes directed sharing of content a more salient part of the user experience. FeedMe recommends friends who may be interested in seeing content that the user is viewing, provides information on what the recipient has seen and how many emails they have received recently, and gives recipients the opportunity to provide lightweight feedback when they appreciate shared content. FeedMe introduces a novel design space within mixed-initiative social recommenders: friends who know the user voluntarily vet the material on the user's behalf. We performed a two-week field experiment (N=60) and found that FeedMe made it easier and more enjoyable to share content that recipients appreciated and would not have found otherwise.

87 citations


01 Jan 2010
TL;DR: It is proposed that the research community engage with microblogging feed consumption practice: how do users manage the Twitter feed?
Abstract: Twitter streams are on overload: active users receive hundreds of items per day, and existing interfaces force us to march through a chronologically-ordered morass to find tweets of interest. We propose that the research community engage with microblogging feed consumption practice: how do users manage the Twitter feed?

20 citations


Journal ArticleDOI
TL;DR: A professor and several PhD students at MIT examine the challenges and opportunities in human computation.
Abstract: A professor and several PhD students at MIT examine the challenges and opportunities in human computation.

19 citations


Proceedings ArticleDOI
03 Oct 2010
TL;DR: This work investigates crowd-powered interfaces: interfaces that embed human activity to support high-level conceptual activities such as writing, editing and question-answering, and maps out the design space of interfaces that depend on outsourced, friendsourced, and data mined resources.
Abstract: We investigate crowd-powered interfaces: interfaces that embed human activity to support high-level conceptual activities such as writing, editing and question-answering. For example, a crowd-powered interface using paid crowd workers can compute a series of textual cuts and edits to a paragraph, then provide the user with an interface to condense his or her writing. We map out the design space of interfaces that depend on outsourced, friendsourced, and data-mined resources, and report on designs for each of these. We discuss technical and motivational challenges inherent in human-powered interfaces.

7 citations


Journal ArticleDOI
TL;DR: The articles in this issue of XRDS provide striking answers to major questions in the field of human computation and crowdsourcing, including reversed forms of human-computer symbiosis.
Abstract: In 1937, Alan Turing formalized the notion of computation by introducing the Turing machine, thus laying the theoretical foundations of modern computer science. Turing also introduced a stronger computational model: a Turing machine with an oracle. In addition to performing computations itself, such a machine is able to ask the oracle questions and immediately receive correct answers, even if these questions are too hard for the machine itself to compute. Depending on the oracle's capabilities, a Turing machine with an oracle therefore could be much stronger than the machine on its own. The oracle itself is an unspecified entity, "apart from saying that it cannot be a machine" (from Turing's 1939 work, reprinted in The Undecidable). The concept of a Turing machine with an oracle is purely mathematical, yet it springs to mind when observing how today's computers use human capabilities in order to solve problems. Computers are now able to complete tasks that involve challenges far beyond what algorithms and artificial intelligence can achieve today (recognizing anomalies in photos, solving a Captcha puzzle, or judging artistic value) by outsourcing these challenges to humans. Millions of people being on the internet is what makes this outsourcing possible on a large scale; it is known as crowdsourcing. Understanding these reversed forms of human-computer symbiosis, in which the computer asks a person to compute something instead of vice versa, is the main object of research in the rising field of human computation.
Symbiosis: While we say that computers are using human capabilities, of course it is really humans who are using computers in order to utilize other humans' capabilities. Indeed, in many applications the role of computers is simply to coordinate between humans as they interact among themselves. This aspect of human computation can be described as a novel form of social organization, in which computers are mediators. The most prominent example is Wikipedia, where computers serve as the platform for aggregating the knowledge and efforts of many people, making it possible to produce a vast, comprehensive, and coherent encyclopedia equivalent to a printed book of around 1,000 volumes, all in a completely distributed manner. The articles in this issue of XRDS provide striking answers to major questions in the field of human computation and crowdsourcing. At the same time, they generate even more questions, highlighting future research directions and opportunities to get involved in the field.
Incentives: Do people partake in human computation tasks …

3 citations


Journal ArticleDOI
TL;DR: In this issue of Communications, Greg Linden asks if spammers have been defeated; Michael Bernstein discusses Clay Shirky's keynote speech at CSCW 2010; and Erika S. Poole writes about how the digital world can help parents cope with the death of a child.
Abstract: http://cacm.acm.org/blogs/blog-cacm The Communications Web site, http://cacm.acm.org, features more than a dozen bloggers in the BLOG@CACM community. In each issue of Communications, we'll publish selected posts or excerpts. twitter Follow us on Twitter at http://twitter.com/blogCACM Greg Linden asks if spammers have been defeated; Michael Bernstein discusses Clay Shirky's keynote speech at CSCW 2010; and Erika S. Poole writes about how the digital world can help parents cope with the death of a child.

2 citations


Book ChapterDOI
01 Jan 2010
TL;DR: Sloppy programming as discussed by the authors is a technique for translating sloppy commands into executable code: the programmer should be able to enter a few keywords, and the computer should try to interpret and make sense of this input.
Abstract: Publisher Summary: The essence of sloppy programming is that the user should be able to enter something simple and natural, such as a few keywords, and the computer should try everything within its power to interpret and make sense of this input. This chapter discusses several prototypes that implement sloppy programming, translating sloppy commands directly into executable code. It describes the algorithms used in these prototypes, exposes their limitations, and proposes directions for future work. The techniques described here still only scratch the surface of a domain with great potential: translating sloppy commands into executable code. The chapter describes benefits to end users and expert programmers alike, advocates a continued need for textual command interfaces, and discusses what can be learned from the prototypes, including the fact that users can form commands for some of these systems without any training. Finally, it gives high-level technical details on how to implement sloppy translation algorithms, with references for further reading.

1 citation


01 Apr 2010
TL;DR: This paper presented two studies of content not normally expressed in status updates (well-being and status feedback) and considered how they may be processed, valued and used for potential quality-of-life benefits in terms of personal and social reflection and awareness.
Abstract: This position paper presents two studies of content not normally expressed in status updates—well-being and status feedback—and considers how they may be processed, valued and used for potential quality-of-life benefits in terms of personal and social reflection and awareness. Do I Tweet Good? (poor grammar intentional) is a site investigating more nuanced forms of status feedback than current microblogging sites allow, towards understanding self-identity, reflection, and online perception. Healthii is a tool for sharing physical and emotional well-being via status updates, investigating concepts of self-reflection and social awareness. Together, these projects consider furthering the value of microblogging on two fronts: 1) refining the online personal/social networking experience, and 2) using the status update for enhancing the personal/social experience in the offline world, and considering how to leverage that online/offline split. We offer results from two different methods of study and target groups—one co-workers in an academic setting, the other followers on Twitter—to consider how microblogging can become more than just a communication medium if it facilitates these types of reflective practice.