
Showing papers by "Michael S. Bernstein published in 2008"


Journal ArticleDOI
TL;DR: Roles in the typical information scrap lifecycle are identified, and a set of unmet design needs in current PIM tools is suggested: lightweight entry, unconstrained content, flexible use and adaptability, visibility, and mobility.
Abstract: In this article we investigate information scraps—personal information whose content has been scribbled on Post-it notes, scrawled on the corners of sheets of paper, stuck in our pockets, sent in email messages to ourselves, and stashed in miscellaneous digital text files. Information scraps encode information ranging from ideas and sketches to notes, reminders, shipment tracking numbers, driving directions, and even poetry. Although information scraps are ubiquitous, we still have much to learn about these loose forms of information practice. Why do we keep information scraps outside of our traditional PIM applications? What role do information scraps play in our overall information practice? How might PIM applications be better designed to accommodate and support information scraps' creation, manipulation, and retrieval? We pursued these questions by studying the information scrap practices of 27 knowledge workers at five organizations. Our observations shed light on information scraps' content, form, media, and location. From this data, we elaborate on the typical information scrap lifecycle and identify the common roles that information scraps play: temporary storage, archiving, work-in-progress, reminding, and management of unusual data. These roles suggest a set of unmet design needs in current PIM tools: lightweight entry, unconstrained content, flexible use and adaptability, visibility, and mobility.

136 citations


Journal ArticleDOI
TL;DR: Friendsourcing, a form of crowdsourcing aimed at collecting accurate information available only to a small, socially-connected group of individuals, is introduced; the approach is to design socially enjoyable interactions that produce the desired information as a side effect.
Abstract: When information is known only to friends in a social network, traditional crowdsourcing mechanisms struggle to motivate a large enough user population and to ensure accuracy of the collected information. We thus introduce friendsourcing, a form of crowdsourcing aimed at collecting accurate information available only to a small, socially-connected group of individuals. Our approach to friendsourcing is to design socially enjoyable interactions that produce the desired information as a side effect. We focus our analysis around Collabio, a novel social tagging game that we developed to encourage friends to tag one another within an online social network. Collabio encourages friends, family, and colleagues to generate useful information about each other. We describe the design space of incentives in social tagging games and evaluate our choices by a combination of usage log analysis and survey data. Data acquired via Collabio is typically accurate and augments tags that could have been found on Facebook or the Web. To complete the arc from data collection to application, we produce a trio of prototype applications to demonstrate how Collabio tags could be utilized: an aggregate tag cloud visualization, a personalized RSS feed, and a question and answer system. The social data powering these applications enables them to address needs previously difficult to support, such as question answering for topics comprehensible only to a few of a user's friends.
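
A minimal sketch of the aggregation step behind a Collabio-style tag cloud, assuming a toy weighting scheme in which a tag's weight is the number of distinct friends who applied it; the function and field names are illustrative, not the system's implementation.

```python
# Hypothetical sketch: aggregate friend-contributed tags about one person
# into weighted scores suitable for a tag cloud. Agreement across distinct
# taggers drives the weight, so repetition by one friend does not inflate it.
from collections import Counter

def aggregate_tags(tag_events):
    """tag_events: iterable of (tagger, tag) pairs about a single person."""
    taggers_per_tag = {}
    for tagger, tag in tag_events:
        taggers_per_tag.setdefault(tag.lower(), set()).add(tagger)
    # Weight = number of distinct friends who applied the tag.
    return Counter({tag: len(who) for tag, who in taggers_per_tag.items()})

events = [("alice", "ultimate frisbee"), ("bob", "ultimate frisbee"),
          ("bob", "HCI"), ("carol", "hci")]
print(aggregate_tags(events).most_common())
# [('ultimate frisbee', 2), ('hci', 2)]
```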

82 citations


Proceedings ArticleDOI
19 Oct 2008
TL;DR: Inky is an example of a new kind of hybrid between a command line and a GUI interface, which aims to capture the efficiency benefits of typed commands while mitigating their usability problems.
Abstract: We present Inky, a command line for shortcut access to common web tasks. Inky aims to capture the efficiency benefits of typed commands while mitigating their usability problems. Inky commands have little or no new syntax to learn, and the system displays rich visual feedback while the user is typing, including missing parameters and contextual information automatically clipped from the target web site. Inky is an example of a new kind of hybrid between a command line and a GUI interface. We describe the design and implementation of two prototypes of this idea, and report the results of a preliminary user study.
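
A small sketch of the keyword-command idea, assuming a made-up two-command registry; the real system matches far richer command templates and clips contextual feedback from target web sites. Unfilled parameter slots are surfaced as feedback while the user types, mirroring the "missing parameters" behavior described above.

```python
# Toy interpreter for Inky-style keyword commands (illustrative only).
# A known keyword selects the command; remaining tokens fill its slots
# in order, and any unfilled slots are reported back to the user.
COMMANDS = {
    "email": ["recipient", "subject"],
    "reserve": ["restaurant", "time"],
}

def interpret(line):
    tokens = line.split()
    verb = next((t for t in tokens if t in COMMANDS), None)
    if verb is None:
        return "no matching command"
    args = [t for t in tokens if t != verb]
    slots = dict(zip(COMMANDS[verb], args))
    missing = [s for s in COMMANDS[verb] if s not in slots]
    feedback = f"{verb}: {slots}"
    if missing:
        feedback += f" (missing: {', '.join(missing)})"
    return feedback

print(interpret("email bob"))
# email: {'recipient': 'bob'} (missing: subject)
```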

35 citations


Proceedings ArticleDOI
19 Oct 2008
TL;DR: This work proposes a fuzzy association model in which windows are related to one another by varying degrees, and introduces the WindowRank algorithm and its use in determining window association.
Abstract: Window management research has aimed to leverage users' tasks to organize the growing number of open windows in a useful manner. This research has largely assumed task classifications to be binary -- either a window is in a task, or not -- and context-independent. We suggest that the continual evolution of tasks can invalidate this approach and instead propose a fuzzy association model in which windows are related to one another by varying degrees. Task groupings are an emergent property of our approach. To support the association model, we introduce the WindowRank algorithm and its use in determining window association. We then describe Taskpose, a prototype window switch visualization embodying these ideas, and report on a week-long user study of the system.
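
The abstract names WindowRank without giving its details, so the following is only a plausible sketch of the fuzzy-association idea: association between windows is a matter of degree, derived here from how often the user switches between them, with no hard task boundaries.

```python
# Illustrative fuzzy window association from a focus-switch trace.
# Pairs the user switches between often get scores near 1.0; this is
# an assumption-based stand-in, not the published WindowRank algorithm.
from collections import defaultdict

def association_scores(switches):
    """switches: ordered list of window ids as the user focuses them."""
    counts = defaultdict(int)
    for a, b in zip(switches, switches[1:]):
        if a != b:
            counts[frozenset((a, b))] += 1
    peak = max(counts.values(), default=1)
    return {tuple(sorted(pair)): n / peak for pair, n in counts.items()}

trace = ["editor", "browser", "editor", "mail", "browser", "editor"]
print(association_scores(trace))
# {('browser', 'editor'): 1.0, ('editor', 'mail'): 0.33..., ('browser', 'mail'): 0.33...}
```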

33 citations


01 Apr 2008
TL;DR: It is argued that improved mechanisms for knowledge acquisition and access on the semantic web (SW) will be necessary before it is adopted widely by end-users, and an investigation is proposed into improved languages for knowledge exchange, better UI mechanisms for interaction, and potential help from user modeling to enable accurate, efficient SW knowledge modeling for everyone.
Abstract: In this position paper, we argue that improved mechanisms for knowledge acquisition and access on the semantic web (SW) will be necessary before it is adopted widely by end-users. In particular, we propose an investigation surrounding improved languages for knowledge exchange, better UI mechanisms for interaction, and potential help from user modeling to enable accurate, efficient SW knowledge modeling for everyone.

5 citations


10 Feb 2008
TL;DR: In this article, a case study of an artifact design and evaluation process is presented, focusing on how "right thinking" about design methods may at times lead to sub-optimal results and on where design methodology may need to be tuned to be more sensitive to the domain of practice, in this case software evaluation in personal information management.
Abstract: This paper is a case study of an artifact design and evaluation process; it is a reflection on how "right thinking" about design methods may at times lead to sub-optimal results. Our goal has been to assess our decision-making process throughout the design and evaluation stages for a software prototype, in order to consider where design methodology may need to be tuned to be more sensitive to the domain of practice, in this case software evaluation in personal information management. In particular, we reflect on design methods around (1) scale of prototype, (2) prototyping and design process, (3) study design, and (4) study population.

5 citations


01 Oct 2008
TL;DR: AtomsMasher (AM) is a new framework that extends data mashups into the realm of context-aware reactive behaviors and greatly simplifies the process of creating such automation in a way that is flexible, predictable, scalable, and within the reach of everyday Web programmers.
Abstract: The rise of "Web 2.0" has seen an explosion of web sites for the social sharing of personal information. To enable users to make valuable use of the rich yet fragmented sea of public, social, and personal information, data mashups emerged to provide a means for combining and filtering such information into coherent feeds and visualizations. In this paper we present AtomsMasher (AM), a new framework that extends data mashups into the realm of context-aware reactive behaviors. Reactive scripts in AM can be made to trigger automatically in response to changes in the system's world model, which is derived from multiple web-based data feeds. By exposing a simple state-model abstraction, query-language abstractions over data derived from heterogeneous web feeds, and a simulation-based interactive script-debugging environment, AM greatly simplifies the process of creating such automation in a way that is flexible, predictable, scalable, and within the reach of everyday Web programmers.
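
A hedged sketch of the reactive-script idea: a rule fires when a condition over the world model (here just a dict refreshed from feeds) transitions from false to true. The field names, decorator, and polling loop are illustrative assumptions; AM's actual state model and query language are richer.

```python
# Minimal reactive rules over a feed-derived world model (illustrative).
rules = []

def when(condition):
    """Register an action to fire when `condition(world)` becomes true."""
    def register(action):
        rules.append((condition, action, {"was_true": False}))
        return action
    return register

@when(lambda world: world.get("alice_location") == "office"
                    and not world.get("alice_busy", True))
def suggest_coffee(world):
    print("Alice is free at the office -- suggest coffee?")

def update(world):
    # Fire each rule only on a false -> true transition, so a standing
    # condition does not re-trigger the action on every feed refresh.
    for condition, action, state in rules:
        now = bool(condition(world))
        if now and not state["was_true"]:
            action(world)
        state["was_true"] = now

update({"alice_location": "cafe", "alice_busy": False})    # no trigger
update({"alice_location": "office", "alice_busy": False})  # fires once
```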

2 citations


Dissertation
01 Jan 2008
TL;DR: This thesis investigates information scraps – personal information whose content has been scribbled on Post-it notes, scrawled on the corners of sheets of paper, stuck in the author's pockets, sent in e-mail messages to himself, and stashed into miscellaneous digital text files – and designs and builds two research systems for information scrap management.
Abstract: In this thesis I investigate information scraps – personal information whose content has been scribbled on Post-it notes, scrawled on the corners of sheets of paper, stuck in our pockets, sent in e-mail messages to ourselves, and stashed into miscellaneous digital text files. Information scraps encode information ranging from ideas and sketches to notes, reminders, shipment tracking numbers, driving directions, and even poetry. I proceed by performing an in-depth ethnographic investigation of the nature and use of information scraps, and by designing and building two research systems for information scrap management. The first system, Jourknow, lowers the capture barrier for unstructured notes and structured information such as calendar items and to-dos, captures contextual information surrounding note creation such as location, documents viewed, and people corresponded with, and manages uncommon user-generated personal information such as restaurant reviews or this week's shopping list. The follow-up system, Pinky, further explores the lightweight capture space by providing a command line interface that is tolerant to re-ordering, along with GUI affordances for quick and accurate entry. Reflecting on these tools' successes and failures, I characterize the design process challenges inherent in designing and building information scrap tools.
Thesis supervisor: David R. Karger, Professor of Electrical Engineering and Computer Science
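
As one concrete reading of Jourknow's contextual capture, the sketch below stores each scrap with a snapshot of its creation context. The field set (location, documents viewed, correspondents) follows the summary above; the data layout itself is an assumption.

```python
# Hypothetical record for a captured information scrap plus its context.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class InformationScrap:
    text: str
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    location: str | None = None                      # e.g. inferred from Wi-Fi
    documents_viewed: list[str] = field(default_factory=list)
    correspondents: list[str] = field(default_factory=list)

scrap = InformationScrap(
    text="call travel agent re: shipment tracking number",
    location="office",
    documents_viewed=["itinerary.pdf"],
    correspondents=["agent@example.com"],
)
print(scrap.text, "|", scrap.location)
```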

2 citations


01 Mar 2008
TL;DR: This paper introduces AtomsMasher, an environment for creating reactive scripts that can draw upon widely heterogeneous information to automate common information-intensive tasks, and employs a mix of automatic and user-assisted approaches to build a common internal representation in RDF.
Abstract: This paper introduces AtomsMasher, an environment for creating reactive scripts that can draw upon widely heterogeneous information to automate common information-intensive tasks. AtomsMasher is enabled by the wealth of user-contributed personal, social, and contextual information that has arisen from Web 2.0 social networking, content sharing, and micro-blogging sites. Starting with existing web mashup tools and end-user automation, we describe new challenges in achieving reactive behaviours: deriving a consistent representation that can be used to predictably drive discrete action from a multitude of noisy, incomplete, and inconsistent data sources. Our solution employs a mix of automatic and user-assisted approaches to build a common internal representation in RDF, which is used to provide a simplified programming model that lets Web 2.0 programmers succinctly specify behaviours in terms of high-level relationships between entities and their current contextual state. We highlight the advantages and limitations of this architecture, and conclude with ongoing work towards making the system more predictable, understandable, and accessible to non-programmers.
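
A small sketch of the normalization step described above, using the real rdflib library: items from differently shaped feeds land in one RDF graph that scripts can query uniformly. The ex: vocabulary and property names are invented for illustration.

```python
# Normalize heterogeneous feed items into a common RDF representation.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/am/")
g = Graph()

def add_status(graph, node_id, person, text, source):
    node = EX[f"status/{node_id}"]
    graph.add((node, EX.author, Literal(person)))
    graph.add((node, EX.text, Literal(text)))
    graph.add((node, EX.source, Literal(source)))

# Two differently shaped inputs end up in the same representation:
add_status(g, 1, "alice", "at the library", "twitter")
add_status(g, 2, "bob", "heading home", "facebook")
print(g.serialize(format="turtle"))
```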

1 citation


01 Jan 2008
TL;DR: It is argued that improved mechanisms for knowledge acquisition on the semantic web (SW) will be necessary before it is adopted widely by end-users, and an investigation is proposed into improved languages for knowledge exchange, better UI mechanisms for interaction, and potential help from user modeling.
Abstract: In this position paper, we argue that improved mechanisms for knowledge acquisition on the semantic web (SW) will be necessary before it is adopted widely by end-users. In particular, we propose an investigation surrounding improved languages for knowledge exchange, better UI mechanisms for interaction, and potential help from user modeling to enable accurate, efficient SW knowledge modeling for everyone.