Author

Sally Ahn

Bio: Sally Ahn is an academic researcher from University of California, Berkeley. The author has contributed to research in topics: Visual learning & Focus (computing). The author has an h-index of 3 and has co-authored 3 publications receiving 163 citations.

Papers
Proceedings ArticleDOI
05 May 2012
TL;DR: Presents MixT, which automatically generates mixed media tutorials combining the strengths of static and video tutorials, and grounds the design in a formative study showing that mixed-media tutorials outperform both formats.
Abstract: As software interfaces become more complicated, users rely on tutorials to learn, creating an increasing demand for effective tutorials. Existing tutorials, however, are limited in their presentation: Static step-by-step tutorials are easy to scan but hard to create and don't always give all of the necessary information for how to accomplish a step. In contrast, video tutorials provide very detailed information and are easy to create, but they are hard to scan as the video-player timeline does not give an overview of the entire task. We present MixT, which automatically generates mixed media tutorials that combine the strengths of these tutorial types. MixT tutorials include step-by-step text descriptions and images that are easy to scan and short videos for each step that provide additional context and detail as needed. We ground our design in a formative study that shows that mixed-media tutorials outperform both static and video tutorials.

108 citations

Proceedings ArticleDOI
07 Oct 2012
TL;DR: Hypothesizes that a mixed tutorial with static instructions and per-step videos can combine the benefits of both formats, and presents MixT, a system that automatically generates step-by-step mixed media tutorials from user demonstrations.
Abstract: Users of complex software applications often learn concepts and skills through step-by-step tutorials. Today, these tutorials are published in two dominant forms: static tutorials composed of images and text that are easy to scan, but cannot effectively describe dynamic interactions; and video tutorials that show all manipulations in detail, but are hard to navigate. We hypothesize that a mixed tutorial with static instructions and per-step videos can combine the benefits of both formats. We describe a comparative study of static, video, and mixed image manipulation tutorials with 12 participants and distill design guidelines for mixed tutorials. We present MixT, a system that automatically generates step-by-step mixed media tutorials from user demonstrations. MixT segments screencapture video into steps using logs of application commands and input events, applies video compositing techniques to focus on salient information, and highlights interactions through mouse trails. Informal evaluation suggests that automatically generated mixed media tutorials were as effective in helping users complete tasks as tutorials that were created manually.
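The segmentation step lends itself to a short illustration. The Python below is a minimal sketch, not MixT's implementation: it assumes a command log of (timestamp, command) pairs and cuts the screencast timeline into one clip per logged command; the log format and padding value are assumptions.

```python
# Sketch: split a screencast into per-step clips using a command log.
# The (timestamp, command) log format and the padding are assumptions,
# not MixT's actual data model.

def segment_into_steps(command_log, video_duration, pad=0.5):
    """command_log: list of (timestamp_sec, command_name), sorted by time.
    Returns one (start, end, command) clip per logged command."""
    clips = []
    for i, (t, command) in enumerate(command_log):
        start = max(0.0, t - pad)  # small lead-in for context
        # Each clip runs until the next logged command (or the video end).
        next_t = command_log[i + 1][0] if i + 1 < len(command_log) else video_duration
        end = min(video_duration, next_t)
        clips.append((start, end, command))
    return clips

log = [(3.2, "marquee-select"), (11.8, "feather"), (25.0, "fill")]
for start, end, cmd in segment_into_steps(log, video_duration=40.0):
    print(f"{cmd}: {start:.1f}s - {end:.1f}s")
```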

93 citations

Proceedings ArticleDOI
10 Mar 2010
TL;DR: In producing twenty-seven illustrations, the authors identified the topics students found most difficult in their university's introductory computer science courses and followed a step-by-step process of design, redesign, and revision to generate the illustrations.
Abstract: Computer Science Illustrated is an endeavor to help visual learners comprehend computer science topics through a series of resolution-independent illustrations, which are made available online for use as handouts in class and posters in the computer labs. These illustrations are designed to present concepts as engaging and memorable visual metaphors combined with concise explanations or short narratives, intended to maintain the students' interest and facilitate retention. An additional goal of the project is to make learning the concepts an entertaining experience through the use of colorful and whimsical characters in the illustrations. In producing our twenty-seven illustrations, we determined which topics were most difficult for students to understand in our university's introductory computer science courses and followed a step-by-step process of design, redesign, and revision to generate our illustrations. We also assessed the effectiveness of our creations, using both subjective and objective measures.

5 citations


Cited by
Proceedings ArticleDOI
26 Apr 2014
TL;DR: Introduces a novel crowdsourcing workflow that extracts step-by-step structure from an existing video, including step times, descriptions, and before/after images, along with the Find-Verify-Expand design pattern for temporal and visual annotation.
Abstract: Millions of learners today use how-to videos to master new skills in a variety of domains. But browsing such videos is often tedious and inefficient because video player interfaces are not optimized for the unique step-by-step structure of such videos. This research aims to improve the learning experience of existing how-to videos with step-by-step annotations. We first performed a formative study to verify that annotations are actually useful to learners. We created ToolScape, an interactive video player that displays step descriptions and intermediate result thumbnails in the video timeline. Learners in our study performed better and gained more self-efficacy using ToolScape versus a traditional video player. To add the needed step annotations to existing how-to videos at scale, we introduce a novel crowdsourcing workflow. It extracts step-by-step structure from an existing video, including step times, descriptions, and before and after images. We introduce the Find-Verify-Expand design pattern for temporal and visual annotation, which applies clustering, text processing, and visual analysis algorithms to merge crowd output. The workflow does not rely on domain-specific customization, works on top of existing videos, and recruits untrained crowd workers. We evaluated the workflow with Mechanical Turk, using 75 cooking, makeup, and Photoshop videos on YouTube. Results show that our workflow can extract steps with a quality comparable to that of trained annotators across all three domains with 77% precision and 81% recall.
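The merging of crowd output can be illustrated concretely. The Python below is a minimal sketch of one plausible piece of such a pipeline, not the authors' implementation: clustering crowd-submitted step timestamps and keeping only clusters with enough agreement; the gap and vote thresholds are assumptions.

```python
# Sketch: merge crowd-submitted step timestamps by simple 1-D clustering.
# The `gap` and `min_votes` parameters are illustrative assumptions.

def merge_step_times(times, gap=5.0, min_votes=3):
    """times: flat list of timestamps (seconds) submitted by workers.
    Groups timestamps closer than `gap` seconds and keeps groups that
    at least `min_votes` workers agree on, returning one time per step."""
    clusters, current = [], []
    for t in sorted(times):
        if current and t - current[-1] > gap:
            clusters.append(current)
            current = []
        current.append(t)
    if current:
        clusters.append(current)
    # Represent each well-supported cluster by its median timestamp.
    return [sorted(c)[len(c) // 2] for c in clusters if len(c) >= min_votes]

votes = [12.1, 12.9, 13.4, 30.0, 58.2, 59.0, 59.9, 61.1]
print(merge_step_times(votes))  # -> [12.9, 59.9]; the lone 30.0 vote is dropped
```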

109 citations

Proceedings ArticleDOI
27 Apr 2013
TL;DR: A user study found that users perform significantly better using the FollowUs system with a library of multiple demonstrations in comparison to its equivalent baseline system with only the original authored content.
Abstract: Web-based tutorials are a popular help resource for learning how to perform unfamiliar tasks in complex software. However, in their current form, web tutorials are isolated from the applications that they support. In this paper we present FollowUs, a web-tutorial system that integrates a fully-featured application into a web-based tutorial. This novel architecture enables community enhanced tutorials, which continuously improve as more users work with them. FollowUs captures video demonstrations of users as they perform a tutorial. Subsequent users can use the original tutorial, or choose from a library of captured community demonstrations of each tutorial step. We conducted a user study to test the benefits of making multiple demonstrations available to users, and found that users perform significantly better using our system with a library of multiple demonstrations in comparison to its equivalent baseline system with only the original authored content.
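The community-enhanced tutorial architecture can be sketched as a simple data model. The Python below is an illustrative sketch, not FollowUs's schema: each tutorial step keeps the original authored instruction alongside a growing library of captured demonstrations for subsequent users; all field names and the ranking heuristic are assumptions.

```python
# Sketch: a minimal data model for community-enhanced tutorials.
# Field names and the vote-based ranking are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Demonstration:
    author: str
    video_url: str
    helpful_votes: int = 0

@dataclass
class TutorialStep:
    instruction: str  # the original authored content
    demos: list[Demonstration] = field(default_factory=list)

    def add_demo(self, demo: Demonstration):
        """Captured demonstrations accumulate as more users work the step."""
        self.demos.append(demo)

    def best_demos(self, n=3):
        """Surface the most-endorsed community demonstrations first."""
        return sorted(self.demos, key=lambda d: d.helpful_votes, reverse=True)[:n]

step = TutorialStep("Apply a Gaussian blur to the background layer.")
step.add_demo(Demonstration("user42", "https://example.com/demo42.mp4", helpful_votes=5))
print(step.best_demos())
```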

89 citations

Proceedings ArticleDOI
28 Feb 2015
TL;DR: This work introduces learnersourcing, an approach in which intrinsically motivated learners contribute to a human computation workflow as they naturally go about learning from the videos.
Abstract: Websites like YouTube host millions of how-to videos, but their interfaces are not optimized for learning. Previous research suggests that people learn more from how-to videos when the videos are accompanied by outlines showing individual steps and labels for groups of steps (subgoals). We envision an alternative video player where the steps and subgoals are displayed alongside the video. To generate this information for existing videos, we introduce learnersourcing, an approach in which intrinsically motivated learners contribute to a human computation workflow as they naturally go about learning from the videos. To demonstrate this method, we deployed a live website with a workflow for constructing subgoal labels implemented on a set of introductory web programming videos. For the four videos with the highest participation, we found that a majority of learner-generated subgoals were comparable in quality to expert-generated ones. Learners commented that the system helped them grasp the material, suggesting that our workflow did not detract from the learning experience.

82 citations

Proceedings ArticleDOI
08 Oct 2013
TL;DR: Presents an interactive drawing tool that provides automated guidance over model photographs to help people practice traditional drawing-by-observation techniques; user studies show that automatically extracted construction lines can help users draw more accurately.
Abstract: We present an interactive drawing tool that provides automated guidance over model photographs to help people practice traditional drawing-by-observation techniques. The drawing literature describes a number of techniques to support this task and help people become aware of the shapes in a scene and their relationships. We compile these techniques and derive a set of construction lines that we automatically extract from a model photograph. We then display these lines over the model to guide its manual reproduction by the user on the drawing canvas. Finally, we use shape-matching to register the user's sketch with the model guides. We use this registration to provide corrective feedback to the user. Our user studies show that automatically extracted construction lines can help users draw more accurately. Furthermore, users report that guidance and corrective feedback help them better understand how to draw.
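The registration step can be illustrated with a short sketch. The paper uses shape-matching; the Python below substitutes a simpler rigid Procrustes (similarity) alignment as an illustrative stand-in, assuming corresponding 2-D point samples from the sketch and the model guides are already available.

```python
# Sketch: register a user's sketch to model guides with a similarity
# transform (Procrustes). A stand-in for the paper's shape-matching,
# assuming point correspondences are already known.

import numpy as np

def procrustes_align(sketch_pts, model_pts):
    """Finds scale s, rotation R, translation t minimizing
    ||s * R @ sketch_i + t - model_i|| over corresponding 2-D points."""
    mu_s, mu_m = sketch_pts.mean(0), model_pts.mean(0)
    S, M = sketch_pts - mu_s, model_pts - mu_m   # center both point sets
    U, sigma, Vt = np.linalg.svd(S.T @ M)        # cross-covariance SVD
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:                     # avoid reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
        sigma = sigma.copy()
        sigma[-1] *= -1
    s = sigma.sum() / (S ** 2).sum()
    t = mu_m - s * R @ mu_s
    return s, R, t

sketch = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
model = 2.0 * sketch + np.array([3.0, 1.0])      # scaled, shifted copy
s, R, t = procrustes_align(sketch, model)
aligned = s * sketch @ R.T + t
print(np.allclose(aligned, model))               # True: transform recovered
```

Once the sketch is registered, the residual distance between each aligned sketch point and its model guide is what a tool like this could use to drive corrective feedback.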

68 citations