
Showing papers by "Barbara Tversky published in 2006"


Journal ArticleDOI
TL;DR: Arrows can play a powerful role in augmenting structural diagrams to convey dynamic, causal, or functional information in mechanical systems.

168 citations


Journal ArticleDOI
TL;DR: An analysis of the movements in each frame revealed that event segments corresponded to bursts of change in movement features, with greater bursts for coarse than for fine units.
Abstract: Everyday events, such as making a bed, can be segmented hierarchically, with the coarse level characterized by changes in the actor’s goals and the fine level by subgoals (Zacks, Tversky, & Iyer, 2001). Does hierarchical event perception depend on knowledge of actors’ intentions? This question was addressed by asking participants to segment films of abstract, schematic events. Films were novel or familiarized, viewed forward or backward, and simultaneously described or not. The participants interpreted familiar films as more intentional than novel films and forward films as more intentional than backward films. Regardless of experience and film direction, however, the participants identified similar event boundaries and organized them hierarchically. An analysis of the movements in each frame revealed that event segments corresponded to bursts of change in movement features, with greater bursts for coarse than for fine units. Perceiving event structure appears to enable event schemas, rather than resulting from them.
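
The movement analysis described above, in which event boundaries coincide with bursts of change in movement features, can be illustrated with a small sketch. The version below is only a hypothetical reconstruction of the idea: the feature array, the z-score threshold, and the random-walk data are assumptions for illustration, not the analysis pipeline used in the study.

    import numpy as np

    def movement_change(features):
        # Frame-to-frame change magnitude for a (frames x features) array,
        # e.g. positions or velocities of tracked points in each film frame.
        return np.linalg.norm(np.diff(features, axis=0), axis=1)

    def burst_boundaries(change, threshold_sd=1.0):
        # Flag frames where change exceeds the mean by threshold_sd standard
        # deviations; these are candidate event boundaries ("bursts").
        z = (change - change.mean()) / change.std()
        return np.flatnonzero(z > threshold_sd) + 1  # +1 because diff shifts indices

    # Toy usage: a random-walk "movement" trace with two injected bursts of change.
    rng = np.random.default_rng(0)
    motion = np.cumsum(rng.normal(0, 0.1, size=(300, 4)), axis=0)
    motion[100:105] += 3.0
    motion[200:205] += 3.0
    print(burst_boundaries(movement_change(motion)))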

135 citations


Journal ArticleDOI
TL;DR: People encode goal-directed behaviors by segmenting them into discrete actions, organized as goal-subgoal hierarchies, which facilitates observational learning by organizing perceived actions into a representation that can serve as an action plan.
Abstract: People encode goal-directed behaviors, such as assembling an object, by segmenting them into discrete actions, organized as goal-subgoal hierarchies. Does hierarchical encoding contribute to observational learning? Participants in 3 experiments segmented an object assembly task into coarse and fine units of action and later performed it themselves. Hierarchical encoding, measured by segmentation patterns, correlated with more accurate and more hierarchically structured performance of the later assembly task. Furthermore, hierarchical encoding increased when participants (a) segmented coarse units first, (b) explicitly looked for hierarchical structure, and (c) described actions while segmenting them. Improving hierarchical encoding always led to improvements in learning, as well as a surprising shift toward encoding and executing actions from the actor's spatial perspective instead of the participants' own. Hierarchical encoding facilitates observational learning by organizing perceived actions into a representation that can serve as an action plan.
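
The phrase "hierarchical encoding, measured by segmentation patterns" can be made concrete with a toy alignment measure: the share of a participant's coarse-unit boundaries that fall near one of their fine-unit boundaries. This is an assumed operationalization with made-up timestamps, offered only as an illustration; the paper's actual measure may differ.

    def hierarchical_alignment(coarse, fine, tolerance=1.0):
        # Fraction of coarse boundaries (seconds) lying within `tolerance`
        # seconds of some fine boundary; higher values suggest coarse units
        # are composed of groups of fine units.
        if not coarse:
            return 0.0
        hits = sum(any(abs(c - f) <= tolerance for f in fine) for c in coarse)
        return hits / len(coarse)

    # Hypothetical boundary timestamps from one observer of an assembly video.
    coarse_bounds = [12.0, 31.5, 58.0]
    fine_bounds = [5.0, 12.5, 20.0, 31.0, 40.0, 49.5, 57.5]
    print(hierarchical_alignment(coarse_bounds, fine_bounds))  # prints 1.0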

40 citations


Journal ArticleDOI
TL;DR: The authors propose and test the hypothesis that perspective taking promotes encoding a hierarchical representation of an actor's goals and subgoals, a key process for observational learning.
Abstract: People often learn actions by watching others. The authors propose and test the hypothesis that perspective taking promotes encoding a hierarchical representation of an actor's goals and subgoals, a key process for observational learning. Observers segmented videos of an object assembly task into coarse and fine action units. They described what happened in each unit from either the actor's, their own, or another observer's perspective and later performed the assembly task themselves. Participants who described the task from the actor's perspective encoded actions more hierarchically during observation and learned the task better.

37 citations


Journal ArticleDOI
TL;DR: This paper investigated the roles of gesture and speech in explanations, both for communicators and recipients, and found that communicators using gestures alone learned assembly better, making fewer assembly errors than those communicating via speech with gestures.

31 citations


01 Jan 2006
TL;DR: A collaboration of cognitive science and computer science uncovers and instantiates cognitive design principles to generate visualizations automatically, combining psychological research on mental representations with computer graphics techniques.
Abstract: Visualizations are everywhere, on signs and billboards, in newspapers and texts, informative or instructive. Designing effective visualizations is a challenge. We have developed a collaboration of cognitive and computer science for uncovering and instantiating cognitive design principles to generate visualizations automatically. The program is iterative, using psychological research to uncover mental representations of the content to be communicated as well as interpretable visual devices for conveying that content, and using computer graphics techniques and knowledge to produce the visualizations. We illustrate the program by describing projects developing effective route maps and assembly instructions.
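
As one concrete, if simplified, example of what instantiating a cognitive design principle for route maps can look like: people's route sketches tend to keep the sequence and direction of turns while distorting segment lengths, so an automatic renderer can stretch very short segments to keep them legible. The snippet below is a minimal sketch of that single assumed principle on made-up waypoints; it is not the system described in the abstract.

    import math

    # Hypothetical route: (x, y) waypoints in arbitrary map units.
    route = [(0, 0), (0.05, 0), (0.05, 4), (6, 4), (6, 4.1)]

    def schematize(points, min_len=1.0):
        # Keep the order and direction of turns, but stretch very short
        # segments to a minimum rendered length so critical turns stay
        # visible, trading metric accuracy for readability.
        out = [points[0]]
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            dx, dy = x1 - x0, y1 - y0
            length = math.hypot(dx, dy)
            scale = max(1.0, min_len / length) if length > 0 else 1.0
            px, py = out[-1]
            out.append((px + dx * scale, py + dy * scale))
        return out

    for x, y in schematize(route):
        print(f"({x:.2f}, {y:.2f})")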

14 citations


01 Jan 2006
TL;DR: Kessell and Tversky investigated the use of gestures and diagrams in solving and explaining spatial insight problems with or without paper, finding that diagrams suppressed enactment gestures only, while gestures carried additional information or served a function not fulfilled by the diagram.
Abstract: Using Diagrams and Gestures to Think and Talk about Insight Problems. Angela M. Kessell (akessell@stanford.edu) and Barbara Tversky (bt@psych.stanford.edu), Department of Psychology, Stanford University, Bldg. 420 Jordan Hall, Stanford, CA 94305 USA.

Keywords: gesture; diagram; problem solving; spatial cognition

Introduction. Gesture occurs not only in the context of speech but also in the absence of speech (e.g., McNeill, 1992). Prior work suggests gestures may serve like an embodied diagram, offloading working memory (Kessell & Tversky, 2005). Here, participants were videotaped while solving and explaining the solutions to spatial insight problems with or without paper. The expectation was that some problems would elicit gesture or diagramming in the service of solving the problems. A comparison of the use of gestures and diagrams should provide insight into similarities in function. In particular, the present study explores the following questions: Do people use gestures versus diagrams differently for thinking versus communicating? When using a diagram to communicate, where and how do people gesture?

Method. Forty-four Stanford undergraduates (21 females) solved six spatial insight problems. Pen and paper were provided to half of the participants. Participants were videotaped both while silently solving the problems and while explaining the solutions to the camera.

Results. All deictic and representational gestures were counted. Beat gestures were ignored. During explanation, all problems elicited gestures from a majority of participants. During solution, however, only two problems elicited both gestures (No Paper condition) and diagrams (Paper Available condition) from a majority of participants. These two problems had high spatial working memory demands. While solving these problems, participants used gestures and diagrams similarly. For one of the high spatial working memory problems, the Six Glasses problem, a detailed analysis of the conceptual content of the gestures produced was carried out. Each gesture was coded as one of two types. Scene creation gestures conveyed the spatial positions and properties of objects in the problem (e.g., point to the position of one of the glasses). Enactment gestures mimed solution actions (e.g., simulate pouring the water out of one glass into another). As expected, the No Paper group produced more gestures overall than the Paper Available group. There was a significant interaction of type of gesture by condition (F(1, 42) = 4.97, p = .03), with more enactment gestures being produced by the No Paper group (Figure 1). Surprisingly, there was no difference between groups in the number of scene creation gestures produced. Nineteen of the 22 participants in the Paper Available group produced a diagram. During solution explanation, enactment gestures were more frequently produced off than on the diagram.

[Figure 1: Mean number of gestures (scene creation vs. enactment) produced while explaining the solution to the Six Glasses problem, in the No Paper and Paper Available conditions.]

Discussion. The parallel use of gestures and diagrams primarily in solving problems with high spatial working memory demands suggests that both serve to offload working memory. During the explanation stage, however, all problems reliably elicited gestures, even when a diagram was available to help illustrate the solution, suggesting that the gestures carried additional information or served a function not fulfilled by the diagram. Indeed, having a diagram available during explanation did not reduce scene creation gestures. Rather, the diagrams appeared to suppress enactment gestures only. In this way, diagram use changed the conceptual content of the gestures produced. Finally, although an equal number of scene creation gestures were produced on the diagram as off, enactment gestures were more frequently produced off the diagram.

References. Kessell, A. M., & Tversky, B. (2005). Gestures and diagrams for thinking and communicating. Proceedings of the 27th Annual Meeting of the Cognitive Science Society. Stresa, Italy. McNeill, D. (1992). Hand and mind: What gestures reveal about thought. Chicago: University of Chicago Press.
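
The reported gesture type by condition interaction appears to come from a 2 x 2 mixed design (gesture type within subjects, paper condition between subjects, 22 participants per group). In such a design the interaction is equivalent to comparing each participant's enactment-minus-scene-creation difference score across the two groups with an independent-samples t test (F = t^2). The sketch below illustrates that check on simulated counts; the numbers are placeholders, not the study's data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Hypothetical per-participant gesture counts (22 per group), not the study's data.
    # Column 0: scene creation gestures, column 1: enactment gestures.
    no_paper = np.column_stack([rng.poisson(3, 22), rng.poisson(5, 22)])
    paper = np.column_stack([rng.poisson(3, 22), rng.poisson(2, 22)])

    # For a 2 x 2 mixed design, the type-by-condition interaction reduces to a
    # between-groups t test on within-subject difference scores (F = t**2).
    diff_no_paper = no_paper[:, 1] - no_paper[:, 0]
    diff_paper = paper[:, 1] - paper[:, 0]
    t, p = stats.ttest_ind(diff_no_paper, diff_paper)
    print(f"interaction: F(1, 42) = {t**2:.2f}, p = {p:.3f}")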

7 citations


20 Dec 2006

2 citations


Reference EntryDOI
15 Jan 2006
TL;DR: In this article, the authors describe how people think about space in terms of the elements in space and their spatial relations, not in metric terms, and how different spaces are conceptualized differently: the space of the body, the space around the body, the space of navigation, and the metaphoric or miniature space of graphics.
Abstract: People think about space in terms of the elements in space and their spatial relations, not in metric terms. Different spaces are conceptualized differently: the space of the body, the space around the body, the space of navigation, and the metaphoric or miniature space of graphics. Keywords: cognitive map; perspective; reference frame; navigation; spatial thinking

1 citation