Topic

Computer animation

About: Computer animation is a research topic. Over its lifetime, 7,151 publications on this topic have been published, receiving 124,559 citations.


Papers
Journal ArticleDOI
Ken Shoemake
01 Jul 1985
TL;DR: A new kind of spline curve is presented, created on a sphere, suitable for smoothly in-betweening (i.e. interpolating) sequences of arbitrary rotations, without quirks found in earlier methods.
Abstract: Solid bodies roll and tumble through space. In computer animation, so do cameras. The rotations of these objects are best described using a four coordinate system, quaternions, as is shown in this paper. Of all quaternions, those on the unit sphere are most suitable for animation, but the question of how to construct curves on spheres has not been much explored. This paper gives one answer by presenting a new kind of spline curve, created on a sphere, suitable for smoothly in-betweening (i.e. interpolating) sequences of arbitrary rotations. Both theory and experiment show that the motion generated is smooth and natural, without quirks found in earlier methods.

2,006 citations
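The spherical spline described above is built from arc-wise interpolation between unit quaternions (slerp). As a minimal sketch of that building block, here is a small Python version; the function name, the (w, x, y, z) component order, and the near-parallel fallback threshold are illustrative choices, not details taken from the paper.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1, t in [0, 1]."""
    q0 = np.asarray(q0, dtype=float) / np.linalg.norm(q0)
    q1 = np.asarray(q1, dtype=float) / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    # q and -q represent the same rotation; flip one to take the shorter arc.
    if dot < 0.0:
        q1, dot = -q1, -dot
    # Nearly parallel quaternions: fall back to normalized linear interpolation.
    if dot > 0.9995:
        result = q0 + t * (q1 - q0)
        return result / np.linalg.norm(result)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))  # angle between the two quaternions
    w0 = np.sin((1.0 - t) * theta) / np.sin(theta)
    w1 = np.sin(t * theta) / np.sin(theta)
    return w0 * q0 + w1 * q1

# Example: halfway between the identity and a 90-degree rotation about z.
q_a = np.array([1.0, 0.0, 0.0, 0.0])                           # (w, x, y, z)
q_b = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
print(slerp(q_a, q_b, 0.5))
```

Flipping the sign of one quaternion when the dot product is negative keeps the interpolation on the shorter of the two possible arcs, which is one of the quirks of earlier methods the paper's construction avoids.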

Journal ArticleDOI
TL;DR: In two experiments with a computer animation of lightning, students learned better when verbal material was presented auditorily as speech rather than visually as text, and when visual and verbal materials were presented physically close to each other.
Abstract: Students viewed a computer animation depicting the process of lightning. In Experiment 1, they concurrently viewed on-screen text presented near the animation or far from the animation, or concurrently listened to a narration. In Experiment 2, they concurrently viewed on-screen text or listened to a narration, viewed on-screen text following or preceding the animation, or listened to a narration following or preceding the animation. Learning was measured by retention, transfer, and matching tests. Experiment 1 revealed a spatial-contiguity effect in which students learned better when visual and verbal materials were physically close. Both experiments revealed a modality effect in which students learned better when verbal input was presented auditorily as speech rather than visually as text. The results support 2 cognitive principles of multimedia learning. Technological advances have made possible the combination and coordination of verbal presentation modes (such as narration and on-screen text) with nonverbal presentation modes (such as graphics, video, animations, and environmental sounds) in just one device (the computer). These advances include multimedia environments, where students can be introduced to causal models of complex systems by the use of computer-generated animations (Park & Hopkins, 1993). However, despite its power to facilitate learning, multimedia has been developed on the basis of its technological capacity, and rarely is it used according to research-based principles (Kozma, 1991; Mayer, in press; Moore, Burton, & Myers, 1996). Instructional design of multimedia is still mostly based on the intuitive beliefs of designers rather than on empirical evidence (Park & Hannafin, 1994). The purpose of the present study is to contribute to multimedia learning theory by clarifying and testing two cognitive principles: the contiguity principle and the modality principle.

1,352 citations

Journal ArticleDOI
TL;DR: FaceWarehouse, a database of 3D facial expressions for visual computing applications, provides for every person a much richer matching collection of expressions than previous 3D facial databases, enabling depiction of most human facial actions.
Abstract: We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.

952 citations
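The bilinear model described above synthesizes a face mesh by contracting the rank-3 core tensor with an identity weight vector and an expression weight vector. Below is a rough Python sketch of that contraction; the tensor dimensions, random stand-in data, weight distributions, and function name are placeholders for illustration rather than the database's actual layout.

```python
import numpy as np

# Placeholder dimensions: vertices x identities x expressions.
NUM_VERTS, NUM_IDS, NUM_EXPRS = 5000, 150, 20
rng = np.random.default_rng(0)

# Stand-in for the assembled core tensor (real data would come from the
# registered face meshes with consistent topology).
core = rng.normal(size=(NUM_VERTS * 3, NUM_IDS, NUM_EXPRS))

def synthesize_face(core, w_id, w_exp):
    """Contract the core tensor with identity and expression weights
    (mode products along the identity and expression axes)."""
    per_expression = np.tensordot(core, w_id, axes=([1], [0]))        # (V*3, E)
    flat_mesh = np.tensordot(per_expression, w_exp, axes=([1], [0]))  # (V*3,)
    return flat_mesh.reshape(-1, 3)                                   # (V, 3) vertex positions

w_id = rng.dirichlet(np.ones(NUM_IDS))     # convex identity weights
w_exp = rng.dirichlet(np.ones(NUM_EXPRS))  # convex expression weights
vertices = synthesize_face(core, w_id, w_exp)
print(vertices.shape)  # (5000, 3)
```

Keeping the two weight vectors separate is what lets applications vary identity and expression independently, for example retargeting one person's expression weights onto another person's identity.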

Proceedings ArticleDOI
01 Aug 2001
TL;DR: The Behavior Expression Animation Toolkit (BEAT) lets animators input typed text that they wish to be spoken by an animated human figure and obtain as output appropriate, synchronized nonverbal behaviors and synthesized speech in a form that can be sent to a number of different animation systems.
Abstract: The Behavior Expression Animation Toolkit (BEAT) allows animators to input typed text that they wish to be spoken by an animated human figure, and to obtain as output appropriate and synchronized nonverbal behaviors and synthesized speech in a form that can be sent to a number of different animation systems. The nonverbal behaviors are assigned on the basis of actual linguistic and contextual analysis of the typed text, relying on rules derived from extensive research into human conversational behavior. The toolkit is extensible, so that new rules can be quickly added. It is designed to plug into larger systems that may also assign personality profiles, motion characteristics, scene constraints, or the animation styles of particular animators.

796 citations
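BEAT assigns nonverbal behaviors from linguistic and contextual analysis of the typed text using an extensible rule set. The toy Python sketch below only illustrates the general flavor of mapping text cues to word-synchronized behavior tags; the regular-expression rules and tag names are invented for this example and do not reproduce the toolkit's actual rules.

```python
import re

# Invented rules mapping surface cues in the text to behavior tags.
RULES = [
    (re.compile(r"\b(this|that|these|those|here|there)\b", re.I), "gesture:deictic"),
    (re.compile(r"\b(huge|tiny|tall|wide)\b", re.I),              "gesture:iconic"),
    (re.compile(r"\b(very|really|extremely)\b", re.I),            "head:nod"),
]

def annotate(text):
    """Attach zero or more behavior tags to each word of the typed input."""
    annotated = []
    for word in text.split():
        tags = [tag for pattern, tag in RULES if pattern.search(word)]
        annotated.append((word, tags))
    return annotated

for word, tags in annotate("Look at that really huge building"):
    print(f"{word:10s} {tags}")
```

An extensible design like this, where new rules can be appended without touching the rest of the pipeline, mirrors the extensibility the abstract emphasizes.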

Proceedings ArticleDOI
01 Aug 2001
TL;DR: An appropriately modified semi-Lagrangian method with a new approach to calculating fluid flow around objects is combined to efficiently solve the equations of motion for a liquid while retaining enough detail to obtain realistic looking behavior.
Abstract: We present a general method for modeling and animating liquids. The system is specifically designed for computer animation and handles viscous liquids as they move in a 3D environment and interact with graphics primitives such as parametric curves and moving polygons. We combine an appropriately modified semi-Lagrangian method with a new approach to calculating fluid flow around objects. This allows us to efficiently solve the equations of motion for a liquid while retaining enough detail to obtain realistic looking behavior. The object interaction mechanism is extended to provide control over the liquid's 3D motion. A high quality surface is obtained from the resulting velocity field using a novel adaptive technique for evolving an implicit surface.

780 citations
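The semi-Lagrangian step mentioned above traces each grid sample backwards along the velocity field and interpolates the advected quantity there, which is what keeps the solver stable at large time steps. Here is a minimal 2D Python sketch of that step; the collocated grid layout, uniform spacing, and function name are assumptions made for illustration, not the paper's actual discretization.

```python
import numpy as np

def advect(q, u, v, dt, dx):
    """Semi-Lagrangian advection of a 2D field q by velocity (u, v).

    q, u, v: (ny, nx) arrays sampled at cell centers; dt: time step; dx: grid spacing.
    """
    ny, nx = q.shape
    jj, ii = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    # Trace each sample point backwards along the velocity field.
    x = np.clip(ii - dt * u / dx, 0.0, nx - 1.0)
    y = np.clip(jj - dt * v / dx, 0.0, ny - 1.0)
    # Bilinear interpolation of q at the back-traced positions.
    i0 = np.floor(x).astype(int); i1 = np.minimum(i0 + 1, nx - 1)
    j0 = np.floor(y).astype(int); j1 = np.minimum(j0 + 1, ny - 1)
    s, t = x - i0, y - j0
    return ((1 - t) * ((1 - s) * q[j0, i0] + s * q[j0, i1]) +
            t       * ((1 - s) * q[j1, i0] + s * q[j1, i1]))

# Example: advect a square blob to the right with a uniform velocity field.
ny, nx = 64, 64
q = np.zeros((ny, nx)); q[28:36, 8:16] = 1.0
u = np.full((ny, nx), 1.0); v = np.zeros((ny, nx))
q_next = advect(q, u, v, dt=0.5, dx=1.0)
```

Because the scheme only ever samples existing values of q, it never amplifies them, which is the source of its unconditional stability; the paper combines a modified version of this idea with special handling of flow around objects.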


Network Information
Related Topics (5)
Rendering (computer graphics)
41.3K papers, 776.5K citations
92% related
Visualization
52.7K papers, 905K citations
85% related
User interface
85.4K papers, 1.7M citations
83% related
Feature (computer vision)
128.2K papers, 1.7M citations
80% related
Mobile device
58.6K papers, 942.8K citations
79% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    53
2022    169
2021    89
2020    127
2019    108
2018    139