
Showing papers on "Graphics published in 2001"


Proceedings ArticleDOI
01 Aug 2001
TL;DR: An appropriately modified semi-Lagrangian method is combined with a new approach to calculating fluid flow around objects, efficiently solving the equations of motion for a liquid while retaining enough detail to obtain realistic-looking behavior.
Abstract: We present a general method for modeling and animating liquids. The system is specifically designed for computer animation and handles viscous liquids as they move in a 3D environment and interact with graphics primitives such as parametric curves and moving polygons. We combine an appropriately modified semi-Lagrangian method with a new approach to calculating fluid flow around objects. This allows us to efficiently solve the equations of motion for a liquid while retaining enough detail to obtain realistic-looking behavior. The object interaction mechanism is extended to provide control over the liquid's 3D motion. A high quality surface is obtained from the resulting velocity field using a novel adaptive technique for evolving an implicit surface.
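The core of the semi-Lagrangian step the abstract refers to is easy to illustrate: each grid point is traced backwards along the velocity field for one time step, and the advected quantity is interpolated at the departure point, which keeps the method stable for large time steps. Below is a minimal 2D NumPy sketch of that single building block, under assumed grid spacing, field names, and clamped boundaries; it is not the paper's solver and omits the object-interaction and surface-tracking machinery.

```python
import numpy as np

def advect(q, u, v, dt, dx=1.0):
    """Semi-Lagrangian advection of a scalar field q by velocity (u, v).

    Each grid point is traced backwards along the velocity for one time
    step, and q is bilinearly interpolated at the departure point.
    """
    ny, nx = q.shape
    j, i = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")

    # Backtrace departure points (clamped at the grid boundary).
    x = np.clip(i - dt * u / dx, 0, nx - 1)
    y = np.clip(j - dt * v / dx, 0, ny - 1)

    x0 = np.floor(x).astype(int)
    y0 = np.floor(y).astype(int)
    x1 = np.minimum(x0 + 1, nx - 1)
    y1 = np.minimum(y0 + 1, ny - 1)
    fx, fy = x - x0, y - y0

    # Bilinear interpolation of q at the departure points.
    top = (1 - fx) * q[y0, x0] + fx * q[y0, x1]
    bot = (1 - fx) * q[y1, x0] + fx * q[y1, x1]
    return (1 - fy) * top + fy * bot
```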

780 citations


Proceedings ArticleDOI
01 Aug 2001
TL;DR: A novel texture-based volume rendering approach that achieves the image quality of the best post-shading approaches with far fewer slices; it is suitable for new flexible consumer graphics hardware and suited to interactive high-quality volume graphics.
Abstract: We introduce a novel texture-based volume rendering approach that achieves the image quality of the best post-shading approaches with far fewer slices. It is suitable for new flexible consumer graphics hardware and provides high image quality even for low-resolution volume data and non-linear transfer functions with high frequencies, without the performance overhead caused by rendering additional interpolated slices. This is especially useful for volumetric effects in computer games and professional scientific volume visualization, which heavily depend on memory bandwidth and rasterization power. We present an implementation of the algorithm on current programmable consumer graphics hardware using multi-textures with advanced texture fetch and pixel shading operations. We implemented direct volume rendering, volume shading, an arbitrary number of isosurfaces, and mixed-mode rendering. The performance depends neither on the number of isosurfaces nor on the definition of the transfer functions, and the approach is therefore suited for interactive high-quality volume graphics.
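What this abstract describes is widely known as pre-integrated classification: rather than classifying individual interpolated samples, the contribution of the whole slab between two adjacent slices is read from a table indexed by the scalar values at the slab's front and back faces, so high-frequency transfer functions no longer force additional slices. The sketch below builds such a table on the CPU by brute-force numerical integration; it is a simplified illustration under the assumption of a linear scalar ramp and constant slab thickness, not the paper's hardware implementation.

```python
import numpy as np

def preintegration_table(tf_rgba, thickness=1.0, steps=16):
    """Build an (n, n, 4) slab lookup table from a 1D transfer function.

    tf_rgba: (n, 4) array; RGB emission in [0, 1], extinction in channel 3.
    Entry [sf, sb] approximates the emission/absorption integral over a
    slab whose scalar varies linearly from value index sf to sb.
    """
    n = tf_rgba.shape[0]
    table = np.zeros((n, n, 4))
    ds = thickness / steps
    for sf in range(n):
        for sb in range(n):
            # Sub-sample the transfer function along the slab.
            s = np.round(np.linspace(sf, sb, steps)).astype(int)
            rgb, tau = tf_rgba[s, :3], tf_rgba[s, 3]
            color, alpha = np.zeros(3), 0.0
            for k in range(steps):        # front-to-back compositing
                a_k = 1.0 - np.exp(-tau[k] * ds)
                color += (1.0 - alpha) * a_k * rgb[k]
                alpha += (1.0 - alpha) * a_k
            table[sf, sb, :3], table[sf, sb, 3] = color, alpha
    return table

# At render time, each slab between two slices would be shaded by a single
# lookup into table[sf, sb] instead of by rendering many extra slices.
```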

590 citations


Journal ArticleDOI
TL;DR: This survey reviews recent literature on both the 3D model-building process and the techniques used to match and identify free-form objects from imagery, offering the computer vision practitioner new ways to recognize and localize free-form objects.

573 citations


Proceedings ArticleDOI
01 Mar 2001
TL;DR: A new method for representing a hierarchy of regions on a polygonal surface that partition the surface into a set of face clusters; these clusters represent the aggregate properties of the original surface at different scales rather than providing geometric approximations of varying complexity.
Abstract: Many graphics applications, and interactive systems in particular, rely on hierarchical surface representations to efficiently process very complex models. Considerable attention has been focused on hierarchies of surface approximations and their construction via automatic surface simplification. Such representations have proven effective for adapting the level of detail used in real time display systems. However, other applications such as ray tracing, collision detection, and radiosity benefit from an alternative multiresolution framework: hierarchical partitions of the original surface geometry. We present a new method for representing a hierarchy of regions on a polygonal surface which partition that surface into a set of face clusters. These clusters, which are connected sets of faces, represent the aggregate properties of the original surface at different scales rather than providing geometric approximations of varying complexity. We also describe the combination of an effective error metric and a novel algorithm for constructing these hierarchies. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling—surface and object representations

415 citations


Proceedings ArticleDOI
07 Oct 2001
TL;DR: To calculate a feature for a mesh, it is shown that one can first compute it for each elementary shape, such as a triangle or a tetrahedron, and then sum the values over the whole mesh.
Abstract: Meshes are dominantly used to represent 3D models as they fit well with graphics rendering hardware. Features such as volume, moments, and Fourier transform coefficients need to be calculated from the mesh representation efficiently. We propose an algorithm to calculate these features without transforming the mesh into other representations such as the volumetric representation. To calculate a feature for a mesh, we show that we can first compute it for each elementary shape such as a triangle or a tetrahedron, and then add up all the values for the mesh. The algorithm is simple and efficient, with many potential applications.
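For a closed, consistently oriented triangle mesh, the per-element accumulation described above is particularly simple for volume and moments: each face, taken together with the origin, spans a tetrahedron whose signed contribution can be summed over all faces. The NumPy sketch below illustrates that standard construction for the volume and first moments; it is not the authors' code, and the array layouts and closed-mesh assumption are mine.

```python
import numpy as np

def volume_and_centroid(vertices, faces):
    """Signed volume and centroid of a closed, oriented triangle mesh.

    Each face (a, b, c) and the origin span a tetrahedron with signed
    volume det([a b c]) / 6 and centroid (0 + a + b + c) / 4; summing the
    signed contributions over all faces yields the mesh volume and moments.
    """
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces, dtype=int)
    a, b, c = v[f[:, 0]], v[f[:, 1]], v[f[:, 2]]
    signed = np.einsum("ij,ij->i", a, np.cross(b, c)) / 6.0
    volume = signed.sum()
    centroid = ((a + b + c) / 4.0 * signed[:, None]).sum(axis=0) / volume
    return volume, centroid

# For a unit cube triangulated into 12 faces this returns volume 1.0 and
# the cube's center, independent of where the origin happens to lie.
```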

380 citations


Journal ArticleDOI
TL;DR: This paper focuses on methods to construct accurate digital models of scanned objects by integrating high-quality texture and normal maps with geometric data, designed for use with inexpensive, electronic camera-based systems in which low-resolution range images and high-resolution intensity images are acquired.
Abstract: The creation of three-dimensional digital content by scanning real objects has become common practice in graphics applications for which visual quality is paramount, such as animation, e-commerce, and virtual museums. While a lot of attention has been devoted recently to the problem of accurately capturing the geometry of scanned objects, the acquisition of high-quality textures is equally important, but not as widely studied. In this paper, we focus on methods to construct accurate digital models of scanned objects by integrating high-quality texture and normal maps with geometric data. These methods are designed for use with inexpensive, electronic camera-based systems in which low-resolution range images and high-resolution intensity images are acquired. The resulting models are well-suited for interactive rendering on the latest-generation graphics hardware with support for bump mapping. Our contributions include new techniques for processing range, reflectance, and surface normal data, for image-based registration of scans, and for reconstructing high-quality textures for the output digital object.

370 citations


Proceedings ArticleDOI
01 Aug 2001
TL;DR: WireGL provides the familiar OpenGL API to each node in a cluster, virtualizing multiple graphics accelerators into a sort-first parallel renderer with a parallel interface, which can drive a variety of output devices, from standalone displays to tiled display walls.
Abstract: We describe WireGL, a system for scalable interactive rendering on a cluster of workstations. WireGL provides the familiar OpenGL API to each node in a cluster, virtualizing multiple graphics accelerators into a sort-first parallel renderer with a parallel interface. We also describe techniques for reassembling an output image from a set of tiles distributed over a cluster. Using flexible display management, WireGL can drive a variety of output devices, from standalone displays to tiled display walls. By combining the power of virtual graphics, the familiarity and ordered semantics of OpenGL, and the scalability of clusters, we are able to create time-varying visualizations that sustain rendering performance over 70,000,000 triangles per second at interactive refresh rates using 16 compute nodes and 16 rendering nodes.
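The "sort-first" part of the abstract can be illustrated by the bucketing step such a renderer performs: projected geometry is routed to whichever screen tiles (and hence rendering nodes) its bounding box overlaps. The toy sketch below shows only that routing decision; tile sizes, data layout, and the prior projection to screen space are assumptions, and the real system operates on packed OpenGL command streams rather than raw triangles.

```python
import numpy as np

def bucket_triangles(screen_xy, tile_w, tile_h, tiles_x, tiles_y):
    """Assign each triangle to every screen tile its 2D bounding box overlaps.

    screen_xy: (n, 3, 2) projected vertex positions in pixels.
    Returns a dict mapping (tx, ty) tile coordinates to triangle indices.
    """
    buckets = {}
    lo = screen_xy.min(axis=1)   # (n, 2) bounding-box minima
    hi = screen_xy.max(axis=1)   # (n, 2) bounding-box maxima
    for tri, (l, h) in enumerate(zip(lo, hi)):
        tx0, ty0 = max(int(l[0] // tile_w), 0), max(int(l[1] // tile_h), 0)
        tx1 = min(int(h[0] // tile_w), tiles_x - 1)
        ty1 = min(int(h[1] // tile_h), tiles_y - 1)
        for ty in range(ty0, ty1 + 1):
            for tx in range(tx0, tx1 + 1):
                buckets.setdefault((tx, ty), []).append(tri)
    return buckets
```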

361 citations


Proceedings ArticleDOI
01 Aug 2001
TL;DR: This paper presents work carried out for a project to develop a new interactive technique that combines haptic sensation with computer graphics, along with a new interface device comprising a flexible screen, an actuator array, and a projector.
Abstract: This paper presents work carried out for a project to develop a new interactive technique that combines haptic sensation with computer graphics. The project has two goals. The first is to provide users with a spatially continuous surface on which they can effectively touch an image using any part of their bare hand, including the palm. The second goal is to present visual and haptic sensation simultaneously by using a single device that doesn't oblige the user to wear any extra equipment. In order to achieve these goals, we designed a new interface device comprising a flexible screen, an actuator array, and a projector. The actuator deforms the flexible screen onto which the image is projected. The user can then touch the image directly and feel its shape and rigidity. We initially fabricated two prototypes, and their effectiveness was examined through observations made by anonymous users and a performance evaluation test for spatial resolution.

349 citations


Proceedings ArticleDOI
01 Aug 2001
TL;DR: Algorithms are described for the real-time synthesis of realistic sound effects for interactive simulations (e.g., games) and animation; they are efficient, physically based, and can be controlled by users in natural ways.
Abstract: We describe algorithms for real-time synthesis of realistic sound effects for interactive simulations (e.g., games) and animation. These sound effects are produced automatically, from 3D models using dynamic simulation and user interaction. We develop algorithms that are efficient, physically-based, and can be controlled by users in natural ways. We develop effective techniques for producing high quality continuous contact sounds from dynamic simulations running at video rates which are slow relative to audio synthesis. We accomplish this using modal models driven by contact forces modeled at audio rates, which are much higher than the graphics frame rate. The contact forces can be computed from simulations or can be custom designed. We demonstrate the effectiveness with complex realistic simulations.
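The modal models mentioned in the abstract can be pictured as a bank of damped resonators, one per vibration mode, driven sample by sample at the audio rate by a contact-force signal. The sketch below is a generic modal synthesizer of that kind; the mode frequencies, dampings, gains, and the force signal are placeholders, and the paper's contact-force models are considerably more elaborate.

```python
import numpy as np

def modal_synthesis(force, freqs, dampings, gains, sr=44100):
    """Drive a bank of damped resonators (one per mode) with a contact force.

    Mode k is a two-pole resonator with pole radius exp(-d_k / sr) and pole
    angle 2*pi*f_k / sr, i.e. a discretized exponentially decaying sinusoid.
    """
    out = np.zeros(len(force))
    for f, d, g in zip(freqs, dampings, gains):
        r = np.exp(-d / sr)
        a1, a2 = 2.0 * r * np.cos(2.0 * np.pi * f / sr), -r * r
        y1 = y2 = 0.0
        for n, fn in enumerate(force):
            y = a1 * y1 + a2 * y2 + g * fn   # resonator update at audio rate
            out[n] += y
            y1, y2 = y, y1
    return out

# A single short impulse in `force` yields a decaying "struck object" tone
# whose timbre is determined entirely by the modal parameters.
```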

328 citations


Book
01 Nov 2001
TL;DR: Rich in theory, analysis, and practical information, this book is the complete resource for subdivision methods, providing all that is needed to understand how subdivision works its magic, and how to make that magic work.
Abstract: From the Publisher: The world's leading animation houses rely increasingly on subdivision methods for creating realistic-looking complex shapes. However, until now there was no one book devoted to this powerful geometric modeling technique. Subdivision Methods for Geometric Design does the job with authority and precision, providing all that is needed to understand how subdivision works its magic, and how to make that magic work. Throughout the book, icons cue readers to visit a companion Web site loaded with interactive exercises, implementations of the book's images, and supplementary material. Rich in theory, analysis, and practical information, this book is the complete resource for subdivision methods.

Features:
- The result of a collaboration between a leading university researcher and an industry practitioner.
- The only book devoted exclusively and comprehensively to this important new technology.
- Provides solid background and theoretical analysis of subdivision as well as a wide variety of specific applications.
- Addresses algorithms for Bezier and uniform B-Spline curves, Catmull-Clark subdivision for quad meshes, and regularity tests for polyhedral meshes.
- Via the companion Web site (www.subdivision.com), provides opportunities for readers to experiment hands-on with implementations in a richly interactive environment.
- Includes a foreword by Tony DeRose, recipient of the 1999 ACM Computer Graphics Achievement Award for his seminal work in subdivision methods.

Author Biography: Joe Warren, Professor of Computer Science at Rice University since 1986, is one of the world's leading experts on subdivision. Of his nearly 50 computer science papers, published in prestigious forums such as SIGGRAPH, Transactions on Graphics, Computer-Aided Geometric Design, and The Visual Computer, a dozen specifically address subdivision and its applications to computer graphics. Prof. Warren received both his M.S. and Ph.D. in Computer Science at Cornell University. His research interests focus on mathematical methods for representing geometric shape. Henrik Weimer is a research scientist at the DaimlerChrysler Corporate Research Center in Berlin, where he works on knowledge-based support for the design and creation of engineering products. Dr. Weimer obtained his Ph.D. in Computer Science from Rice University.

328 citations


Patent
16 Feb 2001
TL;DR: In this article, an enhanced operating environment for an Interactive Real-Time Distributed Navigation System (Figs. 1 and 2) is disclosed; the environment is provided by improving input and output techniques in a navigation system.
Abstract: An enhanced operating environment for an Interactive Real-Time Distributed Navigation System (Figs. 1 and 2) is disclosed. The environment is provided by improving input and output techniques (via 218) in a navigation system. Methods for reducing the number of inputs to the navigational system are carried out through a wireless device (202). Improved input methods include entering non-deterministic information to retrieve deterministic information. Output techniques include methods for pacing navigational prompts provided by the navigation system. The system is applicable to text, graphics, or audible systems.

Proceedings ArticleDOI
01 Mar 2001
TL;DR: CavePainting’s 3D brush strokes, color pickers, artwork viewing mode, and interface are described and several works of art created using the system are presented along with feedback from artists.
Abstract: CavePainting is an artistic medium that uses a 3D analog of 2D brush strokes to create 3D works of art in a fully immersive Cave environment. Physical props and gestures are used to provide an intuitive interface for artists who may not be familiar with virtual reality. The system is designed to take advantage of the 8 ft. x 8 ft. x 8 ft. space in which the artist works. CavePainting enables the artist to create a new type of art and provides a novel approach to viewing this art after it has been created. In this paper, we describe CavePainting’s 3D brush strokes, color pickers, artwork viewing mode, and interface. We also present several works of art created using the system along with feedback from artists. Artists are excited about this form of art and the gestural, full-body experience of creating it. CR Categories and Subject Descriptors: I.3.6 [Computer Graphics]: Methodology and Techniques - Interaction Techniques; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - Virtual Reality; J.5 [Arts and Humanities]: Fine Arts. Additional Key Words: 3D painting, 3D modeling, gestures, tangible user interface, Cave

Proceedings ArticleDOI
01 Mar 2001
TL;DR: This work presents a new approach for computing generalized proximity information of arbitrary 2D objects using graphics hardware using multi-pass rendering techniques and accelerated distance computation that provides proximity information at interactive rates for a variety of simulation strategies for both backtracking and penalty-based collision responses.
Abstract: We present a new approach for computing generalized proximity information of arbitrary 2D objects using graphics hardware. Using multi-pass rendering techniques and accelerated distance computation, our algorithm performs proximity queries not only for detecting collisions, but also for computing intersections, separation distance, penetration depth, and contact points and normals. Our hybrid geometry and image-based approach balances computation between the CPU and graphics subsystems. Geometric object-space techniques coarsely localize potential intersection regions or closest features between two objects, and image-space techniques compute the low-level proximity information in these regions. Most of the proximity information is derived from a distance field computed using graphics hardware. We demonstrate the performance in collision response computation for rigid and deformable body dynamics simulations. Our approach provides proximity information at interactive rates for a variety of simulation strategies for both backtracking and penalty-based collision responses.
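Once a distance field of one object is stored on a grid, most of the proximity queries listed above reduce to lookups: sampling the field at boundary points of the second object gives separation distance (positive values) or penetration depth (negative values), and the field's gradient supplies contact normals. The sketch below shows that CPU-side query step only; the signed 2D distance field is assumed to be given, whereas the paper's contribution is computing it and localizing the query regions with graphics hardware.

```python
import numpy as np

def proximity_query(signed_dist, points, cell=1.0):
    """Query a 2D signed distance field at boundary points of another object.

    signed_dist: (ny, nx) grid, negative inside the first object.
    points: (n, 2) sample points in grid coordinates (x, y).
    Returns per-point distances, unit normals from the gradient, and the
    maximum penetration depth (0 if the objects do not overlap).
    """
    gy, gx = np.gradient(signed_dist, cell)          # field gradient
    ix = np.clip(np.round(points[:, 0]).astype(int), 0, signed_dist.shape[1] - 1)
    iy = np.clip(np.round(points[:, 1]).astype(int), 0, signed_dist.shape[0] - 1)
    dist = signed_dist[iy, ix]                       # nearest-grid-point sample
    normals = np.stack([gx[iy, ix], gy[iy, ix]], axis=1)
    normals /= np.maximum(np.linalg.norm(normals, axis=1, keepdims=True), 1e-12)
    penetration = max(0.0, float(-dist.min()))
    return dist, normals, penetration
```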

Patent
05 Jan 2001
TL;DR: In this paper, a user interface for a radio frequency identification interrogation system is described, which interface may include graphics, sounds, lights, or combinations of the foregoing that provide information to a user in regard to the materials being interrogated.
Abstract: A user interface for a radio frequency identification interrogation system is disclosed, which interface may include graphics, sounds, lights, or combinations of the foregoing that provide information to a user in regard to the materials being interrogated.

Book
01 Nov 2001
TL;DR: Xerox's 8010 Star Information System is a personal computer designed for office professionals who create, analyze, and distribute information; as described in this paper, its user interface is based on the metaphor of a physical office.
Abstract: In April 1981 Xerox announced the 8010 Star Information System, a new personal computer designed for office professionals who create, analyze, and distribute information. The Star user interface differs from that of other office computer systems by its emphasis on graphics, its adherence to a metaphor of a physical office, and its rigorous application of a small set of design principles. The graphic imagery reduces the amount of typing and remembering required to operate the system. The office metaphor makes the system seem familiar and friendly; it reduces the alien feel that many computer systems have. The design principles unify the nearly two dozen functional areas of Star, increasing the coherence of the system and allowing user experience in one area to apply in others.

Patent
16 Nov 2001
TL;DR: In this paper, a graphics pipeline system (3300) is provided for graphics processing, which includes a transform module adapted for receiving vertex data and a lighting module which is positioned on the single semiconductor platform for performing lighting operations on the vertex data received from the transform module.
Abstract: A graphics pipeline system (3300) is provided for graphics processing. Such system includes a transform module adapted for receiving vertex data. The transform module serves to transform the vertex data from a first space to a second space. Coupled to the transform module is a lighting module which is positioned on the single semiconductor platform for performing lighting operations on the vertex data received from the transform module. Also included is a rasterizer coupled to the lighting module and positioned on the single semiconductor platform for rendering the vertex data received from the lighting module. During use, an antialiasing feature (3302) is implemented to improve a quality of the graphics rendering.

Patent
22 Jun 2001
TL;DR: In this paper, the authors present a vertex representation allowing the graphics pipeline to retain vertex state information and to mix indexed and direct vertex values and attributes, as well as a projection matrix value set command; a display list call object command; and an embedded frame buffer clear/set command.
Abstract: An interface for a graphics system includes simple yet powerful constructs that are easy for an application programmer to use and learn. Features include a unique vertex representation allowing the graphics pipeline to retain vertex state information and to mix indexed and direct vertex values and attributes; a projection matrix value set command; a display list call object command; and an embedded frame buffer clear/set command.

Proceedings ArticleDOI
07 Oct 2001
TL;DR: A first-order solver is adopted for the basic implicit level set model, and an implementation performing an explicit timestep in 2 ms on a 128×128 image is presented.
Abstract: Implicit active contours are a very flexible technique in the segmentation of digital images. A novel type of hardware implementation is presented here to approach real-time applications. We propose to exploit the high performance of modern graphics cards for numerical computations. Vectors are regarded as images, and linear algebraic operations on vectors are realized by the graphics operations of image blending. Thus, the performance benefits from the high memory bandwidth and the economy of command transfers, while the restricted precision does not affect the qualitative behavior of the level set propagation. Here, we adopt a first-order solver for the basic implicit level set model and present an implementation performing an explicit timestep in 2 ms on a 128×128 image.
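The first-order model referred to here is the level set equation φ_t + F·|∇φ| = 0 discretized with an upwind gradient. The NumPy sketch below is a plain-CPU reference for one explicit timestep of that scheme, so the math being accelerated is visible; the paper's actual contribution, mapping this update onto graphics-card blending operations, is not reproduced, and the periodic boundary handling is an assumption made for brevity.

```python
import numpy as np

def level_set_step(phi, speed, dt, h=1.0):
    """One explicit first-order upwind step of phi_t + F * |grad phi| = 0.

    phi: (ny, nx) level set function; speed: scalar or (ny, nx) field F.
    Boundaries are treated as periodic via np.roll, purely for brevity.
    """
    # One-sided differences in x and y.
    dxm = (phi - np.roll(phi, 1, axis=1)) / h    # backward in x
    dxp = (np.roll(phi, -1, axis=1) - phi) / h   # forward in x
    dym = (phi - np.roll(phi, 1, axis=0)) / h
    dyp = (np.roll(phi, -1, axis=0) - phi) / h

    # Godunov upwind gradient magnitudes for positive / negative speed.
    grad_plus = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2 +
                        np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
    grad_minus = np.sqrt(np.minimum(dxm, 0)**2 + np.maximum(dxp, 0)**2 +
                         np.minimum(dym, 0)**2 + np.maximum(dyp, 0)**2)

    return phi - dt * (np.maximum(speed, 0) * grad_plus +
                       np.minimum(speed, 0) * grad_minus)
```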

Proceedings ArticleDOI
01 Aug 2001
TL;DR: A renderer achieving 106 Mtri/s on an 8-node cluster, using Lightning-2 to perform sort-last depth compositing, is demonstrated; because Lightning-2 connects through an industry-standard digital video port, such rendering clusters can be upgraded across multiple generations of graphics accelerators with little effort.
Abstract: Clusters of PCs are increasingly popular as cost-effective platforms for supercomputer-class applications. Given recent performance improvements in graphics accelerators, clusters are similarly attractive for demanding graphics applications. We describe the design and implementation of Lightning-2, a display subsystem for such a cluster. The system scales in both the number of rendering nodes and the number of displays supported, and allows any pixel data generated from any node to be dynamically mapped to any location on any display. A number of image-compositing functions are supported, including color-keying and depth-compositing. A distinguishing feature of the system is its platform independence: it connects to graphics accelerators via an industry-standard digital video port and requires no modifications to accelerator hardware or device drivers. As a result, rendering clusters that utilize Lightning-2 can be upgraded across multiple generations of graphics accelerators with little effort. We demonstrate a renderer that achieves 106 Mtri/s on an 8-node cluster using Lightning-2 to perform sort-last depth compositing.

Proceedings ArticleDOI
28 May 2001
TL;DR: This paper reports on Move3D, a software platform dedicated to collision-free path planning, and focuses on results obtained in logistics for industrial installations, in graphics animation, and in mobile robotics.
Abstract: This paper reports on Move3D, a software platform dedicated to collision-free path planning. The algorithms are based on probabilistic approaches and take advantage of the progress in computer performance. The generality comes from a dedicated software architecture that allows rapid design of path planners. The paper focuses on results obtained in logistics for industrial installations, in graphics animation, and in mobile robotics.

Proceedings ArticleDOI
01 Jan 2001
TL;DR: This report discusses different approaches towards interactive ray tracing, using techniques such as approximation, hybrid rendering, and direct optimization of the ray tracing algorithm itself, as well as recent research towards implementing ray tracing in hardware as an alternative to current graphics chips.
Abstract: The term ray tracing is commonly associated with highly realistic images but certainly not with interactive graphics. However, with the increasing hardware resources of today, interactive ray tracing is becoming a reality and offers a number of benefits over the traditional rasterization pipeline. The goal of this report is to provide a better understanding of the potential and challenges of interactive ray tracing. We start with a review of the problems associated with rasterization based rendering and contrast this with the advantages offered by ray tracing. Next we discuss different approaches towards interactive ray tracing using techniques such as approximation, hybrid rendering, and direct optimization of the ray tracing algorithm itself. After a brief review of interactive ray tracing on supercomputers we describe implementations on standard PCs and clusters of networked PCs. This system improves ray tracing performance by more than an order of magnitude and outperforms even high-end graphics hardware for complex scenes up to tens of millions of polygons. Finally, we discuss recent research towards implementing ray tracing in hardware as an alternative to current graphics chips. This report ends with a discussion of the remaining challenges and the future of ray tracing in interactive 3D graphics.
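None of the surveyed systems are reproduced here, but the inner kernel they all optimize is the ray/primitive intersection test executed millions of times per frame, usually behind an acceleration structure that culls most primitives. For reference only, below is the standard Möller-Trumbore ray/triangle test in Python; it is a textbook routine, not code from the report.

```python
import numpy as np

def ray_triangle(orig, direc, v0, v1, v2, eps=1e-9):
    """Moeller-Trumbore ray/triangle intersection.

    Returns the ray parameter t of the hit point, or None on a miss.
    All inputs are length-3 NumPy arrays; direc need not be normalized.
    """
    e1, e2 = v1 - v0, v2 - v0
    pvec = np.cross(direc, e2)
    det = np.dot(e1, pvec)
    if abs(det) < eps:                  # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = orig - v0
    u = np.dot(tvec, pvec) * inv_det    # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = np.cross(tvec, e1)
    v = np.dot(direc, qvec) * inv_det   # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, qvec) * inv_det
    return t if t > eps else None
```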

Patent
30 Nov 2001
TL;DR: In this paper, a system, method and computer program product are provided for programmable processing of fragment data in a computer hardware graphics pipeline, where the programmable operations are performed in a manner/sequence specified in a graphics application program interface.
Abstract: A system, method and computer program product are provided for programmable processing of fragment data in a computer hardware graphics pipeline. Initially, fragment data is received in a hardware graphics pipeline. It is then determined whether the hardware graphics pipeline is operating in a programmable mode. If it is determined that the hardware graphics pipeline is operating in the programmable mode, programmable operations are performed on the fragment data in order to generate output. The programmable operations are performed in a manner/sequence specified in a graphics application program interface. If it is determined that the hardware graphics pipeline is not operating in the programmable mode, standard graphics application program interface (API) operations are performed on the fragment data in order to generate output.

Patent
06 Apr 2001
TL;DR: A computer-implemented method for generating electronic documents is presented, including the steps of receiving data from at least one application program (202), dividing the data into text data and graphics data (302), and generating at least one first file for storing at least a portion of the text data and graphics data.
Abstract: A computer-implemented method for generating electronic documents, including the steps of: receiving data from at least one application program (202), dividing the data into text data and graphics data (302), and generating at least one first file for storing at least a portion of text data and graphics data, thereby creating an electronic document (501).

Patent
07 Dec 2001
TL;DR: In this paper, a graphics processing system for pixel data is described; it includes a front end module for receiving pixel data and a setup unit that is coupled to the front end module and generates parameter coefficients.
Abstract: A graphics processing system is provided. The graphics processing system includes a front end module for receiving pixel data. A setup unit is coupled to the front end module and generates parameter coefficients. A raster unit is coupled to the setup unit and generates stepping information. A virtual texturing array engine textures and colors the pixel data based on the parameter coefficients and stepping information. Also provided is a pixel engine adapted for processing the textured and colored pixel data received from the virtual texturing array engine.

Journal ArticleDOI
TL;DR: The goals of TRex, a system for interactive volume rendering of large data sets, are to provide near-interactive display rates for time-varying, terabyte-sized, uniformly sampled data sets and to provide a low-latency platform for volume visualization in immersive environments.
Abstract: To employ direct volume rendering, TRex uses parallel graphics hardware, software-based compositing, and high-performance I/O to provide near-interactive display rates for time-varying, terabyte-sized data sets. We present a scalable, pipelined approach for rendering data sets too large for a single graphics card. To do so, we take advantage of multiple hardware rendering units and parallel software compositing. The goals of TRex, our system for interactive volume rendering of large data sets, are to provide near-interactive display rates for time-varying, terabyte-sized uniformly sampled data sets and provide a low-latency platform for volume visualization in immersive environments. We consider 5 frames per second (fps) to be near-interactive rates for normal viewing environments and immersive environments to have a lower bound frame rate of 10 fps. Using TRex for virtual reality environments requires low latency - around 50 ms per frame or 100 ms per view update or stereo pair. To achieve lower latency renderings, we either render smaller portions of the volume on more graphics pipes or subsample the volume to render fewer samples per frame by each graphics pipe. Unstructured data sets must be resampled to appropriately leverage the 3D texture volume rendering method.
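The software compositing step the abstract relies on is easy to state: each graphics pipe renders its slab of the volume into an RGBA image, and the partial images are blended in visibility order with the over operator before display. A minimal sketch of that recombination is given below, assuming premultiplied-alpha images ordered front to back; TRex's pipelined, parallel implementation of the same operation is not reproduced.

```python
import numpy as np

def composite_over(partials):
    """Front-to-back 'over' compositing of premultiplied-alpha RGBA images.

    partials: list of (h, w, 4) float arrays, one per rendering pipe /
    volume slab, ordered front to back along the viewing direction.
    """
    out = np.zeros_like(partials[0])
    for img in partials:
        remaining = 1.0 - out[..., 3:4]   # how much is still visible
        out += remaining * img            # accumulate color and alpha
    return out
```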

Book ChapterDOI
07 Sep 2001
TL;DR: Experimental results showed that the proposed method of detecting and extracting characters that are touching graphics significantly improved the percentage of correctly detected text as well as the accuracy of character recognition.
Abstract: The separation of overlapping text and graphics is a challenging problem in document image analysis. This paper proposes a specific method of detecting and extracting characters that are touching graphics. It is based on the observation that the constituent strokes of characters are usually short segments in comparison with those of graphics. It combines line continuation with the feature line width to decompose and reconstruct segments underlying the region of intersection. Experimental results showed that the proposed method improved the percentage of correctly detected text as well as the accuracy of character recognition significantly.
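The observation driving the method, that character strokes are short relative to graphic strokes of the same line width, already yields a usable first-pass classifier once the drawing has been vectorized into polyline segments. The toy sketch below shows only that length-based split; the segment representation, the constant k, and the threshold form are assumptions, and the paper's decomposition and reconstruction of touching regions is considerably more involved.

```python
import numpy as np

def split_text_graphics(segments, line_width, k=8.0):
    """Split vectorized strokes into text-like and graphics-like segments.

    segments: list of (n_i, 2) polylines in pixel coordinates.
    A polyline is classified as text-like if its total arc length is
    below k times the estimated line width, otherwise as graphics.
    """
    text, graphics = [], []
    for seg in segments:
        length = np.linalg.norm(np.diff(seg, axis=0), axis=1).sum()
        (text if length < k * line_width else graphics).append(seg)
    return text, graphics
```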

Proceedings ArticleDOI
21 Oct 2001
TL;DR: A hardware-assisted rendering technique coupled with a compression scheme is presented for the interactive visual exploration of time-varying scalar volume data; a palette-based decoding technique and an adaptive bit allocation scheme are developed to fully utilize the texturing capability of a commodity 3D graphics card.
Abstract: In this paper we present a hardware-assisted rendering technique coupled with a compression scheme for the interactive visual exploration of time-varying scalar volume data. A palette-based decoding technique and an adaptive bit allocation scheme are developed to fully utilize the texturing capability of a commodity 3-D graphics card. Using a single PC equipped with a modest amount of memory, a texture capable graphics card, and an inexpensive disk array, we are able to render hundreds of time steps of regularly gridded volume data (up to 45 million voxels each time step) at interactive rates, permitting the visual exploration of large scientific data sets in both the temporal and spatial domain.
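Palette (index) encoding is the part that makes the bandwidth budget work: each time step is stored as small integer indices per voxel plus a compact palette, so decoding amounts to a table lookup that texture hardware performs natively. The round-trip sketch below uses plain uniform quantization on the CPU as an illustration; the paper's adaptive bit allocation, which varies the index budget to match the data, is not reproduced.

```python
import numpy as np

def palette_encode(volume, bits=8):
    """Quantize a float volume into 8-bit-or-fewer indices plus a palette."""
    n = 1 << bits                         # palette size (indices fit in uint8)
    lo, hi = float(volume.min()), float(volume.max())
    scale = (n - 1) / max(hi - lo, 1e-12)
    indices = np.round((volume - lo) * scale).astype(np.uint8)
    palette = np.linspace(lo, hi, n)      # index -> reconstructed scalar value
    return indices, palette

def palette_decode(indices, palette):
    """Decoding is a pure lookup, the operation texture hardware gives for free."""
    return palette[indices]

# Encoding each time step separately lets its palette track that step's data
# range, so only the small index volume and its palette change per frame.
```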

Patent
07 May 2001
TL;DR: In this article, a graphics processor decrypts the compressed encrypted video stream and stores a decrypted version in a protected portion of an on-chip or off-chip video memory.
Abstract: A graphics processor receives a compressed encrypted video stream. The graphics processor decrypts the compressed encrypted video stream and stores a decrypted version (i.e., a decrypted compressed video stream) in a protected portion of an on-chip or off-chip video memory. The graphics processor then permits processors and other bus masters on the graphics processor to access the on-chip video memory, but conditionally limits access to other bus masters that are located off-chip, such as a central processing unit located off-chip and coupled to the graphics processor via a bus.

Patent
06 Aug 2001
TL;DR: In this paper, an apparatus and method for adding graphical material to a digital graphics album is disclosed, where annotation data is extracted from the reference material and may be processed by a natural language processor to produce search keywords.
Abstract: An apparatus and method for adding graphical material to a digital graphics album is disclosed. Reference material in a digital graphics album is specified. Annotation data is extracted from the reference material and may be processed by a natural language processor to produce search keywords. In addition to the keywords, user directives may be provided, both of which are used to conduct a search for related graphical materials. The search is conducted by querying a graphical material database through a network connection. The search results are received and the user can select from the resultant materials for inclusion in the digital graphics album. If no satisfactory material is found, the user can specify a reference graphical image that is processed to produce search criteria that are image content descriptors. The database is again queried in accordance with these descriptors to provide search results for possible inclusion.

Journal ArticleDOI
TL;DR: This article describes how direct manipulation human computer interfaces can be augmented with techniques borrowed from cartoon animators, and aims to improve the visual feedback of a direct manipulation interface by smoothing the changes of an interface, giving manipulated objects a feeling of substance and providing cues that anticipate the result of a manipulation.
Abstract: If judiciously applied, animation techniques can enhance the look and feel of computer applications that present a graphical human interface. Such techniques can smooth the rough edges and abrupt transitions common in many current graphical interfaces, and strengthen the illusion of direct manipulation that many interfaces strive to present. To date, few applications include such animation techniques. One possible reason is that animated interfaces are difficult to implement: they are difficult to design, place great burdens on programmers, and demand high-performance from underlying graphics systems.This article describes how direct manipulation human computer interfaces can be augmented with techniques borrowed from cartoon animators. In particular, we wish to improve the visual feedback of a direct manipulation interface by smoothing the changes of an interface, giving manipulated objects a feeling of substance and providing cues that anticipate the result of a manipulation. Our approach is to add support for animation techniques such as object distortion and keyframe interpolation, and to provide prepackaged animation effects such as animated widgets for common user interface interactions.To determine if these tools and techniques are practical and effective, we built a prototype direct manipulation drawing editor with an animated interface and used the prototype editor to carry out a set of human factors experiments. The experiments show that the techniques are practical even on standard workstation hardware, and that the effects can indeed enhance direct manipulation interfaces.