
Showing papers in "IEEE Transactions on Visualization and Computer Graphics in 2002"


Journal ArticleDOI
Daniel A. Keim1
TL;DR: This paper proposes a classification of information visualization and visual data mining techniques which is based on the data type to be visualized, the visualization technique, and the interaction and distortion technique.
Abstract: Never before in history has data been generated at such high volumes as it is today. Exploring and analyzing the vast volumes of data is becoming increasingly difficult. Information visualization and visual data mining can help to deal with the flood of information. The advantage of visual data exploration is that the user is directly involved in the data mining process. There are a large number of information visualization techniques which have been developed over the last decade to support the exploration of large data sets. In this paper, we propose a classification of information visualization and visual data mining techniques which is based on the data type to be visualized, the visualization technique, and the interaction and distortion technique. We exemplify the classification using a few examples, most of them referring to techniques and systems presented in this special section.

1,759 citations


Journal ArticleDOI
TL;DR: The ThemeRiver visualization depicts thematic variations over time within a large collection of documents and uses a river metaphor to convey several key notions, allowing a user to discern patterns that suggest relationships or trends.
Abstract: The ThemeRiver visualization depicts thematic variations over time within a large collection of documents. The thematic changes are shown in the context of a time-line and corresponding external events. The focus on temporal thematic change within a context framework allows a user to discern patterns that suggest relationships or trends. For example, the sudden change of thematic strength following an external event may indicate a causal relationship. Such patterns are not readily accessible in other visualizations of the data. We use a river metaphor to convey several key notions. The document collection's time-line, selected thematic content and thematic strength are indicated by the river's directed flow, composition and changing width, respectively. The directed flow from left to right is interpreted as movement through time and the horizontal distance between two points on the river defines a time interval. At any point in time, the vertical distance, or width, of the river indicates the collective strength of the selected themes. Colored "currents" flowing within the river represent individual themes. A current's vertical width narrows or broadens to indicate decreases or increases in the strength of the individual theme.

732 citations


Journal ArticleDOI
TL;DR: Polaris is presented, an interface for exploring large multidimensional databases that extends the well-known pivot table interface; its novel features include an interface for constructing visual specifications of table-based graphical displays and the ability to generate a precise set of relational queries from those visual specifications.
Abstract: In the last several years, large multidimensional databases have become common in a variety of applications, such as data warehousing and scientific computing. Analysis and exploration tasks place significant demands on the interfaces to these databases. Because of the size of the data sets, dense graphical representations are more effective for exploration than spreadsheets and charts. Furthermore, because of the exploratory nature of the analysis, it must be possible for the analysts to change visualizations rapidly as they pursue a cycle involving first hypothesis and then experimentation. In this paper, we present Polaris, an interface for exploring large multidimensional databases that extends the well-known pivot table interface. The novel features of Polaris include an interface for constructing visual specifications of table-based graphical displays and the ability to generate a precise set of relational queries from the visual specifications. The visual specifications can be rapidly and incrementally developed, giving the analyst visual feedback as he constructs complex queries and visualizations.

711 citations


Journal ArticleDOI
TL;DR: An important class of 3D transfer functions for scalar data is demonstrated, the application of multi-dimensional transfer functions to multivariate data is described, and a set of direct manipulation widgets that make specifying such transfer functions intuitive and convenient is presented.
Abstract: Most direct volume renderings produced today employ 1D transfer functions which assign color and opacity to the volume based solely on the single scalar quantity which comprises the data set. Though they have not received widespread attention, multi-dimensional transfer functions are a very effective way to extract materials and their boundaries for both scalar and multivariate data. However, identifying good transfer functions is difficult enough in 1D, let alone 2D or 3D. This paper demonstrates an important class of 3D transfer functions for scalar data, and describes the application of multi-dimensional transfer functions to multivariate data. We present a set of direct manipulation widgets that make specifying such transfer functions intuitive and convenient. We also describe how to use modern graphics hardware to both interactively render with multidimensional transfer functions and to provide interactive shadows for volumes. The transfer functions, widgets and hardware combine to form a powerful system for interactive volume exploration.
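A hedged sketch of why the second transfer-function dimension matters: indexing opacity by both scalar value and gradient magnitude lets a material boundary (high gradient) be isolated even when its scalar values overlap the materials on either side. This is plain NumPy with an illustrative `lut` layout, not the paper's hardware implementation.

```python
import numpy as np

def gradient_magnitude(vol):
    """Per-voxel gradient magnitude via central differences."""
    g = np.gradient(vol.astype(float))
    return np.sqrt(sum(gi ** 2 for gi in g))

def classify(vol, lut, vmax, gmax):
    """Assign opacity from a 2D lookup table indexed by (value, |grad|).

    `lut` is an (nv, ng) opacity table; a 1D transfer function would use
    only the first axis, so boundaries could not be selected separately
    from the materials they divide.
    """
    gm = gradient_magnitude(vol)
    vi = np.clip((vol / vmax * (lut.shape[0] - 1)).astype(int),
                 0, lut.shape[0] - 1)
    gi = np.clip((gm / gmax * (lut.shape[1] - 1)).astype(int),
                 0, lut.shape[1] - 1)
    return lut[vi, gi]

# Two materials split along z; the LUT is opaque only at high gradient,
# so only the boundary slab between them becomes visible.
vol = np.zeros((4, 4, 4))
vol[:, :, 2:] = 1.0
lut = np.zeros((2, 2))
lut[:, 1] = 1.0
out = classify(vol, lut, vmax=1.0, gmax=0.5)
```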

623 citations


Journal ArticleDOI
TL;DR: This work presents a novel technique for texture mapping on arbitrary surfaces with minimal distortion: by solving an "inverse" problem, a flat texture patch is mapped onto a curved surface while preserving the local and global structure of the texture.
Abstract: Presents a novel technique for texture mapping on arbitrary surfaces with minimal distortion by preserving the local and global structure of the texture. The recent introduction of the fast marching method on triangulated surfaces has made it possible to compute a geodesic distance map from a given surface point in O(n lg n) operations, where n is the number of triangles that represent the surface. We use this method to design a surface flattening approach based on multi-dimensional scaling (MDS). MDS is a family of methods that map a set of points into a finite-dimensional flat (Euclidean) domain, where the only data given is the corresponding distance between every pair of points. The MDS mapping yields minimal changes of the distances between the corresponding points. We then solve an "inverse" problem and map a flat texture patch onto a curved surface while preserving the structure of the texture.
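The flattening step rests on classical MDS: given only pairwise (here, geodesic) distances, recover coordinates in a flat Euclidean domain with minimal distance distortion. Below is a minimal NumPy sketch of classical (Torgerson) MDS, not the paper's implementation; for a planar configuration the distances are reproduced exactly.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical (Torgerson) MDS: recover `dim`-D coordinates from the
    pairwise distance matrix D alone."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                 # eigenvalues ascending
    idx = np.argsort(w)[::-1][:dim]          # keep the largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# The pairwise distances of a unit square's corners are planar, so the
# 2D embedding reproduces them exactly (up to rotation and reflection).
s = np.sqrt(2.0)
D = np.array([[0, 1, 1, s],
              [1, 0, s, 1],
              [1, s, 0, 1],
              [s, 1, 1, 0]], dtype=float)
X = classical_mds(D)
D_hat = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
```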

343 citations


Journal ArticleDOI
TL;DR: It is shown that visualization of gigabyte-size data sets can be realized even on low-end, commodity PCs without the need for complicated and explicit data paging techniques, by virtue of dramatic improvements in multilevel cache coherence.
Abstract: We describe a general framework for out-of-core rendering and management of massive terrain surfaces. The two key components of this framework are: view-dependent refinement of the terrain mesh and a simple scheme for organizing the terrain data to improve coherence and reduce the number of paging events from external storage to main memory. Similar to several previously proposed methods for view-dependent refinement, we recursively subdivide a triangle mesh defined over regularly gridded data using longest-edge bisection. As part of this single, per-frame refinement pass, we perform triangle stripping, view frustum culling, and smooth blending of geometry using geomorphing. Meanwhile, our refinement framework supports a large class of error metrics, is highly competitive in terms of rendering performance, and is surprisingly simple to implement. Independent of our refinement algorithm, we also describe several data layout techniques for providing coherent access to the terrain data. By reordering the data in a manner that is more consistent with our recursive access pattern, we show that visualization of gigabyte-size data sets can be realized even on low-end, commodity PCs without the need for complicated and explicit data paging techniques. Rather, by virtue of dramatic improvements in multilevel cache coherence, we rely on the built-in paging mechanisms of the operating system to perform this task. The end result is a straightforward, simple-to-implement, pointerless indexing scheme that dramatically improves the data locality and paging performance over conventional matrix-based layouts.
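The refinement core, recursive longest-edge bisection over regularly gridded data, can be sketched as follows. This is a fixed-depth toy in plain Python; the paper drives the recursion with view-dependent error metrics, triangle stripping, and geomorphing, all omitted here.

```python
def bisect(tri, depth, max_depth, out):
    """tri = (apex, left, right) vertices of a right isosceles triangle.

    The longest edge runs from `left` to `right`; splitting it at its
    midpoint yields two children, and the mesh stays crack-free as long
    as neighboring triangles split consistently.
    """
    apex, left, right = tri
    if depth == max_depth:
        out.append(tri)
        return
    mid = ((left[0] + right[0]) / 2.0, (left[1] + right[1]) / 2.0)
    bisect((mid, apex, left), depth + 1, max_depth, out)
    bisect((mid, right, apex), depth + 1, max_depth, out)

# Refine the two base triangles of a unit tile to depth 3; each level of
# bisection doubles the triangle count: 2 * 2**3 = 16 triangles.
tris = []
bisect(((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)), 0, 3, tris)
bisect(((1.0, 1.0), (0.0, 1.0), (1.0, 0.0)), 0, 3, tris)
```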

306 citations


Journal ArticleDOI
TL;DR: A framework for high quality splatting based on elliptical Gaussian kernels is presented, and it is shown that EWA volume reconstruction kernels can be reduced to surface reconstruction kernels, which makes the splat primitive universal in rendering surface and volume data.
Abstract: We present a framework for high quality splatting based on elliptical Gaussian kernels. To avoid aliasing artifacts, we introduce the concept of a resampling filter, combining a reconstruction kernel with a low-pass filter. Because of the similarity to Heckbert's (1989) EWA (elliptical weighted average) filter for texture mapping, we call our technique EWA splatting. Our framework allows us to derive EWA splat primitives for volume data and for point-sampled surface data. It provides high image quality without aliasing artifacts or excessive blurring for volume data and, additionally, features anisotropic texture filtering for point-sampled surfaces. It also handles nonspherical volume kernels efficiently; hence, it is suitable for regular, rectilinear, and irregular volume datasets. Moreover, our framework introduces a novel approach to compute the footprint function, facilitating efficient perspective projection of arbitrary elliptical kernels at very little additional cost. Finally, we show that EWA volume reconstruction kernels can be reduced to surface reconstruction kernels. This makes our splat primitive universal in rendering surface and volume data.
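For Gaussians, the resampling filter of the paper (reconstruction kernel convolved with a low-pass filter) reduces to adding covariance matrices. The NumPy sketch below rasterizes one such splat in the orthographic, single-channel case; the paper additionally handles perspective projection and volume kernels, and the unit-variance low-pass is one conventional choice.

```python
import numpy as np

def splat(buf, center, V, weight=1.0):
    """Accumulate one elliptical Gaussian splat into `buf`.

    V is the 2x2 screen-space covariance of the reconstruction kernel;
    convolving with a unit-variance Gaussian low-pass filter simply
    widens it by the identity, which is what suppresses aliasing when
    the kernel's footprint shrinks below a pixel.
    """
    C = V + np.eye(2)                          # resampling-filter covariance
    Cinv = np.linalg.inv(C)
    norm = weight / (2 * np.pi * np.sqrt(np.linalg.det(C)))
    ys, xs = np.mgrid[0:buf.shape[0], 0:buf.shape[1]]
    d = np.stack([xs - center[0], ys - center[1]], axis=-1)
    q = np.einsum('...i,ij,...j->...', d, Cinv, d)
    buf += norm * np.exp(-0.5 * q)

# A single unit-weight splat integrates to ~1 over a large enough grid.
buf = np.zeros((33, 33))
splat(buf, (16, 16), np.diag([4.0, 1.0]))
```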

146 citations


Journal ArticleDOI
TL;DR: A new hybrid scheme that combines the advantages of the Eulerian and Lagrangian frameworks is applied to the visualization of dense representations of time-dependent vector fields, suggesting that LEA could become a useful component of any scientific visualization toolkit concerned with the display of unsteady flows.
Abstract: A new hybrid scheme, called Lagrangian-Eulerian advection (LEA), that combines the advantages of the Eulerian and Lagrangian frameworks is applied to the visualization of dense representations of time-dependent vector fields. The algorithm encodes the particles into a texture that is then advected. By treating every particle equally, we can handle texture advection and dye advection within a single framework. High temporal and spatial correlation is achieved through the blending of successive frames. A combination of particle and dye advection enables the simultaneous visualization of streamlines, particle paths and streak-lines. We demonstrate various experimental techniques on several physical flow fields. The simplicity of both the resulting data structures and the implementation suggest that LEA could become a useful component of any scientific visualization toolkit concerned with the display of unsteady flows.
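The hybrid idea can be hedged as a two-step sketch: a Lagrangian backward lookup per texel (each texel fetches the texture value at its backward-integrated position) followed by Eulerian blending of successive frames for temporal coherence. This NumPy version uses nearest-texel lookup and edge clamping; the paper works at sub-texel accuracy with more careful boundary treatment.

```python
import numpy as np

def advect(tex, vx, vy, dt=1.0):
    """One advection step: texel (y, x) fetches tex at (y - dt*vy, x - dt*vx),
    rounded to the nearest texel and clamped at the borders."""
    h, w = tex.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xb = np.clip(np.rint(xs - dt * vx).astype(int), 0, w - 1)
    yb = np.clip(np.rint(ys - dt * vy).astype(int), 0, h - 1)
    return tex[yb, xb]

def blend(frames, alpha=0.1):
    """Exponential blend of successive frames for temporal correlation."""
    out = frames[0].astype(float)
    for f in frames[1:]:
        out = (1 - alpha) * out + alpha * f
    return out

# A uniform rightward flow moves a single seeded texel one step right.
tex = np.zeros((8, 8))
tex[2, 2] = 1.0
out = advect(tex, np.ones((8, 8)), np.zeros((8, 8)))
```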

117 citations


Journal ArticleDOI
TL;DR: A new scheme is presented that enables a filter mask (or convolution filter) to be applied to orientation data, giving time-domain filters for orientation data that are computationally efficient and satisfy such important properties as coordinate invariance, time invariance, and symmetry.
Abstract: Capturing live motion has gained considerable attention in computer animation as an important motion generation technique. Canned motion data are comprised of both position and orientation components. Although a great number of signal processing methods are available for manipulating position data, the majority of these methods cannot be generalized easily to orientation data due to the inherent nonlinearity of the orientation space. In this paper, we present a new scheme that enables us to apply a filter mask (or a convolution filter) to orientation data. The key idea is to transform the orientation data into their analogues in a vector space, to apply a filter mask on them, and then to transform the results back to the orientation space. This scheme gives time-domain filters for orientation data that are computationally efficient and satisfy such important properties as coordinate invariance, time invariance and symmetry. Experimental results indicate that our scheme is useful for various purposes, including smoothing and sharpening.
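The transform/filter/transform-back structure described above can be sketched with unit quaternions: map each neighbor into the tangent space at the current sample via the log map, apply the mask there, and map the result back with the exp map. This is an illustration of the structure, not the paper's exact formulation; the boundary handling (clamping) is an assumption.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def qlog(q):
    """Log map: unit quaternion -> half-angle rotation vector in R^3."""
    s = np.linalg.norm(q[1:])
    if s < 1e-12:
        return np.zeros(3)
    return np.arccos(np.clip(q[0], -1.0, 1.0)) * q[1:] / s

def qexp(v):
    """Exp map: rotation vector in R^3 -> unit quaternion."""
    t = np.linalg.norm(v)
    if t < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.concatenate(([np.cos(t)], np.sin(t) * v / t))

def filter_orientations(quats, mask):
    """Apply a convolution mask to orientation data in tangent space."""
    half = len(mask) // 2
    out = []
    for i in range(len(quats)):
        v = np.zeros(3)
        for k, w in enumerate(mask):
            j = min(max(i + k - half, 0), len(quats) - 1)  # clamp ends
            v += w * qlog(qmul(qconj(quats[i]), quats[j]))
        out.append(qmul(quats[i], qexp(v)))
    return out

# A constant orientation sequence passes through any mask unchanged,
# since every relative log is the zero vector.
q = np.array([np.cos(0.3), 0.0, 0.0, np.sin(0.3)])  # rotation about z
out = filter_orientations([q] * 5, [0.25, 0.5, 0.25])
```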

107 citations


Journal ArticleDOI
James Abello1, Jeffrey L. Korn2
TL;DR: The main algorithmic and visualization techniques behind MGV (Massive Graph Visualizer), an integrated visualization and exploration system for massive multidigraph navigation, are highlighted, and several possible application scenarios are pointed out.
Abstract: Describes MGV (Massive Graph Visualizer), an integrated visualization and exploration system for massive multidigraph navigation. It adheres to the visual information-seeking mantra: overview first, zoom and filter, then details on demand. MGV's only assumption is that the vertex set of the underlying digraph corresponds to the set of leaves of a pre-determined tree T. MGV builds an out-of-core graph hierarchy and provides mechanisms to plug in arbitrary visual representations for each graph hierarchy slice. Navigation from one level to another of the hierarchy corresponds to the implementation of a drill-down interface. In order to provide the user with navigation control and interactive response, MGV incorporates a number of visualization techniques like interactive pixel-oriented 2D and 3D maps, statistical displays, color maps, multi-linked views and a zoomable label-based interface. This makes the association of geographic information and graph data very natural. To automate the creation of the vertex set hierarchy for MGV, we use the notion of graph sketches. They can be thought of as visual indices that guide the navigation of a multigraph too large to fit on the available display. MGV follows the client-server paradigm and it is implemented in C and Java-3D. We highlight the main algorithmic and visualization techniques behind the tools and, along the way, point out several possible application scenarios. Our techniques are being applied to multigraphs defined on vertex sets with sizes ranging from 100 million to 250 million vertices.

102 citations


Journal ArticleDOI
TL;DR: A practical method for creating implicit surfaces from polygonal models that produces high-quality results for complex surfaces based on a variational interpolation technique (the three-dimensional generalization of thin-plate interpolation) is introduced.
Abstract: Implicit surfaces are used for a number of tasks in computer graphics, including modeling soft or organic objects, morphing, collision detection, and constructive solid geometry. Although operating on implicit surfaces is usually straightforward, creating them is not. We introduce a practical method for creating implicit surfaces from polygonal models that produces high-quality results for complex surfaces. Whereas much previous work in implicit surfaces has been done with primitives such as "blobbies," we use implicit surfaces based on a variational interpolation technique (the three-dimensional generalization of thin-plate interpolation). Given a polygonal mesh, we convert the data to a volumetric representation to use as a guide for creating the implicit surface iteratively. We begin by seeding the surface with a number of constraint points through which the surface must pass. Iteratively, additional constraints are added; the resulting surfaces are evaluated, and the errors guide the placement of subsequent constraints. We have applied our method successfully to a variety of polygonal meshes and consider it to be robust.
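The constraint-solving core is radial basis function interpolation: solve for weights so the implicit function passes through the constraint points. The sketch below is 2D for brevity and substitutes a Gaussian kernel (positive definite, so the system is always solvable) for the paper's thin-plate/variational kernel, and omits the low-degree polynomial term the full method carries.

```python
import numpy as np

def phi(r, eps=1.0):
    # Gaussian kernel, substituted here for the thin-plate kernel.
    return np.exp(-(eps * r) ** 2)

def fit_implicit(centers, values):
    """Solve for weights w so that f(c_i) = values[i], where
    f(x) = sum_i w_i * phi(|x - c_i|)."""
    d = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    return np.linalg.solve(phi(d), values)

def evaluate(x, centers, w):
    return w @ phi(np.linalg.norm(centers - x, axis=-1))

# Constraints: 4 points on a unit circle (f = 0, the surface) plus the
# center seeded with f = 1, so f = 0 is the circle's implicit curve.
theta = np.linspace(0, 2 * np.pi, 4, endpoint=False)
centers = np.vstack([np.column_stack([np.cos(theta), np.sin(theta)]),
                     [[0.0, 0.0]]])
values = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
w = fit_implicit(centers, values)
```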

Journal ArticleDOI
TL;DR: A flexible framework for visual data mining which combines analytical and visual methods to achieve a better understanding of the information space is described, which provides several pre-processing methods for unstructured information spaces, such as a flexible hierarchy generation with user-controlled refinement.
Abstract: The exploration of heterogenous information spaces requires suitable mining methods as well as effective visual interfaces. Most of the existing systems concentrate either on mining algorithms or on visualization techniques. This paper describes a flexible framework for visual data mining which combines analytical and visual methods to achieve a better understanding of the information space. We provide several pre-processing methods for unstructured information spaces, such as a flexible hierarchy generation with user-controlled refinement. Moreover, we develop new visualization techniques, including an intuitive focus+context technique to visualize complex hierarchical graphs. A special feature of our system is a new paradigm for visualizing information structures within their frame of reference.

Journal ArticleDOI
TL;DR: Based on newly developed adaptive algorithms in the spatial-temporal domain, an implementation of the new approach is developed to efficiently deliver high-quality motion blurred images in general computer graphics production environments.
Abstract: A framework for discussing the motion blur image generation process is formulated. Previous work is studied in the context of this framework. Due to the implicit assumptions on low temporal frequencies in most motion blur algorithms, issues involved in large screen space movements and fast illumination changes in time have not been adequately addressed so far. A new approach that does not make these assumptions is introduced to solve the spatial-temporal geometric and shading aliasing problems separately. Based on newly developed adaptive algorithms in the spatial-temporal domain, an implementation of the new approach is developed to efficiently deliver high-quality motion blurred images in general computer graphics production environments.

Journal ArticleDOI
TL;DR: An anatomically-based approach to muscle modeling is proposed that provides models for human musculature based on real morphological structures; these models give a good visual description of muscle form and action and represent a sound base from which to make further progress toward medically accurate simulation of human bodies.
Abstract: Muscle simulation is an important component of human modeling, but there have been few attempts to demonstrate, in 3D and in an anatomically correct way, the structures of muscles and the way in which these change during motion. This paper proposes an anatomically-based approach to muscle modeling that attempts to provide models for human musculature based on the real morphological structures. These models provide a good visual description of muscle form and action and represent a sound base from which to produce further progress toward medically accurate simulation of human bodies. Three major problems have been addressed: geometric modeling, deformation and texture. To allow for the wide variety of deformable muscle shapes encountered in the body, while retaining as many of their common properties as possible, the geometric models are classified into several categories according to the characteristics of their structures and actions. Within each category, the model for each muscle has an efficient structural form, created using anatomical data. Deformation is also performed on the basis of the categories, with all models within each category sharing the same deformation scheme. The categories cover both general and special cases. The result is an efficient, anatomically accurate muscle representation that is specifically designed to accommodate the particular form of deformation exhibited by each individual muscle. Interactions between muscles are also taken into account, to avoid penetration occurring between adjacent muscles in our model. To provide a suitable visual effect, the muscle texture is generated directly on the model surface. The textures and colors are obtained from anatomical data via image analysis. Some results are presented on the geometric modeling, the deformation and the texture of muscles related to the lower limb.

Journal ArticleDOI
TL;DR: The general characteristics of a number of volumetric display system configurations are examined, with emphasis given to issues relating to the predictability of the volume within which images are depicted.
Abstract: A diverse range of volumetric display systems has been proposed during the last 90 years. In order to facilitate a comparison between the various approaches, the three subsystems that comprise displays of this type are identified and are used as a basis for a classification scheme. The general characteristics of a number of volumetric display system configurations are examined, with emphasis given to issues relating to the predictability of the volume within which images are depicted. Key characteristics of this image space are identified and the complex manner in which they depend upon the display unit subsystems is illustrated for several current volumetric display techniques.

Journal ArticleDOI
TL;DR: This work proposes a combination of traditional bar charts and x-y plots, which allows the visualization of large amounts of data with both categorical and numerical dimensions, and uses the pixels within the bars to present the detailed information of the data records.
Abstract: Simple presentation graphics are intuitive and easy-to-use, but only show highly aggregated data. Bar charts, for example, only show a rather small number of data values and x-y-plots often have a high degree of overlap. Presentation techniques are often chosen depending on the considered data type, bar charts, for example, are used for categorical data and x-y plots are used for numerical data. We propose a combination of traditional bar charts and x-y-plots, which allows the visualization of large amounts of data with categorical and numerical data. The categorical data dimensions are used for the partitioning into the bars and the numerical data dimensions are used for the ordering arrangement within the bars. The basic idea is to use the pixels within the bars to present the detailed information of the data records. Our so-called pixel bar charts retain the intuitiveness of traditional bar charts while applying the principle of x-y charts within the bars. In many applications, a natural hierarchy is defined on the categorical data dimensions such as time, region, or product type. In hierarchical pixel bar charts, the hierarchy is exploited to split the bars for selected portions of the hierarchy. Our application to a number of real-world e-business and Web services data sets shows the wide applicability and usefulness of our new idea.
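The layout rule above (categorical partitioning into bars, numerical ordering within bars, one pixel per record) can be sketched directly. The column-by-column fill and the record keys are illustrative assumptions; rendering and the hierarchical bar splitting are omitted.

```python
import math
from collections import defaultdict

def pixel_bar_chart(records, cat_key, order_key, color_key, bar_width=4):
    """Partition records into bars by a categorical attribute, order
    them inside each bar by a numerical attribute, and lay them out one
    pixel per record, column by column. The per-pixel values returned
    are the attribute that would drive each pixel's color."""
    bars = defaultdict(list)
    for r in records:
        bars[r[cat_key]].append(r)
    chart = {}
    for cat, rows in bars.items():
        rows.sort(key=lambda r: r[order_key])
        height = math.ceil(len(rows) / bar_width)
        chart[cat] = [[rows[col * height + i][color_key]
                       for i in range(height)
                       if col * height + i < len(rows)]
                      for col in range(bar_width)]
    return chart

# Eight records in one category, ordered by "v" and laid out in a
# 4-column bar of height 2.
records = [{"cat": "A", "v": v} for v in [3, 1, 2, 4, 8, 6, 5, 7]]
chart = pixel_bar_chart(records, "cat", "v", "v")
```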

Journal ArticleDOI
TL;DR: The normalization of color values by mapping RGB values to the CIE L*u*v* color space is presented, and the combined effects of the two color spaces and the opacity transfer functions are empirically compared using source data from the Visible Human project.
Abstract: Photographic volumes present a unique, interesting challenge for volume rendering. In photographic volumes, the voxel color is pre-determined, making color selection through transfer functions unnecessary. However, photographic data does not contain a clear mapping from the multi-valued color values to a scalar density or opacity, making projection and compositing much more difficult than with traditional volumes. Moreover, because of the nonlinear nature of color spaces, there is no meaningful norm for the multi-valued voxels. Thus, the individual color channels of photographic data must be treated as incomparable data tuples rather than as vector values. Traditional differential geometric tools, such as intensity gradients, density and Laplacians, are distorted by the nonlinear non-orthonormal color spaces that are the domain of the voxel values. We have developed different techniques for managing these issues while directly rendering volumes from photographic data. We present and justify the normalization of color values by mapping RGB values to the CIE L*u*v* color space. We explore and compare different opacity transfer functions that map three-channel color values to opacity. We apply these many-to-one mappings to the original RGB values as well as to the voxels after conversion to L*u*v* space. Direct rendering using transfer functions allows us to explore photographic volumes without having to commit to an a-priori segmentation that might mask fine variations of interest. We empirically compare the combined effects of each of the two color spaces with our opacity transfer functions using source data from the Visible Human project.
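The RGB-to-L*u*v* mapping is standard colorimetry: sRGB is linearized, converted to CIE XYZ (D65), and then to L*u*v* relative to the reference white. A minimal sketch in plain Python; the paper's pipeline and voxel handling are of course richer than this per-color conversion.

```python
def srgb_to_linear(c):
    """Undo the sRGB transfer function for one channel in [0, 1]."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def rgb_to_xyz(r, g, b):
    """Linearize sRGB and apply the D65 RGB-to-XYZ matrix."""
    r, g, b = (srgb_to_linear(c) for c in (r, g, b))
    return (0.4124 * r + 0.3576 * g + 0.1805 * b,
            0.2126 * r + 0.7152 * g + 0.0722 * b,
            0.0193 * r + 0.1192 * g + 0.9505 * b)

def _uv(X, Y, Z):
    d = X + 15 * Y + 3 * Z
    return (0.0, 0.0) if d == 0 else (4 * X / d, 9 * Y / d)

XN, YN, ZN = rgb_to_xyz(1.0, 1.0, 1.0)   # reference white (D65)
UN, VN = _uv(XN, YN, ZN)

def rgb_to_luv(r, g, b):
    X, Y, Z = rgb_to_xyz(r, g, b)
    y = Y / YN
    L = 116 * y ** (1 / 3) - 16 if y > (6 / 29) ** 3 else (29 / 3) ** 3 * y
    u, v = _uv(X, Y, Z)
    return L, 13 * L * (u - UN), 13 * L * (v - VN)
```

White maps to (100, 0, 0) and black to (0, 0, 0), so achromatic voxels land on the L* axis, which is what makes the space useful as a norm-friendly domain for the opacity transfer functions.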

Journal ArticleDOI
TL;DR: A palette-based decoding technique and an adaptive bit allocation scheme are developed to fully utilize the texturing capability of a commodity 3D graphics card to permit highly interactive exploration of time-varying scalar volume data.
Abstract: We present a scalable volume rendering technique that exploits lossy compression and low-cost commodity hardware to permit highly interactive exploration of time-varying scalar volume data. A palette-based decoding technique and an adaptive bit allocation scheme are developed to fully utilize the texturing capability of a commodity 3D graphics card. Using a single PC equipped with a modest amount of memory, a texture-capable graphics card and an inexpensive disk array, we are able to render hundreds of time steps of regularly gridded volume data (up to 42 million voxels each time step) at interactive rates. By clustering multiple PCs together, we demonstrate the data-size scalability of our method. The frame rates achieved make possible the interactive exploration of data in the temporal, spatial and transfer function domains. A comprehensive evaluation of our method based on experimental studies using data sets (up to 134 million voxels per time step) from turbulence flow simulations is also presented.

Journal ArticleDOI
TL;DR: Vector quantization is shown to be an effective compression technique for triangle mesh vertex data; it can also be used for complexity reduction, allowing an encoded set of vertices to be both decoded and transformed in approximately 60 percent of the time required by a conventional method without compression.
Abstract: Rendering geometrically detailed 3D models requires the transfer and processing of large amounts of triangle and vertex geometry data. Compressing the geometry bit stream can reduce bandwidth requirements and alleviate transmission bottlenecks. In this paper, we show vector quantization to be an effective compression technique for triangle mesh vertex data. We present predictive vector quantization methods using unstructured code books as well as a product code pyramid vector quantizer. The technique is compatible with most existing mesh connectivity encoding schemes and does not require the use of entropy coding. In addition to compression, our vector quantization scheme can be used for complexity reduction by accelerating the computation of linear vertex transformations. Consequently, an encoded set of vertices can be both decoded and transformed in approximately 60 percent of the time required by a conventional method without compression.
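The quantize/update loop at the heart of codebook training can be sketched with plain k-means; the paper uses predictive vector quantization with unstructured and pyramid codebooks, so this NumPy version (with deterministic initialization for the demo) only illustrates the core mechanism of encoding vertices as codeword indices.

```python
import numpy as np

def encode(vectors, code):
    """Map each vector to the index of its nearest codeword."""
    d2 = ((vectors[:, None] - code[None]) ** 2).sum(axis=-1)
    return np.argmin(d2, axis=1)

def train_codebook(vectors, k, iters=10):
    """Plain k-means codebook training: assign, then re-center."""
    code = vectors[:k].copy()            # deterministic init for the demo
    for _ in range(iters):
        idx = encode(vectors, code)
        for j in range(k):
            members = vectors[idx == j]
            if len(members):
                code[j] = members.mean(axis=0)
    return code

# Two tight vertex clusters compress to a 2-entry codebook; decoding
# each vertex as its codeword keeps the quantization error small.
verts = np.array([[0.0, 0.0], [0.1, 0.0], [10.0, 10.0], [10.1, 10.0]])
code = train_codebook(verts, k=2)
decoded = code[encode(verts, code)]
```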

Journal ArticleDOI
TL;DR: A new method for the visualization of state transition systems is presented in which visual information is reduced by clustering nodes, forming a tree structure of related clusters; the resulting visualization enables the user to relate features in the visualization of the state transition graph to semantic concepts in the corresponding process and vice versa.
Abstract: A new method for the visualization of state transition systems is presented. Visual information is reduced by clustering nodes, forming a tree structure of related clusters. This structure is visualized in three dimensions with concepts from cone trees and emphasis on symmetry. A number of interactive options are provided as well, allowing the user to superimpose detail information on this tree structure. The resulting visualization enables the user to relate features in the visualization of the state transition graph to semantic concepts in the corresponding process and vice versa.

Journal ArticleDOI
TL;DR: The described software architecture is implemented using the Virtual Rendering System, which currently wraps the functionality of the OpenGL, Radiance, POV Ray and RenderMan systems.
Abstract: Describes the software architecture of a rendering system that follows a pragmatic approach to integrating and bundling the power of different low-level rendering systems within an object-oriented framework. The generic rendering system provides higher-level abstractions to existing rendering systems and serves as a framework for developing new rendering techniques. It wraps the functionality of several widely-used rendering systems, defines a unified object-oriented application programming interface and provides an extensible, customizable apparatus for evaluating and interpreting hierarchical scene information. As a fundamental property, individual features of a specific rendering system can be integrated into the generic rendering system in a transparent way. The system is based on a state machine, called an "engine", which operates on "rendering components". Four major categories of rendering components constitute the generic rendering system: "shapes" represent geometries, "attributes" specify properties assigned to geometries and scenes, "handlers" encapsulate rendering algorithms, and "techniques" represent evaluation strategies for rendering components. As a proof of concept, we have implemented the described software architecture using the Virtual Rendering System, which currently wraps the functionality of the OpenGL, Radiance, POV Ray and RenderMan systems.

Journal ArticleDOI
TL;DR: Geometric analysis suggests that image shifting and image scaling may be less appropriate than the other methods for interactive, stereo HTDs; some of the artifacts they generate are anecdotally linked to exceeding the perceptual limitations of human vision.
Abstract: This paper concerns stereoscopic virtual reality displays in which the head is tracked and the display is stationary, attached to a desk, tabletop or wall. These are called stereoscopic HTDs (head-tracked displays). Stereoscopic displays render two perspective views of a scene, each of which is seen by one eye of the user. Ideally, the user's natural visual system combines the stereo image pair into a single, 3D perceived image. Unfortunately, users often have difficulty fusing the stereo image pair. Researchers use a number of software techniques to reduce fusion problems. This paper geometrically examines and compares a number of these techniques and reaches the following conclusions: In interactive stereoscopic applications, the combination of view placement, scale, and either false eye separation or /spl alpha/-false eye separation can provide fusion control that is geometrically similar to image shifting and image scaling. However, in stereo HTDs, image shifting and image scaling also generate additional geometric artifacts that are not generated by the other methods. We anecdotally link some of these artifacts to exceeding the perceptual limitations of human vision. While formal perceptual studies are still needed, geometric analysis suggests that image shifting and image scaling may be less appropriate than the other methods for interactive, stereo HTDs.

Journal ArticleDOI
TL;DR: This paper presents a simple approach to capturing the appearance and structure of immersive scenes based on the imagery acquired with an omnidirectional video camera by augmenting the video sequence with pose information, and provides the end-user with the ability to index the video sequences spatially as opposed to temporally.
Abstract: This paper presents a simple approach to capturing the appearance and structure of immersive scenes based on the imagery acquired with an omnidirectional video camera. The scheme proceeds by combining techniques from structure-from-motion with ideas from image-based rendering. An interactive photogrammetric modeling scheme is used to recover the locations of a set of salient features in the scene (points and lines) from image measurements in a small set of keyframe images. The estimates obtained from this process are then used as a basis for estimating the position and orientation of the camera at every frame in the video clip. By augmenting the video sequence with pose information, we provide the end-user with the ability to index the video sequence spatially as opposed to temporally. This allows the user to explore the immersive scene by interactively selecting the desired viewpoint and viewing direction.
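The payoff of augmenting the video with pose information is that frames can be looked up by camera position instead of by timestamp. The sketch below is an illustrative stand-in, not the paper's code: a naive nearest-neighbor query over per-frame camera positions, which is the simplest form of the spatial index described.

```python
# Hedged sketch of spatial (rather than temporal) video indexing: given the
# estimated camera position for every frame, serve the frame captured
# nearest to a user-selected viewpoint.

def nearest_frame(poses, viewpoint):
    """poses: list of (frame_index, (x, y, z)) estimated camera positions.
    Returns the index of the frame whose camera was closest to viewpoint."""
    def sq_dist(position):
        return sum((a - b) ** 2 for a, b in zip(position, viewpoint))
    return min(poses, key=lambda fp: sq_dist(fp[1]))[0]
```

Since the camera is omnidirectional, any viewing direction is available within the chosen frame, so position alone is enough to index the sequence; a real system would replace the linear scan with a spatial data structure such as a k-d tree.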

Journal ArticleDOI
TL;DR: Performance experiments show that the second eye image can be produced approximately 45 percent faster than drawing the two images separately and a smooth stereoscopic visualization can be achieved at interactive frame rates using continuous multiresolution representation of height fields.
Abstract: Visualization of large geometric environments has always been an important problem of computer graphics. We present a framework for the stereoscopic view-dependent visualization of large scale terrain models. We use a quadtree based multiresolution representation for the terrain data. This structure is queried to obtain the view-dependent approximations of the terrain model at different levels of detail. In order not to lose depth information, which is crucial for the stereoscopic visualization, we make use of a different simplification criterion, namely, distance-based angular error threshold. We also present an algorithm for the construction of stereo pairs in order to speed up the view-dependent stereoscopic visualization. The approach we use is the simultaneous generation of the triangles for two stereo images using a single draw-list so that the view frustum culling and vertex activation is done only once for each frame. The cracking problem is solved using the dependency information stored for each vertex. We eliminate the popping artifacts that can occur while switching between different resolutions of the data using morphing. We implemented the proposed algorithms on personal computers and graphics workstations. Performance experiments show that the second eye image can be produced approximately 45 percent faster than drawing the two images separately and a smooth stereoscopic visualization can be achieved at interactive frame rates using continuous multiresolution representation of height fields.
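The source of the reported speedup is that level-of-detail selection runs once per frame and its output is drawn twice. The sketch below is a simplified stand-in (the node list, error metric, and midpoint-eye choice are assumptions, not the paper's quadtree implementation), but it shows the two ingredients: a distance-based angular error threshold, and a single draw-list shared by both stereo images.

```python
# Hedged sketch of the single-draw-list idea: LOD selection (here, a
# distance-based angular error test) runs once against a midpoint eye,
# and the resulting list is rendered with both eye transforms.

import math

def select_lod(nodes, eye, angular_threshold):
    """Keep a node if its geometric error subtends at most the threshold
    angle from the eye; a larger error would trigger refinement.
    nodes: list of (node_id, center_xyz, geometric_error)."""
    draw_list = []
    for node_id, center, err in nodes:
        dist = math.dist(center, eye)
        if math.atan2(err, dist) <= angular_threshold:
            draw_list.append(node_id)
    return draw_list

def render_stereo(nodes, left_eye, right_eye, angular_threshold):
    # culling / vertex activation once per frame, from the midpoint eye
    mid = tuple((l + r) / 2 for l, r in zip(left_eye, right_eye))
    draw_list = select_lod(nodes, mid, angular_threshold)
    # the same triangle list is drawn twice, under two eye transforms
    return [("L", draw_list), ("R", draw_list)]
```

Because the eye separation is tiny relative to terrain distances, one selection from the midpoint eye is a safe approximation for both views, which is why generating the second eye image costs far less than a full second pass.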

Journal ArticleDOI
TL;DR: A novel querying paradigm is presented which is based on usage of 3D interfaces exploiting navigation and editing of3D virtual environments and their related query paradigms and develops on a user test on retrieval efficiency and effectiveness, as well as on an evaluation of users' satisfaction.
Abstract: Image databases are widely exploited in a number of different contexts, ranging from history of art, through medicine, to education. Existing querying paradigms are based either on the usage of textual strings, for high-level semantic queries or on 2D visual examples for the expression of perceptual queries. Semantic queries require manual annotation of the database images. Instead, perceptual queries only require that image analysis is performed on the database images in order to extract salient perceptual features that are matched with those of the example. However, usage of 2D examples is generally inadequate as effective authoring of query images, attaining a realistic reproduction of complex scenes, needs manual editing and sketching ability. Investigation of new querying paradigms is therefore an important, yet still marginally investigated, factor for the success of content-based image retrieval. In this paper, a novel querying paradigm is presented which is based on usage of 3D interfaces exploiting navigation and editing of 3D virtual environments. Query images are obtained by taking a snapshot of the framed environment and by using the snapshot as an example to retrieve similar database images. A comparative analysis is carried out between the usage of 3D and 2D interfaces and their related query paradigms. This analysis develops on a user test on retrieval efficiency and effectiveness, as well as on an evaluation of users' satisfaction.

Journal ArticleDOI
TL;DR: The aim of this paper is to contribute to the standardization process of multiplatform synthetic actor programs or libraries by demonstrating how the proposed data structure is used to generate motion by means of two different applications.
Abstract: We present a data structure specially geared toward the definition and management of synthetic actors in real-time computer graphics. The relation between our proposed data structure and the Silicon Graphics API Performer® makes its implementation possible on a low-cost real-time platform thanks to current accelerating cards. We demonstrate how our data structure is used to generate motion by means of two different applications. Both of them make use of direct and inverse kinematics and may use motion capture. ARTgraph is a development environment devoted to the creation of high-quality real-time 3D-graphics applications (basically, 3D games) and the ALVW system is a general platform that provides and coordinates a sensing-analysis-acting loop to provide behavior for synthetic actors in their own scenario. The aim of this paper is to contribute to the standardization process of multiplatform synthetic actor programs or libraries.
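At the core of any such actor data structure is a joint hierarchy evaluated by direct (forward) kinematics. The sketch below is a deliberately simplified 2D stand-in, not the paper's Performer-based structure; the `Joint` class and its fields are assumptions chosen only to illustrate how angles accumulate down a kinematic chain.

```python
# Illustrative 2D joint hierarchy with direct (forward) kinematics:
# each joint stores a local rotation and a segment length, and world
# positions are computed by accumulating angles down the chain.

import math

class Joint:
    def __init__(self, name, length, angle=0.0):
        self.name, self.length, self.angle = name, length, angle
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def world_positions(self, origin=(0.0, 0.0), parent_angle=0.0):
        """Return the world-space end position of every joint in the
        subtree rooted at this joint."""
        a = parent_angle + self.angle        # accumulate local rotation
        end = (origin[0] + self.length * math.cos(a),
               origin[1] + self.length * math.sin(a))
        out = {self.name: end}
        for child in self.children:
            out.update(child.world_positions(end, a))
        return out
```

Inverse kinematics and motion capture, as used in ARTgraph and ALVW, would drive the `angle` fields of such a hierarchy rather than the positions directly.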

Journal ArticleDOI
Dan Gordon1
TL;DR: The floating column algorithm is a new method for the shaded rendering of function surfaces that requires no patching and uses only constant space, so it can be implemented on graphics cards and hand-held devices and it is suitable for multiprocessor workstations and clusters.
Abstract: The floating column algorithm is a new method for the shaded rendering of function surfaces. Derived from the monochromatic floating horizon algorithm, it uses the partial derivatives of the function to compute surface normals, thus enabling intensity or normal-interpolation shading. Current rendering methods require tiling the surface with patches, so higher-resolution patching is required for zoom-in views, interactive modification or time-varying surfaces. The new algorithm requires no patching and uses only constant space, so it can be implemented on graphics cards and hand-held devices. Each pixel column is displayed independently of the others, and this "independent column mode" makes the algorithm inherently parallel in the image space, so it is suitable for multiprocessor workstations and clusters and it is scalable in the resolution size. Furthermore, the sampling frequency of the surface can be controlled locally, matching local surface features, distance or artifact elimination requirements. Space-efficient super-sampling for anti-aliasing is also possible. The new algorithm, which allows orthogonal and perspective projections, produces pixel-wide strips which can be displayed in software or hardware. Various extensions are described, including shadows and texture mapping. These properties, together with the algorithm's parallelism, make it potentially useful for the real-time display of functionally-defined textured terrains and the animated display of time-varying surfaces.
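The constant-space, per-column nature of the algorithm can be illustrated with a single-column renderer. The sketch below is a naive stand-in, not the paper's method: it uses a trivial orthographic projection and a fixed sampling step, but it shows the two defining ingredients, a single "floating horizon" value per column and shading from the function's partial derivatives.

```python
# Simplified single-column renderer in the spirit of the floating column
# algorithm: one column is processed independently, with constant extra
# space (the floating-horizon row), and the partial derivatives fx, fy
# supply the surface normal for diffuse shading.

import math

def shade_column(f, fx, fy, x, y_near, y_far, steps, height, light=(0, 0, 1)):
    """Render one pixel column of `height` rows for the surface z = f(x, y),
    marching y from near to far. Returns intensities (0.0 = background)."""
    column = [0.0] * height
    horizon = -1                       # highest row filled so far
    for i in range(steps + 1):
        y = y_near + (y_far - y_near) * i / steps
        row = int(f(x, y) * (height - 1))          # naive projection
        row = max(0, min(height - 1, row))
        if row > horizon:                          # visible above horizon
            # unnormalized surface normal of z = f(x, y) is (-fx, -fy, 1)
            n = (-fx(x, y), -fy(x, y), 1.0)
            norm = math.sqrt(sum(c * c for c in n))
            intensity = max(0.0, sum(a * b for a, b in zip(n, light)) / norm)
            for r in range(horizon + 1, row + 1):
                column[r] = intensity
            horizon = row
    return column
```

Because each column depends on nothing but `f` and its derivatives along one line of sight, columns can be distributed freely across processors, which is the "independent column mode" parallelism the abstract describes.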

Journal ArticleDOI
TL;DR: This work proposes a generalization of classic implicit surfaces which are able to produce a larger variety of particle coatings, from rigid solids to highly deformable objects and even wave propagation and fluid flow coatings, thus handling all these disparate categories with the same paradigm.
Abstract: Physically-based particle models are used by an increasing community of computer graphics researchers and users in order to produce a large variety of dynamic motions. Among all of the methods dedicated to the coating of point models, the implicit surface method has proven to be one of the most powerful. However, for the visualization of a wide variety of objects ranging from smoke to solids, the time-independent coating of traditional implicit surfaces appears to be dynamically too poor and restrictive. We propose a generalization of classic implicit surfaces which are able to produce a larger variety of particle coatings, from rigid solids to highly deformable objects and even wave propagation and fluid flow coatings, thus handling all these disparate categories with the same paradigm. The method consists of extracting the coating from a field function which is not predetermined but calculated as the modulation of a dynamic discrete medium by particles. For these reasons, the coating behaviors present higher-order dynamic behaviors closely correlated with the dynamics of skeleton particles.
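The classic construction being generalized here can be stated compactly: the coating is an iso-surface of a field obtained by summing a smooth kernel centred on each skeleton particle. The sketch below uses an assumed Wyvill-style kernel and iso-value, not the paper's dynamic discrete-medium field, purely to make the baseline concrete.

```python
# Minimal sketch of a particle-coated implicit surface: the field is a sum
# of compactly supported kernels centred on the skeleton particles, and the
# coating is the set of points where the field reaches an iso-value.
# The kernel (1 - r^2)^3 and iso = 0.5 are illustrative assumptions.

def field(particles, p, radius):
    """Sum of smooth, compactly supported kernels; points are (x, y, z)."""
    total = 0.0
    for center in particles:
        r2 = sum((a - b) ** 2 for a, b in zip(center, p)) / (radius * radius)
        if r2 < 1.0:                 # particles beyond `radius` contribute 0
            t = 1.0 - r2
            total += t * t * t       # smooth falloff from 1 at the particle
    return total

def inside_coating(particles, p, radius, iso=0.5):
    return field(particles, p, radius) >= iso
```

In this time-independent form, the coating follows the particle positions but nothing else; the paper's generalization replaces the fixed kernel sum with a field computed from a dynamic discrete medium modulated by the particles, so the coating also inherits higher-order dynamics such as wave propagation.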