
Showing papers in "IEEE Transactions on Visualization and Computer Graphics in 1998"


Journal ArticleDOI
TL;DR: This work develops and analyzes a method, based on bounding-volume hierarchies, for efficient collision detection for objects moving within highly complex environments, and provides experimental evidence showing that this approach yields substantially faster collision detection than previous methods.
Abstract: Collision detection is of paramount importance for many applications in computer graphics and visualization. Typically, the input to a collision detection algorithm is a large number of geometric objects comprising an environment, together with a set of objects moving within the environment. In addition to determining accurately the contacts that occur between pairs of objects, one needs also to do so at real-time rates. Applications such as haptic force feedback can require over 1000 collision queries per second. We develop and analyze a method, based on bounding-volume hierarchies, for efficient collision detection for objects moving within highly complex environments. Our choice of bounding volume is to use a discrete orientation polytope (k-DOP), a convex polytope whose facets are determined by halfspaces whose outward normals come from a small fixed set of k orientations. We compare a variety of methods for constructing hierarchies (BV-trees) of bounding k-DOPs. Further, we propose algorithms for maintaining an effective BV-tree of k-DOPs for moving objects, as they rotate, and for performing fast collision detection using BV-trees of the moving objects and of the environment. Our algorithms have been implemented and tested. We provide experimental evidence showing that our approach yields substantially faster collision detection than previous methods.
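To make the bounding-volume idea concrete, here is a minimal sketch of a k-DOP (not the authors' code; `build_kdop`, `kdops_overlap`, and the 6-direction set are our own illustration): project a point set onto k/2 fixed directions and keep a [min, max] interval per direction. Two k-DOPs can intersect only if the intervals overlap along every direction, so a single disjoint axis rejects the pair.

```python
# 6-DOP direction set (the axis-aligned box); richer fixed sets with
# diagonal normals give 14-, 18-, or 26-DOPs and tighter volumes.
DIRS = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

def build_kdop(points):
    """Return a list of (lo, hi) projection intervals, one per direction."""
    intervals = []
    for dx, dy, dz in DIRS:
        projs = [x * dx + y * dy + z * dz for x, y, z in points]
        intervals.append((min(projs), max(projs)))
    return intervals

def kdops_overlap(a, b):
    """k-DOPs can overlap only if every pair of slab intervals overlaps."""
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(a, b))
```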

941 citations


Journal ArticleDOI
TL;DR: The author describes how the use of volumetric textures is well suited to complex repetitive scenes containing forests, foliage, grass, hair, or fur, using a single ray per pixel.
Abstract: Complex repetitive scenes containing forests, foliage, grass, hair, or fur are challenging for common modeling and rendering tools. The amount of data, the tediousness of modeling and animation tasks, and the cost of realistic rendering have caused such scenes to see only limited use, even in high-end productions. The author describes how the use of volumetric textures is well suited to such scenes. These primitives can greatly simplify modeling and animation tasks. More importantly, they can be very efficiently rendered using ray tracing, with few aliasing artifacts. The main idea, initially introduced by Kajiya and Kay (1989), is to represent a pattern of 3D geometry in a reference volume that is tiled over an underlying surface, much like a regular 2D texture. In our contribution, the mapping is independent of the mesh subdivision, the pattern can contain any kind of shape, and it is prefiltered at different scales, as for MIP-mapping. Although the model encoding is volumetric, the rendering method differs greatly from traditional volume rendering. A volumetric texture only exists in the neighborhood of a surface, and the repeated instances (called texels) of the reference volume are spatially deformed. Furthermore, each voxel of the reference volume contains a key feature which controls the reflectance function that represents aggregate intravoxel geometry. This allows for ray tracing of highly complex scenes with very few aliasing artifacts, using a single ray per pixel (for the part of the scene using the volumetric texture representation). The major technical considerations of our method lie in the ray-path determination and in the specification of the reflectance function.

193 citations


Journal ArticleDOI
TL;DR: This work presents a new method for synthesizing novel views of a 3D scene from two or three reference images in full correspondence through the use and manipulation of an algebraic entity, termed the "trilinear tensor", that links point correspondences across three images.
Abstract: Presents a new method for synthesizing novel views of a 3D scene from two or three reference images in full correspondence. The core of this work is the use and manipulation of an algebraic entity, termed the "trilinear tensor", that links point correspondences across three images. For a given virtual camera position and orientation, a new trilinear tensor can be computed based on the original tensor of the reference images. The desired view can then be created using this new trilinear tensor and point correspondences across two of the reference images.

181 citations


Journal ArticleDOI
TL;DR: Experimental results indicating that a viewer's perception of motion characteristics is affected by the geometric model used for rendering are reported on.
Abstract: Human figures have been animated using a variety of geometric models, including stick figures, polygonal models and NURBS-based models with muscles, flexible skin or clothing. This paper reports on experimental results indicating that a viewer's perception of motion characteristics is affected by the geometric model used for rendering. Subjects were shown a series of paired motion sequences and asked if the two motions in each pair were the same or different. The motion sequences in each pair were rendered using the same geometric model. For the three types of motion variation tested, sensitivity scores indicate that subjects were better able to observe changes with the polygonal model than they were with the stick-figure model.

176 citations


Journal ArticleDOI
TL;DR: The Information Mural is a two-dimensional, reduced representation of an entire information space that fits entirely within a display window or screen; it creates a miniature version of the information space using visual attributes, such as gray-scale shading, intensity, color, and pixel size, along with antialiased compression techniques.
Abstract: Information visualizations must allow users to browse information spaces and focus quickly on items of interest. Being able to see some representation of the entire information space provides an initial gestalt overview and gives context to support browsing and search tasks. However, the limited number of pixels on the screen constrains the information bandwidth and makes it difficult to completely display large information spaces. The Information Mural is a two-dimensional, reduced representation of an entire information space that fits entirely within a display window or screen. The Mural creates a miniature version of the information space using visual attributes, such as gray-scale shading, intensity, color, and pixel size, along with antialiased compression techniques. Information Murals can be used as stand-alone visualizations or in global navigational views. We have built several prototypes to demonstrate the use of Information Murals in visualization applications; subject matter for these views includes computer software, scientific data, text documents and geographic information.
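The accumulation idea behind such a mural can be sketched in a few lines (our own toy illustration, not the authors' code; `mural` and its bilinear weighting are assumptions): each data point deposits weight into a small pixel grid, split bilinearly between neighbouring bins, so dense regions darken smoothly instead of aliasing.

```python
def mural(points, width, height):
    """points: (x, y) pairs with 0 <= x, y <= 1. Returns normalized weights."""
    grid = [[0.0] * width for _ in range(height)]
    for x, y in points:
        fx, fy = x * (width - 1), y * (height - 1)
        ix, iy = int(fx), int(fy)
        ax, ay = fx - ix, fy - iy
        # Split the point's unit weight over the 2x2 neighbouring bins.
        for dy, wy in ((0, 1 - ay), (1, ay)):
            for dx, wx in ((0, 1 - ax), (1, ax)):
                if iy + dy < height and ix + dx < width:
                    grid[iy + dy][ix + dx] += wx * wy
    peak = max(max(row) for row in grid) or 1.0
    # Normalize to [0, 1]; a display step would map this to gray levels.
    return [[w / peak for w in row] for row in grid]
```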

160 citations


Journal ArticleDOI
TL;DR: A new approach to video-based augmented reality that avoids both camera calibration and Euclidean 3D measurements is described, which is readily implementable, imposes minimal computational and hardware requirements, and generates real-time and accurate video overlays even when the camera parameters vary dynamically.
Abstract: Camera calibration and the acquisition of Euclidean 3D measurements have so far been considered necessary requirements for overlaying three-dimensional graphical objects with live video. We describe a new approach to video-based augmented reality that avoids both requirements: it does not use any metric information about the calibration parameters of the camera or the 3D locations and dimensions of the environment's objects. The only requirement is the ability to track across frames at least four fiducial points that are specified by the user during system initialization and whose world coordinates are unknown. Our approach is based on the following observation: given a set of four or more noncoplanar 3D points, the projection of all points in the set can be computed as a linear combination of the projections of just four of the points. We exploit this observation by: tracking regions and color fiducial points at frame rate; and representing virtual objects in a non-Euclidean, affine frame of reference that allows their projection to be computed as a linear combination of the projection of the fiducial points. Experimental results on two augmented reality systems, one monitor-based and one head-mounted, demonstrate that the approach is readily implementable, imposes minimal computational and hardware requirements, and generates real-time and accurate video overlays even when the camera parameters vary dynamically.
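The linear-combination observation can be checked numerically (a sketch under the affine-camera assumption; `affine_coords` and `project_by_combination` are our own names, not the paper's code): express a point in the affine frame of four noncoplanar fiducials, then reproject it using only the fiducials' 2D projections.

```python
def affine_coords(x, basis):
    """Solve x = b0 + c1*(b1-b0) + c2*(b2-b0) + c3*(b3-b0) by Cramer's rule."""
    b0, b1, b2, b3 = basis
    e = [[b1[i] - b0[i], b2[i] - b0[i], b3[i] - b0[i]] for i in range(3)]
    r = [x[i] - b0[i] for i in range(3)]
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(e)
    cs = []
    for j in range(3):
        m = [row[:] for row in e]
        for i in range(3):
            m[i][j] = r[i]            # replace column j with the residual
        cs.append(det(m) / d)
    return cs

def project_by_combination(coeffs, fiducial_projs):
    """Reproject using only the 2D projections of the four fiducials."""
    p0, p1, p2, p3 = fiducial_projs
    c1, c2, c3 = coeffs
    return tuple(p0[i] + c1 * (p1[i] - p0[i]) + c2 * (p2[i] - p0[i])
                 + c3 * (p3[i] - p0[i]) for i in range(2))
```

Under any affine camera, the reprojection matches the direct projection exactly, which is what frees the system from metric calibration.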

157 citations


Journal ArticleDOI
TL;DR: This work looks for situations where piecewise linear or bilinear approximation destroys the local topology in the presence of nonlinear behavior, chooses an appropriate polynomial approximation in these areas, and visualizes the resulting topology.
Abstract: We present our results on the visualization of nonlinear vector field topology. The underlying mathematics is done in Clifford algebra, a system describing geometry by extending the usual vector space by a multiplication of vectors. We started with the observation that all known algorithms for vector field topology are based on piecewise linear or bilinear approximation, and that these methods destroy the local topology if nonlinear behavior is present. Our algorithm looks for such situations, chooses an appropriate polynomial approximation in these areas, and, finally, visualizes the topology. This overcomes the problem, and the algorithm is still very fast because we are using linear approximation outside these small but important areas. The paper contains a detailed description of the algorithm and a basic introduction to Clifford algebra.
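For background, the vector field topology methods the paper builds on classify first-order critical points by the eigenvalues of the field's Jacobian there; the following is that standard 2D classification (our illustration, not the paper's Clifford-algebra method).

```python
import cmath

def classify_critical_point(j):
    """j = [[a, b], [c, d]]: Jacobian of a 2D vector field at a zero."""
    (a, b), (c, d) = j
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    l1, l2 = (tr + disc) / 2, (tr - disc) / 2
    if abs(l1.imag) > 1e-12:                 # complex pair: rotation
        if abs(tr) < 1e-12:
            return "center"
        return "spiral source" if tr > 0 else "spiral sink"
    if l1.real * l2.real < 0:                # real eigenvalues, opposite signs
        return "saddle"
    return "source" if l1.real > 0 else "sink"
```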

144 citations


Journal ArticleDOI
TL;DR: The paper presents an algorithm, called UFLIC (Unsteady Flow LIC), to visualize vector data in unsteady flow fields by devising a new convolution algorithm that uses a time-accurate value scattering scheme to model the texture advection.
Abstract: New challenges on vector field visualization emerge as time dependent numerical simulations become ubiquitous in the field of computational fluid dynamics (CFD). To visualize data generated from these simulations, traditional techniques, such as displaying particle traces, can only reveal flow phenomena in preselected local regions and thus, are unable to track the evolution of global flow features over time. The paper presents an algorithm, called UFLIC (Unsteady Flow LIC), to visualize vector data in unsteady flow fields. Our algorithm extends a texture synthesis technique, called Line Integral Convolution (LIC), by devising a new convolution algorithm that uses a time-accurate value scattering scheme to model the texture advection. In addition, our algorithm maintains the coherence of the flow animation by successively updating the convolution results over time. Furthermore, we propose a parallel UFLIC algorithm that can achieve high load balancing for multiprocessor computers with shared memory architecture. We demonstrate the effectiveness of our new algorithm by presenting image snapshots from several CFD case studies.
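For orientation, here is a minimal steady-flow LIC of the kind UFLIC extends (a toy version of the baseline technique, not the time-accurate value-scattering scheme; `lic` and its parameters are our own): each output pixel averages a noise texture along the streamline through that pixel.

```python
def lic(vx, vy, noise, steps=8, h=0.5):
    """vx, vy, noise: row-major 2D grids; returns the convolved texture."""
    rows, cols = len(noise), len(noise[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            total, count = 0.0, 0
            for sign in (1.0, -1.0):            # integrate both directions
                x, y = float(j), float(i)
                for _ in range(steps):
                    ix, iy = int(round(x)), int(round(y))
                    if not (0 <= ix < cols and 0 <= iy < rows):
                        break
                    total += noise[iy][ix]
                    count += 1
                    u, v = vx[iy][ix], vy[iy][ix]
                    mag = (u * u + v * v) ** 0.5 or 1.0
                    x += sign * h * u / mag     # Euler step along the flow
                    y += sign * h * v / mag
            out[i][j] = total / max(count, 1)
    return out
```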

116 citations


Journal ArticleDOI
Peter Williams1, Nelson Max, C.M. Stein
TL;DR: A revision to an existing accurate visibility ordering algorithm is described, which includes a correction and a method for dramatically increasing its efficiency and hardware assisted projection and compositing are extended from tetrahedra to arbitrary convex polyhedra.
Abstract: This paper describes a volume rendering system for unstructured data, especially finite element data, that creates images with very high accuracy. The system will currently handle meshes whose cells are either linear or quadratic tetrahedra. Compromises or approximations are not introduced for the sake of efficiency. Whenever possible, exact mathematical solutions for the radiance integrals involved and for interpolation are used. The system will also handle meshes with mixed cell types: tetrahedra, bricks, prisms, wedges, and pyramids, but not with high accuracy. Accurate semi-transparent shaded isosurfaces may be embedded in the volume rendering. For very small cells, subpixel accumulation by splatting is used to avoid sampling error. A revision to an existing accurate visibility ordering algorithm is described, which includes a correction and a method for dramatically increasing its efficiency. Finally, hardware assisted projection and compositing are extended from tetrahedra to arbitrary convex polyhedra.

109 citations


Journal ArticleDOI
TL;DR: A method is presented that produces a hierarchy of triangle meshes for smoothly blending different levels of detail while ensuring that the simplified mesh approximates the original surface well.
Abstract: We present a method to produce a hierarchy of triangle meshes that can be used to blend different levels of detail in a smooth fashion. The algorithm produces a sequence of meshes M_0, M_1, M_2, ..., M_n, where each mesh M_i can be transformed to mesh M_{i+1} through a set of triangle-collapse operations. For each triangle, a function is generated that approximates the underlying surface in the area of the triangle, and this function serves as a basis for assigning a weight to the triangle in the ordering operation and for supplying the points to which the triangles are collapsed. The algorithm produces a limited number of intermediate meshes by selecting, at each step, a number of triangles that can be collapsed simultaneously. This technique allows us to view a triangulated surface model at varying levels of detail while ensuring that the simplified mesh approximates the original surface well.
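A single triangle-collapse step can be sketched as follows (our own toy illustration of the operation, not the paper's error-driven ordering; collapsing to the centroid stands in for the paper's function-supplied target point).

```python
def collapse_triangle(vertices, triangles, t):
    """Collapse triangle index t; returns (new_vertices, new_triangles)."""
    a, b, c = triangles[t]
    centroid = tuple(sum(vertices[v][k] for v in (a, b, c)) / 3.0
                     for k in range(3))
    vertices = vertices + [centroid]
    new_index = len(vertices) - 1
    remap = {a: new_index, b: new_index, c: new_index}
    out = []
    for tri in triangles:
        tri = tuple(remap.get(v, v) for v in tri)
        if len(set(tri)) == 3:          # drop triangles made degenerate
            out.append(tri)
    return vertices, out
```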

98 citations


Journal ArticleDOI
TL;DR: A technique is presented for line art rendering of scenes composed of freeform surfaces, creating a unified line art rendering method for both parametric and implicit forms and exposing a new horizon of special, parameterization-independent line art effects.
Abstract: A technique is presented for line art rendering of scenes composed of freeform surfaces. The line art that is created for parametric surfaces is practically intrinsic and is globally invariant to changes in the surface parameterization. This method is equally applicable for line art rendering of implicit forms, creating a unified line art rendering method for both parametric and implicit forms. This added flexibility exposes a new horizon of special, parameterization independent, line art effects. Moreover, the production of the line art illustrations can be combined with traditional rendering techniques such as transparency and texture mapping. Examples that demonstrate the capabilities of the proposed approach are presented for both the parametric and implicit forms.

Journal ArticleDOI
TL;DR: This work presents a new dynamic surface model based on the Catmull-Clark subdivision scheme, a popular technique for modeling complicated objects of arbitrary genus which can be interactively deformed by applying synthesized forces.
Abstract: Recursive subdivision schemes have been extensively used in computer graphics, computer-aided geometric design, and scientific visualization for modeling smooth surfaces of arbitrary topology. Recursive subdivision generates a visually pleasing smooth surface in the limit from an initial user-specified polygonal mesh through the repeated application of a fixed set of subdivision rules. We present a new dynamic surface model based on the Catmull-Clark subdivision scheme, a popular technique for modeling complicated objects of arbitrary genus. Our new dynamic surface model inherits the attractive properties of the Catmull-Clark subdivision scheme, as well as those of the physics-based models. This new model provides a direct and intuitive means of manipulating geometric shapes, and an efficient hierarchical approach for recovering complex shapes from large range and volume data sets using very few degrees of freedom (control vertices). We provide an analytic formulation and introduce the "physical" quantities required to develop the dynamic subdivision surface model which can be interactively deformed by applying synthesized forces. The governing dynamic differential equation is derived using Lagrangian mechanics and the finite element method. Our experiments demonstrate that this new dynamic model has a promising future in computer graphics, geometric shape design, and scientific visualization.
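For reference, the standard Catmull-Clark vertex rule underlying the model is small enough to state in code (background for the dynamic scheme, not the paper's physics layer; function names are ours): a vertex of valence n moves to (Q + 2R + (n - 3)S) / n, where Q averages the adjacent new face points, R averages the midpoints of the incident edges, and S is the old vertex.

```python
def average(points):
    n = len(points)
    return tuple(sum(p[k] for p in points) / n for k in range(3))

def catmull_clark_vertex(old, face_points, edge_midpoints):
    n = len(face_points)                 # valence of the vertex
    q = average(face_points)
    r = average(edge_midpoints)
    return tuple((q[k] + 2 * r[k] + (n - 3) * old[k]) / n for k in range(3))
```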

Journal ArticleDOI
TL;DR: Experimental and theoretical results are presented which show that the algorithm is more accurate than previous algorithms and faster on terrains of more than 100,000 sample points.
Abstract: A terrain is most often represented with a digital elevation map consisting of a set of sample points from the terrain surface. This paper presents a fast and practical algorithm to compute the horizon, or skyline, at all sample points of a terrain. The horizons are useful in a number of applications, including the rendering of self-shadowing displacement maps, visibility culling for faster flight simulation, and rendering of cartographic data. Experimental and theoretical results are presented which show that the algorithm is more accurate than previous algorithms and faster on terrains of more than 100,000 sample points.
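The brute-force reference the paper improves on is easy to state along a single profile (our own quadratic-time baseline, not the paper's algorithm): at each sample, the horizon angle in a given direction is the steepest elevation angle to any later sample.

```python
import math

def horizon_angles(heights, spacing=1.0):
    """Elevation angle (radians) of the eastward horizon at each sample."""
    angles = []
    for i, h in enumerate(heights):
        best = -math.pi / 2              # empty horizon: nothing but sky
        for j in range(i + 1, len(heights)):
            best = max(best, math.atan2(heights[j] - h, (j - i) * spacing))
        angles.append(best)
    return angles
```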

Journal ArticleDOI
TL;DR: A "wavelet-like" decomposition is introduced that works on piecewise constant data sets over irregular triangular surface meshes and is based on an extension of wavelet theory allowing hierarchical meshes without the subdivision-connectivity property.
Abstract: Wavelet-based methods have proven their efficiency for visualization at different levels of detail, progressive transmission, and compression of large data sets. The required core of all wavelet-based methods is a hierarchy of meshes that satisfies subdivision-connectivity. This hierarchy has to be the result of a subdivision process starting from a base mesh. Examples include quadtree uniform 2D meshes, octree uniform 3D meshes, or 4-to-1 split triangular meshes. In particular, the necessity of subdivision-connectivity prevents the application of wavelet-based methods on irregular triangular meshes. In this paper, a "wavelet-like" decomposition is introduced that works on piecewise constant data sets over irregular triangular surface meshes. The decomposition/reconstruction algorithms are based on an extension of wavelet theory allowing hierarchical meshes without the subdivision-connectivity property. Among others, this approach has the following features: it allows exact reconstruction of the data set, even for nonregular triangulations, and it extends previous results on Haar-wavelets over 4-to-1 split triangulations.
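The regular 4-to-1 Haar case the paper generalizes can be sketched directly (a standard construction, not the irregular-mesh scheme itself): each parent triangle stores the mean of its four children's piecewise-constant values plus three detail coefficients, and reconstruction is exact.

```python
def haar_decompose(child_values):
    """child_values: 4 values on the children of one parent triangle."""
    mean = sum(child_values) / 4.0
    details = [v - mean for v in child_values[:3]]   # 4th detail is redundant
    return mean, details

def haar_reconstruct(mean, details):
    first_three = [mean + d for d in details]
    fourth = 4.0 * mean - sum(first_three)           # recover dropped value
    return first_three + [fourth]
```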

Journal ArticleDOI
TL;DR: It is demonstrated that using topology and geometry simplifications together yields multiresolution hierarchies superior to those possible by using either of them alone.
Abstract: We present a topology simplifying approach that can be used for genus reductions, removal of protuberances, and repair of cracks in polygonal models in a unified framework. Our work is complementary to the existing work on geometry simplification of polygonal datasets, and we demonstrate that using topology and geometry simplifications together yields multiresolution hierarchies superior to those possible by using either of them alone. Our approach can also address the important issue of repairing cracks in polygonal models, as well as rapidly identifying and removing protuberances based on internal accessibility. Our approach is based on identifying holes and cracks by extending the concept of α-shapes to polygonal meshes under the L∞ distance metric. We then generate valid triangulations to fill them using the intuitive notion of sweeping an L∞ cube over the identified regions.

Journal ArticleDOI
TL;DR: As part of the design work for a screen-space rasterization ASIC, implementations of several algorithms of comparable visual quality are discussed and compared in terms of per-primitive and per-pixel computational costs.
Abstract: Texture mapping is a fundamental feature of computer graphics image generation. In current PC-based acceleration hardware, MIP ("multum in parvo") mapping with bilinear and trilinear filtering is a commonly used filtering technique for reducing spatial aliasing artifacts. The effectiveness of this technique in reducing image aliasing at the expense of blurring is dependent upon the MIP-map level selection and the associated calculation of screen-space to texture-space pixel scaling. This paper describes an investigation of practical methods for per-pixel and per-primitive level of detail calculation. This investigation was carried out as part of the design work for a screen-space rasterization ASIC. The implementations of several algorithms of comparable visual quality are discussed, and a comparison is provided in terms of per-primitive and per-pixel computational costs.
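A common per-pixel level-of-detail rule of the kind evaluated here can be sketched as follows (a textbook formulation, not a specific ASIC design): estimate how many texels a pixel covers from the screen-space derivatives of the texture coordinates and take log2 of the larger footprint axis.

```python
import math

def mip_level(dudx, dvdx, dudy, dvdy, num_levels):
    """Select a MIP level from texture-coordinate derivatives (in texels)."""
    rho = max(math.hypot(dudx, dvdx), math.hypot(dudy, dvdy))
    lod = math.log2(max(rho, 1e-12))     # rho <= 1 means magnification
    return min(max(lod, 0.0), num_levels - 1)
```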

Journal ArticleDOI
TL;DR: An event-driven approach that efficiently detects collisions among multiple ballistic spheres moving in 3D space, using the collision model from kinetic theory for molecular gas to determine subspace sizes for the space subdivision scheme that minimize simulation time.
Abstract: This paper presents an event-driven approach that efficiently detects collisions among multiple ballistic spheres moving in 3D space. Adopting a hierarchical uniform space subdivision scheme, we are able to trace the trajectories of spheres and their time-varying spatial distribution. We identify three types of events to detect the sequence of all collisions during our simulation: collision, entering, and leaving. The first type of event is due to actual collisions, and the other two types occur when spheres move from subspace to subspace in the space. Tracing all such events in the order of their occurring times, we are able to avoid fixed time step simulation. When the size of the largest sphere is bounded by a constant multiple of that of the smallest, it takes O(n_c log n + n_e log n) time with O(n) space after O(n log n) time preprocessing to simulate n moving spheres, where n_c and n_e are the number of actual collisions and the number of entering and leaving events during the simulation, respectively. Since n_e depends on the size of subspaces, we modify the collision model from kinetic theory for molecular gas to determine the subspace sizes for the space subdivision scheme that minimize simulation time. Experimental results show that collision detection can be done in linear time in n over a large range.
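The per-pair primitive behind such event scheduling is a standard derivation (our sketch, not the paper's code): for two spheres with constant velocities, the first contact time solves a quadratic in t.

```python
def collision_time(p1, v1, r1, p2, v2, r2):
    """Earliest t >= 0 with |(p1 + v1 t) - (p2 + v2 t)| = r1 + r2, else None."""
    dp = [a - b for a, b in zip(p1, p2)]
    dv = [a - b for a, b in zip(v1, v2)]
    a = sum(x * x for x in dv)
    b = 2.0 * sum(x * y for x, y in zip(dp, dv))
    c = sum(x * x for x in dp) - (r1 + r2) ** 2
    if a == 0.0:
        return None                     # no relative motion
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                     # paths never come close enough
    t = (-b - disc ** 0.5) / (2.0 * a)  # earlier root: moment of first touch
    return t if t >= 0.0 else None
```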

Journal ArticleDOI
TL;DR: An antialiasing extension to the basic splatting algorithm is introduced that mitigates the spatial aliasing for high resolution volumes and a simple but highly effective scheme for adding motion blur to fast moving volumes is presented.
Abstract: The paper describes three new results for volume rendering algorithms utilizing splatting. First, an antialiasing extension to the basic splatting algorithm is introduced that mitigates the spatial aliasing for high resolution volumes. Aliasing can be severe for high resolution volumes or volumes where a high depth of field leads to converging samples along the perspective axis. Next, an analysis of the common approximation errors in the splatting process for perspective viewing is presented. In this context, we give different implementations, distinguished by efficiency and accuracy, for adding the splat contributions to the image plane. We then present new results in controlling the splatting errors and also show their behavior in the framework of our new antialiasing technique. Finally, current work in progress on extensions to splatting for temporal antialiasing is demonstrated. We present a simple but highly effective scheme for adding motion blur to fast moving volumes.

Journal ArticleDOI
TL;DR: The system proposes an object-repairing process based on a set of user-tunable heuristics, allows the user to override the algorithm's decisions in a repair visualization step, and presents an organized and intuitive way for the user to explore the space of valid solutions and select the correct one.
Abstract: The paper presents a system and the associated algorithms for repairing the boundary representation of CAD models. Two types of errors are considered: topological errors, i.e., aggregate errors, like zero volume parts, duplicate or missing parts, inconsistent surface orientation, etc., and geometric errors, i.e., numerical imprecision errors, like cracks or overlaps of geometry. The output of our system describes a set of clean and consistent two-manifolds (possibly with boundaries) with derived adjacencies. Such solid representation enables the application of a variety of rendering and analysis algorithms, e.g., finite element analysis, radiosity computation, model simplification, and solid free form fabrication. The algorithms described were originally designed to correct errors in polygonal B-Reps. We also present an extension for spline surfaces. Central to our system is a procedure for inferring local adjacencies of edges. The geometric representation of topologically adjacent edges are merged to evolve a set of two-manifolds. Aggregate errors are discovered during the merging step. Unfortunately, there are many ambiguous situations where errors admit more than one valid solution. Our system proposes an object repairing process based on a set of user tunable heuristics. The system also allows the user to override the algorithm's decisions in a repair visualization step. In essence, this visualization step presents an organized and intuitive way for the user to explore the space of valid solutions and to select the correct one.
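One ingredient of such repair pipelines can be sketched simply: welding nearly coincident vertices so that adjacent edges become topologically shared (our own simplification; the paper merges edge geometry with user-tunable heuristics). Note that this bucket-rounding scheme can miss pairs that straddle a bucket boundary.

```python
def weld_vertices(vertices, faces, tol=1e-6):
    """Snap vertices within ~tol (per axis) together; remap face indices."""
    buckets, remap, welded = {}, {}, []
    for i, v in enumerate(vertices):
        key = tuple(round(c / tol) for c in v)
        if key in buckets:
            remap[i] = buckets[key]     # reuse the representative vertex
        else:
            buckets[key] = len(welded)
            remap[i] = len(welded)
            welded.append(v)
    return welded, [tuple(remap[i] for i in f) for f in faces]
```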

Journal ArticleDOI
TL;DR: The method of adaptive projection and the corresponding operators on data functions are introduced, defined, and discussed as mathematically rigorous foundations for multiresolution data analysis.
Abstract: Recently, multiresolution visualization methods have become an indispensable ingredient of real-time interactive postprocessing. The enormous databases, typically coming along with some hierarchical structure, are locally resolved on different levels of detail to achieve a significant savings of CPU and rendering time. In this paper, the method of adaptive projection and the corresponding operators on data functions, respectively, are introduced. They are defined and discussed as mathematically rigorous foundations for multiresolution data analysis. Keeping in mind data from efficient numerical multigrid methods, this approach applies to hierarchical nested grids consisting of elements which are any tensor product of simplices, generated recursively by an arbitrary, finite set of refinement rules from some coarse grid. The corresponding visualization algorithms, e.g. color shading on slices or isosurface rendering, are confined to an appropriate depth-first traversal of the grid hierarchy. A continuous projection of the data onto an adaptive, extracted subgrid is thereby calculated recursively. The presented concept covers different methods of local error measurement, time-dependent data which have to be interpolated from a sequence of key frames, and a tool for local data focusing. Furthermore, it allows for a continuous level of detail.

Journal ArticleDOI
TL;DR: The heart of the method is a two-phase perspective ray casting algorithm that takes advantage of the coherence inherent in adjacent frames during navigation to generate a sequence of approximate volume-rendered views in a fraction of the time that would be required to compute them individually.
Abstract: Volume navigation is the interactive exploration of volume data sets by "flying" the viewpoint through the data, producing a volume rendered view at each frame. We present an inexpensive perspective volume navigation method designed to be run on a PC platform with accelerated 3D graphics hardware. The heart of the method is a two-phase perspective ray casting algorithm that takes advantage of the coherence inherent in adjacent frames during navigation. The algorithm generates a sequence of approximate volume-rendered views in a fraction of the time that would be required to compute them individually. The algorithm handles arbitrarily large volumes by dynamically swapping data within the current view frustum into main memory as the viewpoint moves through the volume. We also describe an interactive volume navigation application based on this algorithm. The application renders gray-scale, RGB, and labeled RGB volumes by volumetric compositing, allows trilinear interpolation of sample points, and implements progressive refinement during pauses in user input.
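The volumetric compositing core such a renderer evaluates per ray is the textbook front-to-back recurrence (our sketch, not the paper's two-phase code): accumulate color and opacity until the ray saturates.

```python
def composite_ray(samples, opacity_cutoff=0.99):
    """samples: (color, alpha) pairs ordered front to back along the ray."""
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * a * c  # attenuate by accumulated opacity
        alpha += (1.0 - alpha) * a
        if alpha >= opacity_cutoff:     # early ray termination
            break
    return color, alpha
```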


Journal ArticleDOI
TL;DR: This paper presents a new approach to rendering triangular algebraic free-form surfaces, where an irregular adaptive subdivision is constructed to quickly eliminate all parts outside the trimming curve from consideration during rendering.
Abstract: This paper presents a new approach to rendering triangular algebraic free-form surfaces. A hierarchical subdivision of the surface with associated tight bounding volumes provides for quick identification of the surface regions likely to be hit by a ray. For each leaf of the hierarchy, an approximation to the corresponding surface region is stored. The approximation is used to compute a good starting point for the iteration, which ensures rapid convergence. Trimming curves are described by a tree of trimming primitives, such as squares, circles, polygons, and free-form curves, combined with Boolean operations. For trimmed surfaces, an irregular adaptive subdivision is constructed to quickly eliminate all parts outside the trimming curve from consideration during rendering. Cost heuristics are introduced to optimize the rendering time further.
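The refinement step such ray tracers run once a good starting point is known is a generic Newton iteration on the ray parameter (our sketch, not the paper's hierarchy or its stored approximations): iterate t <- t - f(o + t d) / (grad f . d) toward the implicit surface f = 0.

```python
def newton_ray_hit(f, grad, origin, direction, t0, iters=20, eps=1e-10):
    """Refine ray parameter t so that f(origin + t * direction) ~ 0."""
    t = t0
    for _ in range(iters):
        p = [o + t * d for o, d in zip(origin, direction)]
        fp = f(p)
        if abs(fp) < eps:
            return t
        dfdt = sum(g * d for g, d in zip(grad(p), direction))
        if dfdt == 0.0:
            return None                 # grazing ray: iteration stalls
        t -= fp / dfdt
    return t
```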

Journal ArticleDOI
TL;DR: A way to increase the efficiency of SIMD clipping without sacrificing the efficient flow of a SIMD graphics pipeline is described, and the concepts of clip-plane pairs and edge batching are introduced.
Abstract: SIMD processors have become popular architectures for multimedia. Though most of the 3D graphics pipeline can be implemented on such SIMD platforms in a straightforward manner, polygon clipping tends to cause clumsy and expensive interruptions to the SIMD pipeline. This paper describes a way to increase the efficiency of SIMD clipping without sacrificing the efficient flow of a SIMD graphics pipeline. In order to fully utilize the parallel execution units, we have developed two methods to avoid serialization of the execution stream: deferred clipping postpones polygon clipping and uses hardware assistance to buffer polygons that need to be clipped. SIMD clipping partitions the actual polygon clipping procedure between the SIMD engine and a conventional RISC processor. To increase the efficiency of SIMD clipping, we introduce the concepts of clip-plane pairs and edge batching. Clip-plane pairs allow clipping a polygon against two clip planes without introducing corner vertices. Edge batching reduces the communication and control overhead for the start of clipping on the SIMD engine.
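The scalar reference computation behind such a pipeline is classic Sutherland-Hodgman clipping against one plane (our sketch; the paper's contribution is how this work is batched and paired on a SIMD engine).

```python
def clip_polygon(poly, normal, offset):
    """Keep the part of poly with dot(normal, v) + offset >= 0."""
    dist = lambda v: sum(n * c for n, c in zip(normal, v)) + offset
    out = []
    for i, cur in enumerate(poly):
        prev = poly[i - 1]              # wraps to the last vertex at i = 0
        dc, dp = dist(cur), dist(prev)
        if dp >= 0.0 and dc >= 0.0:
            out.append(cur)             # edge fully inside
        elif dp >= 0.0 or dc >= 0.0:
            t = dp / (dp - dc)          # edge crosses the plane
            out.append(tuple(p + t * (c - p) for p, c in zip(prev, cur)))
            if dc >= 0.0:
                out.append(cur)
    return out
```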

Journal ArticleDOI
TL;DR: The methods described allow ranked access to significant structures in order of significance, giving rise to an adaptive and embedded representation scheme, and are demonstrated on two datasets from computational field simulations.
Abstract: Numerical simulation of physical phenomena is an accepted way of scientific inquiry. However, the field is still evolving, with a profusion of new solution and grid generation techniques being continuously proposed. Concurrent and retrospective visualization are being used to validate the results. There is a need for representation schemes which allow access to structures in an increasing order of smoothness. We describe our methods on datasets obtained from curvilinear grids. Our target application required visualization of a computational simulation performed on a very remote supercomputer. Since no grid adaptation was performed, it was not deemed necessary to simplify or compress the grid. Inherent to the identification of significant structures is determining the location of the scale-coherent structures and assigning saliency values to them. Scale-coherent structures are obtained as a result of combining the coefficients of a wavelet transform across scales. The result of this operation is a correlation mask that delineates regions containing significant structures. A spatial subdivision is used to delineate regions of interest. The mask values in these subdivided regions are used as a measure of information content. Later, another wavelet transform is conducted within each subdivided region and the coefficients are sorted based on a perceptual function with bandpass characteristics. This allows for ranking of structures based on the order of significance, giving rise to an adaptive and embedded representation scheme. We demonstrate our methods on two datasets from computational field simulations. We show how our methods allow ranked access to significant structures. We also compare our adaptive representation scheme with a fixed block-size scheme.
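The idea of combining wavelet coefficients across scales into a correlation mask can be sketched on a 1D signal. The following is a hedged toy version: it uses a plain Haar transform and multiplies normalized detail magnitudes across scales; the authors' actual combination rule, wavelet choice, and grid handling are not reproduced here.

```python
import math

# Haar detail coefficients at several scales, each upsampled back to
# full length so the scales align position-by-position.
def haar_details(signal, levels):
    details, approx = [], list(signal)
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        d = [(e - o) / math.sqrt(2.0) for e, o in zip(even, odd)]
        approx = [(e + o) / math.sqrt(2.0) for e, o in zip(even, odd)]
        rep = len(signal) // len(d)
        details.append([x for x in d for _ in range(rep)])
    return details

# "Correlation mask": product of normalized detail magnitudes across
# scales; only features present at every scale survive the product.
def coherence_mask(signal, levels=3):
    mask = [1.0] * len(signal)
    for d in haar_details(signal, levels):
        peak = max(abs(x) for x in d) + 1e-12
        mask = [m * abs(x) / peak for m, x in zip(mask, d)]
    return mask

sig = [0.0] * 64
for i in range(29, 33):
    sig[i] = 1.0  # one sharp, localized structure
mask = coherence_mask(sig)
peak_at = mask.index(max(mask))
print(peak_at)  # the mask peaks adjacent to the structure at 29..32
```

Smooth background regions lose at least one scale's contribution and are suppressed by the product, which is why the mask delineates significant structures.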

Journal ArticleDOI
TL;DR: This work presents a new ray classification scheme that considerably reduces memory consumption while preserving its inherent time efficiency, and produces much simpler-shaped, compact ray cells that eventually accelerate ray shooting operations.
Abstract: We present a new ray classification scheme that considerably reduces memory consumption while preserving the inherent time efficiency of ray classification. Our key idea exploits the fact that rays lying on the same line are duplicated across many cells in the ray classification scheme. We are thus able to lower the dimensionality of the ray space by classifying lines instead of rays. Our scheme produces much simpler-shaped, compact ray cells that ultimately accelerate ray shooting operations.
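The observation that many rays share one line can be illustrated by canonicalizing a ray to the line it lies on. The sketch below is a hypothetical illustration of that reduction, not the paper's classification scheme: it keys a line by a sign-normalized unit direction plus the line's closest point to the world origin, so all rays on the same line collapse to one key.

```python
import math

def line_key(origin, direction):
    # Normalize the direction and fix a sign convention (first
    # nonzero component positive) so opposite rays agree.
    n = math.sqrt(sum(c * c for c in direction))
    d = [c / n for c in direction]
    for c in d:
        if abs(c) > 1e-12:
            if c < 0:
                d = [-x for x in d]
            break
    # Replace the ray origin by the line's closest point to the
    # world origin (foot of the perpendicular).
    t = sum(o * c for o, c in zip(origin, d))
    p = [o - t * c for o, c in zip(origin, d)]
    return tuple(round(x, 9) for x in d + p)

# Two different rays on the same line map to the same key.
k1 = line_key((0.0, 0.0, 1.0), (2.0, 0.0, 0.0))
k2 = line_key((5.0, 0.0, 1.0), (-1.0, 0.0, 0.0))
print(k1 == k2)  # True
```

Since the key has four effective degrees of freedom instead of the five of an (origin, direction) ray, classifying keys like these rather than rays is one way the dimensionality of the classification space can drop.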

Journal ArticleDOI
TL;DR: The visualization techniques are applied to two experimental systems-one from combustion and the other from neurobiology-to show how relevant information can be quickly extracted from video data and can be integrated into the video acquisition process to provide real-time feedback to the experimentalist during the operation of an experiment.
Abstract: Fast methods are developed for visualizing and classifying certain types of scientific video data. These techniques, which are based on Karhunen-Loève (KL) decomposition, find a best coordinate system for a data set. When the data set represents a temporally ordered collection of images, the best coordinate system leads to approximations that are separable in time and space. Practical methods for computing this best coordinate system are discussed, and physically significant visualizations for experimental video data are developed. The visualization techniques are applied to two experimental systems-one from combustion and the other from neurobiology-to show how relevant information can be quickly extracted from video data. These techniques can be integrated into the video acquisition process to provide real-time feedback to the experimentalist during the operation of an experiment.
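The time-space separability that KL decomposition yields can be shown on toy data. The sketch below treats each frame as a vector and extracts the dominant spatial mode by power iteration on the (implicit) covariance; each frame's projection onto that mode is its time coefficient. This is a minimal illustration on synthetic data, not the paper's practical computation.

```python
import math
import random

def kl_top_mode(frames, iters=100):
    # Center the frames, then power-iterate C v = sum_f (f . v) f
    # to find the dominant KL spatial mode.
    n = len(frames[0])
    mean = [sum(f[i] for f in frames) / len(frames) for i in range(n)]
    centered = [[f[i] - mean[i] for i in range(n)] for f in frames]
    random.seed(0)
    v = [random.random() for _ in range(n)]
    for _ in range(iters):
        coeffs = [sum(fi * vi for fi, vi in zip(f, v)) for f in centered]
        v = [sum(c * f[i] for c, f in zip(coeffs, centered)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
    times = [sum(fi * vi for fi, vi in zip(f, v)) for f in centered]
    return mean, v, times  # spatial mode v, temporal coefficients

# Synthetic "video": a fixed spatial pattern whose amplitude
# oscillates in time, i.e. exactly separable in time and space.
pattern = [1.0, -1.0, 2.0, 0.5]
frames = [[math.sin(0.5 * t) * p for p in pattern] for t in range(20)]
mean, mode, times = kl_top_mode(frames)

# Rank-1 reconstruction error: near zero, since frame(t) is
# mean + times[t] * mode for separable data.
err = max(abs((frames[t][i] - mean[i]) - times[t] * mode[i])
          for t in range(20) for i in range(4))
print(round(err, 9))
```

For real video one would use an SVD of the frame matrix rather than power iteration, but the separable approximation, one spatial mode scaled by a time series per retained component, is the same.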