Showing papers in "IEEE Transactions on Visualization and Computer Graphics in 2000"


Journal ArticleDOI
TL;DR: This is a survey on graph visualization and navigation techniques, as used in information visualization, which approaches the results of traditional graph drawing from a different perspective.
Abstract: This is a survey on graph visualization and navigation techniques, as used in information visualization. Graphs appear in numerous applications such as Web browsing, state-transition diagrams, and data structures. The ability to visualize and to navigate in these potentially large, abstract graphs is often a crucial part of an application. Information visualization has specific requirements, which means that this survey approaches the results of traditional graph drawing from a different perspective.

1,648 citations


Journal ArticleDOI
TL;DR: The major goal of this article is to provide a formal basis of pixel-oriented visualization techniques and show that the design decisions in developing them can be seen as solutions of well-defined optimization problems.
Abstract: Visualization techniques are of increasing importance in exploring and analyzing large amounts of multidimensional information. One important class of visualization techniques which is particularly interesting for visualizing very large multidimensional data sets is the class of pixel-oriented techniques. The basic idea of pixel-oriented visualization techniques is to represent as many data objects as possible on the screen at the same time by mapping each data value to a pixel of the screen and arranging the pixels adequately. A number of different pixel-oriented visualization techniques have been proposed in recent years and it has been shown that the techniques are useful for visual data exploration in a number of different application contexts. In this paper, we discuss a number of issues which are important in developing pixel-oriented visualization techniques. The major goal of this article is to provide a formal basis of pixel-oriented visualization techniques and show that the design decisions in developing them can be seen as solutions of well-defined optimization problems. This is true for the mapping of the data values to colors, the arrangement of pixels inside the subwindows, the shape of the subwindows, and the ordering of the dimension subwindows. The paper also discusses the design issues of special variants of pixel-oriented techniques for visualizing large spatial data sets.
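
As a rough illustration of the basic idea only (not the paper's optimized arrangements or color mappings), the sketch below maps a 1-D attribute onto a dense pixel grid with a naive back-and-forth ordering; the function name and the normalization are assumptions for the example, and the paper's techniques would replace the ordering with arrangements derived from the optimization problems it formulates.

    import numpy as np

    def pixel_layout(values, width, height):
        """Map a 1-D attribute to a width x height pixel grid.

        Values are normalized to [0, 1] (a stand-in for a color map) and laid
        out row by row in back-and-forth order -- a deliberately naive
        arrangement; pixel-oriented techniques instead choose arrangements
        that keep related data values close together on screen.
        """
        v = np.asarray(values, dtype=float)[: width * height]
        v = (v - v.min()) / (np.ptp(v) or 1.0)      # normalize to [0, 1]
        img = np.zeros((height, width))
        img.ravel()[: v.size] = v                   # row-major fill
        img[1::2] = img[1::2, ::-1]                 # reverse every other row
        return img

    # Example: 10,000 data values on a 100 x 100 pixel grid.
    image = pixel_layout(np.random.rand(10_000), 100, 100)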

477 citations


Journal ArticleDOI
TL;DR: This work gives an explicit method for mapping any simply connected surface onto the sphere in a manner which preserves angles and provides a new way to automatically assign texture coordinates to complex undulating surfaces.
Abstract: We give an explicit method for mapping any simply connected surface onto the sphere in a manner which preserves angles. This technique relies on certain conformal mappings from differential geometry. Our method provides a new way to automatically assign texture coordinates to complex undulating surfaces. We demonstrate a finite element method that can be used to apply our mapping technique to a triangulated geometric description of a surface.
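
For reference (a general differential-geometry fact rather than the paper's specific construction), a map f from a surface S to the sphere preserves angles exactly when its pullback of the spherical metric is a positive pointwise rescaling of the surface metric:

    f^{*} g_{S^{2}} = \lambda(p)\, g_{S}, \qquad \lambda(p) > 0 .

In the plane this condition reduces to the Cauchy-Riemann equations; the finite element method mentioned above discretizes a second-order elliptic (Laplace-Beltrami type) equation on the triangulated surface whose solution yields such a map.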

400 citations


Journal ArticleDOI
TL;DR: The CPM (compressed progressive meshes) approach proposed here refines the topology of the mesh in batches, each of which increases the number of vertices by up to 50 percent, and uses a new vertex-position estimator that leads to representations of vertex coordinates that are 50 percent more compact than previously reported progressive geometry compression techniques.
Abstract: Most systems that support visual interaction with 3D models use shape representations based on triangle meshes. The size of these representations imposes limits on applications for which complex 3D models must be accessed remotely. Techniques for simplifying and compressing 3D models reduce the transmission time. Multiresolution formats provide quick access to a crude model and then refine it progressively. Unfortunately, compared to the best nonprogressive compression methods, previously proposed progressive refinement techniques impose a significant overhead when the full resolution model must be downloaded. The CPM (compressed progressive meshes) approach proposed here eliminates this overhead. It uses a new technique, which refines the topology of the mesh in batches, which each increase the number of vertices by up to 50 percent. Less than an amortized total of 4 bits per triangle encode where and how the topological refinements should be applied. We estimate the position of new vertices from the positions of their topological neighbors in the less refined mesh using a new estimator that leads to representations of vertex coordinates that are 50 percent more compact than previously reported progressive geometry compression techniques.
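
A toy illustration of neighbor-based prediction (a generic average predictor with made-up parameter names, not CPM's actual estimator): predict each new vertex from its topological neighbors in the coarser mesh and encode only a quantized correction, which is what makes the coordinate data compact.

    import numpy as np

    def encode_vertex(new_vertex, neighbor_positions, quantum=1e-3):
        """Return the quantized correction between the true position and the
        prediction from coarse-mesh neighbors; small integers entropy-code well."""
        prediction = np.mean(neighbor_positions, axis=0)
        residual = np.asarray(new_vertex, dtype=float) - prediction
        return np.round(residual / quantum).astype(int)

    def decode_vertex(correction, neighbor_positions, quantum=1e-3):
        """Reconstruct the refined vertex from the same prediction plus the correction."""
        prediction = np.mean(neighbor_positions, axis=0)
        return prediction + np.asarray(correction) * quantum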

399 citations


Journal ArticleDOI
TL;DR: The usefulness of the proposed tissue classification method is demonstrated by comparisons with conventional single-channel classification using both synthesized data and clinical data acquired with CT (computed tomography) and MRI (magnetic resonance imaging) scanners.
Abstract: This paper describes a novel approach to tissue classification using three-dimensional (3D) derivative features in the volume rendering pipeline. In conventional tissue classification for a scalar volume, tissues of interest are characterized by an opacity transfer function defined as a one-dimensional (1D) function of the original volume intensity. To overcome the limitations inherent in conventional 1D opacity functions, we propose a tissue classification method that employs a multidimensional opacity function, which is a function of the 3D derivative features calculated from a scalar volume as well as the volume intensity. Tissues of interest are characterized by explicitly defined classification rules based on 3D filter responses highlighting local structures, such as edge, sheet, line, and blob, which typically correspond to tissue boundaries, cortices, vessels, and nodules, respectively, in medical volume data. The 3D local structure filters are formulated using the gradient vector and Hessian matrix of the volume intensity function combined with isotropic Gaussian blurring. These filter responses and the original intensity define a multidimensional feature space in which multichannel tissue classification strategies are designed. The usefulness of the proposed method is demonstrated by comparisons with conventional single-channel classification using both synthesized data and clinical data acquired with CT (computed tomography) and MRI (magnetic resonance imaging) scanners. The improvement in image quality obtained using multichannel classification is confirmed by evaluating the contrast and contrast-to-noise ratio in the resultant volume-rendered images with variable opacity values.
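
A minimal sketch of the derivative features involved (assuming NumPy/SciPy; the filters below are generic Gaussian derivatives, while the paper's actual classification rules, filter design, and opacity functions are not reproduced here): the blurred gradient responds to boundaries (edges), and the eigenvalues of the blurred Hessian distinguish sheet-, line-, and blob-like local structure.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def derivative_features(volume, sigma=1.5):
        """Gradient magnitude and Hessian eigenvalues of a 3D scalar volume,
        computed with isotropic Gaussian derivative filters."""
        vol = np.asarray(volume, dtype=float)

        # First derivatives: differentiate along one axis at a time (order=1).
        grad = np.stack(
            [gaussian_filter(vol, sigma, order=tuple(int(a == i) for a in range(3)))
             for i in range(3)], axis=-1)
        grad_mag = np.linalg.norm(grad, axis=-1)     # strong at tissue boundaries

        # Hessian: second and mixed derivatives, then eigenvalues per voxel.
        hess = np.empty(vol.shape + (3, 3))
        for i in range(3):
            for j in range(3):
                order = [0, 0, 0]
                order[i] += 1
                order[j] += 1
                hess[..., i, j] = gaussian_filter(vol, sigma, order=tuple(order))
        eigvals = np.linalg.eigvalsh(hess)           # sorted per voxel: sheet/line/blob cues
        return grad_mag, eigvals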

378 citations


Journal ArticleDOI
TL;DR: This paper describes and demonstrates further means of generating opacity, color, and shading from the tensor information in diffusion tensor MRI data, and proposes anisotropic reaction-diffusion volume textures as an additional tool for visualizing the structure of diffusion data.
Abstract: Diffusion-weighted magnetic resonance imaging is a relatively new modality capable of elucidating the fibrous structure of certain types of tissue, such as the white matter within the brain. One tool for interpreting this data is volume rendering because it permits the visualization of three dimensional structure without a prior segmentation process. In order to use volume rendering, however, we must develop methods for assigning opacity and color to the data, and create a method to shade the data to improve the legibility of the rendering. Previous work introduced three such methods: barycentric opacity maps, hue-balls (for color), and lit-tensors (for shading). The paper expands on and generalizes these methods, describing and demonstrating further means of generating opacity, color, and shading from the tensor information. We also propose anisotropic reaction-diffusion volume textures as an additional tool for visualizing the structure of diffusion data. The patterns generated by this process can be visualized on their own or they can be used to supplement the volume rendering strategies described in the rest of the paper. Finally, because interpolation between data points is a fundamental issue in volume rendering, we conclude with a discussion and evaluation of three distinct interpolation methods suitable for diffusion tensor MRI data.
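
For context, barycentric opacity maps are built on per-voxel anisotropy measures derived from the sorted tensor eigenvalues λ1 ≥ λ2 ≥ λ3; the trace-normalized convention below is the commonly used one (stated here as background, not quoted from the paper), chosen so that the linear, planar, and spherical measures sum to one and can serve as barycentric coordinates:

    c_l = \frac{\lambda_1 - \lambda_2}{\lambda_1 + \lambda_2 + \lambda_3}, \qquad
    c_p = \frac{2(\lambda_2 - \lambda_3)}{\lambda_1 + \lambda_2 + \lambda_3}, \qquad
    c_s = \frac{3\lambda_3}{\lambda_1 + \lambda_2 + \lambda_3}, \qquad
    c_l + c_p + c_s = 1 .

Opacity is then assigned as a function over this barycentric triangle, so, for example, strongly linear (fiber-like) voxels can be rendered opaque while isotropic tissue remains transparent.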

162 citations


Journal ArticleDOI
TL;DR: A method is developed to analyze the accuracy of the relative head-to-object position and orientation (pose) in augmented reality systems with head-mounted displays; by fusing data from both fixed and head-mounted sensors, it produces a pose estimate that is significantly more accurate than that produced by either sensor acting alone.
Abstract: A method is developed to analyze the accuracy of the relative head-to-object position and orientation (pose) in augmented reality systems with head-mounted displays. From probabilistic estimates of the errors in optical tracking sensors, the uncertainty in head-to-object pose can be computed in the form of a covariance matrix. The positional uncertainty can be visualized as a 3D ellipsoid. One useful benefit of having an explicit representation of uncertainty is that we can fuse sensor data from a combination of fixed and head-mounted sensors in order to improve the overall registration accuracy. The method was applied to the analysis of an experimental augmented reality system, incorporating an optical see-through head-mounted display, a head-mounted CCD camera, and a fixed optical tracking sensor. The uncertainty of the pose of a movable object with respect to the head-mounted display was analyzed. By using both fixed and head mounted sensors, we produced a pose estimate that is significantly more accurate than that produced by either sensor acting alone.
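
As a hedged illustration of the fusion step (standard covariance-weighted combination of two independent estimates, not necessarily the paper's exact estimator): given pose estimates x_1 and x_2 with covariances Σ_1 and Σ_2 from the fixed and head-mounted sensors,

    \Sigma_{\mathrm{fused}} = \left( \Sigma_1^{-1} + \Sigma_2^{-1} \right)^{-1}, \qquad
    x_{\mathrm{fused}} = \Sigma_{\mathrm{fused}} \left( \Sigma_1^{-1} x_1 + \Sigma_2^{-1} x_2 \right).

The fused covariance is never larger than either input covariance, which is why combining the sensors improves registration accuracy; the positional block of Σ is what can be visualized as the 3D uncertainty ellipsoid described above.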

134 citations


Journal ArticleDOI
TL;DR: The goal is to provide interactive tools to perform common illumination, i.e., light interactions between real and virtual objects, including shadows and relighting (real and virtual light source modification), and focuses on virtually modifying real light source intensities and inserting virtual lights and objects into a real scene.
Abstract: Computer augmented reality (CAR) is a rapidly emerging field which enables users to mix real and virtual worlds. Our goal is to provide interactive tools to perform common illumination, i.e., light interactions between real and virtual objects, including shadows and relighting (real and virtual light source modification). In particular, we concentrate on virtually modifying real light source intensities and inserting virtual lights and objects into a real scene; such changes can be very useful for virtual lighting design and prototyping. To achieve this, we present a three-step method. We first reconstruct a simplified representation of real scene geometry using semiautomatic vision-based techniques. With the simplified geometry, and by adapting recent hierarchical radiosity algorithms, we construct an approximation of real scene light exchanges. We next perform a preprocessing step, based on the radiosity system, to create unoccluded illumination textures. These replace the original scene textures which contained real light effects such as shadows from real lights. This texture is then modulated by a ratio of the radiosity (which can be changed) over a display factor which corresponds to the radiosity for which occlusion has been ignored. Since our goal is to achieve a convincing relighting effect, rather than an accurate solution, we present a heuristic correction process which results in visually plausible renderings. Finally, we perform an interactive process to compute new illumination with modified real and virtual light intensities.
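
The modulation step described above can be written compactly (notation introduced here for clarity): for a patch with unoccluded illumination texture T_unocc, current radiosity B (recomputed when real or virtual lights change), and display factor D (the radiosity with occlusion ignored), the displayed texture is

    T_{\mathrm{displayed}} = T_{\mathrm{unocc}} \cdot \frac{B}{D} .

With the original lighting, the ratio B/D reintroduces the shadows removed in the preprocessing step; when a light intensity is edited or a virtual object blocks light, B changes and the textured surface darkens or brightens accordingly.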

126 citations


Journal ArticleDOI
TL;DR: A new approach based on anisotropic nonlinear diffusion is introduced that enables an easy perception of vector field data and serves as an appropriate scale space method for the visualization of complicated flow patterns.
Abstract: Vector field visualization is an important topic in scientific visualization. Its aim is to graphically represent field data on two and three-dimensional domains and on surfaces in an intuitively understandable way. Here, a new approach based on anisotropic nonlinear diffusion is introduced. It enables an easy perception of vector field data and serves as an appropriate scale space method for the visualization of complicated flow patterns. The approach is closely related to nonlinear diffusion methods in image analysis where images are smoothed while still retaining and enhancing edges. Here, an initial noisy image intensity is smoothed along integral lines, whereas the image is sharpened in the orthogonal direction. The method is based on a continuous model and requires the solution of a parabolic PDE problem. It is discretized only in the final implementational step. Therefore, many important qualitative aspects can already be discussed on a continuous level. Applications are shown for flow fields in 2D and 3D, as well as for principal directions of curvature on general triangulated surfaces. Furthermore, the provisions for flow segmentation are outlined.
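
Schematically (a generic form of such models, with notation introduced here rather than taken from the paper), the method evolves an initial noisy intensity u_0 by a diffusion equation whose tensor is aligned with the vector field v, so that diffusion is strong along integral lines and weak across them:

    \partial_t u - \operatorname{div}\!\left( A(v, \nabla u_{\sigma})\, \nabla u \right) = f(u), \qquad u(0, \cdot) = u_0,

where ∇u_σ is a prefiltered (mollified) gradient and f controls contrast; discretizing this parabolic PDE is the final implementation step mentioned above, which is why the qualitative behavior can be analyzed on the continuous level first.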

102 citations


Journal ArticleDOI
TL;DR: This paper describes the design and implementation of a structure-based brushing tool, which allows users to navigate hierarchies by specifying focal extents and level-of-detail on a visual representation of the structure, and validate its usefulness using two distinct hierarchical visualization techniques, namely hierarchical parallel coordinates and tree-maps.
Abstract: Interactive selection is a critical component in exploratory visualization, allowing users to isolate subsets of the displayed information for highlighting, deleting, analysis, or focused investigation. Brushing, a popular method for implementing the selection process, has traditionally been performed in either screen space or data space. In this paper, we introduce an alternate, and potentially powerful, mode of selection that we term structure-based brushing, for selection in data sets with natural or imposed structure. Our initial implementation has focused on hierarchically structured data, specifically very large multivariate data sets structured via hierarchical clustering and partitioning algorithms. The structure-based brush allows users to navigate hierarchies by specifying focal extents and level-of-detail on a visual representation of the structure. Proximity-based coloring, which maps similar colors to data that are closely related within the structure, helps convey both structural relationships and anomalies. We describe the design and implementation of our structure-based brushing tool. We also validate its usefulness using two distinct hierarchical visualization techniques, namely hierarchical parallel coordinates and tree-maps. Finally, we discuss relationships between different classes of brushes and identify methods by which structure-based brushing could be extended to alternate data structures.

95 citations


Journal ArticleDOI
TL;DR: Prioritized-Layered Projection is a technique for fast rendering of high depth complexity scenes by estimating the visible polygons of a scene from a given viewpoint incrementally, one primitive at a time, and is suitable for the computation of partially correct images for use as part of time-critical rendering systems.
Abstract: Prioritized-Layered Projection (PLP) is a technique for fast rendering of high depth complexity scenes. It works by estimating the visible polygons of a scene from a given viewpoint incrementally, one primitive at a time. It is not a conservative technique; instead, PLP is suitable for the computation of partially correct images for use as part of time-critical rendering systems. At a very high level, PLP amounts to a modification of a simple view-frustum culling algorithm; however, it requires the computation of a special occupancy-based tessellation and the assignment of a solidity value to each cell of the tessellation, which is used to compute a special ordering on how primitives get projected. The authors detail the PLP algorithm, its main components, and implementation. They also provide experimental evidence of its performance, including results on two types of spatial tessellation (using octree- and Delaunay-based tessellations), and several datasets. They also discuss several extensions of their technique.
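
A rough sketch of the priority-driven projection loop, reconstructed from the description above (helper names such as render(), cell.solidity, and cell.neighbors are hypothetical; the actual solidity accumulation rules are defined in the paper):

    import heapq

    def plp_render(start_cell, budget, render):
        """Project cells of the occupancy tessellation in priority order,
        stopping once the primitive budget is spent (non-conservative).
        render(primitives) is assumed to draw them and return how many it drew."""
        heap = [(0.0, id(start_cell), start_cell)]   # (accumulated solidity, tie-break, cell)
        seen = {id(start_cell)}
        drawn = 0
        while heap and drawn < budget:
            solidity, _, cell = heapq.heappop(heap)
            drawn += render(cell.primitives)
            for neighbor in cell.neighbors:
                if id(neighbor) not in seen:
                    seen.add(id(neighbor))
                    # Cells reached through densely occupied ("solid") cells get lower priority.
                    heapq.heappush(heap, (solidity + neighbor.solidity, id(neighbor), neighbor))
        return drawn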

Journal ArticleDOI
TL;DR: The paper discusses and experimentally compares distance-based acceleration algorithms for ray tracing of volumetric data with an emphasis on the Chessboard Distance (CD) voxel traversal, which enables fast search of intersections between rays and the interpolated surface, further improving the speed of the process.
Abstract: The paper discusses and experimentally compares distance-based acceleration algorithms for ray tracing of volumetric data, with an emphasis on the Chessboard Distance (CD) voxel traversal. The acceleration of this class of algorithms is achieved by skipping empty macro regions, which are defined for each background voxel of the volume. Background voxels are labeled in a preprocessing phase by a value, defining the macro region size, which is equal to the voxel's distance to the nearest foreground voxel. The CD algorithm exploits the chessboard distance and defines the ray as a nonuniform sequence of samples positioned at voxel faces. This feature assures that no foreground voxels are missed during the scene traversal. Further, due to the parallelepipedal shape of the macro regions, it supports accelerated visualization of cubic, regular, and rectilinear grids. The CD algorithm is suitable for all modifications of the ray tracing/ray casting techniques being used in volume visualization and volume graphics. However, when used for rendering based on local surface interpolation, it also enables fast search of intersections between rays and the interpolated surface, further improving the speed of the process.
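
A simplified sketch of distance-based empty-space skipping along a ray (assuming SciPy, whose distance_transform_cdt supports metric='chessboard'; the CD algorithm's face-aligned sampling and parallelepipedal macro regions, which guarantee that no foreground voxel is missed, are not reproduced here):

    import numpy as np
    from scipy.ndimage import distance_transform_cdt

    def build_skip_map(volume, threshold):
        """Preprocessing: for each background voxel, the chessboard distance
        to the nearest foreground voxel (the macro-region size)."""
        background = volume < threshold
        return distance_transform_cdt(background, metric='chessboard')

    def first_hit(volume, skip_map, origin, direction, threshold, max_steps=100_000):
        """March a ray, advancing by the stored distance through empty regions."""
        pos = np.asarray(origin, dtype=float)
        d = np.asarray(direction, dtype=float)
        d /= np.linalg.norm(d)
        for _ in range(max_steps):
            idx = tuple(np.floor(pos).astype(int))
            if any(i < 0 or i >= n for i, n in zip(idx, volume.shape)):
                return None                                  # ray left the volume
            if volume[idx] >= threshold:
                return idx                                   # foreground voxel reached
            pos += d * max(1.0, float(skip_map[idx]))        # skip the empty macro region
        return None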

Journal ArticleDOI
TL;DR: A rich set of flexible visual components, strategies for arranging the components for particular analyses, an in-memory data pool, data manipulation components, and container applications that provide a powerful production platform for creating innovative visual query and analysis applications are developed.
Abstract: We have developed a flexible software environment called ADVIZOR for visual information discovery. ADVIZOR complements existing assumptive-based analyses by providing a discovery-based approach. ADVIZOR consists of five parts: a rich set of flexible visual components, strategies for arranging the components for particular analyses, an in-memory data pool, data manipulation components, and container applications. Working together, ADVIZOR's architecture provides a powerful production platform for creating innovative visual query and analysis applications.

Journal ArticleDOI
TL;DR: Results from testing the T-BON algorithm on large data sets show that its performance is similar to that of the three-dimensional branch-on-need octree for static data sets while providing substantial advantages for time varying fields.
Abstract: For large time-varying data sets, memory and disk limitations can lower the performance of visualization applications. Algorithms and data structures must be explicitly designed to handle these data sets in order to achieve more interactive rates. The Temporal Branch-on-Need Octree (T-BON) extends the three-dimensional branch-on-need octree for time-varying isosurface extraction. This data structure minimizes the impact of the I/O bottleneck by reading from disk only those portions of the search structure and data necessary to construct the current isosurface. By performing a minimum of I/O and exploiting the hierarchical memory found in modern CPUs, the T-BON algorithm achieves high performance isosurface extraction in time-varying fields. The paper extends earlier work on the T-BON data structure by including techniques for better memory utilization, out-of-core isosurface extraction, and support for nonrectilinear grids. Results from testing the T-BON algorithm on large data sets show that its performance is similar to that of the three-dimensional branch-on-need octree for static data sets while providing substantial advantages for time varying fields.
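
A generic sketch of the demand-driven traversal idea (min/max bracketing on an octree; the T-BON's temporal layout, branch-on-need subdivision, and disk format are not reproduced, and node, load_blocks, and extract are hypothetical names):

    def extract_isosurface(node, isovalue, load_blocks, extract):
        """Descend only where the stored value range brackets the isovalue and
        read data blocks from disk only for leaves that can contribute triangles."""
        if not (node.min_value <= isovalue <= node.max_value):
            return []                          # whole subtree skipped, no I/O
        if node.is_leaf:
            data = load_blocks(node)           # the only place disk I/O happens
            return extract(data, isovalue)     # e.g., Marching Cubes on this block
        triangles = []
        for child in node.children:
            triangles.extend(extract_isosurface(child, isovalue, load_blocks, extract))
        return triangles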

Journal ArticleDOI
TL;DR: This paper shows the theoretical and experimental results of the application of nonmetric vision to augmented reality and proposes an algorithm for augmenting a real video sequence with views of graphics objects without metric calibration of the video camera, by representing the motion of the video camera in projective space.
Abstract: This paper deals with video-based augmented reality and proposes an algorithm for augmenting a real video sequence with views of graphics objects without metric calibration of the video camera by representing the motion of the video camera in projective space. A virtual camera, by which views of graphics objects are generated, is attached to a real camera by specifying image locations of the world coordinate system of the virtual world. The virtual camera is decomposed into calibration and motion components in order to make full use of graphics tools. The projective motion of the real camera recovered from image matches has the function of transferring the virtual camera and makes the virtual camera move according to the motion of the real camera. The virtual camera also follows the change of the internal parameters of the real camera. This paper shows the theoretical and experimental results of our application of nonmetric vision to augmented reality.

Journal ArticleDOI
TL;DR: The spectral approach is investigated to see whether it can enhance the visualization of volume data and interactive tools, offering greater visual flexibility and a better balance between quality and speed.
Abstract: Volume renderers for interactive analysis must be sufficiently versatile to render a broad range of volume images: unsegmented "raw" images as recorded by a 3D scanner, labeled segmented images, multimodality images, or any combination of these. The usual strategy is to assign to each voxel a three-component RGB color and an opacity value α. This so-called RGBα approach offers the possibility of distinguishing volume objects by color. However, these colors are connected to the objects themselves, thereby bypassing the idea that in reality the color of an object is also determined by the light source and the light detectors, i.e., the human eye. The physically realistic approach presented here models light interacting with the materials inside a voxel, causing spectral changes in the light. The radiated spectrum falls upon a set of RGB detectors. The spectral approach is investigated to see whether it can enhance the visualization of volume data and interactive tools. For that purpose, a material is split into an absorbing part (the medium) and a scattering part (small particles). The medium is considered to be either achromatic or chromatic, while the particles are considered to scatter the light achromatically, elastically, or inelastically. Inelastically scattering particles combined with an achromatic absorbing medium offer additional visual features: objects are made visible through the surface structure of a surrounding volume object, and volume and surface structures can be made visible at the same time. With one or two materials the method is faster than the RGBα approach; with three materials the performance is equal. The spectral approach can be considered as an extension of the RGBα approach with greater visual flexibility and a better balance between quality and speed.
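
As background (the standard wavelength-dependent attenuation law, not the paper's full medium-plus-particles formulation), light traversing an absorbing medium is attenuated per wavelength rather than per RGB channel:

    L(\lambda) = L_0(\lambda)\, \exp\!\left( -\int_0^{s} \mu(\lambda, t)\, dt \right),

with μ(λ, t) the spectral absorption coefficient along the ray. The spectral approach applies this kind of per-wavelength interaction, together with achromatic, elastic, or inelastic scattering by the particles, inside each voxel and only projects the resulting spectrum onto the RGB detectors at the end, whereas the RGBα approach fixes per-object colors up front.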

Journal ArticleDOI
TL;DR: It is demonstrated that a software implementation of a particular rendering algorithm (shell rendering) can outperform dedicated hardware, and that, for medical surface visualization, expensive dedicated hardware engines are not required.
Abstract: The purpose of this work is to compare the speed of isosurface rendering in software with that using dedicated hardware. Input data consist of 10 different objects from various parts of the body and various modalities (CT, MR, and MRA) with a variety of surface sizes (up to 1 million voxels/2 million triangles) and shapes. The software rendering technique consists of a particular method of voxel-based surface rendering, called shell rendering. The hardware method is OpenGL-based and uses the surfaces constructed from our implementation of the Marching Cubes algorithm. The hardware environment consists of a variety of platforms, including a Sun Ultra I with a Creator3D graphics card and a Silicon Graphics Reality Engine II, both with polygon rendering hardware, and a 300 MHz Pentium PC. The results indicate that the software method (shell rendering) was 18 to 31 times faster than any hardware rendering methods. This work demonstrates that a software implementation of a particular rendering algorithm (shell rendering) can outperform dedicated hardware. We conclude that, for medical surface visualization, expensive dedicated hardware engines are not required. More importantly, available software algorithms (shell rendering) on a 300 MHz Pentium PC outperform the speed of rendering via hardware engines by a factor of 18 to 31.

Journal ArticleDOI
TL;DR: The thesis of the paper is that shape analysis of algorithms on linked data structures produces abstract representations of such data structures that focus on the "active" parts, i.e., the parts the algorithm can access during its next steps.
Abstract: Algorithm animation attempts to explain an algorithm by visualizing interesting events of the execution of the implemented algorithm on some sample input. Algorithm explanation describes the algorithm on some adequate level of abstraction, states invariants, explains how important steps of the algorithm preserve the invariants, and abstracts from the input data up to the relevant properties. It uses a small focus on the execution state. This paper is concerned with the explanation of algorithms on linked data structures. The thesis of the paper is that shape analysis of such algorithms produces abstract representations of such data structures, which focus on the "active" parts, i.e., the parts of the data structures that the algorithm can access during its next steps. The paper presents a concept of visually executing an algorithm on these abstract representations of data.

Journal ArticleDOI
TL;DR: Novel and efficient algorithms for computing accessible directions for tactile probes used in 3D digitization with Coordinate Measuring Machines are presented.
Abstract: Analyzing the accessibility of an object's surface to probes or tools is important for many planning and programming tasks that involve spatial reasoning and arise in robotics and automation. The paper presents novel and efficient algorithms for computing accessible directions for tactile probes used in 3D digitization with Coordinate Measuring Machines. The algorithms are executed in standard computer graphics hardware. They are a nonobvious application of rendering hardware to scientific and technological areas beyond computer graphics.

Journal ArticleDOI
TL;DR: A functional developer's framework for general Web-based visualization systems which makes intelligent use of application specific software and hardware components on the server side, as well as Java's benefits on the client side is proposed.
Abstract: The accelerating evolution of information visualization research in the last few years has led to several specific system implementations. The obvious drawbacks of this development are highly dependent software systems, which are only available for a restricted number of users. Today, due to the remarkable advances in hardware and software technologies, not only very expensive graphics workstations, but also low-cost PCs are capable of running computational demanding visualization systems. Furthermore, the rapid development of the medium World Wide Web along with state-of-the-art Internet programming techniques has led to a trend toward more generally usable visualization systems. In this paper, we propose a functional developer's framework for general Web-based visualization systems which makes intelligent use of application specific software and hardware components on the server side, as well as Java's benefits on the client side. To demonstrate the framework's abilities, we have applied it to two practical visualization tasks and report on our experience concerning practicability and pitfalls.

Journal ArticleDOI
TL;DR: In this work, a combination of a hybrid ray tracing and image-based rendering (IBR) technique and a novel perception-based antialiasing technique is used to improve rendering performance of high quality walkthrough animation sequences along predefined paths.
Abstract: We consider accelerated rendering of high quality walkthrough animation sequences along predefined paths. To improve rendering performance, we use a combination of a hybrid ray tracing and image-based rendering (IBR) technique and a novel perception-based antialiasing technique. In our rendering solution, we derive as many pixels as possible using inexpensive IBR techniques without affecting the animation quality. A perception-based spatiotemporal animation quality metric (AQM) is used to automatically guide such a hybrid rendering. The image flow (IF) obtained as a byproduct of the IBR computation is an integral part of the AQM. The final animation quality is enhanced by an efficient spatiotemporal antialiasing which utilizes the IF to perform a motion-compensated filtering. The filter parameters have been tuned using the AQM predictions of animation quality as perceived by the human observer. These parameters adapt locally to the visual pattern velocity.

Journal ArticleDOI
TL;DR: The authors develop integrated techniques that unify physics based modeling with geometric subdivision methodology and present a scheme for dynamic manipulation of the smooth limit surface generated by the (modified) butterfly scheme using physics based "force" tools.
Abstract: The authors develop integrated techniques that unify physics based modeling with geometric subdivision methodology and present a scheme for dynamic manipulation of the smooth limit surface generated by the (modified) butterfly scheme using physics based "force" tools. This procedure based surface model obtained through butterfly subdivision does not have a closed form analytic formulation (unlike other well known spline based models), and hence poses challenging problems to incorporate mass and damping distributions, internal deformation energy, forces, and other physical quantities required to develop a physics based model. Our primary contributions to computer graphics and geometric modeling include: (1) a new hierarchical formulation for locally parameterizing the butterfly subdivision surface over its initial control polyhedron, (2) formulation of dynamic butterfly subdivision surface as a set of novel finite elements, and (3) approximation of this new type of finite elements by a collection of existing finite elements subject to implicit geometric constraints. Our new physics based model can be sculpted directly by applying synthesized forces and its equilibrium is characterized by the minimum of a deformation energy subject to the imposed constraints. We demonstrate that this novel dynamic framework not only provides a direct and natural means of manipulating geometric shapes, but also facilitates hierarchical shape and nonrigid motion estimation from large range and volumetric data sets using very few degrees of freedom (control vertices that define the initial polyhedron).
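
The physics-based layer rests on the standard Lagrangian equations of motion for the model's degrees of freedom (stated here as general background; the paper's specific assembly of these matrices through its new finite elements is not reproduced): with generalized coordinates p given by the control vertices of the initial polyhedron,

    M \ddot{p} + D \dot{p} + K p = f_p,

where M, D, and K are the mass, damping, and stiffness matrices and f_p collects the generalized forces from the sculpting tools; at equilibrium the surface minimizes its deformation energy subject to the imposed constraints, as stated above.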

Journal ArticleDOI
TL;DR: An approach for interactively approximating specular reflections in arbitrary curved surfaces that is applicable to any smooth implicitly defined reflecting surface that is equipped with a ray intersection procedure and also extremely efficient as it employs local perturbations to interpolate point samples analytically.
Abstract: We describe an approach for interactively approximating specular reflections in arbitrary curved surfaces. The technique is applicable to any smooth implicitly defined reflecting surface that is equipped with a ray intersection procedure; it is also extremely efficient as it employs local perturbations to interpolate point samples analytically. After ray tracing a sparse set of reflection paths with respect to a given vantage point and static reflecting surfaces, the algorithm rapidly approximates reflections of arbitrary points in 3-space by expressing them as perturbations of nearby points with known reflections. The reflection of each new point is approximated to second-order accuracy by applying a closed-form perturbation formula to one or more nearby reflection paths. This formula is derived from the Taylor expansion of a reflection path and is based on first and second-order path derivatives. After preprocessing, the approach is fast enough to compute reflections of tessellated diffuse objects in arbitrary curved surfaces at interactive rates using standard graphics hardware. The resulting images are nearly indistinguishable from ray traced images that take several orders of magnitude longer to generate.
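
In outline (notation introduced here, applied componentwise to the vector-valued reflection point), the closed-form perturbation is a second-order Taylor expansion of the reflection path: if r(x) is the reflection point on the surface for a scene point x, and a path has already been traced for a nearby point x_0, then

    r(x_0 + \Delta x) \approx r(x_0) + J_r(x_0)\, \Delta x + \tfrac{1}{2}\, \Delta x^{\top} H_r(x_0)\, \Delta x,

where J_r and H_r are the first- and second-order path derivatives computed once per sampled path; evaluating this closed form, instead of tracing a new reflection path for each point, is what makes the approximation fast enough for interactive rates.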

Journal ArticleDOI
TL;DR: The authors describe a novel algorithm for computing view-independent finite element radiosity solutions on distributed shared memory parallel architectures based on the notion of a subiteration being the transfer of energy from a single source to a subset of the scene's receiver patches.
Abstract: The authors describe a novel algorithm for computing view-independent finite element radiosity solutions on distributed shared memory parallel architectures. Our approach is based on the notion of a subiteration being the transfer of energy from a single source to a subset of the scene's receiver patches. By using an efficient queue based scheduling system to process these subiterations, we show how radiosity solutions can be generated without the need for processor synchronization between iterations of the progressive refinement algorithm. The only significant source of interprocessor communication required by our method is for visibility calculations. We also describe a perceptually driven approach to visibility estimation, which employs an efficient volumetric grid structure and attempts to reduce the amount of interprocessor communication by approximating visibility queries between distant patches. Our algorithm also eliminates the need for dynamic load balancing until the end of the solution process and is shown to achieve a superlinear speedup in many situations.
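
For context, the finite element radiosity system being solved is the classical discrete radiosity equation (standard background, not specific to this paper):

    B_i = E_i + \rho_i \sum_j F_{ij} B_j,

where B_i, E_i, and ρ_i are the radiosity, emission, and reflectivity of patch i and F_{ij} is the form factor from patch i to patch j. In progressive refinement, the patch with the most unshot energy repeatedly "shoots" its energy to the receivers; a subiteration as defined above restricts one such shot to a subset of the receiver patches, which is what allows the queue-based scheduler to assign work to processors without synchronizing between iterations.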

Journal ArticleDOI
TL;DR: A more complete surface micro-geometry description is presented, suitable for some common surface defects, including porosity and micro-cracks; both of them are crucial surface features since they strongly influence light reflection properties.
Abstract: The behavior of light interacting with materials is a crucial factor in achieving a high degree of realism in image synthesis. Local illumination processes, describing the interactions between a point of the surface and a shading ray, are evaluated by bidirectional reflectance distribution functions (BRDFs). Current theoretical BRDFs use surface models restricted to roughness only, sometimes at different scales. We present a more complete surface micro-geometry description, suitable for some common surface defects, including porosity and micro-cracks; both of them are crucial surface features since they strongly influence light reflection properties. These new features are modeled by holes inserted in the surface profile, depending on two parameters: the proportion of surface covered by the defects and the mean geometric characteristic of these defects. In order to preserve the advantages and characteristics of existing BRDFs, a postprocessing method is adopted (we integrate our technique into existing models, instead of defining a completely new one). Beyond providing graphical results closely matching real behaviors, this method moreover opens the way to various important new considerations in computer graphics (for example, changes of appearance due to the degree of humidity).
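
For reference, the BRDF referred to above is the standard ratio of reflected radiance to incident irradiance:

    f_r(\omega_i, \omega_o) = \frac{dL_o(\omega_o)}{L_i(\omega_i)\, \cos\theta_i\, d\omega_i},

where θ_i is the angle between the incident direction and the surface normal. The paper's contribution is to let this reflectance depend on two additional micro-geometry parameters, the proportion of the surface covered by defects (porosity, micro-cracks) and their mean geometric characteristic, applied as a postprocess on top of existing roughness-based BRDF models.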

Journal ArticleDOI
TL;DR: On the 5CBEM model, the number of visible triangles PLP finds with budgets of 1, 2, 5, and 10 percent is compared against the exact visible set and against a centroid sorting algorithm, which required 6-7 seconds per frame.
Abstract: Fig. 11. 5CBEM results. (a) The top curve, labeled Exact, is the number of visible triangles for each frame; the next four curves are the numbers of visible triangles PLP finds with budgets of 10, 5, 2, and 1 percent (top to bottom); the bottom curve is the number of visible triangles found by the centroid sorting algorithm. (b) Rendering times in seconds for each curve shown in (a), except the centroid sorting algorithm, which required 6-7 seconds per frame. (c) Image of all the visible triangles. (d) Image of the 10 percent PLP visible set.

Journal ArticleDOI
TL;DR: This issue contains extended versions of five outstanding papers from the IEEE Visualization ‘99 conference and the IEEE 1999 Symposium on Information Visualization (InfoVis ’99) held in San Francisco, California, and presents new research in one of the newest areas of visualization research, information visualization.
Abstract: This issue contains extended versions of five outstanding papers from the IEEE Visualization ‘99 conference and the IEEE 1999 Symposium on Information Visualization (InfoVis ‘99) held in San Francisco, California. The authors of these papers were invited to significantly extend their work and submit journal quality papers for this special issue. The papers were then reviewed and revised according to suggestions by expert referees. These papers represent a sampling of the state-of-the-art research from a diverse collection of research areas presented at IEEE Visualization ‘99 and InfoVis ‘99. The first two papers discuss important systems issues and new techniques for visualization of large scale data sets. The next two papers discuss new techniques in classical areas of visualization research: flow visualization and medical visualization. The final paper presents new research in one of the newest areas of visualization research, information visualization. Systems issues and visualization performance are still important research problems because dataset sizes are growing as rapidly as (or more rapidly than) processor speed, data transfer rates, and memory capacity. Sutton and Hansen present a new data structure and algorithm for interactive visualization and isosurface extraction of time-varying datasets. Their Temporal Branch-on-Need Octree reduces I/O bottlenecks and achieves high performance isosurface extraction for time-varying fields. New techniques for the time-critical rendering of high-complexity scenes are presented in the paper by Klosowski and Silva. Their system takes the approach of producing partially-correct images using a new visibility culling algorithm that can be used for time-critical rendering of visualization data, as well as architectural scenes and other geometric datasets. There are still many open research problems in classic application areas of visualization research. The paper by Kindlmann, Weinstein, and Hart presents new techniques for volume rendering medical data from a relatively new source, diffusion-weighted magnetic resonance imaging. They present extensions to previous volume visualization techniques, including barycentric opacity maps, hue-balls, and lit-tensors. Diewald, Preusser, and Rumpf present new techniques in another important area of visualization research: vector field visualization. Their work uses nonlinear anisotropic diffusion to aid the perception of complex flow patterns and flow fields. The work by Fua, Ward, and Rundensteiner presents new techniques in one of the newest areas of visualization research: information visualization. Their work on structure-based brushes helps to solve the navigation problem for visualization of hierarchically organized data. These five papers demonstrate the range of diverse research and the high-quality innovation presented at IEEE Visualization ‘99 and InfoVis ‘99.