
Showing papers in "IEEE Transactions on Visualization and Computer Graphics in 1997"


Journal ArticleDOI
TL;DR: The paper describes a fast algorithm for scattered data interpolation and approximation that makes use of a coarse to fine hierarchy of control lattices to generate a sequence of bicubic B-spline functions whose sum approaches the desired interpolation function.
Abstract: The paper describes a fast algorithm for scattered data interpolation and approximation. Multilevel B-splines are introduced to compute a C² continuous surface through a set of irregularly spaced points. The algorithm makes use of a coarse to fine hierarchy of control lattices to generate a sequence of bicubic B-spline functions whose sum approaches the desired interpolation function. Large performance gains are realized by using B-spline refinement to reduce the sum of these functions into one equivalent B-spline function. Experimental results demonstrate that high fidelity reconstruction is possible from a selected set of sparse and irregular samples.
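
As a rough illustration of the basic building block, the sketch below implements one level of the B-spline approximation (BA) step that such a multilevel scheme applies at each control-lattice resolution, following our reading of the formulation: each scattered point proposes values for the 4x4 control points that influence it, and overlapping proposals are blended by squared-weight averaging. Function names, and the assumption that sample coordinates are already scaled to the lattice domain, are ours; the multilevel refinement and lattice-merging steps are not reproduced.

```python
import numpy as np

def cubic_bspline_weights(t):
    """Uniform cubic B-spline basis values B0..B3 at fractional offset t in [0, 1)."""
    return np.array([
        (1 - t) ** 3 / 6.0,
        (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0,
        (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0,
        t ** 3 / 6.0,
    ])

def ba_level(points, m, n):
    """One B-spline approximation (BA) pass: fit an (m+3) x (n+3) control lattice
    to scattered samples (x, y, z) with x in [0, m) and y in [0, n)."""
    num = np.zeros((m + 3, n + 3))   # sum of w^2 * phi_c over all samples
    den = np.zeros((m + 3, n + 3))   # sum of w^2 over all samples
    for x, y, z in points:
        i, j = int(x), int(y)
        wx = cubic_bspline_weights(x - i)
        wy = cubic_bspline_weights(y - j)
        w = np.outer(wx, wy)                   # 4x4 weights of the influenced control points
        phi_c = w * z / np.sum(w ** 2)         # this sample's proposal for each control point
        num[i:i + 4, j:j + 4] += w ** 2 * phi_c
        den[i:i + 4, j:j + 4] += w ** 2
    return np.where(den > 0, num / np.maximum(den, 1e-12), 0.0)

def evaluate(phi, x, y):
    """Evaluate the bicubic B-spline surface defined by control lattice phi at (x, y)."""
    i, j = int(x), int(y)
    wx = cubic_bspline_weights(x - i)
    wy = cubic_bspline_weights(y - j)
    return float(wx @ phi[i:i + 4, j:j + 4] @ wy)

pts = [(1.3, 2.7, 0.5), (3.1, 0.4, 1.0), (2.2, 2.2, -0.3)]
phi = ba_level(pts, m=4, n=4)
print(evaluate(phi, 1.3, 2.7))   # approximately reproduces the sample where points are sparse
```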

1,054 citations


Journal ArticleDOI
TL;DR: A tone reproduction operator is presented that preserves visibility in high dynamic range scenes and introduces a new histogram adjustment technique, based on the population of local adaptation luminances in a scene, that incorporates models for human contrast sensitivity, glare, spatial acuity, and color sensitivity.
Abstract: We present a tone reproduction operator that preserves visibility in high dynamic range scenes. Our method introduces a new histogram adjustment technique, based on the population of local adaptation luminances in a scene. To match subjective viewing experience, the method incorporates models for human contrast sensitivity, glare, spatial acuity, and color sensitivity. We compare our results to previous work and present examples of our techniques applied to lighting simulation and electronic photography.
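
For intuition, here is a minimal sketch of the core idea (equalizing the cumulative histogram of log luminances into the display's log range), leaving out the paper's linear-ceiling constraint and its contrast, glare, acuity, and color models; in the actual operator the histogram is built from foveal-sized adaptation luminances rather than raw pixels. All names and parameter defaults are illustrative.

```python
import numpy as np

def naive_histogram_tonemap(world_lum, ld_min=1.0, ld_max=100.0, nbins=100):
    """Map world luminances to display luminances by equalizing the histogram of
    log luminances. Omits the contrast-based ceiling and the human-vision terms
    described in the paper; histogram is built from raw pixels for brevity."""
    b = np.log(np.maximum(world_lum, 1e-8))           # log world luminance
    hist, edges = np.histogram(b, bins=nbins)
    cdf = np.cumsum(hist) / hist.sum()                # cumulative distribution P(b)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = np.interp(b, centers, cdf)                    # P evaluated at each pixel
    log_ld = np.log(ld_min) + (np.log(ld_max) - np.log(ld_min)) * p
    return np.exp(log_ld)

hdr = np.random.lognormal(mean=0.0, sigma=2.0, size=(64, 64))  # synthetic HDR luminances
ld = naive_histogram_tonemap(hdr)
print(ld.min(), ld.max())   # mapped values fall within the display range [1, 100]
```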

723 citations


Journal ArticleDOI
TL;DR: The approach is more effective than the current level-of-detail-based rendering approaches for most scientific visualization applications, where there are a limited number of highly complex objects that stay relatively close to the viewer.
Abstract: We present an algorithm for performing adaptive real-time level-of-detail-based rendering for triangulated polygonal models. The simplifications are dependent on viewing direction, lighting, and visibility and are performed by taking advantage of image-space, object-space, and frame-to-frame coherences. In contrast to the traditional approaches of precomputing a fixed number of level-of-detail representations for a given object, our approach involves statically generating a continuous level-of-detail representation for the object. This representation is then used at run time to guide the selection of appropriate triangles for display. The list of displayed triangles is updated incrementally from one frame to the next. Our approach is more effective than the current level-of-detail-based rendering approaches for most scientific visualization applications, where there are a limited number of highly complex objects that stay relatively close to the viewer. Our approach is applicable for scalar (such as distance from the viewer) as well as vector (such as normal direction) attributes.

269 citations


Journal ArticleDOI
TL;DR: The interval tree is an optimally efficient search structure proposed by Edelsbrunner (1980) to retrieve intervals on the real line that contain a given query value and the application of such a data structure to the fast location of cells intersected by an isosurface in a volume dataset is proposed.
Abstract: The interval tree is an optimally efficient search structure proposed by Edelsbrunner (1980) to retrieve intervals on the real line that contain a given query value. We propose the application of such a data structure to the fast location of cells intersected by an isosurface in a volume dataset. The resulting search method can be applied to both structured and unstructured volume datasets, and it can be applied incrementally to exploit coherence between isosurfaces. We also address issues of storage requirements, and operations other than the location of cells, whose impact is relevant in the whole isosurface extraction task. In the case of unstructured grids, the overhead, due to the search structure, is compatible with the storage cost of the dataset, and local coherence in the computation of isosurface patches is exploited through a hash table. In the case of a structured dataset, a new conceptual organization is adopted, called the chess-board approach, which exploits the regular structure of the dataset to reduce memory usage and to exploit local coherence. In both cases, efficiency in the computation of surface normals on the isosurface is obtained by a precomputation of the gradients at the vertices of the mesh. Experiments on different kinds of input show that the practical performance of the method reflects its theoretical optimality.
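
A minimal sketch of the underlying idea, assuming each cell is summarized by the interval [minimum vertex value, maximum vertex value]: an interval tree answers the stabbing query "which intervals contain the isovalue q", returning the candidate active cells. This is the textbook structure, not the paper's storage-optimized variant or its chess-board organization.

```python
class IntervalTree:
    """Static interval tree for stabbing queries over closed intervals (lo, hi, payload).
    For isosurfacing, each cell contributes (min vertex value, max vertex value, cell id)."""

    def __init__(self, intervals):
        self.empty = not intervals
        if self.empty:
            return
        values = sorted(v for lo, hi, _ in intervals for v in (lo, hi))
        self.split = values[len(values) // 2]            # discriminant (median endpoint)
        left, right, here = [], [], []
        for iv in intervals:
            lo, hi, _ = iv
            if hi < self.split:
                left.append(iv)
            elif lo > self.split:
                right.append(iv)
            else:
                here.append(iv)                           # spans the discriminant
        self.by_lo = sorted(here, key=lambda iv: iv[0])                   # ascending left ends
        self.by_hi = sorted(here, key=lambda iv: iv[1], reverse=True)     # descending right ends
        self.left = IntervalTree(left) if left else None
        self.right = IntervalTree(right) if right else None

    def stab(self, q, out):
        """Append the payloads of every stored interval containing q to out."""
        if self.empty:
            return out
        if q < self.split:
            for iv in self.by_lo:            # intervals here with left end <= q contain q
                if iv[0] > q:
                    break
                out.append(iv[2])
            if self.left:
                self.left.stab(q, out)
        else:
            for iv in self.by_hi:            # intervals here with right end >= q contain q
                if iv[1] < q:
                    break
                out.append(iv[2])
            if self.right:
                self.right.stab(q, out)
        return out

cells = [(0.0, 2.0, "c0"), (1.5, 3.0, "c1"), (2.5, 4.0, "c2")]
print(IntervalTree(cells).stab(1.8, []))   # reports 'c1' and 'c0' as active for isovalue 1.8
```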

214 citations


Journal ArticleDOI
TL;DR: A technique which isolates and tracks full-volume representations of regions of interest from 3D regular and curvilinear computational fluid dynamics datasets and can be used to enhance isosurface visualization and volume rendering by color coding individual regions.
Abstract: Visualizing 3D time-varying fluid datasets is difficult because of the immense amount of data to be processed and understood. These datasets contain many evolving amorphous regions, and it is difficult to observe patterns and visually follow regions of interest. In this paper, we present a technique which isolates and tracks full-volume representations of regions of interest from 3D regular and curvilinear computational fluid dynamics datasets. Connected voxel regions ("features") are extracted from each time step and matched to features in subsequent time steps. Spatial overlap is used to determine the matching. The features from each time step are stored in octree forests to speed up the matching process. Once the features have been identified and tracked, the properties of the features and their evolutionary history can be computed. This information can be used to enhance isosurface visualization and volume rendering by color coding individual regions. We demonstrate the algorithm on four 3D time-varying simulations from ongoing research in computational fluid dynamics and show how tracking can significantly improve and facilitate the processing of massive datasets.
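
A toy sketch of the overlap-based matching step, with features reduced to plain sets of voxel indices; the octree forests that accelerate the search, and the classification of continuation, split, merge, and dissipation events, are omitted. Names and the min_overlap parameter are ours.

```python
def track_features(features_t, features_t1, min_overlap=1):
    """Match labeled features between consecutive time steps by spatial overlap.
    features_t / features_t1: dict mapping feature id -> set of voxel indices.
    Returns a dict mapping each feature id at time t to its best match at t+1
    (or None if nothing overlaps). The octree-based search of the paper is
    replaced here by brute-force set intersection."""
    matches = {}
    for fid, voxels in features_t.items():
        best, best_overlap = None, 0
        for gid, voxels1 in features_t1.items():
            overlap = len(voxels & voxels1)
            if overlap > best_overlap:
                best, best_overlap = gid, overlap
        matches[fid] = best if best_overlap >= min_overlap else None
    return matches

# Example: feature 1 continues as feature 7; feature 2 has no successor.
t0 = {1: {(0, 0, 0), (0, 0, 1)}, 2: {(5, 5, 5)}}
t1 = {7: {(0, 0, 1), (0, 0, 2)}}
print(track_features(t0, t1))   # {1: 7, 2: None}
```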

206 citations


Journal ArticleDOI
TL;DR: The goal is to enable normally inanimate graphics objects, such as teapots and tables, to become animated, and learn to move about in a charming, cartoon-like manner, by implementing a system that can transform a wide class of objects into dynamic characters.
Abstract: Free form deformations (FFDs) are a popular tool for modeling and keyframe animation. The paper extends the use of FFDs to a dynamic setting. Our goal is to enable normally inanimate graphics objects, such as teapots and tables, to become animated, and learn to move about in a charming, cartoon-like manner. To achieve this goal, we implement a system that can transform a wide class of objects into dynamic characters. Our formulation is based on parameterized hierarchical FFDs augmented with Lagrangian dynamics, and provides an efficient way to animate and control the simulated characters. Objects are assigned mass distributions and elastic deformation properties, which allow them to translate, rotate, and deform according to internal and external forces. In addition, we implement an automated optimization process that searches for suitable control strategies. The primary contributions of the work are threefold. First, we formulate a dynamic generalization of conventional, geometric FFDs. The formulation employs deformation modes which are tailored by the user and are expressed in terms of FFDs. Second, the formulation accommodates a hierarchy of dynamic FFDs that can be used to model local as well as global deformations. Third, the deformation modes can be active, thereby producing locomotion.
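
The dynamic formulation builds on conventional geometric FFDs; as background, the sketch below evaluates a plain trivariate Bernstein-polynomial FFD lattice (Sederberg-Parry style), not the paper's dynamic, hierarchical extension. Function names are ours.

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein polynomial B_i^n(t)."""
    return comb(n, i) * (t ** i) * ((1 - t) ** (n - i))

def ffd(point_stu, control):
    """Geometric FFD: map local coordinates (s, t, u) in [0,1]^3 through a lattice
    of control points with shape (l+1, m+1, n+1, 3)."""
    s, t, u = point_stu
    l, m, n = control.shape[0] - 1, control.shape[1] - 1, control.shape[2] - 1
    x = np.zeros(3)
    for i in range(l + 1):
        for j in range(m + 1):
            for k in range(n + 1):
                w = bernstein(l, i, s) * bernstein(m, j, t) * bernstein(n, k, u)
                x += w * control[i, j, k]
    return x

# An undeformed lattice reproduces the identity map; displacing control points deforms space.
ctrl = np.zeros((3, 3, 3, 3))
for i in range(3):
    for j in range(3):
        for k in range(3):
            ctrl[i, j, k] = (i / 2.0, j / 2.0, k / 2.0)
print(ffd((0.25, 0.5, 0.75), ctrl))   # -> [0.25, 0.5, 0.75]
```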

188 citations


Journal ArticleDOI
TL;DR: Investigations into the use of sparsely-distributed discrete, opaque texture as an artistic device for more explicitly indicating the relative depth of a transparent surface and for communicating the essential features of its 3D shape in an intuitively meaningful and minimally occluding way are described.
Abstract: Transparency can be a useful device for depicting multiple overlapping surfaces in a single image. The challenge is to render the transparent surfaces in such a way that their 3D shape can be readily understood and their depth distance from underlying structures clearly perceived. This paper describes our investigations into the use of sparsely-distributed discrete, opaque texture as an artistic device for more explicitly indicating the relative depth of a transparent surface and for communicating the essential features of its 3D shape in an intuitively meaningful and minimally occluding way. The driving application for this work is the visualization of layered surfaces in radiation therapy treatment planning data, and the technique is illustrated on transparent isointensity surfaces of radiation dose. We describe the perceptual motivation and artistic inspiration for defining a stroke texture that is locally oriented in the direction of greatest normal curvature (and in which individual strokes are of a length proportional to the magnitude of the curvature in the direction they indicate), and we discuss two alternative methods for applying this texture to isointensity surfaces defined in a volume. We propose an experimental paradigm for objectively measuring observers' ability to judge the shape and depth of a layered transparent surface, in the course of a task which is relevant to the needs of radiotherapy treatment planning, and use this paradigm to evaluate the practical effectiveness of our approach through a controlled observer experiment based on images generated from actual clinical data.

175 citations


Journal ArticleDOI
TL;DR: A new method for analyzing, classifying, and evaluating filters that can be applied to interpolation filters as well as to arbitrary derivative filters of any order, based on the Taylor series expansion of the convolution sum is described.
Abstract: We describe a new method for analyzing, classifying, and evaluating filters that can be applied to interpolation filters as well as to arbitrary derivative filters of any order. Our analysis is based on the Taylor series expansion of the convolution sum. Our analysis shows the need and derives the method for the normalization of derivative filter weights. Under certain minimal restrictions of the underlying function, we are able to compute tight absolute error bounds of the reconstruction process. We demonstrate the utilization of our methods to the analysis of the class of cubic BC-spline filters. As our technique is not restricted to interpolation filters, we are able to show that the Catmull-Rom spline filter and its derivative are the most accurate reconstruction and derivative filters, respectively, among the class of BC-spline filters. We also present a new derivative filter which features better spatial accuracy than any derivative BC-spline filter, and is optimal within our framework. We conclude by demonstrating the use of these optimal filters for accurate interpolation and gradient estimation in volume rendering.
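
For concreteness, here is a small sketch of using the Catmull-Rom kernel (the BC-spline with B=0, C=1/2) and its analytic derivative in the 1D convolution sum for value and derivative reconstruction; the paper's Taylor-series error analysis, normalization discussion, and new optimal derivative filter are not reproduced.

```python
import numpy as np

def catmull_rom(x):
    """Catmull-Rom reconstruction kernel (BC-spline with B=0, C=1/2)."""
    a = abs(x)
    if a <= 1.0:
        return 1.5 * a**3 - 2.5 * a**2 + 1.0
    if a <= 2.0:
        return -0.5 * a**3 + 2.5 * a**2 - 4.0 * a + 2.0
    return 0.0

def catmull_rom_deriv(x):
    """Analytic derivative of the Catmull-Rom kernel (an odd function)."""
    s = 1.0 if x >= 0 else -1.0
    a = abs(x)
    if a <= 1.0:
        return s * (4.5 * a**2 - 5.0 * a)
    if a <= 2.0:
        return s * (-1.5 * a**2 + 5.0 * a - 4.0)
    return 0.0

def reconstruct(samples, x, kernel):
    """Convolution sum: f(x) ~= sum_k samples[k] * kernel(x - k)."""
    k0 = int(np.floor(x))
    return sum(samples[k] * kernel(x - k)
               for k in range(max(k0 - 1, 0), min(k0 + 3, len(samples))))

f = np.sin(0.3 * np.arange(32))                  # sampled test signal
print(reconstruct(f, 10.4, catmull_rom))         # interpolated value, near sin(3.12)
print(reconstruct(f, 10.4, catmull_rom_deriv))   # derivative estimate, near 0.3*cos(3.12)
```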

171 citations


Journal ArticleDOI
TL;DR: A general approach for designing and animating complex deformable models with implicit surfaces and builds on the specific properties of implicit surfaces for modeling soft inelastic substances capable of separation and fusion that maintain a constant volume when animated.
Abstract: The paper presents a general approach for designing and animating complex deformable models with implicit surfaces. Implicit surfaces are introduced as an extra layer coating any kind of structure that moves and deforms over time. Offering a compact definition of a smooth surface around an object, they provide an efficient collision detection mechanism. The implicit layer deforms in order to generate exact contact surfaces between colliding bodies. A simple physically based model approximating elastic behavior is then used for computing collision response. The implicit formulation also eases the control of the object's volume with a new method based on local controllers. We present two different applications that illustrate the benefits of these techniques. First, the animation of simple characters made of articulated skeletons coated with implicit flesh exploits the compactness and enhanced control of the model. The second builds on the specific properties of implicit surfaces for modeling soft inelastic substances capable of separation and fusion that maintain a constant volume when animated.

158 citations


Journal ArticleDOI
TL;DR: A topological rule is shown that puts a constraint on the topology of tensor fields defined across surfaces, extending to tensor fields the Poincaré-Hopf theorem for vector fields.
Abstract: The authors study the topology of symmetric, second-order tensor fields. The results of the study can be readily extended to include general tensor fields through linear combination of symmetric tensor fields and vector fields. The goal is to represent their complex structure by a simple set of carefully chosen points, lines, and surfaces analogous to approaches in vector field topology. They extract topological skeletons of the eigenvector fields and use them for a compact, comprehensive description of the tensor field. Their approach is based on the premise: "analyze, then visualize". The basic constituents of tensor topology are the degenerate points, or points where eigenvalues are equal to each other. Degenerate points play a similar role as critical points in vector fields. In tensor fields they identify two kinds of elementary degenerate points, which they call wedge points and trisector points. They can combine to form more familiar singularities-such as saddles, nodes, centers, or foci. However, these are generally unstable structures in tensor fields. Based on the notions developed for 2D tensor fields, they extend the theory to include 3D degenerate points. Examples are given on the use of tensor field topology for the interpretation of physical systems.

154 citations


Journal ArticleDOI
TL;DR: A system to represent and visualize scalar volume data at multiple resolutions on a multiresolution model based on tetrahedral meshes with scattered vertices that can be obtained from any initial dataset is presented.
Abstract: A system to represent and visualize scalar volume data at multiple resolutions is presented. The system is built on a multiresolution model based on tetrahedral meshes with scattered vertices that can be obtained from any initial dataset. The model is built off-line through data simplification techniques, and stored in a compact data structure that supports fast on-line access. The system supports interactive visualization of a representation at an arbitrary level of resolution through isosurface and projective methods. The user can interactively adapt the quality of visualization to requirements of a specific application task and to the performance of a specific hardware platform. Representations at different resolutions can be used together to further enhance interaction and performance through progressive and multiresolution rendering.

Journal ArticleDOI
TL;DR: An incremental algorithm for collision detection between general polygonal models in dynamic environments that combines a hierarchical representation with incremental computation to rapidly detect collisions and highlights its performance on different applications.
Abstract: Fast and accurate collision detection between general polygonal models is a fundamental problem in physically based and geometric modeling, robotics, animation, and computer-simulated environments. Most earlier collision detection algorithms are either restricted to a class of models (such as convex polytopes) or are not fast enough for practical applications. The authors present an incremental algorithm for collision detection between general polygonal models in dynamic environments. The algorithm combines a hierarchical representation with incremental computation to rapidly detect collisions. It makes use of coherence between successive instances to efficiently determine the number of object features interacting. For each pair of objects, it tracks the closest features between them on their respective convex hulls. It detects the objects' penetration using pseudo internal Voronoi cells and constructs the penetration region, thus identifying the regions of contact on the convex hulls. The features associated with these regions are represented in a precomputed hierarchy. The algorithm uses a coherence based approach to quickly traverse the precomputed hierarchy and check for possible collisions between the features. They highlight its performance on different applications.

Journal ArticleDOI
TL;DR: An out-of-core approach for interactive streamline construction on large unstructured tetrahedral meshes containing millions of elements using an octree to partition and restructure the raw data into subsets stored into disk files for fast data retrieval.
Abstract: This paper presents an out-of-core approach for interactive streamline construction on large unstructured tetrahedral meshes containing millions of elements. The out-of-core algorithm uses an octree to partition and restructure the raw data into subsets stored into disk files for fast data retrieval. A memory management policy tailored to the streamline calculations is used such that, during the streamline construction, only a very small amount of data are brought into the main memory on demand. By carefully scheduling computation and data fetching, the overhead of reading data from the disk is significantly reduced and good memory performance results. This out-of-core algorithm makes possible interactive streamline visualization of large unstructured-grid data sets on a single mid-range workstation with relatively low main-memory capacity: 5-15 megabytes. We also demonstrate that this approach is much more efficient than relying on virtual memory and operating system's paging algorithms.
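
A schematic sketch of the demand-driven pattern, assuming a regular block decomposition instead of the paper's octree, a synthetic in-memory stand-in for the disk files, nearest-sample velocity lookup, and a FIFO eviction policy; none of these choices are claimed to match the paper's memory-management policy.

```python
import numpy as np

class BlockCache:
    """Demand-paged cache of velocity-field blocks. In the paper the blocks are
    octree-partitioned subsets stored in disk files; here _load() synthesizes a
    simple swirling field so the sketch runs without data files, and the FIFO
    eviction policy is an illustrative placeholder."""

    def __init__(self, block_size=16, max_blocks=8):
        self.block_size, self.max_blocks = block_size, max_blocks
        self.blocks, self.order = {}, []

    def _load(self, key):
        # Stand-in for disk I/O such as np.load("block_%d_%d_%d.npy" % key).
        bs = self.block_size
        i0, j0, k0 = (c * bs for c in key)
        x, y, z = np.meshgrid(np.arange(i0, i0 + bs), np.arange(j0, j0 + bs),
                              np.arange(k0, k0 + bs), indexing="ij")
        return 0.05 * np.stack([-(y - 32.0), x - 32.0, np.ones_like(x, float)], axis=-1)

    def velocity(self, p):
        key = tuple(int(c) // self.block_size for c in p)
        if key not in self.blocks:
            if len(self.order) >= self.max_blocks:           # evict the oldest block
                self.blocks.pop(self.order.pop(0))
            self.blocks[key] = self._load(key)               # fetch on demand
            self.order.append(key)
        local = tuple(int(c) % self.block_size for c in p)   # nearest-sample lookup
        return self.blocks[key][local]

def streamline(cache, seed, h=0.5, steps=100):
    """Midpoint (RK2) integration; each velocity lookup may trigger a block fetch."""
    p = np.asarray(seed, float)
    path = [p.copy()]
    for _ in range(steps):
        v1 = cache.velocity(p)
        v2 = cache.velocity(p + 0.5 * h * v1)
        p = p + h * v2
        path.append(p.copy())
    return path

print(streamline(BlockCache(), seed=(40.0, 32.0, 1.0))[-1])
```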

Journal ArticleDOI
TL;DR: A new technique for interactive vector field visualization using large numbers of properly illuminated field lines, taking into account ambient, diffuse and specular reflection terms, as well as transparency and depth cueing, is presented.
Abstract: A new technique for interactive vector field visualization using large numbers of properly illuminated field lines is presented. Taking into account ambient, diffuse and specular reflection terms, as well as transparency and depth cueing, we employ a realistic shading model which significantly increases the quality and realism of the resulting images. While many graphics workstations offer hardware support for illuminating surface primitives, usually no means for an accurate shading of line primitives are provided. However, we show that proper illumination of lines can be implemented by exploiting the texture mapping capabilities of modern graphics hardware. In this way, high rendering performance with interactive frame rates can be achieved. We apply the technique to render large numbers of integral curves of a vector field. The impression of the resulting images can be further improved by a number of visual enhancements, like color coding or particle animation. We also describe methods for controlling the distribution of field lines in space. These methods enable us to use illuminated field lines for interactive exploration of vector fields.
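
For background, a common line-lighting formulation in this spirit replaces the undefined surface normal by the choice that maximizes each Phong term, which leaves both terms depending only on L·T and V·T and therefore amenable to a 2D texture lookup; we believe this is the kind of substitution the technique relies on, though the exact expressions and sign conventions in the paper may differ. The sketch below evaluates such a model per vertex on the CPU.

```python
import numpy as np

def normalize(v):
    v = np.asarray(v, float)
    return v / np.linalg.norm(v)

def line_intensity(tangent, to_light, to_eye, ka=0.1, kd=0.6, ks=0.3, n=16):
    """Phong-style intensity for a line primitive with tangent T. Because a line
    has no unique normal, the dot products are replaced by their maxima over all
    normals perpendicular to T:
        L.N -> sqrt(1 - (L.T)^2)
        V.R -> sqrt(1 - (L.T)^2) * sqrt(1 - (V.T)^2) - (L.T)(V.T)
    Both depend only on L.T and V.T, which is what allows the two shading terms
    to be precomputed in a 2D texture indexed by (L.T, V.T) on graphics hardware."""
    T, L, V = normalize(tangent), normalize(to_light), normalize(to_eye)
    lt, vt = float(L @ T), float(V @ T)
    sin_l = np.sqrt(max(0.0, 1.0 - lt * lt))
    sin_v = np.sqrt(max(0.0, 1.0 - vt * vt))
    diffuse = sin_l
    specular = max(0.0, sin_l * sin_v - lt * vt) ** n
    return ka + kd * diffuse + ks * specular

# The highlight is strongest when the viewer lies on the mirror cone of the light.
print(line_intensity([1, 0, 0], to_light=[0, 0, 1], to_eye=[0, 0, 1]))  # 1.0
print(line_intensity([1, 0, 0], to_light=[0, 0, 1], to_eye=[1, 0, 0]))  # 0.7, no highlight
```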

Journal ArticleDOI
TL;DR: An analytical definition of a discrete hypersphere with arbitrary center, radius, and thickness in dimension n is introduced and is called a discrete analytical hypersPhere.
Abstract: An analytical definition of a discrete hypersphere with arbitrary center, radius, and thickness in dimension n is introduced. The new discrete hypersphere is called a discrete analytical hypersphere. The hypersphere has important original properties including exact point localization, space tiling, k-separation, etc. These properties are almost obvious with this new discrete analytical definition contrary to the classical approaches based on digitization schemes. The analytically defined circle is compared to Pham's (1992) classically defined circle. Efficient incremental circle and hypersphere generation algorithms are provided.
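
One plausible reading of such an analytical definition keeps the integer points whose squared distance to the center falls in a band determined by the radius and thickness; the brute-force sketch below enumerates a 2D instance (a discrete analytical circle). The inequality form and the enumeration strategy are our assumptions; the paper's incremental generation algorithms are not reproduced.

```python
from itertools import product

def discrete_analytical_hypersphere(center, radius, thickness, dim=2):
    """Enumerate the integer points p with
         (R - w/2)^2 <= sum_i (p_i - c_i)^2 < (R + w/2)^2,
    one plausible reading of the analytical definition with center c, radius R,
    and thickness w. Brute-force enumeration over a bounding box."""
    lo2 = max(0.0, radius - thickness / 2.0) ** 2
    hi2 = (radius + thickness / 2.0) ** 2
    bound = int(radius + thickness / 2.0) + 1
    pts = []
    for offset in product(range(-bound, bound + 1), repeat=dim):
        p = tuple(round(c) + o for c, o in zip(center, offset))
        d2 = sum((pi - ci) ** 2 for pi, ci in zip(p, center))
        if lo2 <= d2 < hi2:
            pts.append(p)
    return pts

# A thickness-1 discrete analytical circle of radius 5 about the origin.
print(len(discrete_analytical_hypersphere((0, 0), 5.0, 1.0)))
```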

Journal ArticleDOI
TL;DR: This work examines a wavelet basis representation of reflectance functions, and the algorithms required for efficient point-wise reconstruction of the BRDF, and shows that the nonstandard wavelet decomposition leads to considerably more efficient algorithms than the standard wavelet decomposition.
Abstract: Analytical models of light reflection are in common use in computer graphics. However, models based on measured reflectance data promise increased realism by making it possible to simulate many more types of surfaces to a greater level of accuracy than with analytical models. They also require less expert knowledge about the illumination models and their parameters. There are a number of hurdles to using measured reflectance functions, however. The data sets are very large. A reflectance distribution function sampled at five degrees angular resolution, arguably sparse enough to miss highlights and other high frequency effects, can easily require over a million samples, which in turn amount to over four megabytes of data. These data then also require some form of interpolation and filtering to be used effectively. We examine issues of representation of measured reflectance distribution functions. In particular, we examine a wavelet basis representation of reflectance functions, and the algorithms required for efficient point-wise reconstruction of the BRDF. We show that the nonstandard wavelet decomposition leads to considerably more efficient algorithms than the standard wavelet decomposition. We also show that thresholding allows considerable improvement in running times, without unduly sacrificing image quality.
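
To illustrate what "nonstandard" means here, the sketch below performs a nonstandard 2D Haar decomposition (alternating one row step and one column step, then recursing on the low-pass quadrant) on a stand-in 2D slice, followed by simple coefficient thresholding; the measured BRDFs in the paper are four-dimensional, and the point-wise reconstruction algorithms are not reproduced.

```python
import numpy as np

def haar_step(v):
    """One level of the (un-normalized) 1D Haar transform: averages, then details."""
    v = np.asarray(v, float)
    return np.concatenate([(v[0::2] + v[1::2]) / 2.0, (v[0::2] - v[1::2]) / 2.0])

def nonstandard_haar2d(a):
    """Nonstandard 2D Haar decomposition: one row step and one column step per level,
    recursing on the low-pass (upper-left) quadrant. Requires a square input with
    power-of-two size."""
    a = np.asarray(a, float).copy()
    size = a.shape[0]
    while size > 1:
        for i in range(size):                        # one step on each row of the quadrant
            a[i, :size] = haar_step(a[i, :size])
        for j in range(size):                        # one step on each column of the quadrant
            a[:size, j] = haar_step(a[:size, j])
        size //= 2                                   # recurse on the LL quadrant
    return a

brdf_slice = np.random.rand(8, 8)                    # stand-in for a tabulated BRDF slice
coeffs = nonstandard_haar2d(brdf_slice)
kept = np.where(np.abs(coeffs) > 0.05, coeffs, 0.0)  # threshold small coefficients
print(np.count_nonzero(kept), "of", coeffs.size, "coefficients kept")
```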

Journal ArticleDOI
TL;DR: The authors present the Virtual Data Visualizer, a highly interactive, immersive environment for visualizing and analyzing data that employs a data organization with data arranged hierarchically in classes that can be modified by the user within the virtual environment.
Abstract: The authors present the Virtual Data Visualizer, a highly interactive, immersive environment for visualizing and analyzing data. VDV is a set of tools for exploratory data visualization that does not focus on just one type of application. It employs a data organization with data arranged hierarchically in classes that can be modified by the user within the virtual environment. The class structure is the basis for bindings or mappings between data variables and glyph elements, which the user can make, change, or remove. The binding operation also has a set of defaults so that the user can quickly display the data. The VDV requires a user interface that is fairly complicated for a virtual environment. They have taken the approach that a combination of more-or-less traditional menus and more direct means of icon manipulation will do the job. This work shows that a useful interface and set of tools can be built. Controls in VDV include a panel for controlling animation of the data and zooming in and out. Tools include a workbench for changing the glyphs and setting glyph/variable ranges and a boundary tool for defining new classes spatially.

Journal ArticleDOI
TL;DR: Lazy sweep ray casting as discussed by the authors is a fast algorithm for rendering general irregular grids based on the sweep-plane paradigm, and it is able to accelerate ray casting for rendering irregular grids, including disconnected and nonconvex unstructured irregular grids.
Abstract: Lazy sweep ray casting is a fast algorithm for rendering general irregular grids. It is based on the sweep-plane paradigm, and it is able to accelerate ray casting for rendering irregular grids, including disconnected and nonconvex unstructured irregular grids (even with holes) with a rendering cost that decreases as the "disconnectedness" decreases. The algorithm is carefully tailored to exploit spatial coherence even if the image resolution differs substantially from the object space resolution. Lazy sweep ray casting has several desirable properties, including its generality, (depth-sorting) accuracy, low memory consumption, speed, simplicity of implementation and portability (e.g. no hardware dependencies). We establish the practicality of our method through experimental results based on our implementation, which is shown to be substantially faster (by up to two orders of magnitude) than other algorithms implemented in software. We also provide theoretical results, both lower and upper bounds, on the complexity of ray casting of irregular grids.

Journal ArticleDOI
TL;DR: Inverse kinetics for the center of mass and inverse kinematics for fixed end effectors can be combined to generate a posture displaying static balance, goal-oriented features, and an additional gravity optimization.
Abstract: We present a posture design paradigm for the positioning of complex characters. It is illustrated here on human figures. We exploit the inverse kinetics technique which allows the center of mass position control for postures with either single or multiple supports. For the multiple support case, we introduce a compatible flow model of the supporting influence. With this approach, we are able to handle continuous modification of the support distribution. By construction, inverse kinetics presents the same control architecture as inverse kinematics, and thus, it shows equivalent computing cost and similar intuitive concepts. Furthermore, inverse kinetics for the center of mass and inverse kinematics for fixed end effectors can be combined to generate a posture displaying static balance, goal-oriented features, and an additional gravity optimization.

Journal ArticleDOI
Noriko Nagata, T. Dobashi, Yoshitsugu Manabe, Teruo Usami, Seiji Inokuchi
TL;DR: Portions of photos of real pearls and the images generated by the present method were evaluated based on a scale of psychological evaluations of pearl-like quality, demonstrating, thereby, that not merely the generated images as a whole, but the respective parts of images can present such a pearl-like quality.
Abstract: Visual simulation using CG and VR has attracted wide attention in the machine vision field. This paper proposes a method of modeling and visualizing pearls that will be the central technique of a pearl-quality evaluation simulator. Pearls manifest a very specific optical phenomenon that is not dependent on the direction of the light source. To investigate this feature, we propose a physical model, called an illuminant model for multilayer film interference considering the multiple reflection in spherical bodies. The rendering algorithm has been configured from such representations of physical characteristics as interference, mirroring, and texture, which correspond, respectively, to the sense of depth, brightness, and grain that are the main evaluation factors obtained from psychological experiments. Further, portions of photos of real pearls and the images generated by the present method were evaluated based on a scale of psychological evaluations of pearl-like quality demonstrating, thereby, that not merely the generated images as a whole, but the respective parts of images can present such a pearl-like quality.

Journal ArticleDOI
TL;DR: The author shows that the shooting method exhibits a lower complexity than the gathering one, and under some constraints, it has a linear complexity, improvement over a previous result that pointed to an O(n log n) complexity.
Abstract: The author studies the error and complexity of the discrete random walk Monte Carlo technique for radiosity, using both the shooting and gathering methods. The author shows that the shooting method exhibits a lower complexity than the gathering one, and under some constraints, it has a linear complexity. This is an improvement over a previous result that pointed to an O(n log n) complexity. The author gives and compares three unbiased estimators for each method, and obtains closed forms and bounds for their variances. The author also bounds the expected value of the mean square error (MSE). Some of the results obtained are also shown to be valid for the nondiscrete gathering case. The author also gives bounds for the variances and MSE for the infinite path length estimators; these bounds might be useful in the study of biased estimators resulting from cutting off the infinite path.

Journal ArticleDOI
TL;DR: This work describes the construction of a C⁰ continuous surface consisting of rational quadratic surface patches interpolating the triangles of a triangulation T approximating a contour (isosurface) F(x, y, z)=T.
Abstract: Given a three dimensional (3D) array of function values F_{i,j,k} on a rectilinear grid, the marching cubes (MC) method is the most common technique used for computing a surface triangulation T approximating a contour (isosurface) F(x, y, z)=T. We describe the construction of a C⁰ continuous surface consisting of rational quadratic surface patches interpolating the triangles in T. We determine the Bézier control points of a single rational quadratic surface patch based on the coordinates of the vertices of the underlying triangle and the gradients and Hessians associated with the vertices.
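
For reference, the sketch below evaluates a generic rational quadratic triangular Bézier patch from six control points and weights; how the paper derives those control points and weights from the vertices, gradients, and Hessians is not reproduced, and the multi-index ordering convention used here is ours.

```python
import numpy as np

def rational_quadratic_triangle(ctrl, wts, u, v):
    """Evaluate a rational quadratic triangular Bezier patch at barycentric
    coordinates (u, v, w) with w = 1 - u - v.
    ctrl / wts: control points P_ijk and weights w_ijk for the six multi-indices
    200, 020, 002, 110, 101, 011 (in that order)."""
    w = 1.0 - u - v
    # Degree-2 Bernstein polynomials over the triangle (they sum to 1).
    basis = np.array([u * u, v * v, w * w, 2 * u * v, 2 * u * w, 2 * v * w])
    wts = np.asarray(wts, float)
    ctrl = np.asarray(ctrl, float)
    num = (basis * wts) @ ctrl          # sum_i w_i * B_i(u, v, w) * P_i
    den = float(basis @ wts)            # sum_i w_i * B_i(u, v, w)
    return num / den

# With unit weights the patch reduces to an ordinary quadratic Bezier triangle.
corners = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], float)     # P200, P020, P002
mids = np.array([[0.5, 0, 0.3], [0, 0.5, 0.3], [0.5, 0.5, 0.3]], float)  # P110, P101, P011
ctrl = np.vstack([corners, mids])
print(rational_quadratic_triangle(ctrl, np.ones(6), 1 / 3, 1 / 3))
```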

Journal ArticleDOI
K. Nakamaru, Y. Ohno
TL;DR: Experimental analysis, including comparisons with depth-first ray tracing, shows that large databases can be handled efficiently with this approach, and presents ways to combine breadth-first methods with traditional efficient algorithms, along with new schemes to minimize accessing objects stored on disk.
Abstract: Breadth-first ray tracing is based on the idea of exchanging the roles of rays and objects. For scenes with a large number of objects, it may be profitable to form a set of rays and compare each object in turn against this set. By doing so, thrashing, due to disk access, can be minimized. We present ways to combine breadth-first methods with traditional efficient algorithms, along with new schemes to minimize accessing objects stored on disk. Experimental analysis, including comparisons with depth-first ray tracing, shows that large databases can be handled efficiently with this approach.
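
A toy sketch of the role exchange, using spheres held in memory as stand-ins for disk-resident objects: the outer loop runs over objects and the inner loop over the current ray batch, so each object is touched once per batch rather than once per ray. The hybrid schemes and disk scheduling discussed in the paper are omitted.

```python
import numpy as np

def intersect_sphere(origin, direction, center, radius):
    """Smallest positive t with |origin + t*direction - center| = radius, or inf.
    Assumes direction is unit length."""
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return np.inf
    root = np.sqrt(disc)
    for t in (-b - root, -b + root):
        if t > 1e-6:
            return t
    return np.inf

def trace_breadth_first(rays, spheres):
    """Exchange the roles of rays and objects: iterate over objects in the outer
    loop and test each against the whole ray batch, so every (possibly
    disk-resident) object is fetched once per batch rather than once per ray."""
    nearest_t = np.full(len(rays), np.inf)
    nearest_obj = [None] * len(rays)
    for obj_id, (center, radius) in enumerate(spheres):       # outer loop: objects
        for ray_id, (origin, direction) in enumerate(rays):   # inner loop: ray batch
            t = intersect_sphere(origin, direction, center, radius)
            if t < nearest_t[ray_id]:
                nearest_t[ray_id] = t
                nearest_obj[ray_id] = obj_id
    return nearest_t, nearest_obj

rays = [(np.zeros(3), np.array([0.0, 0.0, 1.0])),
        (np.zeros(3), np.array([0.0, 1.0, 0.0]))]
spheres = [(np.array([0.0, 0.0, 5.0]), 1.0), (np.array([0.0, 5.0, 0.0]), 2.0)]
print(trace_breadth_first(rays, spheres))   # ray 0 hits sphere 0 at t=4, ray 1 hits sphere 1 at t=3
```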


Journal ArticleDOI
TL;DR: The authors introduce the notion of compatible star decompositions of simple polygons, and prove that some pairs of polygons require Ω(n²) pieces, and that the decompositions computed by the second algorithm possess no more than O(n²) pieces.
Abstract: The authors introduce the notion of compatible star decompositions of simple polygons. In general, given two polygons with a correspondence between their vertices, two polygonal decompositions of the two polygons are said to be compatible if there exists a one-to-one mapping between them such that the corresponding pieces are defined by corresponding vertices. For compatible star decompositions, they also require correspondence between star points of the star pieces. Compatible star decompositions have applications in computer animation and shape representation and analysis. They present two algorithms for constructing compatible star decompositions of two simple polygons. The first algorithm is optimal in the number of pieces in the decomposition, providing that such a decomposition exists without adding Steiner vertices. The second algorithm constructs compatible star decompositions with Steiner vertices, which are not minimal in the number of pieces but are asymptotically worst-case optimal in this number and in the number of added Steiner vertices. They prove that some pairs of polygons require Ω(n²) pieces, and that the decompositions computed by the second algorithm possess no more than O(n²) pieces. In addition to the contributions regarding compatible star decompositions, the paper also corrects an error in the only previously published polynomial algorithm for constructing a minimal star decomposition of a simple polygon, an error which might lead to a nonminimal decomposition.

Journal ArticleDOI
TL;DR: This paper introduces a very fast generalized Gibbs sampler that combines two novel techniques, namely a preconditioning technique in a wavelet basis for constraining the splines and a perturbation scheme in which all sites (surface nodes) that do not share a common neighbor are updated simultaneously.
Abstract: It is well known that the spatial frequency spectrum of membrane and thin plate splines exhibit self-affine characteristics and, hence, behave as fractals. This behavior was exploited in generating the constrained fractal surfaces, which were generated by using a Gibbs sampler algorithm in the work of Szeliski and Terzopoulos (1989). The algorithm involves locally perturbing a constrained spline surface with white noise until the spline surface reaches an equilibrium state. We introduce a fast generalized Gibbs sampler that combines two novel techniques, namely, a preconditioning technique in a wavelet basis for constraining the splines and a perturbation scheme in which, unlike the traditional Gibbs sampler, all sites (surface nodes) that do not share a common neighbor are updated simultaneously. In addition, we demonstrate the capability to generate arbitrary order fractal surfaces without resorting to blending techniques. Using this fast Gibbs sampler algorithm, we demonstrate the synthesis of realistic terrain models from sparse elevation data.

Journal ArticleDOI
A. Hausner
TL;DR: The author examines the accuracy of the formulas for spherical and rectangular Lambertian sources, and applies them to obtain light gradients, and shows how to use the formulas to estimate light from uniform polygonal sources, sources with polynomially varying radiosity, and luminous textures.
Abstract: Computing the light field due to an area light source remains an interesting problem in computer graphics. The paper presents a series approximation of the light field due to an unoccluded area source, by expanding the light field in spherical harmonics. The source can be nonuniform and need not be a planar polygon. The resulting formulas give expressions whose cost and accuracy can be chosen between the exact and expensive Lambertian solution for a diffuse polygon, and the fast but inexact method of replacing the area source by a point source of equal power. The formulas break the computation of the light vector into two phases: the first phase represents the light source's shape and brightness with numerical coefficients, and the second uses these coefficients to compute the light field at arbitrary locations. The author examines the accuracy of the formulas for spherical and rectangular Lambertian sources, and applies them to obtain light gradients. The author also shows how to use the formulas to estimate light from uniform polygonal sources, sources with polynomially varying radiosity, and luminous textures.

Journal ArticleDOI
TL;DR: The paper describes methods for constructing partitioning trees from a discrete image/volume data set and a hierarchical encoding schema for both lossless and lossy encodings is described.
Abstract: The discrete space representation of most scientific datasets, generated through instruments or by sampling continuously defined fields, while being simple, is also verbose and structureless. We propose the use of a particular spatial structure, the binary space partitioning tree as a new representation to perform efficient geometric computation in discretely defined domains. The ease of performing affine transformations, set operations between objects, and correct implementation of transparency makes the partitioning tree a good candidate for probing and analyzing medical reconstructions, in such applications as surgery planning and prostheses design. The multiresolution characteristics of the representation can be exploited to perform such operations at interactive rates by smooth variation of the amount of geometry. Application to ultrasound data segmentation and visualization is proposed. The paper describes methods for constructing partitioning trees from a discrete image/volume data set. Discrete space operators developed for edge detection are used to locate discontinuities in the image from which lines/planes containing the discontinuities are fitted by using either the Hough transform or a hyperplane sort. A multiresolution representation can be generated by ordering the choice of hyperplanes by the magnitude of the discontinuities. Various approximations can be obtained by pruning the tree according to an error metric. The segmentation of the image into edgeless regions can yield significant data compression. A hierarchical encoding schema for both lossless and lossy encodings is described.

Journal ArticleDOI
TL;DR: A physically based model is proposed to implicitly guard against isotopy violation during evolution of tangled configurations of mathematical knots and a robust stochastic optimization procedure, simulated annealing, is suggested for the purpose of identifying the globally optimal solution.
Abstract: The article describes a tool for simplification and analysis of tangled configurations of mathematical knots. The proposed method addresses optimization issues common in energy based approaches to knot classification. In this class of methods, an initially tangled elastic rope is "charged" with an electrostatic like field which causes it to self repel, prompting it to evolve into a mechanically stable configuration. This configuration is believed to be characteristic for its knot type. We propose a physically based model to implicitly guard against isotopy violation during such evolution and suggest that a robust stochastic optimization procedure, simulated annealing, be used for the purpose of identifying the globally optimal solution. Because neither of these techniques depends on the properties of the energy function being optimized, our method is of general applicability, even though we applied it to a specific potential here. The method has successfully analyzed several complex tangles and is applicable to simplifying a large class of knots and links. Our work also shows that energy based techniques will not necessarily terminate in a unique configuration, thus we empirically refute a prior conjecture that one of the commonly used energy functions (J. Simon, 1994) is unimodal. Based on these results we also compare techniques that rely on geometric energy optimization to conventional algebraic methods with regards to their classification power.