
Showing papers in "ACM Transactions on Graphics in 2004"


Journal ArticleDOI
TL;DR: The theoretical and experimental investigations into possible sources of errors in the approximation of principal direction vectors from triangular meshes are described, and a new method for estimating principal directions that can yield better results under some circumstances is suggested.
Abstract: There are a number of applications in computer graphics that require as a first step the accurate estimation of principal direction vectors at arbitrary vertices on a triangulated surface. Although several methods for calculating principal directions over such models have been previously proposed, we have found in practice that all exhibit unexplained large errors in some cases. In this article, we describe our theoretical and experimental investigations into possible sources of errors in the approximation of principal direction vectors from triangular meshes, and suggest a new method for estimating principal directions that can yield better results under some circumstances.

300 citations
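The abstract does not detail the article's new estimator, but a common baseline it improves on fits a quadric z = ax² + bxy + cy² to a vertex's neighbors in a local tangent frame; the eigenpairs of the resulting second-fundamental-form matrix give the principal curvatures and directions. A minimal NumPy sketch under that assumption (vertex at the origin, neighbors and normal given):

```python
import numpy as np

def principal_curvatures(neighbors, normal):
    """Estimate principal curvatures/directions at a vertex (at the
    origin) by least-squares fitting z = a x^2 + b xy + c y^2 to the
    neighbors expressed in a local tangent frame."""
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    # Build an orthonormal tangent frame (u, v, n).
    u = np.cross(n, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-8:          # normal parallel to x-axis
        u = np.cross(n, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(n, u)

    P = np.asarray(neighbors, float)
    x, y, z = P @ u, P @ v, P @ n
    A = np.column_stack([x * x, x * y, y * y])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)

    # Second fundamental form in this frame: eigenvalues are the
    # principal curvatures, eigenvectors the principal directions.
    II = np.array([[2 * a, b], [b, 2 * c]])
    kappa, dirs = np.linalg.eigh(II)
    return kappa, dirs
```

On noisy real meshes this fit exhibits exactly the kinds of errors the article investigates; on an exact quadric it recovers the curvatures to machine precision.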


Journal ArticleDOI
TL;DR: These studies show that the facial illustrations and caricatures generated using the techniques presented are as effective as photographs in recognition tasks.
Abstract: We present a method for creating black-and-white illustrations from photographs of human faces. In addition an interactive technique is demonstrated for deforming these black-and-white facial illustrations to create caricatures which highlight and exaggerate representative facial features. We evaluate the effectiveness of the resulting images through psychophysical studies to assess accuracy and speed in both recognition and learning tasks. These studies show that the facial illustrations and caricatures generated using our techniques are as effective as photographs in recognition tasks. For the learning task we find that illustrations are learned two times faster than photographs and caricatures are learned one and a half times faster than photographs. Because our techniques produce images that are effective at communicating complex information, they are useful in a number of potential applications, ranging from entertainment and education to low bandwidth telecommunications and psychology research.

259 citations
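The interactive deformation technique itself is not specified in the abstract, but caricature systems of this kind typically build on the classic "exaggerate the difference from the mean" rule: move each facial landmark away from its position in an average face. A minimal sketch of that rule (not the article's exact method):

```python
import numpy as np

def caricature(face, mean_face, k=1.5):
    """Exaggerate a face by scaling its deviation from an average face:
    landmarks that already differ from the norm differ more. face and
    mean_face are arrays of corresponding landmark coordinates; k > 1
    exaggerates, k = 1 reproduces the input."""
    face = np.asarray(face, float)
    mean_face = np.asarray(mean_face, float)
    return mean_face + k * (face - mean_face)
```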


Journal ArticleDOI
TL;DR: This article presents a practical method for removing handles in an isosurface by making an axis-aligned sweep through the volume to locate handles, compute their sizes, and selectively remove them, and demonstrates topology simplification on several complex models, and shows its benefits for subsequent surface processing.
Abstract: Many high-resolution surfaces are created through isosurface extraction from volumetric representations, obtained by 3D photography, CT, or MRI. Noise inherent in the acquisition process can lead to geometrical and topological errors. Reducing geometrical errors during reconstruction is well studied. However, isosurfaces often contain many topological errors in the form of tiny handles. These nearly invisible artifacts hinder subsequent operations like mesh simplification, remeshing, and parametrization. In this article we present a practical method for removing handles in an isosurface. Our algorithm makes an axis-aligned sweep through the volume to locate handles, compute their sizes, and selectively remove them. The algorithm is designed to facilitate out-of-core execution. It finds the handles by incrementally constructing and analyzing a Reeb graph. The size of a handle is measured by a short nonseparating cycle. Handles are removed robustly by modifying the volume rather than attempting "mesh surgery." Finally, the volumetric modifications are spatially localized to preserve geometrical detail. We demonstrate topology simplification on several complex models, and show its benefits for subsequent surface processing.

243 citations


Journal ArticleDOI
TL;DR: A novel algorithm for the construction of sphere-trees that approximates objects, both convex and non-convex, with a higher degree of fit than existing algorithms is presented.
Abstract: Hierarchical object representations play an important role in performing efficient collision handling. Many different geometric primitives have been used to construct these representations, which allow areas of interaction to be localized quickly. For time-critical algorithms, there are distinct advantages to using hierarchies of spheres, known as sphere-trees, for object representation. This article presents a novel algorithm for the construction of sphere-trees. The algorithm presented approximates objects, both convex and non-convex, with a higher degree of fit than existing algorithms. In the lower levels of the representations, there is almost an order of magnitude decrease in the number of spheres required to represent the objects to a given accuracy.

215 citations
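The article's contribution is the fitting algorithm, which the abstract does not detail; the data structure itself, though, is easy to illustrate. The sketch below builds a far cruder sphere-tree over a point cloud (bounding sphere per node, children by splitting along the axis of greatest extent), just to show the hierarchy that collision handling traverses:

```python
import numpy as np

class SphereNode:
    def __init__(self, center, radius, children):
        self.center, self.radius, self.children = center, radius, children

def build_sphere_tree(points, leaf_size=4, depth=0, max_depth=6):
    """Build a simple sphere-tree: each node stores a sphere bounding
    its points; children split the points along the longest axis.
    (Much cruder than the article's algorithm -- illustration only.)"""
    pts = np.asarray(points, float)
    center = pts.mean(axis=0)
    radius = np.linalg.norm(pts - center, axis=1).max()
    if len(pts) <= leaf_size or depth >= max_depth:
        return SphereNode(center, radius, [])
    axis = np.argmax(pts.max(axis=0) - pts.min(axis=0))
    order = np.argsort(pts[:, axis])
    half = len(pts) // 2
    kids = [build_sphere_tree(pts[order[:half]], leaf_size, depth + 1, max_depth),
            build_sphere_tree(pts[order[half:]], leaf_size, depth + 1, max_depth)]
    return SphereNode(center, radius, kids)
```

Collision queries then descend only into children whose spheres overlap the query region, which is what makes the representation attractive for time-critical algorithms.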


Journal ArticleDOI
TL;DR: An image-based modeling and rendering system that models a sparse light field using a set of coherent layers, and introduces a Bayesian approach, coherence matting, to estimate alpha matting around segmented layer boundaries by incorporating a coherence prior in order to maintain coherence across images.
Abstract: In this article, we present an image-based modeling and rendering system, which we call pop-up light field, that models a sparse light field using a set of coherent layers. In our system, the user specifies how many coherent layers should be modeled or popped up according to the scene complexity. A coherent layer is defined as a collection of corresponding planar regions in the light field images. A coherent layer can be rendered free of aliasing all by itself, or against other background layers. To construct coherent layers, we introduce a Bayesian approach, coherence matting, to estimate alpha matting around segmented layer boundaries by incorporating a coherence prior in order to maintain coherence across images. We have developed an intuitive and easy-to-use user interface (UI) to facilitate pop-up light field construction. The key to our UI is the concept of human-in-the-loop where the user specifies where aliasing occurs in the rendered image. The user input is reflected in the input light field images where pop-up layers can be modified. The user feedback is instant through a hardware-accelerated real-time pop-up light field renderer. Experimental results demonstrate that our system is capable of rendering anti-aliased novel views from a sparse light field.

200 citations


Journal ArticleDOI
TL;DR: Najm, a set of tools built on the axioms of absolute geometry for exploring the design space of Islamic star patterns, is presented, which makes use of a novel family of tilings, called "inflation tilings," which are particularly well suited as guides for creating star patterns.
Abstract: We present Najm, a set of tools built on the axioms of absolute geometry for exploring the design space of Islamic star patterns. Our approach makes use of a novel family of tilings, called "inflation tilings," which are particularly well suited as guides for creating star patterns. We describe a method for creating a parameterized set of motifs that can be used to fill the many regular polygons that comprise these tilings, as well as an algorithm to infer geometry for any irregular polygons that remain. Erasing the underlying tiling and joining together the inferred motifs produces the star patterns. By choice, Najm is built upon the subset of geometry that makes no assumption about the behavior of parallel lines. As a consequence, star patterns created by Najm can be designed equally well to fit the Euclidean plane, the hyperbolic plane, or the surface of a sphere.

123 citations


Journal ArticleDOI
TL;DR: Mathematically, the frequency-space coefficients of the reflected light field can be thought of in a precise quantitative way as obtained by convolving the lighting and BRDF, i.e. by filtering the incident illumination using the BRDF.
Abstract: We present a signal-processing framework for analyzing the reflected light field from a homogeneous convex curved surface under distant illumination. This analysis is of theoretical interest in both graphics and vision and is also of practical importance in many computer graphics problems---for instance, in determining lighting distributions and bidirectional reflectance distribution functions (BRDFs), in rendering with environment maps, and in image-based rendering. It is well known that under our assumptions, the reflection operator behaves qualitatively like a convolution. In this paper, we formalize these notions, showing that the reflected light field can be thought of in a precise quantitative way as obtained by convolving the lighting and BRDF, i.e. by filtering the incident illumination using the BRDF. Mathematically, we are able to express the frequency-space coefficients of the reflected light field as a product of the spherical harmonic coefficients of the illumination and the BRDF. These results are of practical importance in determining the well-posedness and conditioning of problems in inverse rendering---estimation of BRDF and lighting parameters from real photographs. Furthermore, we are able to derive analytic formulae for the spherical harmonic coefficients of many common BRDF and lighting models. From this formal analysis, we are able to determine precise conditions under which estimation of BRDFs and lighting distributions are well posed and well-conditioned. Our mathematical analysis also has implications for forward rendering---especially the efficient rendering of objects under complex lighting conditions specified by environment maps. The results, especially the analytic formulae derived for Lambertian surfaces, are also relevant in computer vision in the areas of recognition, photometric stereo and structure from motion.

105 citations
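For a Lambertian surface the convolution result in this abstract becomes strikingly concrete: each spherical-harmonic band of the irradiance is the corresponding band of the lighting scaled by an analytic kernel A_l, with A_0 = π, A_1 = 2π/3, A_2 = π/4 and higher even bands decaying rapidly. A sketch of that per-band product (lighting coefficients indexed as L[l][m] with m flattened to 0..2l):

```python
import math

# Per-band coefficients of the Lambertian ("clamped cosine") kernel
# in spherical harmonics, for l = 0, 1, 2.
A = [math.pi, 2.0 * math.pi / 3.0, math.pi / 4.0]

def lambertian_irradiance_coeffs(L):
    """Filter SH lighting coefficients by the Lambertian kernel:
    reflection as convolution means output band l is simply input
    band l scaled by A_l."""
    return [[A[l] * L[l][m] for m in range(2 * l + 1)] for l in range(3)]
```

This is why BRDF/lighting estimation is well conditioned only where the kernel's coefficients are large: bands the kernel annihilates cannot be recovered from the reflected light field.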


Journal ArticleDOI
TL;DR: This research extends existing glyph-based and nonphotorealistic techniques by applying perceptual guidelines to build an effective representation of the underlying data in a large, multidimensional weather dataset.
Abstract: An important problem in the area of computer graphics is the visualization of large, complex information spaces. Datasets of this type have grown rapidly in recent years, both in number and in size. Images of the data stored in these collections must support rapid and accurate exploration and analysis. This article presents a method for constructing visualizations that are both effective and aesthetic. Our approach uses techniques from master paintings and human perception to visualize a multidimensional dataset. Individual data elements are drawn with one or more brush strokes that vary their appearance to represent the element's attribute values. The result is a nonphotorealistic visualization of information stored in the dataset. Our research extends existing glyph-based and nonphotorealistic techniques by applying perceptual guidelines to build an effective representation of the underlying data. The nonphotorealistic properties the strokes employ are selected from studies of the history and theory of Impressionist art. We show that these properties are similar to visual features that are detected by the low-level human visual system. This correspondence allows us to manage the strokes to produce perceptually salient visualizations. Psychophysical experiments confirm a strong relationship between the expressive power of our nonphotorealistic properties and previous findings on the use of perceptual color and texture patterns for data display. Results from these studies are used to produce effective nonphotorealistic visualizations. We conclude by applying our techniques to a large, multidimensional weather dataset to demonstrate their viability in a practical, real-world setting.

100 citations
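The glyph-based idea in this abstract amounts to a mapping from a data element's attributes to stroke properties. A toy sketch of such a mapping, using hypothetical weather-attribute names (the article instead derives its mappings from perceptual studies of Impressionist stroke features, not a fixed table like this):

```python
def stroke_for(element):
    """Map one data element's attribute values onto nonphotorealistic
    stroke properties. Attribute names here are illustrative only."""
    return {
        "color":       element["temperature"],  # hue varies with one attribute
        "size":        element["pressure"],     # stroke size with another
        "orientation": element["wind_dir"],
        "density":     element["humidity"],
    }
```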


Journal ArticleDOI
TL;DR: The concept of a lighting sensitive display (LSD)---a display that measures the incident illumination and modifies its content accordingly is presented, which can render 640 × 480 images of scenes under complex and varying illuminations at 15 frames per second using a 2 GHz processor.
Abstract: Although display devices have been used for decades, they have functioned without taking into account the illumination of their environment. We present the concept of a lighting sensitive display (LSD)---a display that measures the incident illumination and modifies its content accordingly. An ideal LSD would be able to measure the 4D illumination field incident upon it and generate a 4D light field in response to the illumination. However, current sensing and display technologies do not allow for such an ideal implementation. Our initial LSD prototype uses a 2D measurement of the illumination field and produces a 2D image in response to it. In particular, it renders a 3D scene such that it always appears to be lit by the real environment that the display resides in. The current system is designed to perform best when the light sources in the environment are distant from the display, and a single user in a known location views the display.The displayed scene is represented by compressing a very large set of images (acquired or rendered) of the scene that correspond to different lighting conditions. The compression algorithm is a lossy one that exploits not only image correlations over the illumination dimensions but also coherences over the spatial dimensions of the image. This results in a highly compressed representation of the original image set. This representation enables us to achieve high quality relighting of the scene in real time. Our prototype LSD can render 640 × 480 images of scenes under complex and varying illuminations at 15 frames per second using a 2 GHz processor. We conclude with a discussion on the limitations of the current implementation and potential areas for future research.

70 citations
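The relighting at the core of this system rests on the linearity of light transport: an image under any illumination is a weighted sum of images under basis lights. A minimal sketch of that step alone (the article's actual contribution, the lossy compression of the image set, is skipped here):

```python
import numpy as np

def relight(basis_images, weights):
    """Relight a scene as a weighted sum of precomputed basis images,
    one per light direction. Valid because light transport is linear
    in the illumination."""
    B = np.asarray(basis_images, float)   # shape (n_lights, H, W)
    w = np.asarray(weights, float)        # shape (n_lights,)
    return np.tensordot(w, B, axes=1)     # shape (H, W)
```

In the prototype the weights would come from the 2D measurement of the environment's illumination, sampled per frame.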


Journal ArticleDOI
TL;DR: Controls used for specifying texture synthesis on surfaces are shown to work on images as well, allowing interesting new image-based effects, and modelling applications enabled by the speed of the approach are highlighted.
Abstract: We present techniques for accelerated texture synthesis from example images. The key idea of our approach is to divide the task into two phases: analysis, and synthesis. During the analysis phase, which is performed once per sample texture, we generate a jump map. Using the jump map, the synthesis phase is capable of synthesizing texture similar to the analyzed example at interactive rates. We describe two such synthesis phase algorithms: one for creating images, and one for directly texturing manifold surfaces. We produce texture images at rates comparable to the fastest alternative algorithms, and produce textured surfaces an order of magnitude faster than current alternative approaches. We further develop a new, faster patch-based algorithm for image synthesis, which improves the quality of our results on ordered textures. We show how controls used for specifying texture synthesis on surfaces may be used on images as well, allowing interesting new image-based effects, and highlight modelling applications enabled by the speed of our approach.

62 citations
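The analysis/synthesis split can be illustrated in one dimension: analysis records, for each sample position, other positions whose recent neighborhood matches (the jump map); synthesis then copies samples sequentially, occasionally jumping to a recorded match. A toy sketch under those assumptions (the article works on 2D images and surfaces, with approximate neighborhood matching):

```python
import random

def build_jump_map(sample, window=2, tol=0):
    """Analysis phase: for each position in the 1-D sample, record other
    positions whose preceding `window` values match within `tol` --
    places synthesis can jump to without a visible seam."""
    n = len(sample)
    jumps = {i: [] for i in range(n)}
    for i in range(window, n):
        for j in range(window, n):
            if i != j and all(abs(sample[i - k] - sample[j - k]) <= tol
                              for k in range(1, window + 1)):
                jumps[i].append(j)
    return jumps

def synthesize(sample, jumps, length, p_jump=0.3, seed=0):
    """Synthesis phase: walk the sample copying values; with probability
    p_jump, and when the jump map offers a target, jump there instead of
    stepping forward. Wraps at the end of the sample."""
    rng = random.Random(seed)
    pos, out = 0, []
    for _ in range(length):
        out.append(sample[pos])
        if jumps.get(pos) and rng.random() < p_jump:
            pos = rng.choice(jumps[pos])
        else:
            pos = (pos + 1) % len(sample)
    return out
```

Because all neighborhood comparison happens once, in analysis, the synthesis loop is just array reads and an occasional random choice, which is what makes interactive rates possible.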


Journal ArticleDOI
TL;DR: This model captures the most important features of subsurface scattering (reflection and transmission due to multiple scattering) and achieves interactive frame rates for medium-sized scenes.
Abstract: Subsurface scattering is important for photo-realistic rendering of translucent materials. We make approximations to the BSSRDF model and propose a simple lighting model to simulate the effects on translucent meshes. Our approximations are based on the observation that subsurface scattering is relatively local due to its exponential falloff. In the preprocessing stage we build subsurface scattering neighborhood information, which includes all the vertices within effective scattering range from each vertex. We then modify the traditional local illumination model into a run-time two-stage process. The first stage involves computation of reflection and transmission of light on surface vertices. The second stage bleeds in scattering effects from a vertex's neighborhood to generate the final result. We then merge the run-time two-stage process into a run-time single-stage process using precomputed integrals, and reduce the complexity of our run-time algorithm to O(N), where N is the number of vertices. The selection of the optimum set size for precomputed integrals is guided by a standard image-space error-metric. Furthermore, we show how to compress the precomputed integrals using spherical harmonics. We compensate for the inadequacy of spherical harmonics for storing high frequency components by a reference points scheme to store high frequency components of the precomputed integrals explicitly. With this approach, we greatly reduce memory usage without loss of visual quality under a high-frequency lighting environment and achieve interactive frame rates for medium-sized scenes. Our model is able to capture the most important features of subsurface scattering: reflection and transmission due to multiple scattering.
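The second-stage "bleed" can be sketched directly from the locality observation: each vertex gathers transmitted light from neighbors within the effective scattering range, weighted by an exponential falloff. A minimal sketch (the falloff form exp(-d/sigma) and the normalization are assumptions for illustration; the article uses a BSSRDF approximation):

```python
import math

def bleed(vertices, lit, sigma, cutoff):
    """Gather scattering at each vertex from neighbors within `cutoff`,
    weighted by exp(-d/sigma). `lit` holds the first-stage per-vertex
    transmitted light; weights are normalized per vertex."""
    out = []
    for vi in vertices:
        w_sum, acc = 0.0, 0.0
        for vj, lj in zip(vertices, lit):
            d = math.dist(vi, vj)
            if d <= cutoff:
                w = math.exp(-d / sigma)
                w_sum += w
                acc += w * lj
        out.append(acc / w_sum)
    return out
```

The precomputed neighborhood sets in the article exist precisely so that, at run time, this gather touches only a small constant number of neighbors per vertex, giving the stated O(N) cost.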


Journal ArticleDOI
TL;DR: 4-3 direction subdivision combines quad and triangle meshes and defines the unique scheme with a 3 × 3 stencil that can model constant features without ripples both aligned with the quad grid and diagonal to it.
Abstract: 4-3 direction subdivision combines quad and triangle meshes. On quad submeshes it applies a 4-direction alternative to Catmull-Clark subdivision and on triangle submeshes a modification of Loop's scheme. Remarkably, 4-3 surfaces can be proven to be C1 and have bounded curvature everywhere. In regular mesh regions, they are C2 and correspond to two closely-related box-splines of degree four. The box-spline in quad regions has a smaller stencil than Catmull-Clark and defines the unique scheme with a 3 × 3 stencil that can model constant features without ripples both aligned with the quad grid and diagonal to it. From a theoretical point of view, 4-3 subdivision near extraordinary points is remarkable in that the eigenstructure of the local subdivision matrix is easy to determine and a complete analysis is possible. Without tweaking the rules artificially to force a specific spectrum, the leading eigenvalues ordered by modulus of all local subdivision matrices are 1, 1/2, 1/2, 1/4 where the multiplicity of the eigenvalue 1/4 depends on the valence of the extraordinary point and the number of quads surrounding it. This implies equal refinement of the mesh, regardless of the number of neighbors of a mesh node.

Journal ArticleDOI
TL;DR: It is shown that n determines the extent of the region of the subdivision surface affected by the displacement of a single control point, and largely determines whether that region's boundary is polygonal or fractal.
Abstract: We study the support of subdivision schemes: that is, the region of the subdivision surface that is affected by the displacement of a single control point. Our main results cover the regular case, where the mesh induces a regular Euclidean tessellation of the local parameter space. If n is the ratio of similarity between the tessellations at steps k and k − 1 of the refinement, we show that n determines the extent of this region and largely determines whether its boundary is polygonal or fractal. In particular if n = 2 (or n² = 2 because we can always take double steps) the support is a convex polygon whose vertices can easily be determined. In other cases, whether the boundary of the support is fractal or not depends on whether there are sufficient points with non-zero coefficients in the edges of the convex hull of the mask. If there are enough points on every such edge, the support is again a convex polygon. If some edges have enough points and others do not, the boundary can consist of a fractal assembly of an unbounded number of line segments.

Journal ArticleDOI
TL;DR: This article proposes a novel method to approximate a given mesh with a normal mesh which separates the parameterization construction into an initial setup followed only by subsequent perturbations, giving us an algorithm which is far simpler to implement, more robust, and significantly faster.
Abstract: Hierarchical representations of surfaces have many advantages for digital geometry processing applications. Normal meshes are particularly attractive since their level-to-level displacements are in the local normal direction only. Consequently, they only require scalar coefficients to specify. In this article, we propose a novel method to approximate a given mesh with a normal mesh. Instead of building an associated parameterization on the fly, we assume a globally smooth parameterization at the beginning and cast the problem as one of perturbing this parameterization. Controlling the magnitude of this perturbation gives us explicit control over the range between fully constrained (only scalar coefficients) and unconstrained (3-vector coefficients) approximations. With the unconstrained problem giving the lowest approximation error, we can thus characterize the error cost of normal meshes as a function of the number of nonnormal offsets---we find a significant gain for little (error) cost. Because the normal mesh construction creates a geometry driven approximation, we can replace the difficult geometric distance minimization problem with a much simpler least squares problem. This variational approach reduces magnitude and structure (aliasing) of the error further. Our method separates the parameterization construction into an initial setup followed only by subsequent perturbations, giving us an algorithm which is far simpler to implement, more robust, and significantly faster.

Journal Article
TL;DR: An algorithm for rendering faceted colored gemstones in real time, using graphics hardware based on a number of controlled approximations of the physical phenomena involved when light enters a stone, which permit an implementation based on the most recent -- yet commonly available -- hardware features such as fragment programs, cube-mapping and floating-point rendering.
Abstract: We present an algorithm for rendering faceted colored gemstones in real time, using graphics hardware. Beyond the technical challenge of handling the complex behavior of light in such objects, a real time high quality rendering of gemstones has direct applications in the field of jewelry prototyping, which has now become a standard practice for replacing tedious (and less interactive) wax carving methods. Our solution is based on a number of controlled approximations of the physical phenomena involved when light enters a stone, which permit an implementation based on the most recent -- yet commonly available -- hardware features such as fragment programs, cube-mapping and floating-point rendering.

Journal Article
TL;DR: An efficient technique is presented for out-of-core construction and accurate view-dependent visualization of very large surface models of hundreds of millions of triangles, rendered at over 40 Hz (or 70M triangles/s) on current commodity graphics platforms.

Journal ArticleDOI
TL;DR: A final reconstruction step for a novel unified approach to global illumination that automatically detects different types of light transfer and uses the appropriate method in a closely-integrated manner is presented.
Abstract: In the past twenty years, many algorithms have been proposed to compute global illumination in synthetic scenes. Typically, such approaches can deal with specific lighting configurations, but often have difficulties with others. In this article, we present a final reconstruction step for a novel unified approach to global illumination that automatically detects different types of light transfer and uses the appropriate method in a closely-integrated manner. With our approach, we can deal with difficult lighting configurations such as indirect nondiffuse illumination. The first step of this algorithm consists of a view-independent solution based on hierarchical radiosity with clustering, integrated with particle tracing. This first pass results in solutions containing directional effects such as caustics, which can be interactively rendered. The second step consists of a view-dependent final reconstruction that uses all existing information to compute higher quality, ray-traced images.