
Showing papers in "ACM Transactions on Graphics in 2005"


Journal ArticleDOI
TL;DR: ABF++ robustly parameterizes mesh models of hundreds of thousands to millions of triangles within minutes, and is extremely well suited to robustly and efficiently parameterizing models for geometry-processing applications.
Abstract: Conformal parameterization of mesh models has numerous applications in geometry processing. Conformality is desirable for remeshing, surface reconstruction, and many other mesh processing applications. Subject to the conformality requirement, these applications typically benefit from parameterizations with smaller stretch. The Angle Based Flattening (ABF) method, presented a few years ago, generates provably valid conformal parameterizations with low stretch. However, it is quite time-consuming and becomes error prone for large meshes due to numerical error accumulation. This work presents ABF++, a highly efficient extension of the ABF method that overcomes these drawbacks while maintaining all the advantages of ABF. ABF++ robustly parameterizes meshes of hundreds of thousands to millions of triangles within minutes. It is based on three main components: (1) a new numerical solution technique that dramatically reduces the dimension of the linear systems solved at each iteration, speeding up the solution; (2) a new robust scheme for reconstructing the 2D coordinates from the angle space solution that avoids the numerical instabilities which hindered the ABF reconstruction scheme; and (3) an efficient hierarchical solution technique. The speedup with (1) does not come at the expense of greater distortion. The hierarchical technique (3) enables parameterization of models with millions of faces in seconds at the expense of a minor increase in parametric distortion. The parameterizations computed by ABF++ are provably valid, that is, they contain no flipped triangles. As a result of these extensions, the ABF++ method is extremely suitable for robustly and efficiently parameterizing models for geometry-processing applications.

339 citations


Journal ArticleDOI
TL;DR: An automatic parameterization method for segmenting a surface into patches that are then flattened with little stretch is presented, along with an image-based error measure that takes into account stretch, seams, smoothness, packing efficiency, and surface visibility.
Abstract: Surface parameterization is necessary for many graphics tasks: texture-preserving simplification, remeshing, surface painting, and precomputation of solid textures. The stretch caused by a given parameterization determines the sampling rate on the surface. In this article, we present an automatic parameterization method for segmenting a surface into patches that are then flattened with little stretch. Many objects consist of regions of relatively simple shapes, each of which has a natural parameterization. Based on this observation, we describe a three-stage feature-based patch creation method for manifold surfaces. The first two stages, genus reduction and feature identification, are performed with the help of distance-based surface functions. In the last stage, we create one or two patches for each feature region based on a covariance matrix of the feature's surface points. To reduce stretch during patch unfolding, we notice that stretch is a 2 × 2 tensor, which in ideal situations is the identity. Therefore, we use the Green-Lagrange tensor to measure and to guide the optimization process. Furthermore, we allow the boundary vertices of a patch to be optimized by adding scaffold triangles. We demonstrate our feature-based patch creation and patch unfolding methods for several textured models. Finally, to evaluate the quality of a given parameterization, we describe an image-based error measure that takes into account stretch, seams, smoothness, packing efficiency, and surface visibility.

329 citations
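As a rough illustration of the stretch measure described above, the sketch below computes a per-triangle Green-Lagrange tensor from a triangle's 3D vertices and its (u, v) parameter coordinates. This is a minimal NumPy example; the function name and setup are illustrative rather than taken from the paper.

```python
import numpy as np

def green_lagrange_tensor(p, q):
    """Per-triangle stretch measure for a parameterization.

    p : (3, 3) array of 3D triangle vertices.
    q : (3, 2) array of their (u, v) parameter coordinates.
    Returns the 2x2 Green-Lagrange tensor E = (J^T J - I) / 2,
    which vanishes when the triangle is mapped isometrically.
    """
    # 3D edge vectors and corresponding parameter-space edge vectors.
    E3 = np.column_stack([p[1] - p[0], p[2] - p[0]])   # 3x2
    E2 = np.column_stack([q[1] - q[0], q[2] - q[0]])   # 2x2
    J = E3 @ np.linalg.inv(E2)                         # Jacobian d(surface)/d(u, v)
    G = J.T @ J                                        # first fundamental form (metric)
    return 0.5 * (G - np.eye(2))

# An isometric flattening of a planar triangle gives E = 0.
p = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
q = p[:, :2]
print(green_lagrange_tensor(p, q))   # ~ zero matrix
```

The tensor is the zero matrix exactly when the triangle is mapped isometrically, which is why it can both measure stretch and guide the unfolding optimization.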


Journal ArticleDOI
TL;DR: A novel constraint-based motion editing technique, based on a per-frame Kalman filter framework, works as a filter that sequentially scans the input motion to produce a stream of output motion frames at a stable interactive rate.
Abstract: This article presents a novel constraint-based motion editing technique. On the basis of animator-specified kinematic and dynamic constraints, the method converts a given captured or animated motion to a physically plausible motion. In contrast to previous methods using spacetime optimization, we cast the motion editing problem as a constrained state estimation problem, based on the per-frame Kalman filter framework. The method works as a filter that sequentially scans the input motion to produce a stream of output motion frames at a stable interactive rate. Animators can tune several filter parameters to adjust to different motions, turn the constraints on or off based on their contributions to the final result, or provide a rough sketch (kinematic hint) as an effective way of producing the desired motion. Experiments on various systems show that the technique processes the motions of a human with 54 degrees of freedom, at about 150 fps when only kinematic constraints are applied, and at about 10 fps when both kinematic and dynamic constraints are applied. Experiments on various types of motion show that the proposed method produces remarkably realistic animations.

162 citations
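The per-frame filtering idea can be sketched with a generic linear Kalman predict/update step. The actual technique additionally enforces the animator-specified kinematic and dynamic constraints at each frame, which this minimal NumPy sketch (with illustrative names) does not model.

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    x, P : state estimate and covariance from the previous frame
    z    : current measurement (e.g., the captured pose for this frame)
    A    : state transition model,  H : measurement model
    Q, R : process and measurement noise covariances
    """
    # Predict from the previous frame.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with the current frame's measurement.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 1D example: smooth a noisy scalar trajectory frame by frame.
A = H = np.eye(1); Q = np.eye(1) * 1e-3; R = np.eye(1) * 1e-1
x, P = np.zeros(1), np.eye(1)
for z in np.sin(np.linspace(0, 3, 50)) + 0.1 * np.random.randn(50):
    x, P = kalman_step(x, P, np.array([z]), A, H, Q, R)
```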


Journal ArticleDOI
TL;DR: This work derives a generative model of expressive facial motion that incorporates emotion control, while maintaining accurate lip-synching, from a database of speech-related high-fidelity facial motions.
Abstract: Speech-driven facial motion synthesis is a well explored research topic. However, little has been done to model expressive visual behavior during speech. We address this issue using a machine learning approach that relies on a database of speech-related high-fidelity facial motions. From this training set, we derive a generative model of expressive facial motion that incorporates emotion control, while maintaining accurate lip-synching. The emotional content of the input speech can be manually specified by the user or automatically extracted from the audio signal using a Support Vector Machine classifier.

152 citations


Journal ArticleDOI
TL;DR: This work presents an approach for modeling and rendering a dynamic, real-world event from an arbitrary viewpoint, and at any time, using images captured from multiple video cameras to compute a novel image from any viewpoint in the 4D space of position and time.
Abstract: We present an approach for modeling and rendering a dynamic, real-world event from an arbitrary viewpoint, and at any time, using images captured from multiple video cameras. The event is modeled as a nonrigidly varying dynamic scene, captured by many images from different viewpoints, at discrete times. First, the spatio-temporal geometric properties (shape and instantaneous motion) are computed. The view synthesis problem is then solved using a reverse mapping algorithm, ray-casting across space and time, to compute a novel image from any viewpoint in the 4D space of position and time. Results are shown on real-world events captured in the CMU 3D Room, by creating synthetic renderings of the event from novel, arbitrary positions in space and time. Multiple such recreated renderings can be put together to create retimed fly-by movies of the event, with the resulting visual experience richer than that of a regular video clip or of switching between images from multiple cameras.

141 citations


Journal ArticleDOI
TL;DR: A fully automatic technique which converts an inconsistent input mesh into an output mesh that is guaranteed to be a clean and consistent mesh representing the closed manifold surface of a solid object is presented.
Abstract: We present a fully automatic technique which converts an inconsistent input mesh into an output mesh that is guaranteed to be a clean and consistent mesh representing the closed manifold surface of a solid object. The algorithm removes all typical mesh artifacts such as degenerate triangles, incompatible face orientation, non-manifold vertices and edges, overlapping and penetrating polygons, internal redundant geometry, as well as gaps and holes up to a user-defined maximum size ρ. Moreover, the output mesh always stays within a prescribed tolerance ε to the input mesh. Due to the effective use of a hierarchical octree data structure, the algorithm achieves high voxel resolution (up to 4096³ on a 2GB PC) and processing times of just a few minutes for moderately complex objects. We demonstrate our technique on various architectural CAD models to show its robustness and reliability.

132 citations


Journal ArticleDOI
TL;DR: This work presents a novel generalization of the quadric error metric used in surface simplification that can be used for simplifying simplicial complexes of any type embedded in Euclidean spaces of any dimension, and can produce high quality approximations of plane and space curves, triangulated surfaces, tetrahedralized volume data, and simplicial complexes of mixed type.
Abstract: We present a novel generalization of the quadric error metric used in surface simplification that can be used for simplifying simplicial complexes of any type embedded in Euclidean spaces of any dimension. We demonstrate that our generalized simplification system can produce high quality approximations of plane and space curves, triangulated surfaces, tetrahedralized volume data, and simplicial complexes of mixed type. Our method is both efficient and easy to implement. It is capable of processing complexes of arbitrary topology, including nonmanifolds, and can preserve intricate boundaries.

120 citations
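For context, the classical surface quadric error metric that this work generalizes fits in a few lines: each face contributes the outer product of its plane equation, and the cost of moving a vertex to a position is the quadratic form of its homogeneous coordinates. A minimal NumPy sketch of the surface case only, with illustrative names, not the generalized simplicial-complex metric:

```python
import numpy as np

def face_quadric(p0, p1, p2):
    """Fundamental quadric of a triangle: outer product of its plane [a, b, c, d]."""
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    d = -np.dot(n, p0)
    plane = np.append(n, d)              # ax + by + cz + d = 0, with |n| = 1
    return np.outer(plane, plane)        # 4x4 quadric

def quadric_error(Q, v):
    """Sum of squared distances of point v to the planes accumulated in Q."""
    vh = np.append(v, 1.0)
    return vh @ Q @ vh

# A vertex quadric is the sum of the quadrics of its incident faces; the error of
# collapsing an edge to position v is then quadric_error(Q_sum, v).
```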


Journal ArticleDOI
TL;DR: It is shown that strict photometric uniformity is not a requirement for achieving photometric seamlessness, and this is the first approach and system that addresses the photometric variation problem from a perceptual standpoint and generates truly seamless displays with high dynamic range.
Abstract: Arguably, the most vexing problem remaining for multi-projector displays is that of photometric (brightness) seamlessness within and across different projectors. Researchers have strived for strict photometric uniformity that achieves identical response at every pixel of the display. However, this goal typically results in displays with severely compressed dynamic range and poor image quality. In this article, we show that strict photometric uniformity is not a requirement for achieving photometric seamlessness. We introduce a general goal for photometric seamlessness by defining it as an optimization problem, balancing perceptual uniformity with display quality. Based on this goal, we present a new method to achieve perceptually seamless high quality displays. We first derive a model that describes the photometric response of projection-based displays. Then we estimate the model parameters and modify them using perception-driven criteria. Finally, we use the graphics hardware to reproject the image computed using the modified model parameters by manipulating only the projector inputs at interactive rates. Our method has been successfully demonstrated on three different practical display systems at Argonne National Laboratory, made of a 2 × 2 array of four projectors, a 2 × 3 array of six projectors, and a 3 × 5 array of fifteen projectors. Our approach is efficient, automatic, and scalable, requiring only a digital camera and a photometer. To the best of our knowledge, this is the first approach and system that addresses the photometric variation problem from a perceptual standpoint and generates truly seamless displays with high dynamic range.

118 citations


Journal ArticleDOI
TL;DR: A method for creating expressive facial animation by extracting information from the expression axis of a speech performance is described, and an audio-driven synthesis technique for generating new head motion is introduced.
Abstract: Motion capture-based facial animation has recently gained popularity in many applications, such as movies, video games, and human-computer interface designs. With the use of sophisticated facial motions from a human performer, animated characters are far more lively and convincing. However, editing motion data is difficult, limiting the potential of reusing the motion data for different tasks. To address this problem, statistical techniques have been applied to learn models of the facial motion in order to derive new motions based on the existing data. Most existing research focuses on audio-to-visual mapping and reordering of words, or on photo-realistically matching the synthesized face to the original performer. Little attention has been paid to modifying and controlling facial expression, or to mapping expressive motion onto other 3D characters. This article describes a method for creating expressive facial animation by extracting information from the expression axis of a speech performance. First, a statistical model for factoring the expression and visual speech is learned from video. This model can be used to analyze the facial expression of a new performance or modify the facial expressions of an existing performance. With the addition of this analysis of the facial expression, the facial motion can be more effectively retargeted to another 3D face model. The blendshape retargeting technique is extended to include subsets of morph targets that belong to different facial expression groups. The proportion of each subset included in a final animation is weighted according to the expression information. The resulting animation conveys much more emotion than if only the motion vectors were used for retargeting. Finally, since head motion is very important in adding liveness to facial animation, we introduce an audio-driven synthesis technique for generating new head motion.

112 citations
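A minimal sketch of the extended blendshape retargeting idea: morph targets are partitioned into expression groups, and each group's contribution to the final blend is scaled by a proportion derived from the expression analysis. The function and parameter names below are illustrative assumptions, not the paper's API.

```python
import numpy as np

def blend(neutral, targets, weights, groups, group_gain):
    """Weighted blendshape evaluation with per-expression-group scaling.

    neutral    : (V, 3) neutral face vertices
    targets    : list of (V, 3) morph targets
    weights    : per-target activation weights (e.g., from retargeted motion)
    groups     : per-target expression-group index (e.g., 0 = neutral, 1 = happy, ...)
    group_gain : per-group proportion derived from the expression analysis
    """
    out = neutral.copy()
    for T, w, g in zip(targets, weights, groups):
        out += group_gain[g] * w * (T - neutral)   # add the scaled morph displacement
    return out
```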


Journal ArticleDOI
TL;DR: A procedural object distribution function, a new texture basis function that distributes procedurally generated objects over a procedurally generated texture, and a new texturing primitive that extends the range of textures that can be generated procedurally.
Abstract: In this article, we present a procedural object distribution function, a new texture basis function that distributes procedurally generated objects over a procedurally generated texture. The objects are distributed uniformly over the texture, and are guaranteed not to overlap. The scale, size, and orientation of the objects can be easily manipulated. The texture basis function is efficient to evaluate, and is suited for real-time applications. The new texturing primitive we present extends the range of textures that can be generated procedurally. The procedural object distribution function we propose is based on Poisson disk tiles and a direct stochastic tiling algorithm for Wang tiles. Poisson disk tiles are square tiles filled with a precomputed set of Poisson disk distributed points, inspired by Wang tiles. A single set of Poisson disk tiles enables the real-time generation of an infinite amount of Poisson disk distributions of arbitrary size. With the direct stochastic tiling algorithm, these Poisson disk distributions can be evaluated locally, at any position in the Euclidean plane. Poisson disk tiles and the direct stochastic tiling algorithm have many other applications in computer graphics. We briefly explore applications in object distribution, primitive distribution for illustration, and environment map sampling.

112 citations
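The local-evaluation idea can be sketched as follows: a deterministic hash of each integer lattice cell selects one of a small set of precomputed point tiles, so the distribution can be queried at any position without generating the whole plane. This simplified sketch ignores the Wang-tile edge matching that preserves the Poisson disk property across tile boundaries; all names are illustrative.

```python
import numpy as np

def cell_hash(i, j, seed=0):
    """Deterministic per-cell hash so any cell can be evaluated locally."""
    h = (i * 73856093) ^ (j * 19349663) ^ (seed * 83492791)
    return h & 0x7FFFFFFF

def points_in_cell(i, j, tiles, seed=0):
    """Return the point set covering unit cell (i, j) of the plane.

    tiles : list of precomputed point sets in [0, 1)^2 (e.g., Poisson disk samples).
    The same cell always yields the same tile, so the distribution is consistent
    no matter which part of the plane is evaluated first.
    """
    tile = tiles[cell_hash(i, j, seed) % len(tiles)]
    return tile + np.array([i, j], dtype=float)   # translate the tile into the cell
```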


Journal ArticleDOI
TL;DR: The introduction of the nondissipative technique means that, in contrast to previous methods, the simulated water does not unnecessarily lose mass, and its motion is not damped to an unphysical extent.
Abstract: This article presents a physically-based technique for simulating water. This work is motivated by the "stable fluids" method, developed by Stam [1999], to handle gaseous fluids. We extend this technique to water, which calls for the development of methods for modeling multiphase fluids and suppressing dissipation. We construct a multiphase fluid formulation by combining the Navier--Stokes equations with the level set method. By adopting constrained interpolation profile (CIP)-based advection, we reduce the numerical dissipation and diffusion significantly. We further reduce the dissipation by converting potentially dissipative cells into droplets or bubbles that undergo Lagrangian motion. Due to the multiphase formulation, the proposed method properly simulates the interaction of water with surrounding air, instead of simulating water in a void space. Moreover, the introduction of the nondissipative technique means that, in contrast to previous methods, the simulated water does not unnecessarily lose mass, and its motion is not damped to an unphysical extent. Experiments showed that the proposed method is stable and runs fast. It is demonstrated that two-dimensional simulation runs in real-time.
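As a point of comparison for the CIP advection used above to suppress dissipation, here is a first-order semi-Lagrangian advection step for a 2D field such as a level set. It is the more dissipative baseline (CIP additionally advects the field's derivatives), written as a NumPy/SciPy sketch with illustrative names.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def semi_lagrangian_advect(phi, u, v, dt):
    """First-order semi-Lagrangian advection of a 2D field (e.g., a level set).

    phi  : (H, W) scalar field sampled on a unit grid
    u, v : (H, W) velocity components (cells per unit time) along axes 0 and 1
    dt   : time step
    """
    H, W = phi.shape
    y, x = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Trace each grid point backward along the velocity field...
    y_back = np.clip(y - dt * u, 0, H - 1)
    x_back = np.clip(x - dt * v, 0, W - 1)
    # ...and bilinearly interpolate the old field there (order=1).
    return map_coordinates(phi, [y_back, x_back], order=1)
```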

Journal ArticleDOI
TL;DR: A vision-based performance interface for controlling animated human characters that interactively combines information about the user's motion contained in silhouettes from three viewpoints with domain knowledge contained in a motion capture database to produce an animation of high quality.
Abstract: We present a vision-based performance interface for controlling animated human characters. The system interactively combines information about the user's motion contained in silhouettes from three viewpoints with domain knowledge contained in a motion capture database to produce an animation of high quality. Such an interactive system might be useful for authoring, for teleconferencing, or as a control interface for a character in a game. In our implementation, the user performs in front of three video cameras; the resulting silhouettes are used to estimate his orientation and body configuration based on a set of discriminative local features. Those features are selected by a machine-learning algorithm during a preprocessing step. Sequences of motions that approximate the user's actions are extracted from the motion database and scaled in time to match the speed of the user's motion. We use swing dancing, a complex human motion, to demonstrate the effectiveness of our approach. We compare our results to those obtained with a set of global features, Hu moments, and ground truth measurements from a motion capture system.

Journal ArticleDOI
TL;DR: This article addresses the problem of controlling the density and dynamics of smoke so that the synthetic appearance of the smoke (gas) resembles a still or moving object, by imposing carefully designed velocity constraints on the smoke boundary during a dynamic fluid simulation.
Abstract: This article addresses the problem of controlling the density and dynamics of smoke (a gas phenomenon) so that the synthetic appearance of the smoke (gas) resembles a still or moving object. Both the smoke region and the target object are represented as implicit functions. As a part of the target implicit function, a shape transformation is generated between an initial smoke region and the target object. In order to match the smoke surface with the target surface, we impose carefully designed velocity constraints on the smoke boundary during a dynamic fluid simulation. The velocity constraints are derived from an iterative functional minimization procedure for shape matching. The dynamics of the smoke is formulated using a novel compressible fluid model which can effectively absorb the discontinuities in the velocity field caused by imposed velocity constraints while reproducing realistic smoke appearances. As a result, a smoke region can evolve into a regular object and follow the motion of the object, while maintaining its smoke appearance.

Journal ArticleDOI
TL;DR: This iterative procedure refines the original motion with a sequence of minimal adjustments, implicitly favoring motions that are similar to the original performance, and transforming any input motion, including those that are difficult to characterize with an objective function.
Abstract: Adaptation of ballistic motion demands a technique that can make required adjustments in anticipation of flight periods when only some physically consistent changes are possible. This article describes a numerical procedure that adjusts a physically consistent motion to fulfill new adaptation requirements expressed in kinematic and dynamic constraints. This iterative procedure refines the original motion with a sequence of minimal adjustments, implicitly favoring motions that are similar to the original performance, and transforming any input motion, including those that are difficult to characterize with an objective function. In total, over twenty adaptations were generated from two recorded performances, a run and a jump, by varying foot placement, restricting muscle use, adding new environment constraints, and changing the length and mass of specific limbs.

Journal ArticleDOI
TL;DR: A set of architectural enhancements to the classical Z-buffer acceleration hardware which supports efficient execution of the irregular Z-buffer, and includes flexible atomic read-modify-write units located near the memory controller, an internal routing network between these units and the fragment processors, and a MIMD fragment processor design.
Abstract: The classical Z-buffer visibility algorithm samples a scene at regularly spaced points on an image plane. Previously, we introduced an extension of this algorithm called the irregular Z-buffer that permits sampling of the scene from arbitrary points on the image plane. These sample points are stored in a two-dimensional spatial data structure. Here we present a set of architectural enhancements to the classical Z-buffer acceleration hardware which supports efficient execution of the irregular Z-buffer. These enhancements enable efficient parallel construction and query of certain irregular data structures, including the grid of linked lists used by our algorithm. The enhancements include flexible atomic read-modify-write units located near the memory controller, an internal routing network between these units and the fragment processors, and a MIMD fragment processor design. We simulate the performance of this new architecture and demonstrate that it can be used to render high-quality shadows in geometrically complex scenes at interactive frame rates. We also discuss other uses of the irregular Z-buffer algorithm and the implications of our architectural changes in the design of chip-multiprocessors.
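A software model of the irregular Z-buffer data structure is easy to sketch: arbitrary sample points are bucketed into a uniform screen-space grid of lists, and each triangle depth-tests only the samples stored in the cells its bounding box overlaps. The class below is an illustrative CPU sketch; the article's contribution is hardware support for this kind of structure, not this loop.

```python
import numpy as np
from collections import defaultdict

class IrregularZBuffer:
    def __init__(self, samples, grid_res):
        """samples: (N, 2) arbitrary sample positions in [0, 1)^2."""
        self.samples = samples
        self.depth = np.full(len(samples), np.inf)
        self.res = grid_res
        self.grid = defaultdict(list)          # cell -> indices of samples in that cell
        for idx, (x, y) in enumerate(samples):
            self.grid[(int(x * grid_res), int(y * grid_res))].append(idx)

    def rasterize(self, tri_xy, tri_z):
        """Depth-test the samples covered by a triangle (2D vertices + per-vertex depth)."""
        lo = np.floor(tri_xy.min(axis=0) * self.res).astype(int)
        hi = np.floor(tri_xy.max(axis=0) * self.res).astype(int)
        a, b, c = tri_xy
        Minv = np.linalg.inv(np.column_stack([b - a, c - a]))   # for barycentrics
        for cx in range(lo[0], hi[0] + 1):
            for cy in range(lo[1], hi[1] + 1):
                for idx in self.grid.get((cx, cy), []):
                    s, t = Minv @ (self.samples[idx] - a)
                    if s >= 0 and t >= 0 and s + t <= 1:        # sample inside triangle
                        z = (1 - s - t) * tri_z[0] + s * tri_z[1] + t * tri_z[2]
                        self.depth[idx] = min(self.depth[idx], z)
```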

Journal ArticleDOI
TL;DR: The proposed error-resilient transmission method is scalable with respect to both channel bandwidth and channel packet-loss rate, and jointly designs source and channel coders using a statistical measure to maximize the expected decoded model quality.
Abstract: In this article, we propose an error-resilient transmission method for progressively compressed 3D models. The proposed method is scalable with respect to both channel bandwidth and channel packet-loss rate. We jointly design source and channel coders using a statistical measure that (i) calculates the number of both source and channel coding bits, and (ii) distributes the channel coding bits among the transmitted refinement levels in order to maximize the expected decoded model quality. In order to keep the total number of bits before and after applying error protection the same, we transmit fewer triangles in the latter case to accommodate the channel coding bits. When the proposed method is used to transmit a typical model over a channel with a 10% packet-loss rate, the distortion (measured using the Hausdorff distance between the original and the decoded models) is reduced by 50% compared to the case when no error protection is applied.

Journal ArticleDOI
TL;DR: This article shows how the statistical analysis of a densely sampled point model can be used to improve the geometry bandwidth bottleneck, both on the system bus and over the network as well as for randomized rendering, without sacrificing visual realism.
Abstract: Traditional geometry representations have focused on representing the details of the geometry in a deterministic fashion. In this article we propose a statistical representation of the geometry that leverages local coherence for very large datasets. We show how the statistical analysis of a densely sampled point model can be used to improve the geometry bandwidth bottleneck, both on the system bus and over the network as well as for randomized rendering, without sacrificing visual realism. Our statistical representation is built using a clustering-based hierarchical principal component analysis (PCA) of the point geometry. It gives us a hierarchical partitioning of the geometry into compact local nodes representing attributes such as spatial coordinates, normal, and color. We pack this information into a few bytes using classification and quantization. This allows our representation to directly render from compressed format for efficient remote as well as local rendering. Our representation supports both view-dependent and on-demand rendering. Our approach renders each node using quasi-random sampling utilizing the probability distribution derived from the PCA analysis. We show many benefits of our approach: (1) several-fold improvement in the storage and transmission complexity of point geometry; (2) direct rendering from compressed data; and (3) support for local and remote rendering on a variety of rendering platforms such as CPUs, GPUs, and PDAs.
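The core of the statistical representation, clustering followed by per-cluster PCA with truncated coefficients, can be sketched in a few lines of NumPy/SciPy. This flat, single-level sketch with illustrative names omits the hierarchy, the packing into a few bytes, and the quasi-random rendering described above.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def compress_points(points, n_clusters=64, n_components=2):
    """Cluster points and store each one by its leading PCA coefficients."""
    centroids, labels = kmeans2(points, n_clusters, minit="++")
    nodes = []
    for c in range(n_clusters):
        P = points[labels == c]
        if len(P) == 0:
            continue
        mean = P.mean(axis=0)
        # Principal axes of the cluster from the SVD of the centered points.
        _, _, Vt = np.linalg.svd(P - mean, full_matrices=False)
        basis = Vt[:n_components]                  # keep the leading directions
        coeffs = (P - mean) @ basis.T              # low-dimensional coefficients
        nodes.append((mean, basis, coeffs))        # these would be quantized/packed
    return nodes

def decompress(nodes):
    return np.vstack([mean + coeffs @ basis for mean, basis, coeffs in nodes])
```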

Journal ArticleDOI
TL;DR: It is shown that, for certain classes of geometric mesh models, spectral decomposition using the eigenvectors of the symmetric Laplacian of the connectivity graph is equivalent to principal component analysis on that class, when equipped with a natural probability distribution.
Abstract: Spectral compression of the geometry of triangle meshes achieves good results in practice, but there has been little or no theoretical support for the optimality of this compression. We show that, for certain classes of geometric mesh models, spectral decomposition using the eigenvectors of the symmetric Laplacian of the connectivity graph is equivalent to principal component analysis on that class, when equipped with a natural probability distribution. Our proof treats connected one- and two-dimensional meshes with fixed convex boundaries, and is based on an asymptotic approximation of the probability distribution in the two-dimensional case. The key component of the proof is that the Laplacian is identical, up to a constant factor, to the inverse covariance matrix of the distribution of valid mesh geometries. Hence, spectral compression is optimal, in the mean square error sense, for these classes of meshes under some natural assumptions on their distribution.
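The spectral compression whose optimality is analyzed here projects the coordinate vectors onto the low-frequency eigenvectors of the combinatorial Laplacian of the mesh connectivity and keeps only the first k coefficients. A minimal dense NumPy sketch with illustrative names, omitting quantization and entropy coding:

```python
import numpy as np

def graph_laplacian(n_vertices, edges):
    """Symmetric combinatorial Laplacian L = D - A of the connectivity graph."""
    L = np.zeros((n_vertices, n_vertices))
    for i, j in edges:
        L[i, j] -= 1.0
        L[j, i] -= 1.0
        L[i, i] += 1.0
        L[j, j] += 1.0
    return L

def spectral_compress(coords, edges, k):
    """Keep only the k lowest-frequency spectral coefficients of the geometry.

    coords : (n, 3) vertex positions, edges : list of (i, j) connectivity edges.
    """
    L = graph_laplacian(len(coords), edges)
    _, U = np.linalg.eigh(L)          # eigenvectors sorted by ascending eigenvalue
    C = U.T @ coords                  # spectral coefficients of x, y, z
    C[k:] = 0.0                       # discard (or coarsely quantize) high frequencies
    return U @ C                      # reconstructed, smoothed geometry
```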

Journal ArticleDOI
Sang Hun Lee
TL;DR: This article proposes an algorithm for feature-based multiresolution solid modeling based on the effective volumes of features that guarantees the same resulting shape and reasonable intermediate LOD models for an arbitrary rearrangement of the features, regardless of whether feature types are additive or subtractive.
Abstract: Recently, three-dimensional CAD systems based on feature-based solid modeling techniques have been widely used for product design. However, when part models associated with features are used in various downstream applications, simplified models at various levels of detail (LODs) are frequently more desirable than the full details of the parts. One challenge is to generate valid models at various LODs after an arbitrary rearrangement of features using a certain LOD criterion, because composite Boolean operations consisting of union and subtraction are not commutative. This article proposes an algorithm for feature-based multiresolution solid modeling based on the effective volumes of features. This algorithm guarantees the same resulting shape and reasonable intermediate LOD models for an arbitrary rearrangement of the features, regardless of whether feature types are additive or subtractive. This characteristic enables various LOD criteria to be used for a wide range of applications including computer-aided design and analysis.

Journal ArticleDOI
TL;DR: A photo-realistic hemispherical twilight sky is computed in less than two hours on a conventional PC, useful for high-dynamic range environment mapping, outdoor global illumination calculations, mesopic vision research and optical aerosol load probing.
Abstract: We present a physically-based approach to compute the colors of the sky during the twilight period before sunrise and after sunset. The simulation is based on the theory of light scattering by small particles. A realistic atmosphere model is assumed, consisting of air molecules, aerosols, and water. Air density, aerosols, and relative humidity vary with altitude. In addition, the aerosol component varies in composition and particle-size distribution. This allows us to realistically simulate twilight phenomena for a wide range of different climate conditions. Besides considering multiple Rayleigh and Mie scattering, we take into account wavelength-dependent refraction of direct sunlight as well as the shadow of the Earth. Incorporating several optimizations into the radiative transfer simulation, a photo-realistic hemispherical twilight sky is computed in less than two hours on a conventional PC. The resulting radiometric data is useful, for instance, for high-dynamic range environment mapping, outdoor global illumination calculations, mesopic vision research and optical aerosol load probing.

Journal ArticleDOI
TL;DR: In this paper, a subdivision scheme for mixed triangle/quad meshes that is C2 everywhere except for isolated, extraordinary points is presented, and a proof based on Levin and Levin's [2003] joint spectral radius calculation is provided.
Abstract: In this article, we present a subdivision scheme for mixed triangle/quad meshes that is C2 everywhere except for isolated, extraordinary points. The rules that we describe are the same as Stam and Loop's scheme [2003] except that we perform an unzipping pass prior to subdivision. This simple modification improves the smoothness along the ordinary triangle/quad boundary from C1 to C2, and creates a scheme capable of subdividing arbitrary meshes. Finally, we end with a proof based on Levin and Levin's [2003] joint spectral radius calculation to show our scheme is indeed C2 along the triangle/quad boundary.

Journal ArticleDOI
TL;DR: A method of calculating a mapping between two implicit surfaces by solving two PDEs over a tetrahedralized hypersurface that connects the two surfaces in 4D is presented, and its use for transferring texture between two surfaces that may have differing topologies is demonstrated.
Abstract: Mappings between surfaces have a variety of uses, including texture transfer, multi-way morphing, and surface analysis. Given a 4D implicit function that defines a morph between two implicit surfaces, this article presents a method of calculating a mapping between the two surfaces. We create such a mapping by solving two PDEs over a tetrahedralized hypersurface that connects the two surfaces in 4D. Solving the first PDE yields a vector field that indicates how points on one surface flow to the other. Solving the second PDE propagates position labels along this vector field so that the second surface is tagged with a unique position on the first surface. One strength of this method is that it produces correspondences between surfaces even when they have different topologies. Even if the surfaces split apart or holes appear, the method still produces a mapping entirely automatically. We demonstrate the use of this approach to transfer texture between two surfaces that may have differing topologies.

Journal ArticleDOI
TL;DR: A hierarchical triangular surface model that enables designers to create a complex smooth surface of arbitrary topology composed of a small number of patches to which details can be added by locally refining the patches until an arbitrarily small size is reached.
Abstract: Smooth parametric surfaces interpolating triangular meshes are very useful for modeling surfaces of arbitrary topology. Several interpolants based on this kind of surface have been developed over the last fifteen years. However, with current 3D acquisition equipment, models are becoming more and more complex. Since previous interpolation methods lack a local refinement property, there is no way to locally adapt the level of detail. In this article, we introduce a hierarchical triangular surface model. The surface is overall tangent plane continuous and is defined parametrically as a piecewise quintic polynomial. It can be adaptively refined while preserving the overall tangent plane continuity. This model enables designers to create a complex smooth surface of arbitrary topology composed of a small number of patches to which details can be added by locally refining the patches until an arbitrarily small size is reached. It is implemented as a hierarchical data structure where the top layer describes a coarse, smooth base surface and the lower levels encode the details in local frame coordinates.

Journal ArticleDOI
TL;DR: It is shown that the smallest singular value of the transformation matrix can be used to bound both the quantization error and the rounding error which is due to the use of floating-point arithmetic.
Abstract: This article presents an algebraic analysis of a mesh-compression technique called high-pass quantization [Sorkine et al. 2003]. In high-pass quantization, a rectangular matrix based on the mesh topological Laplacian is applied to the vectors of the Cartesian coordinates of a polygonal mesh. The resulting vectors, called δ-coordinates, are then quantized. The applied matrix is a function of the topology of the mesh and the indices of a small set of mesh vertices (anchors) but not of the location of the vertices. An approximation of the geometry can be reconstructed from the quantized δ-coordinates and the spatial locations of the anchors. In this article, we show how to algebraically bound the reconstruction error that this method generates. We show that the smallest singular value of the transformation matrix can be used to bound both the quantization error and the rounding error which is due to the use of floating-point arithmetic. Furthermore, we prove a bound on this singular value. The bound is a function of the topology of the mesh and of the selected anchors. We also propose a new anchor-selection algorithm, inspired by this bound. We show experimentally that the method is effective and that the computed upper bound on the error is not too pessimistic.
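A minimal sketch of the high-pass quantization pipeline being analyzed: the Laplacian extended with anchor selector rows maps the coordinates to δ-coordinates, these are quantized, and the geometry is reconstructed by least squares; the smallest singular value of the extended matrix controls how much the quantization error can be amplified. The names and the uniform quantizer below are illustrative assumptions.

```python
import numpy as np

def highpass_quantize_roundtrip(coords, L, anchors, step=0.01):
    """Encode geometry as quantized delta-coordinates and reconstruct it.

    coords  : (n, 3) vertex positions
    L       : (n, n) topological (combinatorial) Laplacian of the mesh
    anchors : indices of vertices whose positions are transmitted directly
    """
    # Extended rectangular matrix: Laplacian rows plus one selector row per anchor.
    A = np.vstack([L, np.eye(L.shape[0])[anchors]])
    delta = A @ coords                                  # high-pass "delta" coordinates
    delta_q = step * np.round(delta / step)             # uniform quantization
    # Least-squares reconstruction from the quantized deltas.
    recon, *_ = np.linalg.lstsq(A, delta_q, rcond=None)
    # The smallest singular value of A bounds how much the quantization error can be
    # amplified in the reconstruction: ||recon - coords|| <= ||delta - delta_q|| / sigma_min.
    sigma_min = np.linalg.svd(A, compute_uv=False)[-1]
    return recon, sigma_min
```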

Journal ArticleDOI
TL;DR: This work fully models and analyzes the use of octrees for MIP, and mathematically shows that the average MIP complexity can be reduced to O(n²) for an object-order algorithm, or to O(log n) per ray when using the image-order variant of the algorithm.
Abstract: Many techniques have already been proposed to improve the efficiency of maximum intensity projection (MIP) volume rendering, but none of them considered the possible hypothesis of a better complexity than either O(n) for finding the maximum value of n samples along a ray or O(n³) for an object-order algorithm. Here, we fully model and analyze the use of octrees for MIP, and we mathematically show that the average MIP complexity can be reduced to O(n²) for an object-order algorithm, or to O(log n) per ray when using the image-order variant of our algorithm. Therefore, this improvement establishes a major advance for interactive MIP visualization of large-volume data. In parallel, we also present an object-order implementation of our algorithm, satisfying the theoretical O(n²) result. It is based on hierarchical occlusion maps that perform on-the-fly visibility of the data, and our results show that it is the most efficient solution for MIP available to date.
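The complexity argument can be illustrated with a 1D analogue of the octree of maxima: a binary max tree over the samples answers a range-maximum query along (the 1D analogue of) a ray in O(log n), instead of scanning every sample. This is only an illustrative analogue, not the article's 3D object-order implementation.

```python
class MaxTree:
    """Binary tree of maxima over a 1D array (1D analogue of an octree of maxima)."""
    def __init__(self, values):
        self.n = len(values)
        self.tree = [0.0] * (2 * self.n)
        self.tree[self.n:] = list(values)
        for i in range(self.n - 1, 0, -1):
            self.tree[i] = max(self.tree[2 * i], self.tree[2 * i + 1])

    def range_max(self, lo, hi):
        """Maximum of values[lo:hi] in O(log n), skipping subtrees via stored maxima."""
        res = float("-inf")
        lo += self.n
        hi += self.n
        while lo < hi:
            if lo & 1:
                res = max(res, self.tree[lo]); lo += 1
            if hi & 1:
                hi -= 1; res = max(res, self.tree[hi])
            lo //= 2
            hi //= 2
        return res

# samples = intensities along one ray; MaxTree(samples).range_max(a, b) returns the
# maximum over the covered interval without touching every sample.
```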

Journal ArticleDOI
TL;DR: A novel multi-level technique for fast character adaptation that can be easily integrated into most existing behavioral animation systems and is also fast and memory efficient.
Abstract: Adaptation (online learning) by autonomous virtual characters, due to interaction with a human user in a virtual environment, is a difficult and important problem in computer animation. In this article we present a novel multi-level technique for fast character adaptation. We specifically target environments where there is a cooperative or competitive relationship between the character and the human that interacts with that character.In our technique, a distinct learning method is applied to each layer of the character's behavioral or cognitive model. This allows us to efficiently leverage the character's observations and experiences in each layer. This also provides a convenient temporal distinction between what observations and experiences provide pertinent lessons for each layer. Thus the character can quickly and robustly learn how to better interact with any given unique human user, relying only on observations and natural performance feedback from the environment (no explicit feedback from the human). Our technique is designed to be general, and can be easily integrated into most existing behavioral animation systems. It is also fast and memory efficient.

Journal ArticleDOI
TL;DR: A physically-based model is presented that takes into account the physical parameters and processes directly associated with plasma flow, and can be extended to simulate the dynamics of other plasma phenomena as well as astrophysical phenomena.
Abstract: Simulating natural phenomena has always been a focal point for computer graphics research. Its importance goes beyond the production of appealing presentations, since research in this area can contribute to the scientific understanding of complex natural processes. The natural phenomena known as the Aurora Borealis and Aurora Australis are geomagnetic phenomena of impressive visual characteristics and remarkable scientific interest. Aurorae present a complex behavior that arises from interactions between plasma (hot, ionized gases composed of ions, electrons, and neutral atoms) and Earth's electromagnetic fields. Previous work on the visual simulation of auroral phenomena has focused on static physical models of their shape, modeled from primitives such as sine waves. In this article, we focus on the dynamic behavior of the aurora, and we present a physically-based model to perform 3D visual simulations. The model takes into account the physical parameters and processes directly associated with plasma flow, and can be extended to simulate the dynamics of other plasma phenomena as well as astrophysical phenomena. The partial differential equations associated with these processes are solved using a complete multigrid implementation of the electromagnetic interactions, leading to a simulation of the shape and motion of the auroral displays. In order to illustrate the applicability of our model, we provide simulation sequences rendered using a distributed forward mapping approach.

Journal ArticleDOI
TL;DR: It is shown that from the 3D Delaunay triangulation, it is possible to derive a distance measure among regions lying in adjacent slices that is used to define a positive integer parameter, called β, responsible for establishing the connections.
Abstract: Despite the significant evolution of techniques for 3D-reconstruction from planar cross sections, establishing the correspondence of regions in adjacent slices remains an important issue. In this article, we propose a novel approach for solving the correspondence problem in a flexible manner. We show that from the 3D Delaunay triangulation, it is possible to derive a distance measure among regions lying in adjacent slices. This distance is used to define a positive integer parameter, called β, responsible for establishing the connections. Varying β thus allows the construction of different models from a given set of cross-sectional regions: small values of β cause closer regions to be connected into a single component, and as β increases, more distant regions are connected together. The algorithm, named β-connection, is described, and examples are provided that illustrate its applicability in solid modeling and model reconstruction from real data. The underlying reconstruction method is effective and, together with the β-connection correspondence strategy, considerably improves the usability of volumetric reconstruction techniques.