
Showing papers in "Computer Graphics Forum in 2000"


Journal ArticleDOI
TL;DR: This approach is based on the determination of principal animation components and decouples the animation from the underlying geometry, and supports progressive animation compression with spatial, as well as temporal, level‐of‐detail and high compression ratios.
Abstract: In this paper, we present a representation for three-dimensional geometric animation sequences. Different from standard key-frame techniques, this approach is based on the determination of principal animation components and decouples the animation from the underlying geometry. The new representation supports progressive animation compression with spatial, as well as temporal, level-of-detail and high compression ratios. The distinction of animation and geometry allows for mapping animations onto other objects.
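The principal-component idea behind this representation can be sketched with a reduced SVD over per-frame vertex coordinates. This is a minimal sketch with hypothetical function names and a toy animation; the paper's progressive compression and spatio-temporal level-of-detail machinery is omitted:

```python
import numpy as np

def compress_animation(frames, k):
    """Keep only the k principal animation components of an (F, 3V)
    matrix of per-frame vertex coordinates (hypothetical API)."""
    mean = frames.mean(axis=0)            # average pose
    centered = frames - mean
    # rows of Vt are the principal animation components
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    coeffs = centered @ Vt[:k].T          # per-frame weights, shape (F, k)
    return mean, Vt[:k], coeffs

def decompress_animation(mean, components, coeffs):
    return mean + coeffs @ components

# toy animation: 8 frames, 4 vertices (12 coordinates), rank-2 motion
rng = np.random.default_rng(0)
basis = rng.normal(size=(2, 12))
weights = rng.normal(size=(8, 2))
anim = weights @ basis + rng.normal(size=12)

mean, comps, coeffs = compress_animation(anim, k=2)
recon = decompress_animation(mean, comps, coeffs)
```

Because the toy motion has rank 2, two components reconstruct it exactly; real animations need more components, traded against compression ratio.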

321 citations


Journal ArticleDOI
TL;DR: This work provides an editor similar to paint systems for interactively creating stipple drawings and makes it possible to create such drawings within a matter of hours, instead of days or even weeks when the drawing is done manually.
Abstract: We present a method for computer generated pen-and-ink illustrations by the simulation of stippling. In a stipple drawing, dots are used to represent tone and also material of surfaces. We create such drawings by generating an initial dot set which is then processed by a relaxation method based on Voronoi diagrams. The point patterns generated are approximations of Poisson disc distributions and can also be used for integrating functions or the positioning of objects. We provide an editor similar to paint systems for interactively creating stipple drawings. This makes it possible to create such drawings within a matter of hours, instead of days or even weeks when the drawing is done manually.
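The relaxation step can be approximated with a discrete Lloyd iteration, a minimal sketch assuming uniform tone and a fixed sample grid (a real stippler weights samples by image darkness, and the paper works with exact Voronoi diagrams):

```python
import numpy as np

def lloyd_relax(points, iters=20, res=64):
    """Discrete Lloyd relaxation on the unit square: assign every grid
    sample to its nearest stipple point, then move each point to the
    centroid of the samples it owns (its approximate Voronoi cell)."""
    ys, xs = np.mgrid[0:res, 0:res]
    samples = np.column_stack([xs.ravel(), ys.ravel()]) / (res - 1.0)
    pts = points.copy()
    for _ in range(iters):
        d2 = ((samples[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
        owner = d2.argmin(axis=1)
        for i in range(len(pts)):
            cell = samples[owner == i]
            if len(cell):
                pts[i] = cell.mean(axis=0)
    return pts

rng = np.random.default_rng(1)
init = rng.random((16, 2)) * 0.1        # clumped initial dots
relaxed = lloyd_relax(init)
```

Starting from a clump, the iteration spreads the dots toward an evenly spaced, Poisson-disc-like distribution, which is why such point sets also work for the integration and object-placement uses the abstract mentions.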

241 citations


Journal ArticleDOI
TL;DR: An easy, practical and efficient full body cloning methodology that utilizes photos taken from the front, side and back of a person in any given imaging environment without requiring a special background or a controlled illuminating condition is presented.
Abstract: We present an easy, practical and efficient full body cloning methodology. This system utilizes photos taken from the front, side and back of a person in any given imaging environment without requiring a special background or a controlled illuminating condition. A seamless generic body specified in the VRML H-Anim 1.1 format is used to generate an individualized virtual human. The system is composed of two major components: face-cloning and body-cloning. The face-cloning component uses feature points on front and side images and then applies DFFD for shape modification. Next a fully automatic seamless texture mapping is generated for 360° coloring on a 3D polygonal model. The body-cloning component has two steps: (i) feature points specification, which enables automatic silhouette detection in an arbitrary background; (ii) two-stage body modification by using feature points and body silhouette respectively. The final integrated human model has a photo-realistic animatable face, hands, feet and body. The result can be visualized in any VRML-compliant browser.

147 citations


Journal ArticleDOI
TL;DR: In this article, a dynamic mesh representation is proposed for multiresolution shape deformation, which adapts the connectivity during the modification in order to maintain a prescribed mesh quality, which enables extreme deformations of the global shape while preventing the mesh from degenerating.
Abstract: Multiresolution shape representation is a very effective way to decompose surface geometry into several levels of detail. Geometric modeling with such representations enables flexible modifications of the global shape while preserving the detail information. Many schemes for modeling with multiresolution decompositions based on splines, polygonal meshes and subdivision surfaces have been proposed recently. In this paper we modify the classical concept of multiresolution representation by no longer requiring a global hierarchical structure that links the different levels of detail. Instead we represent the detail information implicitly by the geometric difference between independent meshes. The detail function is evaluated by shooting rays in normal direction from one surface to the other without assuming a consistent tessellation. In the context of multiresolution shape deformation, we propose a dynamic mesh representation which adapts the connectivity during the modification in order to maintain a prescribed mesh quality. Combining the two techniques leads to an efficient mechanism which enables extreme deformations of the global shape while preventing the mesh from degenerating. During the deformation, the detail is reconstructed in a natural and robust way. The key to the intuitive detail preservation is a transformation map which associates points on the original and the modified geometry with minimum distortion. We show several examples which demonstrate the effectiveness and robustness of our approach including the editing of multiresolution models and models with texture.

130 citations


Journal ArticleDOI
TL;DR: This paper presents models that produce realistic looking pencil marks, textures, and tones based on an observation of how lead pencils interact with drawing paper, and on the absorptive and dispersive properties of blenders and erasers interacting with lead material deposited over drawing paper.
Abstract: This paper presents models for graphite pencil, drawing paper, blenders, and kneaded eraser that produce realistic looking pencil marks, textures, and tones. Our models are based on an observation of how lead pencils interact with drawing paper, and on the absorptive and dispersive properties of blenders and erasers interacting with lead material deposited over drawing paper. The models consider parameters such as the particle composition of the lead, the texture of the paper, the position and shape of the pencil materials, and the pressure applied to them. We demonstrate the capabilities of our approach with a variety of images and compare them to digitized pencil drawings. We also present image-based rendering results implementing traditional graphite pencil tone rendering methods.

129 citations


Journal ArticleDOI
TL;DR: This work presents an algebraic framework, called Constructive Volume Geometry (CVG), for modelling complex spatial objects using combinational operations, and describes the interior as well as the exterior of objects.
Abstract: We present an algebraic framework, called Constructive Volume Geometry (CVG), for modelling complex spatial objects using combinational operations. By utilising scalar fields as fundamental building blocks, CVG provides high-level algebraic representations of objects that are defined mathematically or built upon sampled or simulated datasets. It models amorphous phenomena as well as solid objects, and describes the interior as well as the exterior of objects. We also describe a hierarchical representation scheme for CVG, and a direct rendering method with a new approach for consistent sampling. The work has demonstrated the feasibility of combining a variety of graphics data types in a coherent modelling scheme.
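The algebraic flavour of building objects from scalar fields can be illustrated with min/max combination operators. This is a deliberately simplified sketch: CVG's spatial objects carry opacity and attribute fields rather than a single scalar, and its operator set is richer than the two shown here.

```python
def sphere(cx, cy, cz, r):
    """A scalar field as a building block: positive inside, negative
    outside (implicit-surface convention)."""
    return lambda x, y, z: r * r - ((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2)

def union(f, g):
    """Combinational operator: a point is inside the union if it is
    inside either operand."""
    return lambda x, y, z: max(f(x, y, z), g(x, y, z))

def intersection(f, g):
    """Inside the intersection only if inside both operands."""
    return lambda x, y, z: min(f(x, y, z), g(x, y, z))

# two overlapping unit spheres combined algebraically
shape = union(sphere(0, 0, 0, 1), sphere(1.5, 0, 0, 1))
```

Because the operands are fields defined everywhere, the same expressions apply whether the leaves are mathematical primitives, as here, or interpolated sampled datasets.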

129 citations


Journal ArticleDOI
TL;DR: An automatic camera placement method for generating image‐based models from scenes with known geometry that first approximately determines the set of surfaces visible from a given viewing area and then selects a small set of appropriate camera positions to sample the scene from.
Abstract: We present an automatic camera placement method for generating image-based models from scenes with known geometry. Our method first approximately determines the set of surfaces visible from a given viewing area and then selects a small set of appropriate camera positions to sample the scene from. We define a quality measure for a surface as seen, or covered, from the given viewing area. Along with each camera position, we store the set of surfaces which are best covered by this camera. Next, one reference view is generated from each camera position by rendering the scene. Pixels in each reference view that do not belong to the selected set of polygons are masked out. The image-based model generated by our method covers every visible surface only once, associating it with a camera position from which it is covered with quality that exceeds a user-specified quality threshold. The result is a compact non-redundant image-based model with controlled quality. The problem of covering every visible surface with a minimum number of cameras (guards) can be regarded as an extension to the well-known Art Gallery Problem. However, since the 3D polygonal model is textured, the camera-polygon visibility relation is not binary; instead, it has a weight: the quality of the polygon's coverage.
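Since the underlying problem is a weighted variant of set cover (the Art Gallery extension mentioned in the abstract), the selection step can be sketched with the standard greedy heuristic. Camera names, the `coverage` structure, and the threshold value below are hypothetical illustrations, not the paper's data model:

```python
def select_cameras(coverage, quality_threshold=0.5):
    """Greedily pick the camera covering the most not-yet-covered
    surfaces above the quality threshold. coverage[cam][surface] is
    the quality with which cam sees that surface."""
    uncovered = {s for quals in coverage.values() for s in quals}
    chosen = []
    while uncovered:
        best, best_set = None, set()
        for cam, quals in coverage.items():
            good = {s for s, q in quals.items()
                    if q >= quality_threshold and s in uncovered}
            if len(good) > len(best_set):
                best, best_set = cam, good
        if best is None:
            break          # remaining surfaces cannot be covered well enough
        chosen.append((best, best_set))
        uncovered -= best_set
    return chosen

coverage = {'A': {'s1': 0.9, 's2': 0.8},
            'B': {'s2': 0.6, 's3': 0.9},
            'C': {'s3': 0.4}}
chosen = select_cameras(coverage)
```

Here camera C is never selected: its only surface is covered better by B, matching the paper's goal of a non-redundant model where each surface is stored with exactly one camera.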

105 citations


Journal ArticleDOI
TL;DR: A Digital Watermarking system dedicated for embedding watermarks into 3D polygonal models realizing watermarks with robustness against more complex operations, most noticeably polygon reduction is described.
Abstract: We describe a Digital Watermarking system dedicated for embedding watermarks into 3D polygonal models. The system consists of three watermarking algorithms, one named Vertex Flood Algorithm (VFA) suitable for embedding fragile public readable watermarks with high capacity and offering a way of model authentication, one realizing affine invariant watermarks, named Affine Invariant Embedding (AIE) and a third one, named Normal Bin Encoding (NBE) algorithm, realizing watermarks with robustness against more complex operations, most noticeably polygon reduction. The watermarks generated by these algorithms are stackable. We shortly discuss the implementation of the system, which is realized as a 3D Studio MAX plugin.

102 citations


Journal ArticleDOI
TL;DR: A new technique called motion balance filtering is presented, which corrects an unbalanced motion to a balanced one while preserving the original motion characteristics as much as possible, formulated as a constrained optimization problem.
Abstract: This paper presents a new technique called motion balance filtering, which corrects an unbalanced motion to a balanced one while preserving the original motion characteristics as much as possible. Differently from previous approaches that deal only with the balance of static posture, we solve the problem of balancing a dynamic motion. We achieve dynamic balance by analyzing and controlling the trajectory of the zero moment point (ZMP). Our algorithm consists of three steps. First, it analyzes the ZMP trajectory to find out the duration in which dynamic balance is violated. Dynamic imbalance is identified by the ZMP trajectory segments lying out of the supporting area. Next, the algorithm modifies the ZMP trajectory by projecting it into the supporting area. Finally, it generates the balanced motion that satisfies the new ZMP constraint. This process is formulated as a constrained optimization problem so that the new motion resembles the original motion as much as possible. Experiments prove that our motion balance filtering algorithm is a useful method to add physical realism to a kinematically edited motion.
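The first two steps can be sketched for the simplest case of an axis-aligned rectangular support area. The paper projects into the actual supporting polygon and then solves a constrained optimization to regenerate the motion; both refinements are elided here, and the names are hypothetical:

```python
def unbalanced_segments(zmp_traj, support):
    """Step 1: indices where the ZMP leaves the support area,
    i.e. where dynamic balance is violated."""
    xmin, xmax, ymin, ymax = support
    return [i for i, (x, y) in enumerate(zmp_traj)
            if not (xmin <= x <= xmax and ymin <= y <= ymax)]

def project_zmp(zmp_traj, support):
    """Step 2: clamp each ZMP sample into the (rectangular) support
    area, producing the new balanced ZMP constraint."""
    xmin, xmax, ymin, ymax = support
    return [(min(max(x, xmin), xmax), min(max(y, ymin), ymax))
            for x, y in zmp_traj]

traj = [(0.0, 0.0), (0.3, 0.0), (-0.2, 0.1)]
support = (-0.1, 0.1, -0.05, 0.05)
bad = unbalanced_segments(traj, support)
fixed = project_zmp(traj, support)
```

Clamping moves each offending sample to the nearest point of the rectangle, the rectangular analogue of projecting onto the supporting polygon's boundary.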

99 citations


Journal ArticleDOI
TL;DR: The resulting one-dimensional representation of the image has improved autocorrelation compared with universal scans such as the Peano-Hilbert space filling curve, and the potential of improved autocorrelation of context-based space filling curves for image and video lossless compression is discussed.
Abstract: A context-based scanning technique for images is presented. An image is scanned along a context-based space filling curve that is computed so as to exploit inherent coherence in the image. The resulting one-dimensional representation of the image has improved autocorrelation compared with universal scans such as the Peano-Hilbert space filling curve. An efficient algorithm for computing context-based space filling curves is presented. We also discuss the potential of improved autocorrelation of context-based space filling curves for image and video lossless compression.
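A greedy stand-in conveys what "context-based" means here: at each step the scan prefers the unvisited neighbour whose value is most similar, so successive samples of the 1D signal stay correlated. This is an illustrative assumption, not the paper's actual curve construction:

```python
def context_scan(img):
    """Greedy coherence-seeking scan of a 2D image (list of rows).
    Steps to the unvisited 4-neighbour with the closest value; if
    stuck, jumps to the nearest unvisited pixel."""
    h, w = len(img), len(img[0])
    visited = [[False] * w for _ in range(h)]
    y = x = 0
    visited[0][0] = True
    order = [(0, 0)]
    while len(order) < h * w:
        nbrs = [(y + dy, x + dx) for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= y + dy < h and 0 <= x + dx < w
                and not visited[y + dy][x + dx]]
        if nbrs:   # most similar neighbour = most coherent next sample
            y, x = min(nbrs, key=lambda p: abs(img[p[0]][p[1]] - img[y][x]))
        else:      # dead end: restart at the nearest unvisited pixel
            y, x = min(((r, c) for r in range(h) for c in range(w)
                        if not visited[r][c]),
                       key=lambda p: abs(p[0] - y) + abs(p[1] - x))
        visited[y][x] = True
        order.append((y, x))
    return order

order = context_scan([[0, 1, 2, 3], [1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]])
```

Unlike the Peano-Hilbert curve, the visiting order adapts to the image content, which is the source of the improved autocorrelation the abstract claims.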

98 citations


Journal ArticleDOI
TL;DR: In this article, a new interpolatory subdivision scheme for triangle meshes is presented, where instead of splitting each edge and performing a 1-to-4 split for every triangle, the new vertices are computed with a Butterfly-like scheme.
Abstract: We present a new interpolatory subdivision scheme for triangle meshes. Instead of splitting each edge and performing a 1-to-4 split for every triangle we compute a new vertex for every triangle and retriangulate the old and the new vertices. Using this refinement operator the number of triangles only triples in each step. New vertices are computed with a Butterfly-like scheme. In order to obtain overall smooth surfaces special rules are necessary in the neighborhood of extraordinary vertices. The scheme is suitable for adaptive refinement by using an easy forward strategy. No temporary triangles are produced here which allows simpler data structures and makes the scheme easy to implement.
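The connectivity side of the refinement operator can be sketched as follows. The centroid below is a stand-in for the paper's Butterfly-like vertex rule, and the subsequent retriangulation of old and new vertices (edge flipping) is omitted; only the one-new-vertex-per-triangle, triangle-tripling structure is shown:

```python
def triple_subdivide(verts, faces):
    """Insert one new vertex per triangle (here: its centroid) and
    connect it to the triangle's three corners, so each input face
    becomes three output faces."""
    verts = list(verts)
    new_faces = []
    for a, b, c in faces:
        centroid = tuple((verts[a][i] + verts[b][i] + verts[c][i]) / 3.0
                         for i in range(3))
        m = len(verts)
        verts.append(centroid)
        new_faces += [(a, b, m), (b, c, m), (c, a, m)]
    return verts, new_faces

verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
faces = [(0, 1, 2)]
nv, nf = triple_subdivide(verts, faces)
```

The face count triples and old vertices are never moved, consistent with the interpolatory, 1-to-3 character of the scheme.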

Journal ArticleDOI
TL;DR: An algorithm is proposed that takes as input a generic set of unorganized points, sampled on a real object, and returns a closed interpolating surface that generates a closed 2‐manifold surface made of triangular faces, without limitations on the shape or genus of the original solid.
Abstract: In this paper an algorithm is proposed that takes as input a generic set of unorganized points, sampled on a real object, and returns a closed interpolating surface. Specifically, this method generates a closed 2-manifold surface made of triangular faces, without limitations on the shape or genus of the original solid. The reconstruction method is based on generation of the Delaunay tetrahedralization of the point set, followed by a sculpturing process constrained to particular criteria. The main applications of this tool are in medical analysis and in reverse engineering areas. It is possible, for example, to reconstruct anatomical parts starting from surveys based on TACs or magnetic resonance.

Journal ArticleDOI
TL;DR: A new gradient estimation method based on linear regression which provides at each voxel location the normal vector and the translation of the regression hyperplane which are considered as a gradient and a filtered density value respectively and can be used for surface smoothing and gradient estimation at the same time.
Abstract: In this paper a new gradient estimation method is presented which is based on linear regression. Previous contextual shading techniques try to fit an approximate function to a set of surface points in the neighborhood of a given voxel. Therefore a system of linear equations has to be solved using the computationally expensive Gaussian elimination. In contrast, our method approximates the density function itself in a local neighborhood with a 3D regression hyperplane. This approach also leads to a system of linear equations but we will show that it can be solved with an efficient convolution. Our method provides at each voxel location the normal vector and the translation of the regression hyperplane which are considered as a gradient and a filtered density value respectively. Therefore this technique can be used for surface smoothing and gradient estimation at the same time.
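Because the 3x3x3 neighbourhood offsets are symmetric, the least-squares fit of the regression hyperplane d ≈ a + g·o collapses into convolution-like sums: the normal equations decouple, giving g_k = Σ o_k·d / Σ o_k² and a = mean(d). A minimal sketch for one interior voxel, with uniform weighting assumed (the paper's formulation may weight neighbours differently):

```python
import numpy as np

def regression_gradient(vol, x, y, z):
    """Fit a regression hyperplane to the 3x3x3 neighbourhood of an
    interior voxel; return (gradient, filtered density)."""
    nb = vol[x-1:x+2, y-1:y+2, z-1:z+2]
    # offsets o in {-1,0,1}^3, one (3,3,3) grid per component
    off = np.array(np.meshgrid([-1, 0, 1], [-1, 0, 1], [-1, 0, 1],
                               indexing='ij'))
    # decoupled least squares: g_k = sum(o_k * d) / sum(o_k^2)
    grad = (off * nb).sum(axis=(1, 2, 3)) / (off ** 2).sum(axis=(1, 2, 3))
    return grad, nb.mean()      # plane offset a = filtered density

i, j, k = np.indices((5, 5, 5))
vol = (2 * i + 3 * j - k).astype(float)   # exactly linear density
g, d = regression_gradient(vol, 2, 2, 2)
```

On an exactly linear density field the fitted plane is the field itself, so the estimator recovers the true gradient and the central value, which makes a convenient sanity check.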

Journal ArticleDOI
TL;DR: A framework that combines both occlusion culling and level‐of‐detail rendering techniques to improve rendering times is presented, to estimate the degree of visibility of each object of the PVS using synthesized coarse occluders, and to arrange the objects of each PVS into several Hardly‐Visible Sets (HVS) with similar occlusions degree.
Abstract: Occlusion culling and level-of-detail rendering have become two powerful tools for accelerating the handling of very large models in real-time visualization applications. We present a framework to combine both techniques that improves rendering times. Classical occlusion culling algorithms compute potentially visible sets (PVS), overestimations of the sets of visible polygons. The novelty of our approach is to estimate the degree of visibility of each object of the PVS using different level-of-detail for the occluders. This allows to arrange the objects of each PVS into several Hardly-Visible Sets (HVS) by similar occlusion percentage. According to image accuracy and frame ratio requirements, HVS provide a way to avoid sending to the graphics pipeline those objects whose pixel contribution is low due to partial occlusion. The image loss can be bounded by the user at navigation time. On the other hand, as HVS offers a tighter estimation of the pixel contribution for each scene object it can be used for a more convenient selection of the level-of-detail at which objects are rendered. In this paper, we describe the new framework technique, provide details of its implementation using a visibility octree as the chosen occlusion culling data structure and show some experimental results on the image quality.

Journal ArticleDOI
TL;DR: Variable resolution 4‐k meshes is introduced, a powerful structure for the representation of geometric objects at multiple levels of detail that combines most properties of other related descriptions with several advantages, such as more flexibility and greater expressive power.
Abstract: In this paper we introduce variable resolution 4-k meshes, a powerful structure for the representation of geometric objects at multiple levels of detail. It combines most properties of other related descriptions with several advantages, such as more flexibility and greater expressive power. The main unique feature of the 4-k mesh structure lies in its variable resolution capability, which is crucial for adaptive computation. We also give an overview of the different methods for constructing the 4-k mesh representation, as well as the basic algorithms necessary to incorporate it in modeling and graphics applications.

Journal ArticleDOI
TL;DR: This paper presents a high-quality MIP algorithm (trilinear interpolation within cells), which is up to 50 times faster than brute-force MIP and at least 20 times faster than comparable optimized techniques.
Abstract: Maximum Intensity Projection (MIP) is a volume rendering technique which is used to visualize high-intensity structures within volumetric data. At each pixel the highest data value, which is encountered along a corresponding viewing ray is depicted. MIP is, for example, commonly used to extract vascular structures from medical data sets (angiography). Due to lack of depth information in MIP images, animation or interactive variation of viewing parameters is frequently used for investigation. Up to now no MIP algorithms exist which are of both interactive speed and high quality. In this paper we present a high-quality MIP algorithm (trilinear interpolation within cells), which is up to 50 times faster than brute-force MIP and at least 20 times faster than comparable optimized techniques. This speed-up is accomplished by using an alternative storage scheme for volume cells (sorted by value) and by removing cells which do not contribute to any MIP projection (regardless of the viewing direction) in a preprocessing step. Also, a fast maximum estimation within cells is used to further speed up the algorithm.
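The projection itself is just a per-ray maximum; the brute-force baseline the paper accelerates fits in a few lines for axis-aligned rays. The value-sorted cell storage and cell culling that produce the reported speed-up are not shown:

```python
import numpy as np

def brute_force_mip(vol, axis=0):
    """Baseline MIP: each output pixel is the maximum data value
    encountered along its (axis-aligned) viewing ray."""
    return vol.max(axis=axis)

vol = np.zeros((4, 4, 4))
vol[2, 1, 3] = 7.0            # a single bright "vessel" voxel
img = brute_force_mip(vol, axis=0)
```

Every ray's cost is linear in the volume depth here, which is exactly why skipping low-valued cells, the dominant case in angiography data, pays off so strongly.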

Journal ArticleDOI
TL;DR: A city modeler has been designed, using this model of virtual urban environments using structures and information suitable for behavioural animations, and enables complex urban environments for behavioural animation to be automatically produced.
Abstract: In order to populate virtual cities, it is necessary to specify the behaviour of dynamic entities such as pedestrians or car drivers. Since a complete mental model based on vision and image processing cannot be constructed in real time using purely geometrical information, higher levels of information are needed in a model of the virtual environment. For example, the autonomous actors of a virtual world would exploit the knowledge of the environment topology to navigate through it. In this article, we present a model of virtual urban environments using structures and information suitable for behavioural animations. Thanks to this knowledge, autonomous virtual actors can behave like pedestrians or car drivers in a complex city environment. A city modeler has been designed, using this model of urban environment, and enables complex urban environments for behavioural animation to be automatically produced.

Journal ArticleDOI
TL;DR: An automated method for calculating consistent collision response at different levels of detail is presented, which works closely with a system which uses a pre‐computed hierarchical volume model for collision detection.
Abstract: Interactive simulation is made possible in many applications by simplifying or culling the finer details that would make real-time performance impossible. This paper examines detail simplification in the specific problem of collision handling for rigid body animation. We present an automated method for calculating consistent collision response at different levels of detail. The mechanism works closely with a system which uses a pre-computed hierarchical volume model for collision detection.

Journal ArticleDOI
TL;DR: This approach is attractive for remote rendering applications such as web‐based scientific visualization where a client system may be a relatively low‐performance machine and limited network bandwidth makes transmission of large 3D data impractical.
Abstract: Web-based remote rendering enables interactive 3D graphics visualization on the Web (Web3D). Web3D is gaining popularity and plays an important role in Scientific Visualization, E-business and Education. The additional expense of Web3D applications over streaming video is the transmission of 3D models and their rendering costs at the client side. As the 3D model size increases, both transmission and rendering costs increase. Providing interactive 3D graphics over a network with large 3D models is challenging. To increase the efficiency of Web3D, especially with large data sets, the developed method, Image-Based Rendering Acceleration and Compression (IBRAC), utilizes previously rendered and transmitted images for compression instead of transmitting 3D models to the client. The method adapts an image-based rendering acceleration method to the direct rendering of compressed images; IBRAC exploits spatial and temporal coherence between new and previously rendered images and utilizes them for compression. As a result, IBRAC requires only modest computing power and low bandwidth at the remote client. Without 3D models at the client, IBRAC can provide dynamic shading (changing lights and materials) by re-projecting the surface orientation (Normal Reprojection). Normal reprojection eliminates inconsistencies in shading that plague other image-based rendering methods. IBRAC also enables a remote user to interactively classify volume data without the 3D model at the client, using a new image structure, Isomap. The efficiency of IBRAC increases with data size since the rendering time is predominantly a function of image size. This approach is attractive for remote rendering applications where a client system may be a relatively low-performance machine with limited network bandwidth, including wireless or palm computing systems.
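The normal-reprojection idea can be shown in miniature: once per-pixel surface normals are available at the client, relighting is a local dot product per pixel with no server round trip. This is generic Lambertian shading with hypothetical parameter names, not IBRAC's actual pipeline:

```python
def reshade(normals, light, ambient=0.1):
    """Relight pixels from stored per-pixel unit normals and a unit
    light direction: diffuse term max(0, n.l) plus a flat ambient."""
    lx, ly, lz = light
    return [max(0.0, nx * lx + ny * ly + nz * lz) + ambient
            for nx, ny, nz in normals]

# one pixel facing the light, one perpendicular to it
out = reshade([(0.0, 0.0, 1.0), (1.0, 0.0, 0.0)], (0.0, 0.0, 1.0))
```

Because the normal is reprojected geometry rather than a guess from image colours, changing `light` produces consistent shading across frames, the inconsistency the abstract says plagues other image-based methods.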

Journal ArticleDOI
TL;DR: It is shown how tone reproduction can also be introduced into interactive radiosity viewers, where the tone reproduction continuously adjusts to the current view of the user.
Abstract: When a rendering algorithm has created a pixel array of radiance values the task of producing an image is not yet completed. In fact, to visualize the result the radiance values still have to be mapped to luminances, which can be reproduced by the display used. This step is performed with the help of tone reproduction operators. These tools have mainly been applied to still images, but of course they are just as necessary for walkthrough applications, in which several images are created per second. In this paper we illuminate the physiological aspects of tone reproduction for interactive applications. It is shown how tone reproduction can also be introduced into interactive radiosity viewers, where the tone reproduction continuously adjusts to the current view of the user. The overall performance is decreased only moderately, still allowing walkthroughs of large scenes.
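As a concrete example of such a mapping, a simple global compressive curve takes unbounded scene luminance down to a bounded display range. This is a generic operator for illustration, not the one developed in the paper:

```python
def tone_map(luminances, l_max=100.0):
    """Map unbounded scene luminances to displayable values with the
    compressive curve L/(1+L), scaled to the display's maximum
    luminance l_max (in cd/m^2, a hypothetical display limit)."""
    return [l_max * l / (1.0 + l) for l in luminances]

out = tone_map([0.0, 1.0, 1e6])
```

The curve is monotone and never exceeds `l_max`, so arbitrarily bright radiosity solutions remain reproducible; the paper's contribution is making such a mapping adapt continuously to the viewer's current view at interactive rates.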

Journal ArticleDOI
TL;DR: A novel system that allows direct manipulation and interactive sculpting of PDE surfaces at arbitrary location, hence supporting various interactive techniques beyond the conventional boundary control and demonstrating many attractive advantages of the dynamic PDE formulation such as intuitive control, real‐time feedback, and usability to the general public.
Abstract: This paper presents an integrated approach and a unified algorithm that combine the benefits of PDE surfaces and powerful physics-based modeling techniques within one single modeling framework, in order to realize the full potential of PDE surfaces. We have developed a novel system that allows direct manipulation and interactive sculpting of PDE surfaces at arbitrary location, hence supporting various interactive techniques beyond the conventional boundary control. Our prototype software affords users to interactively modify point, normal, curvature, and arbitrary region of PDE surfaces in a predictable way. We employ several simple, yet effective numerical techniques including the finite-difference discretization of the PDE surface, the multigrid-like subdivision on the PDE surface, the mass-spring approximation of the elastic PDE surface, etc. to achieve real-time performance. In addition, our dynamic PDE surfaces can also be approximated using standard bivariate B-spline finite elements, which can subsequently be sculpted and deformed directly in real-time subject to intrinsic PDE constraints. Our experiments demonstrate many attractive advantages of our dynamic PDE formulation such as intuitive control, real-time feedback, and usability to the general public.

Journal ArticleDOI
TL;DR: This paper proposes an alternative solution for the visualization of unsteady flow fields based on the computation of temporal series of correlated images that gives full control on the image density so that it is able to produce smooth animations of arbitrary density.
Abstract: In recent years the work on vector field visualization has been concentrated on LIC-based methods. In this paper we propose an alternative solution for the visualization of unsteady flow fields. Our approach is based on the computation of temporal series of correlated images. While other methods are based on pathlines and try to correlate successive images at the pixel level, our approach consists in correlating instantaneous visualizations of the vector field at the streamline level. For each frame a feed forward algorithm computes a set of evenly-spaced streamlines as a function of the streamlines generated for the previous frame. This is achieved by establishing a correspondence between streamlines at successive time steps. A cyclical texture is mapped onto every streamline and textures of corresponding streamlines at different time steps are correlated together so that, during the animation, they move along the streamlines, giving the illusion that the flow is moving in the direction defined by the streamline. Our method gives full control on the image density so that we are able to produce smooth animations of arbitrary density, covering the field of representations from sparse, that is classical streamline-based images, to dense, that is texture-like images.
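Each frame's instantaneous visualization rests on ordinary streamline integration. A minimal forward-Euler sketch is below; the even-spacing seeding and the frame-to-frame texture correlation, which are the paper's actual contribution, are omitted:

```python
def trace_streamline(field, seed, step=0.1, n=50):
    """Integrate one streamline of a steady 2D vector field
    field(x, y) -> (u, v) with n forward-Euler steps."""
    x, y = seed
    pts = [(x, y)]
    for _ in range(n):
        u, v = field(x, y)
        x, y = x + step * u, y + step * v
        pts.append((x, y))
    return pts

# uniform rightward flow: the streamline is a horizontal segment
pts = trace_streamline(lambda x, y: (1.0, 0.0), (0.0, 0.0))
```

In practice a higher-order integrator (e.g. Runge-Kutta) replaces the Euler step; the animated effect then comes from sliding a cyclical texture along each such polyline between frames.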

Journal ArticleDOI
TL;DR: This paper introduces a simple and efficient scheme for encoding the connectivity and the stripification of a triangle mesh in an interwoven fashion, that exploits the correlation existing between the two.
Abstract: In this paper we introduce a simple and efficient scheme for encoding the connectivity and the stripification of a triangle mesh. Since generating a good set of triangle strips is a hard problem, it is desirable to do this just once and store the computed strips with the triangle mesh. However, no previously reported mesh encoding scheme is designed to include triangle strip information into the compressed representation. Our algorithm encodes the stripification and the connectivity in an interwoven fashion, that exploits the correlation existing between the two.
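What a stored stripification buys the decoder is this trivial expansion from strip to triangles. The sequential strip convention below (each new vertex forms a triangle with the previous two, with alternating winding) is the standard one, not necessarily the paper's encoding:

```python
def strip_to_triangles(strip):
    """Expand a triangle strip into individual triangles, swapping
    every other triangle's first two indices so all faces keep a
    consistent winding order."""
    tris = []
    for i in range(len(strip) - 2):
        a, b, c = strip[i], strip[i + 1], strip[i + 2]
        tris.append((a, b, c) if i % 2 == 0 else (b, a, c))
    return tris

tris = strip_to_triangles([0, 1, 2, 3, 4])
```

A strip of n vertices yields n - 2 triangles while naming most vertices once, which is why recovering strips at load time, rather than re-stripping, matters for rendering throughput.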

Journal ArticleDOI
Bob Zeleznik, Loring Holden, Michael Capps, Howard Abrams, Tim Miller
TL;DR: The fundamental contribution of SGAB is that both the local and remote applications can be completely unaware of each other; that is, both applications can interoperate without code or binary modification despite each having no knowledge of networking or interoperability.
Abstract: We describe the Scene-Graph-As-Bus technique (SGAB), the first step in a staircase of solutions for sharing software components for virtual environments. The goals of SGAB are to allow, with minimal effort, independently-designed applications to share component functionality, and for multiple users to share applications designed for single users. This paper reports on the SGAB design for transparently conjoining different applications by unifying the state information contained in their scene graphs. SGAB monitors and maps changes in the local scene graph of one application to a neutral scene graph representation (NSG), distributes the NSG changes over the network to remote peer applications, and then maps the NSG changes to the local scene graph of the remote application. The fundamental contribution of SGAB is that both the local and remote applications can be completely unaware of each other; that is, both applications can interoperate without code or binary modification despite each having no knowledge of networking or interoperability.

Journal ArticleDOI
TL;DR: SoccerMan is a reconstruction system designed to generate animated, virtual 3D views from two synchronous video sequences of a short part of a given soccer game, and to model players by texture objects.
Abstract: In this paper we present SoccerMan, a reconstruction system designed to generate animated, virtual 3D views from two synchronous video sequences of a short part of a given soccer game. After the reconstruction process, which needs also some manual interaction, the virtual 3D scene can be examined and ‘replayed’ from any viewpoint. Players are modeled as so-called animated texture objects, i.e. 2D player shapes are extracted from video and texture-mapped onto rectangles in 3D space. Animated texture objects have proven very appropriate as a 3D representation of soccer players in motion, as the visual nature of the original human motion is preserved. The trajectories of the players and the ball in 3D space are reconstructed accurately. In order to create a 3D reconstruction of a given soccer scene, the following steps have to be executed: 1) Camera parameters of all frames of both sequences are computed (camera calibration). 2) The playground texture is extracted from the video sequences. 3) Trajectories of the ball and the players' heads are computed after manually specifying their image positions in a few key frames. 4) Player textures are extracted automatically from video. 5) The shapes of colliding or occluding players are separated automatically. 6) For visualization, player shapes are texture-mapped onto appropriately placed rectangles in virtual space. SoccerMan is a novel experimental sports analysis system with fairly ambitious objectives. Its design decisions, in particular to start from two synchronous video sequences and to model players by texture objects, have already proven promising.

Journal ArticleDOI
TL;DR: In this article, an interactive system for the generation of high quality triangle meshes that allows us to handle hybrid geometry (point clouds, polygons,.. ) as input data is presented.
Abstract: We present an interactive system for the generation of high quality triangle meshes that allows us to handle hybrid geometry (point clouds, polygons, . . . ) as input data. In order to be able to robustly process huge data sets, we exploit graphics hardware features like the raster manager and the z-buffer for specific sub-tasks in the overall procedure. By this we significantly accelerate the stitching of mesh patches and obtain an algorithm for subsampling the data points in linear time. The target resolution and the triangle alignment in sub-regions of the resulting mesh can be controlled by adjusting the screen resolution and viewing transformation. An intuitive user interface provides a flexible tool for application dependent optimization of the mesh.
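The linear-time subsampling mentioned in the abstract exploits a raster buffer: every point falls into exactly one screen cell, and per cell only the point nearest the viewer survives, exactly as a z-buffer keeps the nearest fragment. The sketch below imitates that idea in software with a simple orthographic projection along z; the details are illustrative, not the paper's hardware-accelerated procedure.

```python
def zbuffer_subsample(points, grid_res):
    """Keep at most one point per raster cell: project (x, y) onto a
    grid_res x grid_res grid and retain the point with the smallest z
    (closest to the viewer) in each cell. Runs in O(n) over the points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    minx, maxx = min(xs), max(xs)
    miny, maxy = min(ys), max(ys)
    buf = {}  # (i, j) -> (depth, point), mimicking a z-buffer entry
    for p in points:
        i = min(int((p[0] - minx) / (maxx - minx + 1e-12) * grid_res), grid_res - 1)
        j = min(int((p[1] - miny) / (maxy - miny + 1e-12) * grid_res), grid_res - 1)
        if (i, j) not in buf or p[2] < buf[(i, j)][0]:
            buf[(i, j)] = (p[2], p)
    return [p for _, p in buf.values()]


dense = [(0.0, 0.0, 5.0), (0.01, 0.01, 2.0), (1.0, 1.0, 7.0)]
sparse = zbuffer_subsample(dense, 2)  # the two near-duplicates collapse to one
```

This also explains why the abstract says target resolution is controlled by screen resolution and viewing transformation: `grid_res` and the projection directly determine how aggressively the cloud is thinned.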

Journal ArticleDOI
TL;DR: This paper describes a geometric acoustic modeling algorithm that uses a priority queue to trace polyhedral beams representing reverberation paths in best‐first order up to some termination criteria (e.g., expired time‐slice), and studies the trade‐offs of the priority‐driven beam tracing algorithm with different priority functions.
Abstract: Geometric acoustic modeling systems spatialize sounds according to reverberation paths from a sound source to a receiver to give an auditory impression of a virtual 3D environment. These systems are useful for concert hall design, teleconferencing, training and simulation, and interactive virtual environments. In many cases, such as in an interactive walkthrough program, the reverberation paths must be updated within strict timing constraints, e.g., as the sound receiver moves under interactive control by a user. In this paper, we describe a geometric acoustic modeling algorithm that uses a priority queue to trace polyhedral beams representing reverberation paths in best-first order up to some termination criteria (e.g., expired time-slice). The advantage of this algorithm is that it is more likely to find the highest priority reverberation paths within a fixed time-slice, avoiding many geometric computations for lower-priority beams. Yet, there is overhead in computing priorities and managing the priority queue. The focus of this paper is to study the trade-offs of the priority-driven beam tracing algorithm with different priority functions. During experiments computing reverberation paths between a source and a receiver in a 3D building environment, we find that priority functions incorporating more accurate estimates of source-to-receiver path length are more likely to find early reverberation paths useful for spatialization, especially in situations where the source and receiver cannot reach each other through trivial reverberation paths. However, when receivers are added to the environment such that it becomes more densely and evenly populated, this advantage diminishes.
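The best-first control structure described above can be sketched with a standard binary heap. This is only a skeleton under simplifying assumptions: beams are opaque tokens, `neighbors` stands in for the geometric step of spawning child beams through reflecting surfaces, and `priority` stands in for the paper's estimates of source-to-receiver path length (lower is better).

```python
import heapq


def priority_beam_trace(start, neighbors, priority, time_slice):
    """Trace 'beams' in best-first order: always expand the beam with the
    lowest priority value next, and stop when the budget (here a count of
    traced beams, standing in for an expired time-slice) runs out."""
    queue = [(priority(start), 0, start)]
    counter = 1  # tie-breaker so heapq never compares beam objects directly
    traced = []
    while queue and len(traced) < time_slice:
        _, _, beam = heapq.heappop(queue)
        traced.append(beam)
        for nxt in neighbors(beam):  # geometric beam spawning elided
            heapq.heappush(queue, (priority(nxt), counter, nxt))
            counter += 1
    return traced  # the highest-priority beams reached within the budget


# toy beam tree: S spawns A and B, A spawns C
graph = {"S": ["A", "B"], "A": ["C"], "B": [], "C": []}
prio = {"S": 5, "A": 1, "B": 3, "C": 10}
order = priority_beam_trace("S", lambda b: graph[b], lambda b: prio[b], 3)
```

In the toy run, A (priority 1) and B (priority 3) are expanded before C (priority 10), so when the time-slice expires the expensive low-priority beam was never traced; that deferred work is exactly the saving, and the heap operations are the overhead the paper weighs against it.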

Journal ArticleDOI
TL;DR: This paper presents a realization of tessellation-on-the-fly for Loop subdivision surfaces as part of a framework for interactive visualization.
Abstract: Subdivision surfaces have become a standard technique for freeform shape modeling. They are intuitive to use and permit designers to flexibly add detail. But with larger control meshes, efficient adaptive rendering techniques are indispensable for interactive visualization and shape modeling. In this paper, we present a realization of tessellation-on-the-fly for Loop subdivision surfaces as part of a framework for interactive visualization.

Journal ArticleDOI
TL;DR: A new technique to render in real time objects which have part of their high frequency geometric detail encoded in bump maps, based on the quantization of normal‐maps, and achieves excellent result both in rendering time and rendering quality, with respect to other alternative methods.
Abstract: We present a new technique to render in real time objects which have part of their high frequency geometric detail encoded in bump maps. It is based on the quantization of normal-maps, and achieves excellent results both in rendering time and rendering quality, with respect to other alternative methods. The proposed method also allows us to add many interesting visual effects, even for objects with large bump maps, including non-photorealistic rendering, chrome effects, shading under multiple lights, rendering of different materials within a single object, specular reflections and others. Moreover, the implementation of the method is not complex and can be eased by software reuse.
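The core idea of normal-map quantization can be sketched as follows: each normal is reduced to a small palette index, and per-frame shading becomes a table lookup indexed by that palette rather than a per-pixel lighting computation. The angle-based quantizer and the diffuse-only table below are illustrative simplifications, not the paper's actual scheme.

```python
import math

LEVELS = 16  # bins per spherical angle; palette size is LEVELS * LEVELS


def quantize_normal(n, levels=LEVELS):
    """Map a unit normal to a palette index by discretizing its polar
    angle theta and azimuth phi into `levels` bins each."""
    x, y, z = n
    theta = math.acos(max(-1.0, min(1.0, z)))     # polar angle in [0, pi]
    phi = math.atan2(y, x) % (2.0 * math.pi)      # azimuth in [0, 2*pi)
    ti = min(int(theta / math.pi * levels), levels - 1)
    pj = min(int(phi / (2.0 * math.pi) * levels), levels - 1)
    return ti * levels + pj


def build_diffuse_table(light, levels=LEVELS):
    """Per-frame lookup table: diffuse intensity for the representative
    normal at the center of each palette bin. Shading a pixel is then just
    table[index], however large the bump map is."""
    table = []
    for idx in range(levels * levels):
        ti, pj = divmod(idx, levels)
        theta = (ti + 0.5) / levels * math.pi
        phi = (pj + 0.5) / levels * 2.0 * math.pi
        n = (math.sin(theta) * math.cos(phi),
             math.sin(theta) * math.sin(phi),
             math.cos(theta))
        table.append(max(0.0, sum(a * b for a, b in zip(n, light))))
    return table


table = build_diffuse_table((0.0, 0.0, 1.0))
intensity = table[quantize_normal((0.0, 0.0, 1.0))]  # close to fully lit
```

Effects such as multiple lights or chrome then amount to building different per-frame tables over the same small palette, which is why the abstract can list so many variants at little extra cost.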

Journal ArticleDOI
TL;DR: The paper describes the process of building Internet‐transmittable, 3‐D digital virtual models of ancient heritage monuments from on‐site data, focusing especially on3‐D dimensional data acquisition techniques and color processing methods.
Abstract: The paper describes the process of building Internet-transmittable, 3-D digital virtual models of ancient heritage monuments from on-site data, focusing especially on 3-D data acquisition techniques and color processing methods. Section 1 considers project goals and the attendant problems; Section 2 provides a brief summary of state-of-the-art experience and the technologies adopted by the authors; Section 3 illustrates the key features of the 3-D color data acquisition methods used as well as the shape and color processing pipeline; Section 4 describes the specific study conducted on single elements and façades of the Coliseum in Rome, while Section 6 outlines future work.