
Showing papers in "Graphical Models / Graphical Models and Image Processing / Computer Vision, Graphics, and Image Processing" in 2000


Journal ArticleDOI
TL;DR: A combination of analytical and numerical methods for solving generalized inverse kinematics problems, including position, orientation, and aiming constraints, suitable for an anthropomorphic arm or leg.
Abstract: In this paper we develop a set of inverse kinematics algorithms suitable for an anthropomorphic arm or leg. We use a combination of analytical and numerical methods to solve generalized inverse kinematics problems including position, orientation, and aiming constraints. Our combination of analytical and numerical methods results in faster and more reliable algorithms than conventional inverse Jacobian and optimization-based techniques. Additionally, unlike conventional numerical algorithms, our methods allow the user to interactively explore all possible solutions using an intuitive set of parameters that define the redundancy of the system.

655 citations
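The analytic component of such a hybrid solver can be illustrated on a planar two-link arm, where the elbow angle follows from the law of cosines and an elbow-up/elbow-down flag exposes the solution redundancy the abstract mentions. This is a minimal sketch under those assumptions, not the paper's full arm/leg algorithm; all names are illustrative:

```python
import math

def two_link_ik(x, y, l1, l2, elbow_up=True):
    """Analytic inverse kinematics for a planar 2-link arm.

    Returns joint angles (theta1, theta2) placing the end effector at
    (x, y), or None if the target is out of reach.  The elbow_up flag
    selects between the two mirror-image solutions, making the
    redundancy of the system explicit.
    """
    d2 = x * x + y * y
    # Law of cosines gives the cosine of the elbow angle.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if c2 < -1.0 or c2 > 1.0:
        return None  # target unreachable
    s2 = math.sqrt(1.0 - c2 * c2)
    if not elbow_up:
        s2 = -s2
    theta2 = math.atan2(s2, c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * s2, l1 + l2 * c2)
    return theta1, theta2

def forward(theta1, theta2, l1, l2):
    """Forward kinematics, used here to verify the IK solution."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

Closed forms like this are what make hybrid solvers faster than pure inverse-Jacobian iteration: the analytic part is exact and costs only a handful of trigonometric calls.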


Journal ArticleDOI
TL;DR: A geometric interpretation of the injectivity of a uniform cubic B-spline function is proposed, with which 2D and 3D cases can be handled in a similar way, and sufficient conditions for injectivity, represented in terms of control point displacements, are presented.
Abstract: Uniform cubic B-spline functions have been used for mapping functions in various areas such as image warping and morphing, 3D deformation, and volume morphing. The injectivity (one-to-one property) of a mapping function is crucial to obtaining desirable results in these areas. This paper considers the injectivity conditions of 2D and 3D uniform cubic B-spline functions. We propose a geometric interpretation of the injectivity of a uniform cubic B-spline function, with which 2D and 3D cases can be handled in a similar way. Based on our geometric interpretation, we present sufficient conditions for injectivity which are represented in terms of control point displacements. These sufficient conditions can be easily tested and will be useful in guaranteeing the injectivity of mapping functions in application areas.

116 citations


Journal ArticleDOI
TL;DR: This paper addresses the problem of scanning both the color and geometry of real objects and displaying realistic images of the scanned objects from arbitrary viewpoints, using a complete system based on a stereo camera setup with active lighting to scan the object surface geometry and color.
Abstract: This paper addresses the problem of scanning both the color and geometry of real objects and displaying realistic images of the scanned objects from arbitrary viewpoints. We describe a complete system that uses a stereo camera setup with active lighting to scan the object surface geometry and color. Scans expressed in sensor coordinates are registered into a single object-centered coordinate system by aligning both the color and geometry where the scans overlap. The range data are integrated into a surface model using a robust hierarchical space carving method. The fit of the resulting approximate mesh to data is improved and the mesh structure is simplified using mesh optimization methods. In addition, a method for view-dependent texturing of the reconstructed surfaces is described. The method projects the color data from the input images onto the surface model and blends the various images depending on the location of the viewpoint and other factors such as surface orientation.

99 citations


Journal ArticleDOI
TL;DR: It is shown that a digital boundary can be transformed directly into a triangulated iso-surface in the three-dimensional case, and significant digital boundary properties are derived from its continuous analog using the Jordan–Brouwer separation theorem.
Abstract: The definition and extraction of objects and their boundaries within an image are essential in many imaging applications. Classically, two approaches are followed. The first considers the image as a sample of a continuous scalar field: boundaries are implicit surfaces in this field; they are often called iso-surfaces. The second considers the image as a digital space with adjacency relations and classifies elements of this space as inside or outside: boundaries are pairs composed of one inside element and one outside element; they are called digital boundaries. In this paper, we show that these two approaches are closely related. This statement holds for arbitrary dimensions. To do so, we propose a local method to construct a continuous analog of a digital boundary. The continuous analog is designed to satisfy properties in the Euclidean space that are similar to the properties of its counterpart in the digital space (e.g., connectedness, closedness, separation). It appears that this continuous analog is indeed a piecewise linear approximation of an iso-(hyper)surface (i.e., a triangulated iso-surface in the three-dimensional case). Furthermore, we derive significant digital boundary properties from its continuous analog using the Jordan–Brouwer separation theorem: new Jordan pairs, new adjacencies between boundary elements, new Jordan triples. We conclude this paper by illustrating the 3D case more precisely. In particular, we show that a digital boundary can be transformed directly into a triangulated iso-surface. The implementation of this transformation and its efficiency are discussed with a comparison with the classical marching-cubes algorithm.

92 citations


Journal ArticleDOI
TL;DR: The main idea is to employ discrete distance fields enhanced with correspondence information, which allow us not only to connect vertices from successive slices in a reasonable way but also to solve the branching problem by creating intermediate contours where adjacent contours differ too much.
Abstract: In this paper we consider the problem of reconstructing triangular surfaces from given contours. An algorithm solving this problem must decide which contours of two successive slices should be connected by the surface (branching problem) and, given that, which vertices of the assigned contours should be connected for the triangular mesh (correspondence problem). We present a new approach that solves both tasks in an elegant way. The main idea is to employ discrete distance fields enhanced with correspondence information. This allows us not only to connect vertices from successive slices in a reasonable way but also to solve the branching problem by creating intermediate contours where adjacent contours differ too much. Last but not least we show how the 2D distance fields used in the reconstruction step can be converted to a 3D distance field that can be advantageously exploited for distance calculations during a subsequent simplification step.

54 citations
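A minimal sketch of the distance-field idea, assuming a binary pixel grid and the chessboard (8-connected) metric: a multi-source BFS from the contour pixels yields both the distance and, as a by-product, the index of the nearest contour vertex — the correspondence information the method relies on. The metric and interface are illustrative, not the paper's exact formulation:

```python
from collections import deque

def distance_field(width, height, contour):
    """Chessboard distance field of a contour on a binary grid.

    Multi-source BFS from all contour pixels: dist[y][x] is the number
    of 8-connected steps to the nearest contour pixel, and src[y][x]
    records which contour vertex that is -- the correspondence
    information used to match vertices across slices.
    """
    INF = float("inf")
    dist = [[INF] * width for _ in range(height)]
    src = [[-1] * width for _ in range(height)]
    queue = deque()
    for idx, (x, y) in enumerate(contour):
        dist[y][x] = 0
        src[y][x] = idx
        queue.append((x, y))
    while queue:
        x, y = queue.popleft()
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height and dist[ny][nx] == INF:
                    dist[ny][nx] = dist[y][x] + 1
                    src[ny][nx] = src[y][x]
                    queue.append((nx, ny))
    return dist, src
```

Intermediate contours, in this picture, are level sets of such fields interpolated between slices; the `src` grid tells each new vertex which original vertex it corresponds to.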


Journal ArticleDOI
TL;DR: Geometric considerations help to determine several distinguished curve and surface pairs which possess elementary computable bisectors; emphasis is on low-degree rational curves and surfaces, since they are of particular interest in surface modeling.
Abstract: This paper studies algebraic and geometric properties of curve–curve, curve–surface, and surface–surface bisectors. The computation is in general difficult since the bisector is determined by solving a system of nonlinear equations. Geometric considerations will help us to determine several distinguished curve and surface pairs which possess elementary computable bisectors. Emphasis is on low-degree rational curves and surfaces, since they are of particular interest in surface modeling.

52 citations


Journal ArticleDOI
TL;DR: A group of methods for decomposing an arbitrary 3D volume rotation into a sequence of simple shear operations, suitable for implementation on multipipelined hardware or a massively parallel machine.
Abstract: We present a group of methods for decomposing an arbitrary 3D volume rotation into a sequence of simple shear (i.e., regular shift) operations. We explore different types of shear operations: 2D beam shear, a shear in one coordinate based on the other two coordinates; 2D slice shear, a shear of a volume slice (in two coordinates) according to the third coordinate; and 2D slice–beam shear, the combination of a beam shear and a slice shear. We show that an arbitrary 3D rotation can be decomposed into four 2D beam shears. We use this decomposition as a basis to derive decompositions of a 3D rotation into four 2D slice shears or three 2D slice–beam shears. Moreover, we observe that two consecutive slice shears can be achieved by shifting beams in 3D space, a transformation we call a 3D beam shear. Therefore, an arbitrary 3D rotation can be decomposed into only two 3D beam shears. Because of the regularity and simplicity of the shear operation, these decompositions are suitable for implementation on multipipelined hardware or a massively parallel machine. In addition, we present a resampling scheme in which only a single-pass resampling is required for performing multiple-pass shears to achieve the 3D volume rotation.

48 citations
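The 2D analogue of this idea is the classical three-shear rotation (often attributed to Paeth), in which a rotation by θ factors into an x-shear by −tan(θ/2), a y-shear by sin θ, and the same x-shear again; each pass is a pure shift along one axis. A sketch as an intuition aid, not the paper's 3D beam-shear construction:

```python
import math

def rotate_by_shears(x, y, theta):
    """Rotate (x, y) counterclockwise by theta using three 1D shears
    (Paeth's decomposition): each pass only shifts points along one
    axis, which is what makes shear-based resampling hardware-friendly.
    """
    a = -math.tan(theta / 2.0)
    b = math.sin(theta)
    x = x + a * y      # shear along x
    y = y + b * x      # shear along y
    x = x + a * y      # shear along x again
    return x, y
```

Algebraically, the product of the three shear matrices is exactly the rotation matrix [[cos θ, −sin θ], [sin θ, cos θ]]; the paper's contribution is lifting this kind of factorization to 3D with beam and slice shears.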


Journal ArticleDOI
TL;DR: This work embeds a volume density model into a generalized cylinder, which is used to specify the envelope shape of a hair cluster, so that the design and manipulation of hair can be globally and efficiently performed.
Abstract: Modeling and rendering human hair is a very challenging problem in computer graphics. The difficulty comes mainly from the amount of hair to be modeled and the fine shapes of the individual hairs. In this paper, a new cluster hair model is introduced. The new model allows the design and styling of hair to be efficiently performed at the abstract cluster level. The key idea of this work is to embed a volume density model into a generalized cylinder, which is used to specify the envelope shape of a hair cluster, so that the design and manipulation of hair can be globally and efficiently performed. The detail of the hair is then modeled by a volume density model in which a randomly generated density map on the hair base is projected into and deformed along the generalized cylinder. The main advantages of the new model are: (1) it provides a very compact and efficient representation for complex hair styles; (2) it improves significantly the feasibility of and capability for interactive hair modeling and styling; (3) it produces high quality hair images; and (4) it provides a multiresolution model for adaptive rendering. It also has the potential to support efficient dynamic simulation at the cluster level.

47 citations


Journal ArticleDOI
TL;DR: Given two connected subsets Y ⊆ X of the set of the surfels of a connected digital surface, this work proposes three equivalent ways to express Y being homotopic to X; one characterization, based on sequential deletion of simple surfels, enables thinning algorithms to be defined within a digital Jordan surface, and another is based on the Euler characteristics of sets of surfels.
Abstract: Given two connected subsets Y ⊆ X of the set of the surfels of a connected digital surface, we propose three equivalent ways to express Y being homotopic to X. The first characterization is based on sequential deletion of simple surfels. This characterization enables us to define thinning algorithms within a digital Jordan surface. The second characterization is based on the Euler characteristics of sets of surfels. This characterization enables us, given two connected sets Y ⊆ X of surfels, to decide whether Y is n-homotopic to X. The third characterization is based on the (digital) fundamental group.

34 citations


Journal ArticleDOI
TL;DR: This paper proposes an octree inflating and deflating strategy to preserve the octree structure as much as possible and to avoid useless or redundant computations.
Abstract: This paper describes an incremental polygonization technique for implicit surfaces built from skeletal elements. Our method is dedicated to fast previewing in an interactive modeling system environment. We rely on an octree decomposition of space combined with Lipschitz conditions to recursively subdivide cells until a given level of precision is reached and converge to the implicit surface. We use a trilinear interpolation approximation of the field function to create a topologically consistent tessellation characterized by an adjacency graph. Our algorithm aims at updating the mesh locally in regions of space where changes in the potential field occurred. Therefore, we propose an octree inflating and deflating strategy to preserve the octree structure as much as possible and to avoid useless or redundant computations. Timings show that our incremental algorithm dramatically speeds up the overall polygonization process for complex objects.

31 citations
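The Lipschitz condition mentioned above gives a conservative cell-pruning test: if the field value at a cell's center exceeds the maximum variation the field can undergo across the cell (Lipschitz constant times the half-diagonal), the cell cannot cross the zero level set and need not be subdivided. A hedged sketch with an illustrative interface:

```python
import math

def cell_may_contain_surface(f, lipschitz, center, half_size):
    """Conservative test used in Lipschitz-guided octree subdivision.

    For a cubic cell of half-edge `half_size` centered at `center`, the
    field f (with Lipschitz constant `lipschitz`) can change by at most
    lipschitz * half_diagonal anywhere in the cell.  If |f(center)|
    exceeds that bound, the implicit surface f = 0 cannot intersect the
    cell, so the cell can be skipped instead of recursively subdivided.
    """
    cx, cy, cz = center
    half_diagonal = half_size * math.sqrt(3.0)
    return abs(f(cx, cy, cz)) <= lipschitz * half_diagonal
```

Cells that fail this test are pruned; cells that pass are subdivided until the target precision is reached, which is what keeps the recursive polygonization focused near the surface.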


Journal ArticleDOI
TL;DR: An efficient algorithm for computing 2D and 3D Legendre moments is presented, along with a new approach for computing Legendre polynomials of one variable, which improves computational efficiency significantly and can be implemented easily for high orders of moments.
Abstract: The two-dimensional (2D) and three-dimensional (3D) orthogonal moments are useful tools for 2D and 3D object recognition and image analysis. However, the problem of computing orthogonal moments has not been well solved because few algorithms can efficiently reduce the computational complexity. As is well known, the calculation of 2D and 3D orthogonal moments by a straightforward method requires a large number of additions and multiplications. In this paper, an efficient algorithm for computing 2D and 3D Legendre moments is presented. First, a new approach is developed for computing Legendre polynomials of one variable; the corresponding results are then used to calculate 1D Legendre moments. Second, we extend our method to the calculation of 2D Legendre moments; a more accurate approximation formula for the case in which an analog original image is digitized to its discrete form is also discussed, and the relationship between the usual approximation and the new approach is investigated. Finally, an efficient method for computing 3D Legendre moments is developed. As one can see, the proposed algorithm improves computational efficiency significantly and can be implemented easily for high orders of moments.
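The polynomial-evaluation step can be sketched with the standard three-term recurrence, which reuses lower-order values instead of re-expanding each polynomial; the discretization below is a plain midpoint rule, not necessarily the paper's improved approximation formula:

```python
def legendre_values(x, max_order):
    """Evaluate P_0(x) .. P_max_order(x) via the three-term recurrence
    (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x)."""
    p = [1.0, x]
    for n in range(1, max_order):
        p.append(((2 * n + 1) * x * p[n] - n * p[n - 1]) / (n + 1))
    return p[: max_order + 1]

def legendre_moments_1d(samples, max_order):
    """Approximate the 1D Legendre moments
    lambda_n = (2n+1)/2 * integral of P_n(x) f(x) over [-1, 1]
    for a uniformly sampled signal, using midpoint-rule quadrature."""
    N = len(samples)
    dx = 2.0 / N
    moments = [0.0] * (max_order + 1)
    for i, f in enumerate(samples):
        x = -1.0 + (i + 0.5) * dx          # pixel-center abscissa
        p = legendre_values(x, max_order)
        for n in range(max_order + 1):
            moments[n] += (2 * n + 1) / 2.0 * p[n] * f * dx
    return moments
```

The recurrence is what makes high orders cheap: each additional order costs a constant number of multiplications per sample, rather than evaluating an ever-longer explicit polynomial.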

Journal ArticleDOI
TL;DR: A multilevel adaptive finite difference solver is proposed, which generates a target surface minimizing an energy functional based on an internal energy of the surface and an outer energy induced by the gradient of the volume.
Abstract: In this paper we present a hierarchical approach to the deformable surface technique. This technique is a three-dimensional extension of the snake segmentation method. We use it in the context of visualizing three-dimensional scalar data sets. In contrast to classical indirect volume visualization methods, this reconstruction is not based on iso-values but on boundary information derived from discontinuities in the data. We propose a multilevel adaptive finite difference solver, which generates a target surface minimizing an energy functional based on an internal energy of the surface and an outer energy induced by the gradient of the volume. The method is attractive for preprocessing in numerical simulation or texture mapping. Red-green triangulation allows adaptive refinement of the mesh. Special considerations help to prevent self-interpenetration of the surfaces. We also show some techniques that introduce the hierarchical aspect into the inhomogeneity of the partial differential equation. The approach proves to be appropriate for data sets that contain a collection of objects separated by distinct boundaries. These kinds of data sets often occur in medical and technical tomography, as we demonstrate in a few examples.

Journal ArticleDOI
TL;DR: Fuzzy B-splines are suitable for representing and simplifying both crisp and imprecise surface data and support interrogation of the model at different presumption levels.
Abstract: In the context of surface modeling, fuzzy B-splines are proposed as an integrated approach to uncertainty coding and data reduction. Fuzzy B-splines are suitable for representing and simplifying both crisp and imprecise surface data and support interrogation of the model at different presumption levels. A high degree of compression can be achieved through a procedure that defines the most significant representative among spatially clustered points. Experimental results are shown to prove the effectiveness of the proposed approach.

Journal ArticleDOI
TL;DR: It can be shown that if each voxel P of S has only finitely many neighbors (voxels of S that intersect P), and if any nonempty intersection of neighbors of P intersects P, then the neighborhood N(P) of every voxel P is simply connected and without cavities.
Abstract: Classical digital geometry deals with sets of cubical voxels (or square pixels) that can share faces, edges, or vertices, but basic parts of digital geometry can be generalized to sets S of convex voxels (or pixels) that can have arbitrary intersections. In particular, it can be shown that if each voxel P of S has only finitely many neighbors (voxels of S that intersect P), and if any nonempty intersection of neighbors of P intersects P, then the neighborhood N(P) of every voxel P is simply connected and without cavities, and if the topology of N(P) does not change when P is deleted (i.e., P is a “simple” voxel), then deletion of P does not change the topology of S.

Journal ArticleDOI
TL;DR: A dynamic contour approach is applied to optimize the shape of the contour according to the recorded angiograms and the internal smoothness constraints; the solution is achieved by minimizing a nonconvex energy function assigned to the contour with a simulated annealing algorithm.
Abstract: 3D luminal vessel geometry description and visualization are important for the diagnosis and the prognosis of heart attack and stroke. A general mathematical framework is proposed for 3D reconstruction of vessel sections from a few angiograms. Regularization is introduced by modeling the vessel boundary slices by smooth contours to get the reconstruction problem well posed. A dynamic contour approach is applied to optimize the shape of the contour according to the recorded angiograms and the internal smoothness constraints. The solution is achieved following the minimization of a nonconvex energy function assigned to the contour with a simulated annealing algorithm. Preliminary testing on noisy and truncated synthetic images produces promising results. Evaluation and validation of the method on hardware phantoms are also presented.
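A generic simulated-annealing loop of the kind referred to, shown here minimizing a one-dimensional nonconvex energy; the paper's contour energy would replace the toy function, and all parameter values are illustrative:

```python
import math
import random

def anneal(energy, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000, seed=1):
    """Minimize a nonconvex scalar energy by simulated annealing.

    Random perturbations are always accepted when they lower the
    energy, and accepted with probability exp(-dE/T) otherwise; the
    temperature T decreases slowly, so early iterations can escape
    local minima while late iterations behave like greedy descent.
    """
    rng = random.Random(seed)
    x, e, t = x0, energy(x0), t0
    best_x, best_e = x, e
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        de = energy(cand) - e
        if de < 0 or rng.random() < math.exp(-de / t):
            x, e = cand, e + de
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling
    return best_x, best_e
```

The occasional uphill acceptances are the point: a purely greedy contour optimizer would get stuck in the nearest local minimum of the nonconvex energy, whereas annealing can cross barriers while the temperature is still high.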

Journal ArticleDOI
TL;DR: New techniques that accelerate splatting algorithms by exploiting both object-space and image-space coherence are presented, and two visibility test methods suitable for octree-based splatting are proposed.
Abstract: Splatting is an object-order volume rendering algorithm that produces images of high quality, and for which several optimization techniques have been proposed. This paper presents new techniques that accelerate splatting algorithms by exploiting both object-space and image-space coherence. In particular, we propose two visibility test methods suitable for octree-based splatting. The first method, based on dynamic image-space range trees, offers an accurate occlusion test and does not trade off image quality. The second method, based on image-space quadtrees, uses an approximate occlusion test that is faster than the first. Although the approximate visibility test may produce visual artifacts in rendering, the introduced error is rarely noticeable. Tests with several datasets of useful sizes and complexities showed considerable speedups with respect to the splatting algorithm enhanced with an octree alone. Considering that they are very easy to implement and need little additional memory, our techniques can serve as very effective splatting accelerations.

Journal ArticleDOI
TL;DR: This scheme attempts to resolve the potential texture memory problem by compressing 3D textures using a wavelet-based encoding method and will make it easy to implement practical 3D texture mapping in software/hardware rendering systems including real-time 3D graphics APIs such as OpenGL and Direct3D.
Abstract: While 2D texture mapping is one of the most effective of the rendering techniques that make 3D objects appear visually interesting, it often suffers from visual artifacts produced when 2D image patterns are wrapped onto the surfaces of objects with arbitrary shapes. On the other hand, 3D texture mapping generates highly natural visual effects in which objects appear carved from lumps of materials rather than laminated with thin sheets as in 2D texture mapping. Storing 3D texture images in a table for fast mapping computations, instead of evaluating procedures on the fly, however, has been considered impractical due to the extremely high memory requirement. In this paper, we present a new effective method for 3D texture mapping designed for real-time rendering of polygonal models. Our scheme attempts to resolve the potential texture memory problem by compressing 3D textures using a wavelet-based encoding method. The experimental results on various nontrivial 3D textures and polygonal models show that high compression rates are achieved with few visual artifacts in the rendered images and a small impact on rendering time. The simplicity of our compression-based scheme will make it easy to implement practical 3D texture mapping in software/hardware rendering systems including real-time 3D graphics APIs such as OpenGL and Direct3D.
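The encoding idea can be sketched with a 1D orthonormal Haar transform (a 3D texture would apply it along each axis in turn): transform, keep the largest-magnitude coefficients, and zero the rest. This is a generic wavelet-compression sketch under those assumptions, not the paper's specific codec:

```python
def haar_forward(data):
    """Full 1D orthonormal Haar transform (length must be a power of 2):
    each pass replaces adjacent pairs with scaled sums and differences."""
    data = list(data)
    n = len(data)
    s = 2 ** -0.5
    while n > 1:
        half = n // 2
        tmp = data[:n]
        for i in range(half):
            data[i] = (tmp[2 * i] + tmp[2 * i + 1]) * s        # average part
            data[half + i] = (tmp[2 * i] - tmp[2 * i + 1]) * s  # detail part
        n = half
    return data

def haar_inverse(coeffs):
    """Invert haar_forward exactly (the transform is orthonormal)."""
    data = list(coeffs)
    n = 1
    s = 2 ** -0.5
    while n < len(data):
        tmp = data[: 2 * n]
        for i in range(n):
            data[2 * i] = (tmp[i] + tmp[n + i]) * s
            data[2 * i + 1] = (tmp[i] - tmp[n + i]) * s
        n *= 2
    return data

def compress(data, keep):
    """Zero all but the `keep` largest-magnitude wavelet coefficients --
    the basic lossy step behind wavelet-based texture compression."""
    c = haar_forward(data)
    order = sorted(range(len(c)), key=lambda i: abs(c[i]), reverse=True)
    kept = set(order[:keep])
    return [c[i] if i in kept else 0.0 for i in range(len(c))]
```

Because smooth texture regions concentrate their energy in a few low-frequency coefficients, most detail coefficients can be dropped with little visible error, which is what makes table-based 3D texturing feasible in memory.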

Journal ArticleDOI
TL;DR: A necessary and sufficient condition for the existence of the variance is given, together with a heuristic to be used in practical cases; the optimal probabilities, which turn out to equal the reflectivities, are found for the case in which the whole scene is of interest.
Abstract: In this paper we study random walk estimators for radiosity with generalized absorption probabilities; that is, a path either dies or survives on a patch according to an arbitrary probability. The estimators studied so far, the infinite-path-length estimator and the finite-path-length one, can be considered as particular cases. Practical applications of random walks with generalized probabilities are given. A necessary and sufficient condition for the existence of the variance is given, together with a heuristic to be used in practical cases. The optimal probabilities are also found for the case in which we are interested in the whole scene; they turn out to equal the reflectivities.
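A generalized-absorption random walk can be sketched on a toy two-patch enclosure: the walk survives at patch i with an arbitrary probability, and unbiasedness is preserved by weighting with the ratio of reflectivity to survival probability. Setting the survival probabilities equal to the reflectivities, as the abstract indicates is optimal, keeps the weights at one. The scene setup and names are illustrative:

```python
import random

def radiosity_walk(E, rho, F, survive, start, rng):
    """One gathering random-walk estimate of the radiosity of patch
    `start`.  At each patch i the walk survives with the arbitrary
    ("generalized") probability survive[i]; multiplying the weight by
    rho[i] / survive[i] keeps the estimator unbiased."""
    i, w, score = start, 1.0, 0.0
    while True:
        score += w * E[i]                # gather emitted energy
        if rng.random() >= survive[i]:
            return score                 # absorbed: the path dies here
        w *= rho[i] / survive[i]
        u, j = rng.random(), 0           # sample next patch from row F[i]
        while j < len(F[i]) - 1 and u >= F[i][j]:
            u -= F[i][j]
            j += 1
        i = j

def estimate(E, rho, F, survive, start, n=100000, seed=7):
    """Average n independent walks."""
    rng = random.Random(seed)
    return sum(radiosity_walk(E, rho, F, survive, start, rng) for _ in range(n)) / n
```

For two facing patches with emissions E = [1, 0], reflectivities 0.5, and form factors F01 = F10 = 1, the radiosity system B0 = E0 + ρ0 B1, B1 = E1 + ρ1 B0 solves to B0 = 4/3, which the estimator approaches as n grows.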

Journal ArticleDOI
TL;DR: This paper introduces a notion of visibility curves, obtained by projection of silhouette and boundary curves and decomposition of the surface into nonoverlapping regions, and presents an algorithm for decomposing a given surface into regions so that each region is either completely visible or hidden from a given viewpoint.
Abstract: Computing the visible portions of curved surfaces from a given viewpoint is of great interest in many applications. It is closely related to the hidden surface removal problem in computer graphics and to machining applications in manufacturing. Most of the early work has focused on discrete methods based on polygonization or ray-tracing and hidden curve removal. In this paper we present an algorithm for decomposing a given surface into regions so that each region is either completely visible or hidden from a given viewpoint. Initially, it decomposes the domain of each surface based on silhouettes and boundary curves. To compute the exact visibility, we introduce a notion of visibility curves obtained by projection of silhouette and boundary curves and decomposition of the surface into nonoverlapping regions. These curves are computed using marching methods, and we present techniques to compute all the components. The nonoverlapping and visible portions of the surface are represented as trimmed surfaces, and we present a representation based on polygon trapezoidation algorithms. The algorithms presented use recently developed techniques from computational geometry such as the triangulation of simple polygons and point location. Given the nonoverlapping regions, we use an existing randomized algorithm for visibility computation. We also present results from a preliminary implementation of our algorithm.