
Showing papers in "IEEE Transactions on Pattern Analysis and Machine Intelligence in 1992"


Journal ArticleDOI
Paul J. Besl, H.D. McKay
TL;DR: In this paper, the authors describe a general-purpose representation-independent method for the accurate and computationally efficient registration of 3D shapes including free-form curves and surfaces, based on the iterative closest point (ICP) algorithm, which requires only a procedure to find the closest point on a geometric entity to a given point.
Abstract: The authors describe a general-purpose, representation-independent method for the accurate and computationally efficient registration of 3-D shapes including free-form curves and surfaces. The method handles the full six degrees of freedom and is based on the iterative closest point (ICP) algorithm, which requires only a procedure to find the closest point on a geometric entity to a given point. The ICP algorithm always converges monotonically to the nearest local minimum of a mean-square distance metric, and the rate of convergence is rapid during the first few iterations. Therefore, given an adequate set of initial rotations and translations for a particular class of objects with a certain level of 'shape complexity', one can globally minimize the mean-square distance metric over all six degrees of freedom by testing each initial registration. One important application of this method is to register sensed data from unfixtured rigid objects with an ideal geometric model, prior to shape inspection. Experimental results show the capabilities of the registration algorithm on point sets, curves, and surfaces.

17,598 citations
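The closest-point iteration described above can be sketched in a few lines of NumPy. This is an illustrative toy (brute-force matching, point sets only) with hypothetical function names, not the authors' implementation; practical systems use k-d trees and the paper's representation-specific closest-point procedures.

```python
import numpy as np

def best_rigid_transform(P, M):
    """Least-squares rotation R and translation t mapping P onto M
    (SVD-based solution, a standard way to solve the inner ICP step)."""
    mp, mm = P.mean(axis=0), M.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - mp).T @ (M - mm))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mm - R @ mp

def icp(P, Q, iters=30):
    """Repeatedly match each point of P to its closest point in Q and
    re-align; converges monotonically to a local minimum of the
    mean-square distance, as the paper proves."""
    for _ in range(iters):
        d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
        M = Q[d.argmin(axis=1)]       # brute-force closest points
        R, t = best_rigid_transform(P, M)
        P = P @ R.T + t
    return P
```

Given an adequate initial pose, the loop typically snaps the data points onto the model within a few iterations, matching the rapid early convergence the paper reports.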


Journal ArticleDOI
TL;DR: A camera model that accounts for major sources of camera distortion, namely, radial, decentering, and thin prism distortions is presented and a type of measure is introduced that can be used to directly evaluate the performance of calibration and compare calibrations among different systems.
Abstract: A camera model that accounts for major sources of camera distortion, namely, radial, decentering, and thin prism distortions is presented. The proposed calibration procedure consists of two steps: (1) the calibration parameters are estimated using a closed-form solution based on a distribution-free camera model; and (2) the parameters estimated in the first step are improved iteratively through a nonlinear optimization, taking into account camera distortions. According to minimum variance estimation, the objective function to be minimized is the mean-square discrepancy between the observed image points and their inferred image projections computed with the estimated calibration parameters. The authors introduce a type of measure that can be used to directly evaluate the performance of calibration and compare calibrations among different systems. The validity and performance of the calibration procedure are tested with both synthetic data and real images taken by tele- and wide-angle lenses.

1,896 citations
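The three distortion families the model accounts for are commonly parameterized as below. The coefficient names (k1, k2 radial; p1, p2 decentering; s1, s2 thin prism) follow the usual convention and are an assumption here, not necessarily the paper's exact notation.

```python
def distort(x, y, k1, k2, p1, p2, s1, s2):
    """Apply radial, decentering, and thin-prism distortion to ideal
    (undistorted) normalized image coordinates (x, y)."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2          # radial terms scale with r^2
    dx = p1 * (r2 + 2 * x * x) + 2 * p2 * x * y + s1 * r2   # decentering + prism
    dy = p2 * (r2 + 2 * y * y) + 2 * p1 * x * y + s2 * r2
    return x * radial + dx, y * radial + dy
```

In the paper's two-step procedure, step (2) would refine all six coefficients (together with the projective parameters) by nonlinear minimization of the reprojection discrepancy.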


Journal ArticleDOI
TL;DR: A comprehensive survey of thinning methodologies, including iterative deletion of pixels and nonpixel-based methods, is presented and the relationships among them are explored.
Abstract: A comprehensive survey of thinning methodologies is presented. A wide range of thinning algorithms, including iterative deletion of pixels and nonpixel-based methods, is covered. Skeletonization algorithms based on medial axis and other distance transforms are not considered. An overview of the iterative thinning process and the pixel-deletion criteria needed to preserve the connectivity of the image pattern is given first. Thinning algorithms are then considered in terms of these criteria and their modes of operation. Nonpixel-based methods that usually produce a center line of the pattern directly in one pass without examining all the individual pixels are discussed. The algorithms are considered in great detail and scope, and the relationships among them are explored.

1,827 citations
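As a concrete instance of the iterative pixel-deletion schemes the survey covers, here is a sketch of the classical Zhang-Suen two-subiteration algorithm (one of many criteria-based methods; not code from the survey). The input is a binary image with a one-pixel background border.

```python
import numpy as np

def zhang_suen(img):
    """Two-subiteration thinning: repeatedly delete border pixels whose
    8-neighborhood satisfies connectivity-preserving criteria."""
    img = img.astype(bool).copy()
    while True:
        changed = False
        for step in (0, 1):
            to_del = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if not img[y, x]:
                        continue
                    # neighbours P2..P9, clockwise starting from north
                    P = [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                         img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]
                    B = sum(P)                                    # object neighbours
                    A = sum((not P[i]) and P[(i + 1) % 8] for i in range(8))  # 0->1 transitions
                    if not (2 <= B <= 6 and A == 1):
                        continue                                  # would break connectivity
                    if step == 0 and not (P[0] and P[2] and P[4]) and not (P[2] and P[4] and P[6]):
                        to_del.append((y, x))
                    elif step == 1 and not (P[0] and P[2] and P[6]) and not (P[0] and P[4] and P[6]):
                        to_del.append((y, x))
            for y, x in to_del:                                   # delete in parallel
                img[y, x] = False
            changed = changed or bool(to_del)
        if not changed:
            return img
```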


Journal ArticleDOI
TL;DR: The authors describe a camera for performing single lens stereo analysis, which incorporates a single main lens along with a lenticular array placed at the sensor plane and extracts information about both horizontal and vertical parallax, which improves the reliability of the depth estimates.
Abstract: Ordinary cameras gather light across the area of their lens aperture, and the light striking a given subregion of the aperture is structured somewhat differently than the light striking an adjacent subregion. By analyzing this optical structure, one can infer the depths of the objects in the scene, i.e. one can achieve single lens stereo. The authors describe a camera for performing this analysis. It incorporates a single main lens along with a lenticular array placed at the sensor plane. The resulting plenoptic camera provides information about how the scene would look when viewed from a continuum of possible viewpoints bounded by the main lens aperture. Deriving depth information is simpler than in a binocular stereo system because the correspondence problem is minimized. The camera extracts information about both horizontal and vertical parallax, which improves the reliability of the depth estimates.

1,229 citations


Journal ArticleDOI
TL;DR: The authors examine prior smoothness constraints of a different form, which permit the recovery of discontinuities without introducing auxiliary variables for marking the location of jumps and suspending the constraints in their vicinity.
Abstract: The linear image restoration problem is to recover an original brightness distribution X/sup 0/ given the blurred and noisy observations Y=KX/sup 0/+B, where K and B represent the point spread function and measurement error, respectively. This problem is typical of ill-conditioned inverse problems that frequently arise in low-level computer vision. A conventional method to stabilize the problem is to introduce a priori constraints on X/sup 0/ and design a cost functional H(X) over images X, which is a weighted average of the prior constraints (regularization term) and posterior constraints (data term); the reconstruction is then the image X that minimizes H. A prominent weakness in this approach, especially with quadratic-type stabilizers, is the difficulty in recovering discontinuities. The authors therefore examine prior smoothness constraints of a different form, which permit the recovery of discontinuities without introducing auxiliary variables for marking the location of jumps and suspending the constraints in their vicinity. In this sense, discontinuities are addressed implicitly rather than explicitly.

1,205 citations
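A minimal sketch of the idea: gradient descent on a cost H(X) that combines a quadratic data term with a non-quadratic (here Lorentzian-type) smoothness penalty whose influence saturates at large jumps, so discontinuities are preserved implicitly. The 1-D blur kernel and all parameter values are demo assumptions, not the authors' choices.

```python
import numpy as np

def blur(X):
    """A simple symmetric horizontal blur standing in for K (demo assumption)."""
    return 0.6 * X + 0.2 * np.roll(X, 1, axis=1) + 0.2 * np.roll(X, -1, axis=1)

def restore(Y, lam=0.05, delta=0.1, n_iter=200, step=0.2):
    """Minimize H(X) = ||K X - Y||^2 + lam * sum phi(neighbour differences)
    by gradient descent, with phi'(t) = t / (1 + (t/delta)^2): large
    differences contribute little gradient, so edges are not smoothed away."""
    dphi = lambda t: t / (1.0 + (t / delta) ** 2)
    X = Y.copy()
    for _ in range(n_iter):
        grad = 2.0 * blur(blur(X) - Y)        # K is symmetric here, so K^T = K
        for ax, s in ((0, 1), (0, -1), (1, 1), (1, -1)):
            grad += lam * dphi(X - np.roll(X, s, axis=ax))
        X -= step * grad
    return X
```

With a quadratic penalty in place of phi, the same descent would round off every step edge; the saturating phi is what lets jumps survive.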


Journal ArticleDOI
TL;DR: A shape representation technique suitable for tasks that call for recognition of a noisy curve of arbitrary shape at an arbitrary scale or orientation is presented and several evolution and arc length evolution properties of planar curves are discussed.
Abstract: A shape representation technique suitable for tasks that call for recognition of a noisy curve of arbitrary shape at an arbitrary scale or orientation is presented. The method rests on describing a curve at varying levels of detail using features that are invariant with respect to transformations that do not change the shape of the curve. Three different ways of computing the representation are described. They result in three different representations: the curvature scale space image, the renormalized curvature scale space image, and the resampled curvature scale space image. The process of describing a curve at increasing levels of abstraction is referred to as the evolution or arc length evolution of that curve. Several evolution and arc length evolution properties of planar curves are discussed.

1,032 citations
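The basic quantity behind all three representations is the curvature of the curve after Gaussian smoothing at scale sigma. A small NumPy sketch for a closed curve, using circular convolution; the function name and discretization are illustrative, not the paper's.

```python
import numpy as np

def curvature(x, y, sigma):
    """Curvature of a closed planar curve sampled as (x[u], y[u]) after
    Gaussian smoothing at scale sigma (in samples)."""
    n = len(x)
    u = np.arange(n)
    d = np.minimum(u, n - u)                  # circular distance from sample 0
    g = np.exp(-d ** 2 / (2.0 * sigma ** 2))
    g /= g.sum()
    smooth = lambda v: np.real(np.fft.ifft(np.fft.fft(v) * np.fft.fft(g)))
    xs, ys = smooth(x), smooth(y)
    xu, yu = np.gradient(xs), np.gradient(ys)
    xuu, yuu = np.gradient(xu), np.gradient(yu)
    # This expression is parameterization-invariant, so uniform sample
    # spacing suffices; zero crossings traced over sigma give the CSS image.
    return (xu * yuu - yu * xuu) / (xu ** 2 + yu ** 2) ** 1.5
```

Stacking the zero crossings (or extrema) of this curvature over increasing sigma yields the curvature scale space image; the renormalized and resampled variants differ in how arc length is treated between scales.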


Journal ArticleDOI
TL;DR: The authors apply flexible constraints, in the form of a probabilistic deformable model, to the problem of segmenting natural 2-D objects whose diversity and irregularity of shape make them poorly represented in terms of fixed features or form.
Abstract: Segmentation using boundary finding is enhanced both by considering the boundary as a whole and by using model-based global shape information. The authors apply flexible constraints, in the form of a probabilistic deformable model, to the problem of segmenting natural 2-D objects whose diversity and irregularity of shape make them poorly represented in terms of fixed features or form. The parametric model is based on the elliptic Fourier decomposition of the boundary. Probability distributions on the parameters of the representation bias the model to a particular overall shape while allowing for deformations. Boundary finding is formulated as an optimization problem using a maximum a posteriori objective function. Results of the method applied to real and synthetic images are presented, including an evaluation of the dependence of the method on prior information and image quality. >

888 citations


Journal ArticleDOI
TL;DR: A theoretical framework for backpropagation (BP) is proposed; it is proven in particular that convergence holds if the classes are linearly separable, and experiments indicate that multilayered neural networks (MLNs) exceed perceptrons in generalization to new examples.
Abstract: The authors propose a theoretical framework for backpropagation (BP) in order to identify some of its limitations as a general learning procedure and the reasons for its success in several experiments on pattern recognition. The first important conclusion is that examples can be found in which BP gets stuck in local minima. A simple example in which BP can get stuck during gradient descent without having learned the entire training set is presented. This example guarantees the existence of a solution with null cost. Some conditions on the network architecture and the learning environment that ensure the convergence of the BP algorithm are proposed. It is proven in particular that the convergence holds if the classes are linearly separable. In this case, the experience gained in several experiments shows that multilayered neural networks (MLNs) exceed perceptrons in generalization to new examples.

659 citations
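For the linearly separable case singled out above, convergence to a null-cost solution can be illustrated with the classical perceptron rule. This is a single-layer stand-in and hypothetical demo code, not the paper's MLN experiments or its BP convergence proof.

```python
import numpy as np

def train_perceptron(X, y, n_epochs=100):
    """Perceptron learning on labels y in {-1, +1}. For linearly
    separable data the loop reaches zero training error (null cost)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(n_epochs):
        errors = 0
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:              # misclassified: nudge w
                w += yi * xi
                errors += 1
        if errors == 0:                          # converged to null cost
            break
    return w
```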


Journal ArticleDOI
TL;DR: The general problem of recognizing both horizontal and vertical road curvature parameters while driving along the road has been solved recursively and a differential geometry representation decoupled for the two curvature components has been selected.
Abstract: The general problem of recognizing both horizontal and vertical road curvature parameters while driving along the road has been solved recursively. A differential geometry representation decoupled for the two curvature components has been selected. Based on the planar solution of E.D. Dickmanns and A. Zapp (1986) and its refinements, a simple spatio-temporal model of the driving process makes it possible to take both spatial and temporal constraints into account effectively. The estimation process determines nine road and vehicle state parameters recursively at 25 Hz (40 ms) using four Intel 80286 and one 386 microprocessors. Results with the test vehicle (VaMoRs), which is a 5-ton van, are given for a hilly country road.

648 citations


Journal ArticleDOI
TL;DR: The approach uses two different types of primitives for matching: small surface patches, where differential properties can be reliably computed, and lines corresponding to depth or orientation discontinuities, which are represented by splashes and 3-D curves, respectively.
Abstract: The authors present an approach for the recognition of multiple 3-D object models from 3-D scene data. The approach uses two different types of primitives for matching: small surface patches, where differential properties can be reliably computed, and lines corresponding to depth or orientation discontinuities. These are represented by splashes and 3-D curves, respectively. It is shown how both of these primitives can be encoded by a set of super segments, consisting of connected linear segments. These super segments are entered into a table and provide the essential mechanism for fast retrieval and matching. The issues of robustness and stability of the features are addressed in detail. The acquisition of the 3-D models is performed automatically by computing splashes in highly structured areas of the objects and by using boundary and surface edges for the generation of 3-D curves. The authors present results with the current system (3-D object recognition based on super segments) and discuss further extensions.

577 citations


Journal ArticleDOI
TL;DR: A stochastic approach to the estimation of 2D motion vector fields from time-varying images is presented and the maximum a posteriori probability (MAP) estimation is incorporated into a hierarchical environment to deal efficiently with large displacements.
Abstract: A stochastic approach to the estimation of 2D motion vector fields from time-varying images is presented. The formulation involves the specification of a deterministic structural model along with stochastic observation and motion field models. Two motion models are proposed: a globally smooth model based on vector Markov random fields and a piecewise smooth model derived from coupled vector-binary Markov random fields. Two estimation criteria are studied. In the maximum a posteriori probability (MAP) estimation, the a posteriori probability of motion given data is maximized, whereas in the minimum expected cost (MEC) estimation, the expectation of a certain cost function is minimized. Both algorithms generate sample fields by means of stochastic relaxation implemented via the Gibbs sampler. Two versions are developed: one for a discrete state space and the other for a continuous state space. The MAP estimation is incorporated into a hierarchical environment to deal efficiently with large displacements.

Journal ArticleDOI
TL;DR: In this paper, the authors describe a model of nonlinear image filtering for noise reduction and edge enhancement using anisotropic diffusion, which roughly corresponds to a nonlinear diffusion process with backward heat flow across the strong edges.
Abstract: The authors describe a model of nonlinear image filtering for noise reduction and edge enhancement using anisotropic diffusion. The method is designed to enhance not only edges, but corners and T junctions as well. The method roughly corresponds to a nonlinear diffusion process with backward heat flow across the strong edges. Such a process is ill posed, making the results depend strongly on how the algorithm differs from the diffusion process. Two ways of modifying the equations using simulations on a variable grid are studied.
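The forward part of such a process can be sketched in the Perona-Malik form this line of work builds on: diffusion is inhibited where the local gradient is large. The backward-flow edge enhancement and the variable-grid modifications studied in the paper are beyond this toy, and the periodic borders via np.roll are a demo shortcut.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, lam=0.2):
    """Nonlinear diffusion: smooth within regions while inhibiting flow
    across strong edges via an edge-stopping function g."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)     # near 0 at strong edges
    for _ in range(n_iter):
        # differences toward the four neighbours (periodic border via roll)
        dn = np.roll(u, 1, 0) - u
        ds = np.roll(u, -1, 0) - u
        de = np.roll(u, 1, 1) - u
        dw = np.roll(u, -1, 1) - u
        u = u + lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Replacing g with a function that exceeds the stability range near edges is what produces the backward (edge-sharpening) heat flow the paper analyzes; that regime is ill posed and is exactly why the discretization details matter.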

Journal ArticleDOI
TL;DR: A technique for detecting and localizing corners of planar curves is proposed based on Gaussian scale space, which consists of the maxima of absolute curvature of the boundary function presented at all scales.
Abstract: A technique for detecting and localizing corners of planar curves is proposed. The technique is based on Gaussian scale space, which consists of the maxima of absolute curvature of the boundary function presented at all scales. The scale space of isolated simple and double corners is first analyzed to investigate the behavior of scale space due to smoothing and interactions between two adjacent corners. The analysis shows that the resulting scale space contains line patterns that either persist, terminate, or merge with a neighboring line. Next, the scale space is transformed into a tree that provides simple but concise representation of corners at multiple scales. Finally, a multiple-scale corner detection scheme is developed using a coarse-to-fine tree parsing technique. The parsing scheme is based on a stability criterion that states that the presence of a corner must concur with a curvature maximum observable at a majority of scales. Experiments were performed to show that the scale space corner detector is reliable for objects with multiple-size features and noisy boundaries and compares favorably with other corner detectors tested.

Journal ArticleDOI
TL;DR: An algorithm for the analysis of two-component motion in which tracking and nulling mechanisms applied to three consecutive image frames separate and estimate the individual components is given and is robust in the presence of noise.
Abstract: A fundamental assumption made in formulating optical-flow algorithms, that motion at any point in an image can be represented as a single pattern component undergoing a simple translation, fails for a number of situations that commonly occur in real-world images. An alternative formulation of the local motion assumption in which there may be two distinct patterns undergoing coherent (e.g. affine) motion within a given local analysis region is proposed. An algorithm for the analysis of two-component motion in which tracking and nulling mechanisms applied to three consecutive image frames separate and estimate the individual components is given. Precise results are obtained, even for components that differ only slightly in velocity as well as for a faint component in the presence of a dominant, masking component. The algorithm provides precise motion estimates for a set of elementary two-motion configurations and is robust in the presence of noise.

Journal ArticleDOI
TL;DR: A method for shape description of planar objects that integrates both region and boundary features is presented, an implementation of a 2D dynamic grassfire that relies on a distance surface on which elastic contours minimize an energy function.
Abstract: A method for shape description of planar objects that integrates both region and boundary features is presented. The method is an implementation of a 2D dynamic grassfire that relies on a distance surface on which elastic contours minimize an energy function. The method is based on an active contour model. Numerous implementation aspects of the shape description method were optimized. A Euclidean metric was used for optimal accuracy, and the active contour model permits bypassing some of the discretization limitations inherent in using a digital grid. Noise filtering was performed on the basis of both contour feature measures and region measures, that is, curvature extremum significance and ridge support, respectively, to obtain robust shape descriptors. Other improvements and variations of the algorithmic implementation are proposed.

Journal ArticleDOI
TL;DR: The approach first takes a set of 3-D volumetric modeling primitives and generates a hierarchical aspect representation based on the projected surfaces of the primitives; conditional probabilities capture the ambiguity of mappings between levels of the hierarchy.
Abstract: An approach to the recovery of 3-D volumetric primitives from a single 2-D image is presented. The approach first takes a set of 3-D volumetric modeling primitives and generates a hierarchical aspect representation based on the projected surfaces of the primitives; conditional probabilities capture the ambiguity of mappings between levels of the hierarchy. From a region segmentation of the input image, the authors present a formulation of the recovery problem based on the grouping of the regions into aspects. No domain-independent heuristics are used; only the probabilities inherent in the aspect hierarchy are exploited. Once the aspects are recovered, the aspect hierarchy is used to infer a set of volumetric primitives and their connectivity. As a front end to an object recognition system, the approach provides the indexing power of complex 3-D object-centered primitives while exploiting the convenience of 2-D viewer-centered aspect matching; aspects are used to represent a finite vocabulary of 3-D parts from which objects can be constructed.

Journal ArticleDOI
TL;DR: From two panoramic views at the two planned locations, a modified binocular stereo method yields a local map that is more precise but has direction-dependent uncertainties; this local map is integrated with the adjacent local maps into a more reliable global representation of the world.
Abstract: Omnidirectional views of an indoor environment at different locations are integrated into a global map. A single camera swiveling about the vertical axis takes consecutive images and arranges them into a panoramic representation, which provides rich information around the observation point: a precise omnidirectional view of the environment and coarse ranges to objects in it. Using the coarse map, the system autonomously plans consecutive observations at the intersections of lines connecting object points, where the directions of the imaging are estimated easily and precisely. From two panoramic views at the two planned locations, a modified binocular stereo method yields a local map that is more precise but has direction-dependent uncertainties. New observation points are selected to decrease the uncertainty, and another local map is yielded, which is then integrated with the adjacent local maps into a more reliable global representation of the world.

Journal ArticleDOI
TL;DR: A method that treats linear neighborhood operators within a unified framework that enables linear combinations, concatenations, resolution changes, or rotations of operators to be treated in a canonical manner is presented.
Abstract: A method that treats linear neighborhood operators within a unified framework that enables linear combinations, concatenations, resolution changes, or rotations of operators to be treated in a canonical manner is presented. Various families of operators with special kinds of symmetries (such as translation, rotation, magnification) are explicitly constructed in 1-D, 2-D, and 3-D. A concept of 'order' is defined, and finite orthonormal bases of functions closely connected with the operators of various orders are constructed. Linear transformations between the various representations are considered. The method is based on two fundamental assumptions: a decrease of resolution should not introduce spurious detail, and the local operators should be self-similar under changes of resolution. These assumptions merely sum up the even more general need for homogeneity, isotropy, scale invariance, and separability of independent dimensions of front-end processing in the absence of a priori information.

Journal ArticleDOI
TL;DR: A stereo vision system that attempts to achieve robustness with respect to scene characteristics, from textured outdoor scenes to environments composed of highly regular man-made objects is presented and gives an active role to edgels parallel to the epipolar lines, whereas they are discarded in most feature-based systems.
Abstract: A stereo vision system that attempts to achieve robustness with respect to scene characteristics, from textured outdoor scenes to environments composed of highly regular man-made objects is presented. It integrates area-based and feature-based primitives. The area-based processing provides a dense disparity map, and the feature-based processing provides an accurate location of discontinuities. An area-based cross correlation, an ordering constraint, and a weak surface smoothness assumption are used to produce an initial disparity map. This disparity map is only a blurred version of the true one because of the smoothing introduced by the cross correlation. The problem can be reduced by introducing edge information. The disparity map is smoothed and the unsupported points removed. This method gives an active role to edgels parallel to the epipolar lines, whereas they are discarded in most feature-based systems. Very good results have been obtained on complex scenes in different domains.
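The initial area-based stage can be sketched as a window-matching search along the epipolar (here horizontal) line. SSD stands in for cross correlation for brevity, the ordering and smoothness constraints are omitted, and all names are illustrative rather than the paper's.

```python
import numpy as np

def block_match(left, right, max_disp, w=2):
    """Dense disparity by area-based matching: for each left-image pixel,
    pick the disparity whose window in the right image matches best (SSD)."""
    h, wd = left.shape
    disp = np.zeros((h, wd), dtype=int)
    for y in range(w, h - w):
        for x in range(w + max_disp, wd - w):
            patch = left[y - w:y + w + 1, x - w:x + w + 1]
            costs = [np.sum((patch - right[y - w:y + w + 1,
                                           x - d - w:x - d + w + 1]) ** 2)
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

The window introduces exactly the blurring the abstract mentions: disparity edges land somewhere inside the window, which is why the feature-based stage is needed to relocate discontinuities precisely.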

Journal ArticleDOI
TL;DR: This study makes it possible to design an algorithm for detecting boundaries in the images that are likely to be extremal, and provides a better understanding of the relationship between the apparent and real shape of a 3-D object as well as algorithms for reconstructing the local shape of such an object along the rims.
Abstract: The extremal boundaries of 3-D curved objects are the images of special curves drawn on the object, called rims. They are viewpoint dependent and characterized by the fact that the optical rays of their points are tangential to the surface of the object. The mathematics of the relationship between the extremal boundaries and the surface of the object is studied. This study makes it possible to design an algorithm for detecting those boundaries in the images that are likely to be extremal. Once this has been done, one can reconstruct the rims and compute the differential properties of the surface of the object along them up to the second order. If a qualitative description is sufficient, the sign of the Gaussian curvature of the surface along the rim can be computed in a much simpler way. Experimental results are presented on synthetic and real images. The work provides a better understanding of the relationship between the apparent and real shape of a 3-D object as well as algorithms for reconstructing the local shape of such an object along the rims.

Journal ArticleDOI
TL;DR: In this article, an approach to the solution of signal-to-symbol transformation in the domain of flow fields, such as oriented texture fields and velocity vector fields, is discussed.
Abstract: An approach to the solution of signal-to-symbol transformation in the domain of flow fields, such as oriented texture fields and velocity vector fields, is discussed. The authors use the geometric theory of differential equations to derive a symbol set based on the visual appearance of phase portraits which are a geometric representation of the solution curves of a system of differential equations. They also provide the computational framework to start with a given flow field and derive its symbolic representation. Specifically, they segment the given texture, derive its symbolic representation, and perform a quantitative reconstruction of the salient features of the original texture based on the symbolic descriptors. Results of applying this technique to several real texture images are presented. This technique is useful in describing complex flow visualization pictures, defects in lumber processing, defects in semiconductor wafer inspection, and optical flow fields.

Journal ArticleDOI
TL;DR: The authors present a closed-form solution to motion and structure parameters from line correspondences through three monocular perspective views that makes use of redundancy in the data to improve the accuracy of the solutions.
Abstract: This work discusses estimating motion and structure parameters from line correspondences of a rigid scene. The authors present a closed-form solution to motion and structure parameters from line correspondences through three monocular perspective views. The algorithm makes use of redundancy in the data to improve the accuracy of the solutions. The uniqueness of the solution is established, and necessary and sufficient conditions for degenerate spatial line configurations are given. Optimization has been employed to further improve the accuracy of the estimates in the presence of noise. Simulations have shown that the errors of the optimized estimates are close to the theoretical lower error bound.

Journal ArticleDOI
TL;DR: A data-driven system for segmenting scenes into objects and their components is presented, and applications of collations to stereo correspondence, object-level segmentation, and shape description are illustrated.
Abstract: A data-driven system for segmenting scenes into objects and their components is presented. This segmentation system generates hierarchies of features that correspond to structural elements such as boundaries and surfaces of objects. The technique is based on perceptual organization, implemented as a mechanism for exploiting geometrical regularities in the shapes of objects as projected on images. Edges are recursively grouped according to geometrical relationships into a description hierarchy ranging from edges to the visible surfaces of objects. These edge groupings, which are termed collated features, are abstract descriptors encoding structural information. The geometrical relationships employed are quasi-invariant over 2-D projections and are common to structures of most objects. Thus, collations have a high likelihood of corresponding to parts of objects. Collations serve as intermediate and high-level features for various visual processes. Applications of collations to stereo correspondence, object-level segmentation, and shape description are illustrated.

Journal ArticleDOI
TL;DR: A computational approach to image matching that uses multiple attributes associated with each image point to yield a generally overdetermined system of constraints, taking into account possible structural discontinuities and occlusions is described.
Abstract: A computational approach to image matching is described. It uses multiple attributes associated with each image point to yield a generally overdetermined system of constraints, taking into account possible structural discontinuities and occlusions. In the algorithm implemented, intensity, edgeness, and cornerness attributes are used in conjunction with the constraints arising from intraregional smoothness, field continuity and discontinuity, and occlusions to compute dense displacement fields and occlusion maps along the pixel grids. The intensity, edgeness, and cornerness are invariant under rigid motion in the image plane. In order to cope with large disparities, a multiresolution multigrid structure is employed. Coarser level edgeness and cornerness measures are obtained by blurring the finer level measures. The algorithm has been tested on real-world scenes with depth discontinuities and occlusions. A special case of two-view matching is stereo matching, where the motion between two images is known. The algorithm can be easily specialized to perform stereo matching using the epipolar constraint.

Journal ArticleDOI
TL;DR: 3-D vision techniques for incrementally building an accurate 3-D representation of rugged terrain using multiple sensors and the locus method, which is used to estimate the vehicle position in the digital elevation map (DEM), are presented.
Abstract: The authors present 3-D vision techniques for incrementally building an accurate 3-D representation of rugged terrain using multiple sensors. They have developed the locus method to model the rugged terrain. The locus method exploits sensor geometry to efficiently build a terrain representation from multiple sensor data. The locus method is used to estimate the vehicle position in the digital elevation map (DEM) by matching a sequence of range images with the DEM. Experimental results from large-scale real and synthetic terrains demonstrate the feasibility and power of the 3-D mapping techniques for rugged terrain. In real world experiments, a composite terrain map was built by merging 125 real range images. Using synthetic range images, a composite map of 150 m was produced from 159 images. With the proposed system, mobile robots operating in rugged environments can build accurate terrain models from multiple sensor data.

Journal ArticleDOI
TL;DR: This work concentrates on 3-D appearance modeling and succeeds under favorable viewing conditions by using simplified processes to segment objects from the scene and derive the spatial agreement of object features.
Abstract: The authors address the problem of generating representations of 3-D objects automatically from exploratory view sequences of unoccluded objects. In building the models, processed frames of a video sequence are clustered into view categories called aspects, which represent characteristic views of an object invariant to its apparent position, size, 2-D orientation, and limited foreshortening deformation. The aspects as well as the aspect transitions of a view sequence are used to build (and refine) the 3-D object representations online in the form of aspect-transition matrices. Recognition emerges as the hypothesis that has accumulated the maximum evidence at each moment. The 'winning' object continues to refine its representation until either the camera is redirected or another hypothesis accumulates greater evidence. This work concentrates on 3-D appearance modeling and succeeds under favorable viewing conditions by using simplified processes to segment objects from the scene and derive the spatial agreement of object features.

Journal ArticleDOI
TL;DR: A one-pass parallel thinning algorithm based on a number of criteria, including connectivity, unit-width convergence, medial axis approximation, noise immunity, and efficiency, is proposed and extended to the derived-grid to attain an isotropic medial axis representation.
Abstract: A one-pass parallel thinning algorithm based on a number of criteria, including connectivity, unit-width convergence, medial axis approximation, noise immunity, and efficiency, is proposed. A pipeline processing model is assumed for the development. Precise analysis of the thinning process is presented to show its properties, and proofs of skeletal connectivity and convergence are provided. The proposed algorithm is further extended to the derived-grid to attain an isotropic medial axis representation. A set of measures based on the desired properties of thinning is used for quantitative evaluation of various algorithms. Image reconstruction from connected skeletons is also discussed. Evaluation shows that the procedures compare favorably to others. >
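
For contrast, the classic two-subiteration Zhang-Suen scheme illustrates what "parallel thinning" means: all deletable pixels of a subiteration are marked first and removed simultaneously. This is a textbook baseline, not the paper's one-pass algorithm or its derived-grid extension:

```python
import numpy as np

def zhang_suen_thin(img):
    """Two-subiteration parallel thinning (Zhang & Suen, 1984).

    img is a binary uint8 array; foreground = 1.  Shown only as a
    generic illustration of parallel thinning.
    """
    img = img.copy().astype(np.uint8)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            rows, cols = img.shape
            for i in range(1, rows - 1):
                for j in range(1, cols - 1):
                    if img[i, j] != 1:
                        continue
                    # 8-neighborhood, clockwise from north.
                    p = [img[i-1, j], img[i-1, j+1], img[i, j+1],
                         img[i+1, j+1], img[i+1, j], img[i+1, j-1],
                         img[i, j-1], img[i-1, j-1]]
                    b = sum(p)                                   # neighbor count
                    a = sum(p[k] == 0 and p[(k+1) % 8] == 1      # 0->1 transitions
                            for k in range(8))
                    if step == 0:
                        cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if a == 1 and 2 <= b <= 6 and cond:
                        to_delete.append((i, j))
            for i, j in to_delete:       # delete all marked pixels at once
                img[i, j] = 0
            changed = changed or bool(to_delete)
    return img

bar = np.zeros((5, 9), np.uint8)
bar[1:4, 1:8] = 1                 # 3-pixel-thick horizontal bar
skeleton = zhang_suen_thin(bar)
```

Because deletions within a subiteration are simultaneous, the scheme maps naturally onto the pipeline processing model assumed in the paper.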

Journal ArticleDOI
TL;DR: One of the new methods, called the cross patch (CP) method, is shown to be very fast, robust in the presence of noise, and always based on a proper surface parameterization, provided the perturbations of the surface over the patch neighborhood are isotropically distributed.
Abstract: Curvature sampling of arbitrary, fully described 3-D objects (e.g. tomographic medical images) is difficult because of surface patch parameterization problems. Five practical solutions are presented and characterized: the Sander-Zucker approach, two novel methods based on direct surface mapping, a piecewise linear manifold technique, and a turtle geometry method. One of the new methods, called the cross patch (CP) method, is shown to be very fast, robust in the presence of noise, and always based on a proper surface parameterization, provided the perturbations of the surface over the patch neighborhood are isotropically distributed. >
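
The link between patch parameterization and curvature can be shown with a textbook construction: fit a quadric height function z = h(x, y) to a patch of samples by least squares, then read off Gaussian and mean curvature at the patch origin. This is a generic Monge-patch sketch, not the cross patch (CP) method:

```python
import numpy as np

def quadric_curvature(points):
    """Fit z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f to (x, y, z)
    samples, then evaluate Gaussian (K) and mean (H) curvature at the
    origin of the fitted Monge patch.  Illustrative only."""
    P = np.asarray(points, dtype=float)
    x, y, z = P[:, 0], P[:, 1], P[:, 2]
    A = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
    a, b, c, d, e, f = np.linalg.lstsq(A, z, rcond=None)[0]
    # Partial derivatives of h at the origin.
    hx, hy, hxx, hxy, hyy = d, e, 2*a, b, 2*c
    w = 1 + hx*hx + hy*hy
    K = (hxx*hyy - hxy*hxy) / w**2
    H = ((1 + hy*hy)*hxx - 2*hx*hy*hxy + (1 + hx*hx)*hyy) / (2 * w**1.5)
    return K, H

# Sample a unit paraboloid z = (x^2 + y^2)/2; at the origin K = H = 1.
pts = [(x, y, 0.5*(x*x + y*y)) for x in (-1, 0, 1) for y in (-1, 0, 1)]
K, H = quadric_curvature(pts)
```

The curvature values are only as good as the local parameterization, which is exactly the difficulty the five methods in the paper address in different ways.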

Journal ArticleDOI
TL;DR: A two-stage algorithm for visual surface reconstruction from scattered data while preserving discontinuities is presented and the weighted bicubic spline as a surface descriptor removes outliers and reduces Gaussian noise.
Abstract: A two-stage algorithm for visual surface reconstruction from scattered data while preserving discontinuities is presented. The first stage consists of a robust local approximation algorithm (the moving least median of squares (MLMS) of error) to clean the data and create a grid from the original scattered data points. This process is discontinuity preserving. The second stage introduces a weighted bicubic spline (WBS) as a surface descriptor. The WBS has a factor in the regularizing term that adapts the behavior of the spline across discontinuities. The weighted bicubic approximating spline can approximate data with step discontinuities with no discernible distortion in the approximating surface. The combination of robust surface fitting and WBSs removes outliers and reduces Gaussian noise. Either stage by itself would not effectively remove both kinds of noise. Experimental results with the two-stage algorithm are presented. >
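
The robustness idea behind a least-median-of-squares fit can be shown in one dimension: among candidate lines through point pairs, keep the one minimizing the median of the squared residuals, so up to half the data can be grossly wrong without pulling the fit. This is a toy sketch of the principle, not the paper's moving-window MLMS stage:

```python
import numpy as np
from itertools import combinations

def lmeds_line(xs, ys):
    """Least-median-of-squares line fit: for each line through a pair
    of points, score it by the median squared residual over all data,
    and return the (slope, intercept) with the smallest score."""
    best, best_med = None, np.inf
    for i, j in combinations(range(len(xs)), 2):
        if xs[i] == xs[j]:
            continue                      # vertical pair: skip
        m = (ys[j] - ys[i]) / (xs[j] - xs[i])
        b = ys[i] - m * xs[i]
        med = np.median((ys - (m * xs + b)) ** 2)
        if med < best_med:
            best, best_med = (m, b), med
    return best

xs = np.arange(10.0)
ys = 2 * xs + 1
ys[3] = 40.0                 # gross outlier; the median score ignores it
m, b = lmeds_line(xs, ys)
```

An ordinary least-squares fit would be dragged toward the outlier; the median-of-squares criterion is what lets the first stage clean the data before the spline stage smooths it.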

Journal ArticleDOI
TL;DR: Anon, the system presented in this paper, interprets images of engineering drawings by combining schemata describing prototypical drawing constructs with a library of low-level image analysis routines and a set of explicit control rules applied by an LR(1) parser.
Abstract: A methodology for the interpretation of images of engineering drawings is presented. The approach is based on the combination of schemata describing prototypical drawing constructs with a library of low-level image analysis routines and a set of explicit control rules applied by an LR(1) parser. The resulting system (Anon) integrates bottom-up and top-down processing strategies within a single, flexible framework modeled on the human perceptual cycle. Anon's structure and operation are described and discussed, and examples of its interpretation of real mechanical drawings are shown. >