
Showing papers in "Journal of Mathematical Imaging and Vision in 2009"


Journal ArticleDOI
TL;DR: Five of the six distance functions are shown to be bi-invariant metrics on SO(3), only four of which are boundedly equivalent to each other, and quaternions are found to be the spatially and computationally most efficient representation for 3D rotations.
Abstract: 3D rotations arise in many computer vision, computer graphics, and robotics problems and evaluation of the distance between two 3D rotations is often an essential task. This paper presents a detailed analysis of six functions for measuring distance between 3D rotations that have been proposed in the literature. Based on the well-developed theory behind 3D rotations, we demonstrate that five of them are bi-invariant metrics on SO(3) but that only four of them are boundedly equivalent to each other. We conclude that it is both spatially and computationally more efficient to use quaternions for 3D rotations. Lastly, by treating the two rotations as a true and an estimated rotation matrix, we illustrate the geometry associated with iso-error measures.
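As a concrete illustration of the quaternion representation the authors favor, the sketch below (Python with numpy; the function name and example values are ours) evaluates two of the commonly studied distances: the Euclidean distance on the quaternion double cover and the geodesic angle on SO(3).

```python
import numpy as np

def rotation_distances(q1, q2):
    """Two common distances between 3D rotations given as unit quaternions.
    Since q and -q encode the same rotation, both handle the double cover."""
    q1 = q1 / np.linalg.norm(q1)
    q2 = q2 / np.linalg.norm(q2)
    # Euclidean distance on S^3, accounting for the quaternion sign ambiguity
    d_euclid = min(np.linalg.norm(q1 - q2), np.linalg.norm(q1 + q2))
    # Geodesic distance on SO(3): the angle of the relative rotation
    d_geodesic = 2.0 * np.arccos(np.clip(abs(np.dot(q1, q2)), 0.0, 1.0))
    return d_euclid, d_geodesic

# Example: identity vs. a 90-degree rotation about the z-axis
qa = np.array([1.0, 0.0, 0.0, 0.0])
qb = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
print(rotation_distances(qa, qb))  # geodesic part prints pi/2
```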

620 citations


Journal ArticleDOI
TL;DR: A generative model for textures based on a local sparse description of the image content is presented; it enforces the sparsity of the expansion of local texture patches on adapted atomic elements.
Abstract: This paper presents a generative model for textures that uses a local sparse description of the image content. This model enforces the sparsity of the expansion of local texture patches on adapted atomic elements. The analysis of a given texture within this framework performs the sparse coding of all the patches of the texture into the dictionary of atoms. Conversely, the synthesis of a new texture is performed by solving an optimization problem that seeks a texture whose patches are sparse in the dictionary. This paper explores several strategies to choose this dictionary. A set of hand-crafted dictionaries composed of edge, oscillation, line or crossing elements makes it possible to synthesize images with geometric features. Another option is to define the dictionary as the set of all the patches of an input exemplar. This leads to computer graphics methods for synthesis and shares some similarities with non-local means filtering. The last method we explore learns the dictionary by an optimization process that maximizes the sparsity of a set of exemplar patches. Applications of all these methods to texture synthesis, inpainting and classification show the efficiency of the proposed texture model.
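To make the sparse-coding step concrete, here is a minimal orthogonal matching pursuit in Python (numpy only, names ours). The random dictionary is purely illustrative; the paper works with hand-crafted, exemplar-based, or learned dictionaries.

```python
import numpy as np

def omp(D, x, n_atoms):
    """Sparse-code a patch x over dictionary D (unit-norm columns) by greedy
    orthogonal matching pursuit with at most n_atoms nonzero coefficients."""
    residual = x.astype(float).copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol           # re-fit on the support
    coef[support] = sol
    return coef

# Toy usage: code a 64-dim patch over a random 128-atom dictionary
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
patch = rng.standard_normal(64)
code = omp(D, patch, n_atoms=5)
print(np.count_nonzero(code))  # at most 5
```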

191 citations


Journal ArticleDOI
TL;DR: An old and largely forgotten duality-based algorithm is shown to be more general than recent schemes (such as the Chambolle projection algorithm), able to improve upon them, and competitive with first-order schemes recently proved optimal.
Abstract: This paper deals with first-order numerical schemes for image restoration. These schemes rely on a duality-based algorithm proposed in 1979 by Bermudez and Moreno. This old and largely forgotten algorithm turns out to be more general than recent schemes (such as the Chambolle projection algorithm) and able to improve upon them. Total variation regularization and smoothed total variation regularization are investigated, and algorithms are presented for both in image restoration. We prove the convergence of all the proposed schemes. We illustrate our study with numerous numerical examples, and make comparisons with a class of efficient algorithms (proved to be optimal among first-order numerical schemes) recently introduced by Y. Nesterov.
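For reference, the Chambolle projection algorithm mentioned above fits in a few lines of Python. This is the standard fixed-point iteration for the dual of the ROF model (a step size of at most 1/8 guarantees convergence), not the Bermudez-Moreno scheme studied in the paper.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Divergence, the negative adjoint of grad."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def tv_denoise(f, lam, n_iter=200, tau=0.125):
    """Chambolle's dual projection iteration for min_u ||u-f||^2/(2*lam) + TV(u)."""
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        scale = 1.0 + tau * np.sqrt(gx**2 + gy**2)
        px = (px + tau * gx) / scale
        py = (py + tau * gy) / scale
    return f - lam * div(px, py)
```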

158 citations


Journal ArticleDOI
TL;DR: The results presented here on real 3D locally affine registration suggest that the novel framework provides a general and efficient way of fusing local rigid or affine deformations into a global invertible transformation without introducing artifacts, independently of the way local deformations are first estimated.
Abstract: In this article, we focus on the parameterization of non-rigid geometrical deformations with a small number of flexible degrees of freedom. In previous work, we proposed a general framework called polyaffine to parameterize deformations with a finite number of rigid or affine components, while guaranteeing the invertibility of global deformations. However, this framework lacks some important properties: the inverse of a polyaffine transformation is not polyaffine in general, and the polyaffine fusion of affine components is not invariant with respect to a change of coordinate system. We present here a novel general framework, called Log-Euclidean polyaffine, which overcomes these defects. We also detail a simple algorithm, the Fast Polyaffine Transform, which makes it possible to compute Log-Euclidean polyaffine transformations and their inverses very efficiently on regular grids. The results presented here on real 3D locally affine registration suggest that our novel framework provides a general and efficient way of fusing local rigid or affine deformations into a global invertible transformation without introducing artifacts, independently of the way local deformations are first estimated.
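The workhorse behind fast Log-Euclidean computations is exponentiating a stationary velocity field by scaling and squaring. A minimal 2D sketch in Python follows, assuming the velocity field is an array of shape (2, H, W) in pixel units; function names are ours.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose(u, v):
    """Displacement field of (id + u) o (id + v): w(x) = v(x) + u(x + v(x))."""
    h, w = v.shape[1:]
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([yy + v[0], xx + v[1]])       # sample points x + v(x)
    warped = np.stack([map_coordinates(u[k], coords, order=1, mode="nearest")
                       for k in range(2)])
    return v + warped

def exp_velocity_field(vel, n_steps=6):
    """Exponential of a stationary velocity field by scaling and squaring.
    n_steps should be large enough that vel / 2**n_steps is subpixel."""
    disp = vel / (2.0 ** n_steps)     # scaling: exp(v / 2^N) ~ id + v / 2^N
    for _ in range(n_steps):
        disp = compose(disp, disp)    # squaring: phi <- phi o phi
    return disp                       # displacement field of exp(vel)
```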

150 citations


Journal ArticleDOI
TL;DR: The proposed method exploits the fact that, under the single-parameter Division Model, distorted points are concyclic; it works directly on the distorted points rather than on undistorted ones, which should make it more robust, and it also computes the centre of radial distortion, which is important for obtaining optimal results.
Abstract: This paper presents a new simple method to determine the distortion function of camera systems suffering from radial lens distortion. Neither information about the intrinsic camera parameters nor 3D-point correspondences is required. The method works from a single image and uses the constraint that straight lines in the 3D world project to circular arcs in the image plane under the single-parameter Division Model. Most former approaches to correcting radial distortion are based on the collinearity of undistorted points. The method proposed in this paper, however, is based on the observation that distorted points are concyclic and works directly on the distorted points rather than on undistorted ones; it should therefore be more robust. It also computes the centre of radial distortion, which is important for obtaining optimal results. The results of experimental measurements on synthetic and real data are presented and discussed.
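The central geometric step, fitting a circle to the distorted image points of a straight line, can be illustrated with a plain algebraic least-squares (Kasa) fit; this generic fit merely stands in for the paper's actual estimation procedure.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic circle fit: solve x^2 + y^2 + D*x + E*y + F = 0 in least squares."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r

# Noisy points on an arc (as a distorted straight line would produce)
t = np.linspace(0, np.pi, 50)
x = 2 + 5 * np.cos(t) + 0.01 * np.random.randn(50)
y = 3 + 5 * np.sin(t) + 0.01 * np.random.randn(50)
print(fit_circle(x, y))  # approximately (2, 3, 5)
```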

144 citations


Journal ArticleDOI
TL;DR: A new birth-and-death dynamics for configurations of disks in the plane is introduced to address image processing problems that consist of detecting a configuration of objects in a digital image; convergence of the continuous process is proved and a discrete scheme converging to the continuous case is proposed.
Abstract: We define a new birth and death dynamics dealing with configurations of disks in the plane. We prove the convergence of the continuous process and propose a discrete scheme converging to the continuous case. This framework is developed to address image processing problems that consist of detecting a configuration of objects in a digital image. The derived algorithm is applied to tree crown extraction and bird detection from aerial images. The performance of this approach is shown on real data.

127 citations


Journal ArticleDOI
TL;DR: Stability in the 2-norm is derived in both the continuous and discrete settings, and numerical results show the improved denoising capabilities of higher-order filtering compared to classical methods.
Abstract: This paper provides a mathematical analysis of higher order variational methods and nonlinear diffusion filtering for image denoising. Besides the average grey value, it is shown that higher order diffusion filters preserve higher moments of the initial data. While a maximum-minimum principle in general does not hold for higher order filters, we derive stability in the 2-norm in the continuous and discrete setting. Considering the filters in terms of forward and backward diffusion, one can explain how not only the preservation, but also the enhancement of certain features in the given data is possible. Numerical results show the improved denoising capabilities of higher order filtering compared to the classical methods.
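The paper treats higher-order filters in general; as one concrete classical instance, a You-Kaveh-type fourth-order diffusion step can be sketched as follows (explicit scheme, so the time step must be small; boundary handling via np.roll is periodic for brevity).

```python
import numpy as np

def laplacian(u):
    """5-point Laplacian with periodic boundary (np.roll, for brevity)."""
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)

def fourth_order_diffusion(u, n_iter=200, kappa=5.0, dt=0.02):
    """A classical higher-order filter (You-Kaveh type): u_t = -Lap(g(|Lap u|) Lap u).
    The explicit step dt must stay small for stability of the biharmonic term."""
    u = u.astype(float).copy()
    for _ in range(n_iter):
        lap = laplacian(u)
        g = 1.0 / (1.0 + (np.abs(lap) / kappa) ** 2)   # edge-stopping function
        u -= dt * laplacian(g * lap)
    return u
```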

97 citations


Journal ArticleDOI
TL;DR: A new method for segmenting closed contours and surfaces, built on a variant of the minimal path approach, detects keypoints automatically by front propagation; it can also find open curves when extra information provides a stopping criterion, and it is applied to 3D data with promising results.
Abstract: In this paper, we present a new method for segmenting closed contours and surfaces. Our work builds on a variant of the minimal path approach. First, an initial point on the desired contour is chosen by the user. Next, new keypoints are detected automatically using a front propagation approach. We assume that the desired object has a closed boundary. This a-priori knowledge of the topology is used to devise a relevant criterion for stopping the keypoint detection and front propagation. The final domain visited by the front yields a band surrounding the object of interest. Linking pairs of neighboring keypoints with minimal paths allows us to extract a closed contour from a 2D image. The approach can also be used to find an open curve, given extra information to serve as a stopping criterion. Detection of a variety of objects in real images is demonstrated. Using a similar idea, we can extract networks of minimal paths from a 3D image, a procedure we call Geodesic Meshing. The proposed method is applied to 3D data with promising results.
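A discrete stand-in for the minimal path computation is Dijkstra's algorithm on the pixel grid with an image-derived potential; the continuous formulation would use fast marching, but the sketch below (names ours) conveys the idea.

```python
import heapq
import numpy as np

def minimal_path(potential, start, end):
    """Discrete minimal path (Dijkstra) between two pixels; the path cost is
    the sum of the potential along the path."""
    h, w = potential.shape
    dist = np.full((h, w), np.inf)
    parent = {}
    dist[start] = potential[start]
    pq = [(dist[start], start)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if (i, j) == end:
            break
        if d > dist[i, j]:
            continue                                  # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and d + potential[ni, nj] < dist[ni, nj]:
                dist[ni, nj] = d + potential[ni, nj]
                parent[(ni, nj)] = (i, j)
                heapq.heappush(pq, (dist[ni, nj], (ni, nj)))
    path, node = [], end
    while node != start:                              # walk back to the start
        path.append(node)
        node = parent[node]
    path.append(start)
    return path[::-1]
```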

82 citations


Journal ArticleDOI
TL;DR: An algebraic approach to estimating lens distortion parameters, based on the rectification of lines in the image, obtains the parameters by minimizing a polynomial of total degree 4 in several variables.
Abstract: A very important property of the usual pinhole model for camera projection is that 3D lines in the scene project to 2D lines in the image. Unfortunately, wide-angle lenses (especially low-cost lenses) may introduce a strong barrel distortion, which makes the usual pinhole model fail. Lens distortion models try to correct such distortion. In this paper, we propose an algebraic approach to the estimation of the lens distortion parameters based on the rectification of lines in the image. Using the proposed method, the lens distortion parameters are obtained by minimizing a polynomial of total degree 4 in several variables. We perform numerical experiments using calibration patterns and real scenes to show the performance of the proposed method.
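The paper's estimation is algebraic; as a generic numerical illustration of the same line-rectification idea, one can minimize a straightness measure over a one-parameter radial model. The model, function names, and parameter bounds below are our assumptions (image coordinates are taken relative to the distortion centre).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def straightness(lines, k1):
    """Sum of squared orthogonal residuals of undistorted points to their
    best-fit lines, under a simple one-parameter radial model."""
    err = 0.0
    for pts in lines:                                  # pts: (n, 2) distorted points
        r2 = (pts ** 2).sum(axis=1)
        und = pts * (1.0 + k1 * r2)[:, None]           # undistort with candidate k1
        c = und - und.mean(axis=0)
        # smallest singular value squared = total least-squares line residual
        err += np.linalg.svd(c, compute_uv=False)[-1] ** 2
    return err

def estimate_k1(lines):
    """Pick k1 that best straightens the given sets of distorted line points.
    Bounds assume pixel coordinates (r^2 on the order of 1e5-1e6)."""
    res = minimize_scalar(lambda k: straightness(lines, k),
                          bounds=(-1e-6, 1e-6), method="bounded")
    return res.x
```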

81 citations


Journal ArticleDOI
TL;DR: A statistical model predicts the detectability of a spot on a textured background from gestalt-driven measurements and an approximate representation of the background texture; the interest of the a-contrario observer is illustrated for two real applications: the detectability of opacities in mammograms and the perception of stains on pieces of clothing.
Abstract: Using the a-contrario framework recently introduced in the modeling of human visual perception, we build a statistical model to predict the detectability of a spot on a textured background. Contrary to classical formalisms (ideal observer and its extensions), which assume a known probability distribution for the signal to be detected, the a-contrario observer we build only relies on gestalt-driven measurements and on an approximate representation of the background texture. It extends the scope of previous a-contrario detectors by using a non-i.i.d. naive model and a notion of local context. The models we propose are first validated theoretically in the case of powerlaw textures, which are, in particular, classical models for mammograms. Then, going to more general microtextures (colored noise processes), we compute the relationship between the size of a spot and the minimum contrast required to reach a given detectability threshold according to the a-contrario observer. Three main types of microtextures pop out from this characterization, and in particular low-frequency textures for which curiously enough, the contrast being given, the most salient spots are the smallest ones. Last, we illustrate the interest of the a-contrario observer for two real applications: the detectability of opacities in mammograms and the perception of stains on pieces of clothing.
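The flavor of an a-contrario decision can be conveyed with the simplest possible case: a number of false alarms (NFA) for a bright spot against an i.i.d. Gaussian background. The paper's observer is considerably richer (non-i.i.d. background model, local context); this sketch and its names are ours.

```python
import numpy as np
from math import erfc, sqrt

def spot_nfa(img, center, radius, sigma, n_tests):
    """A-contrario NFA for a bright spot: the number of tests times the
    probability, under a zero-mean white Gaussian background of std sigma,
    of observing so high a mean inside the disk. NFA < 1 flags a detection."""
    yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
    disk = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    n = disk.sum()
    z = img[disk].mean() / (sigma / np.sqrt(n))   # standardized disk mean
    return n_tests * 0.5 * erfc(z / sqrt(2.0))    # Gaussian upper tail
```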

74 citations


Journal ArticleDOI
TL;DR: An energy of infinitesimal deformations of continuous 1- and 2-dimensional shapes, based on the elastic energy of deformed objects, defines a shape metric that is inherently invariant with respect to Euclidean transformations and yields very natural, detail-preserving deformations.
Abstract: Deformations of shapes and distances between shapes are an active research topic in computer vision. We propose an energy of infinitesimal deformations of continuous 1- and 2-dimensional shapes that is based on the elastic energy of deformed objects. This energy defines a shape metric which is inherently invariant with respect to Euclidean transformations and yields very natural deformations which preserve details. We compute shortest paths between planar shapes based on elastic deformations and apply our approach to the modeling of 2-dimensional shapes.

Journal ArticleDOI
TL;DR: This work formally proves that some optimum-path forest methods from two distinct region-based segmentation paradigms, with internal and external seeds and with only internal seeds, indeed minimize some graph-cut measures.
Abstract: Image segmentation can be elegantly solved by optimum-path forest and minimum cut in graph. Given that both approaches exploit similar image graphs, some comparative analysis is expected between them. We clarify their differences and provide their comparative analysis from the theoretical point of view, for the case of binary segmentation (object/background) in which hard constraints (seeds) are provided interactively. Particularly, we formally prove that some optimum-path forest methods from two distinct region-based segmentation paradigms, with internal and external seeds and with only internal seeds, indeed minimize some graph-cut measures. This leads to a proof of the necessary conditions under which the optimum-path forest algorithm and the min-cut/max-flow algorithm produce exactly the same segmentation result, allowing a comparative analysis between them.
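A minimal sketch (Python, names ours) of the optimum-path forest with the usual f_max path cost, computed Dijkstra-style from labeled seeds; segmentation with internal and external seeds, as here, is one of the two paradigms whose relation to graph-cut measures the paper proves.

```python
import heapq
import numpy as np

def optimum_path_forest(img, seeds):
    """Binary segmentation by the image foresting transform with the f_max
    path cost (cost of a path = largest arc weight along it).
    seeds: dict {(i, j): label} with object (1) and background (0) seeds."""
    h, w = img.shape
    cost = np.full((h, w), np.inf)
    label = np.zeros((h, w), dtype=int)
    heap = []
    for (i, j), lab in seeds.items():
        cost[i, j] = 0.0
        label[i, j] = lab
        heapq.heappush(heap, (0.0, (i, j)))
    while heap:
        c, (i, j) = heapq.heappop(heap)
        if c > cost[i, j]:
            continue                                  # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                new_cost = max(c, abs(float(img[i, j]) - float(img[ni, nj])))
                if new_cost < cost[ni, nj]:
                    cost[ni, nj] = new_cost
                    label[ni, nj] = label[i, j]       # inherit the seed's label
                    heapq.heappush(heap, (new_cost, (ni, nj)))
    return label
```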

Journal ArticleDOI
TL;DR: Two new nonlocal nonlinear diffusion models for noise reduction are proposed, analyzed and implemented and they do preserve and enhance the most cherished features of Perona-Malik while delivering well-posed equations which admit a stable natural discretization.
Abstract: Two new nonlocal nonlinear diffusion models for noise reduction are proposed, analyzed and implemented. They are both a close relative of the celebrated Perona-Malik equation. In a way, they can be viewed as a new regularization paradigm for Perona-Malik. They do preserve and enhance the most cherished features of Perona-Malik while delivering well-posed equations which admit a stable natural discretization. Unlike other regularizations, however, certain piecewise smooth functions are (meta)stable equilibria and, as a consequence, their dynamical behavior and that of their discrete implementations can be fully understood and do not lead to any "paradox". The presence of nontrivial equilibria also explains why blurring is kept in check. One of the models has been proved to be well-posed. Numerical experiments are presented that illustrate the main features of the new models and that provide insight into their interesting dynamical behavior as well as demonstrate their effectiveness as a denoising tool.
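For context, the original Perona-Malik scheme that the new models regularize can be written in a few lines (classical four-neighbor discretization; periodic boundaries via np.roll for brevity). This is the ill-posed baseline, not the paper's nonlocal models.

```python
import numpy as np

def perona_malik(u, n_iter=50, kappa=10.0, dt=0.2):
    """Classical Perona-Malik diffusion with the exponential edge-stopping
    function; dt <= 0.25 keeps the explicit 4-neighbor scheme stable."""
    u = u.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        dn = np.roll(u, 1, 0) - u    # differences to the four neighbors
        ds = np.roll(u, -1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```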

Journal ArticleDOI
TL;DR: In this paper, a generalized Mumford-Shah functional is proposed and numerically investigated for the segmentation of images modulated due to, e.g., coil sensitivities, and an algorithm for image segmentation with fully automatized initialization is presented.
Abstract: Topological sensitivity analysis is performed for the piecewise constant Mumford-Shah functional. Topological and shape derivatives are combined in order to derive an algorithm for image segmentation with fully automatized initialization. Segmentation of 2D and 3D data is presented. Further, a generalized Mumford-Shah functional is proposed and numerically investigated for the segmentation of images modulated due to, e.g., coil sensitivities.

Journal ArticleDOI
TL;DR: The most commonly used mathematical measure of circularity, the Form Factor, is shown to be highly resolution dependent; two new measures are presented, one based on the theory of Mean Deviations and one on the mathematical definition of a circle, and both are shown to be better overall than previous measures.
Abstract: In this paper we demonstrate that the most commonly used mathematical measure of circularity, the Form Factor, is highly resolution dependent. Furthermore we show that despite the abundance of papers proposing measures of roundness, most of the new measures are mathematically equivalent to the Form Factor. Only four measures were found that were different. We then present two new measures, the first based on the theory of Mean Deviations and the second based on the mathematical definition of a circle. When compared in terms of resolution dependence, order of complexity, ease of calculation, and how well they match human perception, the two new measures are shown to be better overall than the previous measures. The two new measures are resolution independent in the sense that changing the resolution makes no change to the order of circularity of different shapes. That is, changing the resolution does not change whether one object would be considered more round than another on the basis of the measure. None of the other measures has this property.
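The resolution dependence is easy to reproduce: compute the Form Factor 4πA/P² of the same disk rasterized at two resolutions, with pixel-count area and boundary-edge-count perimeter. The perimeter estimator below is deliberately naive; that estimation is precisely where the resolution dependence creeps in.

```python
import numpy as np

def form_factor(mask):
    """Form Factor 4*pi*A/P^2 of a binary shape, with pixel-count area and
    boundary-edge-count perimeter (a crude digital perimeter estimator)."""
    area = mask.sum()
    perim = 0
    for ax in (0, 1):
        for sh in (1, -1):
            # count mask pixels whose neighbor in this direction is background
            perim += (mask & ~np.roll(mask, sh, axis=ax)).sum()
    return 4.0 * np.pi * area / perim ** 2

def disk(n):
    yy, xx = np.mgrid[:n, :n]
    return ((yy - n / 2) ** 2 + (xx - n / 2) ** 2) < (0.4 * n) ** 2

# The same disk rasterized at two resolutions gets different scores,
# and neither is the ideal value 1 for a perfect circle:
print(form_factor(disk(32)), form_factor(disk(512)))
```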

Journal ArticleDOI
TL;DR: The proposed approach for detecting faces in three-dimensional scenes is tolerant to partial occlusions produced by any kind of object and can improve the robustness of any system requiring a face detection stage in non-controlled scenarios.
Abstract: This paper presents an innovative approach for the detection of faces in three-dimensional scenes. The method is tolerant to partial occlusions produced by the presence of any kind of object. The detection algorithm uses invariant properties of the surfaces to segment salient facial features, namely the eyes and the nose. At least two facial features must be clearly visible in order to perform face detection. Candidate faces are then registered using an ICP (Iterative Closest Point) based approach designed to avoid samples that belong to the occluding objects. The final face versus non-face discrimination is computed by a Gappy PCA (GPCA) classifier, which is able to classify candidate faces using only those regions of the surface considered to be non-occluded. The algorithm has been tested on the UND database, obtaining 100% correct detection with only one false alarm. The database was then processed with an artificial occlusion generator producing realistic acquisitions that emulate unconstrained scenarios. A correct detection rate of 89.8% shows that 3D data is particularly suited to handling occluding objects. The results were also verified on a small test set containing real-world occlusions, with 90.4% of faces correctly detected. The proposed approach can be used to improve the robustness of all systems requiring a face detection stage in non-controlled scenarios.

Journal ArticleDOI
TL;DR: The task of finding optimal deformations, or geodesic paths, between facial surfaces reduces to that of finding geodesics between level curves, which is accomplished using the theory of elastic shape analysis of 3D curves.
Abstract: This paper studies the problem of analyzing variability in shapes of facial surfaces using a Riemannian framework, a fundamental approach that allows for joint matchings, comparisons, and deformations of faces under a chosen metric. The starting point is to impose a curvilinear coordinate system, named the Darcyan coordinate system, on facial surfaces; it is based on the level curves of the surface distance function measured from the tip of the nose. Each facial surface is now represented as an indexed collection of these level curves. The task of finding optimal deformations, or geodesic paths, between facial surfaces reduces to that of finding geodesics between level curves, which is accomplished using the theory of elastic shape analysis of 3D curves. The elastic framework allows for nonlinear matching between curves and between points across curves. The resulting geodesics between facial surfaces provide optimal elastic deformations between faces and an elastic metric for comparing facial shapes. We demonstrate this idea using examples from the FSU face database.

Journal ArticleDOI
TL;DR: This paper proposes ℒ2- and information-theory-based non-rigid registration algorithms that are exactly symmetric, introduces a method for removing asymmetry in numerical computations, and presents results of numerical experiments.
Abstract: This paper proposes ℒ2- and information-theory-based (IT) non-rigid registration algorithms that are exactly symmetric. Such algorithms pair the same points of two images after the images are swapped. Many commonly-used ℒ2 and IT non-rigid registration algorithms are only approximately symmetric. The asymmetry is due to the objective function as well as due to the numerical techniques used in discretizing and minimizing the objective function. This paper analyzes and provides techniques to eliminate both sources of asymmetry. This paper has five parts. The first part shows that objective function asymmetry is due to the use of standard differential volume forms on the domain of the images. The second part proposes alternate volume forms that completely eliminate objective function asymmetry. These forms, called graph-based volume forms, are naturally defined on the graph of the registration diffeomorphism f, rather than on the domain of f. When pulled back to the domain of f they involve the Jacobian J_f and therefore appear "non-standard". In the third part of the paper, graph-based volume forms are analyzed in terms of four key objective-function properties: symmetry, positive-definiteness, invariance, and lack of bias. Graph-based volume forms whose associated ℒ2 objective functions have the first three properties are completely classified. There is an infinite-dimensional space of such graph-based forms. But within this space, up to scalar multiple, there is a unique volume form whose associated ℒ2 objective function is unbiased. This volume form, which when pulled back to the domain of f is (1+det(J_f)) times the standard volume form on Euclidean space, is exactly the differential-geometrically natural volume form on the graph of f. The fourth part of the paper shows how the same volume form also makes the IT objective functions symmetric, positive semi-definite, invariant, and unbiased. The fifth part of the paper introduces a method for removing asymmetry in numerical computations and presents results of numerical experiments. The new objective functions and numerical method are tested on a coronal slice of a 3-D MRI brain image. Numerical experiments show that, even in the presence of noise, the new volume form and numerical techniques reduce asymmetry practically down to machine precision without compromising registration accuracy.
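A minimal numerical sketch of the unbiased volume form: weight the SSD integrand by (1 + det J_f) when discretizing the objective. Here f is a coordinate map of shape (2, H, W) and the names are ours; a real registration method would minimize this over f.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def symmetric_ssd(I1, I2, f):
    """SSD objective weighted by the graph volume form (1 + det J_f).
    f[k] holds the k-th output coordinate at every pixel of I1's domain."""
    warped = map_coordinates(I2, f, order=1, mode="nearest")   # I2 o f
    # Jacobian determinant of f from central differences
    f0y, f0x = np.gradient(f[0])
    f1y, f1x = np.gradient(f[1])
    detJ = f0y * f1x - f0x * f1y
    return ((I1 - warped) ** 2 * (1.0 + detJ)).sum()
```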

Journal ArticleDOI
TL;DR: Considering a set of images randomly warped from a mean template that has to be recovered, this paper defines an appropriate statistical parametric model for generating random diffeomorphic deformations in two dimensions and proposes a gradient descent algorithm to compute the resulting M-estimator.
Abstract: The problem of defining appropriate distances between shapes or images and modeling the variability of natural images by group transformations is at the heart of modern image analysis. A current trend is the study of probabilistic and statistical aspects of deformation models, and the development of consistent statistical procedure for the estimation of template images. In this paper, we consider a set of images randomly warped from a mean template which has to be recovered. For this, we define an appropriate statistical parametric model to generate random diffeomorphic deformations in two-dimensions. Then, we focus on the problem of estimating the mean pattern when the images are observed with noise. This problem is challenging both from a theoretical and a practical point of view. M-estimation theory enables us to build an estimator defined as a minimizer of a well-tailored empirical criterion. We prove the convergence of this estimator and propose a gradient descent algorithm to compute this M-estimator in practice. Simulations of template extraction and an application to image clustering and classification are also provided.

Journal ArticleDOI
TL;DR: A new method to segment high angular resolution diffusion imaging (HARDI) data is developed, using a region-based statistical surface evolution on an image of orientation distribution functions (ODFs) to efficiently find coherent white matter fiber bundles.
Abstract: In this article we develop a new method to segment high angular resolution diffusion imaging (HARDI) data. We first estimate the orientation distribution function (ODF) using a fast and robust spherical harmonic (SH) method. Then, we use a region-based statistical surface evolution on this image of ODFs to efficiently find coherent white matter fiber bundles. We show that our method is appropriate to propagate through regions of fiber crossings and we show that our results outperform state-of-the-art diffusion tensor (DT) imaging segmentation methods, inherently limited by the DT model. Results obtained on synthetic data, on a biological phantom, on real datasets and on all 13 subjects of a public NMR database show that our method is reproducible, automatic and brings a strong added value to diffusion MRI segmentation.
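The SH estimation step can be illustrated with a regularized least-squares fit of a real, symmetric spherical harmonic basis (the paper uses a specific fast and robust variant; the plain Tikhonov regularization below is a simplified stand-in).

```python
import numpy as np
from scipy.special import sph_harm

def real_sh_basis(order, theta, phi):
    """Real, symmetric spherical harmonic basis (even degrees up to `order`)
    evaluated at azimuth theta and polar angle phi."""
    cols = []
    for l in range(0, order + 1, 2):          # even degrees: antipodal symmetry
        for m in range(-l, l + 1):
            Y = sph_harm(abs(m), l, theta, phi)
            if m < 0:
                cols.append(np.sqrt(2) * Y.imag)
            elif m == 0:
                cols.append(Y.real)
            else:
                cols.append(np.sqrt(2) * Y.real)
    return np.column_stack(cols)

def fit_sh(signal, theta, phi, order=4, lam=1e-3):
    """Tikhonov-regularized least-squares SH fit of HARDI samples."""
    B = real_sh_basis(order, theta, phi)
    A = B.T @ B + lam * np.eye(B.shape[1])
    return np.linalg.solve(A, B.T @ signal)
```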

Journal ArticleDOI
TL;DR: This paper shows in particular how to recover the first fundamental form of the image embedded in an LSHT-space (Luminance, Saturation, Hue, Temperature) equipped with a metric tensor.
Abstract: The aim of this paper is to perform edge detection in color-infrared images from the point of view of Clifford algebras. The main idea is that such an image can be seen as a section of a Clifford bundle associated to the RGBT-space (Red, Green, Blue, Temperature) of acquisition. Working with geometric calculus and covariant derivatives of appropriate sections, with respect to well-chosen connections, makes it possible to extract the color and temperature information needed for segmentation. We show in particular how to recover the first fundamental form of the image embedded in an LSHT-space (Luminance, Saturation, Hue, Temperature) equipped with a metric tensor. We propose applications to color edge detection with some constraints on colors and to edge detection in color-infrared images with constraints on both colors and temperature. Other applications related to different choices of connections, sections and embedding spaces for nD images may be considered within this general theoretical framework.

Journal ArticleDOI
TL;DR: This article presents new results linking critical kernels to minimal non-simple sets (MNS) and P-simple points and shows that these two previously introduced notions can be retrieved, better understood and enriched in the framework of critical kernels.
Abstract: Critical kernels constitute a general framework in the category of abstract complexes for the study of parallel homotopic thinning in any dimension. In this article, we present new results linking critical kernels to minimal non-simple sets (MNS) and P-simple points, which are notions conceived to study parallel thinning in discrete grids. We show that these two previously introduced notions can be retrieved, better understood and enriched in the framework of critical kernels. In particular, we propose new characterizations which hold in dimensions 2, 3 and 4, and which lead to efficient algorithms for detecting P-simple points and minimal non-simple sets.

Journal ArticleDOI
TL;DR: It is proved that any subset of ℝ² parametrized by a C¹ periodic function and its derivative is the Euclidean invariant signature of a closed planar curve.
Abstract: We prove that any subset of ℝ² parametrized by a C¹ periodic function and its derivative is the Euclidean invariant signature of a closed planar curve. This solves a problem posed by Calabi et al. (Int. J. Comput. Vis. 26:107–135, 1998). Based on the proof of this result, we then develop some cautionary examples concerning the application of signature curves for object recognition and symmetry detection as proposed by Calabi et al.
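Numerically, the Euclidean signature {(κ, κ_s)} of a closed curve can be sampled with periodic finite differences; the scheme below (names ours) is invariant to the parameter spacing because the step size cancels in both ratios.

```python
import numpy as np

def euclidean_signature(x, y):
    """Sampled Euclidean signature (kappa, kappa_s) of a closed planar curve.
    Periodic central differences via np.roll; the grid spacing cancels."""
    d = lambda f: (np.roll(f, -1) - np.roll(f, 1)) / 2.0   # periodic derivative
    xp, yp = d(x), d(y)
    xpp, ypp = d(xp), d(yp)
    speed = np.hypot(xp, yp)
    kappa = (xp * ypp - yp * xpp) / speed ** 3             # signed curvature
    kappa_s = d(kappa) / speed                             # d kappa / d arclength
    return kappa, kappa_s

# Example: the signature of an ellipse is a closed loop in the (kappa, kappa_s) plane
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
kappa, kappa_s = euclidean_signature(2 * np.cos(t), np.sin(t))
```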

Journal ArticleDOI
TL;DR: The paper establishes the strong uniform consistency and the asymptotic distribution of the kernel density estimator for random objects on a Riemannian manifold proposed by Pelletier.
Abstract: The paper concerns the strong uniform consistency and the asymptotic distribution of the kernel density estimator for random objects on a Riemannian manifold proposed by Pelletier (Stat. Probab. Lett., 73(3):297–304, 2005). The estimator is illustrated with one example based on real data.
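On the circle S¹ the volume density function in Pelletier's estimator is identically 1, so the estimator reduces to a kernel density estimate with geodesic (angular) distance; a small sketch, with an Epanechnikov kernel and names of our choosing, follows.

```python
import numpy as np

def kde_circle(samples, query, h):
    """Pelletier-style kernel density estimate on the unit circle S^1.
    samples, query: angles in radians; h: bandwidth (in radians)."""
    # geodesic (angular) distance between query points and samples
    d = np.abs((query[:, None] - samples[None, :] + np.pi) % (2 * np.pi) - np.pi)
    u = d / h
    K = np.where(u <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)   # Epanechnikov kernel
    return K.sum(axis=1) / (len(samples) * h)

# Example: density of angles clustered around 0
rng = np.random.default_rng(1)
samples = rng.normal(0.0, 0.3, size=500) % (2 * np.pi)
grid = np.linspace(-np.pi, np.pi, 9)
print(kde_circle(samples, grid, h=0.4))
```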

Journal ArticleDOI
TL;DR: Numerical results illustrate the efficiency of the accelerated method and indicate that such a nonmonotone method is well suited to solving some large-scale inverse problems.
Abstract: The main aim of this paper is to accelerate the Chambolle gradient projection method for total variation image restoration. In the proposed method, we use the well-known Barzilai-Borwein stepsize instead of the constant stepsize in Chambolle's method. Further, we adopt the adaptive nonmonotone line search scheme proposed by Dai and Fletcher to guarantee the global convergence of the proposed method. Numerical results illustrate the efficiency of this method and indicate that such a nonmonotone method is more suitable for solving some large-scale inverse problems.
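The Barzilai-Borwein stepsize itself is just two inner products over successive iterates and gradients; a sketch of the BB1 rule follows (in the paper it replaces the constant stepsize inside Chambolle's iteration, safeguarded by the Dai-Fletcher nonmonotone line search, which is omitted here).

```python
import numpy as np

def bb1_stepsize(x_new, x_old, g_new, g_old):
    """First Barzilai-Borwein stepsize alpha = <s, s> / <s, y>,
    with s = x_k - x_{k-1} and y = grad_k - grad_{k-1}."""
    s = (x_new - x_old).ravel()
    y = (g_new - g_old).ravel()
    return float(s @ s) / float(s @ y)
```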

Journal ArticleDOI
TL;DR: The proposed edge detector selects those pieces of level surface which are well contrasted according to a statistical test, called the Helmholtz principle, and results in a good edge detector for a wide class of images, including several types of medical images from X-ray computed tomography and magnetic resonance.
Abstract: We propose a new edge detector for 3D gray-scale images, extending the 2D edge detector of Desolneux et al. (J. Math. Imaging Vis. 14(3):271–284, 2001). While the edges of a planar image are pieces of curve, the edges of a volumetric image are pieces of surface, which are more delicate to handle. The proposed edge detector works by selecting those pieces of level surface which are well contrasted according to a statistical test called the Helmholtz principle. As it is infeasible to treat all the possible pieces of each level surface, we restrict the search to the regions that result from optimizing the Mumford-Shah functional of the gradient over the surface, throughout all scales. We assert that this selection device results in a good edge detector for a wide class of images, including several types of medical images from X-ray computed tomography and magnetic resonance.

Journal ArticleDOI
TL;DR: A definition of TV regularization for vector-valued data, derived from the Hodge decomposition of image flows, extends the standard componentwise definition in a natural way and leads to a convex decomposition of arbitrary vector fields into piecewise harmonic fields, rather than piecewise constant ones, and motion texture.
Abstract: The total variation (TV) measure is a key concept in the field of variational image analysis. In this paper, we focus on vector-valued data and derive from the Hodge decomposition of image flows a definition of TV regularization for vector-valued data that extends the standard componentwise definition in a natural way. We show that our approach leads to a convex decomposition of arbitrary vector fields, providing a richer decomposition into piecewise harmonic fields rather than piecewise constant ones, and motion texture. Furthermore, our regularizer provides a measure for motion boundaries of piecewise harmonic image flows in the same way, as the TV measure does for contours of scalar-valued piecewise constant images.

Journal ArticleDOI
TL;DR: The aim of this article is to recall the applications of the topological asymptotic expansion to major image processing problems, and its historical application to the crack localization problem from boundary measurements.
Abstract: The aim of this article is to recall the applications of the topological asymptotic expansion to major image processing problems. We briefly review the topological asymptotic analysis, and then present its historical application to the crack localization problem from boundary measurements. A very natural application of this technique in image processing is the inpainting problem, which can be solved by identifying the optimal localization of the missing edges. A second natural application is image restoration or enhancement. The identification of the main edges of the image allows us to preserve them and smooth the image outside the edges. If the conductivity outside the edges goes to infinity, the regularized image is piecewise constant and provides a natural solution to the segmentation problem. The numerical results presented for each application are very promising. Finally, we must mention that all these problems are solved with $\mathcal{O}(n\log n)$ complexity.

Journal ArticleDOI
TL;DR: A new approach to modeling 2D surfaces and 3D volumetric data, together with an approach for non-rigid registration of sphere-based models, is presented; both are developed in the geometric algebra framework.
Abstract: We present a new approach to model 2D surfaces and 3D volumetric data, as well as an approach for non-rigid registration; both are developed in the geometric algebra framework. The modeling approach follows the marching cubes idea, but uses spheres and their representation in conformal geometric algebra; we call it marching spheres. Before the modeling can proceed, the object of interest must be segmented; we therefore include an approach for image segmentation based on texture and border information, developed in a region-growing strategy. We compare the results obtained with our modeling approach against those obtained with an approach using Delaunay tetrahedrization, and our approach considerably reduces the number of spheres. Afterward, a method for non-rigid registration of sphere-based models is presented. Registration is done in an annealing scheme, as in the Thin-Plate Spline Robust Point Matching (TPS-RPM) algorithm. As a final application of geometric algebra, we track objects involved in surgical procedures in real time.

Journal ArticleDOI
TL;DR: This paper shows that usual fuzzy connectivity definitions have some drawbacks, and proposes a new definition that exhibits better properties, in particular in terms of continuity, which leads to a nested family of hyperconnections associated with a tolerance parameter.
Abstract: Fuzzy set theory constitutes a powerful representation framework that can lead to more robustness in problems such as image segmentation and recognition. This robustness results to some extent from the partial recovery of the continuity that is lost during digitization. In this paper we deal with connectivity measures on fuzzy sets. We show that usual fuzzy connectivity definitions have some drawbacks, and we propose a new definition that exhibits better properties, in particular in terms of continuity. This definition leads to a nested family of hyperconnections associated with a tolerance parameter. We show that corresponding connected components can be efficiently extracted using simple operations on a max-tree representation. Then we define attribute openings based on crisp or fuzzy criteria. We illustrate a potential use of these filters in a brain segmentation and recognition process.
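For contrast, the classical degree of connectedness that the paper improves upon, the maximum over paths of the minimum membership along the path, can be computed with a maximin variant of Dijkstra's algorithm (sketch and names ours).

```python
import heapq
import numpy as np

def fuzzy_connectivity(mu, p, q):
    """Classical (Rosenfeld-style) degree of connectedness between pixels p
    and q in a fuzzy set mu (membership values in [0, 1]): the maximum over
    paths of the minimum membership along the path."""
    h, w = mu.shape
    best = np.zeros((h, w))
    best[p] = mu[p]
    heap = [(-best[p], p)]                  # max-heap via negated values
    while heap:
        neg_c, (i, j) = heapq.heappop(heap)
        c = -neg_c
        if (i, j) == q:
            return c                        # first pop of q is optimal
        if c < best[i, j]:
            continue                        # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                nc = min(c, mu[ni, nj])     # maximin path cost update
                if nc > best[ni, nj]:
                    best[ni, nj] = nc
                    heapq.heappush(heap, (-nc, (ni, nj)))
    return best[q]
```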