
Showing papers in "Journal of Mathematical Imaging and Vision in 2014"


Journal ArticleDOI
TL;DR: The numerical discussion confirms that the proposed higher-order model competes with models of its kind in avoiding the creation of undesirable artifacts and blocky-like structures in the reconstructed images—a known disadvantage of the ROF model—while being simple and efficiently numerically solvable.
Abstract: In this paper we study a variational problem in the space of functions of bounded Hessian. Our model constitutes a straightforward higher-order extension of the well known ROF functional (total variation minimisation) to which we add a non-smooth second order regulariser. It combines convex functions of the total variation and the total variation of the first derivatives. In what follows, we prove existence and uniqueness of minimisers of the combined model and present the numerical solution of the corresponding discretised problem by employing the split Bregman method. The paper is furnished with applications of our model to image denoising, deblurring as well as image inpainting. The obtained numerical results are compared with results obtained from total generalised variation (TGV), infimal convolution and Euler's elastica, three other state of the art higher-order models. The numerical discussion confirms that the proposed higher-order model competes with models of its kind in avoiding the creation of undesirable artifacts and blocky-like structures in the reconstructed images--a known disadvantage of the ROF model--while being simple and efficiently numerically solvable.
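To make the combined first- and second-order regularization concrete, here is a minimal illustrative sketch in Python: gradient descent on a smoothed (eps-regularized) version of an energy of the form 0.5*||u - f||^2 + alpha*TV(u) + beta*TV(grad u). The function name, parameters and the smoothing are assumptions made for illustration; the paper solves the exact non-smooth functional with the split Bregman method.

```python
import numpy as np

def tv_tv2_denoise(f, alpha=0.1, beta=0.1, eps=1e-3, tau=0.05, iters=300):
    """Toy gradient descent on a smoothed first+second order TV energy
    E(u) = 0.5*||u - f||^2 + alpha*TV_eps(u) + beta*TV_eps(grad u).
    Illustrative only; not the split Bregman solver used in the paper."""
    u = f.astype(float).copy()
    for _ in range(iters):
        ux, uy = np.gradient(u)
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        # first-order term: div(grad u / |grad u|)
        div1 = np.gradient(ux / mag, axis=0) + np.gradient(uy / mag, axis=1)
        uxx, uxy = np.gradient(ux)
        uyx, uyy = np.gradient(uy)
        hmag = np.sqrt(uxx**2 + uxy**2 + uyx**2 + uyy**2 + eps**2)
        # second-order term: sum_ij d_i d_j (H_ij / |H|)
        div2 = (np.gradient(np.gradient(uxx / hmag, axis=0), axis=0)
                + np.gradient(np.gradient(uxy / hmag, axis=1), axis=0)
                + np.gradient(np.gradient(uyx / hmag, axis=0), axis=1)
                + np.gradient(np.gradient(uyy / hmag, axis=1), axis=1))
        grad_E = (u - f) - alpha * div1 + beta * div2
        u -= tau * grad_E
    return u
```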

292 citations


Journal ArticleDOI
TL;DR: A novel denoising algorithm for photon-limited images which combines elements of dictionary learning and sparse patch-based representations of images and reveals that, despite its conceptual simplicity, Poisson PCA-based Denoising appears to be highly competitive in very low light regimes.
Abstract: Photon-limited imaging arises when the number of photons collected by a sensor array is small relative to the number of detector elements. Photon limitations are an important concern for many applications such as spectral imaging, night vision, nuclear medicine, and astronomy. Typically a Poisson distribution is used to model these observations, and the inherent heteroscedasticity of the data combined with standard noise removal methods yields significant artifacts. This paper introduces a novel denoising algorithm for photon-limited images which combines elements of dictionary learning and sparse patch-based representations of images. The method employs both an adaptation of Principal Component Analysis (PCA) for Poisson noise and recently developed sparsity-regularized convex optimization algorithms for photon-limited images. A comprehensive empirical evaluation of the proposed method helps characterize the performance of this approach relative to other state-of-the-art denoising methods. The results reveal that, despite its conceptual simplicity, Poisson PCA-based denoising appears to be highly competitive in very low light regimes.
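As a rough illustration of the patch-based PCA idea described above, the following sketch projects overlapping patches onto their top principal components and averages the reconstructions. It is a plain Gaussian-PCA stand-in with assumed parameter names; the paper instead adapts PCA to the Poisson likelihood and adds sparsity-regularized optimization.

```python
import numpy as np

def patch_pca_denoise(img, patch=8, keep=8):
    """Simplified patch-PCA denoiser: project overlapping patches onto their
    top principal components and average the overlapping reconstructions.
    Illustrative stand-in only, not the Poisson PCA method of the paper."""
    H, W = img.shape
    ys = np.arange(0, H - patch + 1, 4)
    xs = np.arange(0, W - patch + 1, 4)
    P = np.array([img[y:y+patch, x:x+patch].ravel() for y in ys for x in xs])
    mean = P.mean(axis=0)
    U, S, Vt = np.linalg.svd(P - mean, full_matrices=False)
    P_hat = (U[:, :keep] * S[:keep]) @ Vt[:keep] + mean
    out = np.zeros_like(img, dtype=float)
    wgt = np.zeros_like(img, dtype=float)
    k = 0
    for y in ys:
        for x in xs:
            out[y:y+patch, x:x+patch] += P_hat[k].reshape(patch, patch)
            wgt[y:y+patch, x:x+patch] += 1.0
            k += 1
    return out / np.maximum(wgt, 1.0)
```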

289 citations


Journal ArticleDOI
TL;DR: This article provides an overview of various notions of shape spaces, including the space of parametrized and unparametrized curves, the space of immersions, the diffeomorphism group and the space of Riemannian metrics.
Abstract: This article provides an overview of various notions of shape spaces, including the space of parametrized and unparametrized curves, the space of immersions, the diffeomorphism group and the space of Riemannian metrics. We discuss the Riemannian metrics that can be defined thereon, and what is known about the properties of these metrics. We put particular emphasis on the induced geodesic distance, the geodesic equation and its well-posedness, geodesic and metric completeness and properties of the curvature.

177 citations


Journal ArticleDOI
TL;DR: The method runs fully automatically and provides a detailed model of the retinal vasculature, which is crucial as a sound basis for further quantitative analysis of the retina, especially in screening applications.
Abstract: This paper presents a method for retinal vasculature extraction based on biologically inspired multi-orientation analysis. We apply multi-orientation analysis via so-called invertible orientation scores, modeling the cortical columns in the visual system of higher mammals. This allows us to generically deal with many hitherto complex problems inherent to vessel tracking, such as crossings, bifurcations, parallel vessels, vessels of varying widths and vessels with high curvature. Our approach applies tracking in invertible orientation scores via a novel geometrical principle for curve optimization in the Euclidean motion group SE(2). The method runs fully automatically and provides a detailed model of the retinal vasculature, which is crucial as a sound basis for further quantitative analysis of the retina, especially in screening applications.

121 citations


Journal ArticleDOI
TL;DR: The results show that Riemannian polynomials provide a practical model for parametric curve regression, while offering increased flexibility over geodesics.
Abstract: We develop a framework for polynomial regression on Riemannian manifolds. Unlike recently developed spline models on Riemannian manifolds, Riemannian polynomials offer the ability to model parametric polynomials of all integer orders, odd and even. An intrinsic adjoint method is employed to compute variations of the matching functional, and polynomial regression is accomplished using a gradient-based optimization scheme. We apply our polynomial regression framework in the context of shape analysis in Kendall shape space as well as in diffeomorphic landmark space. Our algorithm is shown to be particularly convenient in Riemannian manifolds with additional symmetry, such as Lie groups and homogeneous spaces with right or left invariant metrics. As a particularly important example, we also apply polynomial regression to time-series imaging data using a right invariant Sobolev metric on the diffeomorphism group. The results show that Riemannian polynomials provide a practical model for parametric curve regression, while offering increased flexibility over geodesics.
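For reference, one standard way to state the defining condition of an order-k Riemannian polynomial, consistent with the abstract (geodesics are the k = 1 case), is the following; the exact formulation and the adjoint equations used for regression are in the paper.

```latex
% gamma is a curve on the manifold M, nabla the Levi-Civita connection.
\gamma : [0, 1] \to M, \qquad
\bigl(\nabla_{\dot{\gamma}(t)}\bigr)^{k}\, \dot{\gamma}(t) = 0
\quad \text{for all } t .
% k = 1 recovers geodesics; higher k gives curves with polynomial-like flexibility.
```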

92 citations


Journal ArticleDOI
TL;DR: This paper investigates the convergence behavior of a primal-dual splitting method for solving monotone inclusions involving mixtures of composite, Lipschitzian and parallel sum type operators proposed by Combettes and Pesquet and proposes two new schemes which accelerate the sequences of primal and/or dual iterates.
Abstract: In this paper we investigate the convergence behavior of a primal-dual splitting method for solving monotone inclusions involving mixtures of composite, Lipschitzian and parallel sum type operators proposed by Combettes and Pesquet (in Set-Valued Var. Anal. 20(2):307---330, 2012). Firstly, in the particular case of convex minimization problems, we derive convergence rates for the partial primal-dual gap function associated to a primal-dual pair of optimization problems by making use of conjugate duality techniques. Secondly, we propose for the general monotone inclusion problem two new schemes which accelerate the sequences of primal and/or dual iterates, provided strong monotonicity assumptions for some of the involved operators are fulfilled. Finally, we apply the theoretical achievements in the context of different types of image restoration problems solved via total variation regularization.
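For orientation, the sketch below shows a generic primal-dual iteration for TV-regularized denoising, the kind of primal-dual pair whose partial gap function the convergence rates refer to. The discretization, step sizes and function names are assumptions; the paper analyzes the Combettes-Pesquet splitting and its accelerated variants, not this exact scheme.

```python
import numpy as np

def grad(u):
    """Forward differences with periodic boundary conditions."""
    return np.roll(u, -1, axis=0) - u, np.roll(u, -1, axis=1) - u

def div(px, py):
    """Negative adjoint of grad (backward differences, periodic)."""
    return (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))

def primal_dual_tv(f, lam=10.0, tau=0.25, sigma=0.25, iters=200):
    """Generic primal-dual iteration for TV-regularized denoising (ROF),
    shown only to illustrate the primal-dual gap being analyzed."""
    u = f.copy(); ubar = f.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(iters):
        gx, gy = grad(ubar)
        px, py = px + sigma * gx, py + sigma * gy
        norm = np.maximum(1.0, np.sqrt(px**2 + py**2))
        px, py = px / norm, py / norm            # project dual variable onto the unit ball
        u_old = u
        u = (u + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
        ubar = 2 * u - u_old                     # extrapolation step
    return u
```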

88 citations


Journal ArticleDOI
TL;DR: In this paper, a new set of low-rank recovery algorithms for linear inverse problems within the class of hard thresholding methods is presented and analyzed, together with strategies on how to set up these algorithms via basic ingredients for different configurations to achieve complexity vs. accuracy tradeoffs.
Abstract: In this paper, we present and analyze a new set of low-rank recovery algorithms for linear inverse problems within the class of hard thresholding methods. We provide strategies on how to set up these algorithms via basic ingredients for different configurations to achieve complexity vs. accuracy tradeoffs. Moreover, we study acceleration schemes via memory-based techniques and randomized, ε-approximate matrix projections to decrease the computational costs in the recovery process. For most of the configurations, we present theoretical analysis that guarantees convergence under mild problem conditions. Simulation results demonstrate notable performance improvements as compared to state-of-the-art algorithms both in terms of reconstruction accuracy and computational complexity.
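A minimal member of the hard thresholding class referred to above is matrix iterative hard thresholding: a gradient step on the data fit followed by projection onto the rank-r set via a truncated SVD. The sketch below is illustrative, with assumed arguments A and At standing in for the forward operator and its adjoint; the accelerated, memory-based and randomized-projection variants proposed in the paper are not reproduced here.

```python
import numpy as np

def svd_hard_threshold(X, r):
    """Project X onto the set of matrices with rank at most r."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def low_rank_iht(y, A, At, shape, rank, step=1.0, iters=100):
    """Basic matrix iterative hard thresholding for y ~ A(X), rank(X) <= rank:
    gradient step on the least-squares fit, then truncated-SVD projection.
    A and At are user-supplied linear maps (forward operator and adjoint)."""
    X = np.zeros(shape)
    for _ in range(iters):
        X = svd_hard_threshold(X - step * At(A(X) - y), rank)
    return X
```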

76 citations


Journal ArticleDOI
TL;DR: S-parametrization simplifies the exponential map, the curvature formulas, the cusp-surface, and the boundary value problem, and shows that sub-Riemannian geodesics solve Petitot’s circle bundle model.
Abstract: To model association fields that underlie perceptual organization (gestalt) in psychophysics we consider the problem $\mathbf{P}_{\mathrm{curve}}$ of minimizing $\int_{0}^{\ell} \sqrt{\xi^{2}+\kappa^{2}(s)}\,\mathrm{d}s$ for a planar curve having fixed initial and final positions and directions. Here $\kappa(s)$ is the curvature of the curve with free total length $\ell$. This problem comes from a model of geometry of vision due to Petitot (in J. Physiol. Paris 97:265---309, 2003; Math. Inf. Sci. Humaines 145:5---101, 1999), and Citti & Sarti (in J. Math. Imaging Vis. 24(3):307---326, 2006). In previous work we proved that the range $\mathcal{R} \subset \mathrm{SE}(2)$ of the exponential map of the underlying geometric problem formulated on SE(2) consists of precisely those end conditions $(x_{\mathrm{fin}}, y_{\mathrm{fin}}, \theta_{\mathrm{fin}})$ that can be connected by a globally minimizing geodesic starting at the origin $(x_{\mathrm{in}}, y_{\mathrm{in}}, \theta_{\mathrm{in}}) = (0,0,0)$. From the applied imaging point of view it is relevant to analyze the sub-Riemannian geodesics and $\mathcal{R}$ in detail. In this article we

75 citations


Journal ArticleDOI
TL;DR: This paper extends the main result of Szlam and Bresson, which introduced an exact ℓ1 relaxation of the Cheeger ratio cut problem for unsupervised data classification, and deals with the multi-class transductive learning problem, which consists in learning several classes with a set of labels for each class.
Abstract: Recent advances in ℓ1 optimization for imaging problems provide promising tools to solve the fundamental high-dimensional data classification problem in machine learning. In this paper, we extend the main result of Szlam and Bresson (Proceedings of the 27th International Conference on Machine Learning, pp. 1039---1046, 2010), which introduced an exact ℓ1 relaxation of the Cheeger ratio cut problem for unsupervised data classification. The proposed extension deals with the multi-class transductive learning problem, which consists in learning several classes with a set of labels for each class. Learning several classes (i.e. more than two classes) simultaneously is generally a challenging problem, but the proposed method builds on strong results introduced in imaging to overcome the multi-class issue. Besides, the proposed multi-class transductive learning algorithms also benefit from recent fast ℓ1 solvers, specifically designed for the total variation norm, which plays a central role in our approach. Finally, experiments demonstrate that the proposed ℓ1 relaxation algorithms are more accurate and robust than standard ℓ2 relaxation methods such as spectral clustering, particularly when considering a very small number of labels for each class to be classified. For instance, the mean classification error on the benchmark MNIST dataset of 60,000 data points in $\mathbb{R}^{784}$ using the proposed ℓ1 relaxation of the multi-class Cheeger cut is 2.4% when only one label is considered for each class, while the classification error of the ℓ2 relaxation method of spectral clustering is 24.7%.
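For context, here is a hedged sketch of the binary Cheeger ratio cut and the exact ℓ1 relaxation of Szlam and Bresson that the paper extends to the multi-class transductive setting; $w_{ij}$ denotes the edge weights of the data graph, and the multi-class, label-constrained formulation is in the paper.

```latex
C(S) \;=\; \frac{\operatorname{Cut}(S, S^{c})}{\min\bigl(|S|,\, |S^{c}|\bigr)},
\qquad
\min_{f \,\text{non-constant}} \;
\frac{\sum_{i,j} w_{ij}\, \lvert f_i - f_j \rvert}
     {\bigl\lVert f - \operatorname{median}(f) \bigr\rVert_{1}} .
```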

65 citations


Journal ArticleDOI
TL;DR: A robust and feature-capturing surface reconstruction and simplification method that turns an input point set into a low triangle-count simplicial complex is introduced and is shown to exhibit both robustness to noise and outliers, as well as preservation of sharp features and boundaries.
Abstract: We introduce a robust and feature-capturing surface reconstruction and simplification method that turns an input point set into a low triangle-count simplicial complex. Our approach starts with a (possibly non-manifold) simplicial complex filtered from a 3D Delaunay triangulation of the input points. This initial approximation is iteratively simplified based on an error metric that measures, through optimal transport, the distance between the input points and the current simplicial complex--both seen as mass distributions. Our approach is shown to exhibit both robustness to noise and outliers, as well as preservation of sharp features and boundaries. Our new feature-sensitive metric between point sets and triangle meshes can also be used as a post-processing tool that, from the smooth output of a reconstruction method, recovers sharp features and boundaries present in the initial point set.

63 citations


Journal ArticleDOI
TL;DR: A new algorithm, the pole ladder, is proposed in which one diagonal of the parallelogram is the baseline-to-reference frame geodesic; this drastically reduces the number of geodesics to compute, and results suggest that an important gain in sensitivity could be expected in group-wise comparisons.
Abstract: Group-wise analysis of time series of images requires to compare longitudinal evolutions of images observed on different subjects. In medical imaging, longitudinal anatomical changes can be modeled thanks to non-rigid registration of follow-up images. The comparison of longitudinal trajectories requires the transport (or "normalization") of longitudinal deformations in a common reference frame. We previously proposed an effective computational scheme based on the Schild's ladder for the parallel transport of diffeomorphic deformations parameterized by tangent velocity fields, based on the construction of a geodesic parallelogram on a manifold. Schild's ladder may be however inefficient for transporting longitudinal deformations from image time series of multiple time points, in which the computation of the geodesic diagonals is required several times. We propose here a new algorithm, the pole ladder, in which one diagonal of the parallelogram is the baseline-to-reference frame geodesic. This drastically reduces the number of geodesics to compute. Moreover, differently from the Schild's ladder, the pole ladder is symmetric with respect to the baseline-to-reference frame geodesic. From the theoretical point of view, we show that the pole ladder is rigorously equivalent to the Schild's ladder when transporting along geodesics. From the practical point of view, we establish the computational advantages and demonstrate the effectiveness of this very simple method by comparing with standard methods of transport on simulated images with progressing brain atrophy. Finally, we illustrate its application to a clinical problem: the measurement of the longitudinal progression in Alzheimer's disease. Results suggest that an important gain in sensitivity could be expected in group-wise comparisons.
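A hedged sketch of a single pole ladder step on an abstract manifold: the longitudinal deformation, encoded as a tangent vector v at the baseline point, is reflected through the midpoint of the baseline-to-reference geodesic. The exp and log arguments are assumed user-supplied exponential and logarithm maps; the paper works with diffeomorphisms parameterized by tangent velocity fields rather than this toy point representation.

```python
def pole_ladder_step(exp, log, p0, p_ref, v):
    """Transport tangent vector v (a deformation at baseline p0) to p_ref.
    exp(p, w): point reached from p with initial velocity w.
    log(p, q): initial velocity of the geodesic from p to q."""
    mid = exp(p0, 0.5 * log(p0, p_ref))   # midpoint of the baseline-to-reference geodesic
    q = exp(p0, v)                        # follow-up point reached from the baseline
    q_star = exp(mid, -log(mid, q))       # reflect q through the midpoint (the "pole")
    return -log(p_ref, q_star)            # transported deformation expressed at p_ref
```

In the flat Euclidean case this returns exactly v, which is the expected behaviour of parallel transport when there is no curvature.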

Journal ArticleDOI
TL;DR: This work proposes a novel skeleton-based approach to gait recognition using the screened Poisson equation to construct a family of smooth distance functions associated with a given shape, and demonstrates how the Skeleton Variance Image is a powerful gait cycle descriptor leading to a significant improvement over the existing state-of-the-art gait recognition rate.
Abstract: We propose a novel skeleton-based approach to gait recognition using our Skeleton Variance Image. The core of our approach consists of employing the screened Poisson equation to construct a family of smooth distance functions associated with a given shape. The screened Poisson distance function approximation nicely absorbs and is relatively stable to shape boundary perturbations which allows us to define a rough shape skeleton. We demonstrate how our Skeleton Variance Image is a powerful gait cycle descriptor leading to a significant improvement over the existing state of the art gait recognition rate.
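A hedged sketch of the screened Poisson ingredient: solve (I - alpha * Laplacian) u = indicator(shape) on the pixel grid and turn u into a smooth distance-like function via a log transform. The parameter alpha, the indicator right-hand side and the Varadhan-style transform below are assumptions made for illustration; the authors' exact construction of the distance family and of the Skeleton Variance Image is in the paper.

```python
import numpy as np
from scipy.sparse import identity, kron, diags
from scipy.sparse.linalg import spsolve

def screened_poisson_distance(mask, alpha=100.0):
    """Smooth distance-function surrogate from a binary shape mask by solving
    a screened Poisson equation on the grid; illustrative sketch only."""
    H, W = mask.shape
    def lap1d(n):
        return diags([1, -2, 1], [-1, 0, 1], shape=(n, n))
    L = kron(identity(H), lap1d(W)) + kron(lap1d(H), identity(W))  # 2D Laplacian
    A = identity(H * W) - alpha * L
    u = spsolve(A.tocsc(), mask.astype(float).ravel()).reshape(H, W)
    u = np.clip(u, 1e-12, None)
    return -np.sqrt(alpha) * np.log(u)   # assumed log transform; larger away from the shape
```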

Journal ArticleDOI
TL;DR: A method for automatically solving apictorial jigsaw puzzles that is based on an extension of the method of differential invariant signatures, designed to solve challenging puzzles, without having to impose any restrictive assumptions on the shape of the puzzle, the shapes of the individual pieces, or their intrinsic arrangement.
Abstract: We present a method for automatically solving apictorial jigsaw puzzles that is based on an extension of the method of differential invariant signatures. Our algorithms are designed to solve challenging puzzles, without having to impose any restrictive assumptions on the shape of the puzzle, the shapes of the individual pieces, or their intrinsic arrangement. As a demonstration, the method was successfully used to solve two commercially available puzzles. Finally we perform some preliminary investigations into scalability of the algorithm for even larger puzzles.

Journal ArticleDOI
TL;DR: In this article, a geometrical model of functional architecture for the processing of spatio-temporal visual stimuli is developed, which arises from the properties of the receptive field linear dynamics of orientation and speed-selective cells in the visual cortex, embedded in the definition of a geometry where the connectivity between points is driven by the contact structure of a 5D manifold.
Abstract: In this paper we develop a geometrical model of functional architecture for the processing of spatio-temporal visual stimuli. The model arises from the properties of the receptive field linear dynamics of orientation and speed-selective cells in the visual cortex, that can be embedded in the definition of a geometry where the connectivity between points is driven by the contact structure of a 5D manifold. Then, we compute the stochastic kernels that are the approximations of two Fokker Planck operators associated to the geometry, and implement them as facilitation patterns within a neural population activity model, in order to reproduce some psychophysiological findings about the perception of contours in motion and trajectories of points found in the literature.

Journal ArticleDOI
TL;DR: This work provides a version of the backwards approach that uses a “nested sequence of relations” to define the decreasing sequences of subspaces, which need not be geodesic; these are frequently more tractable and overcome difficulties with using geodesics.
Abstract: In non-Euclidean data spaces represented by manifolds (or more generally stratified spaces), analogs of principal component analysis can be more easily developed using a backwards approach. There has been a gradual evolution in the application of this idea from using increasing geodesic subspaces of submanifolds in analogy with PCA to using a "backward sequence" of a decreasing family of subspaces. We provide a version of the backwards approach by using a "nested sequence of relations" which define the decreasing sequences of subspaces which need not be geodesic. Because these are naturally inductively added in a backward sequence, they are frequently more tractable and overcome difficulties with using geodesics.

Journal ArticleDOI
TL;DR: It is proved that AD-LBR is in 2D asymptotically equivalent to a finite element discretization on an anisotropic Delaunay triangulation, a procedure more involved and computationally expensive, and benefits from the theoretical guarantees of this procedure, for a fraction of its cost.
Abstract: We introduce a new discretization scheme for Anisotropic Diffusion, AD-LBR, on two and three dimensional Cartesian grids. The main features of this scheme are that it is non-negative and has sparse stencils, of cardinality bounded by 6 in 2D and by 12 in 3D, despite allowing diffusion tensors of arbitrary anisotropy. The radius of these stencils is not a priori bounded, however, and can be quite large for pronounced anisotropies. Our scheme also has good spectral properties, which permits larger time steps and avoids e.g. chessboard artifacts. AD-LBR relies on Lattice Basis Reduction, a tool from discrete mathematics which has recently shown its relevance for the discretization on grids of strongly anisotropic Partial Differential Equations (Mirebeau in Preprint, 2012). We prove that AD-LBR is in 2D asymptotically equivalent to a finite element discretization on an anisotropic Delaunay triangulation, a procedure more involved and computationally expensive. Our scheme thus benefits from the theoretical guarantees of this procedure, for a fraction of its cost. Numerical experiments in 2D and 3D illustrate our results.
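The 2D lattice-basis-reduction ingredient can be sketched as a Lagrange/Gauss reduction of the canonical grid basis with respect to the inner product induced by the diffusion tensor D, as below. Deriving the non-negative stencil weights from the resulting directions (via the obtuse superbase), which is the actual core of AD-LBR, is omitted here.

```python
import numpy as np

def reduced_basis_2d(D):
    """Lagrange/Gauss reduction of the grid basis under <u, v>_D = u.D.v.
    Returns two short, nearly D-orthogonal integer lattice directions."""
    e0, e1 = np.array([1, 0]), np.array([0, 1])
    dot = lambda u, v: float(u @ D @ v)
    if dot(e0, e0) > dot(e1, e1):
        e0, e1 = e1, e0
    while True:
        e1 = e1 - int(round(dot(e0, e1) / dot(e0, e0))) * e0
        if dot(e1, e1) >= dot(e0, e0):
            return e0, e1
        e0, e1 = e1, e0
```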

Journal ArticleDOI
TL;DR: A comparative analysis of these errors on the accurate computation of the three major ORIMs is performed: Zernike moments (ZMs), Pseudo Zernike moments (PZMs) and orthogonal Fourier-Mellin moments (OFMMs).
Abstract: Orthogonal rotation invariant moments (ORIMs) are among the best region-based shape descriptors. Being orthogonal and complete, they possess minimum information redundancy. The magnitude of moments is invariant to rotation and reflection, and with some geometric transformations they can be made translation and scale invariant. Apart from these characteristics, they are robust to image noise. These characteristics of ORIMs make them suitable for many pattern recognition and image processing applications. Despite these characteristics, ORIMs suffer from many digitization errors and are thus incapable of representing subtle details in images, especially at high orders of moments. Among the various errors, the image discretization error and the geometric and numerical integration errors are the most prominent ones. This paper investigates the contribution and effects of these errors on the characteristics of ORIMs and performs a comparative analysis of these errors on the accurate computation of the three major ORIMs: Zernike moments (ZMs), Pseudo Zernike moments (PZMs) and orthogonal Fourier-Mellin moments (OFMMs). Detailed experimental analysis reveals some interesting results on the performance of these moments.
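To make the integration-error discussion concrete, here is a naive computation of a single Zernike moment by zeroth-order (pixel-centre) quadrature over the unit disk; this simple scheme is precisely the kind of geometric and numerical-integration approximation whose errors the paper analyzes. The function name and normalization details are assumptions for illustration.

```python
import numpy as np
from math import factorial

def zernike_moment(img, n, m):
    """Naive Zernike moment Z_{n,m} of a square image (assumes n - |m| even
    and |m| <= n) using pixel-centre sampling over the unit disk."""
    N = img.shape[0]
    ys, xs = np.mgrid[0:N, 0:N]
    x = (2 * xs - N + 1) / N
    y = (2 * ys - N + 1) / N
    r = np.sqrt(x**2 + y**2)
    theta = np.arctan2(y, x)
    inside = r <= 1.0
    R = np.zeros_like(r)
    for s in range((n - abs(m)) // 2 + 1):   # radial polynomial R_{n,|m|}(r)
        c = ((-1)**s * factorial(n - s)
             / (factorial(s) * factorial((n + abs(m)) // 2 - s)
                * factorial((n - abs(m)) // 2 - s)))
        R += c * r**(n - 2 * s)
    V_conj = R * np.exp(-1j * m * theta)     # conjugate of the Zernike basis function
    return (n + 1) / np.pi * np.sum(img[inside] * V_conj[inside]) * (2.0 / N)**2
```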

Journal ArticleDOI
TL;DR: In this paper, the authors provide a comprehensive analysis in the continuum domain utilizing the concept of clustered sparsity, which besides leading to asymptotic error bounds also makes the superior behavior of directional representation systems over wavelets precise.
Abstract: Recently, compressed sensing techniques in combination with both wavelet and directional representation systems have been very effectively applied to the problem of image inpainting. However, a mathematical analysis of these techniques which reveals the underlying geometrical content is missing. In this paper, we provide the first comprehensive analysis in the continuum domain utilizing the novel concept of clustered sparsity, which besides leading to asymptotic error bounds also makes the superior behavior of directional representation systems over wavelets precise. First, we propose an abstract model for problems of data recovery and derive error bounds for two different recovery schemes, namely ℓ1 minimization and thresholding. Second, we set up a particular microlocal model for an image governed by edges inspired by seismic data as well as a particular mask to model the missing data, namely a linear singularity masked by a horizontal strip. Applying the abstract estimate in the case of wavelets and of shearlets we prove that--provided the size of the missing part is asymptotic to the size of the analyzing functions--asymptotically precise inpainting can be obtained for this model. Finally, we show that shearlets can fill strictly larger gaps than wavelets in this model.
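For reference, a hedged sketch of the ℓ1 analysis-minimization recovery scheme referred to above: recover the image from the data observed outside the masked region M by minimizing the ℓ1 norm of its analysis coefficients (wavelet or shearlet), subject to agreement on the observed part. The operator notation is an assumption; the precise model and the thresholding scheme are in the paper.

```latex
% Phi: analysis operator (wavelet or shearlet frame), P_{M^c}: restriction
% to the observed (unmasked) region, x_0: original image.
\hat{x} \;=\; \operatorname*{arg\,min}_{x} \; \lVert \Phi x \rVert_{1}
\quad \text{subject to} \quad P_{M^{c}}\, x \;=\; P_{M^{c}}\, x_{0} .
```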

Journal ArticleDOI
TL;DR: In this article, the authors develop and study two conceptually new ways to define convolution products for hypercomplex Fourier transforms, which will enable the development and fast implementation of new filters for quaternionic signals and systems, as well as for their higher dimensional counterparts.
Abstract: Hypercomplex Fourier transforms are increasingly used in signal processing for the analysis of higher-dimensional signals such as color images. A main stumbling block for further applications, in particular concerning filter design in the Fourier domain, is the lack of a proper convolution theorem. The present paper develops and studies two conceptually new ways to define convolution products for such transforms. As a by-product, convolution theorems are obtained that will enable the development and fast implementation of new filters for quaternionic signals and systems, as well as for their higher dimensional counterparts.

Journal ArticleDOI
TL;DR: This paper introduces a statistical dynamic model for the generation of turbulence based on linear dynamical systems (LDS) and expands the model to include the unknown image as part of the unobserved state and applies Kalman filtering to estimate such state.
Abstract: In this paper we address the problem of recovering an image from a sequence of distorted versions of it, where the distortion is caused by what is commonly referred to as ground-level turbulence. In mathematical terms, such distortion can be described as the cumulative effect of a blurring kernel and a time-dependent deformation of the image domain. We introduce a statistical dynamic model for the generation of turbulence based on linear dynamical systems (LDS). We expand the model to include the unknown image as part of the unobserved state and apply Kalman filtering to estimate such state. This operation yields a blurry image where the blurring kernel is effectively isoplanatic. Applying blind nonlocal Total Variation (NL-TV) deconvolution yields a sharp final result.
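The Kalman filtering step mentioned above is the standard predict/update recursion on the augmented state; a generic sketch follows. The particular state vector, transition matrix F and observation matrix H of the turbulence LDS are not reproduced here, and the names below are placeholders.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One generic Kalman predict/update step for state x with covariance P,
    given measurement z, dynamics F, observation model H and noises Q, R."""
    # predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # update
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(P.shape[0]) - K @ H) @ P_pred
    return x_new, P_new
```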

Journal ArticleDOI
TL;DR: A new data structure is defined, the component-graph, which extends the notion of component-tree to images taking their values in any (partially or totally) ordered set.
Abstract: Component-trees model the structure of grey-level images by considering their binary level-sets obtained from successive thresholdings. They also make it possible to define anti-extensive filtering procedures for such images. In order to extend this image processing approach to any (grey-level or multivalued) images, both the notion of component-tree, and its associated filtering framework, have to be generalised. In this article we deal with the generalisation of the component-tree structure. We define a new data structure, the component-graph, which extends the notion of component-tree to images taking their values in any (partially or totally) ordered set. Component-graphs come in three variants, of increasing richness and size, whose structural properties are studied.
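The binary level-sets underlying component-trees can be sketched in a few lines: label the connected components of each upper threshold set. Linking nested components across levels into the tree, and its generalization to the component-graph for partially ordered values, is the subject of the paper and is omitted; the helper below is only illustrative.

```python
import numpy as np
from scipy import ndimage

def upper_level_set_components(img):
    """For each grey level t, connected components of the threshold set
    {img >= t}; these are the nodes from which a component-tree is built."""
    comps = {}
    for t in np.unique(img):
        labels, n = ndimage.label(img >= t)
        comps[t] = (labels, n)
    return comps
```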

Journal ArticleDOI
TL;DR: The concept of functional current is introduced to represent and treat functional shapes, i.e. submanifold-supported signals: whereas mathematical currents have proved efficient theoretically and numerically to model and process shapes as curves or surfaces, they are limited to the manipulation of purely geometrical objects.
Abstract: This paper introduces the concept of functional current as a mathematical framework to represent and treat functional shapes, i.e. submanifold-supported signals. It is motivated by the growing occurrence, in medical imaging and computational anatomy, of what can be described as geometrico-functional data, that is a data structure that involves a deformable shape (roughly a finite dimensional submanifold) together with a function defined on this shape taking values in another manifold. Whereas mathematical currents have already proved to be very efficient theoretically and numerically to model and process shapes as curves or surfaces, they are limited to the manipulation of purely geometrical objects. We show that the introduction of the concept of functional currents offers a genuine solution to the simultaneous processing of the geometric and signal information of any functional shape. We explain how functional currents can be equipped with a Hilbertian norm that successfully combines the geometrical and functional content of functional shapes under geometrical and functional perturbations, thus paving the way for various processing algorithms. We illustrate this potential on two problems: the redundancy reduction of functional shape representations through matching pursuit schemes on functional currents and the simultaneous geometric and functional registration of functional shapes under diffeomorphic transport.
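One common kernel-based form of such a Hilbertian norm, consistent with the description above of combining geometric and signal content, pairs a spatial kernel $k_e$ with a signal kernel $k_f$. This is only a hedged sketch of the inner product between the functional currents of two functional shapes $(X, f)$ and $(Y, g)$, with $\tau$ the unit tangent (or normal) field; the precise construction is given in the paper.

```latex
\bigl\langle C_{(X,f)},\, C_{(Y,g)} \bigr\rangle
\;=\; \int_{X} \int_{Y}
      k_{e}(x, y)\; k_{f}\!\bigl(f(x), g(y)\bigr)\;
      \bigl\langle \tau_{X}(x), \tau_{Y}(y) \bigr\rangle
      \, \mathrm{d}x \, \mathrm{d}y .
```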

Journal ArticleDOI
TL;DR: This analysis of MRA data shows a statistically significant effect of age and sex on brain artery structure, and variation in the proximity of brain arteries to the cortical surface results in a strong statistical difference between sexes and a statistically significant age effect.
Abstract: Statistical analysis of magnetic resonance angiography (MRA) brain artery trees is performed using two methods for mapping brain artery trees to points in phylogenetic treespace: cortical landmark correspondence and descendant correspondence. The differences in end-results based on these mappings are highlighted to emphasize the importance of correspondence in tree-oriented data analysis. Representation of brain artery systems as points in phylogenetic treespace, a mathematical space developed in (Billera et al. Adv. Appl. Math 27:733---767, 2001), facilitates this analysis. The phylogenetic treespace is a rich setting for tree-oriented data analysis. The Fréchet sample mean or an approximation is reported. Multidimensional scaling is used to explore structure in the data set based on pairwise distances between data points. This analysis of MRA data shows a statistically significant effect of age and sex on brain artery structure. Variation in the proximity of brain arteries to the cortical surface results in a strong statistical difference between sexes and a statistically significant age effect. That particular observation is possible with the cortical correspondence but did not show up in the descendant correspondence.
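The multidimensional scaling step mentioned above can be made concrete with classical MDS on the matrix of pairwise treespace distances; a small self-contained sketch (not tied to the authors' data or to the treespace metric code) follows.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical multidimensional scaling from a symmetric (n x n) matrix of
    pairwise distances D; returns an (n x dim) Euclidean embedding."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D**2) @ J                     # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]               # largest eigenvalues first
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```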

Journal ArticleDOI
TL;DR: This paper explores dissimilarity measures that depend on the overall image content encapsulated in its local mutual information, and shows their invariance to information-preserving transforms in the framework of the connective segmentation and constrained connectivity paradigms.
Abstract: Connective segmentation based on the definition of a dissimilarity measure on pairs of adjacent pixels is an appealing framework to develop new hierarchical segmentation methods. Usually, the dissimilarity is fully determined by the intensity values of the considered pair of adjacent pixels, so that it is independent of the values of the other image pixels. In this paper, we explore dissimilarity measures depending on the overall image content encapsulated in its local mutual information and show its invariance to information preserving transforms. This is investigated in the framework of the connective segmentation and constrained connectivity paradigms and leads to the concept of dependent connectivities. An efficient probability estimator based on depth functions is proposed to handle multi-dimensional images. Experiments conducted on hyper-spectral and multi-angular remote sensing images highlight the robustness of the proposed approach.

Journal ArticleDOI
TL;DR: It is shown that the initial-boundary value problem has global in time dissipative solutions (in a sense going back to P.-L. Lions), and several properties of these solutions are established.
Abstract: In this paper, we consider a coupled system of partial differential equations (PDEs) based model for image restoration. Both the image and the edge variables are incorporated by coupling them into two different PDEs. It is shown that the initial-boundary value problem has global in time dissipative solutions (in a sense going back to P.-L. Lions), and several properties of these solutions are established. Some numerical examples are given to highlight the denoising nature of the proposed model along with some comparison results.

Journal ArticleDOI
TL;DR: A new hierarchical color quantization method based on self-organizing maps that provides different levels of quantization is presented and the experimental results show the good performance of this approach compared to other quantizers based onSelf-organization.
Abstract: In this paper, a new hierarchical color quantization method based on self-organizing maps that provides different levels of quantization is presented. Color quantization (CQ) is a typical image processing task, which consists of selecting a small number of code vectors from a set of available colors to represent a high color resolution image with minimum perceptual distortion. Several techniques have been proposed for CQ based on splitting algorithms or cluster analysis. Artificial neural networks and, more concretely, self-organizing models have been usually utilized for this purpose. The self-organizing map (SOM) is one of the most useful algorithms for color image quantization. However, it has some difficulties related to its fixed network architecture and the lack of representation of hierarchical relationships among data. The growing hierarchical SOM (GHSOM) tries to face these problems derived from the SOM model. The architecture of the GHSOM is established during the unsupervised learning process according to the input data. Furthermore, the proposed color quantizer allows the evaluation of different color quantization rates under different codebook sizes, according to the number of levels of the generated neural hierarchy. The experimental results show the good performance of this approach compared to other quantizers based on self-organization.
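As a point of comparison for the growing hierarchical SOM, the sketch below is a plain, flat self-organizing map used as a color quantizer: competitive learning over a 1D chain of code vectors with a shrinking neighborhood. All parameter names are assumptions; the GHSOM of the paper additionally grows its architecture during learning, which is not reproduced here.

```python
import numpy as np

def som_color_quantize(pixels, n_codes=16, iters=2000, lr=0.2, seed=None):
    """Flat 1D SOM color quantizer: pixels is an (n, 3) array of colors;
    returns n_codes learned code vectors (the reduced palette)."""
    rng = np.random.default_rng(seed)
    codes = pixels[rng.choice(len(pixels), n_codes, replace=False)].astype(float)
    for t in range(iters):
        p = pixels[rng.integers(len(pixels))]
        winner = np.argmin(((codes - p)**2).sum(axis=1))
        radius = max(1.0, n_codes / 2 * (1 - t / iters))     # shrinking neighborhood
        dist = np.abs(np.arange(n_codes) - winner)
        h = np.exp(-(dist**2) / (2 * radius**2))             # neighborhood function
        codes += (lr * (1 - t / iters)) * h[:, None] * (p - codes)
    return codes
```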

Journal ArticleDOI
TL;DR: Two important results linking watersheds and homotopy are established: any watershed of any map can be straightforwardly obtained from an ultimate collapse of this map, and conversely any ultimate collapse of the initial map straightforwardly induces a watershed.
Abstract: This work is settled in the framework of abstract simplicial complexes. We propose a definition of a watershed and of a collapse (i.e., a homotopic retraction) for maps defined on pseudomanifolds of arbitrary dimension. Then, we establish two important results linking watersheds and homotopy. The first one generalizes a property known for distance transforms in a continuous setting to any map on pseudomanifolds: a watershed of any map is a subset of an ultimate collapse of the support of this map. The second result establishes, through an equivalence theorem, a deep link between watershed and collapse of maps: any watershed of any map can be straightforwardly obtained from an ultimate collapse of this map, and conversely any ultimate collapse of the initial map straightforwardly induces a watershed.

Journal ArticleDOI
TL;DR: The refinement order on partitions corresponds to the operation of merging blocks in a partition, which is relevant to image segmentation and filtering methods; its mathematical extension to partial partitions, called the standard order, involves several operations, not only merging but also creating new blocks or inflating existing ones, which are equally relevant to image segmentation and filtering techniques.
Abstract: The refinement order on partitions corresponds to the operation of merging blocks in a partition, which is relevant to image segmentation and filtering methods. Its mathematical extension to partial partitions, which we call the standard order, involves several operations, not only merging, but also creating new blocks or inflating existing ones, which are equally relevant to image segmentation and filtering techniques. These three operations correspond to three basic partial orders on partial partitions, the merging, inclusion and inflating orders. There are three possible combinations of these three basic orders: one of them is the standard order, and the other two are the merging-inflating and inclusion-inflating orders. We study these orders in detail, giving in particular their minimal and maximal elements, covering relations and height functions. We interpret hierarchies of partitions and partial partitions in terms of an adjunction between (partial) partitions (possibly with connected blocks) and scalars. This gives a lattice-theoretical interpretation of edge saliency, hence a typology for the edges in partial partitions. The use of hierarchies in image filtering, in particular with component trees, is also discussed. Finally, we briefly mention further orders on partial partitions that can be useful for image segmentation.

Journal ArticleDOI
TL;DR: It is shown that a more detailed reconstruction can be achieved compared to the traditional affine mapping, and a more precise, quadratic transformation for planar mapping of implicit surfaces is derived.
Abstract: Level Set methods use a non-parametric, implicit representation of surfaces such as the signed distance function. These methods are applied to multiview 3D reconstruction and other machine vision problems that need correspondences between views of a surface. Correspondences can be found by matching surface texture patches of the size sufficient for their identification. Good matching requires precise mapping of a patch across the two image planes. Affine mapping is often used for this purpose assuming that the patches are small and nearly flat. However, this assumption is violated in locations of high surface curvature and/or when the surface is poorly textured and large windows are needed to provide enough information. Using second-order surface approximation, we derive a more precise, quadratic transformation for planar mapping of implicit surfaces. To validate this theoretical result, we apply it to correlation based, variational multiview 3D reconstruction using Level Sets. It is shown that a more detailed reconstruction can be achieved compared to the traditional affine mapping.

Journal ArticleDOI
TL;DR: This article proposes an algorithm for evaluating the topological invariance of digital images under rigid transformations, based on the recently introduced notion of DRT graph and the notion of simple point.
Abstract: In the continuous domain $\mathbb{R}^{n}$, rigid transformations are topology-preserving operations. Due to digitization, this is not the case when considering digital images, i.e., images defined on $\mathbb{Z}^{n}$. In this article, we begin to investigate this problem by studying conditions for digital images to preserve their topological properties under all rigid transformations on $\mathbb{Z}^{2}$. Based on (i) the recently introduced notion of DRT graph, and (ii) the notion of simple point, we propose an algorithm for evaluating the topological invariance of digital images.