
Showing papers in "Journal of Mathematical Imaging and Vision in 2004"



Journal ArticleDOI
TL;DR: The variational method furnishes a new framework for the processing of data corrupted with outliers and different kinds of impulse noise and is accurate and stable, as demonstrated by the experiments.
Abstract: We consider signal and image restoration using convex cost-functions composed of a non-smooth data-fidelity term and a smooth regularization term. We provide a convergent method to minimize such cost-functions. In order to restore data corrupted with outliers and impulsive noise, we focus on cost-functions composed of an ℓ1 data-fidelity term and an edge-preserving regularization term. The analysis of the minimizers of these cost-functions provides a natural justification of the method. It is shown that, because of the ℓ1 data-fidelity, these minimizers involve an implicit detection of outliers. Uncorrupted (regular) data entries are fitted exactly while outliers are replaced by estimates determined by the regularization term, independently of the exact value of the outliers. The resultant method is accurate and stable, as demonstrated by the experiments. A crucial advantage over alternative filtering methods is the possibility to convey adequate priors about the restored signals and images, such as the presence of edges. Our variational method furnishes a new framework for the processing of data corrupted with outliers and different kinds of impulse noise.
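As a concrete sketch of the kind of cost-function described (the notation and the particular potential below are choices made for illustration, not taken from the paper), one can write

    J(x) = \sum_i |x_i - y_i| + \beta \sum_{i \sim j} \varphi(x_i - x_j),

where y is the observed signal or image, the first sum is the ℓ1 data-fidelity term, the second sum runs over neighbouring samples or pixels, β > 0 balances the two terms, and φ is a smooth edge-preserving potential such as φ(t) = \sqrt{t^2 + \alpha^2}. Intuitively, the non-smoothness of |·| at zero is what allows the residual on regular entries to be exactly zero.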

615 citations


Journal ArticleDOI
TL;DR: In this paper, a spectral non-iterative solution of the Euler-Lagrange equation is proposed for 3D active surface reconstruction of star-shaped surfaces parameterized in polar coordinates.
Abstract: Variational energy minimization techniques for surface reconstruction are implemented by evolving an active surface according to the solutions of a sequence of elliptic partial differential equations (PDEs). For these techniques, most current approaches to solving the elliptic PDE are iterative, involving costly finite element methods (FEM) or finite difference methods (FDM). The heavy computational cost of these methods makes practical application to 3D surface reconstruction burdensome. In this paper, we develop a fast spectral method which is applied to 3D active surface reconstruction of star-shaped surfaces parameterized in polar coordinates. For this parameterization the Euler-Lagrange equation is a Helmholtz-type PDE governing a diffusion on the unit sphere. After linearization, we implement a spectral non-iterative solution of the Helmholtz equation by representing the active surface as a double Fourier series over angles in spherical coordinates. We show how this approach can be extended to include region-based penalization. A number of 3D examples and simulation results are presented to illustrate the performance of our fast spectral active surface algorithms.
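For orientation, a sketch of why the solve can be non-iterative, written here in spherical-harmonic notation (the paper itself uses a double Fourier series over the angles): a Helmholtz-type equation on the unit sphere,

    (-\Delta_{S^2} + \alpha)\, r(\theta, \phi) = f(\theta, \phi),

diagonalizes in the spherical-harmonic basis because \Delta_{S^2} Y_l^m = -l(l+1)\, Y_l^m, so each coefficient of the solution is obtained in closed form as \hat r_{lm} = \hat f_{lm} / (l(l+1) + \alpha), with no iteration over a spatial grid.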

489 citations


Journal ArticleDOI
TL;DR: The idea is to use MCMC to solve the resulting problem articulated under a Bayesian framework, but to deploy purely deterministic mechanisms for dealing with the solution, which results in a relatively fast implementation that unifies many of the pixel-by-pixel schemes previously described in the literature.
Abstract: Recently, the problem of automated restoration of archived sequences has caught the attention of the Video Broadcast industry. One of the main problems is dealing with blotches caused by film abrasion or dirt adhesion. This paper presents a new framework for the simultaneous treatment of missing data and motion in degraded video sequences. Using simple, translational models of motion, a joint solution for the detection and reconstruction of missing data is proposed. The framework also incorporates the unique notion of dealing with occlusion and uncovering as it pertains to picture building. The idea is to use MCMC to solve the resulting problem articulated under a Bayesian framework, but to deploy purely deterministic mechanisms for dealing with the solution. This results in a relatively fast implementation that unifies many of the pixel-by-pixel schemes previously described in the literature.

434 citations


Journal ArticleDOI
TL;DR: A differential-geometric framework to define PDEs acting on manifold-constrained datasets is proposed, including the case of images taking values in matrix manifolds defined by orthogonal and spectral constraints.
Abstract: Nonlinear diffusion equations are now widely used to restore and enhance images. They make it possible to eliminate noise and artifacts while preserving large global features, such as object contours. In this context, we propose a differential-geometric framework to define PDEs acting on manifold-constrained datasets. We consider the case of images taking values in matrix manifolds defined by orthogonal and spectral constraints. We directly incorporate the geometry and natural metric of the underlying configuration space (viewed as a Lie group or a homogeneous space) in the design of the corresponding flows. Our numerical implementation relies on structure-preserving integrators that intrinsically respect the geometry of the constraints. The efficiency and versatility of this approach are illustrated through the anisotropic smoothing of diffusion tensor volumes in medical imaging.
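For contrast only, the sketch below shows a naive extrinsic alternative: diffuse a field of rotation matrices in the ambient matrix space and re-project onto the orthogonal group with an SVD after every step. This is not the paper's method (which uses intrinsic, structure-preserving integrators that never leave the manifold); the array layout, step size and iteration count are arbitrary choices for the example.

import numpy as np

def project_so3(m):
    # nearest rotation matrix (polar decomposition via SVD), keeping det = +1
    u, _, vt = np.linalg.svd(m)
    r = u @ vt
    if np.linalg.det(r) < 0:
        u[:, -1] *= -1
        r = u @ vt
    return r

def smooth_rotation_field(field, n_iter=10, dt=0.2):
    # field: (H, W, 3, 3) array of rotation matrices
    f = field.copy().astype(float)
    for _ in range(n_iter):
        lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
               np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)       # discrete Laplacian
        f = f + dt * lap                                            # ambient-space diffusion step
        f = np.array([[project_so3(m) for m in row] for row in f])  # project back onto SO(3)
    return f

field = np.tile(np.eye(3), (32, 32, 1, 1))      # toy field of identity rotations
smoothed = smooth_rotation_field(field)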

311 citations


Journal ArticleDOI
TL;DR: In this paper, edge detection by a new approach to phase congruency and its relation to amplitude based methods, reconstruction from local amplitude and local phase, and the evaluation of the local frequency are discussed.
Abstract: In this paper we address the topics of scale-space and phase-based image processing in a unifying framework. In contrast to common opinion, the Gaussian kernel is not the unique choice for a linear scale-space. Instead, we choose the Poisson kernel since it is closely related to the monogenic signal, a 2D generalization of the analytic signal, where the Riesz transform replaces the Hilbert transform. The Riesz transform itself yields the flux of the Poisson scale-space, and the combination of flux and scale-space, the monogenic scale-space, provides the local features phase-vector and attenuation in scale-space. Under certain assumptions, the latter two again form a monogenic scale-space which gives deeper insight into low-level image processing. In particular, we discuss edge detection by a new approach to phase congruency and its relation to amplitude-based methods, reconstruction from local amplitude and local phase, and the evaluation of the local frequency.
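For reference, the standard formulas behind these objects, stated here from the general literature rather than quoted from the paper: for 2D images the Poisson scale-space is generated by the kernel

    P_s(x) = \frac{1}{2\pi} \cdot \frac{s}{(|x|^2 + s^2)^{3/2}}, \qquad s > 0,

and the Riesz transform acts in the frequency domain as \widehat{R_j f}(u) = -i\,(u_j/|u|)\,\hat f(u) for j = 1, 2. The monogenic signal pairs f with its two Riesz components; local amplitude and the phase-vector are then read off in polar form, in analogy with the amplitude and phase of the 1D analytic signal.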

211 citations


Journal ArticleDOI
TL;DR: Numerical results of image denoising, image decomposition and texture discrimination are presented, showing that the new models decompose a given image, possibly noisy, into a cartoon and an oscillatory pattern of zero mean better than the standard ones.
Abstract: In this paper, we propose a new variational model for image denoising and decomposition, which combines the total variation minimization model of Rudin, Osher and Fatemi from image restoration with spaces of oscillatory functions, following recent ideas introduced by Meyer. The spaces introduced here are appropriate to model oscillatory patterns of zero mean, such as noise or texture. Numerical results of image denoising, image decomposition and texture discrimination are presented, showing that the new models decompose a given image, possibly noisy, into a cartoon and an oscillatory pattern of zero mean better than the standard ones. The present paper develops further the models previously introduced by the authors in Vese and Osher (Modeling textures with total variation minimization and oscillating patterns in image processing, UCLA CAM Report 02-19, May 2002, to appear in Journal of Scientific Computing, 2003). Other recent and related image decomposition models are also discussed.
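Schematically (the exact norms and exponents differ between the variants discussed in the paper), the decomposition models are of the form

    \inf_{u, g} \int |\nabla u|\, dx + \lambda \int |f - u - \operatorname{div} g|^2\, dx + \mu \,\Big\| \sqrt{g_1^2 + g_2^2} \Big\|_{L^p},

where u is the cartoon (bounded-variation) component and v = \operatorname{div} g models the oscillatory component of zero mean, i.e. texture or noise.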

190 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider alternative scale space representations beyond the well-established Gaussian case that satisfy all reasonable axioms and show that Poisson scale space is indeed a viable alternative to Gaussian scale space.
Abstract: We consider alternative scale space representations beyond the well-established Gaussian case that satisfy all “reasonable” axioms. One of these turns out to be subject to a first order pseudo partial differential equation equivalent to the Laplace equation on the upper half plane $\{(x, s) \in \mathbb{R}^d \times \mathbb{R} \mid s > 0\}$. We investigate this so-called Poisson scale space and show that it is indeed a viable alternative to Gaussian scale space. Poisson and Gaussian scale space are related via a one-parameter class of operationally well-defined intermediate representations generated by a fractional power of (minus) the spatial Laplace operator.
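In operator form (a compact restatement with notation chosen here): the Gaussian scale-space evolves as \partial_s u = \Delta u, whereas the Poisson scale-space evolves as \partial_s u = -(-\Delta)^{1/2} u, i.e. u(\cdot, s) is the harmonic extension of the image to the upper half plane. The intermediate representations mentioned above are generated by \partial_s u = -(-\Delta)^{\alpha} u, with \alpha running from 1/2 (Poisson) to 1 (Gaussian).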

143 citations


Journal ArticleDOI
TL;DR: This paper presents the first attempt to design a quadrature pair based on filters derived for optimal edge/line detection, and considers some new pairs derived from the classical feature detection literature to aid in the choice of a filter pair for a given image processing task.
Abstract: Band-pass quadrature filters are extensively used in computer vision to estimate information from images such as phase, energy, frequency and orientation, possibly at different scales, and to utilise this in further processing tasks. The estimation is intrinsically noisy and depends critically on the choice of the quadrature filters. In this paper, we first study the mathematical properties of the quadrature filter pairs most commonly seen in the literature and then consider some new pairs derived from the classical feature detection literature. In the case of feature detection, we present the first attempt to design a quadrature pair based on filters derived for optimal edge/line detection. A comparison of the filters is presented in terms of feature detection performance (wherever possible, in the sense of Canny) and in terms of phase stability. We conclude with remarks on how our analysis can aid in the choice of a filter pair for a given image processing task.

132 citations


Journal ArticleDOI
TL;DR: In this article, a methodology and algorithm for generating diffeomorphisms of the sphere onto itself, given the displacements of a finite set of template landmarks, is presented, where deformation maps are constructed by integration of velocity fields that minimize a quadratic smoothness energy under the specified landmark constraints.
Abstract: This paper presents a methodology and algorithm for generating diffeomorphisms of the sphere onto itself, given the displacements of a finite set of template landmarks. Deformation maps are constructed by integration of velocity fields that minimize a quadratic smoothness energy under the specified landmark constraints. We present additional formulations of this problem which incorporate a given error variance in the positions of the landmarks. Finally, some experimental results are presented. This work has application in brain mapping, where surface data is typically mapped to the sphere as a common coordinate system.

119 citations


Journal ArticleDOI
TL;DR: An axiomatic analysis of image equalization is presented which leads to two possible methods, which are then compared in theory and in practice for two reliability criteria, namely their effect on quantization noise and on the support of the Fourier spectrum.
Abstract: Midway image equalization means any method giving to a pair of images the same histogram, while maintaining as much as possible their previous grey level dynamics. In this paper, we present an axiomatic analysis of image equalization which leads us to derive two possible methods. Both methods are then compared in theory and in practice for two reliability criteria, namely their effect on quantization noise and on the support of the Fourier spectrum. A mathematical analysis of the properties of the methods is performed. Their algorithms are described and they are tested on such typical pairs as satellite image stereo pairs and different photographs of the same painting.
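A minimal sketch of one way a midway histogram can be realized in practice, under simplifying assumptions (two images with the same number of pixels, midway defined by rank-wise averaging of the sorted grey levels, i.e. of the inverse cumulative histograms); this is an illustration, not a transcription of either method derived in the paper.

import numpy as np

def midway_equalize(img1, img2):
    # give img1 and img2 (same pixel count) a common 'midway' histogram
    flat1, flat2 = img1.ravel().astype(float), img2.ravel().astype(float)
    order1, order2 = np.argsort(flat1), np.argsort(flat2)
    midway = 0.5 * (flat1[order1] + flat2[order2])   # average the sorted values rank by rank
    out1, out2 = np.empty_like(midway), np.empty_like(midway)
    out1[order1] = midway                            # remap each image onto the shared values
    out2[order2] = midway
    return out1.reshape(img1.shape), out2.reshape(img2.shape)

a = np.random.rand(64, 64)
b = np.random.rand(64, 64) ** 2                      # deliberately different histogram
a_mid, b_mid = midway_equalize(a, b)                 # both outputs now share one histogram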

Journal ArticleDOI
TL;DR: This work provides general conditions on cost-functions which ensure that their minimizers can satisfy weak constraints when noisy data range over an open subset, and considers the effect produced by non-smooth regularization, in comparison with smooth regularization.
Abstract: We focus on the question of how the shape of a cost-function determines the features manifested by its local (and hence global) minimizers. Our goal is to check the possibility that the local minimizers of an unconstrained cost-function satisfy different subsets of affine constraints dependent on the data, hence the word “weak”. A typical example is the estimation of images and signals which are constant on some regions. We provide general conditions on cost-functions which ensure that their minimizers can satisfy weak constraints when noisy data range over an open subset. These cost-functions are non-smooth at all points satisfying the weak constraints. In contrast, the local minimizers of smooth cost-functions can almost never satisfy weak constraints. These results, obtained in a general setting, are applied to analyze the minimizers of cost-functions composed of a data-fidelity term and a regularization term. We thus consider the effect produced by non-smooth regularization, in comparison with smooth regularization. In particular, these results explain the stair-casing effect, well known in total-variation methods. Theoretical results are illustrated using analytical examples and numerical experiments.

Journal ArticleDOI
TL;DR: This paper proposes a class of separable isotropic filters generalizing Gaussian filtering to vector fields, which enables fast smoothing in the spatial domain and solves the problem of approximating a dense and a sparse displacement field at the same time.
Abstract: The aim of this paper is to propose new regularization and filtering techniques for dense and sparse vector fields, and to focus on their application to non-rigid registration. Indeed, most of the regularization energies used in non-rigid registration operate independently on each coordinate of the transformation. The only common exception is the linear elastic energy, which enables cross-effects between coordinates. Yet cross-effects are essential to obtain realistic deformations in the uniform parts of the image, where displacements are interpolated. In this paper, we propose to find isotropic quadratic differential forms operating on a vector field, using a known theorem on isotropic tensors, and we give results for differentials of order 1 and 2. The quadratic approximation induced by these energies yields a new class of vectorial filters, applied numerically in the Fourier domain. We also propose a class of separable isotropic filters generalizing Gaussian filtering to vector fields, which enables fast smoothing in the spatial domain. Then we deduce splines in the context of interpolation or approximation of sparse displacements. These splines generalize scalar Laplacian splines, such as thin-plate splines, to vector interpolation. Finally, we propose to solve the problem of approximating a dense and a sparse displacement field at the same time. This last formulation enables us to introduce sparse geometrical constraints in intensity-based non-rigid registration algorithms, illustrated here on intersubject brain registration.

Journal ArticleDOI
Fernand Meyer
TL;DR: This paper investigates a class of filters able to simplify an image without blurring or displacing its contours: the simplified image has fewer details, hence fewer contours.
Abstract: Before segmenting an image, one often has to simplify it. In this paper we investigate a class of filters able to simplify an image without blurring or displacing its contours: the simplified image has fewer details, hence fewer contours. As the contours of the simplified image are as accurate as in the initial image, the segmentation may be done on the simplified image, without going back to the initial image. The corresponding filters are called levelings. Their properties and construction are described in the present paper.

Journal ArticleDOI
TL;DR: The problem of segmentation of a given gray scale image by minimization of the Mumford-Shah functional is considered and it is suggested to use a positive definite approximation of the shape Hessian as a preconditioner for the gradient direction.
Abstract: The problem of segmentation of a given gray scale image by minimization of the Mumford-Shah functional is considered. The minimization problem is formulated as a shape optimization problem where the contour which separates homogeneous regions is the (geometric) optimization variable. Expressions for first and second order shape sensitivities are derived using the speed method from classical shape sensitivity calculus. Second order information (the shape Hessian of the cost functional) is used to set up a Newton-type algorithm, where a preconditioning operator is applied to the gradient direction to obtain a better descent direction. The issue of positive definiteness of the shape Hessian is addressed in a heuristic way. It is suggested to use a positive definite approximation of the shape Hessian as a preconditioner for the gradient direction. The descent vector field is used as speed vector field in the level set formulation for the propagating contour. The implementation of the algorithm is discussed in some detail. Numerical experiments comparing gradient and Newton-type flows for different images are presented.

Journal ArticleDOI
Hai-Hui Wang
TL;DR: The experimental results show that this fusion algorithm, based on the multiwavelet transform, is an effective approach in the image fusion area: it can merge information from the original images adequately and improve the abilities of information analysis and feature extraction.
Abstract: Image fusion refers to techniques that integrate complementary information from multiple image sensor data such that the new images are more suitable for the purposes of human visual perception and computer-processing tasks. In this paper, a new image fusion algorithm based on the multiwavelet transform to fuse multisensor images is presented. The detailed discussions in the paper are focused on the two-wavelet and two-scaling function multiwavelets. Multiwavelets are extensions of scalar wavelets and have several unique advantages in comparison with scalar wavelets, so multiwavelets are employed to decompose and reconstruct images in this algorithm. The image fusion is performed at the pixel level; other types of image fusion schemes, such as feature or decision fusion, are not considered. In this fusion algorithm, a feature-based fusion rule is used to combine original subimages and to form a pyramid for the fused image. When images are merged in multiwavelet space, different frequency ranges are processed differently. The algorithm can merge information from the original images adequately and improve the abilities of information analysis and feature extraction. Extensive experiments are presented, including the fusion of registered multiband SPOT multispectral XS1/XS3 images, multifocus digital camera images, multisensor VIS/IR images, and medical CT/MRI images. Mutual information is employed as a means of objectively assessing image fusion performance. The experimental results show that this fusion algorithm, based on the multiwavelet transform, is an effective approach in the image fusion area.
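A rough sketch of pixel-level fusion in a wavelet domain. Note the substitutions: PyWavelets offers only scalar wavelets, so 'db2' stands in for the paper's multiwavelet transform, and the maximum-magnitude rule below is a generic choice rather than the paper's feature-based rule.

import numpy as np
import pywt

def fuse(img_a, img_b, wavelet="db2", level=3):
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)   # keep the stronger coefficient
    fused = [0.5 * (ca[0] + cb[0])]                              # average the coarse approximations
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append((pick(ha, hb), pick(va, vb), pick(da, db)))
    return pywt.waverec2(fused, wavelet)

a = np.random.rand(128, 128)      # stand-ins for two registered source images
b = np.random.rand(128, 128)
merged = fuse(a, b)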

Journal ArticleDOI
TL;DR: A method for computing the likelihood that a completion joining two contour fragments passes through any given position and orientation in the image plane by representing the input, output, and intermediate states of the computation in a basis of shiftable-twistable functions.
Abstract: We describe a method for computing the likelihood that a completion joining two contour fragments passes through any given position and orientation in the image plane. Like computations in primary visual cortex (and unlike all previous models of contour completion), the output of our computation is invariant under rotations and translations of the input pattern. This is achieved by representing the input, output, and intermediate states of the computation in a basis of shiftable-twistable functions.

Journal ArticleDOI
TL;DR: A complete generalization is presented that addresses the problem of general trajectory triangulation of moving points from non-synchronized cameras, by considering a new representation of curves (trajectories) in which a curve is represented by a family of hypersurfaces in the projective space ℙ^5.
Abstract: The multiple view geometry of static scenes is now well understood. Recently attention has turned to dynamic scenes where scene points may move while the cameras move. The triangulation of linear trajectories is now well handled. The case of quadratic trajectories has also received some attention. We present a complete generalization and address the problem of general trajectory triangulation of moving points from non-synchronized cameras. Two cases are considered: (i) the motion is captured in the images by tracking the moving point itself, (ii) only the tangents of the motion are extracted from the images. The first case is based on a new representation (to computer vision) of curves (trajectories) where a curve is represented by a family of hypersurfaces in the projective space $\mathbb{P}^5$. The second case is handled by considering the dual curve of the curve generated by the trajectory. In both cases these representations of curves allow: (i) the triangulation of the trajectory of a moving point from non-synchronized sequences, (ii) the recovery of more standard representations of the whole trajectory, (iii) the computation of the set of positions of the moving point at each time instant at which an image was taken. Furthermore, theoretical considerations lead to a general theorem stipulating how many independent constraints a camera provides on the motion of the point. This number of constraints is a function of the camera motion. On the computational front, in both cases the triangulation leads to equations where the unknowns appear linearly. Therefore the problem reduces to estimating a high-dimensional parameter in the presence of heteroscedastic noise. Several methods are tested.

Journal ArticleDOI
TL;DR: In this paper, image sharpening in the presence of noise is formulated as a non-convex variational problem, where the energy minimization flow results in sharpening of the dominant edges, while most noisy fluctuations are filtered out.
Abstract: Image sharpening in the presence of noise is formulated as a non-convex variational problem. The energy functional incorporates a gradient-dependent potential, a convex fidelity criterion and a high order convex regularizing term. The first term attains local minima at zero and some high gradient magnitude, thus forming a triple well-shaped potential (in the one-dimensional case). The energy minimization flow results in sharpening of the dominant edges, while most noisy fluctuations are filtered out.

Journal ArticleDOI
TL;DR: This work addresses the issue of low-level segmentation for real-valued images in terms of an energy partition of the image domain using a framework based on measuring a pseudo-metric distance to a source point.
Abstract: We address the issue of low-level segmentation for real-valued images. The proposed approach relies on the formulation of the problem in terms of an energy partition of the image domain. In this framework, an energy is defined by measuring a pseudo-metric distance to a source point. Thus, the choice of an energy and a set of sources determines a tessellation of the domain. Each energy acts on the image at a different level of analysis. Through the study of two types of energies, two stages of the segmentation process are addressed. The first energy considered, the path variation, belongs to the class of energies determined by minimal paths. Its application as a pre-segmentation method is proposed. In the second part, where the energy is induced by an ultrametric, the construction of hierarchical representations of the image is discussed.

Journal ArticleDOI
Nir Sochen
TL;DR: In this article, the role of different invariant principles in image processing and analysis is analyzed, a distinction between passive and active principles is emphasized, and the geometric Beltrami framework is shown to incorporate and explain some of the known invariant flows, e.g. the equi-affine invariant flow for hypersurfaces.
Abstract: We analyze the role of different invariant principles in image processing and analysis. A distinction between the passive and active principles is emphasized, and the geometric Beltrami framework is shown to incorporate and explain some of the known invariant flows, e.g. the equi-affine invariant flow for hypersurfaces. It is also demonstrated that the new concepts put forward in this framework enable us to suggest new invariants, namely in the case where the codimension is greater than one.

Journal ArticleDOI
TL;DR: This study focuses on a projective differential invariant which allows one to decide whether one shape can be considered as the deformation of another by a rotation of the camera.
Abstract: For the comparison of shapes under subgroups of the projective group, many invariants are available, especially differential invariants coming from multiscale analysis. But such invariants, since we have to compute curvature, are very sensitive to the noise induced by the discretization grid. In order to resolve this problem we use size functions, which can recognize the “qualitative similarity” between graphs of functions that should theoretically coincide but, unfortunately, change their values due to the presence of noise. Moreover, we focus this study on a projective differential invariant which allows one to decide whether one shape can be considered as the deformation of another by a rotation of the camera.

Journal ArticleDOI
TL;DR: In this paper, Fourier theory and Shannon's sampling theorem are used to measure the effective resolution of an image acquisition system and to restore the original image which is represented by the samples.
Abstract: Traditionally, discrete images are assumed to be sampled on a square grid from a special kind of band-limited continuous image, namely one whose Fourier spectrum is contained within the rectangular “reciprocal cell” associated with the sampling grid. With such a simplistic model, resolution is just given by the distance between sample points. Whereas this model matches to some extent the characteristics of traditional acquisition systems, it doesn't explain aliasing problems, and it is no longer valid for certain modern ones, where the sensors may show a heavily anisotropic transfer function and may be located on a non-square (in most cases hexagonal) grid. In this work we first summarize the generalizations of Fourier theory and of Shannon's sampling theorem that are needed for such acquisition devices. Then we explore their consequences: (i) a new way of measuring the effective resolution of an image acquisition system; (ii) a more accurate way of restoring the original image which is represented by the samples. We show on a series of synthetic and real images how the proposed methods make better use of the information present in the samples, since they may drastically reduce the amount of aliasing with respect to traditional methods. Finally we show how, in combination with Total Variation minimization, the proposed methods can be used to extrapolate the Fourier spectrum in a reasonable manner, visually increasing image resolution.

Journal ArticleDOI
TL;DR: New mean sets are defined by using the basic transformations of Mathematical Morphology (dilation, erosion, opening and closing) and can be considered, under some additional assumptions, as particular cases of the distance average of Baddeley and Molchanov.
Abstract: Many image processing tasks need some kind of average of different shapes. Frequently, different shapes obtained from several images have to be summarized. If these shapes can be considered as different realizations of a given random compact set, then the natural summaries are the different mean sets proposed in the literature. In this paper, new mean sets are defined by using the basic transformations of Mathematical Morphology (dilation, erosion, opening and closing). These new definitions can be considered, under some additional assumptions, as particular cases of the distance average of Baddeley and Molchanov. The use of the former and new mean sets as summary descriptors of shapes is illustrated with two applications: the analysis of human corneal endothelium images and the segmentation of the fovea in a fundus image. The variation of the random compact sets is described by means of confidence sets for the mean and by using set intervals (a generalization of confidence intervals for random sets). Finally, a third application is proposed: a procedure for denoising a single image by using mean sets.
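An illustrative toy variant, not one of the paper's morphological mean sets: average the signed distance functions of several binary shapes and threshold the average at zero, which is one simple distance-based notion of a mean set in the spirit of the distance average mentioned above. All names and parameters below are chosen for the example.

import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    # positive outside the set, negative inside
    return distance_transform_edt(~mask) - distance_transform_edt(mask)

def mean_set(masks):
    avg = np.mean([signed_distance(m) for m in masks], axis=0)
    return avg <= 0

# three shifted discs as toy realizations of a random compact set
yy, xx = np.mgrid[:100, :100]
masks = [(xx - 50 + dx) ** 2 + (yy - 50 + dy) ** 2 < 30 ** 2
         for dx, dy in [(0, 0), (3, -2), (-4, 1)]]
m = mean_set(masks)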

Journal ArticleDOI
TL;DR: It is shown that there are no more than $n^2 + \mathcal{O}(n^{\frac{265}{146}}\cdot(\log n)^{\frac{315}{146}})$ different (up to translations) digital discs consisting of n points, where a digital disc is the set of all integer points inside a given real disc.
Abstract: A digital disc is defined as the set of all integer points inside of a given real disc. In this paper we show that there are no more than $n^2 + \mathcal{O}(n^{\frac{265}{146}}\cdot (\log n)^{\frac{315}{146}})$ different (up to translations) digital discs consisting of n points.

Journal ArticleDOI
TL;DR: This work presents and tests an alternative thresholding shape deduced from statistical estimation using Bayesian theory; the derived shapes are optimal in LMSE and MAP senses and are approximated by a scheme utilised in the noise reduction procedure.
Abstract: Methods for image noise reduction based on wavelet analysis perform by first decomposing the image and then by applying non-linear compression functions on the wavelet components. The approach commonly used to reduce the noise is to threshold the absolute pixel values of the components. The thresholding functions applied are members of a family of functions defining a specific shape. This shape has a fundamental influence on the characteristics of the output image. This work presents and tests an alternative shape deduced from statistical estimation. Optimal shapes are deduced using Bayesian theory and a new shape is defined to approximate them. The derivation of thresholding shapes is optimal in LMSE and MAP senses. The noise is assumed additive Gaussian and white (AWGN) and the components are assumed to have statistical distributions consistent with the real component distributions. The optimal shapes are then approximated by a scheme utilised in the noise reduction procedure. Results demonstrating the efficiency of the image noise reduction procedure are included in the work.
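For context, a minimal baseline of the classical fixed-shape approach that such Bayesian-derived shapes aim to improve upon: soft thresholding of wavelet detail coefficients with a median-based noise estimate. The wavelet, decomposition level and threshold factor are arbitrary choices for this sketch.

import numpy as np
import pywt

def soft_threshold_denoise(img, wavelet="db4", level=3, k=3.0):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745      # robust noise estimate (finest diagonal band)
    thr = k * sigma
    out = [coeffs[0]]                                       # keep the approximation untouched
    for detail in coeffs[1:]:
        out.append(tuple(pywt.threshold(d, thr, mode="soft") for d in detail))
    return pywt.waverec2(out, wavelet)

noisy = np.random.rand(128, 128) + 0.1 * np.random.randn(128, 128)
denoised = soft_threshold_denoise(noisy)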

Journal ArticleDOI
TL;DR: This paper considers envelope design for gray-scale filters, in particular, aperture filters, and optimality of the design method relative to the constraint imposed by the envelope is stated.
Abstract: Machine design of a signal or image operator involves estimating the optimal filter from sample data. The optimal filter is the best filter relative to the error measure used; however, owing to design error, the designed filter might not perform well. In general it is suboptimal. The envelope constraint involves using two humanly designed filters that form a lower and upper bound for the designed operator. The method has been employed for binary operators. This paper considers envelope design for gray-scale filters, in particular, aperture filters. Some basic theoretical properties are stated, including optimality of the design method relative to the constraint imposed by the envelope. Examples are given for noise reduction and de-blurring.

Journal ArticleDOI
TL;DR: In this article, an algorithm for minimizing the total variation of an image is proposed and a proof of convergence is provided, with applications to image denoising, zooming, and the computation of the mean curvature of the image.
Abstract: We propose an algorithm for minimizing the total variation of an image, and provide a proof of convergence. We show applications to image denoising, zooming, and the computation of the mean curvature.
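A sketch of the dual projection iteration commonly associated with this approach, written from its widely cited form rather than transcribed from the paper; the step size, parameter names and fixed iteration count are choices made here.

import numpy as np

def grad(u):
    gx, gy = np.zeros_like(u), np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]                 # forward differences, Neumann boundary
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    # discrete divergence, the negative adjoint of grad
    dx, dy = np.zeros_like(px), np.zeros_like(py)
    dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
    return dx + dy

def tv_denoise(f, lam=0.1, tau=0.125, n_iter=200):
    # minimize TV(u) + ||u - f||^2 / (2*lam) by iterating on the dual variable p
    f = np.asarray(f, dtype=float)
    px, py = np.zeros_like(f), np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        norm = np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / (1.0 + tau * norm)
        py = (py + tau * gy) / (1.0 + tau * norm)
    return f - lam * div(px, py)

noisy = np.random.rand(64, 64)
u = tv_denoise(noisy)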

Journal ArticleDOI
TL;DR: A regularized curvature flow (RCF) that admits non-trivial steady states is introduced. It is based on a measure of local curve smoothness that takes into account the regularity of the curve curvature and serves as a stopping term in the mean curvature flow.
Abstract: Any image filtering operator designed for automatic shape restoration should satisfy robustness (whatever the nature and degree of noise) as well as non-trivial smooth asymptotic behavior. Moreover, a stopping criterion should be determined by characteristics of the evolved image rather than depend on the number of iterations. Among the several PDE based techniques, curvature flows appear to be highly reliable for strongly noisy images compared to image diffusion processes. In the present paper, we introduce a regularized curvature flow (RCF) that admits non-trivial steady states. It is based on a measure of the local curve smoothness that takes into account regularity of the curve curvature and serves as a stopping term in the mean curvature flow. We prove that this measure decreases over the orbits of RCF, which endows the method with a natural stopping criterion in terms of the magnitude of this measure. Further, in its discrete version it produces steady states consisting of piecewise regular curves. Numerical experiments made on synthetic shapes corrupted with different kinds of noise show the abilities and limitations of each of the current geometric flows and the benefits of RCF. Finally, we present results on real images that illustrate the usefulness of the present approach in practical applications.

Journal ArticleDOI
TL;DR: This paper presents an extension to the recently introduced class of nonlinear filters known as Aperture Filters by taking a multiresolution approach and shows that more accurate filtering results may be achieved compared to the standard aperture filter given the same size of training set.
Abstract: This paper presents an extension to the recently introduced class of nonlinear filters known as Aperture Filters. By taking a multiresolution approach, it can be shown that more accurate filtering results (in terms of mean absolute error) may be achieved compared to the standard aperture filter given the same size of training set. Most optimisation techniques for nonlinear filters require a knowledge of the conditional probabilities of the output. These probabilities are estimated from observations of a representative training set. As the size of the training set is related to the number of input combinations of the filter, it increases very rapidly as the number of input variables increases. It can be impossibly large for all but the simplest binary filters. In order to design nonlinear filters of practical use, it is necessary to limit the size of the search space i.e. the number of possible filters (and hence the training set size) by the application of filter constraints. Filter constraints take several different forms, the most general of which is the window constraint where the output filter value is estimated from only a limited range of input variables. Aperture filters comprise a special case of nonlinear filters in which the input window is limited not only in its domain (or duration) but also in its amplitude. The reduced range of input signal leads directly to a reduction in the size of training set required to produce accurate output estimates. However in order to solve complex filtering problems, it is necessary for the aperture to be sufficiently large so as to observe enough of the signal to estimate its output accurately. In this paper it is shown how the input range of the aperture may be expanded without increasing the size of the search space by adopting a multiresolution approach. The constraint applied in this case is the resolution constraint. This paper presents both theoretical and practical results to demonstrate and quantify the improvement.