Author

Gilles Aubert

Bio: Gilles Aubert is an academic researcher from the University of Nice Sophia Antipolis. The author has contributed to research in topics: Image segmentation & Image restoration. The author has an h-index of 37 and has co-authored 127 publications receiving 9,469 citations. Previous affiliations of Gilles Aubert include the Centre national de la recherche scientifique.


Papers
Journal ArticleDOI
TL;DR: This paper proposes a deterministic strategy based on alternate minimizations over the image and an auxiliary variable, leading to an original reconstruction algorithm, called ARTUR, that can be applied to a large number of image processing problems.
Abstract: Many image processing problems are ill-posed and must be regularized. Usually, a roughness penalty is imposed on the solution. The difficulty is to avoid the smoothing of edges, which are very important attributes of the image. In this paper, we first give conditions for the design of such an edge-preserving regularization. Under these conditions, we show that it is possible to introduce an auxiliary variable whose role is twofold. First, it marks the discontinuities and ensures their preservation from smoothing. Second, it makes the criterion half-quadratic, so the optimization becomes easier. We propose a deterministic strategy, based on alternate minimizations on the image and the auxiliary variable. This leads to the definition of an original reconstruction algorithm, called ARTUR. Some theoretical properties of ARTUR are discussed. Experimental results illustrate the behavior of the algorithm. These results are shown in the field of 2D single photon emission tomography, but the method applies to a wide range of image processing problems.
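As a concrete sketch of the alternation described above, the following minimal denoising example implements a multiplicative half-quadratic scheme; the imaging operator is taken to be the identity, and the potential, step size, and parameter values are illustrative assumptions rather than the paper's choices.

```python
import numpy as np

def grad(u):
    """Forward differences; the gradient at the last row/column is set to zero."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad above."""
    dx = np.zeros_like(px)
    dx[:, 0] = px[:, 0]
    dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]
    dx[:, -1] = -px[:, -2]
    dy = np.zeros_like(py)
    dy[0, :] = py[0, :]
    dy[1:-1, :] = py[1:-1, :] - py[:-2, :]
    dy[-1, :] = -py[-2, :]
    return dx + dy

def half_quadratic_denoise(y, lam=0.1, delta=1.0, outer=20, inner=40, tau=0.2):
    """Alternate minimization for
    J(u) = 0.5 * ||u - y||^2 + lam * sum phi(|grad u|),
    with the edge-preserving potential phi(t) = 2*sqrt(delta^2 + t^2)
    (an illustrative choice). Step 1 updates the auxiliary edge variable
    b = phi'(t) / (2 t); step 2 minimizes the resulting quadratic in u."""
    u = y.astype(float).copy()
    for _ in range(outer):
        gx, gy = grad(u)
        # b is small where the gradient is large: discontinuities are
        # marked and protected from smoothing.
        b = 1.0 / np.sqrt(delta**2 + gx**2 + gy**2)
        for _ in range(inner):
            gx, gy = grad(u)
            # Gradient of the criterion, which is quadratic in u for fixed b.
            g = (u - y) - 2.0 * lam * div(b * gx, b * gy)
            u -= tau * g
    return u
```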

1,360 citations

Book
26 Mar 2013
TL;DR: The updated 2nd edition of this book presents a variety of image analysis applications, reviews their precise mathematics and shows how to discretize them, and provides programming tools for creating simulations with minimal effort.
Abstract: The updated 2nd edition of this book presents a variety of image analysis applications, reviews their precise mathematics and shows how to discretize them. For the mathematical community, the book shows the contribution of mathematics to this domain and highlights unsolved theoretical questions. For the computer vision community, it presents a clear, self-contained and global overview of the mathematics involved in image processing problems. The second edition offers a review of progress in image processing applications covered by the PDE framework, and updates the existing material. The book also provides programming tools for creating simulations with minimal effort.

1,279 citations

Book
01 Jan 2001

771 citations

Proceedings ArticleDOI
13 Nov 1994
TL;DR: The authors propose a deterministic strategy based on alternate minimizations over the image and an auxiliary variable, yielding two algorithms, ARTUR and LEGEND, which are applied to the problem of SPECT reconstruction.
Abstract: Many image processing problems are ill-posed and must be regularized. Usually, a roughness penalty is imposed on the solution. The difficulty is to avoid the smoothing of edges, which are very important attributes of the image. The authors first give sufficient conditions for the design of such an edge-preserving regularization. Under these conditions, it is possible to introduce an auxiliary variable whose role is twofold. Firstly, it marks the discontinuities and ensures their preservation from smoothing. Secondly, it makes the criterion half-quadratic. The optimization is then easier. The authors propose a deterministic strategy, based on alternate minimizations on the image and the auxiliary variable. This yields two algorithms, ARTUR and LEGEND. The authors apply these algorithms to the problem of SPECT reconstruction.
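For orientation, these are the two standard half-quadratic constructions commonly associated with ARTUR (multiplicative auxiliary variable) and LEGEND (additive auxiliary variable); the notation, including the dual functions determined by the potential, is conventional rather than quoted from the paper.

```latex
% Multiplicative form (ARTUR-type): for a suitable edge-preserving \varphi,
\[
\varphi(t) = \min_{b}\,\bigl( b\,t^{2} + \psi(b) \bigr) .
\]
% Additive form (LEGEND-type):
\[
\varphi(t) = \min_{b}\,\bigl( (t - b)^{2} + \xi(b) \bigr) .
\]
% In both cases, for fixed b the augmented criterion is quadratic in the
% image, which is what makes the alternate minimization tractable.
```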

628 citations

Journal ArticleDOI
TL;DR: A functional is derived whose minimizer corresponds to the denoised image the authors want to recover; the existence of a minimizer is proved, and the capability of the model is shown on numerical examples.
Abstract: This paper focuses on the problem of multiplicative noise removal. We draw our inspiration from the modeling of speckle noise. By using a MAP estimator, we can derive a functional whose minimizer corresponds to the denoised image we want to recover. Although the functional is not convex, we prove the existence of a minimizer and we show the capability of our model on some numerical examples. We study the associated evolution problem, for which we derive existence and uniqueness results for the solution. We prove the convergence of an implicit scheme to compute the solution.
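To make the MAP step concrete: assuming multiplicative Gamma speckle f = u·n with n of mean one (the usual speckle model in this line of work), the negative log-likelihood at each pixel is, up to constants, log u + f/u, so the MAP estimate with a total variation prior minimizes a functional of the following form. This is the standard way the model is written, given here as a sketch rather than verbatim from the paper.

```latex
% Data term from -log p(f | u) under Gamma speckle, plus TV regularization:
\[
J(u) = \int_{\Omega} \Bigl( \log u + \frac{f}{u} \Bigr)\,dx
       + \lambda \int_{\Omega} \lvert Du \rvert .
\]
% The data term is nonconvex in u, which is why the existence of a
% minimizer and the convergence of the numerical scheme require proof.
```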

516 citations


Cited by
Journal ArticleDOI
TL;DR: A new model for active contours to detect objects in a given image is proposed, based on techniques of curve evolution, the Mumford-Shah (1989) functional for segmentation, and level sets; the model can detect objects whose boundaries are not necessarily defined by the gradient.
Abstract: We propose a new model for active contours to detect objects in a given image, based on techniques of curve evolution, the Mumford-Shah (1989) functional for segmentation, and level sets. Our model can detect objects whose boundaries are not necessarily defined by the gradient. We minimize an energy which can be seen as a particular case of the minimal partition problem. In the level set formulation, the problem becomes a "mean curvature flow"-like evolution of the active contour, which will stop on the desired boundary. However, the stopping term does not depend on the gradient of the image, as in the classical active contour models, but is instead related to a particular segmentation of the image. We give a numerical algorithm using finite differences. Finally, we present various experimental results, in particular some examples for which the classical snake methods based on the gradient are not applicable. Also, the initial curve can be anywhere in the image, and interior contours are automatically detected.
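The energy minimized by this model is usually written as follows, with f the image, C the evolving curve, and c1, c2 the average intensities inside and outside C; the notation is the conventional one, given here for orientation.

```latex
\[
F(c_1, c_2, C) = \mu\,\mathrm{Length}(C) + \nu\,\mathrm{Area}(\mathrm{inside}(C))
  + \lambda_1 \int_{\mathrm{inside}(C)} \lvert f(x) - c_1 \rvert^{2}\,dx
  + \lambda_2 \int_{\mathrm{outside}(C)} \lvert f(x) - c_2 \rvert^{2}\,dx .
\]
% For fixed C, the optimal c_1 and c_2 are the region means, so the
% stopping criterion depends on region statistics rather than on the
% image gradient: detected boundaries need not be edges.
```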

10,404 citations

Journal ArticleDOI
TL;DR: This paper develops a simple, easy-to-implement first-order algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank, and provides a framework in which such algorithms can be understood in terms of well-known Lagrange multiplier algorithms.
Abstract: This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem and arises in many important applications as in the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative, produces a sequence of matrices $\{\boldsymbol{X}^k,\boldsymbol{Y}^k\}$, and at each step mainly performs a soft-thresholding operation on the singular values of the matrix $\boldsymbol{Y}^k$. There are two remarkable features making this attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates $\{\boldsymbol{X}^k\}$ is empirically nondecreasing. Both these facts allow the algorithm to make use of very minimal storage space and keep the computational cost of each iteration low. On the theoretical side, we provide a convergence analysis showing that the sequence of iterates converges. On the practical side, we provide numerical examples in which $1,000\times1,000$ matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate that our approach is amenable to very large scale problems by recovering matrices of rank about 10 with nearly a billion unknowns from just about 0.4% of their sampled entries. Our methods are connected with the recent literature on linearized Bregman iterations for $\ell_1$ minimization, and we develop a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms.
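The iteration described above is short enough to sketch directly; the parameter values and stopping rule below are illustrative assumptions, not the tuned choices from the paper.

```python
import numpy as np

def svt_complete(M, mask, tau=5000.0, delta=1.9, iters=200, tol=1e-4):
    """Singular value thresholding for matrix completion.

    M    : matrix with the observed entries filled in (others arbitrary)
    mask : boolean array, True on the sampled set Omega
    """
    Y = np.zeros_like(M, dtype=float)
    norm_obs = np.linalg.norm(np.where(mask, M, 0.0))
    X = Y
    for _ in range(iters):
        # Shrinkage step X^k = D_tau(Y^{k-1}): soft-threshold the
        # singular values of Y.
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt
        # Dual step Y^k = Y^{k-1} + delta * P_Omega(M - X^k): correct Y
        # on the observed entries only, which keeps Y sparse in practice.
        residual = np.where(mask, M - X, 0.0)
        Y += delta * residual
        if np.linalg.norm(residual) <= tol * norm_obs:
            break
    return X
```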

5,276 citations

Journal ArticleDOI
TL;DR: A general mathematical and experimental methodology for comparing and classifying classical image denoising algorithms is defined, together with a nonlocal means (NL-means) algorithm that addresses the preservation of structure in a digital image.
Abstract: The search for efficient image denoising methods is still a valid challenge at the crossing of functional analysis and statistics. In spite of the sophistication of the recently proposed methods, most algorithms have not yet attained a desirable level of applicability.
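For reference, this is a minimal (unoptimized) sketch of the NL-means idea the paper proposes: each pixel is replaced by a weighted average of pixels whose surrounding patches look similar. The parameter names and the plain quadruple loop are illustrative, not the paper's implementation.

```python
import numpy as np

def nl_means(img, patch=3, window=10, h=0.1):
    """NL-means denoising: the weight of pixel (k, l) in the estimate of
    pixel (i, j) decays with the squared distance between their patches."""
    half = patch // 2
    pad = np.pad(img.astype(float), half, mode="reflect")
    H, W = img.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            p = pad[i:i + patch, j:j + patch]          # patch around (i, j)
            i0, i1 = max(0, i - window), min(H, i + window + 1)
            j0, j1 = max(0, j - window), min(W, j + window + 1)
            wsum = acc = 0.0
            for k in range(i0, i1):
                for l in range(j0, j1):
                    q = pad[k:k + patch, l:l + patch]  # patch around (k, l)
                    w = np.exp(-np.sum((p - q) ** 2) / h**2)
                    wsum += w
                    acc += w * img[k, l]
            out[i, j] = acc / wsum                     # normalized average
    return out
```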

4,153 citations