Author

Antonin Chambolle

Bio: Antonin Chambolle is an academic researcher from École Polytechnique. The author has contributed to research in topics including mean curvature and mean curvature flow. The author has an h-index of 52, has co-authored 201 publications, and has received 16,156 citations. Previous affiliations of Antonin Chambolle include Université Paris-Saclay and Paris Dauphine University.


Papers
Journal ArticleDOI
TL;DR: A first-order primal-dual algorithm for non-smooth convex optimization problems with known saddle-point structure can achieve O(1/N^2) convergence on problems where the primal or the dual objective is uniformly convex, and linear convergence, i.e. O(ω^N) for some ω ∈ (0,1), on smooth problems.
Abstract: In this paper we study a first-order primal-dual algorithm for non-smooth convex optimization problems with known saddle-point structure. We prove convergence to a saddle-point with rate O(1/N) in finite dimensions for the complete class of problems. We further show accelerations of the proposed algorithm to yield improved rates on problems with some degree of smoothness. In particular we show that we can achieve O(1/N^2) convergence on problems where the primal or the dual objective is uniformly convex, and we can show linear convergence, i.e. O(ω^N) for some ω ∈ (0,1), on smooth problems. The wide applicability of the proposed algorithm is demonstrated on several imaging problems such as image denoising, image deconvolution, image inpainting, motion estimation and multi-label image segmentation.
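
As a concrete instance, the sketch below applies the primal-dual iteration to ROF total-variation denoising, min_u ||∇u||_1 + (λ/2)||u - f||^2, one of the saddle-point problems covered by the algorithm. The step sizes τ = σ = 1/√8 (from the bound ||∇||^2 ≤ 8 for forward differences), θ = 1, and λ are illustrative, and the helper names are ours; with a strongly convex fidelity term, the accelerated variant mentioned in the abstract would instead adapt θ, τ, σ per iteration.

```python
import numpy as np

def grad(u):
    # Forward-difference gradient; the last row/column of differences is zero (Neumann boundary).
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Discrete divergence, the negative adjoint of grad above.
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def tv_denoise_pdhg(f, lam=8.0, n_iter=200):
    """Primal-dual iterations for min_u ||grad u||_1 + (lam/2) * ||u - f||^2."""
    f = np.asarray(f, dtype=float)
    tau = sigma = 1.0 / np.sqrt(8.0)        # tau * sigma * ||grad||^2 <= 1
    theta = 1.0
    u = f.copy(); u_bar = f.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        # Dual ascent step followed by pointwise projection onto {|p| <= 1}.
        gx, gy = grad(u_bar)
        px, py = px + sigma * gx, py + sigma * gy
        norm = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2))
        px, py = px / norm, py / norm
        # Primal step: the proximal map of (lam/2)||u - f||^2 is a simple averaging with f.
        u_old = u
        u = (u + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
        # Over-relaxation of the primal variable.
        u_bar = u + theta * (u - u_old)
    return u
```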

4,487 citations

Journal ArticleDOI
TL;DR: An alternative approach is proposed for minimizing the minimum of several convex functionals, including a variant of the original TV minimization problem that correctly handles some situations where TV fails.
Abstract: We study here a classical image denoising technique introduced by L. Rudin and S. Osher a few years ago, namely the constrained minimization of the total variation (TV) of the image. First, we give results of existence and uniqueness and prove the link between the constrained minimization problem and the minimization of an associated Lagrangian functional. Then we describe a relaxation method for computing the solution, and give a proof of convergence. After this, we explain why the TV-based model is well suited to the recovery of some images and not of others. We eventually propose an alternative approach whose purpose is to handle the minimization of the minimum of several convex functionals. We propose for instance a variant of the original TV minimization problem that handles correctly some situations where TV fails.
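
The relaxation scheme analyzed in the paper is not reproduced here; purely as an illustration of the Lagrangian formulation, the sketch below runs plain gradient descent on the smoothed surrogate E(u) = Σ sqrt(|∇u|^2 + ε^2) + (λ/2)||u - f||^2. The smoothing parameter ε, the weight λ, and the step-size rule are illustrative choices of ours, not the paper's method.

```python
import numpy as np

def tv_lagrangian_descent(f, lam=0.1, eps=0.05, n_iter=500):
    """Gradient descent on a smoothed TV Lagrangian (illustrative surrogate only)."""
    f = np.asarray(f, dtype=float)
    step = 1.0 / (8.0 / eps + lam)        # conservative step from a crude Lipschitz estimate
    u = f.copy()
    for _ in range(n_iter):
        # Forward-difference gradient of u.
        ux = np.zeros_like(u); uy = np.zeros_like(u)
        ux[:-1, :] = u[1:, :] - u[:-1, :]
        uy[:, :-1] = u[:, 1:] - u[:, :-1]
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        px, py = ux / mag, uy / mag       # smoothed, bounded "edge field"
        # Divergence (negative adjoint of the forward-difference gradient).
        dx = np.zeros_like(px); dy = np.zeros_like(py)
        dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
        dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
        u = u - step * (-(dx + dy) + lam * (u - f))
    return u
```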

1,658 citations

Journal ArticleDOI
TL;DR: Extensive computations are presented that support the hypothesis that near-optimal shrinkage parameters can be derived if one knows (or can estimate) only two parameters about an image F: the largest α for which F ∈ B^α_q(L^q(I)), 1/q = α/2 + 1/2, and the norm |F|_{B^α_q(L^q(I))}.
Abstract: This paper examines the relationship between wavelet-based image processing algorithms and variational problems. Algorithms are derived as exact or approximate minimizers of variational problems; in particular, we show that wavelet shrinkage can be considered the exact minimizer of the following problem. Given an image F defined on a square I, minimize over all g in the Besov space B^1_1(L^1(I)) the functional |F - g|^2_{L^2(I)} + λ|g|_{B^1_1(L^1(I))}. We use the theory of nonlinear wavelet image compression in L^2(I) to derive accurate error bounds for noise removal through wavelet shrinkage applied to images corrupted with i.i.d., mean zero, Gaussian noise. A new signal-to-noise ratio (SNR), which we claim more accurately reflects the visual perception of noise in images, arises in this derivation. We present extensive computations that support the hypothesis that near-optimal shrinkage parameters can be derived if one knows (or can estimate) only two parameters about an image F: the largest α for which F ∈ B^α_q(L^q(I)), 1/q = α/2 + 1/2, and the norm |F|_{B^α_q(L^q(I))}. Both theoretical and experimental results indicate that our choice of shrinkage parameters yields uniformly better results than Donoho and Johnstone's VisuShrink procedure; an example suggests, however, that Donoho and Johnstone's (1994, 1995, 1996) SureShrink method, which uses a different shrinkage parameter for each dyadic level, achieves a lower error than our procedure.
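
In an orthonormal wavelet basis this variational problem decouples over coefficients, and (up to normalization of the Besov norm) its minimizer is obtained by soft-thresholding the detail coefficients by λ/2. A minimal sketch using PyWavelets; the wavelet, decomposition level, and λ are illustrative:

```python
import pywt  # PyWavelets

def besov_shrink(image, lam=0.2, wavelet='db4', level=4):
    """Soft-threshold wavelet detail coefficients by lam/2 (Besov-penalized denoising sketch)."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    shrunk = [approx]                         # keep the coarse approximation untouched
    for (cH, cV, cD) in details:
        shrunk.append(tuple(pywt.threshold(c, lam / 2.0, mode='soft')
                            for c in (cH, cV, cD)))
    # Reconstruct and crop, since waverec2 may pad odd-sized images.
    return pywt.waverec2(shrunk, wavelet)[:image.shape[0], :image.shape[1]]
```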

810 citations

Journal ArticleDOI
TL;DR: The state of the art in continuous optimization methods for such problems is described, with particular emphasis on optimal first-order schemes that can deal with the typically non-smooth and large-scale objective functions used in imaging problems.
Abstract: A large number of imaging problems reduce to the optimization of a cost function with typical structural properties. The aim of this paper is to describe the state of the art in continuous optimization methods for such problems, and present the most successful approaches and their interconnections. We place particular emphasis on optimal first-order schemes that can deal with typical non-smooth and large-scale objective functions used in imaging problems. We illustrate and compare the different algorithms using classical non-smooth problems in imaging, such as denoising and deblurring. Moreover, we present applications of the algorithms to more advanced problems, such as magnetic resonance imaging, multilabel image segmentation, optical flow estimation, stereo matching, and classification.

477 citations

Proceedings ArticleDOI
06 Nov 2011
TL;DR: This paper proposes simple, easy-to-compute diagonal preconditioners for the first-order primal-dual algorithm, for which convergence of the algorithm is guaranteed without the need to compute any step-size parameters.
Abstract: In this paper we study preconditioning techniques for the first-order primal-dual algorithm proposed in [5]. In particular, we propose simple and easy to compute diagonal preconditioners for which convergence of the algorithm is guaranteed without the need to compute any step size parameters. As a by-product, we show that for a certain instance of the preconditioning, the proposed algorithm is equivalent to the old and widely unknown alternating step method for monotropic programming [7]. We show numerical results on general linear programming problems and a few standard computer vision problems. In all examples, the preconditioned algorithm significantly outperforms the algorithm of [5].
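
In the paper's setting, the diagonal preconditioners for a linear operator with matrix entries K_ij are τ_j = 1 / Σ_i |K_ij|^(2-α) and σ_i = 1 / Σ_j |K_ij|^α; for α = 1 these are simply inverse absolute column and row sums, so no operator norm has to be estimated. A small sketch (the function name and the guard against all-zero rows or columns are ours):

```python
import numpy as np

def diagonal_preconditioners(K, alpha=1.0, tiny=1e-12):
    """Element-wise step sizes for the preconditioned primal-dual algorithm:
    tau_j   = 1 / sum_i |K_ij|^(2 - alpha)   (primal, one per column)
    sigma_i = 1 / sum_j |K_ij|^alpha         (dual, one per row)."""
    A = np.abs(np.asarray(K, dtype=float))
    tau = 1.0 / np.maximum((A ** (2.0 - alpha)).sum(axis=0), tiny)
    sigma = 1.0 / np.maximum((A ** alpha).sum(axis=1), tiny)
    return tau, sigma
```

These diagonal matrices replace the scalar step sizes τ and σ of the unpreconditioned iteration.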

474 citations


Cited by
Journal ArticleDOI
TL;DR: A new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically.
Abstract: We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity and thus is adequate for solving large-scale problems even with dense matrix data. However, such methods are also known to converge quite slowly. In this paper we present a new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA which is shown to be faster than ISTA by several orders of magnitude.
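
For concreteness, here is a sketch of FISTA applied to the l1-regularized least-squares problem min_x (1/2)||Ax - b||^2 + λ||x||_1; the proximal step is a soft-threshold, and the t_k extrapolation is what lifts ISTA's O(1/k) rate to O(1/k^2). Function name and parameter defaults are illustrative:

```python
import numpy as np

def fista_lasso(A, b, lam, n_iter=200):
    """FISTA for min_x 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
    for _ in range(n_iter):
        z = y - A.T @ (A @ y - b) / L              # gradient step at the extrapolated point
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-threshold (ISTA step)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)               # FISTA extrapolation
        x, t = x_new, t_new
    return x
```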

11,413 citations

Journal ArticleDOI
TL;DR: This work addresses the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise is to be removed from a given image, and uses the K-SVD algorithm to obtain a dictionary that describes the image content effectively.
Abstract: We address the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise is to be removed from a given image. The approach taken is based on sparse and redundant representations over trained dictionaries. Using the K-SVD algorithm, we obtain a dictionary that describes the image content effectively. Two training options are considered: using the corrupted image itself, or training on a corpus of high-quality images. Since the K-SVD is limited in handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches in every location in the image. We show how such Bayesian treatment leads to a simple and effective denoising algorithm. This leads to state-of-the-art denoising performance, equivalent to and sometimes surpassing recently published leading alternative denoising methods.
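
The overall pipeline (extract overlapping patches, sparse-code them over a trained dictionary, average the overlapping reconstructions) can be sketched as below. This is not K-SVD itself: it substitutes scikit-learn's MiniBatchDictionaryLearning for the K-SVD dictionary update and omits the global-prior refinements; patch size, dictionary size, and sparsity level are illustrative.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def patch_sparse_denoise(noisy, patch_size=(8, 8), n_atoms=100, n_nonzero=3):
    """Denoise by sparse coding of patches over a dictionary trained on the noisy image itself."""
    patches = extract_patches_2d(noisy, patch_size)           # all overlapping patches
    X = patches.reshape(patches.shape[0], -1)
    means = X.mean(axis=1, keepdims=True)
    X = X - means                                              # work on zero-mean patches
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm='omp',
                                       transform_n_nonzero_coefs=n_nonzero)
    codes = dico.fit(X).transform(X)                           # learn dictionary, then sparse-code
    X_hat = codes @ dico.components_ + means                   # sparse patch reconstructions
    return reconstruct_from_patches_2d(X_hat.reshape(patches.shape), noisy.shape)  # average overlaps
```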

5,493 citations

Journal ArticleDOI
TL;DR: This paper develops a simple, first-order, easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank, together with a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms.
Abstract: This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem and arises in many important applications as in the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative, produces a sequence of matrices $\{\boldsymbol{X}^k,\boldsymbol{Y}^k\}$, and at each step mainly performs a soft-thresholding operation on the singular values of the matrix $\boldsymbol{Y}^k$. There are two remarkable features making this attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates $\{\boldsymbol{X}^k\}$ is empirically nondecreasing. Both these facts allow the algorithm to make use of very minimal storage space and keep the computational cost of each iteration low. On the theoretical side, we provide a convergence analysis showing that the sequence of iterates converges. On the practical side, we provide numerical examples in which $1,000\times1,000$ matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate that our approach is amenable to very large scale problems by recovering matrices of rank about 10 with nearly a billion unknowns from just about 0.4% of their sampled entries. Our methods are connected with the recent literature on linearized Bregman iterations for $\ell_1$ minimization, and we develop a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms.
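
Stripped to its core, the iteration reads X_k = shrink_τ(Y_{k-1}), Y_k = Y_{k-1} + δ P_Ω(M - X_k), where shrink_τ soft-thresholds the singular values and P_Ω keeps only the observed entries. A minimal sketch; τ, δ, and the stopping tolerance are illustrative (the paper gives principled choices for them):

```python
import numpy as np

def svt_complete(M, mask, tau=500.0, delta=1.2, n_iter=200, tol=1e-4):
    """Singular value thresholding for matrix completion.
    mask is 1 on observed entries, 0 elsewhere; M holds the observed values there."""
    Y = np.zeros_like(M, dtype=float)
    X = Y
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt        # soft-threshold the singular values
        residual = mask * (M - X)
        Y = Y + delta * residual                       # ascent step on the observed entries
        if np.linalg.norm(residual) <= tol * np.linalg.norm(mask * M):
            break
    return X
```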

5,276 citations