Author

Mila Nikolova

Bio: Mila Nikolova is an academic researcher from École normale supérieure de Cachan. The author has contributed to research in the topics of Regularization (mathematics) and Impulse noise. The author has an h-index of 30 and has co-authored 101 publications receiving 9,587 citations. Previous affiliations of Mila Nikolova include Centre national de la recherche scientifique and Télécom ParisTech.


Papers
Journal Article
TL;DR: It is shown how certain nonconvex optimization problems that arise in image processing and computer vision can be restated as convex minimization problems, which allows, in particular, the finding of global minimizers via standard convex minimization schemes.
Abstract: We show how certain nonconvex optimization problems that arise in image processing and computer vision can be restated as convex minimization problems. This allows, in particular, the finding of global minimizers via standard convex minimization schemes.
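
A minimal sketch of the relax-and-threshold idea behind this result, under assumptions not taken from the paper: a binary 1D denoising problem with labels relaxed from {0,1} to the interval [0,1], a smoothed total-variation regularizer, and projected gradient descent as the convex solver; thresholding the relaxed solution then yields a binary minimizer. Function and parameter names are illustrative.

```python
import numpy as np

def relax_and_threshold(f, lam=1.0, eps=1e-3, step=0.05, iters=2000):
    """Binary denoising: relax u in {0,1} to u in [0,1], minimize a convex
    smoothed-TV + quadratic-fidelity energy, then threshold at 0.5."""
    u = np.clip(f.copy(), 0.0, 1.0)
    for _ in range(iters):
        d = np.diff(u)                       # forward differences
        w = d / np.sqrt(d**2 + eps**2)       # gradient of smoothed |.|
        g = lam * (u - f)                    # gradient of the fidelity term
        g[:-1] -= w                          # assemble the TV gradient
        g[1:]  += w
        u = np.clip(u - step * g, 0.0, 1.0)  # projected gradient step, stays in [0,1]
    return (u > 0.5).astype(float)           # threshold the relaxed minimizer

# Example: a noisy binary step signal
rng = np.random.default_rng(0)
f = (np.arange(200) > 100).astype(float) + 0.3 * rng.standard_normal(200)
u = relax_and_threshold(f)
```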

1,142 citations

Journal Article
TL;DR: This scheme can remove salt-and-pepper noise with a noise level as high as 90%, and the restored images show a significant improvement over those produced by nonlinear filters or regularization methods alone.
Abstract: This paper proposes a two-phase scheme for removing salt-and-pepper impulse noise. In the first phase, an adaptive median filter is used to identify pixels which are likely to be contaminated by noise (noise candidates). In the second phase, the image is restored using a specialized regularization method that applies only to those selected noise candidates. In terms of edge preservation and noise suppression, our restored images show a significant improvement compared to those restored using nonlinear filters or regularization methods alone. Our scheme can remove salt-and-pepper noise with a noise level as high as 90%.
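
A minimal sketch of the two-phase idea, under simplifying assumptions not from the paper: detection is reduced to flagging the extreme values 0 and 255 of an 8-bit image, with scipy.ndimage.median_filter standing in (non-adaptively) for the adaptive median filter, and the second phase is reduced to a smoothed-TV gradient descent that updates only the flagged pixels, so uncorrupted pixels are preserved exactly.

```python
import numpy as np
from scipy.ndimage import median_filter

def two_phase_denoise(img, iters=200, step=0.2, eps=1e-3):
    """Phase 1: flag salt-and-pepper candidates. Phase 2: edge-preserving
    descent that modifies only the candidates, leaving other pixels intact."""
    u = img.astype(float)
    noisy = (img == 0) | (img == 255)            # noise candidates
    u[noisy] = median_filter(u, size=3)[noisy]   # rough initialization of candidates
    for _ in range(iters):
        gx = np.diff(u, axis=0)
        gy = np.diff(u, axis=1)
        wx = gx / np.sqrt(gx**2 + eps**2)        # smoothed-TV weights
        wy = gy / np.sqrt(gy**2 + eps**2)
        g = np.zeros_like(u)
        g[:-1, :] -= wx; g[1:, :] += wx          # divergence-style assembly
        g[:, :-1] -= wy; g[:, 1:] += wy
        u[noisy] -= step * g[noisy]              # update noise candidates only
    return u
```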

1,078 citations

Journal Article
TL;DR: The variational method furnishes a new framework for the processing of data corrupted with outliers and different kinds of impulse noise, and it is accurate and stable, as demonstrated by the experiments.
Abstract: We consider signal and image restoration using convex cost-functions composed of a non-smooth data-fidelity term and a smooth regularization term. We provide a convergent method to minimize such cost-functions. In order to restore data corrupted with outliers and impulsive noise, we focus on cost-functions composed of an ℓ1 data-fidelity term and an edge-preserving regularization term. The analysis of the minimizers of these cost-functions provides a natural justification of the method. It is shown that, because of the ℓ1 data-fidelity, these minimizers involve an implicit detection of outliers. Uncorrupted (regular) data entries are fitted exactly while outliers are replaced by estimates determined by the regularization term, independently of the exact value of the outliers. The resultant method is accurate and stable, as demonstrated by the experiments. A crucial advantage over alternative filtering methods is the possibility to convey adequate priors about the restored signals and images, such as the presence of edges. Our variational method furnishes a new framework for the processing of data corrupted with outliers and different kinds of impulse noise.
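
The paper supplies the actual convergent minimization method and its analysis; as a rough illustration only, the sketch below smooths both the ℓ1 fidelity and an edge-preserving difference penalty and runs plain gradient descent on a 1D signal. Regular entries end up fitted almost exactly while outliers are pulled to values dictated by the regularization term, mirroring the implicit outlier detection described above. Names and parameters are assumptions.

```python
import numpy as np

def l1_fidelity_restore(f, beta=0.5, eps=1e-3, step=0.01, iters=3000):
    """Minimize sum_i s(u_i - f_i) + beta * sum_i s(u_{i+1} - u_i),
    where s(t) = sqrt(t^2 + eps^2) is a smooth surrogate for |t|."""
    u = f.copy()
    for _ in range(iters):
        r = u - f
        g = r / np.sqrt(r**2 + eps**2)   # smoothed l1 data-fidelity gradient
        d = np.diff(u)
        w = d / np.sqrt(d**2 + eps**2)   # smoothed edge-preserving penalty gradient
        g[:-1] -= beta * w
        g[1:]  += beta * w
        u -= step * g
    return u
```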

615 citations

Journal Article
TL;DR: The goal of this paper is to provide a systematic analysis of the convergence rate achieved by the multiplicative and additive half-quadratic regularizations and to determine upper bounds for their root-convergence factors.
Abstract: We address the minimization of regularized convex cost functions which are customarily used for edge-preserving restoration and reconstruction of signals and images. In order to accelerate computation, the multiplicative and the additive half-quadratic reformulations of the original cost function have been pioneered in Geman and Reynolds [IEEE Trans. Pattern Anal. Machine Intelligence, 14 (1992), pp. 367--383] and Geman and Yang [IEEE Trans. Image Process., 4 (1995), pp. 932--946]. The alternate minimization of the resultant (augmented) cost functions has a simple explicit form. The goal of this paper is to provide a systematic analysis of the convergence rate achieved by these methods. For the multiplicative and additive half-quadratic regularizations, we determine upper bounds for their root-convergence factors. The bound for the multiplicative form is seen to be always smaller than the bound for the additive form. Experiments show that the number of iterations required for convergence for the multiplicative form is always less than that for the additive form. However, the computational cost of each iteration is much higher for the multiplicative form than for the additive form. The global assessment is that minimization using the additive form of half-quadratic regularization is faster than using the multiplicative form. When the additive form is applicable, it is hence recommended. Extensive experiments demonstrate that in our MATLAB implementation, both methods are substantially faster (in terms of computational times) than the standard MATLAB Optimization Toolbox routines used in our comparison study.
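
As one concrete instantiation (assumed here, not copied from the paper): the additive Geman-Yang scheme for 1D denoising with the Charbonnier potential phi(t) = sqrt(t^2 + alpha^2). The auxiliary update is closed-form, and the u-update solves a linear system whose matrix is fixed across iterations, which is exactly why the additive form is cheap per iteration.

```python
import numpy as np

def additive_half_quadratic(f, beta=2.0, alpha=0.05, iters=50):
    """Additive (Geman-Yang) half-quadratic minimization of
    J(u) = 0.5*||u - f||^2 + beta * sum phi((Du)_i), phi(t) = sqrt(t^2 + alpha^2)."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)            # forward-difference operator, (n-1) x n
    gamma = alpha                             # largest gamma keeping the augmentation valid
    M = np.eye(n) + (beta / gamma) * D.T @ D  # u-step matrix: constant, could be prefactored
    u = f.copy()
    for _ in range(iters):
        t = D @ u
        b = t - gamma * t / np.sqrt(t**2 + alpha**2)            # closed-form auxiliary update
        u = np.linalg.solve(M, f + (beta / gamma) * (D.T @ b))  # quadratic u-step
    return u
```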

417 citations


Cited by
Journal Article
TL;DR: A first-order primal-dual algorithm for non-smooth convex optimization problems with known saddle-point structure can achieve O(1/N^2) convergence on problems where the primal or the dual objective is uniformly convex, and linear convergence, i.e. O(ω^N) for some ω ∈ (0,1), on smooth problems.
Abstract: In this paper we study a first-order primal-dual algorithm for non-smooth convex optimization problems with known saddle-point structure. We prove convergence to a saddle-point with rate O(1/N) in finite dimensions for the complete class of problems. We further show accelerations of the proposed algorithm to yield improved rates on problems with some degree of smoothness. In particular we show that we can achieve O(1/N^2) convergence on problems where the primal or the dual objective is uniformly convex, and we can show linear convergence, i.e. O(ω^N) for some ω ∈ (0,1), on smooth problems. The wide applicability of the proposed algorithm is demonstrated on several imaging problems such as image denoising, image deconvolution, image inpainting, motion estimation and multi-label image segmentation.
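
A minimal sketch of the algorithm on one of its standard test problems, 1D TV-ℓ2 (ROF) denoising: a projection step on the dual variable, a proximal step on the primal variable, and over-relaxation. The step sizes satisfy the usual condition tau*sigma*||D||^2 <= 1, with ||D||^2 <= 4 for the 1D difference operator; this instantiation is a common textbook one, not lifted from the paper.

```python
import numpy as np

def pdhg_tv_denoise(f, lam=1.0, iters=300):
    """First-order primal-dual (PDHG) iteration for
    min_u lam*TV(u) + 0.5*||u - f||^2 on a 1D signal."""
    tau = sigma = 0.5                  # tau*sigma*||D||^2 <= 1, since ||D||^2 <= 4 in 1D
    u = f.copy()
    u_bar = u.copy()
    p = np.zeros(len(f) - 1)           # dual variable, one entry per finite difference
    for _ in range(iters):
        p = np.clip(p + sigma * np.diff(u_bar), -lam, lam)  # dual step: prox of TV conjugate
        div = np.zeros_like(u)         # divergence, i.e. -D^T p
        div[:-1] += p
        div[1:]  -= p
        u_old = u
        u = (u + tau * div + tau * f) / (1 + tau)  # primal step: prox of 0.5*||u - f||^2
        u_bar = 2 * u - u_old          # over-relaxation (extrapolation) step
    return u
```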

4,487 citations

Journal Article
TL;DR: This paper proposes gradient projection algorithms for the bound-constrained quadratic programming (BCQP) formulation of these problems and tests variants of this approach that select the line search parameters in different ways, including techniques based on the Barzilai-Borwein method.
Abstract: Many problems in signal processing and statistical inference involve finding sparse solutions to under-determined, or ill-conditioned, linear systems of equations. A standard approach consists in minimizing an objective function which includes a quadratic (squared ℓ2) error term combined with a sparseness-inducing regularization term. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution, and compressed sensing are a few well-known examples of this approach. This paper proposes gradient projection (GP) algorithms for the bound-constrained quadratic programming (BCQP) formulation of these problems. We test variants of this approach that select the line search parameters in different ways, including techniques based on the Barzilai-Borwein method. Computational experiments show that these GP approaches perform well in a wide range of applications, often being significantly faster (in terms of computation time) than competing methods. Although the performance of GP methods tends to degrade as the regularization term is de-emphasized, we show how they can be embedded in a continuation scheme to recover their efficient practical performance.
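
A minimal sketch of the BCQP reformulation with a Barzilai-Borwein step, under assumptions spelled out here rather than taken verbatim from the paper: splitting x = u - v with u, v >= 0 turns min 0.5*||Ax - y||^2 + tau*||x||_1 into a bound-constrained quadratic program over the nonnegative orthant, solved by projected gradient. The safeguards and line-search variants studied in the paper are omitted.

```python
import numpy as np

def gpsr_bb(A, y, tau=0.1, iters=200):
    """Gradient projection with Barzilai-Borwein steps for
    min_x 0.5*||A x - y||^2 + tau*||x||_1, via the split x = u - v, u, v >= 0."""
    n = A.shape[1]
    z = np.zeros(2 * n)                          # z = [u; v]
    def grad(z):
        x = z[:n] - z[n:]
        r = A.T @ (A @ x - y)
        return np.concatenate([r + tau, -r + tau])   # gradient of the BCQP objective
    g = grad(z)
    step = 1.0
    for _ in range(iters):
        z_new = np.maximum(z - step * g, 0.0)    # projection onto the nonnegative orthant
        g_new = grad(z_new)
        s = z_new - z
        d = g_new - g
        step = (s @ s) / max(s @ d, 1e-12)       # Barzilai-Borwein step length
        z, g = z_new, g_new
    return z[:n] - z[n:]
```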

3,488 citations

Journal Article
TL;DR: It is shown that various inverse problems in signal recovery can be formulated as the generic problem of minimizing the sum of two convex functions with certain regularity properties, which makes it possible to derive existence, uniqueness, characterization, and stability results in a unified and standardized fashion for a large class of apparently disparate problems.
Abstract: We show that various inverse problems in signal recovery can be formulated as the generic problem of minimizing the sum of two convex functions with certain regularity properties. This formulation makes it possible to derive existence, uniqueness, characterization, and stability results in a unified and standardized fashion for a large class of apparently disparate problems. Recent results on monotone operator splitting methods are applied to establish the convergence of a forward-backward algorithm to solve the generic problem. In turn, we recover, extend, and provide a simplified analysis for a variety of existing iterative methods. Applications to geometry/texture image decomposition schemes are also discussed. A novelty of our framework is to use extensively the notion of a proximity operator, which was introduced by Moreau in the 1960s.
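
In the special case where the smooth term is a least-squares fidelity and the non-smooth term is an ℓ1 penalty, whose proximity operator is soft-thresholding, the forward-backward algorithm reduces to iterative soft-thresholding. A minimal sketch under exactly those assumptions:

```python
import numpy as np

def forward_backward_l1(A, y, lam=0.1, iters=500):
    """Forward-backward splitting for min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2  # step in (0, 2/L), L = ||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - gamma * A.T @ (A @ x - y)    # forward (explicit gradient) step
        x = np.sign(x) * np.maximum(np.abs(x) - gamma * lam, 0.0)  # backward step: prox of l1
    return x
```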

2,645 citations

Journal Article
TL;DR: In this article, the authors introduce the concept of sure screening and propose a sure screening method that is based on correlation learning, called sure independence screening, to reduce dimensionality from high to a moderate scale that is below the sample size.
Abstract: Variable selection plays an important role in high dimensional statistical modelling, which nowadays appears in many areas and is key to various scientific discoveries. For problems of large scale or dimensionality p, accuracy of estimation and computational cost are two top concerns. Recently, Candes and Tao have proposed the Dantzig selector using L1-regularization and showed that it achieves the ideal risk up to a logarithmic factor log(p). Their innovative procedure and remarkable result are challenged when the dimensionality is ultrahigh, as the factor log(p) can be large and their uniform uncertainty principle can fail. Motivated by these concerns, we introduce the concept of sure screening and propose a sure screening method that is based on correlation learning, called sure independence screening, to reduce dimensionality from high to a moderate scale that is below the sample size. In a fairly general asymptotic framework, correlation learning is shown to have the sure screening property for even exponentially growing dimensionality. As a methodological extension, iterative sure independence screening is also proposed to enhance its finite sample performance. With dimension reduced accurately from high to below the sample size, variable selection can be improved in both speed and accuracy, and can then be accomplished by a well-developed method such as smoothly clipped absolute deviation, the Dantzig selector, lasso or adaptive lasso. The connections between these penalized least squares methods are also elucidated.
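
A minimal sketch of the screening step itself: rank features by absolute marginal correlation with the response and keep the top d, with d ≈ n/log(n) as one cutoff consistent with the paper's framework; a penalized method (lasso, SCAD, Dantzig selector) would then be fitted on the retained columns. Names and the default cutoff are illustrative.

```python
import numpy as np

def sure_independence_screening(X, y, d=None):
    """Keep the d features of X (n x p) most correlated with y,
    reducing dimensionality from p to a scale below the sample size n."""
    n, p = X.shape
    if d is None:
        d = int(n / np.log(n))                 # illustrative cutoff below the sample size
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize columns
    ys = (y - y.mean()) / y.std()
    omega = np.abs(Xs.T @ ys) / n              # marginal correlations with the response
    return np.argsort(omega)[::-1][:d]         # indices to pass to lasso/SCAD/Dantzig
```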

2,204 citations