
Showing papers by "Jean-Christophe Pesquet" published in 2012


Journal ArticleDOI
TL;DR: A primal-dual splitting algorithm is proposed for solving monotone inclusions involving a mixture of sums, linear compositions, and parallel sums of set-valued and Lipschitzian operators.
Abstract: We propose a primal-dual splitting algorithm for solving monotone inclusions involving a mixture of sums, linear compositions, and parallel sums of set-valued and Lipschitzian operators. An important feature of the algorithm is that the Lipschitzian operators present in the formulation can be processed individually via explicit steps, while the set-valued operators are processed individually via their resolvents. In addition, the algorithm is highly parallel in that most of its steps can be executed simultaneously. This work brings together and notably extends various types of structured monotone inclusion problems and their solution methods. The application to convex minimization problems is given special attention.

410 citations
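
For illustration, a minimal numerical sketch of a primal-dual splitting iteration, restricted to the far simpler two-operator setting (a Chambolle-Pock-type update with illustrative data and step sizes), not the paper's full mixture of sums, compositions, and parallel sums:

```python
import numpy as np

# Sketch: solve min_x 0.5*||x - y||^2 + lam*||D x||_1 with D a finite-difference
# operator. The nonsmooth term is handled through its resolvent (prox) and the
# linear composition through explicit matrix-vector products, mirroring the
# structure described in the abstract.
rng = np.random.default_rng(0)
n = 100
y = np.cumsum(rng.standard_normal(n))       # noisy piecewise-smooth data
D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]    # finite differences, (n-1) x n
lam = 2.0

Lnorm = np.linalg.norm(D, 2)                # operator norm of D
tau = sigma = 0.9 / Lnorm                   # ensures tau*sigma*||D||^2 < 1
x = x_bar = y.copy()
u = np.zeros(n - 1)

for _ in range(500):
    # dual step: prox of the conjugate of lam*||.||_1 is a box projection
    u = np.clip(u + sigma * (D @ x_bar), -lam, lam)
    # primal step: prox of 0.5*||. - y||^2 has a closed form
    x_new = (x - tau * (D.T @ u) + tau * y) / (1.0 + tau)
    x_bar = 2.0 * x_new - x                 # extrapolation step
    x = x_new
```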


Journal Article
TL;DR: An extension of the Douglas-Rachford algorithm including inertia parameters is proposed, and parallel versions are developed to handle sums of an arbitrary number of maximally monotone operators.
Abstract: The Douglas-Rachford algorithm is a popular iterative method for finding a zero of a sum of two maximally monotone operators defined on a Hilbert space. In this paper, we propose an extension of this algorithm including inertia parameters and develop parallel versions to deal with the case of a sum of an arbitrary number of maximally monotone operators. Based on this algorithm, parallel proximal algorithms are proposed to minimize over a linear subspace of a Hilbert space the sum of a finite number of proper, lower semicontinuous convex functions composed with linear operators. It is shown that particular cases of these methods are the simultaneous direction method of multipliers proposed by Setzer et al., the parallel proximal algorithm developed by Combettes and Pesquet, and a parallelized version of an algorithm proposed by Attouch and Soueycatt.

141 citations
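
A minimal sketch of a Douglas-Rachford iteration with an inertial extrapolation term, on a toy problem where both proximity operators are explicit; the inertia parameter alpha below is an illustrative choice, not the paper's tuning, and alpha = 0 recovers plain Douglas-Rachford:

```python
import numpy as np

# Inertial Douglas-Rachford sketch on min_x 0.5*||x - y||^2 + lam*||x||_1.
def prox_l1(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_quad(v, t, y):
    # prox of t * 0.5*||. - y||^2
    return (v + t * y) / (1.0 + t)

rng = np.random.default_rng(1)
y = rng.standard_normal(50)
lam, gamma, alpha = 0.5, 1.0, 0.2

z = np.zeros_like(y)
z_prev = z.copy()
for _ in range(200):
    w = z + alpha * (z - z_prev)          # inertial extrapolation
    x = prox_l1(w, gamma * lam)           # first resolvent
    refl = prox_quad(2 * x - w, gamma, y) # resolvent at the reflected point
    z_prev, z = z, w + refl - x           # Douglas-Rachford update
x = prox_l1(z, gamma * lam)               # recover the primal point
```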


Proceedings ArticleDOI
25 Mar 2012
TL;DR: This work proposes a convex optimization algorithm for the reconstruction of signals degraded by a linear operator and corrupted with mixed Poisson-Gaussian noise, and derives a primal-dual iterative scheme for minimizing the associated penalized criterion.
Abstract: A Poisson-Gaussian model accurately describes the noise present in many imaging systems such as CCD cameras or fluorescence microscopy. However most existing restoration strategies rely on approximations of the Poisson-Gaussian noise statistics. We propose a convex optimization algorithm for the reconstruction of signals degraded by a linear operator and corrupted with mixed Poisson-Gaussian noise. The originality of our approach consists of considering the exact continuous-discrete model corresponding to the data statistics. After establishing the Lipschitz differentiability of the Poisson-Gaussian log-likelihood, we derive a primal-dual iterative scheme for minimizing the associated penalized criterion. The proposed method is applicable to a large choice of penalty terms. The robustness of our scheme allows us to handle computational difficulties due to infinite sums arising from the computation of the gradient of the criterion. The proposed approach is validated on image restoration examples.

50 citations
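
To give an idea of the infinite sums the abstract mentions, here is a small sketch of the continuous-discrete Poisson-Gaussian negative log-likelihood evaluated with a truncated series; the fixed truncation bound kmax is a heuristic choice, not the paper's criterion, and the gradient computation of the paper is not shown:

```python
import numpy as np
from scipy.special import gammaln, logsumexp

# p(z | x) = sum_k e^{-x} x^k / k! * N(z; k, sigma^2): a Poisson count blurred
# by Gaussian read-out noise. The series is truncated at kmax and evaluated
# in the log domain for numerical stability.
def pg_neg_log_likelihood(z, x, sigma, kmax=200):
    k = np.arange(kmax + 1)
    log_terms = (-x + k * np.log(np.maximum(x, 1e-12)) - gammaln(k + 1)
                 - 0.5 * np.log(2 * np.pi * sigma**2)
                 - (z - k) ** 2 / (2 * sigma**2))
    return -logsumexp(log_terms)

# Example: likelihood of observing z = 10.3 for a few candidate intensities
for x in (5.0, 10.0, 15.0):
    print(x, pg_neg_log_likelihood(10.3, x, sigma=1.0))
```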


Proceedings ArticleDOI
02 May 2012
TL;DR: In this article, a new fully automatic approach for noise parameter estimation in the context of fluorescence imaging systems is presented, which consists of an adequate moment-based initialization followed by Expectation-Maximization iterations.
Abstract: In this paper, we present a new fully automatic approach for noise parameter estimation in the context of fluorescence imaging systems. In particular, we address the problem of Poisson-Gaussian noise modeling in the nonstationary case. In microscopy practice, the nonstationarity is due to the photobleaching effect. The proposed method consists of an adequate moment based initialization followed by Expectation-Maximization iterations. This approach is shown to provide reliable estimates of the mean and the variance of the Gaussian noise and of the scale parameter of Poisson noise, as well as of the photobleaching rates. The algorithm performance is demonstrated on both synthetic and real macro confocal laser scanning microscope image sequences.

36 citations
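
A sketch of the moment-based idea behind such an initialization, assuming a stationary model with no photobleaching (variable names and the regression recipe are illustrative, not the paper's exact method):

```python
import numpy as np

# With z = a*p + w, p ~ Poisson(lambda), w ~ N(mu, s^2), one has
# Var[z] = a*E[z] + (s^2 - a*mu), so regressing per-pixel temporal variances
# on temporal means recovers the Poisson scale a as the slope.
rng = np.random.default_rng(2)
a_true, mu, s = 3.0, 10.0, 4.0
lam = rng.uniform(5, 50, size=500)                  # one intensity per pixel
frames = (a_true * rng.poisson(lam, size=(200, 500))
          + rng.normal(mu, s, size=(200, 500)))

m = frames.mean(axis=0)                             # temporal means
v = frames.var(axis=0, ddof=1)                      # temporal variances
slope, intercept = np.polyfit(m, v, 1)              # affine fit
print("estimated Poisson scale a:", slope)          # close to 3.0
```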


Proceedings Article
18 Oct 2012
TL;DR: This paper proposes to extend a recent primal-dual proximal splitting approach by introducing a preconditioning strategy that is shown to significantly speed up the algorithm convergence and is illustrated through image restoration examples.
Abstract: This paper addresses the problem of recovering an image degraded by a linear operator and corrupted with an additive Gaussian noise with a signal-dependent variance. The considered observation model arises in several digital imaging devices. To solve this problem, a variational approach is adopted relying on a weighted least squares criterion which is penalized by a non-smooth function. In this context, the choice of an efficient optimization algorithm remains a challenging task. We propose here to extend a recent primal-dual proximal splitting approach by introducing a preconditioning strategy that is shown to significantly speed up the algorithm convergence. The good performance of the proposed method is illustrated through image restoration examples.

31 citations
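
For context, one classical diagonal preconditioning rule for primal-dual proximal splitting is the one of Pock and Chambolle (2011); the paper's specific strategy for the signal-dependent noise model may differ, so the sketch below is only indicative:

```python
import numpy as np

# For a linear operator K, choose per-coordinate steps
#   tau_j  = 1 / sum_i |K_ij|   and   sigma_i = 1 / sum_j |K_ij|,
# which guarantee convergence of the preconditioned primal-dual iteration
# without having to compute ||K||.
rng = np.random.default_rng(3)
K = rng.standard_normal((80, 120))

tau = 1.0 / np.abs(K).sum(axis=0)      # primal steps, one per column
sigma = 1.0 / np.abs(K).sum(axis=1)    # dual steps, one per row

# These replace the scalar steps in the iteration, e.g.
#   u <- prox_{sigma g*}(u + sigma * (K @ x_bar))   (elementwise sigma)
#   x <- prox_{tau f}(x - tau * (K.T @ u))          (elementwise tau)
```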


Journal ArticleDOI
TL;DR: It is shown that it is possible to relax these assumptions by considering a class of non-necessarily tight frame representations, thus offering the possibility of addressing a broader class of signal restoration problems and allowing both frame-analysis and frame-synthesis problems to be solved for various noise distributions.
Abstract: A fruitful approach for solving signal deconvolution problems consists of resorting to a frame-based convex variational formulation. In this context, parallel proximal algorithms and related alternating direction methods of multipliers have become popular optimization techniques to approximate iteratively the desired solution. Until now, in most of these methods, either Lipschitz differentiability properties or tight frame representations were assumed. In this paper, it is shown that it is possible to relax these assumptions by considering a class of non-necessarily tight frame representations, thus offering the possibility of addressing a broader class of signal restoration problems. In particular, it is possible to use non-necessarily maximally decimated filter banks with perfect reconstruction, which are common tools in digital signal processing. The proposed approach allows us to solve both frame analysis and frame synthesis problems for various noise distributions. In our simulations, it is applied to the deconvolution of data corrupted with Poisson noise or Laplacian noise by using (non-tight) discrete dual-tree wavelet representations and filter bank structures.

29 citations
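
A quick numerical way to check whether a filter-bank analysis operator is a frame, and whether that frame is tight, is to examine the extreme eigenvalues of F^T F; the filters below are arbitrary illustrative choices, not the dual-tree wavelets of the paper:

```python
import numpy as np
from scipy.linalg import circulant

# Two-channel undecimated filter bank as an analysis operator F built from
# circulant (circular-convolution) matrices. Frame bounds A and B are the
# extreme eigenvalues of F^T F; a tight frame gives A = B.
def analysis_matrix(filters, n):
    blocks = []
    for f in filters:
        c = np.zeros(n)
        c[:len(f)] = f
        blocks.append(circulant(c))          # circular convolution by f
    return np.vstack(blocks)

n = 32
h = np.array([0.7, 0.7, 0.1])                # "lowpass-like" channel
g = np.array([0.5, -0.5])                    # "highpass-like" channel
F = analysis_matrix([h, g], n)
eig = np.linalg.eigvalsh(F.T @ F)
A, B = eig.min(), eig.max()
print(f"frame bounds A={A:.3f}, B={B:.3f}; tight: {np.isclose(A, B)}")
```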


Journal ArticleDOI
TL;DR: This article investigates techniques for optimizing sparsity criteria by focusing on the use of an ℓ1 criterion instead of an ℓ2 one, and proposes to jointly optimize the prediction filters by using an algorithm that alternates between the optimization of the filters and the computation of the weights.
Abstract: Lifting schemes (LS) were found to be efficient tools for image coding purposes. Since LS-based decompositions depend on the choice of the prediction/update operators, many research efforts have been devoted to the design of adaptive structures. The most commonly used approaches optimize the prediction filters by minimizing the variance of the detail coefficients. In this article, we investigate techniques for optimizing sparsity criteria by focusing on the use of an ℓ1 criterion instead of an ℓ2 one. Since the output of a prediction filter may be used as an input for the other prediction filters, we then propose to optimize such a filter by minimizing a weighted ℓ1 criterion related to the global rate-distortion performance. More specifically, it will be shown that the optimization of the diagonal prediction filter depends on the optimization of the other prediction filters and vice versa. Related to this fact, we propose to jointly optimize the prediction filters by using an algorithm that alternates between the optimization of the filters and the computation of the weights. Experimental results show the benefits which can be drawn from the proposed optimization of the lifting operators.

22 citations
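
One standard route to an ℓ1 prediction criterion is iteratively reweighted least squares (IRLS); the sketch below uses that route on a one-dimensional toy predictor and is not the paper's exact algorithm or its rate-distortion weighting:

```python
import numpy as np

# Design a prediction filter p minimizing ||x_odd - N p||_1, where each row
# of N holds the two even neighbours of an odd sample.
rng = np.random.default_rng(4)
x = np.cumsum(rng.standard_normal(400))      # correlated test signal
even, odd = x[0::2], x[1::2]
N = np.column_stack([even[:-1], even[1:]])   # neighbours of each odd sample
t = odd[:len(N)]

p = np.linalg.lstsq(N, t, rcond=None)[0]     # l2 initialization
for _ in range(20):
    r = t - N @ p                            # current detail coefficients
    sw = 1.0 / np.sqrt(np.maximum(np.abs(r), 1e-6))   # IRLS weights
    p = np.linalg.lstsq(N * sw[:, None], sw * t, rcond=None)[0]
print("l1-optimized prediction filter:", p)
```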


Posted Content
TL;DR: A proximal approach is proposed to deal with a class of convex variational problems involving nonlinear constraints, using constraints based on Non-Local Total Variation; it leads to significant improvements in terms of convergence speed over existing algorithms for solving similar constrained problems.
Abstract: We propose a proximal approach to deal with a class of convex variational problems involving nonlinear constraints. A large family of constraints, proven to be effective in the solution of inverse problems, can be expressed as the lower level set of a sum of convex functions evaluated over different, but possibly overlapping, blocks of the signal. For such constraints, the associated projection operator generally does not have a simple form. We circumvent this difficulty by splitting the lower level set into as many epigraphs as functions involved in the sum. A closed half-space constraint is also enforced, in order to limit the sum of the introduced epigraphical variables to the upper bound of the original lower level set. In this paper, we focus on a family of constraints involving linear transforms of distance functions to a convex set or $\ell_{1,p}$ norms with $p\in \{1,2,\infty\}$. In these cases, the projection onto the epigraph of the involved function has a closed form expression. The proposed approach is validated in the context of image restoration with missing samples, by making use of constraints based on Non-Local Total Variation. Experiments show that our method leads to significant improvements in terms of convergence speed over existing algorithms for solving similar constrained problems. A second application to a pulse shape design problem is provided in order to illustrate the flexibility of the proposed approach.

14 citations
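
As an example of the closed-form epigraphical projections involved, here is the projection onto the epigraph of the Euclidean norm (the second-order cone), corresponding to the p = 2 case; the ℓ1 and ℓ∞ cases have their own formulas:

```python
import numpy as np

# Projection onto epi||.||_2 = {(x, t) : ||x||_2 <= t}.
def proj_epi_l2(x, t):
    nx = np.linalg.norm(x)
    if nx <= t:                      # already in the epigraph
        return x, t
    if nx <= -t:                     # projects onto the apex of the cone
        return np.zeros_like(x), 0.0
    beta = 0.5 * (1.0 + t / nx)      # scaling factor of the SOC projection
    return beta * x, beta * nx

x, t = np.array([3.0, 4.0]), 2.0     # ||x|| = 5 > t
px, pt = proj_epi_l2(x, t)
print(px, pt)                        # ||px|| == pt == 3.5
```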


Posted Content
01 Jan 2012
TL;DR: The 4D-UWR-SENSE algorithm outperforms the SENSE reconstruction at the subject and group levels (15 subjects) for different contrasts of interest (e.g., motor or computation tasks) and using different parallel acceleration factors on 2×2×3 mm³ EPI images.
Abstract: Parallel MRI is a fast imaging technique that enables the acquisition of highly resolved images in space and/or time. The performance of parallel imaging strongly depends on the reconstruction algorithm, which can proceed either in the original k-space (GRAPPA, SMASH) or in the image domain (SENSE-like methods). To improve the performance of the widely used SENSE algorithm, 2D- or slice-specific regularization in the wavelet domain has been deeply investigated. In this paper, we extend this approach using 3D-wavelet representations in order to handle all slices together and address reconstruction artifacts which propagate across adjacent slices. The gain induced by such extension (3D Unconstrained Wavelet Regularized SENSE: 3D-UWR-SENSE) is validated on anatomical image reconstruction where no temporal acquisition is considered. Another important extension accounts for temporal correlations that exist between successive scans in functional MRI (fMRI). In addition to the case of 2D+t acquisition schemes addressed by some other methods like kt-FOCUSS, our approach allows us to deal with 3D+t acquisition schemes which are widely used in neuroimaging. The resulting 3D-UWR-SENSE and 4D-UWR-SENSE reconstruction schemes are fully unsupervised in the sense that all regularization parameters are estimated in the maximum likelihood sense on a reference scan. The gain induced by such extensions is illustrated on both anatomical and functional image reconstruction, and also measured in terms of statistical sensitivity for the 4D-UWR-SENSE approach during a fast event-related fMRI protocol. Our 4D-UWR-SENSE algorithm outperforms the SENSE reconstruction at the subject and group levels (15 subjects) for different contrasts of interest (e.g., motor or computation tasks) and using different parallel acceleration factors (R=2 and R=4) on 2×2×3 mm³ EPI images.

10 citations
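
For readers unfamiliar with SENSE, a toy sketch of the unregularized unfolding baseline that the wavelet-regularized schemes above improve upon, on synthetic 1D data with an acceleration factor of 2:

```python
import numpy as np

# Each aliased pixel is the coil-weighted sum of two true pixels, so a small
# per-pixel least-squares system recovers them from the coil measurements.
rng = np.random.default_rng(5)
n, ncoils = 64, 4
x_true = rng.random(n)                        # 1D "image" for simplicity
S = rng.random((ncoils, n)) + 0.1             # coil sensitivity maps

# Aliasing with R = 2 folds pixel i onto pixel i + n//2
y = S[:, :n // 2] * x_true[:n // 2] + S[:, n // 2:] * x_true[n // 2:]

x_rec = np.zeros(n)
for i in range(n // 2):
    A = np.column_stack([S[:, i], S[:, i + n // 2]])   # ncoils x 2 system
    sol = np.linalg.lstsq(A, y[:, i], rcond=None)[0]
    x_rec[i], x_rec[i + n // 2] = sol
print("max reconstruction error:", np.abs(x_rec - x_true).max())
```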


Proceedings ArticleDOI
25 Mar 2012
TL;DR: This work introduces a new epigraphical projection technique, which allows us to consider more flexible data fidelity constraints than the standard linear or quadratic ones, and demonstrates the validity of this approach through an application to an image reconstruction problem in the presence of Poisson noise.
Abstract: The concept of cosparsity has been recently introduced in the arena of compressed sensing. In cosparse modelling, the ℓ0 (or ℓ1) cost of an analysis-based representation of the target signal is minimized under a data fidelity constraint. By taking advantage of recent advances in proximal algorithms, we show that it is possible to efficiently address a more general framework where a convex block sparsity measure is minimized under various convex constraints. The main contribution of this work is the introduction of a new epigraphical projection technique, which allows us to consider more flexible data fidelity constraints than the standard linear or quadratic ones. The validity of our approach is illustrated through an application to an image reconstruction problem in the presence of Poisson noise.

9 citations
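
One building block of such proximal schemes is the proximity operator of an ℓ1,2 block-sparsity measure, i.e. group soft-thresholding; the epigraphical data-fidelity projection is a separate ingredient not shown here:

```python
import numpy as np

# Group soft-thresholding: shrink each block of analysis coefficients toward
# zero by its Euclidean norm; blocks with norm below the threshold vanish.
def prox_l12(coeffs, blocks, t):
    out = coeffs.copy()
    for b in blocks:                           # b = index array of one block
        nb = np.linalg.norm(coeffs[b])
        out[b] = 0.0 if nb <= t else (1 - t / nb) * coeffs[b]
    return out

c = np.array([3.0, 4.0, 0.1, -0.2, 1.0])
blocks = [np.array([0, 1]), np.array([2, 3]), np.array([4])]
print(prox_l12(c, blocks, t=0.5))   # strong block survives, weak one is zeroed
```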


Proceedings Article
18 Oct 2012
TL;DR: The global energy function is made convex by quantizing the disparity map and converting it into a set of binary fields and it is shown that the problem can then be efficiently solved by parallel proximal splitting approaches.
Abstract: Disparity estimation constitutes an active research area in stereo vision, and in recent years, global estimation methods aiming at minimizing an energy function over the whole image have gained a lot of attention. To overcome the difficulties raised by the nonconvexity of the minimized criterion, convex relaxations have been proposed by several authors. In this paper, the global energy function is made convex by quantizing the disparity map and converting it into a set of binary fields. It is shown that the problem can then be efficiently solved by parallel proximal splitting approaches. A primal algorithm and a primal-dual one are proposed and compared based on numerical tests.
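
A tiny sketch of the quantization/binary-field device described above: a quantized disparity map is encoded by monotone binary level fields, from which it is recovered exactly (the convex relaxation then lets each field take values in [0, 1] during optimization, which is not shown here):

```python
import numpy as np

# Binary level fields u_q(x) = 1{ disparity(x) >= d_q } for levels d_0 < ... <
# d_{Q-1}; they are monotone in q and reconstruct the map by counting.
rng = np.random.default_rng(6)
levels = np.arange(0, 8)                        # quantized disparity values
disp = rng.integers(0, 8, size=(4, 6))          # toy quantized disparity map

u = np.stack([(disp >= q) for q in levels])     # binary fields, shape (8,4,6)
assert np.all(u[1:] <= u[:-1])                  # monotonicity across levels
disp_rec = u.sum(axis=0) - 1                    # count of active levels
assert np.array_equal(disp_rec, disp)           # exact recovery
```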

Proceedings Article
18 Oct 2012
TL;DR: It is shown that estimating multiples is equivalent to identifying filters and a new variational framework based on Maximum A Posteriori (MAP) estimation is proposed to solve the problem of multiple removal.
Abstract: Due to complex subsurface structure properties, seismic records often suffer from coherent noises such as multiples. These undesired signals may hide the signal of interest, thus raising difficulties in interpretation. We propose a new variational framework based on Maximum A Posteriori (MAP) estimation. More precisely, the problem of multiple removal is formulated as a minimization problem involving time-varying filters, assuming that a disturbance signal template is available and the target signal is sparse in some orthonormal basis. We show that estimating multiples is equivalent to identifying filters and we propose to employ recently proposed convex optimization procedures based on proximity operators to solve the problem. The performance of the proposed approach as well as its robustness to noise is demonstrated on realistically simulated data.
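
A toy version of the "estimating multiples = identifying filters" equivalence, reduced to a single time-invariant filter fitted by least squares; the paper's time-varying filters, sparsity prior, and proximal solvers are not shown:

```python
import numpy as np
from scipy.linalg import toeplitz

# Given a disturbance template m and a record y = (h * m) + signal, identify
# the FIR filter h from a Toeplitz system and subtract the predicted multiple.
rng = np.random.default_rng(7)
n, p = 300, 5
m = rng.standard_normal(n)                     # known multiple template
h_true = np.array([1.0, -0.6, 0.3, 0.0, 0.1])  # unknown propagation filter
signal = np.zeros(n); signal[::40] = 2.0       # sparse reflectivity of interest
y = np.convolve(m, h_true)[:n] + signal

M = toeplitz(m, np.zeros(p))                   # n x p convolution matrix
h_hat = np.linalg.lstsq(M, y, rcond=None)[0]
primaries = y - M @ h_hat                      # record with multiple removed
print("filter error:", np.abs(h_hat - h_true).max())
```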

18 Dec 2012
TL;DR: In this article, an iterative Expectation-Maximization (EM) approach is proposed to solve the considered parametric estimation problem, which is shown to provide reliable estimates of the mean/variance of the Gaussian noise and of the scale parameter of the Poisson component, as well as its exponential decay rate.
Abstract: The problem of estimating the parameters of a Poisson-Gaussian model from experimental data has recently raised much interest in various applications, especially for CCD imaging systems. In this context, a field of independent random variables is observed, which is varying both in time and space. Each variable is a sum of two components, one following a Poisson and the other a Gaussian distribution. In this paper, a general formulation is considered where the associated Poisson process is nonstationary in space and also exhibits an exponential decay in time, whereas the Gaussian component corresponds to a stationary white noise with arbitrary mean. To solve the considered parametric estimation problem, an iterative Expectation-Maximization (EM) approach is proposed. Much attention is paid to the initialization of the EM algorithm, for which an adequate moment-based method using recent optimization tools is proposed. In addition, a performance analysis of the proposed approach is carried out by computing the Cramér-Rao bounds on the estimated variables. The performance of the proposed estimation procedure is illustrated on both synthetic data and real fluorescence microscopy image sequences. The algorithm is shown to provide reliable estimates of the mean/variance of the Gaussian noise and of the scale parameter of the Poisson component, as well as of its exponential decay rate.
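
A sketch of the E-step at the heart of such an EM scheme, simplified to unit Poisson scale and no temporal decay: the posterior over the hidden Poisson count is computed by truncating the series, yielding the conditional mean used in the M-step updates:

```python
import numpy as np
from scipy.special import gammaln, logsumexp

# For z = k + w with k ~ Poisson(lam) and w ~ N(mu, s^2), compute E[k | z]
# from the truncated posterior, evaluated in the log domain for stability.
def posterior_mean_k(z, lam, mu, s, kmax=200):
    k = np.arange(kmax + 1)
    log_post = (-lam + k * np.log(lam) - gammaln(k + 1)
                - (z - k - mu) ** 2 / (2 * s ** 2))
    log_post -= logsumexp(log_post)            # normalize the posterior
    return np.exp(log_post) @ k

# Posterior mean lies between the prior mean (10) and the data peak (11.4)
print(posterior_mean_k(z=12.4, lam=10.0, mu=1.0, s=2.0))
```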

Proceedings Article
18 Oct 2012
TL;DR: A two-dimensional non-separable decomposition based on the concept of vector lifting schemes is proposed, which focuses on the optimization of all the lifting operators employed with the left and right images.
Abstract: Due to the great interest of stereo images in several applications, it becomes mandatory to improve the efficiency of existing coding techniques. For this purpose, many research works have been developed. The basic idea behind most of the reported methods consists of applying an inter-view prediction via the estimated disparity map followed by a separable wavelet transform. In this paper, we propose to use a two-dimensional non-separable decomposition based on the concept of the vector lifting scheme. Furthermore, we focus on the optimization of all the lifting operators employed with the left and right images. Experimental results carried out on different stereo images show the benefits which can be drawn from the proposed coding method.
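
A 1D toy of the joint ("vector") prediction idea: a right-view sample is predicted jointly from its spatial neighbours and from the disparity-compensated left view, with the joint predictor fitted by least squares; the actual scheme is a 2D non-separable vector lifting, which this sketch only hints at:

```python
import numpy as np

# Synthetic stereo pair: right view is a shifted, noisy copy of the left view.
rng = np.random.default_rng(8)
n, d = 500, 3                                   # d = known integer disparity
left = np.cumsum(rng.standard_normal(n + d))
right = left[d:d + n] + 0.1 * rng.standard_normal(n)

i = np.arange(1, n - 1)
X = np.column_stack([right[i - 1], right[i + 1],      # spatial neighbours
                     left[i + d]])                    # inter-view predictor
w = np.linalg.lstsq(X, right[i], rcond=None)[0]
detail = right[i] - X @ w                             # residual to be coded
print("weights:", w, " residual energy:", np.mean(detail ** 2))
```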

Proceedings ArticleDOI
01 Sep 2012
TL;DR: This paper presents a sparse optimization technique based on recent convex algorithms and applied to the prediction filters of a two-dimensional non-separable lifting structure, which leads to a new optimization criterion taking into account linear dependencies between the generated coefficients.
Abstract: Many existing works related to lossy-to-lossless image compression are based on the lifting concept. In this paper, we present a sparse optimization technique based on recent convex algorithms and applied to the prediction filters of a two-dimensional non-separable lifting structure. The idea consists of designing these filters, at each resolution level, by minimizing the sum of the ℓ1-norms of the three detail subbands. Extending this optimization method in order to perform a global minimization over all resolution levels leads to a new optimization criterion taking into account linear dependencies between the generated coefficients. Simulations carried out on still images show the benefits which can be drawn from the proposed optimization techniques.
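
To make the per-level criterion concrete, the sketch below evaluates the sum of the ℓ1 norms of three detail subbands produced by a one-level separable, predict-only lifting with a fixed half-sum predictor; the paper optimizes non-separable filters over this kind of criterion, which is not reproduced here:

```python
import numpy as np

# One predict step along a given axis: odd samples minus the average of their
# two even neighbours (no update step, for brevity).
def predict_detail(x, axis):
    x = np.moveaxis(x, axis, 0)
    even, odd = x[0::2], x[1::2]
    pred = 0.5 * (even + np.roll(even, -1, axis=0))
    return np.moveaxis(odd - pred, 0, axis), np.moveaxis(even, 0, axis)

rng = np.random.default_rng(9)
img = rng.random((64, 64)) + np.linspace(0, 8, 64)       # smooth trend + noise

d_v, approx_v = predict_detail(img, axis=0)              # vertical split
d_h, ll = predict_detail(approx_v, axis=1)               # horizontal detail
d_d, d_v2 = predict_detail(d_v, axis=1)                  # diagonal / vertical

criterion = sum(np.abs(d).sum() for d in (d_h, d_v2, d_d))
print("sum of l1 norms of the three detail subbands:", criterion)
```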

Proceedings ArticleDOI
07 May 2012
TL;DR: This work considers the uniform scalar quantization of subband coefficients modeled by a Generalized Gaussian distribution to reformulate the bit allocation problem as a convex programming one.
Abstract: The objective of this paper is to design an efficient bit allocation algorithm in the subband coding context based on an analytical approach. More precisely, we consider the uniform scalar quantization of subband coefficients modeled by a Generalized Gaussian distribution. This model appears to be particularly well-adapted for data having a sparse representation in the wavelet domain. Our main contribution is to reformulate the bit allocation problem as a convex programming one. For this purpose, we first define new convex approximations of the entropy and distortion functions. Then, we derive explicit expressions of the optimal quantization parameters. Finally, we illustrate the application of the proposed method to wavelet-based coding systems.
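
For orientation, the textbook high-rate model already admits a closed-form convex allocation; the sketch below uses that classical model (D_i = c·var_i·2^(-2R_i)) purely to illustrate the flavour of a convex reformulation, whereas the paper derives its expressions from a Generalized Gaussian entropy/distortion model:

```python
import numpy as np

# Minimizing total distortion at a fixed average rate R gives the water-
# filling style solution R_i = R + 0.5*log2(var_i / geometric_mean(var)).
var = np.array([25.0, 9.0, 4.0, 1.0])          # subband variances
R = 2.0                                        # target average rate (bits)
gm = np.exp(np.mean(np.log(var)))              # geometric mean of variances
rates = R + 0.5 * np.log2(var / gm)
print("allocated rates:", rates, " mean:", rates.mean())   # mean equals R
```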