
Showing papers by "Emmanuel J. Candès published in 2013"


Journal ArticleDOI
TL;DR: It is shown that in some instances, the combinatorial phase retrieval problem can be solved by convex programming techniques, and it is proved that the methodology is robust vis‐à‐vis additive noise.
Abstract: Suppose we wish to recover a signal $x \in \mathbb{C}^n$ from m intensity measurements of the form $|\langle x, z_i \rangle|^2$, $i = 1, \ldots, m$; that is, from data in which phase information is missing. We prove that if the vectors $z_i$ are sampled independently and uniformly at random on the unit sphere, then the signal x can be recovered exactly (up to a global phase factor) by solving a convenient semidefinite program, a trace-norm minimization problem; this holds with large probability provided that m is on the order of $n \log n$, and without any assumption about the signal whatsoever. This novel result demonstrates that in some instances, the combinatorial phase retrieval problem can be solved by convex programming techniques. Finally, we also prove that our methodology is robust vis-a-vis additive noise. © 2012 Wiley Periodicals, Inc.
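
A minimal sketch of the kind of trace-norm semidefinite program described above, written with CVXPY (an assumption; this is not the authors' code), for a small noiseless instance with sensing vectors drawn uniformly on the unit sphere:

```python
# Illustrative PhaseLift-style sketch; assumes CVXPY and a generic SDP solver,
# and a small n so the lifted n x n variable stays cheap.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m = 20, 120                                     # signal length, number of intensity measurements
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
Z = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
Z /= np.linalg.norm(Z, axis=1, keepdims=True)      # rows ~ uniform on the complex unit sphere
b = np.abs(Z.conj() @ x) ** 2                      # phaseless data |z_i^* x|^2

X = cp.Variable((n, n), hermitian=True)            # lifted variable, ideally X ~ x x^*
A = [np.outer(Z[i], Z[i].conj()) for i in range(m)]
constraints = [X >> 0] + [cp.real(cp.trace(A[i] @ X)) == b[i] for i in range(m)]
cp.Problem(cp.Minimize(cp.real(cp.trace(X))), constraints).solve()

# Read off an estimate of x (up to a global phase) from the top eigenvector of X.
w, V = np.linalg.eigh(X.value)
x_hat = np.sqrt(max(w[-1], 0)) * V[:, -1]
```

Up to a global phase and solver tolerance, x_hat should match x once m is a sufficiently large multiple of n.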

1,190 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a novel framework for phase retrieval, called PhaseLift, which combines multiple structured illuminations together with ideas from convex programming to recover the phase from intensity measurements, typically from the modulus of the diffracted wave.
Abstract: This paper develops a novel framework for phase retrieval, a problem which arises in X-ray crystallography, diffraction imaging, astronomical imaging, and many other applications. Our approach, called PhaseLift, combines multiple structured illuminations together with ideas from convex programming to recover the phase from intensity measurements, typically from the modulus of the diffracted wave. We demonstrate empirically that a complex-valued object can be recovered from the knowledge of the magnitude of just a few diffraction patterns by solving a simple convex optimization problem inspired by the recent literature on matrix completion. More importantly, we also demonstrate that our noise-aware algorithms are stable in the sense that the reconstruction degrades gracefully as the signal-to-noise ratio decreases. Finally, we introduce some theory showing that one can design very simple structured illumination patterns such that three diffraction patterns uniquely determine the phase of the object we wish to recover.

508 citations


Journal ArticleDOI
TL;DR: It is shown that as long as the sources are separated by 2/f_lo, solving a simple convex program produces a stable estimate in the sense that the approximation error between the higher-resolution reconstruction and the truth is proportional to the noise level times the square of the super-resolution factor (SRF) f_hi/f_lo.
Abstract: This paper studies the recovery of a superposition of point sources from noisy bandlimited data. In the fewest possible words, we only have information about the spectrum of an object in the low-frequency band [−f_lo, f_lo] and seek to obtain a higher resolution estimate by extrapolating the spectrum up to a frequency f_hi > f_lo. We show that as long as the sources are separated by 2/f_lo, solving a simple convex program produces a stable estimate in the sense that the approximation error between the higher-resolution reconstruction and the truth is proportional to the noise level times the square of the super-resolution factor (SRF) f_hi/f_lo.
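
As a rough illustration of the convex program involved, here is a discretized sketch (an assumption; the paper works with a continuous total-variation norm over measures, not an l1 norm on a grid) that recovers spikes on a fine grid from their low-frequency Fourier samples using CVXPY:

```python
# Discretized super-resolution sketch: l1 minimization on a fine grid subject to
# matching the observed low-frequency spectrum. Grid size and f_lo are illustrative.
import numpy as np
import cvxpy as cp

n, f_lo = 256, 16                                   # fine grid size, low-frequency cutoff
t = np.arange(n) / n
freqs = np.arange(-f_lo, f_lo + 1)
F = np.exp(-2j * np.pi * np.outer(freqs, t))        # maps grid amplitudes to low frequencies

x_true = np.zeros(n)
x_true[[40, 90, 170]] = [1.0, -0.7, 0.5]            # spikes separated by more than 2/f_lo
y = F @ x_true                                      # noiseless bandlimited data

x = cp.Variable(n)
constraints = [cp.real(F @ x) == y.real, cp.imag(F @ x) == y.imag]
cp.Problem(cp.Minimize(cp.norm1(x)), constraints).solve()
x_hat = x.value                                     # should concentrate on the true spike locations
```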

376 citations


Journal ArticleDOI
TL;DR: This paper introduces an algorithm inspired by sparse subspace clustering (SSC) to cluster noisy data, and develops some novel theory demonstrating its correctness.
Abstract: Subspace clustering refers to the task of finding a multi-subspace representation that best fits a collection of points taken from a high-dimensional space. This paper introduces an algorithm inspired by sparse subspace clustering (SSC) [In IEEE Conference on Computer Vision and Pattern Recognition, CVPR (2009) 2790-2797] to cluster noisy data, and develops some novel theory demonstrating its correctness. In particular, the theory uses ideas from geometric functional analysis to show that the algorithm can accurately recover the underlying subspaces under minimal requirements on their orientation, and on the number of samples per subspace. Synthetic as well as real data experiments complement our theoretical study, illustrating our approach and demonstrating its effectiveness.
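
A compact sketch of the lasso-style SSC pipeline referenced above (a generic illustration, not the authors' implementation; the regularization parameter lam and the use of scikit-learn's spectral clustering are illustrative choices):

```python
# Sparse subspace clustering sketch: write each point as a sparse combination of the
# other points (lasso), then spectrally cluster the resulting affinity graph.
import numpy as np
import cvxpy as cp
from sklearn.cluster import SpectralClustering

def lasso_ssc(X, n_clusters, lam=0.1):
    """X: d x N matrix whose columns are the data points."""
    d, N = X.shape
    C = np.zeros((N, N))
    for i in range(N):
        c = cp.Variable(N)
        loss = 0.5 * cp.sum_squares(X[:, i] - X @ c)
        cp.Problem(cp.Minimize(loss + lam * cp.norm1(c)), [c[i] == 0]).solve()
        C[:, i] = c.value
    W = np.abs(C) + np.abs(C).T                       # symmetric affinity matrix
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(W)
```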

327 citations


Posted Content
TL;DR: It is shown that PhaseLift, a recent convex programming technique, recovers the phase information exactly from a number of random modulations, which is polylogarithmic in the number of unknowns.
Abstract: This paper considers the question of recovering the phase of an object from intensity-only measurements, a problem which naturally appears in X-ray crystallography and related disciplines. We study a physically realistic setup where one can modulate the signal of interest and then collect the intensity of its diffraction pattern, each modulation thereby producing a sort of coded diffraction pattern. We show that PhaseLift, a recent convex programming technique, recovers the phase information exactly from a number of random modulations, which is polylogarithmic in the number of unknowns. Numerical experiments with noiseless and noisy data complement our theoretical analysis and illustrate our approach.

244 citations


Journal ArticleDOI
TL;DR: An unbiased risk estimate formula is given for singular value thresholding (SVT), a popular estimation strategy that applies a soft-thresholding rule to the singular values of the noisy observations, and the utility of the estimate is demonstrated for SVT-based denoising of real clinical cardiac MRI series data.
Abstract: In an increasing number of applications, it is of interest to recover an approximately low-rank data matrix from noisy observations. This paper develops an unbiased risk estimate, holding in a Gaussian model, for any spectral estimator obeying some mild regularity assumptions. In particular, we give an unbiased risk estimate formula for singular value thresholding (SVT), a popular estimation strategy that applies a soft-thresholding rule to the singular values of the noisy observations. Among other things, our formulas offer a principled and automated way of selecting regularization parameters in a variety of problems. In particular, we demonstrate the utility of the unbiased risk estimation for SVT-based denoising of real clinical cardiac MRI series data. We also give new results concerning the differentiability of certain matrix-valued functions.
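
For reference, the SVT estimator discussed above simply soft-thresholds the singular values of the observed matrix; a minimal NumPy sketch follows (the paper's unbiased risk estimate, which one would use to pick the threshold, is not reproduced here):

```python
import numpy as np

def svt(Y, lam):
    """Singular value thresholding: apply the soft-thresholding rule s -> max(s - lam, 0)
    to the singular values of the noisy observation Y."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

# In practice, lam would be chosen by minimizing the paper's unbiased risk estimate
# over a grid of candidate thresholds.
```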

242 citations


Journal ArticleDOI
TL;DR: It is proved that the advantages offered by clever adaptive strategies and sophisticated estimation procedures, no matter how intractable, over classical compressed acquisition/recovery schemes are, in general, minimal.
Abstract: Suppose we can sequentially acquire arbitrary linear measurements of an n-dimensional vector x resulting in the linear model y = A x + z, where z represents measurement noise. If the signal is known to be sparse, one would expect the following folk theorem to be true: choosing an adaptive strategy which cleverly selects the next row of A based on what has been previously observed should do far better than a nonadaptive strategy which sets the rows of A ahead of time, thus not trying to learn anything about the signal in between observations. This paper shows that the folk theorem is false. We prove that the advantages offered by clever adaptive strategies and sophisticated estimation procedures, no matter how intractable, over classical compressed acquisition/recovery schemes are, in general, minimal.

157 citations


Posted Content
TL;DR: In this article, a new estimator called SLOPE is proposed for sparse regression and variable selection, which is inspired by modern ideas in multiple testing, such as the BHq procedure [Benjamini and Hochberg, 1995].
Abstract: We introduce a novel method for sparse regression and variable selection, which is inspired by modern ideas in multiple testing. Imagine we have observations from the linear model y = X beta + z; we then suggest estimating the regression coefficients by means of a new estimator called SLOPE, which is the solution to minimize 0.5 ||y − Xb||_2^2 + lambda_1 |b|_(1) + lambda_2 |b|_(2) + ... + lambda_p |b|_(p); here, lambda_1 >= lambda_2 >= ... >= lambda_p >= 0 and |b|_(1) >= |b|_(2) >= ... >= |b|_(p) are the order statistics of the magnitudes of b. The regularizer is a sorted L1 norm which penalizes the regression coefficients according to their rank: the higher the rank, the larger the penalty. This is similar to the famous BHq procedure [Benjamini and Hochberg, 1995], which compares the value of a test statistic taken from a family to a critical threshold that depends on its rank in the family. SLOPE is a convex program and we demonstrate an efficient algorithm for computing the solution. We prove that for orthogonal designs with p variables, taking lambda_i = F^{-1}(1 − q_i) (F is the cdf of the errors), with q_i = iq/(2p), controls the false discovery rate (FDR) for variable selection. When the design matrix is nonorthogonal there are inherent limitations on the FDR level and the power which can be obtained with model selection methods based on L1-like penalties. However, whenever the columns of the design matrix are not strongly correlated, we demonstrate empirically that it is possible to select the parameters lambda_i so as to obtain FDR control at a reasonable level as long as the number of nonzero coefficients is not too large. At the same time, the procedure exhibits increased power over the lasso, which treats all coefficients equally. The paper illustrates further estimation properties of the new selection rule through comprehensive simulation studies.
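
The lambda sequence and the sorted-l1 penalty in the abstract are easy to write down; the sketch below assumes standard Gaussian errors, so that F^{-1} is the standard normal quantile function:

```python
import numpy as np
from scipy.stats import norm

def slope_lambdas(p, q):
    """BH-style sequence lambda_i = F^{-1}(1 - i*q/(2p)); here F is the standard normal cdf."""
    i = np.arange(1, p + 1)
    return norm.ppf(1 - i * q / (2 * p))

def sorted_l1_penalty(b, lam):
    """Evaluate sum_i lambda_i * |b|_(i), where |b|_(1) >= ... >= |b|_(p)."""
    return float(np.sum(lam * np.sort(np.abs(b))[::-1]))
```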

107 citations


Journal ArticleDOI
TL;DR: In this paper, the authors established a lower bound on the mean-squared error which holds regardless of the sensing/design matrix being used and regardless of the estimation procedure, and this lower bound very nearly matches the known upper bound one gets by taking a random projection of the sparse vector followed by an l1 estimation procedure such as the Dantzig selector.

101 citations


Journal ArticleDOI
TL;DR: A unified analysis of the recovery of simple objects from random linear measurements shows that an s-sparse vector in R^n can be efficiently recovered from 2s log n measurements with high probability and a rank r, n × n matrix can be efficiently recovered from r(6n − 5r) measurements with high probability.
Abstract: This note presents a unified analysis of the recovery of simple objects from random linear measurements. When the linear functionals are Gaussian, we show that an s-sparse vector in R^n can be efficiently recovered from 2s log n measurements with high probability and a rank r, n × n matrix can be efficiently recovered from r(6n − 5r) measurements with high probability. For sparse vectors, this is within an additive factor of the best known nonasymptotic bounds. For low-rank matrices, this matches the best known bounds. We present a parallel analysis for block-sparse vectors obtaining similarly tight bounds. In the case of sparse and block-sparse signals, we additionally demonstrate that our bounds are only slightly weakened when the measurement map is a random sign matrix. Our results are based on analyzing a particular dual point which certifies optimality conditions of the respective convex programming problem. Our calculations rely only on standard large deviation inequalities and our analysis is self-contained.
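
To put the stated measurement counts in perspective, here is a quick plug-in with illustrative values (not taken from the paper), using the natural logarithm:

```latex
% Sparse case: n = 10^6, s = 100 nonzeros (illustrative).
2s\log n = 2\cdot 100\cdot\log(10^{6}) \approx 2.8\times 10^{3}\ \text{Gaussian measurements.}
% Low-rank case: n = 1000, rank r = 5 (illustrative).
r(6n-5r) = 5\,(6000-25) = 29{,}875\ \text{Gaussian measurements.}
```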

61 citations


Proceedings ArticleDOI
01 Dec 2013
TL;DR: This work presents a framework to super-resolve planar regions found in urban scenes and other man-made environments by taking into account their 3D geometry by using recently developed tools based on convex optimization to learn a transform that maps the image to a domain where its gradient has a simple group-sparse structure.
Abstract: We present a framework to super-resolve planar regions found in urban scenes and other man-made environments by taking into account their 3D geometry. Such regions have highly structured straight edges, but this prior is challenging to exploit due to deformations induced by the projection onto the imaging plane. Our method factors out such deformations by using recently developed tools based on convex optimization to learn a transform that maps the image to a domain where its gradient has a simple group-sparse structure. This allows us to obtain a novel convex regularizer that enforces global consistency constraints between the edges of the image. Computational experiments with real images show that this data-driven approach to the design of regularizers promoting transform-invariant group sparsity is very effective at high super-resolution factors. We view our approach as complementary to most recent super-resolution methods, which tend to focus on hallucinating high-frequency textures.

Journal ArticleDOI
TL;DR: Results from group testing are leveraged and an architecture for a highly efficient readout of pixels using only a small number of time-to-digital converters is proposed, promising a low-cost sensor with high fill factor and high photon sensitivity.
Abstract: Advances in solid-state technology have enabled the development of silicon photomultiplier sensor arrays capable of sensing individual photons. Combined with high-frequency time-to-digital converters (TDCs), this technology opens up the prospect of sensors capable of recording with high accuracy both the time and location of each detected photon. Such a capability could lead to significant improvements in imaging accuracy, especially for applications operating with low photon fluxes such as light detection and ranging and positron-emission tomography. The demands placed on on-chip readout circuitry impose stringent trade-offs between fill factor and spatiotemporal resolution, causing many contemporary designs to severely underuse the technology’s full potential. Concentrating on the low photon flux setting, this paper leverages results from group testing and proposes an architecture for a highly efficient readout of pixels using only a small number of TDCs. We provide optimized design instances for various sensor parameters and compute explicit upper and lower bounds on the number of TDCs required to uniquely decode a given maximum number of simultaneous photon arrivals. To illustrate the strength of the proposed architecture, we note a typical digitization of a 60 × 60 photodiode sensor using only 142 TDCs. The design guarantees registration and unique recovery of up to four simultaneous photon arrivals using a fast decoding algorithm. By contrast, a cross-strip design requires 120 TDCs and cannot uniquely decode any simultaneous photon arrivals. Among other realistic simulations of scintillation events in clinical positron-emission tomography, the above design is shown to recover the spatiotemporal location of 99.98% of all detected photons.

Proceedings ArticleDOI
TL;DR: The application of the L+S decomposition is presented to reconstruct incoherently undersampled dynamic MRI data as a superposition of a slowly or coherently changing background and sparse innovations to enhance spatial and temporal resolution and to enable background suppression without the need of subtraction or modeling.
Abstract: L+S matrix decomposition finds the low-rank (L) and sparse (S) components of a matrix M by solving the following convex optimization problem: min ‖L‖* + λ‖S‖1, subject to M = L + S, where ‖L‖* is the nuclear norm, or sum of singular values, of L and ‖S‖1 is the l1-norm, or sum of absolute values, of S. This work presents the application of the L+S decomposition to reconstruct incoherently undersampled dynamic MRI data as a superposition of a slowly or coherently changing background and sparse innovations. Feasibility of the method was tested in several accelerated dynamic MRI experiments including cardiac perfusion, time-resolved peripheral angiography, and liver perfusion using Cartesian and radial sampling. The high acceleration and background separation enabled by L+S reconstruction promise to enhance spatial and temporal resolution and to enable background suppression without the need for subtraction or modeling.
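
A bare-bones, real-valued sketch of the stated convex program using CVXPY (an assumption; the actual dynamic-MRI reconstruction additionally involves an undersampled Fourier encoding operator and complex-valued data, which are omitted here):

```python
import numpy as np
import cvxpy as cp

def l_plus_s(M, lam):
    """Split M into low-rank L and sparse S by solving
    min ||L||_* + lam * ||S||_1  subject to  L + S = M."""
    L = cp.Variable(M.shape)
    S = cp.Variable(M.shape)
    objective = cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.abs(S)))
    cp.Problem(objective, [L + S == M]).solve()
    return L.value, S.value
```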

Proceedings ArticleDOI
31 Oct 2013
TL;DR: Compressive sensing has emerged in the last decade as a powerful tool and paradigm for acquiring signals of interest from fewer measurements than was thought possible, and very efficient randomized sensing protocols are used to sample such signals.
Abstract: Compressive sensing (CS) [1]-[3] has emerged in the last decade as a powerful tool and paradigm for acquiring signals of interest from fewer measurements than was thought possible. CS capitalizes on the fact that many real-world signals inherently have far fewer degrees of freedom than the signal size might indicate. For instance, a signal with a sparse spectrum depends upon fewer degrees of freedom than the total bandwidth it may cover. CS theory then asserts that one can use very efficient randomized sensing protocols, which would sample such signals in proportion to their degrees of freedom rather than in proportion to the dimension of the larger space they occupy (e.g., Nyquist-rate sampling). An overview and mathematical description of CS can be found in [4].

Posted Content
11 Oct 2013
TL;DR: In this article, the authors considered the century-old phase retrieval problem of reconstructing a signal x in C^n from the amplitudes of its Fourier coefficients and showed that the signal x can be recovered exactly (up to a global phase factor) from such measurements using a semi-definite program.
Abstract: This paper considers the century-old phase retrieval problem of reconstructing a signal x in C^n from the amplitudes of its Fourier coefficients. To overcome the inherent ambiguity due to missing phase information, we create redundancy in the measurement process by distorting the signal multiple times, each time with a different mask. More specifically, we consider measurements of the form |f_k^* D_l x|^2 for k = 1, 2, ..., n and l = 1, 2, ..., L, where the f_k^* are rows of the Fourier matrix and the D_l are random diagonal matrices modeling the masks. We prove that the signal x can be recovered exactly (up to a global phase factor) from such measurements using a semi-definite program as long as the number of masks is at least on the order of (log n)^4. Numerical experiments complement our theoretical study, illustrating our approach and demonstrating its effectiveness.
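
The measurement model is straightforward to simulate; the sketch below (illustrative sizes and mask distribution, with NumPy's DFT convention standing in for the rows f_k^*) generates the coded diffraction intensities described above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, L = 64, 8                                          # signal length, number of masks (illustrative)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
masks = rng.choice(np.array([1, -1, 1j, -1j]), size=(L, n))   # diagonal entries of the D_l
y = np.abs(np.fft.fft(masks * x, axis=1)) ** 2        # intensities |f_k^* D_l x|^2 for all k, l
```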

Journal ArticleDOI
TL;DR: The reweighted L1-minimization technique provides a promising solution to simplify the fluence-map variations in IMRT inverse planning and improves the delivery efficiency by reducing the number of segments and the treatment time, while maintaining the plan quality in terms of target conformity and critical structure sparing.
Abstract: Purpose: This study presents an improved technique to further simplify the fluence-map in intensity modulated radiation therapy (IMRT) inverse planning, thereby reducing plan complexity and improving delivery efficiency, while maintaining the plan quality. Methods: First-order total-variation (TV) minimization (min.) based on the L1-norm has been proposed to reduce the complexity of the fluence-map in IMRT by generating sparse fluence-map variations. However, with stronger dose sparing to the critical structures, the inevitable increase in fluence-map complexity can lead to inefficient dose delivery. Theoretically, L0-min. is the ideal solution to the sparse signal recovery problem, yet it is practically intractable due to the nonconvexity of its objective function. As an alternative, the authors use the iteratively reweighted L1-min. technique to incorporate the benefits of the L0-norm into the tractability of L1-min. The weight multiplied to each element is inversely related to the magnitude of the corresponding element and is iteratively updated by the reweighting process. The proposed penalizing process combined with TV min. further improves sparsity in the fluence-map variations, hence ultimately enhancing the delivery efficiency. To validate the proposed method, this work compares three treatment plans obtained from quadratic min. (generally used in clinical IMRT), conventional TV min., and our proposed reweighted TV min. techniques, implemented by a large-scale L1-solver (templates for first-order conic solvers), for clinical data from five patients. Criteria such as conformation number (CN), modulation index (MI), and estimated treatment time are employed to assess the relationship between the plan quality and delivery efficiency. Results: The proposed method yields simpler fluence-maps than the quadratic and conventional TV based techniques. To attain a given CN and dose sparing to the critical organs for the 5 clinical cases, the proposed method reduces the number of segments by 10–15 and 30–35, relative to TV min. and quadratic min. based plans, while MIs decrease by about 20%–30% and 40%–60% over the plans from the two existing techniques, respectively. Under such conditions, the total treatment time of the plans obtained from our proposed method can be reduced by 12–30 s and 30–80 s, mainly due to greatly shorter multileaf collimator (MLC) traveling time in IMRT step-and-shoot delivery. Conclusions: The reweighted L1-minimization technique provides a promising solution to simplify the fluence-map variations in IMRT inverse planning. It improves the delivery efficiency by reducing the number of segments and the treatment time, while maintaining the plan quality in terms of target conformity and critical structure sparing.
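
The reweighting loop described in the Methods section follows the generic iteratively reweighted l1 scheme; a small sketch on a plain sparse-recovery problem is below (hedged: CVXPY, the equality-constrained formulation, and the parameters n_iter and eps are illustrative choices, not the paper's TV-based fluence-map setup):

```python
import numpy as np
import cvxpy as cp

def reweighted_l1(A, y, n_iter=5, eps=1e-2):
    """Iteratively reweighted l1 minimization: solve a weighted l1 problem, then set
    each weight inversely proportional to the magnitude of the current solution."""
    n = A.shape[1]
    w = np.ones(n)
    x = cp.Variable(n)
    for _ in range(n_iter):
        cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, x))), [A @ x == y]).solve()
        w = 1.0 / (np.abs(x.value) + eps)     # weights inversely related to element magnitudes
    return x.value
```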

Posted Content
TL;DR: In this paper, a novel and rather intuitive analysis of the algorithm in Martinsson et al. (2008) is presented, which yields theoretical guarantees about the approximation error and at the same time, ultimate limits of performance showing that their upper bounds are tight.
Abstract: The development of randomized algorithms for numerical linear algebra, e.g. for computing approximate QR and SVD factorizations, has recently become an intense area of research. This paper studies one of the most frequently discussed algorithms in the literature for dimensionality reduction---specifically for approximating an input matrix with a low-rank element. We introduce a novel and rather intuitive analysis of the algorithm in Martinsson et al. (2008), which allows us to derive sharp estimates and give new insights about its performance. This analysis yields theoretical guarantees about the approximation error and at the same time, ultimate limits of performance (lower bounds) showing that our upper bounds are tight. Numerical experiments complement our study and show the tightness of our predictions compared with empirical observations.
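
For orientation, the algorithm under study is a randomized range-finder of the following form; this is a generic sketch under the usual Gaussian test-matrix assumption, with k the target rank and p an oversampling parameter (both illustrative):

```python
import numpy as np

def randomized_low_rank(A, k, p=10, seed=0):
    """Randomized low-rank approximation: sample the range of A with a Gaussian test
    matrix, orthonormalize, then compute an SVD of the small projected matrix."""
    m, n = A.shape
    Omega = np.random.default_rng(seed).standard_normal((n, k + p))
    Q, _ = np.linalg.qr(A @ Omega)               # orthonormal basis for the sampled range
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]   # factors of a rank-k approximation
```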