
Showing papers by "Emmanuel J. Candès published in 2014"


Journal ArticleDOI
TL;DR: In this article, the authors develop a mathematical theory of super-resolution, the problem of recovering the fine details of an object (the high end of its spectrum) from coarse-scale information only (samples at the low end of the spectrum).
Abstract: This paper develops a mathematical theory of super-resolution. Broadly speaking, super-resolution is the problem of recovering the fine details of an object (the high end of its spectrum) from coarse-scale information only (samples at the low end of the spectrum). Suppose we have many point sources at unknown locations in $[0,1]$ with unknown complex-valued amplitudes. We only observe Fourier samples of this object up to a frequency cut-off $f_c$. We show that one can super-resolve these point sources with infinite precision, i.e., recover the exact locations and amplitudes, by solving a simple convex optimization problem, which can essentially be reformulated as a semidefinite program. This holds provided that the distance between sources is at least $2/f_c$. This result extends to higher dimensions and other models. In one dimension, for instance, it is possible to recover a piecewise smooth function by resolving the discontinuity points with infinite precision as well. We also show that the theory and methods are robust to noise. In particular, in the discrete setting we develop theoretical results explaining how the accuracy of the super-resolved signal is expected to degrade when both the noise level and the super-resolution factor vary.
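
To make the measurement model concrete, here is a minimal numerical sketch. The paper's program is a total-variation-norm minimization over continuous-domain measures, solved exactly via a semidefinite reformulation; the grid-based $\ell_1$ proxy below (with illustrative sizes, using the cvxpy modeling library) only sketches the idea of recovering well-separated spikes from low-frequency Fourier samples.

```python
import numpy as np
import cvxpy as cp

# Toy discretized instance: spikes on a grid observed through Fourier
# coefficients |k| <= fc.  Grid size, cut-off, and spike values are
# illustrative; the paper works in the continuum, not on a grid.
N, fc = 256, 16                            # grid size and frequency cut-off
t = np.arange(N) / N                       # candidate locations in [0, 1)
k = np.arange(-fc, fc + 1)                 # observed low frequencies
F = np.exp(-2j * np.pi * np.outer(k, t))   # low-pass Fourier sampling matrix

x0 = np.zeros(N)
x0[[30, 80, 140, 200]] = [1.0, -0.7, 0.5, 1.3]   # separation >= 2/fc
y = F @ x0                                 # noiseless Fourier samples

# Convex recovery: l1 minimization subject to matching the samples.
x = cp.Variable(N, complex=True)
cp.Problem(cp.Minimize(cp.norm1(x)), [F @ x == y]).solve()
print("recovered support:", np.flatnonzero(np.abs(x.value) > 1e-4))
```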

1,157 citations


Journal ArticleDOI
TL;DR: The knockoff filter is introduced, a new variable selection procedure controlling the FDR in the statistical linear model whenever there are at least as many observations as variables, and empirical results show that the resulting method has far more power than existing selection rules when the proportion of null variables is high.
Abstract: In many fields of science, we observe a response variable together with a large number of potential explanatory variables, and would like to be able to discover which variables are truly associated with the response. At the same time, we need to know that the false discovery rate (FDR) - the expected fraction of false discoveries among all discoveries - is not too high, in order to assure the scientist that most of the discoveries are indeed true and replicable. This paper introduces the knockoff filter, a new variable selection procedure controlling the FDR in the statistical linear model whenever there are at least as many observations as variables. This method achieves exact FDR control in finite sample settings no matter the design or covariates, the number of variables in the model, or the amplitudes of the unknown regression coefficients, and does not require any knowledge of the noise level. As the name suggests, the method operates by manufacturing knockoff variables that are cheap - their construction does not require any new data - and are designed to mimic the correlation structure found within the existing variables, in a way that allows for accurate FDR control, beyond what is possible with permutation-based methods. The method of knockoffs is very general and flexible, and can work with a broad class of test statistics. We test the method in combination with statistics from the Lasso for sparse regression, and obtain empirical results showing that the resulting method has far more power than existing selection rules when the proportion of null variables is high.
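
As a concrete illustration, here is a minimal sketch of the selection step, assuming the knockoff statistics W (e.g., computed from a Lasso run on the design augmented with knockoff copies) are already available; the function name and defaults are illustrative.

```python
import numpy as np

def knockoff_threshold(W, q=0.10, plus=True):
    """Data-dependent threshold of the knockoff filter.

    W : one statistic per variable; by construction, a large positive
        W_j is evidence that variable j is non-null, while the signs
        of the null W_j behave like independent coin flips.
    Returns the smallest t such that the estimated false discovery
    proportion (offset + #{j : W_j <= -t}) / max(#{j : W_j >= t}, 1)
    is at most q; one then selects the variables with W_j >= t.
    """
    offset = 1.0 if plus else 0.0      # 'knockoff+' adds 1 to the numerator
    for t in np.sort(np.abs(W[W != 0])):
        fdp_hat = (offset + np.sum(W <= -t)) / max(np.sum(W >= t), 1)
        if fdp_hat <= q:
            return t
    return np.inf                      # no feasible threshold: select nothing
```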

503 citations


Journal ArticleDOI
TL;DR: It is shown that any complex vector can be recovered exactly from on the order of $n$ quadratic equations of the form $|\langle a_i, x_0\rangle|^2 = b_i$, $i=1,\ldots,m$, by using a semidefinite program known as PhaseLift, improving upon earlier bounds.
Abstract: This note shows that we can recover any complex vector $\boldsymbol{x}_0 \in \mathbb{C}^n$ exactly from on the order of $n$ quadratic equations of the form $|\langle \boldsymbol{a}_i, \boldsymbol{x}_0\rangle|^2 = b_i$, $i=1,\ldots,m$, by using a semidefinite program known as PhaseLift. This improves upon earlier bounds in Candès et al. (Commun. Pure Appl. Math. 66:1241–1274, 2013), which required the number of equations to be at least on the order of $n\log n$. Further, we show that exact recovery holds for all input vectors simultaneously, and also demonstrate optimal recovery results from noisy quadratic measurements; these results are much sharper than previously known results.
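
The sketch below sets up the PhaseLift relaxation for a toy instance with cvxpy: the unknown rank-one matrix $x_0 x_0^*$ is replaced by a Hermitian positive semidefinite variable, each quadratic equation becomes a linear constraint, and the trace is minimized. The trace objective is the standard PhaseLift formulation; problem sizes and solver behavior here are illustrative.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, m = 8, 48                                # m on the order of n
x0 = rng.normal(size=n) + 1j * rng.normal(size=n)
A = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
b = np.abs(A @ x0) ** 2                     # quadratic measurements

# Lift: X stands in for the rank-one matrix x0 x0^*.
X = cp.Variable((n, n), hermitian=True)
constraints = [X >> 0]
constraints += [cp.real(cp.trace(np.outer(a.conj(), a) @ X)) == bi
                for a, bi in zip(A, b)]
cp.Problem(cp.Minimize(cp.real(cp.trace(X))), constraints).solve()

# Recover x0, up to global phase, from the top eigenvector of X.
w, V = np.linalg.eigh(X.value)
xhat = np.sqrt(w[-1]) * V[:, -1]
```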

366 citations


Journal ArticleDOI
TL;DR: This paper develops a nonconvex formulation of the phase retrieval problem as well as a concrete solution algorithm that is shown to rigorously allow the exact retrieval of phase information from a nearly minimal number of random measurements.
Abstract: We study the problem of recovering the phase from magnitude measurements; specifically, we wish to reconstruct a complex-valued signal $x \in \mathbb{C}^n$ about which we have phaseless samples of the form $y_r = |\langle a_r, x\rangle|^2$, $r = 1,2,\ldots,m$ (knowledge of the phase of these samples would yield a linear system). This paper develops a non-convex formulation of the phase retrieval problem as well as a concrete solution algorithm. In a nutshell, this algorithm starts with a careful initialization obtained by means of a spectral method, and then refines this initial estimate by iteratively applying novel update rules, which have low computational complexity, much like in a gradient descent scheme. The main contribution is that this algorithm is shown to rigorously allow the exact retrieval of phase information from a nearly minimal number of random measurements. Indeed, the sequence of successive iterates provably converges to the solution at a geometric rate so that the proposed scheme is efficient both in terms of computational and data resources. In theory, a variation on this scheme leads to a near-linear time algorithm for a physically realizable model based on coded diffraction patterns. We illustrate the effectiveness of our methods with various experiments on image data. Underlying our analysis are insights for the analysis of non-convex optimization schemes that may have implications for computational problems beyond phase retrieval.
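
A minimal sketch of the two-stage scheme described above, for Gaussian measurements: spectral initialization from the leading eigenvector of $\frac{1}{m}\sum_r y_r a_r a_r^*$, followed by Wirtinger-flow-style gradient updates. Step size, iteration count, and sizes are illustrative rather than the paper's tuned schedule.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 32, 8 * 32                          # number of measurements ~ C * n
x = rng.normal(size=n) + 1j * rng.normal(size=n)
A = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2)
y = np.abs(A @ x) ** 2                     # phaseless samples

# Spectral initialization: leading eigenvector of (1/m) sum_r y_r a_r a_r^*,
# scaled to the estimated signal energy sqrt(mean(y)) ~ ||x||.
Y = (A.conj().T * y) @ A / m
w, V = np.linalg.eigh(Y)
z = np.sqrt(y.mean()) * V[:, -1]

# Gradient-style updates for (1/2m) sum_r (|<a_r, z>|^2 - y_r)^2.
mu = 0.2
for _ in range(500):
    Az = A @ z
    grad = A.conj().T @ ((np.abs(Az) ** 2 - y) * Az) / m
    z = z - (mu / np.linalg.norm(z) ** 2) * grad

# Compare up to the unavoidable global phase ambiguity.
phase = np.vdot(z, x) / abs(np.vdot(z, x))
print("relative error:", np.linalg.norm(x - phase * z) / np.linalg.norm(x))
```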

318 citations


Journal ArticleDOI
TL;DR: In this paper, an algorithm inspired by sparse subspace clustering (SSC) was proposed to cluster noisy data, and a theory demonstrating its correctness was developed by using geometric functional analysis.
Abstract: Subspace clustering refers to the task of finding a multi-subspace representation that best fits a collection of points taken from a high-dimensional space. This paper introduces an algorithm inspired by sparse subspace clustering (SSC) [25] to cluster noisy data, and develops some novel theory demonstrating its correctness. In particular, the theory uses ideas from geometric functional analysis to show that the algorithm can accurately recover the underlying subspaces under minimal requirements on their orientation, and on the number of samples per subspace. Synthetic as well as real data experiments complement our theoretical study, illustrating our approach and demonstrating its effectiveness.
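
As an illustration of the Lasso-based variant analyzed for noisy data, here is a compact sketch: each point is regressed on all the others with an $\ell_1$ penalty, and the resulting coefficient magnitudes define an affinity for spectral clustering. The penalty level and solver settings are illustrative, not the paper's prescriptions.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def ssc_noisy(X, n_clusters, lam=0.05):
    """Sketch of Lasso-based SSC on the columns of X (one point per column)."""
    d, N = X.shape
    C = np.zeros((N, N))
    for i in range(N):
        # Express point i as a sparse combination of the other points.
        others = np.delete(np.arange(N), i)
        model = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
        model.fit(X[:, others], X[:, i])
        C[others, i] = model.coef_
    W = np.abs(C) + np.abs(C).T            # symmetrized affinity matrix
    return SpectralClustering(n_clusters=n_clusters,
                              affinity='precomputed').fit_predict(W)
```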

297 citations


Proceedings Article
08 Dec 2014
TL;DR: In this paper, the authors derive a second-order ordinary differential equation (ODE) that is the limit of Nesterov's accelerated gradient method and can serve as a tool for its analysis.
Abstract: We derive a second-order ordinary differential equation (ODE) which is the limit of Nesterov's accelerated gradient method. This ODE exhibits approximate equivalence to Nesterov's scheme and thus can serve as a tool for analysis. We show that the continuous-time ODE allows for a better understanding of Nesterov's scheme. As a byproduct, we obtain a family of schemes with similar convergence rates. The ODE interpretation also suggests restarting Nesterov's scheme, leading to an algorithm that can be rigorously proven to converge at a linear rate whenever the objective is strongly convex.
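
A minimal sketch of a restarted scheme in this spirit: run Nesterov's accelerated gradient and reset the momentum whenever the step length $\Vert x_k - x_{k-1}\Vert$ stops increasing (a "speed restart"). The restart criterion and momentum sequence below are illustrative choices, not a tuned implementation of the paper's algorithm.

```python
import numpy as np

def nesterov_speed_restart(grad, x0, step, iters=1000):
    """Nesterov's accelerated gradient with a simple speed-restart rule.

    grad : gradient of the (convex) objective, supplied by the caller.
    step : fixed step size, assumed <= 1/L for L-smooth objectives.
    """
    x = np.asarray(x0, dtype=float)
    y = x.copy()
    k, last_move = 1, 0.0
    for _ in range(iters):
        x_new = y - step * grad(y)
        move = np.linalg.norm(x_new - x)
        if move < last_move:               # per-step progress decreased:
            k = 1                          # restart by dropping the momentum
        y = x_new + ((k - 1) / (k + 2.0)) * (x_new - x)
        x, k, last_move = x_new, k + 1, move
    return x

# Tiny usage example on a strongly convex quadratic (illustrative).
A = np.diag([1.0, 10.0])
xhat = nesterov_speed_restart(lambda z: A @ z, np.array([5.0, 5.0]), step=0.1)
```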

210 citations


Journal ArticleDOI
TL;DR: SLOPE, short for Sorted L-One Penalized Estimation, is a new estimator whose regularizer is a sorted $\ell_1$ norm; with the Benjamini-Hochberg weights $\lambda_{\mathrm{BH}}$ it provably controls the FDR under orthogonal designs, and it appears to have appreciable inferential properties under more general designs $X$ while having substantial power, as demonstrated in a series of experiments running on both simulated and real data.
Abstract: We introduce a new estimator for the vector of coefficients $\beta$ in the linear model $y=X\beta+z$, where $X$ has dimensions $n\times p$ with $p$ possibly larger than $n$. SLOPE, short for Sorted L-One Penalized Estimation, is the solution to \[\min_{b\in\mathbb{R}^p}\frac{1}{2}\Vert y-Xb\Vert_{\ell_2}^2+\lambda_1\vert b\vert_{(1)}+\lambda_2\vert b\vert_{(2)}+\cdots+\lambda_p\vert b\vert_{(p)},\] where $\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_p\ge0$ and $\vert b\vert_{(1)}\ge\vert b\vert_{(2)}\ge\cdots\ge\vert b\vert_{(p)}$ are the decreasing absolute values of the entries of $b$. This is a convex program and we demonstrate a solution algorithm whose computational complexity is roughly comparable to that of classical $\ell_1$ procedures such as the Lasso. Here, the regularizer is a sorted $\ell_1$ norm, which penalizes the regression coefficients according to their rank: the higher the rank, that is, the stronger the signal, the larger the penalty. This is similar to the Benjamini and Hochberg [J. Roy. Statist. Soc. Ser. B 57 (1995) 289-300] procedure (BH), which compares more significant $p$-values with more stringent thresholds. One notable choice of the sequence $\{\lambda_i\}$ is given by the BH critical values $\lambda_{\mathrm{BH}}(i)=z(1-i\cdot q/(2p))$, where $q\in(0,1)$ and $z(\alpha)$ is the $\alpha$th quantile of a standard normal distribution. SLOPE aims to provide finite sample guarantees on the selected model; of special interest is the false discovery rate (FDR), defined as the expected proportion of irrelevant regressors among all selected predictors. Under orthogonal designs, SLOPE with $\lambda_{\mathrm{BH}}$ provably controls FDR at level $q$. Moreover, it also appears to have appreciable inferential properties under more general designs $X$ while having substantial power, as demonstrated in a series of experiments running on both simulated and real data.
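
Since the objective is sorted-$\ell_1$-penalized least squares, a proximal gradient method applies once the prox of the sorted $\ell_1$ norm is available, and that prox reduces to an isotonic regression on the sorted magnitudes. The sketch below uses this reduction together with the BH-style weights; it is a minimal illustration, not the paper's own implementation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.isotonic import isotonic_regression

def prox_sorted_l1(y, lam):
    """argmin_b 0.5*||b - y||^2 + sum_i lam_i * |b|_(i), lam nonincreasing.

    Reduces to isotonic regression of the decreasingly sorted magnitudes
    minus lam (projection onto the nonincreasing cone), clipped at zero.
    """
    sign, mag = np.sign(y), np.abs(y)
    order = np.argsort(-mag)
    v = isotonic_regression(mag[order] - lam, increasing=False)
    out = np.zeros_like(y)
    out[order] = np.maximum(v, 0.0)
    return sign * out

def slope(X, y, q=0.1, iters=500):
    """Proximal-gradient sketch of SLOPE with weights lam_i = z(1 - i*q/(2p))."""
    n, p = X.shape
    lam = norm.ppf(1 - q * np.arange(1, p + 1) / (2 * p))
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(iters):
        g = X.T @ (X @ b - y)
        b = prox_sorted_l1(b - g / L, lam / L)
    return b
```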

8 citations


Patent
11 Feb 2014
TL;DR: In this article, a method is provided for producing an image from a device with a plurality of sensors and a plurality of time-to-digital converters (TDCs); data signals are transmitted from the sensors to the TDCs, and a binary matrix indicates which sensors are connected to which TDCs.
Abstract: A method for providing an image from a device with a plurality of sensors and a plurality of time-to-digital converters (TDCs) is provided. Data signals are generated by some of the plurality of sensors, wherein each sensor of the plurality of sensors provides output in parallel to more than one TDC of the plurality of TDCs, wherein each TDC of the plurality of TDCs receives in parallel input from more than one sensor of the plurality of sensors, and wherein a binary matrix indicates which sensors are connected to which TDCs. The data signals are transmitted from the sensors to the TDCs. TDC signals are generated from the data signals. Group testing is used to decode the TDC signals based on the binary matrix.
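
For intuition, a toy combinatorial group-testing decoder (COMP-style elimination) is sketched below; the patent's actual decoding procedure may differ, and all names here are illustrative.

```python
import numpy as np

def decode_hits(M, tdc_fired):
    """Toy group-testing decoder, not necessarily the patent's exact method.

    M         : binary matrix, M[t, s] = 1 if sensor s feeds TDC t.
    tdc_fired : boolean vector, True if TDC t registered a signal.
    A sensor is ruled out as soon as any TDC it is wired to stayed
    silent; the remaining sensors are the candidate hits.
    """
    silent = ~np.asarray(tdc_fired, dtype=bool)
    ruled_out = M[silent].sum(axis=0) > 0
    return np.flatnonzero(~ruled_out)
```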

5 citations


Journal ArticleDOI
01 Jan 2014
TL;DR: The 2013 Dannie Heineman Prize was awarded to Emmanuel Jean Candès, Stanford/USA, who, as one of the architects of the compressive sensing principle, built the bridge between foundational research and the manifold practical uses of this theory, as mentioned in this paper.
Abstract: The 2013 Dannie Heineman Prize (Dannie-Heineman-Preis) was awarded to Emmanuel Jean Candès, Stanford/USA. As one of the architects of the compressive sensing principle, Candès built the bridge between foundational research and the manifold practical applications of this theory, and in doing so has decisively shaped the recent development of mathematical statistics, applied mathematics, and neighboring fields.