Author

Irène Waldspurger

Bio: Irène Waldspurger is an academic researcher at Paris Dauphine University. The author has contributed to research on phase retrieval and the wavelet transform, has an h-index of 12, and has co-authored 17 publications receiving 1,258 citations. Previous affiliations of Irène Waldspurger include the Centre national de la recherche scientifique and the École Normale Supérieure.

Papers
Journal ArticleDOI
TL;DR: In this article, the phase retrieval problem is cast as a nonconvex quadratic program over a complex phase vector and formulated a tractable relaxation (called PhaseCut) similar to the classical MaxCut semidefinite program.
Abstract: Phase retrieval seeks to recover a signal $x \in \mathbb{C}^p$ from the amplitude $|Ax|$ of linear measurements $Ax \in \mathbb{C}^n$. We cast the phase retrieval problem as a non-convex quadratic program over a complex phase vector and formulate a tractable relaxation (called PhaseCut) similar to the classical MaxCut semidefinite program. We solve this problem using a provably convergent block coordinate descent algorithm whose structure is similar to that of the original greedy algorithm of Gerchberg and Saxton (Optik 35:237–246, 1972), where each iteration is a matrix-vector product. Numerical results show the performance of this approach on three different phase retrieval problems, in comparison with greedy phase retrieval algorithms and matrix completion formulations.
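To make the structure of these iterations concrete, here is a minimal numpy sketch of a coordinate descent of this flavor. It is hedged in two ways: it updates a unit-modulus phase vector directly rather than the full semidefinite variable of PhaseCut, and the conventions (the matrix $M = \mathrm{diag}(b)(I - AA^+)\mathrm{diag}(b)$, random phase initialization, tolerance) are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def phasecut_bcd(A, b, n_sweeps=200, seed=0):
    """Coordinate-descent sketch for the PhaseCut objective.

    Minimizes u^* M u over unit-modulus vectors u, where
    M = diag(b) (I - A A^+) diag(b).  Each coordinate update is a
    single row-vector product with M, so every iteration reduces to
    matrix-vector products, as in the greedy Gerchberg-Saxton scheme.
    """
    n = len(b)
    P = np.eye(n) - A @ np.linalg.pinv(A)      # projector onto range(A)^perp
    M = b[:, None] * P * b[None, :]
    rng = np.random.default_rng(seed)
    u = np.exp(2j * np.pi * rng.random(n))     # random initial phases
    for _ in range(n_sweeps):
        for i in range(n):
            # Optimal unit-modulus u_i with the other coordinates fixed.
            s = M[i] @ u - M[i, i] * u[i]
            u[i] = -s / abs(s) if abs(s) > 1e-12 else 1.0
    # With the phases fixed, recover the signal by least squares.
    x = np.linalg.lstsq(A, b * u, rcond=None)[0]
    return x
```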

502 citations

Journal ArticleDOI
TL;DR: It is shown that exploiting structural assumptions on the signal and the observations, such as sparsity, smoothness, or positivity, can significantly speed up convergence and improve recovery performance.
Abstract: We study convex relaxation algorithms for phase retrieval on imaging problems. We show that exploiting structural assumptions on the signal and the observations, such as sparsity, smoothness, or positivity, can significantly speed up convergence and improve recovery performance. We detail numerical results in molecular imaging experiments simulated using data from the Protein Data Bank.
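As a toy illustration of how such a structural assumption can enter an iterative scheme, the sketch below adds a positivity projection to a basic amplitude-fitting loop. This is not the paper's convex relaxation; the loop, names, and parameters are illustrative assumptions only.

```python
import numpy as np

def retrieve_with_positivity(A, b, n_iter=500, seed=0):
    """Toy amplitude-fitting loop with a positivity projection.

    At each step we impose the measured amplitudes b on Ax (keeping
    the current phases), solve for x by least squares, and then use
    the structural assumption that the signal is nonnegative.
    """
    rng = np.random.default_rng(seed)
    x = np.abs(rng.standard_normal(A.shape[1]))
    for _ in range(n_iter):
        y = A @ x
        phase = y / np.maximum(np.abs(y), 1e-12)
        x = np.linalg.lstsq(A, b * phase, rcond=None)[0]
        x = np.maximum(np.real(x), 0.0)        # positivity projection
    return x
```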

82 citations

Journal ArticleDOI
TL;DR: In this paper, the authors consider the phase retrieval problem in which one tries to reconstruct a function from the modulus of its wavelet transform, and study the uniqueness and stability of the reconstruction.
Abstract: We consider the phase retrieval problem in which one tries to reconstruct a function from the modulus of its wavelet transform. We study the uniqueness and stability of the reconstruction. In the case where the wavelets are Cauchy wavelets, we prove that the modulus of the wavelet transform uniquely determines the function up to a global phase. We show that the reconstruction operator is continuous but not uniformly continuous. We describe how to construct pairs of functions which are far apart in $L^2$-norm but whose wavelet transforms are very close in modulus. The principle is to modulate the wavelet transform of a fixed initial function by a phase which varies slowly in both time and frequency. This construction seems to cover all the instabilities that we observe in practice; we give a partial formal justification for this fact. Finally, we describe an exact reconstruction algorithm and use it to numerically confirm our analysis of the stability question.
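For readers who want to experiment with these objects, here is a small numpy sketch computing the modulus of a Cauchy wavelet transform in the Fourier domain. The normalization, frequency grid, and scale parameterization are assumed conventions, not necessarily those of the paper.

```python
import numpy as np

def cauchy_wavelet_modulus(f, scales, p=1):
    """Modulus of a Cauchy wavelet transform of a 1-D signal.

    The Cauchy wavelet is defined in the Fourier domain by
    psi_hat(omega) = omega^p exp(-omega) for omega > 0 (and 0
    otherwise), so the transform is analytic: it only sees
    positive frequencies.
    """
    n = len(f)
    fhat = np.fft.fft(f)
    omega = 2 * np.pi * np.fft.fftfreq(n)      # angular frequency grid
    out = []
    for a in scales:
        w = np.clip(a * omega, 0.0, None)      # keep positive frequencies
        psi_hat = (w ** p) * np.exp(-w) * (omega > 0)
        out.append(np.abs(np.fft.ifft(fhat * psi_hat)))
    return np.array(out)                       # shape (len(scales), n)
```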

78 citations

Journal ArticleDOI
TL;DR: In this paper, the authors show that, with a suitable initialization procedure, the classical alternating-projections algorithm (Gerchberg–Saxton) succeeds with high probability when $m \geq Cn$, for some $C > 0$.
Abstract: We consider a phase retrieval problem where we want to reconstruct an $n$-dimensional vector from its phaseless scalar products with $m$ sensing vectors, independently sampled from complex normal distributions. We show that, with a suitable initialization procedure, the classical algorithm of alternating projections (Gerchberg–Saxton) succeeds with high probability when $m \geq Cn$, for some $C > 0$. We conjecture that this result still holds when no special initialization procedure is used, and we present numerical experiments that support this conjecture.
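A minimal numpy sketch of this scheme, spectral initialization followed by Gerchberg–Saxton alternating projections, is given below. The scaling of the initializer and the oversampling factor in the usage snippet are illustrative assumptions.

```python
import numpy as np

def spectral_init(A, b):
    # Leading eigenvector of (1/m) sum_r b_r^2 conj(a_r) a_r^T,
    # rescaled by the energy estimate ||b|| / sqrt(m).
    m, _ = A.shape
    Y = (A.conj().T * b ** 2) @ A / m
    _, vecs = np.linalg.eigh(Y)
    return vecs[:, -1] * (np.linalg.norm(b) / np.sqrt(m))

def alternating_projections(A, b, n_iter=200):
    # Gerchberg-Saxton: impose the measured amplitudes b on Ax,
    # then project back onto the range of A.
    x = spectral_init(A, b)
    A_pinv = np.linalg.pinv(A)
    for _ in range(n_iter):
        y = A @ x
        phase = y / np.maximum(np.abs(y), 1e-12)
        x = A_pinv @ (b * phase)
    return x

# Usage with Gaussian sensing vectors (oversampling factor is illustrative):
rng = np.random.default_rng(0)
m, n = 8 * 64, 64
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x_hat = alternating_projections(A, np.abs(A @ x_true))
# Note: recovery is only ever up to a global phase.
```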

76 citations


Cited by

Journal ArticleDOI
TL;DR: The mathematical analysis of wavelet scattering networks explains important properties of deep convolution networks for classification.
Abstract: A wavelet scattering network computes a translation-invariant image representation which is stable to deformations and preserves high-frequency information for classification. It cascades wavelet transform convolutions with nonlinear modulus and averaging operators. The first network layer outputs SIFT-type descriptors, whereas the next layers provide complementary invariant information that improves classification. The mathematical analysis of wavelet scattering networks explains important properties of deep convolution networks for classification. A scattering representation of stationary processes incorporates higher-order moments and can thus discriminate textures having the same Fourier power spectrum. State-of-the-art classification results are obtained for handwritten digits and texture discrimination, with a Gaussian kernel SVM and a generative PCA classifier.
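The cascade structure is easy to prototype. Below is a toy two-layer 1-D scattering sketch in numpy; the filter bank itself is left as an input, and a full implementation would restrict second-layer wavelets to scales coarser than the first layer, which this sketch omits.

```python
import numpy as np

def scattering_1d(x, psi_hats, phi_hat):
    """Two-layer 1-D scattering sketch.

    psi_hats: list of Fourier-domain wavelet filters (one per scale).
    phi_hat:  Fourier-domain low-pass (averaging) filter.
    Cascades wavelet convolutions with modulus nonlinearities, then
    averages each layer's output with phi.
    """
    conv = lambda ghat, h_hat: np.fft.ifft(ghat * h_hat)
    xhat = np.fft.fft(x)

    coeffs = [np.real(conv(xhat, phi_hat))]              # order 0: x * phi
    for g1 in psi_hats:
        u1 = np.abs(conv(xhat, g1))                      # |x * psi_1|
        u1hat = np.fft.fft(u1)
        coeffs.append(np.real(conv(u1hat, phi_hat)))     # order 1
        for g2 in psi_hats:
            u2 = np.abs(conv(u1hat, g2))                 # ||x*psi_1| * psi_2|
            coeffs.append(np.real(conv(np.fft.fft(u2), phi_hat)))  # order 2
    return np.array(coeffs)
```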

1,337 citations

Journal ArticleDOI
TL;DR: This paper develops a nonconvex formulation of the phase retrieval problem together with a concrete solution algorithm; the main contribution is a proof that this algorithm rigorously allows the exact retrieval of phase information from a nearly minimal number of random measurements.
Abstract: We study the problem of recovering the phase from magnitude measurements; specifically, we wish to reconstruct a complex-valued signal $\boldsymbol{x} \in \mathbb{C}^n$ about which we have phaseless samples of the form $y_r = |\langle \boldsymbol{a}_r, \boldsymbol{x} \rangle|^2$, $r = 1, \ldots, m$ (knowledge of the phase of these samples would yield a linear system). This paper develops a nonconvex formulation of the phase retrieval problem as well as a concrete solution algorithm. In a nutshell, this algorithm starts with a careful initialization obtained by means of a spectral method, and then refines this initial estimate by iteratively applying novel update rules, which have low computational complexity, much like in a gradient descent scheme. The main contribution is that this algorithm is shown to rigorously allow the exact retrieval of phase information from a nearly minimal number of random measurements. Indeed, the sequence of successive iterates provably converges to the solution at a geometric rate, so that the proposed scheme is efficient in terms of both computational and data resources. In theory, a variation on this scheme leads to a near-linear time algorithm for a physically realizable model based on coded diffraction patterns. We illustrate the effectiveness of our methods with various experiments on image data. Underlying our analysis are insights for the analysis of nonconvex optimization schemes that may have implications for computational problems beyond phase retrieval.
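The template described here, a spectral initializer followed by low-cost gradient-style updates, can be sketched in a few lines of numpy. The step-size rule and the norm estimate below are simplified assumptions, not the paper's exact parameter choices.

```python
import numpy as np

def wirtinger_flow(A, y, n_iter=500, mu=0.2):
    """Spectral initialization + gradient updates on |Az|^2 - y.

    y holds the squared magnitudes y_r = |<a_r, x>|^2.  The Wirtinger
    gradient of f(z) = (1/2m) sum_r (|a_r^* z|^2 - y_r)^2 is
    (1/m) A^H diag(|Az|^2 - y) Az.
    """
    m, _ = A.shape
    # Spectral initializer: top eigenvector of (1/m) sum_r y_r conj(a_r) a_r^T,
    # scaled by sqrt(mean(y)) as a norm estimate (assumes Gaussian sensing).
    Y = (A.conj().T * y) @ A / m
    _, vecs = np.linalg.eigh(Y)
    z = vecs[:, -1] * np.sqrt(np.mean(y))
    step = mu / max(np.linalg.norm(z) ** 2, 1e-12)   # fixed, simplified step
    for _ in range(n_iter):
        Az = A @ z
        grad = A.conj().T @ ((np.abs(Az) ** 2 - y) * Az) / m
        z = z - step * grad
    return z
```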

1,096 citations

Journal ArticleDOI
TL;DR: The goal is to describe the current state of the art in this area, identify challenges, and suggest future directions and areas where signal processing methods can have a large impact on optical imaging and on the world of imaging at large.
Abstract: The problem of phase retrieval, i.e., the recovery of a function given the magnitude of its Fourier transform, arises in various fields of science and engineering, including electron microscopy, crystallography, astronomy, and optical imaging. Exploring phase retrieval in optical settings, specifically when the light originates from a laser, is natural since optical detection devices [e.g., charge-coupled device (CCD) cameras, photosensitive films, and the human eye] cannot measure the phase of a light wave. This is because, generally, optical measurement devices that rely on converting photons to electrons (current) do not allow for direct recording of the phase: the electromagnetic field oscillates at rates of $\sim 10^{15}$ Hz, which no electronic measurement device can follow. Indeed, optical measurement/detection systems measure the photon flux, which is proportional to the magnitude squared of the field, not the phase. Consequently, measuring the phase of optical waves (electromagnetic fields oscillating at $10^{15}$ Hz and higher) involves additional complexity, typically by requiring interference with another known field, in the process of holography.

869 citations

Book
11 Apr 2019
TL;DR: This book provides a self-contained introduction to the area of high-dimensional statistics, aimed at the first-year graduate level, and includes chapters focused on core methodology and theory, including tail bounds, concentration inequalities, uniform laws and empirical processes, and random matrices.
Abstract: Recent years have witnessed an explosion in the volume and variety of data collected in all scientific disciplines and industrial settings. Such massive data sets present a number of challenges to researchers in statistics and machine learning. This book provides a self-contained introduction to the area of high-dimensional statistics, aimed at the first-year graduate level. It includes chapters that are focused on core methodology and theory - including tail bounds, concentration inequalities, uniform laws and empirical processes, and random matrices - as well as chapters devoted to in-depth exploration of particular model classes - including sparse linear models, matrix models with rank constraints, graphical models, and various types of non-parametric models. With hundreds of worked examples and exercises, this text is intended both for courses and for self-study by graduate students and researchers in statistics, machine learning, and related fields who must understand, apply, and adapt modern statistical methods suited to large-scale data.

748 citations