Author

Nicole Tomczak-Jaegermann

Bio: Nicole Tomczak-Jaegermann is an academic researcher from the University of Alberta. The author has contributed to research in topics: Banach space & Convex body. The author has an h-index of 31 and has co-authored 127 publications receiving 4,345 citations. Previous affiliations of Nicole Tomczak-Jaegermann include the University of Kiel & the Polish Academy of Sciences.


Papers
Journal ArticleDOI
TL;DR: The paper considers random matrices with independent subgaussian columns and provides a new elementary proof of the Uniform Uncertainty Principle for such matrices; the proof combines a simple measure-concentration argument with a covering argument, both standard tools of high-dimensional convexity.
Abstract: The paper considers random matrices with independent subgaussian columns and provides a new elementary proof of the Uniform Uncertainty Principle for such matrices. The Principle was introduced by Candès, Romberg and Tao in 2004; for subgaussian random matrices it was earlier proved by the present authors, as a consequence of a general result based on a generic chaining method of Talagrand. The present proof combines a simple measure-concentration argument and a covering argument, which are standard tools of high-dimensional convexity.
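
As a rough numerical illustration of what the Uniform Uncertainty Principle asserts (a Monte-Carlo probe, not the paper's proof, and weaker than the uniform statement over all sparse vectors), the sketch below draws a Gaussian matrix with independent columns as a stand-in for the general subgaussian case and checks how well it preserves the norms of random sparse vectors; all dimensions are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, s = 1000, 200, 10          # ambient dimension, measurements, sparsity

# Subgaussian random matrix with independent columns, normalized so that
# E ||A x||_2^2 = ||x||_2^2 for unit vectors x.
A = rng.standard_normal((k, n)) / np.sqrt(k)

# Probe the restricted-isometry behavior: for random s-sparse unit vectors x,
# ||A x||_2 should stay close to 1.  (A Monte-Carlo check of random sparse
# vectors, not a verification of the uniform statement.)
ratios = []
for _ in range(1000):
    x = np.zeros(n)
    support = rng.choice(n, size=s, replace=False)
    x[support] = rng.standard_normal(s)
    x /= np.linalg.norm(x)
    ratios.append(np.linalg.norm(A @ x))

print(f"min {min(ratios):.3f}, max {max(ratios):.3f}")  # both near 1
```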

350 citations

Journal ArticleDOI
TL;DR: In this paper, the authors studied the behavior of the smallest singular value of a rectangular random matrix, i.e., a matrix whose entries are independent random variables satisfying some additional conditions, and showed that such a matrix is a good isomorphism on its image.
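
The phenomenon is easy to observe numerically. A quick sketch with Gaussian entries standing in for the general independent entries treated in the paper, and with arbitrary illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 2000, 400   # tall N x n matrix, N >> n (illustrative sizes)

A = rng.standard_normal((N, n))
smin = np.linalg.svd(A, compute_uv=False)[-1]   # smallest singular value

# For i.i.d. entries one expects s_min(A) to concentrate near sqrt(N) - sqrt(n),
# so ||A x|| >= s_min ||x||: A acts as a good isomorphism on its image.
print(f"s_min = {smin:.1f},  sqrt(N) - sqrt(n) = {np.sqrt(N) - np.sqrt(n):.1f}")
```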

322 citations

Journal ArticleDOI
TL;DR: In this article, the authors presented a randomized method to approximate any vector v from a set T ⊂ R^n, given the set T and k scalar products of v with i.i.d. isotropic subgaussian random vectors, where k ≪ n; the degree of approximation is determined by a natural geometric parameter associated with the set T.
Abstract: We present a randomized method to approximate any vector \(\upsilon\) from a set \(T \subset {\mathbb{R}}^n\). The data one is given is the set T, vectors \((X_i)^{k}_{i=1}\) of \({\mathbb{R}}^n\) and k scalar products \((\langle X_i, \upsilon\rangle)^{k}_{i=1}\), where \((X_i)^k_{i=1}\) are i.i.d. isotropic subgaussian random vectors in \({\mathbb{R}}^n\), and \(k \ll n\). We show that with high probability, any \(y \in T\) for which \((\langle X_i, y\rangle)^k_{i=1}\) is close to the data vector \((\langle X_i, \upsilon\rangle)^k_{i=1}\) will be a good approximation of \(\upsilon\), and that the degree of approximation is determined by a natural geometric parameter associated with the set T.
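
A small numerical illustration of this claim, taking T to be the set of s-sparse vectors for concreteness (the paper treats general sets T; all sizes below are arbitrary). Perturbing \(\upsilon\) within T shows the measurement misfit tracking the true distance, in line with the statement that data-consistent points of T are good approximations:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, s = 200, 60, 5

# Unknown vector v in the set T of s-sparse vectors.
v = np.zeros(n)
v[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)

# i.i.d. isotropic (sub)gaussian measurement vectors and the k scalar products.
X = rng.standard_normal((k, n))
data = X @ v

# For y in T: if the measurements of y are close to the data vector,
# then y is close to v (and vice versa).
for noise in [0.0, 0.05, 0.2]:
    y = v.copy()
    y[np.flatnonzero(v)] += noise * rng.standard_normal(s)  # stay inside T
    misfit = np.linalg.norm(X @ y - data) / np.sqrt(k)
    print(f"||y - v|| = {np.linalg.norm(y - v):.3f},  misfit = {misfit:.3f}")
```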

259 citations


Cited by
Journal ArticleDOI
TL;DR: This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition.
Abstract: Conventional approaches to sampling signals or images follow Shannon's theorem: the sampling rate must be at least twice the maximum frequency present in the signal (Nyquist rate). In the field of data conversion, standard analog-to-digital converter (ADC) technology implements the usual quantized Shannon representation - the signal is uniformly sampled at or above the Nyquist rate. This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition. CS theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use.
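
For a back-of-the-envelope sense of the gap between the two paradigms (constants omitted; the s·log(n/s) measurement count is the standard CS heuristic rather than a figure from this survey):

```python
import numpy as np

# Conventional (Shannon/Nyquist) sampling: the rate must be at least twice
# the maximum frequency present in the signal.
f_max_hz = 4_000
nyquist_rate_hz = 2 * f_max_hz            # 8,000 samples per second

# Compressive sampling heuristic: for an s-sparse signal of length n,
# on the order of s * log(n/s) random measurements suffice.
n, s = 1_000_000, 1_000
cs_measurements = int(s * np.log(n / s))  # ~6,900 measurements vs n = 10^6

print(nyquist_rate_hz, cs_measurements)
```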

9,686 citations

Journal ArticleDOI
TL;DR: If the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program.
Abstract: Suppose we are given a vector $f$ in a class ${\cal F} \subseteq \mathbb{R}^N$, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about $f$ to be able to recover $f$ to within precision $\epsilon$ in the Euclidean ($\ell_2$) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct $f$ to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the $n$th largest entry of the vector $|f|$ (or of its coefficients in a fixed basis) obeys $|f|_{(n)} \le R \cdot n^{-1/p}$, where $R > 0$ and $p > 0$. Suppose that we take measurements $y_k = \langle f, X_k \rangle$, $k = 1, \ldots, K$, where the $X_k$ are $N$-dimensional Gaussian vectors with independent standard normal entries. Then, for each $f$ obeying the decay estimate above for some $0 < p < 1$, the reconstruction produced by the linear program recovers $f$ to within the stated accuracy, with overwhelming probability.
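
The "simple linear program" here is basis pursuit: minimize the $\ell_1$ norm of a candidate signal subject to the measurement constraints. A self-contained sketch using scipy's linprog, with illustrative dimensions and not code from the paper:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
N, K, s = 128, 50, 5   # signal length, measurements, sparsity (illustrative)

f = np.zeros(N)
f[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)
X = rng.standard_normal((K, N))       # Gaussian measurement vectors X_k
y = X @ f                             # y_k = <f, X_k>

# Basis pursuit as a linear program: minimize sum(t) over variables (f, t)
# subject to -t <= f <= t and X f = y; at the optimum, sum(t) = ||f||_1.
c = np.concatenate([np.zeros(N), np.ones(N)])
A_ub = np.block([[ np.eye(N), -np.eye(N)],    #  f - t <= 0
                 [-np.eye(N), -np.eye(N)]])   # -f - t <= 0
b_ub = np.zeros(2 * N)
A_eq = np.hstack([X, np.zeros((K, N))])       # X f = y
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * N + [(0, None)] * N)
f_hat = res.x[:N]

print(f"recovery error: {np.linalg.norm(f_hat - f):.2e}")  # ~0: exact recovery
```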

6,342 citations

Posted Content
TL;DR: In this article, it was shown that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal $f \in {\cal F}$ decay like a power-law, then it is possible to reconstruct $f$ to within very high accuracy from a small number of random measurements.
Abstract: Suppose we are given a vector $f$ in $\R^N$. How many linear measurements do we need to make about $f$ to be able to recover $f$ to within precision $\epsilon$ in the Euclidean ($\ell_2$) metric? Or more exactly, suppose we are interested in a class ${\cal F}$ of such objects--discrete digital signals, images, etc; how many linear measurements do we need to recover objects from this class to within accuracy $\epsilon$? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal $f \in {\cal F}$ decay like a power-law (or if the coefficient sequence of $f$ in a fixed basis decays like a power-law), then it is possible to reconstruct $f$ to within very high accuracy from a small number of random measurements.

5,693 citations

Book
01 Feb 1993
TL;DR: This book, as discussed by the authors, surveys the theory of convex bodies: basic convexity, boundary structure, Minkowski addition, curvature measures and quermassintegrals, mixed volumes and their inequalities, and selected applications.
Abstract: 1. Basic convexity 2. Boundary structure 3. Minkowski addition 4. Curvature measures and quermassintegrals 5. Mixed volumes 6. Inequalities for mixed volumes 7. Selected applications Appendix.
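
For orientation, two standard formulas behind the chapter titles (stated from the general theory, not quoted from the book): mixed volumes arise as the coefficients of the volume polynomial of Minkowski sums, and specializing to a ball yields the quermassintegrals via the Steiner formula.

```latex
% The volume of a Minkowski combination of convex bodies K_1, ..., K_m in R^n
% is a homogeneous polynomial in the scaling factors; its coefficients are
% the mixed volumes V(K_{i_1}, ..., K_{i_n}):
\[
  \mathrm{vol}_n(\lambda_1 K_1 + \cdots + \lambda_m K_m)
    = \sum_{i_1, \dots, i_n = 1}^{m}
      \lambda_{i_1} \cdots \lambda_{i_n}\, V(K_{i_1}, \dots, K_{i_n}).
\]
% Specializing to K + t B^n (B^n the unit ball) gives the Steiner formula,
% whose coefficients W_j(K) are the quermassintegrals of K:
\[
  \mathrm{vol}_n(K + t B^n) = \sum_{j=0}^{n} \binom{n}{j} W_j(K)\, t^j.
\]
```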

3,954 citations

Book ChapterDOI
01 May 2012
TL;DR: This is a tutorial on some basic non-asymptotic methods and concepts in random matrix theory, particularly for the problem of estimating covariance matrices in statistics and for validating probabilistic constructions of measurement matrices in compressed sensing.
Abstract: This is a tutorial on some basic non-asymptotic methods and concepts in random matrix theory. The reader will learn several tools for the analysis of the extreme singular values of random matrices with independent rows or columns. Many of these methods sprung off from the development of geometric functional analysis since the 1970's. They have applications in several fields, most notably in theoretical computer science, statistics and signal processing. A few basic applications are covered in this text, particularly for the problem of estimating covariance matrices in statistics and for validating probabilistic constructions of measurement matrices in compressed sensing. These notes are written particularly for graduate students and beginning researchers in different areas, including functional analysts, probabilists, theoretical statisticians, electrical engineers, and theoretical computer scientists.
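
As a taste of the covariance-estimation application, a minimal numpy sketch (dimensions are illustrative; sqrt(n/N) is the standard non-asymptotic benchmark rate for subgaussian samples):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100                                 # dimension (illustrative)

# Estimate the identity covariance from N i.i.d. samples and compare the
# spectral-norm error with the non-asymptotic rate sqrt(n/N).
for N in [200, 800, 3200]:
    X = rng.standard_normal((N, n))     # rows = independent samples
    Sigma_hat = X.T @ X / N             # sample covariance (true Sigma = I)
    err = np.linalg.norm(Sigma_hat - np.eye(n), ord=2)
    print(f"N={N:5d}:  ||Sigma_hat - I|| = {err:.3f},"
          f"  sqrt(n/N) = {np.sqrt(n / N):.3f}")
```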

2,780 citations