Journal ArticleDOI

Imaging via Compressive Sampling

21 Mar 2008-IEEE Signal Processing Magazine (IEEE)-Vol. 25, Iss: 2, pp 14-20
TL;DR: Image compression algorithms convert high-resolution images into relatively small bit streams, in effect turning a large digital data set into a substantially smaller one; this article introduces compressive sampling and recovery using convex programming.
Abstract: Image compression algorithms convert high-resolution images into relatively small bit streams, in effect turning a large digital data set into a substantially smaller one. This article introduces compressive sampling and recovery using convex programming.
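
To make the recovery step concrete, here is a minimal sketch of compressive sampling followed by l1 recovery via convex programming (basis pursuit). It assumes the numpy and cvxpy packages; the Gaussian sensing matrix and the problem sizes are illustrative choices, not details from the article.

```python
# Sketch of compressive sampling + l1 recovery (basis pursuit).
# Assumes numpy and cvxpy; all sizes are illustrative.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8                  # signal length, measurements, sparsity

x_true = np.zeros(n)                  # k-sparse signal to be recovered
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                        # m << n compressive measurements

x = cp.Variable(n)                    # basis pursuit: min ||x||_1 s.t. Ax = y
cp.Problem(cp.Minimize(cp.norm(x, 1)), [A @ x == y]).solve()
print("max recovery error:", np.abs(x.value - x_true).max().round(6))
```

With m comfortably above the sparsity-dependent threshold, the l1 program typically recovers x_true to solver precision.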
Citations
Proceedings Article
01 Mar 2008
TL;DR: This paper overviews the recent work on compressive sensing, a new approach to data acquisition in which analog signals are digitized for processing not via uniform sampling but via measurements using more general, even random, test functions.
Abstract: This paper overviews the recent work on compressive sensing, a new approach to data acquisition in which analog signals are digitized for processing not via uniform sampling but via measurements using more general, even random, test functions. In stark contrast with conventional wisdom, the new theory asserts that one can combine "low-rate sampling" with digital computational power for efficient and accurate signal acquisition. Compressive sensing systems directly translate analog data into a compressed digital form; all we need to do is "decompress" the measured data through an optimization on a digital computer. The implications of compressive sensing are promising for many applications and enable the design of new kinds of analog-to-digital converters, cameras, and imaging systems.

1,537 citations
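
A toy illustration of the acquisition model this overview describes: generalized samples against random test functions rather than uniform samples, followed by digitization. The Bernoulli patterns, sizes, and 12-bit quantization are assumptions made for the sketch, not details from the paper.

```python
# Sketch of compressive acquisition: correlate the signal (a dense vector
# standing in for the analog scene) against random +/-1 test functions.
# numpy only; all sizes are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, m = 1024, 128                             # ambient dimension, measurements

signal = rng.standard_normal(n)              # stand-in for the analog scene
Phi = rng.choice([-1.0, 1.0], size=(m, n))   # random +/-1 test functions

y = Phi @ signal / n                         # m generalized samples, m << n
bits = np.round(y * 2**12).astype(np.int16)  # crude 12-bit quantization
print(f"{n} 'analog' values -> {bits.nbytes} bytes of compressed digital data")
```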

Journal ArticleDOI
TL;DR: This paper provides a survey on the development of D2ITS, discussing the functionality of its key components and some deployment issues; future research directions for the developed system are also presented.
Abstract: For the last two decades, intelligent transportation systems (ITS) have emerged as an efficient way of improving the performance of transportation systems, enhancing travel security, and providing more choices to travelers. A significant change in ITS in recent years is that much more data are collected from a variety of sources and can be processed into various forms for different stakeholders. The availability of a large amount of data can potentially lead to a revolution in ITS development, changing an ITS from a conventional technology-driven system into a more powerful multifunctional data-driven intelligent transportation system (D2ITS): a system that is vision, multisource, and learning-algorithm driven to optimize its performance. Furthermore, D2ITS is trending to become a more intelligent, privacy-aware, and people-centric system. In this paper, we provide a survey on the development of D2ITS, discussing the functionality of its key components and some deployment issues associated with D2ITS. Future research directions for the development of D2ITS are also presented.

1,336 citations

Journal ArticleDOI
TL;DR: A theoretical framework in which dynamic mode decomposition (DMD) is defined as the eigendecomposition of an approximating linear operator; this generalizes DMD to a larger class of datasets, including nonsequential time series, and shows that under certain conditions DMD is equivalent to linear inverse modeling (LIM).
Abstract: Originally introduced in the fluid mechanics community, dynamic mode decomposition (DMD) has emerged as a powerful tool for analyzing the dynamics of nonlinear systems. However, existing DMD theory deals primarily with sequential time series for which the measurement dimension is much larger than the number of measurements taken. We present a theoretical framework in which we define DMD as the eigendecomposition of an approximating linear operator. This generalizes DMD to a larger class of datasets, including nonsequential time series. We demonstrate the utility of this approach by presenting novel sampling strategies that increase computational efficiency and mitigate the effects of noise, respectively. We also introduce the concept of linear consistency, which helps explain the potential pitfalls of applying DMD to rank-deficient datasets, illustrating with examples. Such computations are not considered in the existing literature, but can be understood using our more general framework. In addition, we show that our theory strengthens the connections between DMD and Koopman operator theory. It also establishes connections between DMD and other techniques, including the eigensystem realization algorithm (ERA), a system identification method, and linear inverse modeling (LIM), a method from climate science. We show that under certain conditions, DMD is equivalent to LIM.

1,067 citations


Cites background from "Imaging via Compressive Sampling"

  • ...) This approach has proven successful in a number of applications, including dynamic MRI [57, 98], facial recognition [182], imaging [46, 136], and radar [77, 129]....

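
As a sketch of the framework summarized above, the following computes exact DMD: a best-fit linear operator mapping one snapshot set to the other is formed through the SVD, and its eigendecomposition yields DMD eigenvalues and modes. The synthetic data, truncation rank, and variable names are illustrative, not taken from the paper.

```python
# Exact DMD as the eigendecomposition of an approximating linear operator.
# numpy only; the snapshot data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n, m, r = 64, 30, 10                   # state dim, snapshot pairs, rank

# Synthetic snapshot pairs y_k = A_true x_k for a known linear system
A_true = np.diag(np.exp(1j * rng.uniform(0.0, 0.5, n)))
X = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
Y = A_true @ X

# Project the best-fit operator onto the leading left singular vectors of X
U, s, Vh = np.linalg.svd(X, full_matrices=False)
U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
Atilde = U.conj().T @ Y @ Vh.conj().T / s      # r x r approximating operator
evals, W = np.linalg.eig(Atilde)
modes = Y @ Vh.conj().T / s @ W                # exact DMD modes
print("leading DMD eigenvalues:", np.round(evals[:3], 3))
```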

Journal ArticleDOI
TL;DR: This paper establishes achievable bounds for the l1 error of the best k-term approximation and derives bounds, with similar growth behavior, for the basis pursuit l1 recovery error, indicating that sparse recovery may suffer large errors in the presence of basis mismatch.
Abstract: The theory of compressed sensing suggests that successful inversion of an image of the physical world (broadly defined to include speech signals, radar/sonar returns, vibration records, sensor array snapshot vectors, 2-D images, and so on) for its source modes and amplitudes can be achieved at measurement dimensions far lower than what might be expected from the classical theories of spectrum or modal analysis, provided that the image is sparse in an a priori known basis. For imaging problems in spectrum analysis, and passive and active radar/sonar, this basis is usually taken to be a DFT basis. However, in reality no physical field is sparse in the DFT basis or in any a priori known basis. No matter how finely we grid the parameter space, the sources may not lie in the center of the grid cells, and consequently there is mismatch between the assumed and the actual bases for sparsity. In this paper, we study the sensitivity of compressed sensing to mismatch between the assumed and the actual sparsity bases. We start by analyzing the effect of basis mismatch on the best k-term approximation error, which is central to providing exact sparse recovery guarantees. We establish achievable bounds for the l1 error of the best k-term approximation and show that these bounds grow linearly with the image (or grid) dimension and the mismatch level between the assumed and actual bases for sparsity. We then derive bounds, with similar growth behavior, for the basis pursuit l1 recovery error, indicating that the sparse recovery may suffer large errors in the presence of basis mismatch. Although we present our results in the context of basis pursuit, our analysis applies to any sparse recovery principle that relies on the accuracy of best k-term approximations for its performance guarantees. We particularly highlight the problematic nature of basis mismatch in Fourier imaging, where spillage from off-grid DFT components turns a sparse representation into an incompressible one. We substantiate our mathematical analysis by numerical examples that demonstrate a considerable performance degradation for image inversion from compressed sensing measurements in the presence of basis mismatch, for problem sizes common to radar and sonar.

822 citations
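
The basis mismatch phenomenon in Fourier imaging is easy to reproduce numerically: a sinusoid half a bin off the DFT grid is no longer sparse in the DFT basis. A small numpy demonstration, with all sizes chosen for illustration:

```python
# Off-grid sources are not sparse in the DFT basis: compare the energy left
# outside the best k DFT terms for on-grid and off-grid sinusoids.
import numpy as np

N, k = 256, 10
t = np.arange(N)

on_grid  = np.exp(2j * np.pi * 40.0 * t / N)   # frequency on the DFT grid
off_grid = np.exp(2j * np.pi * 40.5 * t / N)   # half a bin off the grid

for name, x in [("on-grid ", on_grid), ("off-grid", off_grid)]:
    c = np.sort(np.abs(np.fft.fft(x)))[::-1]       # DFT magnitudes, sorted
    tail = np.linalg.norm(c[k:]) / np.linalg.norm(c)
    print(f"{name}: relative energy outside the best {k} DFT terms = {tail:.3f}")
```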

Journal ArticleDOI
TL;DR: In this paper, an advanced image reconstruction algorithm for pseudothermal ghost imaging, based on compressed sensing, is presented; it reduces the number of measurements required for image recovery by an order of magnitude and can also be applied to data from past pseudothermal ghost-imaging experiments.
Abstract: We describe an advanced image reconstruction algorithm for pseudothermal ghost imaging, reducing the number of measurements required for image recovery by an order of magnitude. The algorithm is based on compressed sensing, a technique that enables the reconstruction of an N-pixel image from far fewer than N measurements. We demonstrate the algorithm using experimental data from a pseudothermal ghost-imaging setup. The algorithm can be applied to data taken from past pseudothermal ghost-imaging experiments, improving the reconstruction’s quality.

793 citations
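
For context, here is a toy simulation of the pseudothermal ghost-imaging measurement model together with the conventional correlation-based estimate; the paper's contribution is to replace that estimate with a compressed-sensing reconstruction, which is not reproduced here. All names and sizes are illustrative assumptions.

```python
# Toy pseudothermal ghost imaging: bucket values are inner products of the
# object with random speckle patterns; conventional GI correlates bucket
# fluctuations with pattern fluctuations. numpy only; sizes illustrative.
import numpy as np

rng = np.random.default_rng(3)
npix, m = 32 * 32, 400                   # image pixels, speckle realizations

obj = np.zeros(npix)
obj[100:140] = 1.0                       # simple binary object (flattened)
I = rng.exponential(1.0, size=(m, npix)) # pseudothermal speckle intensities
B = I @ obj                              # bucket detector values

# Conventional GI estimate: average of fluctuation correlations
gi = (B - B.mean()) @ (I - I.mean(axis=0)) / m
print("correlation of GI estimate with object:",
      round(float(np.corrcoef(gi, obj)[0, 1]), 3))
```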

References
Book
01 Mar 2004
TL;DR: A comprehensive introduction to convex optimization, with the focus on recognizing convex optimization problems and then finding the most appropriate technique for solving them.
Abstract: Convex optimization problems arise frequently in many different fields. A comprehensive introduction to the subject, this book shows in detail how such problems can be solved numerically with great efficiency. The focus is on recognizing convex optimization problems and then finding the most appropriate technique for solving them. The text contains many worked examples and homework exercises and will appeal to students, researchers and practitioners in fields such as engineering, computer science, mathematics, statistics, finance, and economics.

33,341 citations

Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program, Basis Pursuit in signal processing.
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so that the coefficients belong to an l_p ball for 0 < p ≤ 1.

18,609 citations
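
The "solving a linear program" claim rests on the standard split-variable reformulation of basis pursuit. Writing Φ for the measurement matrix and y for the n measurements (symbols chosen here for illustration, not taken from the abstract):

```latex
% Basis pursuit as a linear program: splitting x = u - v with u, v >= 0
% turns the l1 objective into a linear one.
\[
  \min_{x}\ \|x\|_1 \quad \text{s.t.}\quad \Phi x = y
  \qquad\Longleftrightarrow\qquad
  \min_{u,v \ge 0}\ \mathbf{1}^{\top}(u+v) \quad \text{s.t.}\quad \Phi(u-v) = y,
\]
% with the recovered coefficient vector given by x* = u* - v*.
```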

Book
01 Jan 1998
TL;DR: An introduction to wavelet analysis, proceeding from the Fourier transform through time-frequency representations, frames, wavelet bases, wavelet packet and local cosine bases, to approximation, estimation, and transform coding.
Abstract: Introduction to a Transient World. Fourier Kingdom. Discrete Revolution. Time Meets Frequency. Frames. Wavelet Zoom. Wavelet Bases. Wavelet Packet and Local Cosine Bases. An Approximation Tour. Estimations are Approximations. Transform Coding. Appendix A: Mathematical Complements. Appendix B: Software Toolboxes.

17,693 citations


"Imaging via Compressive Sampling" refers methods in this paper

  • ...of this research is the wavelet transform [9], [16]; switching from sinusoid-based representations...

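
The citation context above points at wavelet-domain sparsity, which is exactly what transform coding exploits: keeping a small fraction of the largest wavelet coefficients of a piecewise smooth signal already gives a good reconstruction. A sketch assuming the PyWavelets (pywt) package; the test signal and the 5% coefficient budget are arbitrary choices.

```python
# Transform-coding sketch: wavelet-decompose, keep the largest 5% of
# coefficients, reconstruct. Assumes numpy and PyWavelets (pywt).
import numpy as np
import pywt

t = np.linspace(0, 1, 1024)
x = np.piecewise(t, [t < 0.4, t >= 0.4],
                 [lambda s: np.sin(8 * np.pi * s), lambda s: 0.3 - s])

coeffs = pywt.wavedec(x, "db4", level=6)          # orthonormal Daubechies basis
flat = np.concatenate([c.ravel() for c in coeffs])
thresh = np.sort(np.abs(flat))[-len(flat) // 20]  # keep the largest ~5 percent

kept = [pywt.threshold(c, thresh, mode="hard") for c in coeffs]
x_hat = pywt.waverec(kept, "db4")[: len(x)]
print("relative error with 5% of coefficients:",
      round(float(np.linalg.norm(x - x_hat) / np.linalg.norm(x)), 4))
```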

Book
01 May 1992
TL;DR: A comprehensive introduction to wavelets, covering the continuous and discrete wavelet transforms, multiresolution analysis, and orthonormal bases of compactly supported wavelets.
Abstract: Introduction Preliminaries and notation The what, why, and how of wavelets The continuous wavelet transform Discrete wavelet transforms: Frames Time-frequency density and orthonormal bases Orthonormal bases of wavelets and multiresolution analysis Orthonormal bases of compactly supported wavelets More about the regularity of compactly supported wavelets Symmetry for compactly supported wavelet bases Characterization of functional spaces by means of wavelets Generalizations and tricks for orthonormal wavelet bases References Indexes.

16,073 citations


"Imaging via Compressive Sampling" refers background in this paper

  • ...The most notable product of this research is the wavelet transform [9], [16]; switching from sinusoid-based representations to wavelets marked a watershed in image compression, and is the essential difference between the classical JPEG [18] and modern JPEG-2000 [22] standards....

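
One property behind the compactly supported orthonormal bases constructed in the book can be checked directly: the Daubechies scaling filter is orthonormal to its even shifts. A quick numerical check, assuming the PyWavelets (pywt) package:

```python
# Double-shift orthogonality of the Daubechies scaling filter:
# sum_n h[n] h[n + 2k] should equal delta[k]. Assumes numpy and pywt.
import numpy as np
import pywt

h = np.array(pywt.Wavelet("db4").dec_lo)   # length-8 Daubechies scaling filter

for k in range(4):
    val = float(np.dot(h[: len(h) - 2 * k], h[2 * k:]))
    print(f"k = {k}: sum_n h[n] h[n+2k] = {val:+.3e}")
```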

Journal ArticleDOI
TL;DR: In this paper, the authors considered the model problem of reconstructing an object from incomplete frequency samples and showed that with probability at least 1 − O(N^(−M)), f can be reconstructed exactly as the solution to the l1 minimization problem.
Abstract: This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes, f(t) = Σ_{τ∈T} f(τ) δ(t − τ), obeying |T| ≤ C_M · (log N)^(−1) · |Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 − O(N^(−M)), f can be reconstructed exactly as the solution to the l1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T| · log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1 − O(N^(−M)) would in general require a number of frequency samples at least proportional to |T| · log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples, provided that the number of jumps (discontinuities) obeys the condition above, by minimizing other convex functionals such as the total variation of f.

14,587 citations
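
A toy version of the paper's model problem: recover a spike train from its Fourier coefficients on a random frequency set by l1 minimization. For self-containment, a basic iterative soft-thresholding loop stands in for an exact basis pursuit solver, so this is a sketch under that substitution; all sizes and parameters are illustrative.

```python
# Recover a sparse spike train from a random subset of DFT coefficients
# via l1-regularized least squares (ISTA). numpy only; sizes illustrative.
import numpy as np

rng = np.random.default_rng(4)
N, T, M = 512, 8, 120                    # signal length, spikes, |Omega|

f = np.zeros(N)
f[rng.choice(N, T, replace=False)] = rng.standard_normal(T) + 2.0
omega = rng.choice(N, M, replace=False)  # observed frequency set Omega
y = np.fft.fft(f)[omega]                 # partial Fourier data

def A(x):                                # partial DFT
    return np.fft.fft(x)[omega]

def At(z):                               # its adjoint (zero-fill, inverse DFT)
    full = np.zeros(N, dtype=complex)
    full[omega] = z
    return np.fft.ifft(full) * N

x, lam, step = np.zeros(N), 2.0, 1.0 / N
for _ in range(500):                     # ISTA: gradient step + soft threshold
    g = (x - step * At(A(x) - y)).real   # f is real, so keep iterates real
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
print("max reconstruction error:", np.abs(x - f).max().round(4))
```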