
Showing papers on "Compressed sensing published in 2006"


Journal ArticleDOI
TL;DR: A practical iterative algorithm for signal reconstruction is proposed, and potential applications to coding, analog-digital (A/D) conversion, and remote wireless sensing are discussed.
Abstract: Recent results show that a relatively small number of random projections of a signal can contain most of its salient information. It follows that if a signal is compressible in some orthonormal basis, then a very accurate reconstruction can be obtained from random projections. This "compressive sampling" approach is extended here to show that signals can be accurately recovered from random projections contaminated with noise. A practical iterative algorithm for signal reconstruction is proposed, and potential applications to coding, analog-digital (A/D) conversion, and remote wireless sensing are discussed.
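As a concrete illustration of recovery from noisy random projections, the sketch below uses generic iterative hard thresholding rather than the paper's own algorithm; the dimensions, sparsity level, and noise level are arbitrary assumptions.

```python
# Generic iterative hard thresholding on noisy random projections
# (illustrative dimensions, sparsity, and noise level).
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 80, 5                       # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # k-sparse signal

Phi = rng.standard_normal((m, n)) / np.sqrt(m)                # random projections
y = Phi @ x + 0.01 * rng.standard_normal(m)                   # noisy measurements

step = 1.0 / np.linalg.norm(Phi, 2) ** 2
x_hat = np.zeros(n)
for _ in range(300):
    x_hat = x_hat + step * (Phi.T @ (y - Phi @ x_hat))   # gradient step
    x_hat[np.argsort(np.abs(x_hat))[:-k]] = 0.0          # keep the k largest entries

print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```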

672 citations


Proceedings ArticleDOI
14 May 2006
TL;DR: A new technique for efficiently acquiring and reconstructing signals based on convolution with a fixed FIR filter having random taps, which is sufficiently generic to summarize many types of compressible signals and generalizes to streaming and continuous-time signals.
Abstract: We propose and study a new technique for efficiently acquiring and reconstructing signals based on convolution with a fixed FIR filter having random taps. The method is designed for sparse and compressible signals, i.e., ones that are well approximated by a short linear combination of vectors from an orthonormal basis. Signal reconstruction involves a non-linear Orthogonal Matching Pursuit algorithm that we implement efficiently by exploiting the nonadaptive, time-invariant structure of the measurement process. While simpler and more efficient than other random acquisition techniques like Compressed Sensing, random filtering is sufficiently generic to summarize many types of compressible signals and generalizes to streaming and continuous-time signals. Extensive numerical experiments demonstrate its efficacy for acquiring and reconstructing signals sparse in the time, frequency, and wavelet domains, as well as piecewise smooth signals and Poisson processes.
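A minimal sketch of the acquisition step described above, assuming an arbitrary filter length and downsampling rate; the OMP reconstruction stage is omitted.

```python
# Acquisition by random filtering: convolve with a fixed random-tap FIR filter,
# then downsample. Filter length and downsampling rate are arbitrary choices.
import numpy as np

rng = np.random.default_rng(1)
n, taps, keep_every = 512, 16, 8

x = np.zeros(n)
x[rng.choice(n, 10, replace=False)] = rng.standard_normal(10)  # sparse in time

h = rng.standard_normal(taps)                        # fixed FIR filter, random taps
y = np.convolve(x, h, mode="full")[::keep_every]     # nonadaptive, time-invariant measurement

print("signal length:", n, "-> number of measurements:", y.size)
```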

322 citations


Proceedings Article
01 Dec 2006
TL;DR: This paper proposes algorithms and hardware to support a new theory of Compressive Imaging based on a new digital image/video camera that directly acquires random projections of the light field without first collecting the pixels/voxels.
Abstract: Compressive Sensing is an emerging field based on the revelation that a small group of nonadaptive linear projections of a compressible signal contains enough information for reconstruction and processing. In this paper, we propose algorithms and hardware to support a new theory of Compressive Imaging. Our approach is based on a new digital image/video camera that directly acquires random projections of the light field without first collecting the pixels/voxels. Our camera architecture employs a digital micromirror array to perform optical calculations of linear projections of an image onto pseudorandom binary patterns. Its hallmarks include the ability to obtain an image with a single detection element while measuring the image/video fewer times than the number of pixels/voxels; this can significantly reduce the computation required for video acquisition/encoding. Since our system relies on a single photon detector, it can also be adapted to image at wavelengths that are currently impossible with conventional CCD and CMOS imagers. We are currently testing a prototype design for the camera and include experimental results.
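The measurement model can be pictured as below: each exposure reads a single photodetector value equal to the inner product of the scene with one pseudorandom binary mirror pattern. This is an illustrative sketch, not the prototype hardware or its actual pattern design.

```python
# One photodetector reading per pseudorandom binary mirror pattern:
# M exposures give y = patterns @ scene with M far smaller than the pixel count.
import numpy as np

rng = np.random.default_rng(2)
pixels = 32 * 32                                   # vectorized image size
M = 300                                            # number of exposures

scene = rng.random(pixels)                         # stand-in for the light field
patterns = rng.integers(0, 2, size=(M, pixels))    # pseudorandom 0/1 mirror patterns

y = patterns @ scene                               # single-detector measurements
print("pixels:", pixels, "measurements:", y.size)
```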

270 citations


Proceedings ArticleDOI
14 May 2006
TL;DR: This paper demonstrates how CS principles can solve signal detection problems given incoherent measurements without ever reconstructing the signals involved, and proposes an incoherent detection and estimation algorithm (IDEA) based on matching pursuit.
Abstract: The recently introduced theory of compressed sensing (CS) enables the reconstruction or approximation of sparse or compressible signals from a small set of incoherent projections; often the number of projections can be much smaller than the number of Nyquist rate samples. In this paper, we show that the CS framework is information scalable to a wide range of statistical inference tasks. In particular, we demonstrate how CS principles can solve signal detection problems given incoherent measurements without ever reconstructing the signals involved. We specifically study the case of signal detection in strong interference and noise and propose an incoherent detection and estimation algorithm (IDEA) based on matching pursuit. The number of measurements and computations necessary for successful detection using IDEA is significantly lower than that necessary for successful reconstruction. Simulations show that IDEA is very resilient to strong interference, additive noise, and measurement quantization. When combined with random measurements, IDEA is applicable to a wide range of different signal classes.
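The following toy example conveys the flavor of detecting a known signal directly from incoherent measurements, using a simple correlation statistic rather than the full matching-pursuit-based IDEA; the template, dimensions, and noise level are made-up assumptions.

```python
# Detection from incoherent measurements without reconstruction:
# a plain correlation statistic against the projected target template.
import numpy as np

rng = np.random.default_rng(3)
n, m = 1024, 64

s = np.sin(2 * np.pi * 50 * np.arange(n) / n)      # known target template
s /= np.linalg.norm(s)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)     # incoherent measurement matrix
noise = 0.1 * rng.standard_normal(m)

y_present = Phi @ (2.0 * s) + noise                # measurements, target present
y_absent = noise                                   # measurements, target absent

def stat(y):
    return abs((Phi @ s) @ y)                      # correlate with the projected template

print("statistic (present):", stat(y_present), " (absent):", stat(y_absent))
```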

233 citations


Proceedings ArticleDOI
22 Mar 2006
TL;DR: The results prove that there exists a single O(k log n) × n measurement matrix such that any such signal can be reconstructed from these measurements, with error at most O(1) times the worst case error for the class of such signals.
Abstract: In sparse approximation theory, the fundamental problem is to reconstruct a signal A ∈ ℝ^n from linear measurements ⟨A, ψi⟩ with respect to a dictionary of ψi's. Recently, there is focus on the novel direction of Compressed Sensing where the reconstruction can be done with very few (O(k log n)) linear measurements over a modified dictionary if the signal is compressible, that is, its information is concentrated in k coefficients with the original dictionary. In particular, the results prove that there exists a single O(k log n) × n measurement matrix such that any such signal can be reconstructed from these measurements, with error at most O(1) times the worst case error for the class of such signals. Compressed sensing has generated tremendous excitement both because of the sophisticated underlying mathematics and because of its potential applications. In this paper, we address outstanding open problems in Compressed Sensing. Our main result is an explicit construction of a non-adaptive measurement matrix and the corresponding reconstruction algorithm so that with a number of measurements polynomial in k, log n, 1/ε, we can reconstruct compressible signals. This is the first known polynomial time explicit construction of any such measurement matrix. In addition, our result improves the error guarantee from O(1) to 1+ε and improves the reconstruction time from poly(n) to poly(k log n). Our second result is a randomized construction of O(k polylog(n)) measurements that work for each signal with high probability and gives per-instance approximation guarantees rather than over the class of all signals. Previous work on compressed sensing does not provide such per-instance approximation guarantees; our result improves the best known number of measurements from prior work in other areas including learning theory, streaming algorithms and complexity theory for this case. Our approach is combinatorial. In particular, we use two parallel sets of group tests, one to filter and the other to certify and estimate; the resulting algorithms are quite simple to implement.

179 citations


Proceedings ArticleDOI
19 Apr 2006
TL;DR: A new framework for distributed coding and compression in sensor networks based on distributed compressed sensing, which is well-suited for sensor network applications, thanks to its simplicity, universality, computational asymmetry, tolerance to quantization and noise, robustness to measurement loss, and scalability.
Abstract: This paper develops a new framework for distributed coding and compression in sensor networks based on distributed compressed sensing (DCS). DCS exploits both intra-signal and inter-signal correlations through the concept of joint sparsity; just a few measurements of a jointly sparse signal ensemble contain enough information for reconstruction. DCS is well-suited for sensor network applications, thanks to its simplicity, universality, computational asymmetry, tolerance to quantization and noise, robustness to measurement loss, and scalability. It also requires absolutely no inter-sensor collaboration. We apply our framework to several real world datasets to validate the framework.
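As a rough sketch of the joint sparsity setup (measurement side only, with illustrative sizes and no attempt at the joint reconstruction step):

```python
# Jointly sparse ensemble: signals share a common sparse support and each
# sensor takes its own small set of random measurements, independently.
import numpy as np

rng = np.random.default_rng(4)
n, m, k, sensors = 256, 40, 6, 5

support = rng.choice(n, k, replace=False)                  # common support
signals = np.zeros((sensors, n))
signals[:, support] = rng.standard_normal((sensors, k))    # different coefficients

measurements = []
for j in range(sensors):
    Phi_j = rng.standard_normal((m, n)) / np.sqrt(m)       # per-sensor matrix
    measurements.append(Phi_j @ signals[j])                # no collaboration needed

print("per-sensor measurements:", m, "for signals of length", n)
```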

151 citations


Proceedings ArticleDOI
09 Jul 2006
TL;DR: This work proposes a non-adaptive construction of a sparse Φ comprising only the values 0 and 1; hence the computation of y involves only sums of subsets of the elements of x.
Abstract: Sudocodes are a new scheme for lossless compressive sampling and reconstruction of sparse signals. Consider a sparse signal x ∈ ℝ^N containing only K ≪ N non-zero values. Sudo-encoding computes the codeword via the linear matrix-vector multiplication y = Φx, with K < M ≪ N. We propose a non-adaptive construction of a sparse Φ comprising only the values 0 and 1; hence the computation of y involves only sums of subsets of the elements of x. An accompanying sudo-decoding strategy efficiently recovers x given y. Sudocodes require only M = O(K log(N)) measurements for exact reconstruction with worst-case computational complexity O(K log(K) log(N)). Sudocodes can be used as erasure codes for real-valued data and have potential applications in peer-to-peer networks and distributed data storage systems. They are also easily extended to signals that are sparse in arbitrary bases.
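A hedged sketch of the encoding step: with a sparse 0/1 matrix Φ, each entry of y is just a sum of a small subset of the entries of x. The row weight and the constant in M below are arbitrary illustrative choices, not the construction from the paper.

```python
# Sparse 0/1 measurement matrix: every measurement is the sum of a small
# subset of signal entries. Row weight and the constant in M are illustrative.
import numpy as np

rng = np.random.default_rng(5)
N, K = 1000, 10
M = int(4 * K * np.log(N))                   # O(K log N) measurements

x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)   # K-sparse signal

row_weight = 20
Phi = np.zeros((M, N))
for i in range(M):
    Phi[i, rng.choice(N, row_weight, replace=False)] = 1.0    # 0/1 rows

y = Phi @ x                                  # each y[i] is a sum of a subset of x
print("N =", N, " M =", M, " nonzeros per row =", row_weight)
```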

148 citations


Book ChapterDOI
02 Jul 2006
TL;DR: The results prove that there exists a single O(k log n) × n measurement matrix such that any such signal can be reconstructed from these measurements, with error at most O(1) times the worst case error for the class of such signals.
Abstract: In sparse approximation theory, the fundamental problem is to reconstruct a signal A∈ℝ^n from linear measurements ⟨A, ψi⟩ with respect to a dictionary of ψi's. Recently, there is focus on the novel direction of Compressed Sensing [9] where the reconstruction can be done with very few (O(k log n)) linear measurements over a modified dictionary if the signal is compressible, that is, its information is concentrated in k coefficients with the original dictionary. In particular, these results [9, 4, 23] prove that there exists a single O(k log n) × n measurement matrix such that any such signal can be reconstructed from these measurements, with error at most O(1) times the worst case error for the class of such signals. Compressed sensing has generated tremendous excitement both because of the sophisticated underlying mathematics and because of its potential applications. In this paper, we address outstanding open problems in Compressed Sensing. Our main result is an explicit construction of a non-adaptive measurement matrix and the corresponding reconstruction algorithm so that with a number of measurements polynomial in k, log n, 1/ε, we can reconstruct compressible signals. This is the first known polynomial time explicit construction of any such measurement matrix. In addition, our result improves the error guarantee from O(1) to 1+ε and improves the reconstruction time from poly(n) to poly(k log n). Our second result is a randomized construction of O(k polylog(n)) measurements that work for each signal with high probability and gives per-instance approximation guarantees rather than over the class of all signals. Previous work on Compressed Sensing does not provide such per-instance approximation guarantees; our result improves the best known number of measurements from prior work in other areas including Learning Theory [20, 21], Streaming algorithms [11, 12, 6] and Complexity Theory [1] for this case. Our approach is combinatorial. In particular, we use two parallel sets of group tests, one to filter and the other to certify and estimate; the resulting algorithms are quite simple to implement.

138 citations


Proceedings Article
01 Sep 2006
TL;DR: This work demonstrates that measurement noise is the crucial factor that dictates the number of measurements needed for reconstruction, and concisely captures the effect of measurement noise on the performance limits of signal reconstruction, thus making it possible to benchmark the performance of specific reconstruction algorithms.
Abstract: Compressed sensing is a new framework for acquiring sparse signals based on the revelation that a small number of linear projections (measurements) of the signal contain enough information for its reconstruction. The foundation of compressed sensing is built on the availability of noise-free measurements. However, measurement noise is unavoidable in analog systems and must be accounted for. We demonstrate that measurement noise is the crucial factor that dictates the number of measurements needed for reconstruction. To establish this result, we evaluate the information contained in the measurements by viewing the measurement system as an information theoretic channel. Combining the capacity of this channel with the rate-distortion function of the sparse signal, we lower bound the rate-distortion performance of a compressed sensing system. Our approach concisely captures the effect of measurement noise on the performance limits of signal reconstruction, thus making it possible to benchmark the performance of specific reconstruction algorithms.

122 citations


Proceedings ArticleDOI
01 Dec 2006
TL;DR: This paper investigates the utility of CS projection observations for signal classification (more specifically, m-ary hypothesis testing), and theoretical error bounds are derived and verified with several simulations.
Abstract: Compressive sampling (CS), also called compressed sensing, entails making observations of an unknown signal by projecting it onto random vectors. Recent theoretical results show that if the signal is sparse (or nearly sparse) in some basis, then with high probability such observations essentially encode the salient information in the signal. Further, the signal can be reconstructed from these "random projections," even when the number of observations is far less than the ambient signal dimension. The provable success of CS for signal reconstruction motivates the study of its potential in other applications. This paper investigates the utility of CS projection observations for signal classification (more specifically, m-ary hypothesis testing). Theoretical error bounds are derived and verified with several simulations.
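A toy version of m-ary hypothesis testing on random projections, using a nearest projected template rule purely for illustration; the signal models, dimensions, and noise level are assumptions, and this is not the paper's analysis.

```python
# m-ary hypothesis testing on random projections: decide by the nearest
# projected template (toy nearest-neighbor rule, small noise).
import numpy as np

rng = np.random.default_rng(6)
n, m, classes = 512, 30, 4

templates = rng.standard_normal((classes, n))     # one known signal per hypothesis
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

true_class = 2
y = Phi @ templates[true_class] + 0.05 * rng.standard_normal(m)

dists = np.linalg.norm(Phi @ templates.T - y[:, None], axis=0)
print("decided:", int(np.argmin(dists)), "true:", true_class)
```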

117 citations


Proceedings ArticleDOI
22 Mar 2006
TL;DR: This work shows that there are sharp thresholds on sparsity below which these methods will succeed and above which they fail; it evaluates those thresholds precisely and hints at several interesting applications.
Abstract: The ubiquitous least squares method for systems of linear equations returns solutions which typically have all non-zero entries. However, solutions with the least number of non-zeros allow for greater insight. An exhaustive search for the sparsest solution is intractable, NP-hard. Recently, a great deal of research showed that linear programming can find the sparsest solution for certain 'typical' systems of equations, provided the solution is sufficiently sparse. In this note we report recent progress determining conditions under which the sparsest solution to large systems is available by linear programming [1]-[3]. Our work shows that there are sharp thresholds on sparsity below which these methods will succeed and above which they fail; it evaluates those thresholds precisely and hints at several interesting applications.
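The linear-programming route mentioned above can be sketched as follows: min ||x||_1 subject to Ax = b is rewritten as an LP over the positive and negative parts of x. The problem sizes below are arbitrary.

```python
# Basis pursuit as a linear program: min ||x||_1 s.t. Ax = b,
# with x split into nonnegative parts u and v (x = u - v).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)
n, m, k = 120, 50, 6

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
b = A @ x_true

c = np.ones(2 * n)                          # objective: sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])                   # A(u - v) = b
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("recovery error:", np.linalg.norm(x_hat - x_true))
```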

Patent
25 Oct 2006
TL;DR: In this paper, the authors show that compressive measurements are in fact information scalable, allowing one to answer a broad spectrum of questions about a signal when provided only with a reduced set of measurements.
Abstract: The recently introduced theory of Compressive Sensing (CS) enables a new method for signal recovery from incomplete information (a reduced set of 'compressive' linear measurements), based on the assumption that the signal is sparse in some dictionary (See Fig. 4). Such compressive measurement schemes are desirable in practice for reducing the costs of signal acquisition, storage, and processing (See Fig.4, input signal x). However, the current CS framework considers only a certain task (signal recovery) and only in a certain model setting (sparsity). We show that compressive measurements are in fact information scalable, allowing one to answer a broad spectrum of questions about a signal when provided only with a reduced set of compressive measurements (See Fig.4, item 402). These questions range from complete signal recovery at one extreme down to a simple binary detection decision at the other. (Questions in between include, for example, estimation and classification.) We provide techniques such as a 'compressive matched filter' for answering several of these questions given the available measurements, often without needing to first reconstruct the signal (See Fig. 4, output signal y). In many cases, these techniques can succeed with far fewer measurements than would be required for full signal recovery, and such techniques can also be computationally more efficient. Based on additional mathematical insight, we discuss information scalable algorithms in several model settings, including sparsity (as in CS), but also in parametric or manifold-based settings and in model-free settings for generic statements of detection, classification, and estimation problems (See Fig. 4, item 404).

Dissertation
01 Jan 2006
TL;DR: This work proves several results about the associated SBL cost function that elucidate its general behavior and provides solid theoretical justification for using it to find maximally sparse representations, and demonstrates how a generalized form of SBL uniquely satisfies two minimal performance criteria directly linked to sparsity.
Abstract: Finding the sparsest or minimum ℓ0-norm representation of a signal given a (possibly) overcomplete dictionary of basis vectors is an important problem in many application domains, including neuroelectromagnetic source localization, compressed sensing, sparse component analysis, feature selection, image restoration/compression, and neural coding. Unfortunately, the required optimization is typically NP-hard, and so approximate procedures that succeed with high probability are sought. Nearly all current approaches to this problem, including orthogonal matching pursuit (OMP), basis pursuit (BP) (or the LASSO), and minimum ℓp quasi-norm methods, can be viewed in Bayesian terms as performing standard MAP estimation using a fixed, sparsity-inducing prior. In contrast, we advocate empirical Bayesian approaches such as sparse Bayesian learning (SBL), which use a parameterized prior to encourage sparsity through a process called evidence maximization. We prove several results about the associated SBL cost function that elucidate its general behavior and provide solid theoretical justification for using it to find maximally sparse representations. Specifically, we show that the global SBL minimum is always achieved at the maximally sparse solution, unlike the BP cost function, while often possessing a more limited constellation of local minima than comparable MAP methods which share this property. We also derive conditions, dependent on the distribution of the nonzero model weights embedded in the optimal representation, such that SBL has no local minima. Finally, we demonstrate how a generalized form of SBL, out of a large class of latent-variable models, uniquely satisfies two minimal performance criteria directly linked to sparsity. These results lead to a deeper understanding of the connections between various Bayesian-inspired strategies and suggest new sparse learning algorithms. Several extensions of SBL are also considered for handling sparse representations that arise in spatio-temporal settings and in the context of covariance component estimation. Here we assume that a small set of common features underlie the observed data collected over multiple instances. The theoretical properties of these SBL-based cost functions are examined and evaluated in the context of existing methods. The resulting algorithms display excellent performance on extremely large, ill-posed, and ill-conditioned problems in neuroimaging, suggesting a strong potential for impacting this field and others.
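For orientation, a compact sketch of a generic SBL/evidence-maximization loop (an EM-style update of per-coefficient prior variances) is given below; this is a textbook-style illustration under assumed dimensions and noise level, not the dissertation's full treatment.

```python
# Generic sparse Bayesian learning loop: posterior moments under a Gaussian
# prior with per-coefficient variances gamma, followed by the EM update of gamma.
import numpy as np

rng = np.random.default_rng(8)
n, m, k, sigma2 = 100, 40, 4, 1e-4

Phi = rng.standard_normal((m, n)) / np.sqrt(m)
w_true = np.zeros(n)
w_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = Phi @ w_true + np.sqrt(sigma2) * rng.standard_normal(m)

gamma = np.ones(n)                                  # hyperparameters of the prior
for _ in range(50):
    Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + np.diag(1.0 / gamma))
    mu = Sigma @ Phi.T @ y / sigma2                 # posterior mean of the weights
    gamma = np.maximum(mu ** 2 + np.diag(Sigma), 1e-10)   # EM update

print("k largest recovered weights:", np.sort(np.abs(mu))[-k:])
```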

Journal ArticleDOI
TL;DR: In this article, the problem of reconstructing a sparse signal from a limited number of linear measurements was considered and it was shown that the signal can be recovered with overwhelming probability when the number of measurements exceeds a certain threshold.
Abstract: We consider the problem of reconstructing a sparse signal $x^0\in\mathbb{R}^n$ from a limited number of linear measurements. Given $m$ randomly selected samples of $U x^0$, where $U$ is an orthonormal matrix, we show that $\ell_1$ minimization recovers $x^0$ exactly when the number of measurements exceeds \[ m\geq \mathrm{Const}\cdot\mu^2(U)\cdot S\cdot\log n, \] where $S$ is the number of nonzero components in $x^0$, and $\mu$ is the largest entry in $U$ properly normalized: $\mu(U) = \sqrt{n} \cdot \max_{k,j} |U_{k,j}|$. The smaller $\mu$, the fewer samples needed. The result holds for "most" sparse signals $x^0$ supported on a fixed (but arbitrary) set $T$. Given $T$, if the sign of $x^0$ for each nonzero entry on $T$ and the observed values of $Ux^0$ are drawn at random, the signal is recovered with overwhelming probability. Moreover, there is a sense in which this is nearly optimal since any method succeeding with the same probability would require just about this many samples.
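As a quick numerical illustration of the coherence quantity in this bound, the snippet below evaluates $\mu(U)$ for an orthonormal DFT matrix, the most favorable case with $\mu = 1$; the sparsity level chosen is arbitrary.

```python
# Coherence mu(U) = sqrt(n) * max |U_kj| for the orthonormal DFT matrix,
# where mu = 1 and the sampling requirement is lowest.
import numpy as np

n = 256
U = np.fft.fft(np.eye(n)) / np.sqrt(n)        # orthonormal DFT matrix
mu = np.sqrt(n) * np.abs(U).max()
print("mu(U) for the DFT:", mu)               # approximately 1

S = 8                                         # assumed number of nonzero components
print("measurements scale like mu^2 * S * log n =", mu ** 2 * S * np.log(n))
```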

Proceedings ArticleDOI
12 May 2006
TL;DR: This work proposes a notional system design that is highly underdetermined, yet still computationally invertible, and relies on recently-developed concepts in compressive sensing.
Abstract: A spectral imager provides a 3-D data cube in which the spatial information (2-D) of the image is complemented by spectral information (1-D) about each spatial location. Typically, these systems are operated in a fully-determined (or overdetermined) manner so that the measurements can be computationally inverted into a reliable estimate of the source. We propose a notional system design that is highly underdetermined, yet still computationally invertible. This approach relies on recently-developed concepts in compressive sensing. Because the number of required measurements is greatly reduced from traditional designs, the result is a faster and more economical sensor system.

Patent
21 Aug 2006
TL;DR: In this paper, the authors proposed a method for compressed sensing, which consists of forming a first compressed sensing matrix utilizing a first set of time indices corresponding to a first sampling rate, forming a second compressed sensing matrix utilizing a plurality of frequencies and a second set of time indices corresponding to a second sampling rate, forming a combined matrix from the two, and reconstructing at least a portion of the input signal utilizing the combined compressed sensing matrix.
Abstract: Embodiments of the present invention provide a method and apparatus for compressed sensing. The method generally comprises forming a first compressed sensing matrix utilizing a first set of time indices corresponding to a first sampling rate, forming a second compressed sensing matrix utilizing a plurality of frequencies and a second set of time indices corresponding to a second sampling rate, forming a combined compressed sensing matrix from the first compressed sensing matrix and the second compressed sensing matrix, and reconstructing at least a portion of the input signal utilizing the combined compressed sensing matrix. The first and second sampling rates are each less than the Nyquist sampling rate for the input signal.

Proceedings ArticleDOI
18 May 2006
TL;DR: This paper describes a compressive sensing strategy developed under the Compressive Optical MONTAGE Photography Initiative and demonstrates that the system can achieve up to 50% compression with conventional benchmarking images.
Abstract: This paper describes a compressive sensing strategy developed under the Compressive Optical MONTAGE Photography Initiative. Multiplex and multi-channel measurements are generally necessary for compressive sensing. In a compressive imaging system described here, static focal plane coding is used with multiple image apertures for non-degenerate multiplexing and multiple channel sampling. According to classical analysis, one might expect the number of pixels in a reconstructed image to equal the total number of pixels across the sampling channels, but we demonstrate that the system can achieve up to 50% compression with conventional benchmarking images. In general, the compression rate depends on the compression potential of an image with respect to the coding and decoding schemes employed in the system.

Proceedings ArticleDOI
14 May 2006
TL;DR: Preliminary theoretical and experimental evidence is provided that manifold-based signal structure can be preserved using small numbers of random projections, and Whitney's embedding theorem, which states that a K-dimensional manifold can be embedded in ℝ^(2K+1), is examined.
Abstract: Random projections have recently found a surprising niche in signal processing. The key revelation is that the relevant structure in a signal can be preserved when that signal is projected onto a small number of random basis functions. Recent work has exploited this fact under the rubric of compressed sensing (CS): signals that are sparse in some basis can be recovered from small numbers of random linear projections. In many cases, however, we may have a more specific low-dimensional model for signals in which the signal class forms a nonlinear manifold in ℝ^N. This paper provides preliminary theoretical and experimental evidence that manifold-based signal structure can be preserved using small numbers of random projections. The key theoretical motivation comes from Whitney's embedding theorem, which states that a K-dimensional manifold can be embedded in ℝ^(2K+1). We examine the potential applications of this fact. In particular, we consider the task of recovering a manifold-modeled signal from a small number of random projections. Thanks to our more specific model, we can recover certain signals using far fewer measurements than would be required using sparsity-driven CS techniques.
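A small experiment in the spirit of this result: pairwise distances between points on a simple one-dimensional signal manifold (circularly shifted pulses) are roughly preserved after projection onto a few random directions. The pulse shape, shifts, and projection count are assumptions.

```python
# Distances between points on a simple 1-D signal manifold (shifted pulses)
# are roughly preserved after projecting onto a few random directions.
import numpy as np

rng = np.random.default_rng(9)
n, m = 512, 24

pulse = np.exp(-0.5 * ((np.arange(n) - n // 2) / 8.0) ** 2)   # template pulse
shifts = np.arange(0, 200, 10)
manifold = np.stack([np.roll(pulse, s) for s in shifts])      # manifold samples

Phi = rng.standard_normal((m, n)) / np.sqrt(m)
proj = manifold @ Phi.T                                        # random projections

d_orig = np.linalg.norm(manifold[0] - manifold[-1])
d_proj = np.linalg.norm(proj[0] - proj[-1])
print("original distance:", d_orig, " projected distance:", d_proj)
```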

Proceedings ArticleDOI
14 May 2006
TL;DR: It is shown that for certain classes of piecewise constant signals and high SNR regimes both CS and AS are near-optimal, the first evidence that shows that compressive sampling, which is non-adaptive, cannot be significantly outperformed by any other method, even in the presence of noise.
Abstract: Compressive sampling (CS), or Compressed Sensing, has generated a tremendous amount of excitement in the signal processing community. Compressive sampling, which involves non-traditional samples in the form of randomized projections, can capture most of the salient information in a signal with a relatively small number of samples, often far fewer samples than required using traditional sampling schemes. Adaptive sampling (AS), also called Active Learning, uses information gleaned from previous observations (e.g., feedback) to focus the sampling process. Theoretical and experimental results have shown that adaptive sampling can dramatically outperform conventional (non-adaptive) sampling schemes. This paper compares the theoretical performance of compressive and adaptive sampling in noisy conditions, and it is shown that for certain classes of piecewise constant signals and high SNR regimes both CS and AS are near-optimal. This result is remarkable since it is the first evidence that shows that compressive sampling, which is non-adaptive, cannot be significantly outperformed by any other method (including adaptive sampling procedures), even in the presence of noise.

Proceedings ArticleDOI
22 Mar 2006
TL;DR: Experiments show that random filtering is effective at acquiring sparse and compressible signals and has the potential for implementation in analog hardware, and so it may have a role to play in new types of analog/digital converters.
Abstract: This paper discusses random filtering, a recently proposed method for directly acquiring a compressed version of a digital signal. The technique is based on convolution of the signal with a fixed FIR filter having random taps, followed by downsampling. Experiments show that random filtering is effective at acquiring sparse and compressible signals. This process has the potential for implementation in analog hardware, and so it may have a role to play in new types of analog/digital converters.

Proceedings ArticleDOI
14 May 2006
TL;DR: This paper presents a specific row-action method and provides extensive empirical evidence that it is an effective technique for signal reconstruction and offers several advantages over interior-point methods, including minimal storage and computational requirements, scalability, and robustness.
Abstract: Compressed Sensing uses a small number of random, linear measurements to acquire a sparse signal. Nonlinear algorithms, such as ℓ1 minimization, are used to reconstruct the signal from the measured data. This paper proposes row-action methods as a computational approach to solving the ℓ1 optimization problem. This paper presents a specific row-action method and provides extensive empirical evidence that it is an effective technique for signal reconstruction. This approach offers several advantages over interior-point methods, including minimal storage and computational requirements, scalability, and robustness.
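The row-action idea can be sketched with a Kaczmarz-type sweep combined with soft thresholding, shown below purely for illustration (it is not necessarily the specific method proposed in the paper); each step touches only a single row of the measurement matrix, which is why storage requirements are minimal.

```python
# Kaczmarz-type row-action sweep with soft thresholding (sparse Kaczmarz);
# each iteration touches a single row of Phi, so storage is minimal.
import numpy as np

rng = np.random.default_rng(10)
n, m, k = 200, 80, 5

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n))
y = Phi @ x_true

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

z = np.zeros(n)
x = np.zeros(n)
for it in range(20000):
    i = it % m                                   # cycle through the rows
    a = Phi[i]
    z += a * (y[i] - a @ x) / (a @ a)            # row action on the auxiliary variable
    x = soft(z, 1.0)                             # thresholding promotes sparsity

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```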

Proceedings ArticleDOI
05 May 2006
TL;DR: In this article, the authors consider compressive sensing in the context of optical spectroscopy and compare the fidelity of sampling and inference strategies over a family of spectral signals, and describe measurement constraints specific to optical spectrometers, inference models based on physical or statistical characteristics of the signals.
Abstract: We consider compressive sensing in the context of optical spectroscopy. With compressive sensing, the ratio between the number of measurements and the number of estimated values is less than one, without compromising the fidelity in estimation. A compressive sensing system is composed of a measurement subsystem that maps a signal to digital data and an inference algorithm that maps the data to a signal estimate. The inference algorithm exploits both the information captured in the measurement and certain a priori information about the signals of interest, while the measurement subsystem provides complementary, signal-specific information at the lowest sampling rate possible. Codesign of the measurement strategies, the model of a priori information, and the inference algorithm is the central problem of system design. This paper describes measurement constraints specific to optical spectrometers, inference models based on physical or statistical characteristics of the signals, as well as linear and nonlinear reconstruction algorithms. We compare the fidelity of sampling and inference strategies over a family of spectral signals.

Proceedings ArticleDOI
01 Oct 2006
TL;DR: It is shown that the minimum number of storage locations that a peer node has to connect to reconstruct the entire file with high probability can be significantly smaller than the total number of blocks that the file is broken into.
Abstract: In a peer-to-peer file distribution network, a large file is split into blocks residing in multiple storage locations. A peer node tries to retrieve the original file by downloading blocks from randomly chosen peers. We compare the performance of four storage strategies: uncoded, erasure coding, random linear coding, and random linear coding over coded blocks. We show that, in principle, random linear coding makes a better tradeoff between the storage requirement and decoding complexity. However, the sparsity of the file blocks is not fully exploited by random linear combinations of all original blocks. Motivated by the recent results from compressed sensing, we study the design tradeoff in random linear coding over coded blocks and propose an efficient decoding algorithm based on basis pursuit. We show that the minimum number of storage locations that a peer node has to connect to reconstruct the entire file with high probability can be significantly smaller than the total number of blocks that the file is broken into.

Proceedings ArticleDOI
17 Apr 2006
TL;DR: An ISP system which utilizes a near Infrared (NIR) Hadamard multiplexing imaging sensor and uses an ATR metric to send codes to the sensor in order to collect only the information relevant to the ATR problem, resulting in a multiple resolution hyperspectral cube.
Abstract: In this paper we present an information sensing system which integrates sensing and processing resulting in the direct collection of data which is relevant to the application. Broadly, integrated sensing and processing (ISP) considers algorithms that are integrated with the collection of data. That is, traditional sensor development tries to come up with the "best" sensor in terms of SNR, resolution, data rates, integration time, etc. and traditional algorithm development tasks might wish to optimize probability of detection, false alarm rate, class separability, etc. For a typical Automatic Target Recognition (ATR) problem, the goal of ISP is to field algorithms which "tell" the sensor what kind of data to collect next and the sensor alters its parameters to collect the "best" information in order that the algorithm performs optimally. We demonstrate an ISP system which utilizes a near Infrared (NIR) Hadamard multiplexing imaging sensor. This prototype sensor incorporates a digital mirror array (DMA) device in order to realize a Hadamard multiplexed imaging system. Specific Hadamard codes can be sent to the sensor to realize inner products of the underlying scene rather than the scene itself. The developed ISP algorithm uses an ATR metric to send codes to the sensor in order to collect only the information relevant to the ATR problem. The result is a multiple resolution hyperspectral cube with full resolution where targets are present and less than full resolution where there are no targets. Essentially, this is compressed sensing.
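A toy version of the Hadamard multiplexing idea, where each measurement is an inner product of the scene with one ±1 Hadamard code (realized optically by the micromirror array in the actual sensor); the scene and the subset of codes below are arbitrary.

```python
# Hadamard multiplexing: each reading is the inner product of the scene with
# one +/-1 Hadamard code; sending only selected codes gives fewer measurements.
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(11)
N = 64                                        # scene length (power of two)
scene = rng.random(N)

H = hadamard(N)                               # rows are +/-1 Hadamard codes
subset = rng.choice(N, 16, replace=False)     # codes actually sent to the sensor
y = H[subset] @ scene                         # multiplexed measurements

print("collected", y.size, "coded measurements of a", N, "element scene")
```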

Proceedings ArticleDOI
01 Oct 2006
TL;DR: The effect of the assumed number of non-zero taps, the length of the training sequence, and other parameters on the performance of one algorithm motivated by recent results in compressed sensing is studied.
Abstract: The estimation and equalization of highly sparse wideband channels with large delay spreads is a challenging problem. The optimal maximum likelihood solution of this problem is computationally prohibitive and we must resort to sub-optimal solutions. In this paper we study the effect of the assumed number of non-zero taps, the length of the training sequence and other parameters, on the performance of one such algorithm. We also discuss an algorithm motivated by recent results in compressed sensing, where the dimension of the problem is reduced by projecting the received data on a relatively low dimensional subspace. The subspace is randomly chosen and does not assume any prior knowledge of the channel.

Proceedings ArticleDOI
04 Sep 2006
TL;DR: A new approach for sparse decomposition is introduced, based on a geometrical interpretation of sparsity, that performs nearly as well as LP, provided that the average number of active sources at each time instant is less than unity.
Abstract: We introduce a new approach for sparse decomposition, based on a geometrical interpretation of sparsity. By sparse decomposition we mean finding sufficiently sparse solutions of underdetermined linear systems of equations. This will be discussed in the context of Blind Source Separation (BSS). Our problem is then underdetermined BSS where there are fewer mixtures than sources. The proposed algorithm is based on minimizing a family of quadratic forms, each measuring the distance of the solution set of the system to one of the coordinate subspaces (i.e. coordinate axes, planes, etc.). The performance of the method is then compared to the minimal 1-norm solution, obtained using linear programming (LP). It is observed that the proposed algorithm, in its simplest form, performs nearly as well as LP, provided that the average number of active sources at each time instant is less than unity. The computational efficiency of this simple form is much higher than LP. For less sparse sources, performance gains over LP may be obtained at the cost of increased complexity, which will slow the algorithm at higher dimensions. This suggests that LP is still the algorithm of choice for high-dimensional moderately-sparse problems. The advantage of our algorithm is to provide a trade-off between complexity and performance.

Proceedings ArticleDOI
04 May 2006
TL;DR: It is shown that for certain classes of piecewise constant signals and high SNR regimes both CS and AS are near optimal, the first evidence that shows that compressive sampling, which is non-adaptive, cannot be significantly outperformed by any other method (including adaptive sampling procedures), even in the presence of noise.
Abstract: Compressive sampling (CS), or Compressed Sensing, has generated a tremendous amount of excitement in the signal processing community. Compressive sampling, which involves non-traditional samples in the form of randomized projections, can capture most of the salient information in a signal with a relatively small number of samples, often far fewer samples than required using traditional sampling schemes. Adaptive sampling (AS), also called Active Learning, uses information gleaned from previous observations (e.g., feedback) to focus the sampling process. Theoretical and experimental results have shown that adaptive sampling can dramatically outperform conventional (non-adaptive) sampling schemes. This paper compares the theoretical performance of compressive and adaptive sampling for regression in noisy conditions, and it is shown that for certain classes of piecewise constant signals and high SNR regimes both CS and AS are near optimal. This result is remarkable since it is the first evidence that shows that compressive sampling, which is non-adaptive, cannot be significantly outperformed by any other method (including adaptive sampling procedures), even in the presence of noise. The performance of CS schemes for signal detection is also investigated.

Proceedings ArticleDOI
TL;DR: This paper motivates the use of Compressive Sampling for imaging, presents theory predicting reconstruction error rates, and demonstrates its performance in electronic imaging with an example.
Abstract: Compressive Sampling, or Compressed Sensing, has recently generated a tremendous amount of excitement in the image processing community. Compressive Sampling involves taking a relatively small number of non-traditional samples in the form of projections of the signal onto random basis elements or random vectors (random projections). Recent results show that such observations can contain most of the salient information in the signal. It follows that if a signal is compressible in some basis, then a very accurate reconstruction can be obtained from these observations. In many cases this reconstruction is much more accurate than is possible using an equivalent number of conventional point samples. This paper motivates the use of Compressive Sampling for imaging, presents theory predicting reconstruction error rates, and demonstrates its performance in electronic imaging with an example.

Proceedings ArticleDOI
TL;DR: A camera is designed by combining a micromirror array with a single optical sensor and exploiting compressed sensing based on projections onto a white-noise basis, and a practical image/video camera based on this concept is developed and realized.
Abstract: We design a camera by combining a micromirror array with a single optical sensor and exploiting compressed sensing based on projections onto a white-noise basis. A practical image/video camera based on this concept is developed and realized.