
Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: In this paper, a greedy algorithm called Orthogonal Matching Pursuit (OMP) was proposed to recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
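The greedy loop is easy to sketch. The pure-Python toy below is illustrative only: it uses a hand-picked unit-norm dictionary rather than the paper's random Gaussian measurements, and a naive normal-equations least-squares step. It shows the two moves OMP repeats: pick the atom most correlated with the residual, then re-fit all chosen atoms jointly.

```python
def _lstsq(A, y):
    """Solve min ||A c - y||_2 via the normal equations (Gaussian elimination)."""
    n, k = len(A), len(A[0])
    G = [[sum(A[i][p] * A[i][q] for i in range(n)) for q in range(k)]
         for p in range(k)]
    b = [sum(A[i][p] * y[i] for i in range(n)) for p in range(k)]
    for col in range(k):                      # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(G[r][col]))
        G[col], G[piv] = G[piv], G[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = G[r][col] / G[col][col]
            for cc in range(col, k):
                G[r][cc] -= f * G[col][cc]
            b[r] -= f * b[col]
    c = [0.0] * k
    for r in range(k - 1, -1, -1):            # back substitution
        c[r] = (b[r] - sum(G[r][q] * c[q] for q in range(r + 1, k))) / G[r][r]
    return c


def omp(Phi, y, m):
    """Recover an m-sparse coefficient vector x with y = Phi x (Phi: list of rows)."""
    n, d = len(Phi), len(Phi[0])
    residual, support, coeffs = list(y), [], []
    for _ in range(m):
        # Greedy step: the atom most correlated with the current residual.
        corr = [abs(sum(Phi[i][j] * residual[i] for i in range(n)))
                for j in range(d)]
        best = max(range(d), key=lambda j: corr[j])
        if best not in support:
            support.append(best)
        # Refit jointly over all selected atoms, then update the residual.
        coeffs = _lstsq([[Phi[i][j] for j in support] for i in range(n)], y)
        approx = [sum(Phi[i][support[k]] * coeffs[k] for k in range(len(support)))
                  for i in range(n)]
        residual = [y[i] - approx[i] for i in range(n)]
    x = [0.0] * d
    for k, j in enumerate(support):
        x[j] = coeffs[k]
    return x


# Toy example: y mixes atom 0 (weight 2) with atom 5 (weight 3).
Phi = [[1, 0, 0, 0, 0.5,  0.5],
       [0, 1, 0, 0, 0.5, -0.5],
       [0, 0, 1, 0, 0.5,  0.5],
       [0, 0, 0, 1, 0.5, -0.5]]
y = [3.5, -1.5, 1.5, -1.5]
x_hat = omp(Phi, y, 2)    # → [2.0, 0.0, 0.0, 0.0, 0.0, 3.0]
```

Because the joint least-squares refit makes the residual orthogonal to every selected atom, no atom is ever picked twice; that refit is what distinguishes OMP from plain matching pursuit.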


Citations
Journal ArticleDOI
TL;DR: In this paper, the authors derive the field equations of a general circular array to show that the field of an imperfect UCA (IUCA) is a superposition of OAM modes caused by polluted mode orthogonality, and propose a reduced-constraints method that lets the IUCA achieve high-purity, low-loss OAM modes.
Abstract: Uniform circular array (UCA) is a pioneering approach to generate radio twisted beams carrying orbital angular momentum (OAM). However, faulty antenna elements in a UCA could render the array into an imperfect UCA (IUCA), consequently creating distortion for OAM-based communication or radar systems. To analyze the effects of an IUCA on generating OAM, elementary wave function and wave transformation are derived to obtain field equations of a general circular array. Further derivation of IUCA shows that the field of an IUCA is a superposition of OAM modes, which is caused by polluted orthogonalities of OAM modes. To achieve pure OAM modes, the orthogonal matching pursuit algorithm, the convex optimization tool, and the analytical solution are employed to design a feeding network to retrieve the orthogonal property at the cost of strong gain loss. Then, the reduced constraints method is proposed for the IUCA to achieve high-purity and low-loss OAM mode. Finally, four circular arrays with different shapes are explored for limited installation space to generate twisted beams.

58 citations

Journal ArticleDOI
TL;DR: A high-resolution staring imaging technique named radar coincidence imaging (RCI) is investigated; it achieves super-resolution in azimuth, breaking through the Rayleigh resolution limit of the antenna array by modulating the wavefront of the transmissions.
Abstract: In radar sensing and imaging, the azimuth resolution is a main concern, which is limited by the antenna aperture, and as a result the targets within the beam cannot be distinguished. By enhancing the diversity of radiation, radar can obtain additional information for resolution. In this paper, a high-resolution staring imaging technique named radar coincidence imaging (RCI) is investigated. Originated from the classical optical coincidence imaging, the RCI captures super-resolution in azimuth, which breaks through the Rayleigh resolution limitation of antenna array by modulating the wavefront of transmissions. The spatial resolution of RCI is defined by the spatial correlation function of the stochastic radiation field. A scheme of RCI with a stochastic frequency modulated array using frequency-hopping waveforms is proposed, while the imaging model is established. Three image reconstruction algorithms, i.e. the pseudo-inverse algorithm, Tikhonov regularization method, and sparse reconstruction algorithm, are investigated and compared with respect to targets of different complexity. Performance analysis of these reconstruction methods in the presence of noise is presented by the relative imaging error. Finally, a typical RCI system based on the digital transmitter/receiver array is established. Outfield experiment results verify the effectiveness of the RCI.

58 citations

Proceedings ArticleDOI
01 Jan 2019
TL;DR: A scalable Locality-Constrained Projective Dictionary Learning (LC-PDL) method is proposed for efficient representation and classification; it incorporates a locality constraint on atoms into the dictionary learning procedure to keep local information and obtains the codes of samples over each class separately.
Abstract: We propose a novel structured discriminative block-diagonal dictionary learning method, referred to as scalable Locality-Constrained Projective Dictionary Learning (LC-PDL), for efficient representation and classification. To improve scalability by saving both training and testing time, LC-PDL aims at learning a structured discriminative dictionary and a block-diagonal representation without using the costly l0/l1-norm. It also avoids the extra, time-consuming sparse reconstruction process with the well-trained dictionary for new samples that many existing models require. More importantly, LC-PDL avoids using the complementary data matrix to learn the sub-dictionary over each class. To enhance performance, we incorporate a locality constraint on atoms into the DL procedure to keep local information and obtain the codes of samples over each class separately. A block-diagonal discriminative approximation term is also derived to learn a discriminative projection that bridges data with their codes by extracting the special block-diagonal features from data, ensuring the approximation coefficients associate clearly with their label information. A robust multiclass classifier is then trained over the extracted block-diagonal codes for accurate label prediction. Experimental results verify the effectiveness of our algorithm.

58 citations

Journal ArticleDOI
TL;DR: This paper casts the Takagi-Sugeno (T-S) fuzzy system identification into a hierarchical sparse representation problem, where the goal is to establish a T-S fuzzy system with a minimal number of fuzzy rules, which simultaneously have a minimum number of nonzero consequent parameters.
Abstract: “The curse of dimensionality” has become a significant bottleneck for fuzzy system identification and approximation. In this paper, we cast Takagi-Sugeno (T-S) fuzzy system identification as a hierarchical sparse representation problem, where our goal is to establish a T-S fuzzy system with a minimal number of fuzzy rules that simultaneously have a minimal number of nonzero consequent parameters. The proposed method, called hierarchical sparse fuzzy inference systems (H-sparseFIS), explicitly takes into account the block-structured information in the T-S fuzzy model and works in an intuitive way: first, the initial fuzzy rule antecedents are extracted automatically by an iterative vector quantization clustering method; then, with block-structured sparse representation, the most important fuzzy rules are selected and the redundant ones are eliminated for better model accuracy and generalization performance; finally, we simplify the consequents of the selected fuzzy rules with sparse regularization so that more consequent parameters are driven toward zero. The algorithm is efficient and shows good performance on well-known benchmark datasets and real-world problems.

58 citations

Journal ArticleDOI
TL;DR: This letter proposes a representation of the antenna array with virtual elements (AAVE), formed by appending additional virtual antenna elements to the original antenna array, and develops an efficient angle estimation algorithm using compressive sensing theory.
Abstract: In this letter, we focus on the efficient channel estimation problem for millimeter wave (MMW) systems with massive antenna arrays and RF constraints, aiming at fast, high-resolution angle-of-arrival/angle-of-departure (AoA/AoD) estimation. We first propose a representation of the antenna array with virtual elements (AAVE), formed by appending additional virtual antenna elements to the original antenna array. On the basis of the AAVE structure, we explore the channel sparsity in the angular domain and develop an efficient angle estimation algorithm using compressive sensing theory. We then propose a training design and prove that the sensing matrix in the proposed training guarantees accurate detection of angles with high probability. Both analytical and simulation results show that, without changing the physical antenna array, the proposed approach achieves not only lower overhead but also significantly higher resolution in angle estimation compared with existing algorithms.

58 citations

References

Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an l_p ball for 0 < p <= 1.

18,609 citations

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
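The l1 principle can be made concrete in a tiny case. The pure-Python sketch below uses toy numbers and is not a real BP solver (which, as the abstract describes, solves a linear program); it exploits the fact that when the dictionary has exactly one more atom than the signal dimension, the decompositions Phi c = s form a line c(t) = c0 + t*v, and the l1 norm, being convex and piecewise linear in t, attains its minimum at a breakpoint where some coefficient crosses zero.

```python
def bp_line_search(c0, v):
    """Minimize ||c0 + t*v||_1 over t exactly: the objective is convex and
    piecewise linear, so a minimum lies at a breakpoint where a coefficient
    of c(t) = c0 + t*v hits zero."""
    breakpoints = [-c0j / vj for c0j, vj in zip(c0, v) if vj != 0.0]

    def l1(t):
        return sum(abs(c0j + t * vj) for c0j, vj in zip(c0, v))

    t_best = min(breakpoints, key=l1)
    return [c0j + t_best * vj for c0j, vj in zip(c0, v)]


# Dictionary: e1, e2, and the unit-norm atom (0.6, 0.8); signal s = (0.6, 0.8).
# Writing s = 0.6*e1 + 0.8*e2 costs l1 = 1.4, while s = 1 * atom3 costs 1.
# Solutions of Phi c = s: c(t) = (0.6, 0.8, 0) + t * (-0.6, -0.8, 1).
c = bp_line_search([0.6, 0.8, 0.0], [-0.6, -0.8, 1.0])   # → [0.0, 0.0, 1.0]
```

BP prefers the single redundant atom over the two-basis-vector decomposition, illustrating how the smallest-l1 superposition favors sparsity.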

9,950 citations

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. They derive a signal energy distribution in the time-frequency plane, which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. They compare a matching pursuit decomposition with a signal expansion over an optimized wavelet packet orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (IEEE Trans. Inform. Theory, vol. 38, Mar. 1992).
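The core iteration is short enough to sketch directly. The pure-Python toy below (a hand-picked redundant dictionary of unit-norm atoms in R^2, chosen for illustration) repeats the two steps of matching pursuit: correlate every atom with the residual, then subtract the projection onto the single best-matching atom.

```python
def matching_pursuit(atoms, signal, steps):
    """Greedy decomposition of `signal` over a redundant dictionary of
    unit-norm `atoms`, returning (atom index, coefficient) pairs."""
    residual = list(signal)
    expansion = []
    for _ in range(steps):
        # Pick the atom most correlated with the current residual.
        best_j, best_c = 0, 0.0
        for j, atom in enumerate(atoms):
            corr = sum(a * r for a, r in zip(atom, residual))
            if abs(corr) > abs(best_c):
                best_j, best_c = j, corr
        expansion.append((best_j, best_c))
        # Subtract the projection onto that single atom; unlike OMP,
        # earlier coefficients are never revisited.
        residual = [r - best_c * a for r, a in zip(residual, atoms[best_j])]
    return expansion, residual


# Redundant dictionary in R^2: two basis vectors plus the atom (0.6, 0.8).
atoms = [[1.0, 0.0], [0.0, 1.0], [0.6, 0.8]]
expansion, residual = matching_pursuit(atoms, [1.0, 1.0], 2)
# First pick: atom 2 with coefficient 1.4; second pick: atom 0.
```

Because the atoms are not orthogonal, the residual does not vanish after two steps; the energy of the expansion coefficients nonetheless decomposes the signal, which is the adaptive-representation property the abstract describes.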

9,380 citations

Journal ArticleDOI
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.

7,828 citations