Open Access Journal Article

Fast and Efficient Compressive Sensing Using Structurally Random Matrices

TL;DR: Numerical simulation results verify the validity of the theory and illustrate the promising potential of the proposed sensing framework, called the Structurally Random Matrix (SRM), which has theoretical sensing performance comparable to that of completely random sensing matrices.
Abstract
This paper introduces a new framework to construct fast and efficient sensing matrices for practical compressive sensing, called the Structurally Random Matrix (SRM). In the proposed framework, we pre-randomize the sensing signal by scrambling its sample locations or flipping its sample signs, then fast-transform the randomized samples, and finally subsample the resulting transform coefficients to obtain the final sensing measurements. SRM is highly relevant for large-scale, real-time compressive sensing applications as it has fast computation and supports block-based processing. In addition, we can show that SRM has theoretical sensing performance comparable to that of completely random sensing matrices. Numerical simulation results verify the validity of the theory and illustrate the promising potential of the proposed sensing framework.


Fast and Efficient Compressive Sensing Using Structurally Random Matrices
Thong T. Do, Student Member, IEEE, Lu Gan, Member, IEEE, Nam Nguyen, and Trac D. Tran, Member, IEEE
This work has been supported in part by the National Science Foundation under Grant CCF-0728893.
Thong T. Do, Nam Nguyen and Trac D. Tran are with the Johns Hopkins University, Baltimore, MD, 21218 USA.
Lu Gan is with Brunel University, London, UK.
Abstract
This paper introduces a fast and efficient framework for practical compressive sensing. Our framework is mainly based on a novel design called the Structurally Random Matrix (SRM). It is highly promising for large-scale, real-time compressive sensing applications because it can be realized as a product of simple and fast operators; thus, there is no need to store the sensing matrix explicitly. The introduced framework is flexible and provides relevant features such as universality, block-based processing, and hardware friendliness for analog and optical domain implementation. Despite all of these practical advantages, the framework can be shown to approach optimal performance, i.e., the number of measurements required for exact signal reconstruction is near the minimum bound. Simulation results with several interesting SRMs under various practical settings are also presented to verify the validity of the theory as well as to illustrate the promising potential of the proposed framework.
Index Terms
compressed sensing, compressive sensing, random projection, sparse reconstruction, fast and efficient algorithm
I. INTRODUCTION
Compressed sensing (CS) [1], [2] has attracted a lot of interest over the past few years as a revolutionary signal sampling paradigm. Suppose that x is a length-N signal. It is said to be K-sparse (or compressible) if x can be well approximated using only K ≪ N coefficients under some linear transform:

x = Ψα,

where Ψ is the sparsifying basis and α is the transform coefficient vector that has K (significant) nonzero entries.
According to the CS theory, such a signal can be acquired through the following random linear projection:

y = Φx + e,

where y is the sampled vector with M ≪ N data points, Φ represents an M × N random matrix, and e is the acquisition noise. The CS framework is attractive as it implies that x can be faithfully recovered from only M = O(K log N) measurements, suggesting the potential of significant cost reduction in digital data acquisition.
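As a concrete illustration of the acquisition model above, the following sketch simulates measuring a K-sparse signal with a Gaussian sensing matrix; the values of N, K, the constant in M = O(K log N), and the choice Ψ = I are illustrative assumptions, not settings from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 512, 10                       # signal length and sparsity (illustrative)
M = int(4 * K * np.log(N))           # M = O(K log N), constant 4 assumed

# K-sparse signal in the identity basis (Psi = I for simplicity)
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # i.i.d. Gaussian sensing matrix
e = 0.01 * rng.standard_normal(M)                # small acquisition noise
y = Phi @ x + e                                  # M << N linear measurements
```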
While the sampling process is simply a random linear projection, the reconstruction that finds the sparsest signal from the received measurements is a highly non-linear process. More precisely, the reconstruction algorithm solves the ℓ_1-minimization of a transform coefficient vector:

min ‖α‖_1   s.t.   y = ΦΨα.
Linear programming [1], [2] and other convex optimization algorithms [3], [4], [5] have been proposed to solve this ℓ_1-minimization. Furthermore, there also exists a family of greedy pursuit algorithms [6], [7], [8], [9], [10] offering another promising option for sparse reconstruction. These algorithms all need to compute ΦΨ and (ΦΨ)^T multiple times. Thus, the computational complexity of the system depends on the structure of the sensing matrix Φ and its transpose Φ^T.
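Because the solvers only ever need products with ΦΨ and (ΦΨ)^T, a structured sensing matrix can be handed to them as a pair of fast routines instead of a stored array. Below is a minimal sketch using SciPy's LinearOperator; the four callables for Φ, Φ^T, Ψ, and Ψ^T are assumed placeholders, not code from the paper.

```python
from scipy.sparse.linalg import LinearOperator

def make_sensing_operator(phi_mv, phi_rmv, psi_mv, psi_rmv, M, N):
    """Expose A = Phi Psi and A^T through fast matrix-free callables."""
    return LinearOperator(
        (M, N),
        matvec=lambda a: phi_mv(psi_mv(a)),     # A @ alpha = Phi(Psi(alpha))
        rmatvec=lambda y: psi_rmv(phi_rmv(y)),  # A.T @ y  = Psi^T(Phi^T(y))
    )
```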
Preferably, the sensing matrix Φ should be highly incoherent with the sparsifying basis Ψ, i.e., rows of Φ should not have any sparse representation in the basis Ψ. Incoherence between two matrices is mathematically quantified by the mutual coherence coefficient [11].
Definition I.1. The mutual coherence of an N × N orthonormal matrix Φ and another N × N orthonormal matrix Ψ is defined as:

μ(Φ, Ψ) = max_{1≤i,j≤N} |⟨Φ_i, Ψ_j⟩|,

where Φ_i are the rows of Φ and Ψ_j are the columns of Ψ, respectively.
If Φ and Ψ are two orthonormal matrices, ‖ΦΨ_j‖_2 = ‖Ψ_j‖_2 = 1. Thus, it is easy to see that for two orthonormal matrices Φ and Ψ, 1/√N ≤ μ ≤ 1. Incoherence implies that the mutual coherence, i.e., the maximum magnitude of the entries of the product matrix ΦΨ, is relatively small. Two matrices are completely incoherent if their mutual coherence coefficient approaches the lower bound value of 1/√N.
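These bounds are easy to verify numerically. The sketch below (an illustration we add here, not an experiment from the paper) computes μ(Φ, Ψ) as the largest entry magnitude of ΦΨ for an orthonormal DCT matrix against the identity basis:

```python
import numpy as np
from scipy.fft import dct

N = 64
Phi = dct(np.eye(N), axis=0, norm='ortho')   # orthonormal DCT matrix
Psi = np.eye(N)                              # identity sparsifying basis

mu = np.max(np.abs(Phi @ Psi))               # mutual coherence mu(Phi, Psi)
print(1 / np.sqrt(N), mu)                    # 1/sqrt(N) <= mu <= 1 holds;
                                             # for the DCT, mu = sqrt(2/N)
```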
A popular family of sensing matrices is a random projection, or a random matrix of i.i.d. random variables drawn from a sub-Gaussian distribution such as Gaussian or Bernoulli [12], [13]. This family of sensing matrices is well known because it is universally incoherent with all sparsifying bases. For example, if Φ is a random matrix with Gaussian i.i.d. entries and Ψ is an arbitrary orthonormal sparsifying basis, the sensing matrix in the transform domain ΦΨ is also a Gaussian i.i.d. matrix. The universality property of a sensing matrix is important because it enables us to sense a signal directly in its original domain without significant loss of sensing efficiency and without any other prior knowledge. In addition, it can be shown that random projection approaches the optimal sensing performance of M = O(K log N).

However, it is quite costly to realize random matrices in practical sensing applications, as they require very high computational complexity and huge memory buffering due to their completely unstructured nature [14]. For example, to process a 512 × 512 image with 64K measurements (i.e., 25% of the original sampling rate), a Bernoulli random matrix requires nearly gigabytes of storage and giga-flop operations, which makes both the sampling and recovery processes very expensive and, in many cases, unrealistic.
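The storage figure can be sanity-checked with a quick back-of-the-envelope calculation (our arithmetic, assuming dense storage of the matrix):

```python
N = 512 * 512        # 262,144 pixels
M = 64 * 1024        # 65,536 measurements, i.e., 25% of N
entries = M * N      # ~1.7e10 Bernoulli entries

print(entries / 8 / 2**30)   # ~2.0 GiB even when packed at 1 bit per entry
```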
Another class of sensing matrices is a uniformly random subset of rows of an orthonormal matrix, of which the partial Fourier matrix (or the partial FFT) is a special case [13], [14]. While the partial FFT is well known for having a fast and efficient implementation, it only works well in the transform domain, i.e., when the sparsifying basis is the identity matrix. More specifically, it is shown in [14, Theorem 1.1] that the minimal number of measurements required for exact recovery depends on the incoherence of Φ and Ψ:

M = O(μ_n² K log N),   (1)

where μ_n is the normalized mutual coherence, μ_n = √N · μ, and 1 ≤ μ_n ≤ √N. With many well-known sparsifying bases such as wavelets, this mutual coherence coefficient might be large, resulting in performance loss.
Another approach is to design a sensing matrix to be incoherent with a given sparsifying basis. For example, noiselets are designed in [15] to be incoherent with the Haar wavelet basis, i.e., μ_n = 1 when Φ is the noiselet transform and Ψ is the Haar wavelet basis. Noiselets also have a low-complexity implementation of O(N log N), although it is unknown whether noiselets are also incoherent with other bases.
II. COMPRESSIVE SENSING WITH STRUCTURALLY RANDOM MATRICES
A. Overview
One of the remaining challenges for CS in practice is to design a CS framework that has the following features:
• Optimal or near-optimal sensing performance: the number of measurements for exact recovery is almost minimal, i.e., on the order of O(K log N);
• Universality: sensing performance is equally good with all sparsifying bases;
• Low complexity and fast implementation that can support block-based processing: this is necessary for large-scale, real-time sensing applications;
• Easy and cheap implementation in the hardware and optics domains: preferably, entries of the sensing matrix should only take values in the set {0, 1, −1}.
In this paper, we propose a framework that aims to satisfy the above wish list. Lying at the heart of our framework is the concept of the Structurally Random Matrix (SRM), which is defined as a product of three matrices:

Φ = √(N/M) DFR,   (2)

where:
• R is an N × N matrix that is either a uniform random permutation matrix or a diagonal random matrix whose diagonal entries R_ii are i.i.d. Bernoulli random variables with identical distribution P(R_ii = ±1) = 1/2. A uniformly random permutation matrix scrambles the signal's sample locations globally, while a diagonal matrix of Bernoulli random variables flips the signal's sample signs locally. Hence, we often refer to the former as the global randomizer and to the latter as the local randomizer.
• F is an N × N orthonormal matrix that, in practice, is selected to be fast computable, such as the popular fast transforms FFT, DCT, WHT, or their block-diagonal versions. The purpose of the matrix F is to spread the information (or energy) of the signal's samples over all measurements.
• D is an M × N subsampling matrix/operator. The operator D selects a random subset of rows of the matrix FR. If the probability of a row being selected is M/N, the number of rows selected is M on average. In matrix representation, D is simply a random subset of M rows of the N × N identity matrix. The scale coefficient √(N/M) normalizes the transform so that the energy of the measurement vector is almost the same as that of the input signal vector.
The proposed sensing algorithm can be described step by step as follows:
Step 1 (Signal pre-randomization): Randomize the target signal by either flipping its sample signs or uniformly permuting its sample locations. This step corresponds to multiplying the signal with the matrix R.
Step 2 (Signal transform): Apply a fast transform F to the randomized signal.
Step 3 (Signal subsampling): Randomly pick M measurements out of the N transform coefficients. This step corresponds to multiplying the transform coefficients with the matrix D.
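The three steps map directly onto fast operators. Below is a hedged sketch of one SRM configuration, using the local randomizer for R, an orthonormal DCT standing in for F, and uniform row selection for D; this is an illustration under those assumptions, not the authors' implementation (in particular, sampling the rows without replacement is a slight simplification of the i.i.d. row-selection model above).

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(1)
N, M = 1024, 256

signs = rng.choice([-1.0, 1.0], size=N)        # R: diagonal Bernoulli +/-1 entries
rows = rng.choice(N, size=M, replace=False)    # D: random subset of M row indices

def srm_sense(x):
    """y = sqrt(N/M) * D F R x, computed with O(N log N) operators."""
    randomized = signs * x                     # Step 1: flip sample signs (R)
    spread = dct(randomized, norm='ortho')     # Step 2: fast transform (F)
    return np.sqrt(N / M) * spread[rows]       # Step 3: subsample and scale (D)

x = rng.standard_normal(N)
y = srm_sense(x)                               # M measurements, no stored matrix
```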
A conventional CS reconstruction algorithm is then employed to recover the transform coefficient vector α by solving the ℓ_1-minimization:

α̂ = argmin ‖α‖_1   s.t.   y = ΦΨα.   (3)
Finally, the signal is recovered as x̂ = Ψα̂. The framework can achieve perfect reconstruction if x̂ = x.
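For prototyping, (3) can be handed to a generic convex solver. The sketch below uses the third-party cvxpy package (an assumed dependency, with Φ and Ψ as explicit matrices purely for brevity); it is not the reconstruction code used in the paper.

```python
import cvxpy as cp

def l1_recover(Phi, Psi, y):
    """Solve min ||alpha||_1 s.t. y = Phi Psi alpha, then return x = Psi alpha."""
    alpha = cp.Variable(Psi.shape[1])
    problem = cp.Problem(cp.Minimize(cp.norm1(alpha)),
                         [(Phi @ Psi) @ alpha == y])
    problem.solve()
    return Psi @ alpha.value
```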
To the best of our knowledge, the proposed sensing algorithm is distinct from currently existing methods such as random projection [16], random filters [17], structured Toeplitz matrices [18], and random convolution [19] via the step of pre-randomization. The main idea of this step is to deliberately scramble the structure of the signal, converting the signal to be sampled into a white-noise-like one. Detailed analysis in the following sections will show that pre-randomization is necessary for obtaining universally incoherent sensing. The intuition behind this pre-randomization strategy is that scrambling a signal into a white-noise-like form enables the sensing process to be independent of the signal's sparsifying basis.
The remainder of the paper is organized as follows. We first discuss the incoherence between SRMs and sparsifying transforms in Section III. More specifically, Section III-A gives a rough intuition of why SRM could work as well as a random Gaussian matrix. Detailed quantitative analysis of the incoherence of SRM with the local randomizer and with the global randomizer is presented in Section III-B and Section III-C, respectively. Based on these incoherence results, the theoretical performance of the proposed framework is analyzed in Section IV, followed by experimental validation in Section V. Finally, Section VI concludes the paper with a detailed discussion of the practical advantages of the proposed framework and of its relationship to other related works.
B. Notations
We reserve a bold letter for a vector, a capital and bold letter for a matrix, a capital and bold letter with one
sub-index for a row or a column of a matrix and a capital letter with two sub-indices for an entry of a matrix.
We often employ x ∈ R^N for the input signal, y ∈ R^M for the measurement vector, Φ ∈ R^{M×N} for the sensing matrix, Ψ ∈ R^{N×N} for the sparsifying matrix, and α ∈ R^N for the transform coefficient vector (x = Ψα). We use the notation supp(z) to indicate the index set (or coordinate set) of the nonzero entries of the vector z. Occasionally, we also use T to refer to this index set of nonzero entries (i.e., T = supp(z)). In this case, z_T denotes the portion of the vector z indexed by the set T, and A_T denotes the submatrix of A whose columns are indexed by the set T.
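As a quick illustration of this indexing notation (a hypothetical NumPy snippet we add for clarity, not part of the paper):

```python
import numpy as np

z = np.array([0.0, 1.5, 0.0, -2.0, 0.0])
T = np.flatnonzero(z)              # supp(z): indices of nonzero entries -> [1, 3]
z_T = z[T]                         # portion of z indexed by T -> [1.5, -2.0]

A = np.arange(20.0).reshape(4, 5)
A_T = A[:, T]                      # submatrix of A whose columns are indexed by T
```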
Let S_ij and F_ij be the entries at the i-th row and j-th column of AΨ and F, respectively; let R_kk be the k-th entry on the diagonal of the diagonal matrix R; and let A_i and Ψ_j be the i-th row of A and the j-th column of Ψ, respectively.
In addition, we also employ the following notations:

References

• Compressed sensing. TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).

• Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. TL;DR: In this paper, the authors considered the model problem of reconstructing an object from incomplete frequency samples and showed that, with probability at least 1 − O(N^(−M)), f can be reconstructed exactly as the solution to the ℓ_1-minimization problem.

• Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit. TL;DR: It is demonstrated theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.

• Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case. TL;DR: In this paper, a greedy algorithm called orthogonal matching pursuit (OMP) was proposed to recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.

• Decoding by linear programming. TL;DR: f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program), and numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant fraction of the output is corrupted.
Frequently Asked Questions (10)
Q1. What are the contributions mentioned in the paper "Fast and efficient compressive sensing using structurally random matrices"?

This paper introduces a fast and efficient framework for practical compressive sensing. The introduced framework is flexible and provides relevant features such as universality, block-based processing, and hardware friendliness for analog and optical domain implementation. It is highly promising for large-scale, real-time compressive sensing applications because it can be realized as a product of simple and fast operators; thus, there is no need to store the sensing matrix explicitly. Simulation results with several interesting SRMs under various practical settings are also presented to verify the validity of the theory as well as to illustrate the promising potential of the proposed framework.

The scale coefficient √(N/M) normalizes the transform so that the energy of the measurement vector is almost the same as that of the input signal vector.

The intuition behind this pre-randomization strategy is that scrambling a signal into a white noise-like form enables the sensing process to be independent of the signal’s sparsifying basis. 

The condition that each row of F has zero average sum guarantees that the entries of FΨ have zero mean, while the condition that the entries on each row of F and on each column of Ψ are not all equal prevents the degenerate case in which the entries of FΨ become a deterministic quantity.

If F is dense and uniform rather than block-diagonal (e.g., a DCT or normalized WHT matrix), the number of measurements needed is on the order of O(K log²(N/δ)).

One of the remaining challenges for CS in practice is to design a CS framework that has the following features: optimal or near-optimal sensing performance (the number of measurements for exact recovery is almost minimal, i.e., on the order of O(K log N)); universality (sensing performance is equally good with all sparsifying bases); and low-complexity, fast implementation that can support block-based processing (necessary for large-scale, real-time sensing applications).

The condition that the absolute average sum of every column of the sparsifying basis Ψ is on the order of o(1/√N) is also close to reality, because the majority of the columns of the sparsifying basis Ψ can be roughly viewed as bandpass and highpass filters whose average sums of coefficients are always zero.

Generally speaking, this is good for signal recovery from a small subset of measurements because, if the energy of some transform coefficients were concentrated in a few measurements that happen to be bypassed in the sampling process, there would be no hope for exact signal recovery even when employing the most sophisticated reconstruction method.

With compressible signals (e.g., images), the number of measurements acquired tends to be proportional to the signal dimension, for example M = N/4; the computational complexity reduction from using SRM is then N/(4 log N). Table III summarizes the practical advantages of employing an SRM over a random sensing matrix.

For each value of sparsity K ∈ {10, 20, 30, 40, 50, 60}, the authors repeat the experiment 500 times and count the probability of exact recovery.