IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 58, NO. 9, SEPTEMBER 2010
Modified-CS: Modifying Compressive Sensing for
Problems With Partially Known Support
Namrata Vaswani and Wei Lu
Abstract—We study the problem of reconstructing a sparse signal from a limited number of its linear projections when a part of its support is known, although the known part may contain some errors. The “known” part of the support, denoted $T$, may be available from prior knowledge. Alternatively, in a problem of recursively reconstructing time sequences of sparse spatial signals, one may use the support estimate from the previous time instant as the “known” part. The idea of our proposed solution (modified-CS) is to solve a convex relaxation of the following problem: find the signal that satisfies the data constraint and is sparsest outside of $T$. We obtain sufficient conditions for exact reconstruction using modified-CS. These are much weaker than those needed for compressive sensing (CS) when the sizes of the unknown part of the support and of errors in the known part are small compared to the support size. An important extension called regularized modified-CS (RegModCS) is developed which also uses prior signal estimate knowledge. Simulation comparisons for both sparse and compressible signals are shown.
Index Terms—Compressive sensing, modified-CS, partially known support, prior knowledge, sparse reconstruction.
I. INTRODUCTION
In this work, we study the sparse reconstruction problem
from noiseless measurements when a part of the support is
known, although the known part may contain some errors. The
“known” part of the support may be available from prior knowl-
edge. For example, consider MR image reconstruction using
the 2-D discrete wavelet transform (DWT) as the sparsifying
basis. If it is known that an image has no (or very little) black
background, all (or most) approximation coefficients will be
nonzero. In this case, the “known support” is the set of indexes
of the approximation coefficients. Alternatively, in a problem
of recursively reconstructing time sequences of sparse spatial
signals, one may use the support estimate from the previous
time instant as the “known support”. This latter problem occurs
in various practical applications such as real-time dynamic MRI reconstruction, real-time single-pixel camera video imaging, or video compression/decompression. There are also numerous other potential applications where sparse reconstruction for time sequences of signals/images may be needed, e.g., see [3] and [4].

Manuscript received May 19, 2009; accepted April 21, 2010. Date of publication May 24, 2010; date of current version August 11, 2010. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Pierre Vandergheynst. A shorter version of this work first appeared in Proceedings of the IEEE International Symposium on Information Theory (ISIT) 2009 and Proceedings of the IEEE International Conference on Image Processing (ICIP) 2009. This research was supported by NSF Grants ECCS-0725849 and CCF-0917015. The authors are with the Department of Electrical and Computer Engineering, Iowa State University, Ames, IA 50010 USA (e-mail: namrata@iastate.edu; luwei@iastate.edu). This paper has supplementary downloadable multimedia material available at http://ieeexplore.ieee.org provided by the authors. Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TSP.2010.2051150

Sparse reconstruction has been well studied for a while, e.g.,
see [5] and [6]. Recent work on compressed sensing (CS) gives
conditions for its exact reconstruction [7]–[9] and bounds the
error when this is not possible [10], [11].
Our recent work on least squares CS-residual (LS-CS) [12],
[13] can be interpreted as a solution to the problem of sparse
reconstruction with partly known support. LS-CS replaces CS
on the observation by CS on the LS observation residual, com-
puted using the “known” part of the support. Since the obser-
vation residual measures the signal residual, which has far fewer large nonzero components, LS-CS greatly improves the reconstruction error when fewer measurements are available. But
the exact sparsity size (total number of nonzero components) of
the signal residual is equal to or larger than that of the signal.
Since the number of measurements required for exact recon-
struction is governed by the exact sparsity size, LS-CS is not
able to achieve exact reconstruction using fewer noiseless mea-
surements than those needed by CS.
Exact reconstruction using fewer noiseless measurements
than those needed for CS is the focus of the current work.
Denote the “known” part of the support by $T$. Our proposed solution (modified-CS) solves an $\ell_1$ relaxation of the following problem: find the signal that satisfies the data constraint and is sparsest outside of $T$. We derive sufficient conditions for exact reconstruction using modified-CS. When $T$ is a fairly accurate estimate of the true support, these are much weaker than the sufficient conditions for CS. For a recursive time sequence reconstruction problem, this holds if the reconstruction at $t = 0$ is exact and the support changes slowly over time. The former can be ensured by using more measurements at $t = 0$, while the latter is often true in practice, e.g., see Fig. 1.
We also develop an important extension called regularized
modified-CS which also uses prior signal estimate knowledge.
It improves the error when exact reconstruction is not possible.
A part of this work appeared in [1]. In parallel and independent work in [14], Khajehnejad et al. have also studied a problem similar to ours, but they assume a probabilistic prior on the support. Other related work includes [15]. Very recent work on causal reconstruction of time sequences includes [16] (which focuses on the time-invariant support case) and [17] (which uses past estimates only to speed up the current optimization, not to improve the reconstruction error). Except [14], none of these prove exact reconstruction using fewer measurements and, except [14] and [15], none of these even demonstrate it.
Fig. 1. (a) Top: larynx image sequence; bottom: cardiac sequence. (b) Slow support change plots; left: additions; right: removals. Here $N_t$ refers to the 99% energy support of the two-level Daubechies-4 2-D discrete wavelet transform (DWT) of these sequences. $|N_t|$ varied between 4121 and 4183 ($\approx 0.07m$) for larynx and between 1108 and 1127 ($\approx 0.06m$) for cardiac. We plot the number of additions (left) and the number of removals (right) as a fraction of $|N_t|$. Notice that all changes are less than 2% of the support size.
Other recent work, e.g., [18], applies CS on observation differences to reconstruct the difference signal. While their goal is only to estimate the difference signal, the approach could easily be modified to also reconstruct the actual signal sequence (we refer to this as CS-diff). But all nonzero coefficients of a sparse signal in any sparsity basis will typically change over time, though gradually, and some new elements will become nonzero. Thus the exact sparsity size of the signal difference will also be equal to or larger than that of the signal itself. As a result, CS-diff will also not achieve exact reconstruction using fewer measurements, e.g., see Fig. 3.
In this work, whenever we use the term CS, we are actually
referring to basis pursuit (BP) [5]. As pointed out by an anony-
mous reviewer, modified-CS is a misnomer and a more appro-
priate name for our approach should be modified-BP.
As pointed out by an anonymous reviewer, modified-CS can
be used in conjunction with multiscale CS for video compres-
sion [19] to improve their compression ratios.
The paper is organized as follows. We give the notation
and problem definition below. Modified-CS is developed in
Section II. We obtain sufficient conditions for exact recon-
struction using it in Section III. In Section IV, we compare
these with the corresponding conditions for CS and we also
do a Monte Carlo comparison of modified-CS and CS. We
discuss dynamic modified-CS and regularized modified CS in
Section V. Comparisons for actual images and image sequences
are given in Section VI and conclusions and future work in
Section VII.
A. Notation
We use $'$ for transpose. The notation $\|c\|_k$ denotes the $\ell_k$ norm of the vector $c$. The $\ell_0$ pseudo-norm, $\|c\|_0$, counts the number of nonzero elements in $c$. For a matrix $M$, $\|M\|_k$ denotes its induced $k$-norm, i.e., $\|M\|_k := \max_{x \neq 0} \|Mx\|_k / \|x\|_k$.
We use the notation $A_T$ to denote the sub-matrix containing the columns of $A$ with indexes belonging to $T$. For a vector, the notation $(\beta)_T$ (or $\beta_T$) refers to a sub-vector that contains the elements with indexes in $T$. The notation $[1, m] := \{1, 2, \ldots, m\}$.
We use $T^c$ to denote the complement of the set $T$ w.r.t. $[1, m]$, i.e., $T^c := \{i \in [1, m] : i \notin T\}$. The set operations $\cup$, $\cap$ stand for set union and intersection, respectively. Also, $T \setminus \Delta := T \cap \Delta^c$ denotes set difference. For a set $T$, $|T|$ denotes its size (cardinality). But for a scalar $\alpha$, $|\alpha|$ denotes the magnitude of $\alpha$.
The $S$-restricted isometry constant [9], $\delta_S$, for a matrix, $A$, is defined as the smallest real number satisfying

$(1 - \delta_S)\|c\|_2^2 \le \|A_T c\|_2^2 \le (1 + \delta_S)\|c\|_2^2$   (1)

for all subsets $T$ of cardinality $|T| \le S$ and all real vectors $c$ of length $|T|$. The restricted orthogonality constant [9], $\theta_{S_1,S_2}$, is defined as the smallest real number satisfying

$|c_1' A_{T_1}' A_{T_2} c_2| \le \theta_{S_1,S_2} \|c_1\|_2 \|c_2\|_2$   (2)

for all disjoint sets $T_1$, $T_2$ with $|T_1| \le S_1$, $|T_2| \le S_2$ and $S_1 + S_2 \le m$, and for all vectors $c_1$, $c_2$ of length $|T_1|$, $|T_2|$, respectively. By setting $c_1 \equiv A_{T_1}' A_{T_2} c_2$ in (2), it is easy to see that

$\|A_{T_1}' A_{T_2}\|_2 \le \theta_{S_1,S_2}$   (3)

The notation $X \sim \mathcal{N}(\mu, \Sigma)$ means that $X$ is Gaussian distributed with mean $\mu$ and covariance $\Sigma$, while $\mathcal{N}(x; \mu, \Sigma)$ is used to denote the value of the Gaussian PDF at the point $x$.
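Because $\delta_S$ in (1) extremizes over all column subsets of size $S$, it can be computed exactly only for very small problems. As a hedged illustration (our own sketch, not part of the paper; the function name and test matrix are made up), $\delta_S$ can be brute-forced from the extreme singular values of each sub-matrix $A_T$:

```python
from itertools import combinations
import numpy as np

def rip_constant(A, S):
    """Brute-force delta_S from (1): max over |T| = S of
    max(sigma_max(A_T)^2 - 1, 1 - sigma_min(A_T)^2).
    Exponential in m; illustration for tiny matrices only."""
    m = A.shape[1]
    delta = 0.0
    for T in combinations(range(m), S):
        sv = np.linalg.svd(A[:, list(T)], compute_uv=False)
        delta = max(delta, sv[0]**2 - 1, 1 - sv[-1]**2)
    return delta

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 12)) / np.sqrt(8)  # columns approximately unit-norm
print(rip_constant(A, 2))
```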
B. Problem Definition
We measure an $n$-length vector $y$ where

$y := Ax$   (4)

We need to estimate $x$, which is a sparse $m$-length vector with $m > n$. The support of $x$, denoted $N$, can be split as $N = (T \cup \Delta) \setminus \Delta_e$, where $T$ is the “known” part of the support, $\Delta_e := T \setminus N$ is the error in the known part and $\Delta := N \setminus T$ is the unknown part. Thus, $\Delta_e \subseteq T$ and $\Delta$, $T$ are disjoint. Also, $N = (T \setminus \Delta_e) \cup \Delta$.
We use $s := |N|$ to denote the size of the support ($N$), $k := |T|$ to denote the size of the known ($T$) part of the support, $e := |\Delta_e|$ to denote the size of the error ($\Delta_e$) in the known part and $u := |\Delta|$ to denote the size of the unknown ($\Delta$) part of the support. Notice that $s = k + u - e$.
We assume that $A$ satisfies the $S$-restricted isometry property (RIP) [9] for $S = k + 2u$. $S$-RIP means that $\delta_S < 1$, where $\delta_S$ is the RIP constant for $A$ defined in (1).
In a static problem, $T$ is available from prior knowledge. For example, in the MRI problem described in the introduction, let $N$ be the (unknown) set of all DWT coefficients with magnitude above a certain zeroing threshold. Assume that the smaller coefficients are set to zero. Prior knowledge tells us that most image intensities are nonzero and so the approximation coefficients are mostly nonzero. Thus, we can let $T$ be the (known) set of indexes of all the approximation coefficients. The (unknown) set of indexes of the approximation coefficients which are zero form $\Delta_e$. The (unknown) set of indexes of the nonzero detail coefficients form $\Delta$.
For the time series problem, $y \equiv y_t$ and $x \equiv x_t$ with support, $N \equiv N_t$, and $T = \hat{N}_{t-1}$ is the support estimate from the previous time instant. If exact reconstruction occurs at $t - 1$, $\hat{N}_{t-1} = N_{t-1}$. In this case, $\Delta_e = N_{t-1} \setminus N_t$ is the set of indexes of elements that were nonzero at $t - 1$, but are now zero (deletions), while $\Delta = N_t \setminus N_{t-1}$ is the set of newly added coefficients at $t$ (additions). Slow sparsity pattern change over time (see, e.g., Fig. 1) then implies that $u \equiv |\Delta|$ and $e \equiv |\Delta_e|$ are much smaller than $s \equiv |N_t|$.
When exact reconstruction does not occur, $\Delta_e$ includes both the deletions and the extras from $\hat{N}_{t-1}$, i.e., $\Delta_e = \hat{N}_{t-1} \setminus N_t$. Similarly, $\Delta$ includes both the additions and the misses from $\hat{N}_{t-1}$, i.e., $\Delta = N_t \setminus \hat{N}_{t-1}$. In this case slow support change, along with $\hat{N}_{t-1}$ being an accurate estimate of $N_{t-1}$, still implies that $u \ll s$ and $e \ll s$.
II. MODIFIED COMPRESSIVE SENSING
Our goal is to find a signal that satisfies the data constraint given in (4) and whose support contains the smallest number of new additions to $T$, although it may or may not contain all elements of $T$. In other words, we would like to solve

$\min_\beta \|(\beta)_{T^c}\|_0$ subject to $y = A\beta$   (5)

If $\Delta_e$ is empty, i.e., if $N = T \cup \Delta$, then the solution of (5) is also the sparsest solution whose support contains $T$.
As is well known, minimizing the $\ell_0$ norm is a combinatorial optimization problem [20]. We propose to use the same trick that resulted in CS [5], [7], [8], [10]. We replace the $\ell_0$ norm by the $\ell_1$ norm, which is the closest norm to $\ell_0$ that makes the optimization problem convex, i.e., we solve

$\min_\beta \|(\beta)_{T^c}\|_1$ subject to $y = A\beta$   (6)

Denote its output by $\hat{x}$. If needed, the support can be estimated as

$\hat{N} := \{i \in [1, m] : |(\hat{x})_i| > \alpha\}$   (7)

where $\alpha \ge 0$ is a zeroing threshold. If exact reconstruction occurs, $\alpha$ can be zero. We discuss threshold setting for cases where exact reconstruction does not occur in Section V-A.
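For concreteness, (6) and (7) can be prototyped in a few lines with a generic convex solver. The sketch below is our own illustration, assuming the cvxpy package (the function name, solver defaults, and threshold choice are ours, not the paper's):

```python
import numpy as np
import cvxpy as cp

def modified_cs(A, y, T, alpha=0.0):
    """Sketch of modified-CS: solve min ||beta_{T^c}||_1 s.t. y = A beta, as in (6),
    then estimate the support by thresholding, as in (7)."""
    m = A.shape[1]
    Tc = np.setdiff1d(np.arange(m), T)           # complement of the known support T
    beta = cp.Variable(m)
    prob = cp.Problem(cp.Minimize(cp.norm1(beta[Tc])), [A @ beta == y])
    prob.solve()
    x_hat = beta.value
    N_hat = np.where(np.abs(x_hat) > alpha)[0]   # support estimate, as in (7)
    return x_hat, N_hat
```

Note that passing an empty $T$ reduces this to ordinary CS (basis pursuit), which is convenient for the comparisons in Section IV.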
III. EXACT RECONSTRUCTION RESULT

We first analyze the $\ell_0$ version of modified-CS in Section III-A. We then give the exact reconstruction result for the actual $\ell_1$ problem in Section III-B. In Section III-C, we give the two key lemmas that lead to its proof and we explain how they lead to the proof. The complete proof is given in the Appendix. The proof of the lemmas is given in Section III-D. Recall that $s := |N|$, $k := |T|$, $u := |\Delta|$, and $e := |\Delta_e|$.
A. Exact Reconstruction Result: $\ell_0$ Version of Modified-CS

Consider the $\ell_0$ problem, (5). Using a rank argument similar to [9, Lemma 1.2], we can show the following. The proof is given in the Appendix.
Proposition 1: Given a sparse vector, $x$, with support, $N = (T \cup \Delta) \setminus \Delta_e$, where $\Delta$ and $T$ are disjoint and $\Delta_e \subseteq T$. Consider reconstructing it from $y := Ax$ by solving (5). $x$ is the unique minimizer of (5) if $\delta_{k+2u} < 1$ ($A$ satisfies the $(k+2u)$-RIP). Using $k = s + e - u$, this is equivalent to $\delta_{s+u+e} < 1$.
Compare this with [9, Lemma 1.2], for the $\ell_0$ version of CS. It requires $\delta_{2s} < 1$, which is much stronger when $u \ll s$ and $e \ll s$, as is true for time series problems. For example, if $u = e = 0.02s$ (cf. Fig. 1), modified-CS needs only $\delta_{1.04s} < 1$ while CS needs $\delta_{2s} < 1$.
B. Exact Reconstruction Result: Modified-CS

Of course we do not solve (5) but its $\ell_1$ relaxation, (6). Just like in CS, the sufficient conditions for this to give exact reconstruction will be slightly stronger. In the next few subsections, we prove the following result.
Theorem 1 (Exact Reconstruction): Given a sparse vector, $x$, whose support, $N = (T \cup \Delta) \setminus \Delta_e$, where $\Delta$ and $T$ are disjoint and $\Delta_e \subseteq T$. Consider reconstructing it from $y := Ax$ by solving (6). $x$ is the unique minimizer of (6) if
1) $\delta_k < 1$ and $\delta_{2u} + \frac{\theta_{k,2u}^2}{1-\delta_k} < 1$, and
2) $a_k(u, u) + a_k(2u, u) < 1$, where

$a_k(S, \tilde{S}) := \frac{\theta_{\tilde{S},S} + \frac{\theta_{\tilde{S},k}\,\theta_{k,S}}{1-\delta_k}}{1 - \delta_S - \frac{\theta_{k,S}^2}{1-\delta_k}}$   (8)

The above conditions can be rewritten using $k = s + e - u$.
To understand the second condition better and relate it to the corresponding CS result, let us simplify it. Since $a_k(S, \tilde{S})$ is non-decreasing in its arguments, $a_k(u,u) + a_k(2u,u) \le \big(\theta_{u,u} + \theta_{u,2u} + \frac{2\theta_{u,k}\theta_{k,2u}}{1-\delta_k}\big) / \big(1 - \delta_{2u} - \frac{\theta_{k,2u}^2}{1-\delta_k}\big)$. Simplifying further, a sufficient condition for $a_k(u,u) + a_k(2u,u) < 1$ is $\theta_{u,u} + \theta_{u,2u} + \delta_{2u} + \frac{2\theta_{u,k}\theta_{k,2u} + \theta_{k,2u}^2}{1-\delta_k} < 1$. Further, a sufficient condition for this is $\theta_{u,u} + \theta_{u,2u} + \delta_{2u} + \frac{\theta_{u,k}^2 + 2\theta_{k,2u}^2}{1-\delta_k} < 1$ (using $2\theta_{u,k}\theta_{k,2u} \le \theta_{u,k}^2 + \theta_{k,2u}^2$).
To get a condition only in terms of $\delta$'s, use the fact that $\theta_{S,S'} \le \delta_{S+S'}$ [9]. A sufficient condition is $2\delta_{2u} + \delta_{3u} + \delta_k + \delta_{k+u}^2 + 2\delta_{k+2u}^2 < 1$ (the extra $\delta_k$ term accounts for the $1/(1-\delta_k)$ factor). Further, notice that if $u \le k$ and if $\delta_{k+2u} \le 0.2$, then $2\delta_{2u} + \delta_{3u} + \delta_k + \delta_{k+u}^2 + 2\delta_{k+2u}^2 \le 4\delta_{k+2u} + 3\delta_{k+2u}^2 < 1$.
Corollary 1 (Exact Reconstruction): Given a sparse vector, $x$, whose support, $N = (T \cup \Delta) \setminus \Delta_e$, where $\Delta$ and $T$ are disjoint and $\Delta_e \subseteq T$. Consider reconstructing it from $y := Ax$ by solving (6). $x$ is the unique minimizer of (6) if $\delta_k < 1$ and

$\Big(\frac{2\theta_{u,k}\theta_{k,2u} + \theta_{k,2u}^2}{1-\delta_k}\Big) + \Big(\delta_{2u} + \theta_{u,u} + \theta_{u,2u}\Big) < 1$   (9)

This, in turn, holds if $2\delta_{2u} + \delta_{3u} + \delta_k + \delta_{k+u}^2 + 2\delta_{k+2u}^2 < 1$. This, in turn, holds if $u \le k$ and $\delta_{k+2u} \le 0.2$ (the left-hand side is then at most $4(0.2) + 3(0.2)^2 = 0.92 < 1$). These conditions can be rewritten by substituting $k = s + e - u$.
Compare (9) to the sufficient condition for CS given in [9]:

$\delta_s + \theta_{s,s} + \theta_{s,2s} < 1$   (10)

As shown in Fig. 1, usually $u \ll s$ and $e \ll s$ (which means that $k \approx s$). Consider the case when the number of measurements, $n$, is smaller than what is needed for exact reconstruction for a given support size, $s$, but is large enough to ensure that $\delta_{k+2u}$ is small. Under these assumptions, compare (9) with (10). Notice that (a) the first bracket of the left-hand side (LHS) of (9) will be small compared to the LHS of (10). The same will hold for the second and third terms of its second bracket compared with the second and third terms of (10). The first term of its second bracket, $\delta_{2u}$, will be smaller than the first term of (10), $\delta_s$. Thus, for a certain range of values of $n$, it may happen that (9) holds, but (10) does not hold, e.g., if $\delta_s$ is too large, (10) will not hold, but (9) can still hold as long as $u$ and $e$ are small enough. A detailed comparison is done in Section IV.
C. Proof of Theorem 1: Main Lemmas and Proof Outline

The idea of the proof is motivated by that of [9, Theorem 1.3]. Suppose that we want to minimize a convex function $J(\beta)$ subject to $A\beta = y$ and that $J$ is differentiable. The Lagrange multiplier optimality condition requires that there exist a Lagrange multiplier, $w$, s.t. $\nabla J(\beta) - A'w = 0$. Thus, for $x$ to be a solution we need $A'w = \nabla J(x)$. In our case, $J(\beta) = \|(\beta)_{T^c}\|_1$. Thus, $(\nabla J(x))_j = \operatorname{sgn}(x_j)$ for $j \in \Delta$ and $(\nabla J(x))_j = 0$ for $j \in T$. For $j \notin T \cup \Delta$, $x_j = 0$. Since $|x_j|$ is not differentiable at 0, we require that $A_j'w$ lie in the subgradient set of $|x_j|$ at 0, which is the set $[-1, 1]$ [21]. In summary, we need a $w$ that satisfies

$A_j'w = \operatorname{sgn}(x_j)$ if $j \in \Delta$, $\quad A_j'w = 0$ if $j \in T$, and $\quad |A_j'w| \le 1$ if $j \notin T \cup \Delta$   (11)
Lemma 1 below shows that by using (11), but with $|A_j'w| \le 1$ replaced by $|A_j'w| < 1$ for all $j \notin T \cup \Delta$, we get a set of sufficient conditions for $x$ to be the unique solution of (6).
Lemma 1: The sparse signal, $x$, with support as defined in Theorem 1, and with $y := Ax$, is the unique minimizer of (6) if $\delta_{k+u} < 1$ and if we can find a vector $w$ satisfying the following:
1) $A_j'w = \operatorname{sgn}(x_j)$ if $j \in \Delta$;
2) $A_j'w = 0$ if $j \in T$;
3) $|A_j'w| < 1$ if $j \notin T \cup \Delta$.
Recall that $\Delta = N \setminus T$ and $\Delta_e = T \setminus N$.
The proof is given in the next subsection.
Next we give Lemma 2, which constructs a $w$ that satisfies $A_j'w = c_j$ for all $j \in T_d$ and $A_j'w = 0$ for all $j \in T$, for any set $T_d$ disjoint with $T$ of size $|T_d| \le S$ and for any given vector $c$ of size $|T_d|$. It also bounds $|A_j'w|$ for all $j \notin T \cup T_d \cup E$, where $E$ is called an “exceptional set.” We prove Theorem 1 by applying Lemma 2 iteratively to construct a $w$ that satisfies the conditions of Lemma 1 under the assumptions of Theorem 1.
Lemma 2: Given the known part of the support, $T$, of size $k$. Let $S$, $\tilde{S}$ be such that $\delta_k < 1$ and $\delta_S + \frac{\theta_{k,S}^2}{1-\delta_k} < 1$. Let $c$ be a vector supported on a set $T_d$, that is disjoint with $T$, of size $|T_d| \le S$. Then there exists a vector $w$ and an exceptional set, $E$, disjoint with $T \cup T_d$, s.t.

$A_j'w = c_j \ \forall j \in T_d$, $\quad A_j'w = 0 \ \forall j \in T$   (12)

and

$|E| < \tilde{S}$, $\quad \|A_E'w\|_2 \le a_k(S, \tilde{S})\,\|c\|_2$, $\quad |A_j'w| \le \frac{a_k(S,\tilde{S})}{\sqrt{\tilde{S}}}\,\|c\|_2 \ \forall j \notin T \cup T_d \cup E$, $\quad \|w\|_2 \le K_k(S)\,\|c\|_2$   (13)

where $a_k(S, \tilde{S})$ is defined in (8) and

$K_k(S) := \frac{\sqrt{1+\delta_S}}{1 - \delta_S - \frac{\theta_{k,S}^2}{1-\delta_k}}$   (14)
The proof is given in the next subsection.
Proof Outline of Theorem 1: To prove Theorem 1, apply Lemma 2 iteratively, in a fashion similar to that of the proof of [9, Lemma 2.2] (this proof had some important typos). The main idea is as follows. At iteration zero, apply Lemma 2 with $T_d = \Delta$ (so that $S = u$), $\tilde{S} = u$, and $c = \operatorname{sgn}(x_\Delta)$, to get a $w_0$ and an exceptional set $E_0$, of size less than $u$, that satisfy the above conditions. At iteration $r > 0$, apply Lemma 2 with $T_d = \Delta \cup E_{r-1}$ (so that $S = 2u$), $\tilde{S} = u$, and $c$ supported on $E_{r-1}$ with $c_{E_{r-1}} = A_{E_{r-1}}'w_{r-1}$, to get a $w_r$ and an exceptional set $E_r$, of size less than $u$. Lemma 2 is applicable in the above fashion because condition 1 of Theorem 1 holds.
Define $w := \sum_{r=0}^{\infty} (-1)^r w_r$. We then argue that if condition 2 of Theorem 1 holds, $w$ satisfies the conditions of Lemma 1. Applying Lemma 1, the result follows. We give the entire proof in the Appendix.
D. Proofs of Lemmas 1 and 2

We prove the lemmas from the previous subsection here. Recall that $\Delta = N \setminus T$ and $\Delta_e = T \setminus N$.
Proof of Lemma 1: The proof is motivated by [9, Sec. II-A]. There is clearly at least one element in the feasible set of (6), namely $x$ itself, and hence there will be at least one minimizer of (6). Let $\hat{\beta}$ be a minimizer of (6). We need to prove that if the conditions of the lemma hold, it is equal to $x$. For any minimizer, $\hat{\beta}$,

$\|(\hat{\beta})_{T^c}\|_1 \le \|(x)_{T^c}\|_1$   (15)

Recall that $x$ is zero outside of $T \cup \Delta$, $T$ and $\Delta$ are disjoint, and $x$ is always nonzero on the set $\Delta$. Take a $w$ that satisfies the three conditions of the lemma. Then

$\|(\hat{\beta})_{T^c}\|_1 = \sum_{j \in \Delta} |\hat{\beta}_j| + \sum_{j \notin T \cup \Delta} |\hat{\beta}_j| \ge \sum_{j \in \Delta} \operatorname{sgn}(x_j)\,\hat{\beta}_j + \sum_{j \notin T \cup \Delta} (A_j'w)\,\hat{\beta}_j = w'A\hat{\beta} = w'Ax = \sum_{j \in \Delta} \operatorname{sgn}(x_j)\,x_j = \|(x)_{T^c}\|_1$   (16)
Now, the only way (16) and (15) can hold simultaneously is if all inequalities in (16) are actually equalities. Consider the first inequality. Since $|A_j'w|$ is strictly less than 1 for all $j \notin T \cup \Delta$, the only way $\sum_{j \notin T \cup \Delta} |\hat{\beta}_j| = \sum_{j \notin T \cup \Delta} (A_j'w)\,\hat{\beta}_j$ is if $\hat{\beta}_j = 0$ for all $j \notin T \cup \Delta$. Since both $x$ and $\hat{\beta}$ solve (6), $A\hat{\beta} = Ax = y$. Since $\hat{\beta}_j = 0 = x_j$ for all $j \notin T \cup \Delta$, this means that $A_{T \cup \Delta}(\hat{\beta} - x)_{T \cup \Delta} = 0$, or that $(\hat{\beta} - x)_{T \cup \Delta} \in \operatorname{null}(A_{T \cup \Delta})$. Since $\delta_{k+u} < 1$, $A_{T \cup \Delta}$ is full rank and so the only way this can happen is if $(\hat{\beta} - x)_{T \cup \Delta} = 0$. Thus, any minimizer, $\hat{\beta} = x$, i.e., $x$ is the unique minimizer of (6).
Proof of Lemma 2: The proof of this lemma is significantly different from that of the corresponding lemma in [9], even though the form of the final result is similar.
Any $w$ that satisfies $A_j'w = 0$ for all $j \in T$ will be of the form

$w = [I - A_T(A_T'A_T)^{-1}A_T']\,\gamma$   (17)

We need to find a $\gamma$ s.t. $A_{T_d}'w = c$, i.e., $A_{T_d}'[I - A_T(A_T'A_T)^{-1}A_T']\gamma = c$. Let $M := I - A_T(A_T'A_T)^{-1}A_T'$ and let $\gamma := MA_{T_d}\eta$. Then $A_{T_d}'MA_{T_d}\eta = c$, so that $\eta = (A_{T_d}'MA_{T_d})^{-1}c$. This follows because $M'M = M$ since $M$ is a projection matrix. Thus

$w = MA_{T_d}(A_{T_d}'MA_{T_d})^{-1}c$   (18)

Consider any set $\tilde{T}_d$, with $|\tilde{T}_d| \le \tilde{S}$, disjoint with $T \cup T_d$. Then

$\|A_{\tilde{T}_d}'w\|_2 \le \|A_{\tilde{T}_d}'MA_{T_d}\|_2\,\|(A_{T_d}'MA_{T_d})^{-1}\|_2\,\|c\|_2$   (19)

Consider the first term from the right-hand side (RHS) of (19):

$\|A_{\tilde{T}_d}'MA_{T_d}\|_2 = \|A_{\tilde{T}_d}'A_{T_d} - A_{\tilde{T}_d}'A_T(A_T'A_T)^{-1}A_T'A_{T_d}\|_2 \le \theta_{\tilde{S},S} + \frac{\theta_{\tilde{S},k}\,\theta_{k,S}}{1-\delta_k}$   (20)

Consider the second term from the RHS of (19). Since $A_{T_d}'MA_{T_d}$ is non-negative definite,

$\|(A_{T_d}'MA_{T_d})^{-1}\|_2 = \frac{1}{\lambda_{\min}(A_{T_d}'MA_{T_d})}$   (21)

Now, $A_{T_d}'MA_{T_d} = A_{T_d}'A_{T_d} - A_{T_d}'A_T(A_T'A_T)^{-1}A_T'A_{T_d}$, which is the difference of two symmetric non-negative definite matrices. Let $B_1$ denote the first matrix and $B_2$ the second one. Use the fact that $\lambda_{\min}(B_1 - B_2) \ge \lambda_{\min}(B_1) - \lambda_{\max}(B_2)$, where $\lambda_{\min}$, $\lambda_{\max}$ denote the minimum, maximum eigenvalue. Since $\lambda_{\min}(A_{T_d}'A_{T_d}) \ge 1 - \delta_S$ and $\lambda_{\max}(A_{T_d}'A_T(A_T'A_T)^{-1}A_T'A_{T_d}) \le \frac{\theta_{k,S}^2}{1-\delta_k}$, thus

$\|(A_{T_d}'MA_{T_d})^{-1}\|_2 \le \frac{1}{1 - \delta_S - \frac{\theta_{k,S}^2}{1-\delta_k}}$   (22)

as long as the denominator is positive. It is positive because we have assumed that $\delta_S + \frac{\theta_{k,S}^2}{1-\delta_k} < 1$. Using (20) and (22) to bound (19), we get that for any set $\tilde{T}_d$ with $|\tilde{T}_d| \le \tilde{S}$,

$\|A_{\tilde{T}_d}'w\|_2 \le a_k(S, \tilde{S})\,\|c\|_2$   (23)

where $a_k(S, \tilde{S})$ is defined in (8). Notice that $a_k(S, \tilde{S})$ is non-decreasing in $k$, $S$, $\tilde{S}$. Define an exceptional set

$E := \{j \in (T \cup T_d)^c : |A_j'w| > \frac{a_k(S, \tilde{S})}{\sqrt{\tilde{S}}}\,\|c\|_2\}$   (24)

Notice that $E$ must obey $|E| < \tilde{S}$, since otherwise we can contradict (23) by taking $\tilde{T}_d \subseteq E$ with $|\tilde{T}_d| = \tilde{S}$.
Since $|E| < \tilde{S}$ and $E$ is disjoint with $T \cup T_d$, (23) holds for $\tilde{T}_d = E$, i.e., $\|A_E'w\|_2 \le a_k(S, \tilde{S})\,\|c\|_2$. Also, by definition of $E$, $|A_j'w| \le \frac{a_k(S,\tilde{S})}{\sqrt{\tilde{S}}}\,\|c\|_2$ for all $j \notin T \cup T_d \cup E$. Finally,

$\|w\|_2 \le \|MA_{T_d}\|_2\,\|(A_{T_d}'MA_{T_d})^{-1}\|_2\,\|c\|_2 \le \frac{\sqrt{1+\delta_S}}{1 - \delta_S - \frac{\theta_{k,S}^2}{1-\delta_k}}\,\|c\|_2 = K_k(S)\,\|c\|_2$   (25)

since $\|MA_{T_d}\|_2 \le \|A_{T_d}\|_2 \le \sqrt{1+\delta_S}$ (holds because $M$ is a projection matrix). Thus, all equations of (13) hold. Using (18), (12) holds.
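The interpolation construction in (17) and (18) is easy to sanity-check numerically. The sketch below is our own illustration (the dimensions and index sets are arbitrary choices): it builds $w$ via the projection formula and verifies the conditions in (12):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 40, 80
A = rng.standard_normal((n, m)) / np.sqrt(n)   # columns approximately unit-norm

T  = np.arange(10)                  # "known" support T, size k = 10
Td = np.arange(10, 15)              # set T_d disjoint with T, size S = 5
c  = rng.standard_normal(Td.size)   # vector supported on T_d

AT, ATd = A[:, T], A[:, Td]
M = np.eye(n) - AT @ np.linalg.inv(AT.T @ AT) @ AT.T  # projector onto span(A_T)-perp, as in (17)
w = M @ ATd @ np.linalg.solve(ATd.T @ M @ ATd, c)     # w from (18)

assert np.allclose(AT.T @ w, 0)    # A_j' w = 0 for all j in T
assert np.allclose(ATd.T @ w, c)   # A_j' w = c_j for all j in T_d
```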
IV. COMPARISON OF CS AND MODIFIED-CS
In Theorem 1 and Corollary 1, we derived sufficient
conditions for exact reconstruction using modified-CS. In
Section IV-A, we compare the sufficient conditions for mod-
ified-CS with those for CS. In Section IV-B, we use Monte
Carlo to compare the probabilities of exact reconstruction for
both methods.
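The Monte Carlo comparison of Section IV-B can be mimicked with a small experiment. The sketch below is our own hedged illustration, assuming the cvxpy package (the signal model, set sizes, and tolerance are our choices, not the paper's exact setup); it estimates the empirical probability of exact reconstruction for CS and modified-CS from random Gaussian measurements:

```python
import numpy as np
import cvxpy as cp

def recovery_rates(n=30, m=100, s=20, u=2, e=2, trials=20, tol=1e-5):
    """Empirical exact-recovery rates for CS (T empty) vs. modified-CS.
    T is built from the true support N with u misses and e extras."""
    rng = np.random.default_rng(0)
    hits = np.zeros(2)
    for _ in range(trials):
        A = rng.standard_normal((n, m)) / np.sqrt(n)
        N = rng.choice(m, size=s, replace=False)
        x = np.zeros(m)
        x[N] = rng.choice([-1.0, 1.0], size=s)
        extras = rng.choice(np.setdiff1d(np.arange(m), N), size=e, replace=False)
        T = np.concatenate([N[u:], extras])   # drop u true entries, add e wrong ones
        y = A @ x
        for i, known in enumerate([np.array([], dtype=int), T]):
            Tc = np.setdiff1d(np.arange(m), known)
            beta = cp.Variable(m)
            cp.Problem(cp.Minimize(cp.norm1(beta[Tc])), [A @ beta == y]).solve()
            hits[i] += float(np.max(np.abs(beta.value - x)) < tol)
    return hits / trials   # [CS rate, modified-CS rate]
```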
A. Comparing Sufficient Conditions

We compare the sufficient conditions for modified-CS and for CS, expressed only in terms of $\delta$'s. Sufficient conditions for an algorithm serve as a designer's tool to decide the number of measurements needed for it, and in that sense comparing the two sufficient conditions is meaningful.
For modified-CS, from Corollary 1, the sufficient condition in terms of only $\delta$'s is $2\delta_{2u} + \delta_{3u} + \delta_k + \delta_{k+u}^2 + 2\delta_{k+2u}^2 < 1$. Using $k = s + e - u$, this becomes

$2\delta_{2u} + \delta_{3u} + \delta_{s+e-u} + \delta_{s+e}^2 + 2\delta_{s+e+u}^2 < 1$   (26)

For CS, two of the best (weakest) sufficient conditions that use only $\delta$'s are given in [22], [23], and [11]. Between these two, it is not obvious which one is weaker. Using [22] and [11], CS achieves exact reconstruction if either

$\delta_{2s} < \sqrt{2} - 1$ or $\delta_{2s} + \delta_{3s} < 1$   (27)

To compare (26) and (27), we use $u \approx e \ll s$, which is typical for time series applications (see Fig. 1). One way to compare them is to use $\delta_{cr} \le c\,\delta_{2r}$ [24, Corollary 3.4] to get the LHS's of both in terms of a scalar multiple of $\delta_{2u}$. The largest multiple in (26) comes from the $\delta_{s+e-u} \approx \delta_s$ term, whereas every term of (27) involves $\delta_{2s}$ or $\delta_{3s}$; the bound on $\delta_{2u}$ required by (27) is therefore several times smaller than that required by (26), i.e., (27) is clearly stronger.
Alternatively, we can compare (26) and (27) using the high probability upper bounds on $\delta_S$ as in [9]. Using [9, eq. 3.22],
