TO APPEAR IN IEEE TRANSACTIONS ON INFORMATION THEORY 1
Recovery of Sparsely Corrupted Signals
Christoph Studer, Member, IEEE, Patrick Kuppinger, Student Member, IEEE,
Graeme Pope, Student Member, IEEE, and Helmut Bölcskei, Fellow, IEEE
Abstract—We investigate the recovery of signals exhibiting a sparse representation in a general (i.e., possibly redundant or incomplete) dictionary that are corrupted by additive noise admitting a sparse representation in another general dictionary. This setup covers a wide range of applications, such as image inpainting, super-resolution, signal separation, and recovery of signals that are impaired by, e.g., clipping, impulse noise, or narrowband interference. We present deterministic recovery guarantees based on a novel uncertainty relation for pairs of general dictionaries and we provide corresponding practicable recovery algorithms. The recovery guarantees we find depend on the signal and noise sparsity levels, on the coherence parameters of the involved dictionaries, and on the amount of prior knowledge about the signal and noise support sets.

Index Terms—Uncertainty relations, signal restoration, signal separation, coherence-based recovery guarantees, ℓ1-norm minimization, greedy algorithms.
I. INTRODUCTION
We consider the problem of identifying the sparse vector x ∈ ℂ^{N_a} from M linear and non-adaptive measurements collected in the vector

z = Ax + Be   (1)

where A ∈ ℂ^{M×N_a} and B ∈ ℂ^{M×N_b} are known deterministic and general (i.e., not necessarily of the same cardinality, and possibly redundant or incomplete) dictionaries, and e ∈ ℂ^{N_b} represents a sparse noise vector. The support set of e and the corresponding nonzero entries can be arbitrary; in particular, e may also depend on x and/or the dictionary A.
This recovery problem occurs in many applications, some
of which are described next:
Clipping: Non-linearities in (power-)amplifiers or in analog-to-digital converters often cause signal clipping or saturation [2]. This impairment can be cast into the signal model (1) by setting B = I_M, where I_M denotes the M × M identity matrix, and rewriting (1) as z = y + e
Part of this paper was presented at the IEEE International Symposium on
Information Theory (ISIT), Saint-Petersburg, Russia, July 2011 [1]. This work
was supported in part by the Swiss National Science Foundation (SNSF) under
Grant PA00P2-134155.
C. Studer was with the Dept. of Information Technology and Electrical
Engineering, ETH Zurich, Switzerland, and is now with the Dept. of Electrical
and Computer Engineering, Rice University, Houston, TX, USA (e-mail:
studer@rice.edu).
P. Kuppinger was with the Dept. of Information Technology and Electri-
cal Engineering, ETH Zurich, Switzerland, and is now with UBS, Zurich,
Switzerland (e-mail: patrick.kuppinger@gmail.com).
G. Pope and H. Bölcskei are with the Dept. of Information Tech-
nology and Electrical Engineering, ETH Zurich, Switzerland (e-mail:
gpope@nari.ee.ethz.ch; boelcskei@nari.ee.ethz.ch).
with e = g_a(y) − y. Concretely, instead of the M-dimensional signal vector y = Ax of interest, the device in question delivers g_a(y), where the function g_a(·) realizes entry-wise signal clipping to the interval [−a, +a]. The vector e will be sparse, provided the clipping level is high enough. Furthermore, in this case the support set of e can be identified prior to recovery, by simply comparing the absolute values of the entries of the observation z to the clipping threshold a. Finally, we note that here it is essential that the noise vector e be allowed to depend on the vector x and/or the dictionary A.
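As an illustration, the clipping model and the pre-recovery support identification described above can be sketched in NumPy as follows. This is a minimal sketch under our own assumptions: the Gaussian test signal stands in for y = Ax, and all variable names are ours, not the paper's.

```python
import numpy as np

def clip_to_level(y, a):
    """Entry-wise clipping g_a(y) of y to the interval [-a, +a]."""
    return np.clip(y, -a, a)

rng = np.random.default_rng(0)
M = 64
y = rng.standard_normal(M)      # stands in for the signal y = Ax
a = 1.5                         # clipping threshold
z = clip_to_level(y, a)         # observed measurement z = y + e
e = z - y                       # sparse error, nonzero only where |y| > a

# The support of e is identifiable from z alone:
# clipped entries satisfy |z_i| = a exactly.
support_from_z = np.flatnonzero(np.abs(z) == a)
true_support = np.flatnonzero(np.abs(y) > a)
```

Note that e here depends on y (and hence on x and A), exactly the dependence the paragraph above says the model must allow.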
Impulse noise: In numerous applications, one has to deal with the recovery of signals corrupted by impulse noise [3]. Specific applications include, e.g., reading out from unreliable memory [4] or recovery of audio signals impaired by click/pop noise, which typically occurs during playback of old phonograph records. The model in (1) is easily seen to incorporate such impairments. Just set B = I_M and let e be the impulse-noise vector. We would like to emphasize the generality of (1), which allows impulse noise that is sparse in general dictionaries B.
Narrowband interference: In many applications one is interested in recovering audio, video, or communication signals that are corrupted by narrowband interference. Electric hum, as it may occur in improperly designed audio or video equipment, is a typical example of such an impairment. Electric hum typically exhibits a sparse representation in the Fourier basis as it (mainly) consists of a tone at some base frequency and a series of corresponding harmonics, which is captured by setting B = F_M in (1), where F_M is the M-dimensional discrete Fourier transform (DFT) matrix defined below in (2).
Super-resolution and inpainting: Our framework also
encompasses super-resolution [5], [6] and inpainting [7]
for images, audio, and video signals. In both applications,
only a subset of the entries of the (full-resolution) signal
vector y = Ax is available and the task is to fill in the
missing entries of the signal vector such that y = Ax.
The missing entries are accounted for by choosing the
vector e such that the entries of z = y + e corresponding
to the missing entries in y are set to some (arbitrary)
value, e.g., 0. The missing entries of y are then filled in
by first recovering x from z and then computing y = Ax.
Note that in both applications the support set E is known
(i.e., the locations of the missing entries can easily be
identified) and the dictionary A is typically redundant
(see, e.g., [8] for a corresponding discussion), i.e., A
has more dictionary elements (columns) than rows, which
© 2012 IEEE

demonstrates the need for recovery results that apply to
general (i.e., possibly redundant) dictionaries.
Signal separation: Separation of (audio or video) signals
into two distinct components also fits into our framework.
A prominent example for this task is the separation of
texture from cartoon parts in images (see [9], [10] and
references therein). In the language of our setup, the
dictionaries A and B are chosen such that they allow
for sparse representation of the two distinct features;
x and e are the corresponding coefficients describing
these features (sparsely). Note that here the vector e
no longer plays the role of (undesired) noise. Signal
separation then amounts to simultaneously extracting the
sparse vectors x and e from the observation (e.g., the
image) z = Ax + Be.
Naturally, it is of significant practical interest to identify fundamental limits on the recovery of x (and e, if appropriate) from z in (1). For the noiseless case z = Ax such recovery guarantees are known [11]–[13] and typically set limits on the maximum allowed number of nonzero entries of x or, more colloquially, on the "sparsity" level of x. These recovery guarantees are usually expressed in terms of restricted isometry constants (RICs) [14], [15] or in terms of the coherence parameter [11]–[13], [16] of the dictionary A. In contrast to coherence parameters, RICs can, in general, not be computed efficiently. In this paper, we focus exclusively on coherence-based recovery guarantees. For the case of unstructured noise, i.e., z = Ax + n with no constraints imposed on n apart from ‖n‖_2 < ∞, coherence-based recovery guarantees were derived in [16]–[20]. The corresponding results, however, do not guarantee perfect recovery of x, but only ensure that either the recovery error is bounded above by a function of ‖n‖_2 or only guarantee perfect recovery of the support set of x. Such results are to be expected, as a consequence of the generality of the setup in terms of the assumptions on the noise vector n.
A. Contributions
In this paper, we consider the following questions: 1) Under
which conditions can the vector x (and the vector e, if
appropriate) be recovered perfectly from the (sparsely cor-
rupted) observation z = Ax + Be, and 2) can we formulate
practical recovery algorithms with corresponding (analytical)
performance guarantees? Sparsity of the signal vector x and
the error vector e will turn out to be key in answering
these questions. More specifically, based on an uncertainty
relation for pairs of general dictionaries, we establish recovery
guarantees that depend on the number of nonzero entries in x
and e, and on the coherence parameters of the dictionaries
A and B. These recovery guarantees are obtained for the
following different cases: I) The support sets of both x and
e are known (prior to recovery), II) the support set of only
x or only e is known, III) the number of nonzero entries of
only x or only e is known, and IV) nothing is known about x
and e. We formulate efficient recovery algorithms and derive
corresponding performance guarantees. Finally, we compare
our analytical recovery thresholds to numerical results and we
demonstrate the application of our algorithms and recovery
guarantees to an image inpainting example.
B. Outline of the paper
The remainder of the paper is organized as follows. In Sec-
tion II, we briefly review relevant previous results. In Sec-
tion III, we derive a novel uncertainty relation that lays the
foundation for the recovery guarantees reported in Section IV.
A discussion of our results is provided in Section V and
numerical results are presented in Section VI. We conclude
in Section VII.
C. Notation

Lowercase boldface letters stand for column vectors and uppercase boldface letters designate matrices. For the matrix M, we denote its transpose and conjugate transpose by M^T and M^H, respectively, its (Moore–Penrose) pseudo-inverse by M† = (M^H M)^{−1} M^H, its kth column by m_k, and the entry in the kth row and ℓth column by [M]_{k,ℓ}. The kth entry of the vector m is [m]_k. The space spanned by the columns of M is denoted by R(M). The M × M identity matrix is denoted by I_M, the M × N all-zeros matrix by 0_{M,N}, and the all-zeros vector of dimension M by 0_M. The M × M discrete Fourier transform matrix F_M is defined as

[F_M]_{k,ℓ} = (1/√M) exp(−2πi(k − 1)(ℓ − 1)/M),  k, ℓ = 1, ..., M   (2)

where i² = −1. The Euclidean (or ℓ2) norm of the vector x is denoted by ‖x‖_2, ‖x‖_1 stands for the ℓ1-norm of x, and ‖x‖_0 designates the number of nonzero entries in x. Throughout the paper, we assume that the columns of the dictionaries A and B have unit ℓ2-norm. The minimum and maximum eigenvalues of the positive-semidefinite matrix M are denoted by λ_min(M) and λ_max(M), respectively. The spectral norm of the matrix M is ‖M‖ = √(λ_max(M^H M)). Sets are designated by uppercase calligraphic letters; the cardinality of the set T is |T|. The complement of a set S (in some superset T) is denoted by S^c. For two sets S_1 and S_2, s ∈ S_1 + S_2 means that s is of the form s = s_1 + s_2, where s_1 ∈ S_1 and s_2 ∈ S_2. The support set of the vector m is designated by supp(m). The matrix M_T is obtained from M by retaining the columns of M with indices in T; the vector m_T is obtained analogously. We define the N × N diagonal (projection) matrix P_S for the set S ⊆ {1, ..., N} as follows: [P_S]_{k,ℓ} = 1 if k = ℓ and k ∈ S, and 0 otherwise. For x ∈ ℝ, we set [x]_+ = max{x, 0}.
II. REVIEW OF RELEVANT PREVIOUS RESULTS
Recovery of the vector x from the sparsely corrupted mea-
surement z = Ax + Be corresponds to a sparse-signal re-
covery problem subject to structured (i.e., sparse) noise. In
this section, we briefly review relevant existing results for
sparse-signal recovery from noiseless measurements, and we
summarize the results available for recovery in the presence
of unstructured and structured noise.

A. Recovery in the noiseless case

Recovery of x from z = Ax where A is redundant (i.e., M < N_a) amounts to solving an underdetermined linear system of equations. Hence, there are infinitely many solutions x, in general. However, under the assumption of x being sparse, the situation changes drastically. More specifically, one can recover x from the observation z = Ax by solving

(P0) minimize ‖x‖_0 subject to z = Ax.

This approach results, however, in prohibitive computational complexity, even for small problem sizes. Two of the most popular and computationally tractable alternatives to solving (P0) by an exhaustive search are basis pursuit (BP) [11]–[13], [21]–[23] and orthogonal matching pursuit (OMP) [13], [24], [25]. BP is essentially a convex relaxation of (P0) and amounts to solving

(BP) minimize ‖x‖_1 subject to z = Ax.

OMP is a greedy algorithm that recovers the vector x by iteratively selecting the column of A that is most "correlated" with the difference between z and its current best (in ℓ2-norm sense) approximation.
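The OMP iteration just described can be sketched as follows. This is a minimal illustrative implementation with our own variable names and a fixed iteration count, not the exact algorithm statement from the cited references; for a clean correctness check we use an orthonormal dictionary, for which this greedy selection recovers any sparse x exactly.

```python
import numpy as np

def omp(A, z, n_iter):
    """Greedy OMP sketch: pick the column of A most correlated with the
    residual, then re-fit z on the selected columns in l2-norm."""
    support, residual = [], z.copy()
    for _ in range(n_iter):
        j = int(np.argmax(np.abs(A.conj().T @ residual)))
        if j not in support:
            support.append(j)
        coeffs, *_ = np.linalg.lstsq(A[:, support], z, rcond=None)
        residual = z - A[:, support] @ coeffs
    x_hat = np.zeros(A.shape[1], dtype=coeffs.dtype)
    x_hat[support] = coeffs
    return x_hat

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((64, 64)))  # orthonormal dictionary
x = np.zeros(64)
x[[3, 17]] = [1.5, -2.0]                            # 2-sparse signal
z = Q @ x
x_hat = omp(Q, z, 2)                                # recovers x exactly here
```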
The questions that arise naturally are: Under which conditions does (P0) have a unique solution and when do BP and/or OMP deliver this solution? To formulate the answer to these questions, define n_x = ‖x‖_0 and the coherence of the dictionary A as

µ_a = max_{k,ℓ: k≠ℓ} |a_k^H a_ℓ|.   (3)

As shown in [11]–[13], a sufficient condition for x to be the unique solution of (P0) applied to z = Ax and for BP and OMP to deliver this solution is

n_x < (1/2)(1 + µ_a^{−1}).   (4)
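The coherence in (3) and the threshold in (4) are straightforward to compute; here is a small sketch (the random unit-norm dictionary and all names are our own illustration):

```python
import numpy as np

def coherence(A):
    """mu_a from (3): largest |a_k^H a_l| over distinct unit-norm columns."""
    G = np.abs(A.conj().T @ A)
    np.fill_diagonal(G, 0.0)          # exclude the k = l terms
    return float(G.max())

rng = np.random.default_rng(0)
A = rng.standard_normal((32, 64))
A /= np.linalg.norm(A, axis=0)        # unit l2-norm columns, as assumed

mu_a = coherence(A)
# (4): (P0), BP, and OMP provably succeed whenever n_x < 0.5 * (1 + 1/mu_a)
max_nx = 0.5 * (1.0 + 1.0 / mu_a)
```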
B. Recovery in the presence of unstructured noise

Coherence-based recovery guarantees in the presence of unstructured (and deterministic) noise, i.e., for z = Ax + n, with no constraints imposed on n apart from ‖n‖_2 < ∞, were derived in [16]–[20] and the references therein. Specifically, it was shown in [16] that a suitably modified version of BP, referred to as BP denoising (BPDN), recovers an estimate x̂ satisfying ‖x − x̂‖_2 < C‖n‖_2 provided that (4) is met. Here, C > 0 depends on the coherence µ_a and on the sparsity level n_x of x. Note that the support set of the estimate x̂ may differ from that of x. Another result, reported in [17], states that OMP delivers the correct support set (but does not perfectly recover the nonzero entries of x) provided that

n_x < (1/2)(1 + µ_a^{−1}) − ‖n‖_2/(µ_a |x_min|)   (5)

where |x_min| denotes the absolute value of the component of x with smallest nonzero magnitude. The recovery condition (5) yields sensible results only if ‖n‖_2/|x_min| is small. Results similar to those reported in [17] were obtained in [18], [19]. Recovery guarantees in the case of stochastic noise n can be found in [19], [20]. We finally point out that perfect recovery of x is, in general, impossible in the presence of unstructured noise. In contrast, as we shall see below, perfect recovery is possible under structured noise according to (1).
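Condition (5) can be read as the noiseless threshold (4) reduced by a noise penalty; the following small sketch makes that split explicit (function name and numeric values are ours, for illustration only):

```python
def omp_support_threshold(mu_a, noise_norm, x_min):
    """RHS of (5): 0.5*(1 + 1/mu_a) - ||n||_2 / (mu_a * |x_min|)."""
    return 0.5 * (1.0 + 1.0 / mu_a) - noise_norm / (mu_a * abs(x_min))

# with n = 0 the bound falls back to the noiseless threshold (4)
noiseless = omp_support_threshold(0.25, 0.0, 1.0)   # 0.5 * (1 + 4) = 2.5
# the guarantee weakens quickly as ||n||_2 / |x_min| grows
noisy = omp_support_threshold(0.25, 0.5, 1.0)       # 2.5 - 2.0 = 0.5
```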
C. Recovery guarantees in the presence of structured noise

As outlined in the introduction, many practically relevant signal recovery problems can be formulated as (sparse) signal recovery from sparsely corrupted measurements, a problem that seems to have received comparatively little attention in the literature so far and does not appear to have been developed systematically.

A straightforward way leading to recovery guarantees in the presence of structured noise, as in (1), follows from rewriting (1) as

z = Ax + Be = Dw   (6)

with the concatenated dictionary D = [A B] and the stacked vector w = [x^T e^T]^T. This formulation allows us to invoke the recovery guarantee in (4) for the concatenated dictionary D, which delivers a sufficient condition for w (and hence, x and e) to be the unique solution of (P0) applied to z = Dw and for BP and OMP to deliver this solution [11], [12]. However, the so-obtained recovery condition

n_w = n_x + n_e < (1/2)(1 + µ_d^{−1})   (7)

with the dictionary coherence µ_d defined as

µ_d = max_{k,ℓ: k≠ℓ} |d_k^H d_ℓ|   (8)

ignores the structure of the recovery problem at hand, i.e., is agnostic to i) the fact that D consists of the dictionaries A and B with known coherence parameters µ_a and µ_b, respectively, and ii) knowledge about the support sets of x and/or e that may be available prior to recovery. As shown in Section IV, exploiting these two structural aspects of the recovery problem yields superior (i.e., less restrictive) recovery thresholds. Note that condition (7) guarantees perfect recovery of x (and e) independent of the ℓ2-norm of the noise vector, i.e., ‖Be‖_2 may be arbitrarily large. This is in stark contrast to the recovery guarantees for noisy measurements in [16] and (5) (originally reported in [17]).
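The structure-agnostic baseline (6)-(8) can be sketched as follows for the Fourier/identity pair, where µ_d works out to 1/√M (our own toy code, not from the paper):

```python
import numpy as np

M = 32
k = np.arange(M)
A = np.exp(-2j * np.pi * np.outer(k, k) / M) / np.sqrt(M)  # F_M
B = np.eye(M)
D = np.hstack((A, B))              # concatenated dictionary in (6)

G = np.abs(D.conj().T @ D)         # (8): mu_d = max off-diagonal |d_k^H d_l|
np.fill_diagonal(G, 0.0)
mu_d = G.max()                     # equals 1/sqrt(M) for this pair

# (7): recovery of w = [x^T e^T]^T is guaranteed when n_x + n_e stays below
threshold = 0.5 * (1.0 + 1.0 / mu_d)
```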
Special cases of the general setup (1), explicitly taking into account certain structural aspects of the recovery problem, were considered in [3], [14], [26]–[30]. Specifically, in [26] it was shown that for A = F_M, B = I_M, and knowledge of the support set of e, perfect recovery of the M-dimensional vector x is possible if

2 n_x n_e < M   (9)

where n_e = ‖e‖_0. In [27], [28], recovery guarantees based on the RIC of the matrix A for the case where B is an orthonormal basis (ONB), and where the support set of e is either known or unknown, were reported; these recovery guarantees are particularly handy when A is, for example, i.i.d. Gaussian [31], [32]. However, results for the case of A and B both general (and deterministic) dictionaries, taking into account prior knowledge about the support sets of x and e, seem to be missing in the literature.

|P| |Q| ≥ [(1 + µ_a)(1 − ε_P) − |P| µ_a]_+ [(1 + µ_b)(1 − ε_Q) − |Q| µ_b]_+ / µ_m²   (10)

Recovery guarantees for A i.i.d. non-zero-mean Gaussian, B = I_M, and the support sets of x and e unknown were reported in [29]. In [30], recovery guarantees under a probabilistic model on both x and e and for unitary A and B = I_M were reported, showing that x can be recovered perfectly with high probability (and independently of the ℓ2-norms of x and e). The problem of sparse-signal recovery in the presence of impulse noise (i.e., B = I_M) was considered in [3], where a particular nonlinear measurement process combined with a non-convex program for signal recovery was proposed. In [14], signal recovery in the presence of impulse noise based on ℓ1-norm minimization was investigated. The setup in [14], however, differs considerably from the one considered in this paper, as A in [14] needs to be tall (i.e., M > N_a) and the vector x to be recovered is not necessarily sparse.
We conclude this literature overview by noting that the
present paper is inspired by [26]. Specifically, we note that
the recovery guarantee (9) reported in [26] is obtained from
an uncertainty relation that puts limits on how sparse a given
signal can simultaneously be in the Fourier basis and in the
identity basis. Inspired by this observation, we start our discus-
sion by presenting an uncertainty relation for pairs of general
dictionaries, which forms the basis for the recovery guarantees
reported later in this paper.
III. A GENERAL UNCERTAINTY RELATION FOR ε-CONCENTRATED VECTORS

We next present a novel uncertainty relation, which extends the uncertainty relation in [33, Lem. 1] for pairs of general dictionaries to vectors that are ε-concentrated rather than perfectly sparse. As shown in Section IV, this extension constitutes the basis for the derivation of recovery guarantees for BP.
A. The uncertainty relation

Define the mutual coherence between the dictionaries A and B as

µ_m = max_{k,ℓ} |a_k^H b_ℓ|.

Furthermore, we will need the following definition, which appeared previously in [26].

Definition 1: A vector r ∈ ℂ^{N_r} is said to be ε_R-concentrated to the set R ⊆ {1, ..., N_r} if ‖P_R r‖_1 ≥ (1 − ε_R)‖r‖_1, where ε_R ∈ [0, 1]. We say that the vector r is perfectly concentrated to the set R and, hence, |R|-sparse if P_R r = r, i.e., if ε_R = 0.
We can now state the following uncertainty relation for pairs of general dictionaries and for ε-concentrated vectors.

Theorem 1: Let A ∈ ℂ^{M×N_a} be a dictionary with coherence µ_a, B ∈ ℂ^{M×N_b} a dictionary with coherence µ_b, and denote the mutual coherence between A and B by µ_m. Let s be a vector in ℂ^M that can be represented as a linear combination of columns of A and, similarly, as a linear combination of columns of B. Concretely, there exists a pair of vectors p ∈ ℂ^{N_a} and q ∈ ℂ^{N_b} such that s = Ap = Bq (we exclude the trivial case where p = 0_{N_a} and q = 0_{N_b}).¹ If p is ε_P-concentrated to P and q is ε_Q-concentrated to Q, then (10) holds.
Proof: The proof follows closely that of [33, Lem. 1], which applies to perfectly concentrated vectors p and q. We therefore only summarize the modifications to the proof of [33, Lem. 1]. Instead of using Σ_{p∈P} |[p]_p| = ‖p‖_1 to arrive at [33, Eq. 29]

[(1 + µ_a) − |P| µ_a]_+ ‖p‖_1 ≤ |P| µ_m ‖q‖_1,

we invoke Σ_{p∈P} |[p]_p| ≥ (1 − ε_P)‖p‖_1 to arrive at the following inequality, valid for ε_P-concentrated vectors p:

[(1 + µ_a)(1 − ε_P) − |P| µ_a]_+ ‖p‖_1 ≤ |P| µ_m ‖q‖_1.   (11)

Similarly, ε_Q-concentration, i.e., Σ_{q∈Q} |[q]_q| ≥ (1 − ε_Q)‖q‖_1, is used to replace [33, Eq. 30] by

[(1 + µ_b)(1 − ε_Q) − |Q| µ_b]_+ ‖q‖_1 ≤ |Q| µ_m ‖p‖_1.   (12)

The uncertainty relation (10) is then obtained by multiplying (11) and (12) and dividing the resulting inequality by ‖p‖_1 ‖q‖_1.
In the case where both p and q are perfectly concentrated, i.e., ε_P = ε_Q = 0, Theorem 1 reduces to the uncertainty relation reported in [33, Lem. 1], which we restate next for the sake of completeness.

Corollary 2 ([33, Lem. 1]): If P = supp(p) and Q = supp(q), the following holds:

|P| |Q| ≥ [1 − µ_a(|P| − 1)]_+ [1 − µ_b(|Q| − 1)]_+ / µ_m².   (13)

As detailed in [33], [34], the uncertainty relation in Corollary 2 generalizes the uncertainty relation for two orthonormal bases (ONBs) found in [23]. Furthermore, it extends the uncertainty relations provided in [35] for pairs of square dictionaries (having the same number of rows and columns) to pairs of general dictionaries A and B.
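Corollary 2 can be sanity-checked numerically. For the Fourier/identity pair we have µ_a = µ_b = 0 and µ_m = 1/√M, so (13) says that the supports of the two exact representations of any nonzero signal must satisfy |P||Q| ≥ M. A quick check with our own toy signal:

```python
import numpy as np

M = 32
k = np.arange(M)
A = np.exp(-2j * np.pi * np.outer(k, k) / M) / np.sqrt(M)  # F_M (an ONB)
B = np.eye(M)                                               # identity ONB
mu_m = np.max(np.abs(A.conj().T @ B))                       # = 1/sqrt(M)

rng = np.random.default_rng(2)
s = rng.standard_normal(M)       # generic nonzero signal
p = A.conj().T @ s               # s = A p, since A is unitary
q = B.conj().T @ s               # s = B q
P = int(np.count_nonzero(np.abs(p) > 1e-12))
Q = int(np.count_nonzero(np.abs(q) > 1e-12))
# (13) with mu_a = mu_b = 0 reduces to |P| * |Q| >= 1/mu_m^2 = M
```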
B. Tightness of the uncertainty relation

In certain special cases it is possible to find signals that satisfy the uncertainty relation (10) with equality. As in [26], consider A = F_M and B = I_M, so that µ_m = 1/√M, and define the comb signal containing equidistant spikes of unit height as

[δ_t]_ℓ = 1 if (ℓ − 1) mod t = 0, and 0 otherwise,

¹ The uncertainty relation continues to hold if either p = 0_{N_a} or q = 0_{N_b}, but does not apply to the trivial case p = 0_{N_a} and q = 0_{N_b}. In all three cases we have s = 0_M.

where we shall assume that t divides M. It can be shown that the vectors p = δ_√M and q = δ_√M, both having √M nonzero entries, satisfy F_M p = I_M q. If P = supp(p) and Q = supp(q), the vectors p and q are perfectly concentrated to P and Q, respectively, i.e., ε_P = ε_Q = 0. Since |P| = |Q| = √M and µ_m = 1/√M, it follows that |P||Q| = 1/µ_m² = M and, hence, p = q = δ_√M satisfies (10) with equality.
We will next show that for pairs of general dictionaries A and B, finding signals that satisfy the uncertainty relation (10) with equality is NP-hard. For the sake of simplicity, we restrict ourselves to the case P = supp(p) and Q = supp(q), which implies |P| = ‖p‖_0 and |Q| = ‖q‖_0. Next, consider the problem

(U0) minimize ‖p‖_0 ‖q‖_0 subject to Ap = Bq, ‖p‖_0 ≥ 1, ‖q‖_0 ≥ 1.

Since we are interested in the minimum of ‖p‖_0 ‖q‖_0 for nonzero vectors p and q, we imposed the constraints ‖p‖_0 ≥ 1 and ‖q‖_0 ≥ 1 to exclude the case where p = 0_{N_a} and/or q = 0_{N_b}. Now, it follows that for the particular choice B = z ∈ ℂ^M, and hence q = q ∈ ℂ\{0} (note that we exclude the case q = 0 as a consequence of the requirement ‖q‖_0 ≥ 1), the problem (U0) reduces to

(U0′) minimize ‖x‖_0 subject to Ax = z

where x = p/q. However, as (U0′) is equivalent to (P0), which is NP-hard [36], in general, we can conclude that finding a pair p and q satisfying the uncertainty relation (10) with equality is NP-hard.
IV. RECOVERY OF SPARSELY CORRUPTED SIGNALS

Based on the uncertainty relation in Theorem 1, we next derive conditions that guarantee perfect recovery of x (and of e, if appropriate) from the (sparsely corrupted) measurement z = Ax + Be. These conditions will be seen to depend on the number of nonzero entries of x and e, and on the coherence parameters µ_a, µ_b, and µ_m. Moreover, in contrast to (5), the recovery conditions we find will not depend on the ℓ2-norm of the noise vector ‖Be‖_2, which is hence allowed to be arbitrarily large. We consider the following cases: I) The support sets of both x and e are known (prior to recovery), II) the support set of only x or only e is known, III) the number of nonzero entries of only x or only e is known, and IV) nothing is known about x and e. The uncertainty relation in Theorem 1 is the basis for the recovery guarantees in all four cases considered. To simplify notation, motivated by the form of the right-hand side (RHS) of (13), we define the function

f(u, v) = [1 − µ_a(u − 1)]_+ [1 − µ_b(v − 1)]_+ / µ_m².

In the remainder of the paper, X denotes supp(x) and E stands for supp(e). We furthermore assume that the dictionaries A and B are known perfectly to the recovery algorithms. Moreover, we assume that² µ_m > 0.

² If µ_m = 0, the space spanned by the columns of A is orthogonal to the space spanned by the columns of B. This makes the separation of the components Ax and Be given z straightforward. Once this separation is accomplished, x can be recovered from Ax using (P0), BP, or OMP, if (4) is satisfied.
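The function f defined above is straightforward to evaluate; here is our own small sketch:

```python
import numpy as np

def f(u, v, mu_a, mu_b, mu_m):
    """f(u, v) = [1 - mu_a (u-1)]_+ [1 - mu_b (v-1)]_+ / mu_m^2."""
    pos = lambda t: max(t, 0.0)      # [x]_+ = max{x, 0}
    return pos(1.0 - mu_a * (u - 1)) * pos(1.0 - mu_b * (v - 1)) / mu_m ** 2

# Fourier/identity pair: mu_a = mu_b = 0 and mu_m = 1/sqrt(M), so f(u, v) = M
M = 64
val = f(3, 5, 0.0, 0.0, 1.0 / np.sqrt(M))
```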
A. Case I: Knowledge of X and E

We start with the case where both X and E are known prior to recovery. The values of the nonzero entries of x and e are unknown. This scenario is relevant, for example, in applications requiring recovery of clipped band-limited signals with known spectral support X. Here, we would have A = F_M, B = I_M, and E can be determined as follows: Compare the measurements [z]_i, i = 1, ..., M, to the clipping threshold a; if |[z]_i| = a, add the corresponding index i to E.

Recovery of x from z is then performed as follows. We first rewrite the input-output relation in (1) as

z = A_X x_X + B_E e_E = D_{X,E} s_{X,E}

with the concatenated dictionary D_{X,E} = [A_X B_E] and the stacked vector s_{X,E} = [x_X^T e_E^T]^T. Since X and E are known, we can recover the stacked vector s_{X,E} perfectly, and hence the nonzero entries of both x and e, if the pseudo-inverse D†_{X,E} exists. In this case, we can obtain s_{X,E} as

s_{X,E} = D†_{X,E} z.   (14)

The following theorem states a sufficient condition for D_{X,E} to have full (column) rank, which implies existence of the pseudo-inverse D†_{X,E}. This condition depends on the coherence parameters µ_a, µ_b, and µ_m of the involved dictionaries A and B, and on X and E through the cardinalities |X| and |E|, i.e., the number of nonzero entries in x and e, respectively.

Theorem 3: Let z = Ax + Be with X = supp(x) and E = supp(e). Define n_x = ‖x‖_0 and n_e = ‖e‖_0. If

n_x n_e < f(n_x, n_e),   (15)

then the concatenated dictionary D_{X,E} = [A_X B_E] has full (column) rank.

Proof: See Appendix A.
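The Case I procedure (14) can be sketched end-to-end for the Fourier/identity pair; the supports and values below are our own toy choices. Here n_x n_e = 6 < M = 32 (which equals f(n_x, n_e) for this pair, since µ_a = µ_b = 0 and µ_m = 1/√M), so Theorem 3 guarantees full column rank of D_{X,E} and the pseudo-inverse recovers x and e exactly, no matter how large e is:

```python
import numpy as np

M = 32
k = np.arange(M)
A = np.exp(-2j * np.pi * np.outer(k, k) / M) / np.sqrt(M)  # F_M
B = np.eye(M)

X = [2, 11, 20]                                  # known signal support
E = [5, 7]                                       # known noise support
x = np.zeros(M, dtype=complex)
x[X] = [1.0, -2.0, 0.5]
e = np.zeros(M, dtype=complex)
e[E] = [30.0, -40.0]                             # large sparse corruption
z = A @ x + B @ e                                # measurement (1)

D = np.hstack((A[:, X], B[:, E]))                # D_{X,E} = [A_X B_E]
s = np.linalg.pinv(D) @ z                        # (14): s_{X,E} = D^dagger z
x_hat = np.zeros(M, dtype=complex)
x_hat[X] = s[:len(X)]
e_hat = np.zeros(M, dtype=complex)
e_hat[E] = s[len(X):]
```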
For the special case A = F_M and B = I_M (so that µ_a = µ_b = 0 and µ_m = 1/√M), the recovery condition (15) reduces to n_x n_e < M, a result obtained previously in [26]. Tightness of (15) can be established by noting that the pairs x = λδ_√M, e = (1 − λ)δ_√M with λ ∈ (0, 1) and x′ = λ′δ_√M, e′ = (1 − λ′)δ_√M with λ′ ≠ λ and λ′ ∈ (0, 1) both satisfy (15) with equality and lead to the same measurement outcome z = F_M x + e = F_M x′ + e′ [34].
It is interesting to observe that Theorem 3 yields a sufficient condition on n_x and n_e for any (M − n_e) × n_x submatrix of A to have full (column) rank. To see this, consider the special case B = I_M and, hence, D_{X,E} = [A_X I_E]. Condition (15) characterizes pairs (n_x, n_e) for which all matrices D_{X,E} with n_x = |X| and n_e = |E| are guaranteed to have full (column) rank. Hence, the submatrix consisting of all rows of A_X with row index in E^c must have full (column) rank as well. Since the result holds for all support sets X and E with |X| = n_x and |E| = n_e, all possible (M − n_e) × n_x submatrices of A must have full (column) rank.
B. Case II: Only X or only E is known
Next, we find recovery guarantees for the case where either
only X or only E is known prior to recovery.
