Fast Non-Negative Orthogonal Matching Pursuit
Mehrdad Yaghoobi, Di Wu and Mike E. Davies
Institute for Digital Communications, the University of Edinburgh, EH9 3JL, UK
{m.yaghoobi-vaighan, d.wu and mike.davies}@ed.ac.uk
Abstract—One of the important classes of sparse signals is the non-negative signals. Many algorithms have already been proposed to recover such non-negative representations, where greedy and convex relaxed algorithms are among the most popular methods. The greedy techniques have been modified to incorporate the non-negativity of the representations. One such modification has been proposed for Orthogonal Matching Pursuit (OMP), which first chooses positive coefficients and uses a non-negative optimisation technique as a replacement for the orthogonal projection onto the selected support. Besides the extra computational cost of the optimisation program, it does not benefit from the fast implementation techniques of OMP, which are based on matrix factorisations. We here first investigate the problem of positive representation using pursuit algorithms. We then describe a new implementation, which can fully incorporate the positivity constraint of the coefficients throughout the selection stage of the algorithm. As a result, we present a novel fast implementation of the Non-Negative OMP, which is based on the QR decomposition and an iterative coefficient update. We empirically show that such a modification can easily accelerate the implementation by a factor of ten in a reasonably sized problem.
Index Terms—Matching Pursuit, Orthogonal Matching Pursuit, Non-negative Sparse Approximations, QR Matrix Factorisation, Non-negative Least Squares, Spectral Decomposition
I. INTRODUCTION
If the signal of interest is $y \in \mathbb{R}^{M}$ and a dictionary of elementary functions $\Phi \in \mathbb{R}^{M \times N}$ is given, the linear sparse approximation can be formulated as finding the sparsest $x \in \mathbb{R}^{N}$, $M < N$, i.e. the one having the minimum number of non-zero elements, such that

$y \approx \Phi x.$    (1)
The greedy sparse approximation algorithms are generally computationally low-cost algorithms, suitable for real-time and large-scale sparse approximations. One simple greedy algorithm is called Matching Pursuit (MP) [1], which builds the sparse representation of a signal by iteratively adding the most correlated element of the dictionary, called an atom, to the set of selected elements. A disadvantage of MP is that the representation found by the algorithm is not the best representation using the selected atoms. It may also reselect already selected atoms in later iterations, which slows down the convergence of the algorithm. The OMP algorithm [2], [3] was proposed to compensate for these issues, with some extra computation cost, which is mainly due to the orthogonal projection of the signal $y$ onto the selected support $s$. Such a projection can be found by

$\tilde{x}_s := \operatorname*{argmin}_{x_s} \| y - \Phi_s x_s \|_2,$    (2)
This work was supported by EPSRC grants EP/K014277/1 and the MOD
University Defence Research Collaboration in Signal Processing.
where $\Phi_s$ and $x_s$ are respectively the sub-dictionary and the coefficient vector restricted to the support $s$. As the residual signal, after pruning out the contribution of the current atoms, is orthogonal to the selected atoms, these atoms would not be selected in future iterations.
There are many applications for which not only are the coefficient vectors sparse, but they are also non-negative. Spectral and multi-spectral unmixing [4], [5] and microarray analysis [6] are only a few examples of such applications. Raman spectral deconvolution [7] and multi-touch sensing [8] have been our initial motivation for the current work.
Some modifications have been proposed for the MP and OMP algorithms to incorporate the non-negativity of the coefficients [9]. The only necessary modification for MP is to choose the most "positively" correlated atom at each iteration of the algorithm. This means that we only select atoms with positive coefficients. In OMP, we have a projection step at each iteration, which may produce some negative coefficients. A sub-optimal approach, which guarantees the non-negativity of the coefficients, is to use a non-negative least squares (NNLS) program to refine the selected coefficients at each iteration [9], as follows,

$\operatorname*{argmin}_{x_s \succeq 0} \| y - \Phi_s x_s \|_2,$

where $\succeq$ is the component-wise greater-than-or-equal operator. A pseudo-code of Canonical Non-Negative OMP (CNNOMP) is presented in Algorithm 1. Let $r_k := y - \Phi x_k$ be the $k$th signal residual. In this algorithm, the sub-optimality of such an approach is due to the fact that the positive coefficient selected at an iteration may force already selected coefficients to become zero in order to remain in the admissible set, which reduces the efficiency of the algorithm. In other words, this is caused by treating the selection and the NNLS as two separate tasks. We show here that there is an alternative approach which combines these two steps. As a result of such a combination, we can implement the algorithm in an efficient way, which has similarities with the canonical fast OMP implementations, i.e. OMP with QR factorisation [10]-[13].
To introduce our algorithm, we first need to briefly explain the fast factorisation-based OMP, which can be found in Section II. In Section III, we explain how the non-negativity constraint of the coefficients prevents us from using the canonical OMP, and how we can modify the algorithm to not only have a more intuitive atom selection step, but also a lower computational complexity. We then show that the proposed method has a much faster implementation, in the simulation part of Section IV.

Algorithm 1 Canonical Non-Negative Orthogonal Matching Pursuit
1: initialisation: s = ∅, k = 0, x = 0 and r_0 = y
2: while k < K & max(Φ^T r_k) > 0 do
3:    (ζ, ι) ← max(Φ^T r_k)
4:    s ← s ∪ ι
5:    x_s ← argmin_{θ ⪰ 0} ‖y − Φ_s θ‖_2
6:    r_{k+1} ← y − Φ_s x_s
7:    k ← k + 1
8: end while
9: x|_s ← x_s
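The following Python sketch mirrors Algorithm 1, using scipy.optimize.nnls for the non-negative projection of line 5; the function name, the stopping details and the use of SciPy are our own implementation choices, not part of the letter.

```python
import numpy as np
from scipy.optimize import nnls

def cnnomp(Phi, y, K):
    """Canonical Non-Negative OMP (Algorithm 1), illustrative sketch."""
    N = Phi.shape[1]
    support = []                            # selected atom indices (the set s)
    x = np.zeros(N)
    x_s = np.zeros(0)
    r = y.copy()                            # r_0 = y
    while len(support) < K:
        c = Phi.T @ r                       # correlations Phi^T r_k
        if c.max() <= 0:                    # no positively correlated atom left
            break
        support.append(int(np.argmax(c)))   # lines 3-4: select atom, grow support
        x_s, _ = nnls(Phi[:, support], y)   # line 5: NNLS on the sub-dictionary
        r = y - Phi[:, support] @ x_s       # line 6: residual update
    x[support] = x_s                        # line 9: embed x_s into the full vector
    return x
```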
II. FAST ORTHOGONAL MATCHING PURSUIT (FOMP)
In the standard implementation of OMP, we solve (2) at each iteration. It can be solved by calculating the Moore-Penrose pseudo-inverse of $\Phi_s$, i.e. $\Phi_s^{\dagger}$, and $\tilde{x}_s = \Phi_s^{\dagger} y$. At iteration $k$, $|s| = k$ and the calculation of $\Phi_s^{\dagger}$ needs a matrix inversion of size $k \times k$. When $k$ increases, this becomes computationally very expensive, i.e. $O(k^3)$. To combat such a computational burden, the incorporation of a matrix decomposition of the selected sub-dictionary has been proposed, where QR factorisation is among the most effective techniques. Let $\Phi_k = \Psi_k R_k$ be the QR factorisation of the $k$ selected atoms of the dictionary $\Phi$, where $\Psi_k$ is column orthonormal and $R_k$ is upper-triangular with positive elements on its main diagonal. With some abuse of notation, we assume that in iteration $k$ the columns of $\Phi_k$ are sorted based on the iteration number and $\phi_i$, for $1 \le i \le k$, is the $i$th selected atom. As the column spans of $\Phi_k$ and $\Psi_k$ are equivalent, we can simply solve

$\tilde{z}_k := \operatorname*{argmin}_{z_k} \| y - \Psi_k z_k \|_2,$

instead of solving (2), with $\Phi_s \equiv \Phi_k$ and $\tilde{x}_s \equiv x_k$, and find the solution by $\tilde{x}_k = R_k^{-1} \tilde{z}_k$. As $\Psi_k$ is orthonormal, $\tilde{z}_k = \Psi_k^T y$, which can be implemented quickly. The algorithm would not be fast if the calculations of $\Psi$, $R$ and $R^{-1}$ could not be done efficiently. There have been some fast realisations of $\Psi_k$ and $R_k$ based on iterative updates. To derive an update formula for $\Psi_k$, we recall the Gram-Schmidt (GS) orthogonalisation procedure. As the first $k$ columns of $\Phi_{k+1}$ have the QR factorisation $\Psi_k R_k$, we only need to continue the GS procedure to find the last column of $\Psi_{k+1}$. In this setting, we have

$\Psi_{k+1} = [\Psi_k \;\; \psi_{k+1}],$

where $\psi_{k+1} = q_{k+1}/\|q_{k+1}\|_2$ and $q_{k+1} = (I - \Psi_k \Psi_k^T)\phi_{k+1}$. The operator $\Psi_k \Psi_k^T$ projects its operand onto the span of $\Psi_k$, and $(I - \Psi_k \Psi_k^T)$ finds the component orthogonal to the span of $\Psi_k$. We normalise such an orthogonal element to find $\Psi_{k+1}$. A similar approach can be used to calculate $R_{k+1}$ using $R_k$ as follows:

$R_{k+1} = \begin{bmatrix} R_k & \nu \\ 0 & \mu \end{bmatrix},$    (3)

where $\nu = \Psi_k^T \phi_{k+1}$ and $\mu = \|q_{k+1}\|_2$. If we calculate $R^{-1}$ in the same way, we have the following update formula,

$R_{k+1}^{-1} = \begin{bmatrix} R_k^{-1} & -\frac{R_k^{-1}\nu}{\mu} \\ 0 & \frac{1}{\mu} \end{bmatrix}.$    (4)

The OMP algorithm with this implementation is faster than the standard pseudo-inverse implementation, for medium to large $k$'s.
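A minimal Python/NumPy sketch of this fast OMP follows; it grows $\Psi_k$ and $R_k^{-1}$ one column at a time using (3) and (4) and computes $x$ only once at the end. Function and variable names are ours, and the selection rule is the standard absolute-correlation criterion.

```python
import numpy as np

def fast_omp(Phi, y, K):
    """QR-based OMP: grow Psi_k and R_k^{-1} per (3)-(4); compute x once at the end."""
    M, N = Phi.shape
    support, z = [], []
    Psi = np.zeros((M, 0))                    # orthonormal basis of the selected atoms
    R_inv = np.zeros((0, 0))                  # inverse of the upper-triangular factor
    r = y.copy()
    for _ in range(K):
        i = int(np.argmax(np.abs(Phi.T @ r))) # standard OMP atom selection
        phi = Phi[:, i]
        nu = Psi.T @ phi                      # nu = Psi_k^T phi_{k+1}
        q = phi - Psi @ nu                    # q = (I - Psi_k Psi_k^T) phi_{k+1}
        mu = np.linalg.norm(q)                # mu = ||q||_2
        psi = q / mu
        # eq. (4): update R^{-1} without ever inverting a matrix
        top = np.hstack([R_inv, (-R_inv @ nu / mu).reshape(-1, 1)])
        bottom = np.append(np.zeros(len(support)), 1.0 / mu)
        R_inv = np.vstack([top, bottom])
        Psi = np.hstack([Psi, psi.reshape(-1, 1)])
        z.append(psi @ r)                     # new coefficient in the Psi basis
        r = r - z[-1] * psi                   # residual update
        support.append(i)
    x = np.zeros(N)
    x[support] = R_inv @ np.array(z)          # x = R_K^{-1} z_K
    return x
```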
Algorithm 2 Fast Non-Negative Orthogonal Matching Pursuit
1: initialisation: s = ∅, z^0 = ∅, k = 0 and r_0 = y
2: while k < K & max(Φ^T r_k) > 0 do
3:    (ζ, ι) ← sort↓(Φ^T r_k)
4:    p ← 1
5:    p_c ← 1
6:    z_c ← 0
7:    while ¬Terminate & p < N do
8:       z_t from (5)
9:       z ← ψ_{ι(p)}^T r_k : ψ_{ι(p)} = q/‖q‖_2, q = (I − Ψ_k Ψ_k^T) φ_{ι(p)}
10:      Update based on Table I
11:   end while
12:   s ← s ∪ ι(p)
13:   Update Ψ and R^{-1}
14:   z^{k+1} ← [z^k ; z_{k+1}]
15:   r_{k+1} ← r_k − z_{k+1} ψ_{k+1}
16:   k ← k + 1
17: end while
18: output: x|_s ← R^{-1} z^K
If                       | Then
0 < z ≤ z_t, z > z_c     | z_{k+1} ← z, Terminate
0 < z ≤ z_t, z ≤ z_c     | z_{k+1} ← z_c, p ← p_c, Terminate
z > z_c ≥ z_t            | p ← p + 1
z_c ≥ z > z_t            | z_{k+1} ← z_c, p ← p_c, Terminate
z > z_t > z_c            | z_c ← z_t, p_c ← p, p ← p + 1
z < 0                    | Terminate
TABLE I
DECISION RULES TO GUARANTEE POSITIVITY OF THE COEFFICIENTS.
After a close look at this implementation, we realise that we do not need to keep track of the $x_k$'s in the intermediate iterations! We only need to calculate $z_k$ at each iteration and find $x$ after the last iteration $K$, i.e. $x = R_K^{-1} z_K$, where $z_K$ is $z$ at the $K$th iteration. It is also worth mentioning that we do not keep track of $R_k$ either; only updating $\Psi_k$ and $R_k^{-1}$ is necessary.

When we choose positively correlated atoms and then find $z_k$, we may get negative elements in the corresponding $x_k$. This fact does not allow us to directly use the FOMP technique in a non-negative setting.
III. FAST NON-NEGATIVE ORTHOGONAL MATCHING PURSUIT (FNNOMP)
Canonical NNOMP chooses the atom which maximises $\Phi^T r_k$ in the $k$th iteration. In the first iteration, we do not need any orthogonalisation and we have $\phi_1 = \psi_1$ and $R = [\,1\,]$. In the $k$th iteration, let the best approximation of $y$, with non-negative coefficients and using $\Phi_k$, be $\sum_{i=1}^{k} x_i \phi_i = \sum_{i=1}^{k} z_i \psi_i$. In the $(k+1)$th iteration, we have

$\sum_{i=1}^{k+1} z_i \psi_i = \sum_{i=1}^{k} z_i \psi_i + z_{k+1} \psi_{k+1} = \sum_{i=1}^{k} x_i \phi_i + z_{k+1} \psi_{k+1}.$

As $\psi_{k+1}$ lives in the span of the non-redundant set $\{\phi_j\}_{j \in [1, k+1]}$, $\psi_{k+1} = \sum_{j=1}^{k+1} \gamma_j \phi_j$ for some unique $\gamma_j$. We then have

$\sum_{i=1}^{k+1} z_i \psi_i = \sum_{i=1}^{k} x_i \phi_i + \sum_{j=1}^{k+1} z_{k+1} \gamma_j \phi_j = \sum_{i=1}^{k} (x_i + z_{k+1} \gamma_i) \phi_i + z_{k+1} \gamma_{k+1} \phi_{k+1}.$

As $z_{k+1} \gamma_{k+1}$ is always positive, we only need to assure that $x_i + z_{k+1} \gamma_i \ge 0$, or

$z_{k+1} \le z_t := \begin{cases} \min_{i,\, \gamma_i < 0} \; -\dfrac{x_i}{\gamma_i} & \exists\, i,\ \gamma_i < 0 \\ +\infty & \text{otherwise}. \end{cases}$    (5)
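For illustration, the threshold $z_t$ in (5) can be computed directly from the current coefficients and the expansion coefficients $\gamma$; the small NumPy helper below (names are ours) is a minimal sketch.

```python
import numpy as np

def z_threshold(x_s, gamma):
    """Upper bound z_t of (5). x_s: the k current coefficients; gamma: the first k
    entries of the expansion of psi_{k+1} in the selected atoms phi_i."""
    neg = gamma < 0
    if not np.any(neg):
        return np.inf                 # no negative gamma_i: z_{k+1} is unconstrained
    return np.min(-x_s[neg] / gamma[neg])
```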
To assure that the $x_i$'s are all non-negative, the $z_i$'s should comply with the condition of (5). We then choose the atom whose corresponding $z_{k+1}$, or its value shrunk to the upper bound $z_t$ of (5) if it exceeds that bound, has the largest value. We therefore need to keep a record of the best possible solution when the most positively correlated atom has a $z_k$ that does not comply with (5). If we call such a possible solution $z_c$, let $(\zeta, \iota) = \mathrm{sort}_{\downarrow}(\Phi^T r_k)$, where $\mathrm{sort}_{\downarrow}$ is the sorting operator in descending order, and let $z$ be the current candidate, starting with $z = \zeta(p)$, $p = 1$; then, in an internal loop in the $k$th iteration, we make the decision based on the rules of Table I.
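The rules of Table I can be read as the following inner-loop sketch (Python/NumPy). The function layout, the fallback return value and the computation of $\gamma$, which anticipates (6) below, are our own choices and are only meant as an illustration under those assumptions.

```python
import numpy as np

def select_atom(Phi, r, Psi, R_inv, x_s):
    """Inner loop of FNNOMP following Table I (illustrative sketch).
    Returns (atom index, coefficient z_{k+1} in the Psi basis); the index is
    None if no admissible atom was found."""
    order = np.argsort(-(Phi.T @ r))          # sort correlations in descending order
    best_idx, z_c = None, 0.0                 # recorded candidate (p_c, z_c)
    for j in order:
        phi = Phi[:, j]
        nu = Psi.T @ phi
        q = phi - Psi @ nu
        mu = np.linalg.norm(q)
        z = (q / mu) @ r                      # candidate coefficient z = psi^T r_k
        if z < 0:                             # rule 6: terminate
            break
        gamma = -R_inv @ nu / mu              # first k entries of gamma, as in (6)
        neg = gamma < 0
        # z_t as in (5) (cf. the z_threshold sketch above)
        z_t = np.min(-x_s[neg] / gamma[neg]) if np.any(neg) else np.inf
        if z <= z_t:                          # admissible without truncation
            if z > z_c:                       # rule 1: accept this atom
                return j, z
            break                             # rule 2: fall back to the recorded candidate
        if z_t > z_c:                         # rule 5: record the truncated candidate
            best_idx, z_c = j, z_t
        elif z <= z_c:                        # rule 4: recorded candidate wins
            break
        # rule 3 (z > z_c >= z_t): keep scanning
    return best_idx, z_c
```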
After the termination of the inner loop, we add $\iota(p)$ to the support $s$ and update $\Psi_k$ and $R_k^{-1}$. The overall steps of the algorithm are presented in Algorithm 2. The algorithm consists of two loops. The external loop terminates in a finite number of iterations as $K$ is finite. We only need to check the termination of the inner loop, which is what makes the fast NNOMP different from canonical NNOMP.
Lemma 1. The inner loop of Algorithm 2 converges in a finite number of iterations.

Proof: Each updating condition of Table I either outputs a "Terminate" or increases $p$. As the dictionary has a finite dimension $N$, the inner loop terminates when a "Terminate" signal occurs or $p = N$.

While there might be worst cases for which the inner loop has to check all elements before termination, our observation is that this loop terminates after only a few iterations.
For a fast implementation, we have to calculate $\gamma$ efficiently. A careful investigation shows that it can be calculated using $R_k^{-1}$, which is already kept in memory. In this setting, we can easily check that $\gamma$ is the last column of $R_{k+1}^{-1}$, if $\phi_{k+1}$ is the selected atom, i.e.,

$\gamma = \left[ -\frac{R_k^{-1} \nu}{\mu};\ \frac{1}{\mu} \right],$    (6)

where $\nu$ and $\mu$ are the same as defined after (3). It is worth mentioning that we do not update $R_k^{-1}$ at this point, as we are not yet sure that this is the most appropriate atom.
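A quick numerical sanity check of (6), with random data of our own choosing, confirms that this vector indeed expands $\psi_{k+1}$ in the selected atoms:

```python
import numpy as np

rng = np.random.default_rng(0)
M, k = 8, 3
Phi = rng.standard_normal((M, k + 1))
Phi /= np.linalg.norm(Phi, axis=0)            # unit-norm atoms

Psi, R = np.linalg.qr(Phi[:, :k])             # QR of the first k selected atoms
R_inv = np.linalg.inv(R)
nu = Psi.T @ Phi[:, k]                        # nu, as defined after (3)
q = Phi[:, k] - Psi @ nu
mu = np.linalg.norm(q)                        # mu = ||q||_2

gamma = np.concatenate([-R_inv @ nu / mu, [1.0 / mu]])   # eq. (6)
psi_next = q / mu                             # the new orthonormal column
assert np.allclose(Phi[:, :k + 1] @ gamma, psi_next)     # psi_{k+1} = sum_j gamma_j phi_j
```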
Based on (5), we only need to check this condition if some $\gamma_i$'s are negative. A question may be: does this really happen? The answer to this question is important, as otherwise the proposed algorithm would be the same as FOMP. We have demonstrated a simple case in Figure 1 to show that this actually happens. In this figure, we assume that $\phi_1$ has already been selected as the first atom and the next selection is $\phi_2$. In this case, it is easy to check that $\psi_2 = -\sqrt{3}\,\phi_1 + 2\phi_2$. This simple example justifies the use of FNNOMP, which guarantees the positivity of the coefficients at each iteration of the algorithm.

Fig. 1. A simple example in which $\gamma$ has negative elements, for $\gamma_2$.
A. Computational Complexity
Having a structured dictionary may help us to have a fast dictionary-coefficient multiplication. However, the analysis here is based on a dictionary without such a fast multiplication, as this applies to many applications with non-negative sparsity models. An extension to fast dictionaries is also possible, if we accurately know the complexity of the dictionary multiplication.
The new implementation has a significantly different computational complexity to CNNOMP. The CNNOMP of [9] has an internal non-negative least squares optimisation step, which has an asymptotic computational complexity of $O(LMk^2)$, where $k$ is the iteration number and $L$ is the number of internal iterations [14]. $L$ is normally of the order of $k$. This optimisation step is repeated at each iteration, which makes the total cost $O(LMK^3)$.
In the fast NNOMP, the inner loop has some comparison operations from Table I, which have negligible computational cost. The other steps are the calculations of $z_t$ and $z$, which respectively cost $O(M(k+1) + (k+1)^2 + 1)$ and $O(2M(k+1) + 1)$. This inner loop repeats a few times, say $P$. Our observation is that $P$ does not scale proportionally with the size of the problem. The total cost of repeating the inner loop will be $O(P(3M(K+1) + K^2))$. Another extra computational cost of FNNOMP is the sorting of the coefficients, which is $O(N \log(P))$ in the worst case for finding the sorted $P$ largest coefficients. As we need to sort the coefficients in each iteration, the total cost will be $O(KN \log(P))$. The inversion of the matrix $R$, which is necessary to find $x$ at the end of the algorithm [10], can be avoided using an iterative update of $R^{-1}$.
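One possible way to realise this partial sort cheaply, an implementation choice of ours rather than something specified in the letter, is to pre-select the $P$ largest correlations with numpy.argpartition and sort only the short list, which keeps the per-iteration cost close to linear in $N$ when $P$ is small:

```python
import numpy as np

def top_p_descending(c, P):
    """Indices of the P largest entries of c, sorted in descending order of value."""
    idx = np.argpartition(-c, P)[:P]     # O(N) pre-selection of the P largest
    return idx[np.argsort(-c[idx])]      # sort only the short list
```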
If we calculate the computational cost of each step of the two algorithms, we can derive Table II. The total costs of the two algorithms, after ignoring some low-complexity terms, are presented in the last row of this table. As can be seen, the complexity of CNNOMP is of order five when $K$ is large and comparable with $M$. In comparison, FNNOMP has a term which depends on $P$. When $P$ is small, the computational complexity of FNNOMP is of order three. This shows that FNNOMP is more favourable for large-scale problems with medium to large $K$'s.
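As a rough illustration, with numbers of our own choosing rather than from the letter: for $M = 128$, $N = 256$, $K = 50$, $L \approx K$ and a small $P \approx 3$, the dominant CNNOMP term is $LMK^3 \approx 50 \cdot 128 \cdot 1.25\times10^{5} \approx 8\times10^{8}$ operations, whereas the dominant FNNOMP terms $KN(M+1) + PMK(K+1) + PK^3$ add up to roughly $3\times10^{6}$, illustrating the order-five versus order-three behaviour discussed above.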
The conclusion of this part relies on the fact that $L$ scales with the order of $K$ while $P$ does not scale directly with the dimension of the problem.

CNNOMP                 | FNNOMP
2: MN + N              | 2,3: MN + N + N log(N)
5: LMk^2               | 8: M(k+1) + k^2 + 2k + 2
6: Mk                  | 18: K^2
Total: MNK + LMK^3     | Total: KN(M+1) + KN log(P) + PMK(K+1) + PK^3 + K^2
TABLE II
COMPUTATIONAL COMPLEXITY OF DIFFERENT IMPLEMENTATIONS OF NNOMP. THE NUMBERS AT THE START OF EACH ROW ARE THE CORRESPONDING LINES IN ALGORITHMS 1 AND 2.
Fig. 2. Exact recovery (top panel) and computation time (bottom panel). N = 256 and M = 64 & 128 are fixed while the sparsity K is changing.
Although this is a hypothesis, we next show that it seems to be true in practice, based on our observations.
IV. SIMULATIONS
In this section, we investigate the computational costs and
the algorithm outputs of the CNNOMP and FNNOMP on a
single core of an Intel core 2.6 GHz processor. In the first
experiment, we randomly generated Φ with 256 atoms and
64 or 128 rows using i.i.d. Gaussian random variables. y was
generated using a Gaussian-Bernoulli model, i.e. uniformly
random support and Normal distribution for the non-zero
coefficients. With different sparsity levels and using the pro-
posed CNNOMP and FNNOMP, we repeated the simulations
1000 times. The exact recovery, i.e. correct recovery of the
support, and computational time are shown in Figure 2. While
the exact recovery is very similar, the proposed method is
significantly faster for large Ks. Different increasing rates
of computational cost is due to different dependencies on
K, which is presented in Table II. The dominant term of
the complexity of CNNOMP is LMK
3
, which changes like
K
4
, when L K. On the other hand, the dominant term
of FNNOMP behaves like O(K
3
), for small P s. The other
important observation in Figure 2 is the significant difference
between CNNOMP and FNNOMP in the computational time,
for a fixed K, in favour of the proposed technique.
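A sketch of the data model used in this experiment, under our reading of the description above; the atom normalisation and the use of absolute values for the non-zero coefficients are our own assumptions, as the letter does not specify them:

```python
import numpy as np

def make_problem(M=64, N=256, K=10, seed=0):
    """Random dictionary and K-sparse non-negative signal (illustrative assumptions)."""
    rng = np.random.default_rng(seed)
    Phi = rng.standard_normal((M, N))               # i.i.d. Gaussian dictionary
    Phi /= np.linalg.norm(Phi, axis=0)              # unit-norm atoms (our assumption)
    support = rng.choice(N, size=K, replace=False)  # uniformly random support
    x = np.zeros(N)
    x[support] = np.abs(rng.standard_normal(K))     # non-negative coefficients (our assumption)
    return Phi, Phi @ x, x
```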
In the second experiment, we used the same method to generate the dictionary, but we fixed N = 256, to investigate the computation time as a function of $M$.
Fig. 3. Computation time for the fixed N = 256 and K = 24 & 32.
Fig. 4. Computation time for the fixed M = 128 and K = 64 & 96.
Here K = 24 or 32 and M is between 32 and 196. The computational time is plotted in Figure 3. The computational time increases slowly with $M$, as its order in the total complexity is one; see Table II. However, the rate of increase is higher for CNNOMP.
In the last experiment, we fixed M = 128, changed N from 128 to 512, and repeated the simulations 1000 times as before. The sparsity was set to 64 or 96, and the results are shown in Figure 4. The computational cost increases very slowly with $N$ for FNNOMP, while it is almost constant for CNNOMP. This difference seems to come from the fact that FNNOMP has the term $N \log(P)$, in contrast with $N$ in CNNOMP. Since the computational time of CNNOMP is much higher than that of FNNOMP, CNNOMP would become competitive only for very large $N$'s.
V. CONCLUSION
We presented a new greedy technique based on OMP, suitable for non-negative sparse representations, which is much faster than the state-of-the-art algorithm. The new algorithm has a slightly different atom selection procedure, which guarantees the non-negativity of the signal approximations. Although the selection step is more involved, the overall algorithm has a much faster implementation. The reason is that, with the new selection procedure, we can use the fast QR implementation of OMP. The computational complexities of the two NNOMP implementations were derived and the differences were demonstrated.

References

S. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries," IEEE Transactions on Signal Processing, 1993.

Y. C. Pati, R. Rezaiifar and P. S. Krishnaprasad, "Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition," Asilomar Conference on Signals, Systems and Computers, 1993.

B. K. Natarajan, "Sparse approximate solutions to linear systems," SIAM Journal on Computing, 1995.

M.-D. Iordache, J. M. Bioucas-Dias and A. Plaza, "Sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, 2011.

H. Kim and H. Park, "Sparse non-negative matrix factorizations via alternating non-negativity-constrained least squares for microarray data analysis," Bioinformatics, 2007.