
Proceedings ArticleDOI

A new method for moving-average parameter estimation

01 Nov 2010-pp 1817-1820


A New Method for Moving-Average Parameter Estimation

Petre Stoica, Lin Du, Jian Li, Tryphon Georgiou§

Dept. of Information Technology, Uppsala University, SE-75105 Uppsala, Sweden
Dept. of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611-6130, USA
§ Dept. of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455, USA
Abstract: We introduce an apparently original method for moving-average parameter estimation, based on covariance fitting and convex optimization. The proposed method is shown by means of numerical simulation to provide much more accurate parameter estimates, in difficult scenarios, than a related existing method does. We derive the new method via an analogy with a covariance fitting interpretation of the Capon beamforming from array processing. In the process, we also point out some new facts on Capon beamforming.
I. INTRODUCTION AND PRELIMINARIES

Let {y(k)} be a moving-average (MA) time series of order n:

    y(k) = c_0 e(k) + c_1 e(k-1) + ... + c_n e(k-n),   k = 1, 2, ...   (1)

where {e(k)} is the driving white noise sequence,

    E[e(k) e(k')] = 1 if k = k', and 0 otherwise,   (2)

and

    c_0 + c_1 z + ... + c_n z^n ≠ 0 for |z| ≤ 1   (3)

(as is well known, the "minimum-phase" condition in (3) ensures that (1) is a unique description of the power spectrum of {y(k)} [1]).
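For later reference, the model (1)-(2) implies that the covariances of {y(k)} are r_p = Σ_{k=0}^{n-p} c_k c_{k+p} for |p| ≤ n and r_p = 0 for |p| > n. A minimal pure-Python sketch of this relation (the function name is ours, for illustration only):

```python
def ma_covariances(c, max_lag=None):
    """Theoretical covariances r_p of the MA(n) process (1) with the
    unit-variance white noise (2): r_p = sum_k c[k] c[k+p] for 0 <= p <= n,
    and r_p = 0 for p > n."""
    n = len(c) - 1
    if max_lag is None:
        max_lag = n
    r = []
    for p in range(max_lag + 1):
        # covariances vanish beyond the MA order n
        r.append(sum(c[k] * c[k + p] for k in range(n - p + 1)) if p <= n else 0.0)
    return r

# MA(1) example: y(k) = e(k) + 0.5 e(k-1)
# r_0 = 1 + 0.25 = 1.25, r_1 = 0.5, r_2 = 0
print(ma_covariances([1.0, 0.5], max_lag=2))  # → [1.25, 0.5, 0.0]
```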
Our problem is to obtain estimates of the MA parameters {c_p} from N observations of {y(k)}. For the sake of simplicity we assume that {y(k)} is a scalar sequence; however, note that the discussion in this paper can be readily extended to vector sequences by using results from [2]. We also assume that the MA order, n, is prespecified.

The maximum likelihood estimation (MLE) of {c_p} requires making assumptions on the distribution of {y(k)}, and even under the Gaussian assumption it leads to a highly nonlinear problem. Consequently, a host of alternative, computationally simpler methods have been proposed for MA parameter estimation (see, e.g., [1], [3]). Of these methods, the covariance fitting approach of [4], [5] (see also [3]) is somewhat unique in that it has, in its most refined form, an accuracy comparable with that of the MLE, and yet it obtains parameter estimates from the solution of a convex problem that can be reliably and efficiently computed in polynomial time.
In this paper we introduce a new method for MA parameter estimation that, similarly to the method of [4], [5], is based on a covariance fitting criterion whose minimization leads to a convex optimization problem. The main difference between the proposed method and that in the cited references lies in a novel form of the fitting criterion. The inspiration for using this new form of covariance fitting criterion comes from a recent letter ([6]) as well as from one of the possible derivations of the Capon beamforming method in array processing [7], [8] (see also [3]); see the next section for details. Compared with the basic form of the method in [4], [5], the method proposed here is shown via Monte Carlo simulations to provide much more accurate MA parameter estimates in difficult cases (where the roots of the polynomial in (3) lie close to the unit circle).

[The work was supported in part by the Swedish Research Council (VR) and by the National Science Foundation Grant No. CCF-0634786.]
II. MAIN RESULTS
A. The basic method of [4], [5]

Let

    r̂_p = (1/N) Σ_{k=p+1}^{N} y(k) y(k-p) = r̂_{-p},   p = 0, 1, 2, ...   (4)

denote the standard sample covariances of {y(k)}. Also, let {r_p} denote the theoretical covariances of {y(k)}. Two convex parameterizations of {r_p} have been derived in [4], [5] (see also the references therein): the "trace parameterization" and the "Kalman-Yakubovich-Popov lemma based parameterization." These two parameterizations are equivalent in most respects, but the trace parameterization is easier to describe and thus it will be the one used in what follows. In this parameterization, the covariances {r_p} of the MA sequence in (1) have the following expressions:

    r_p = tr_p(Q),   p = 0, ±1, ..., ±n;   Q ≥ 0   (5)

where the (n+1) × (n+1) matrix Q is positive semidefinite (Q ≥ 0) but otherwise arbitrary, and tr_p(Q) denotes the sum of the elements on the pth diagonal of Q (with the main diagonal corresponding to p = 0, and so forth).

Making use of (5), the basic form of the method of [4], [5], which we refer to as the basic method (BM), can be described by a two-step procedure:

Step 1. Solve the convex minimization problem

    min_{Q ≥ 0} Σ_{p=-n}^{n} (r̂_p - r_p)²;   {r_p = tr_p(Q)}   (6)

Step 2. Obtain estimates of {c_p} from the solution of Step 1 by using a spectral factorization algorithm.

The problem in Step 1 can be easily reformulated as a semi-definite program (SDP) that can be solved reliably and efficiently in polynomial time using public-domain software [9], [10]. The spectral factorization problem in Step 2 can also be solved efficiently by means of any of several available algorithms (see, e.g., [11] for a recent account). Therefore BM is a computationally appealing method. However, from an estimation accuracy viewpoint, the parameter estimates obtained with BM may be statistically rather inefficient.

[978-1-4244-9721-8/10/$26.00 ©2010 IEEE; Asilomar 2010]
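To make the trace parameterization (5) concrete: for the rank-one positive semidefinite choice Q = c cᵀ, the quantity tr_p(Q) reproduces exactly the lag-p MA covariance Σ_k c_k c_{k+p}. A small sketch (helper names are ours, not from the paper):

```python
def tr_p(Q, p):
    """Sum of the elements on the p-th diagonal of the square matrix Q,
    with the main diagonal corresponding to p = 0, as in (5)."""
    m = len(Q)
    p = abs(p)
    return sum(Q[i][i + p] for i in range(m - p))

# Rank-one PSD choice Q = c c^T for an MA(2) coefficient vector c:
c = [1.0, 0.5, -0.2]
Q = [[ci * cj for cj in c] for ci in c]
r = [tr_p(Q, p) for p in range(3)]  # r_p = sum_k c[k] c[k+p]
print(r)
```

Step 1 of BM then searches over all PSD matrices Q (not just rank-one ones) so that the resulting {r_p} best match the sample covariances; that search is the SDP (6).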
The approach employed in [4], [5] to improve the statistical efficiency of BM relies on a somewhat involved weighted covariance fitting criterion. Here we take a simpler route, as will be explained in what follows. Compared with the BM, or its enhanced version in [4], [5], which rely on fitting {r_p} to {r̂_p}, the new method (NM) is based on fitting the theoretical covariance matrix

    R = [ r_0   r_1   ...   r_n
          r_1   r_0   ...   r_{n-1}
          ...   ...   ...   ...
          r_n   ...   r_1   r_0 ]   (7)
to an estimate of it, let us say R̂, obtained from the available observations. A possible expression for R̂ is the standard Toeplitz sample covariance matrix:

    R̂_T = [ r̂_0   r̂_1   ...   r̂_n
            r̂_1   r̂_0   ...   r̂_{n-1}
            ...   ...   ...   ...
            r̂_n   ...   r̂_1   r̂_0 ]   (8)

Another commonly used R̂ is the following non-Toeplitz sample covariance matrix:

    R̂_nT = (1/(N-n)) Σ_{k=n+1}^{N} [y(k), ..., y(k-n)]ᵀ [y(k), ..., y(k-n)]   (9)
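The two covariance-matrix estimates (8) and (9) can be sketched in pure Python as follows (function names are ours; indexing is 0-based, whereas the paper's k runs from n+1 to N):

```python
def sample_covariances(y, max_lag):
    """Standard biased sample covariances (4): r_hat_p = (1/N) sum y(k) y(k-p)."""
    N = len(y)
    return [sum(y[k] * y[k - p] for k in range(p, N)) / N for p in range(max_lag + 1)]

def R_toeplitz(y, n):
    """Toeplitz estimate (8), built from the sample covariances."""
    r = sample_covariances(y, n)
    return [[r[abs(i - j)] for j in range(n + 1)] for i in range(n + 1)]

def R_non_toeplitz(y, n):
    """Non-Toeplitz outer-product estimate (9):
    average of [y(k), ..., y(k-n)]^T [y(k), ..., y(k-n)] over the N-n windows."""
    N = len(y)
    R = [[0.0] * (n + 1) for _ in range(n + 1)]
    for k in range(n, N):
        v = [y[k - i] for i in range(n + 1)]
        for i in range(n + 1):
            for j in range(n + 1):
                R[i][j] += v[i] * v[j] / (N - n)
    return R

y = [1.0, 2.0, 3.0, 4.0]
print(R_toeplitz(y, 1), R_non_toeplitz(y, 1))
```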
There is compelling evidence, obtained from numerical studies of a diversity of estimation problems, which suggests that in small- or medium-sized samples the use of R̂_nT can lead to more accurate parameter estimates than the use of R̂_T. We will show in the numerical example section that the same is usually true in the MA parameter estimation problem considered here, in the sense that the NM based on R̂_nT is more accurate than the NM that uses R̂_T; we will also show that NM is much more accurate than BM, which can use only the Toeplitz covariances.
B. Capon and Pisarenko methods

The main inspiration for the NM has come from a covariance fitting-based derivation of the Capon beamformer in array processing. In the said derivation, the theoretical covariance matrix R of the observed data is modeled (using real-valued variables, for analogy with the estimation problem discussed in this paper) as:

    R = σ² a aᵀ + Γ   (10)

where σ² is the signal power, which is the unknown parameter of main interest, a is a given vector, and Γ is an unknown residual covariance matrix (which is usually a nuisance quantity). The Capon beamforming method for determining σ² can be obtained as the solution to the following covariance fitting problem [7], [8]:

    max_{σ²} σ²  subject to  R - σ² a aᵀ ≥ 0   (11)

where in practice R must be replaced by R̂.

Along similar lines to the Capon formalism, in the so-called Pisarenko harmonic analysis [3], one seeks to decompose a given Toeplitz covariance R into the sum of a singular Toeplitz matrix and a matrix that corresponds to the background white noise. The variance ν² of the white noise can be obtained as the solution to

    max_{ν²} ν²  subject to  R - ν² I ≥ 0   (11a)

where I denotes the identity matrix. Clearly the solution of the above optimization problem coincides with the smallest eigenvalue of R, and the residual matrix Γ in the decomposition

    R = ν² I + Γ   (11b)

is a singular Toeplitz matrix that corresponds to a finite number of spectral lines.
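As a quick sanity check of the Pisarenko step (11a)-(11b), consider a 2x2 Toeplitz R, for which the smallest eigenvalue is available in closed form (this toy example is ours, not from the paper):

```python
def pisarenko_noise_var_2x2(r0, r1):
    """For R = [[r0, r1], [r1, r0]], the eigenvalues are r0 + r1 and r0 - r1,
    so the solution of max nu^2 s.t. R - nu^2 I >= 0 is nu^2 = r0 - |r1|,
    the smallest eigenvalue of R."""
    return r0 - abs(r1)

r0, r1 = 2.0, 0.5
nu2 = pisarenko_noise_var_2x2(r0, r1)
# The residual Gamma = R - nu^2 I in (11b) is singular (rank one):
det_gamma = (r0 - nu2) ** 2 - r1 ** 2
print(nu2, det_gamma)
```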
C. The new method

It follows from the discussion in the previous sub-section that the basic rationale in Capon beamforming as well as in Pisarenko harmonic analysis can be seen as seeking a decomposition of the given or estimated covariance R̂ into the sum of a covariance matrix corresponding to a component that is postulated as being present, and a residual matrix which accounts for noise, uncertainty, estimation errors, or signals of a particular type. Reference [6] (Problem 1) explores this idea for decomposing Toeplitz matrices into one corresponding to an MA noise component plus a singular one, for the purpose of identifying possible spectral lines in the residual. Inspired by the Capon and Pisarenko rationales and by the approach in [6], we explore a similar decomposition from which we seek to estimate the MA parameters of the underlying process. More specifically, we propose to estimate the MA covariances {r_p} (and thus the MA parameters {c_p}, see Step 2 of BM) by solving the following covariance fitting problem:

    max_{Q ≥ 0} r_0  subject to  R̂ - R ≥ 0;   {r_p = tr_p(Q)}   (12)

where R̂ is either R̂_T or R̂_nT; we denote the resulting two versions of NM as NM_T (based on R̂_T) and NM_nT (based on R̂_nT). An equivalent, but perhaps intuitively more appealing, form of (12) is as follows:

    min_{Q ≥ 0} tr(R̂ - R)  subject to  R̂ - R ≥ 0;   {r_p = tr_p(Q)}   (13)

(indeed, tr(R) = (n+1) r_0, so minimizing tr(R̂ - R) amounts to maximizing r_0). Similarly to (6), this is a convex problem (namely an SDP) that can be efficiently solved in polynomial time using publicly available software [9], [10].
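In practice (12)/(13) would be handed to an SDP solver [9], [10]. As a purely illustrative stand-in (not the paper's implementation), the MA(1) case is small enough for a crude grid search: Q is 2x2 PSD, r_0 = Q00 + Q11, r_1 = Q01, and the constraint R̂ - R ≥ 0 between 2x2 Toeplitz matrices can be checked via leading minors. All names and the grid resolution below are our own choices:

```python
def nm_ma1_grid(Rhat, step=0.1, qmax=2.0):
    """Toy grid-search version of the NM problem (12) for MA(1):
    maximize r0 = q00 + q11 over 2x2 PSD Q = [[q00, q01], [q01, q11]]
    subject to Rhat - R >= 0, where R = [[r0, r1], [r1, r0]] and r1 = q01.
    Only an illustration of the feasible set; the paper solves an SDP instead."""
    best = None
    q00 = 0.0
    while q00 <= qmax + 1e-12:
        q11 = 0.0
        while q11 <= qmax + 1e-12:
            lim = (q00 * q11) ** 0.5       # PSD Q requires |q01| <= sqrt(q00 q11)
            q01 = -lim
            while q01 <= lim + 1e-12:
                r0, r1 = q00 + q11, q01
                d0 = Rhat[0][0] - r0
                det = d0 * (Rhat[1][1] - r0) - (Rhat[0][1] - r1) ** 2
                if d0 >= -1e-12 and det >= -1e-12:   # Rhat - R >= 0 (2x2 minors)
                    if best is None or r0 > best[0]:
                        best = (r0, r1)
                q01 += step
            q11 += step
        q00 += step
    return best

# With Rhat equal to the exact MA(1) covariance matrix for c = [1, 0.5]
# (r0 = 1.25, r1 = 0.5), the maximizer should sit near r0 = 1.25, r1 = 0.5:
print(nm_ma1_grid([[1.25, 0.5], [0.5, 1.25]]))
```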
Several remarks on the NM are in order:

(i) The trace criterion in (13) can be replaced by other related criteria, for instance by

    min_{Q ≥ 0} λ_max(R̂ - R)  subject to  R̂ - R ≥ 0;   {r_p = tr_p(Q)}   (14)

where λ_max denotes the maximum eigenvalue. The so-obtained problems, such as (14) above, are also convex and therefore they can be efficiently solved as well. However, we have observed in a number of numerical simulations that the statistical accuracies of NM and of the modified NMs (such as (14)) are quite similar to one another. Therefore, for conciseness reasons, we will focus on NM.

Interestingly, for the Capon beamforming problem in (11), we show in the Appendix that (11) (with R replaced by R̂), or equivalently

    min_{σ²} tr(R̂ - σ² a aᵀ)  subject to  R̂ - σ² a aᵀ ≥ 0   (15)

has the same solution as the following alternative problems:

    min_{σ²} λ_max(R̂ - σ² a aᵀ)  subject to  R̂ - σ² a aᵀ ≥ 0   (16)

and

    min_{σ²} det(R̂ - σ² a aᵀ)  subject to  R̂ - σ² a aᵀ ≥ 0   (17)

where det(·) denotes the determinant. While this apparently new result on Capon beamforming is not strictly related to the MA parameter estimation problem discussed here, at least it lends some theoretical support to our observation that NM and certain modifications of it, such as (14), have similar statistical performances.
(ii) Evidently BM cannot make use of covariances {r̂_p} for p > n, whereas NM can be readily extended to use longer covariance sequences. However, we have observed empirically that the so-obtained extension of NM does not necessarily have better accuracy than NM. A heuristic explanation of this behavior is as follows: an indirect effect of increasing the dimension of R̂ and R in (13) is to decrease the "weight" of the covariances {r̂_p} and {r_p} for p = 0, ..., n (which are the parameters of main interest) in the fitting criterion; furthermore, the covariance estimates with lags larger than n carry "information" on the MA parameters only via their correlation with the covariance estimates for lags 0, ..., n, and this correlation is not exploited adequately in the fitting criterion used by NM.
(iii) We might think of reversing the positions of R̂ and R in (13) to obtain a different version of NM, viz.:

    min_{Q ≥ 0} tr(R - R̂)  subject to  R - R̂ ≥ 0;   {r_p = tr_p(Q)}   (18)

However, somewhat similarly to what we said in (ii) above, we have observed empirically that this modification of NM may provide less accurate parameter estimates than (13). Heuristically, we can try to explain this behavior in the following way: for the sake of discussion, let R̂ = R̂_T in (13) and (18); then (13) implies r̂_0 ≥ r_0, whereas for (18) we must have r_0 ≥ r̂_0; when r_0 ≥ r̂_0, the conditions for {r_p}_{p=1}^{n} to belong to the set of MA(n) covariances are weaker than the corresponding conditions when r_0 < r̂_0; consequently, for (18) the minimizing {r_p}_{p=1}^{n} may be closer to {r̂_p}_{p=1}^{n} than for (13); this fact may explain why (18) has been observed in some of our simulations to behave somewhat similarly to BM.
Finally, we remark on the fact that while the condition R̂ - σ² a aᵀ ≥ 0 in the Capon beamforming problem (see (11)) is easily motivated, the similar condition R̂ - R ≥ 0, used in the NM for MA parameter estimation, is more intriguing. However, "the proof of the pudding is in the eating": the NM works well, and it provides more accurate parameter estimates than the BM in difficult scenarios, as we show in the next section.
III. NUMERICAL ILLUSTRATION

For MA sequences with roots well outside the unit circle (see (3)), the sample covariances {r̂_p}_{p=0}^{n} belong to the set of valid MA(n) covariances, with a high probability. In such cases, BM and NM give very similar results (the solution to the covariance fitting problem is likely to be {r_p = r̂_p}_{p=0}^{n} for both BM and NM).

In this section we consider a "difficult example" (see [4]): an MA(3) for which the polynomial in (3) has roots at 1/0.95 and (1/0.98) e^{±iπ/4}. The corresponding MA coefficients have the following values:

    c_0 = 1,  c_1 = -2.3359,  c_2 = 2.2770,  c_3 = -0.9124   (19)

We will use BM, NM_T and NM_nT to estimate {c_p} from samples of {y(k)} of varying length: N = 100, 200, ..., 1000. For each method and each value of N, we estimate the average mean squared error (AMSE) of the parameter estimates, viz.

    AMSE = (1/(n+1)) Σ_{p=0}^{n} E[(ĉ_p - c_p)²]   (20)

by using 1000 Monte Carlo simulation runs. The obtained results are shown in Fig. 1.
Fig. 1. The estimated AMSE of BM, NM_T and NM_nT versus N.
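The coefficients in (19) can be checked by expanding C(z) = (1 - 0.95 z)(1 - 0.98 e^{-iπ/4} z)(1 - 0.98 e^{iπ/4} z) directly; the small sketch below (function name ours) reproduces them up to the four-decimal rounding used in the paper:

```python
import cmath
import math

def mul_linear(c, a):
    """Multiply a polynomial with ascending coefficients c by (1 - a z)."""
    out = [0j] * (len(c) + 1)
    for i, ci in enumerate(c):
        out[i] += ci
        out[i + 1] -= a * ci
    return out

# Inverse roots 0.95 and 0.98 e^{+-i pi/4} from the "difficult example":
coeffs = [1.0 + 0j]
for a in (0.95, 0.98 * cmath.exp(1j * math.pi / 4), 0.98 * cmath.exp(-1j * math.pi / 4)):
    coeffs = mul_linear(coeffs, a)

print([round(c.real, 4) for c in coeffs])  # matches (19)
```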
Fig. 2 shows the theoretical spectrum of the considered MA(3) sequence, viz. |C(e^{iω})|² (where C(·) is the polynomial in (3) and ω ∈ [0, 2π] is the frequency variable), along with the mean and standard deviation of the spectra estimated via BM and NM_nT for N = 1000. As can be seen from these figures, NM_nT provides slightly more accurate estimates than NM_T and significantly more accurate estimates than BM.
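The theoretical spectrum plotted in Fig. 2 is simply |C(e^{iω})|² evaluated on a frequency grid; a minimal sketch (function name ours):

```python
import cmath
import math

def ma_spectrum(c, omega):
    """Power spectral density |C(e^{i omega})|^2 of the MA process (1),
    where C(z) = c[0] + c[1] z + ... + c[n] z^n."""
    z = cmath.exp(1j * omega)
    return abs(sum(ck * z ** k for k, ck in enumerate(c))) ** 2

c = [1.0, -2.3359, 2.2770, -0.9124]  # the MA(3) coefficients (19)
# Roots near the unit circle at angles 0 and +-pi/4 produce the deep
# spectral notches visible in Fig. 2:
print(ma_spectrum(c, 0.0), ma_spectrum(c, math.pi / 4), ma_spectrum(c, math.pi))
```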
IV. CONCLUSIONS

We have proposed a computationally attractive MA parameter estimation method based on the use of convex optimization and of an original covariance fitting criterion. The new method has been shown via numerical simulations to provide more accurate parameter estimates than the basic version of an existing competitive method.
APPENDIX: PROBLEMS (15), (16) AND (17) HAVE THE SAME SOLUTION

First note (assuming that R̂ is nonsingular) that:

    det(R̂ - σ² a aᵀ) = det(R̂) det(I - σ² R̂⁻¹ a aᵀ)
                     = det(R̂) (1 - σ² aᵀ R̂⁻¹ a)   (A.1)

which implies immediately that the problems (15) and (17) are equivalent. To show the same result for (15) and (16), let

[Fig. 2. The theoretical spectrum along with the mean, and the mean ±1 standard deviation curves for the spectra estimated via BM and NM_nT (N = 1000).]
R̂^{-1/2} denote a symmetric square-root of R̂^{-1}, and observe that:

    R̂ - σ² a aᵀ ≥ 0
    ⇔ I - σ² R̂^{-1/2} a aᵀ R̂^{-1/2} ≥ 0
    ⇔ σ² ≤ 1 / (aᵀ R̂⁻¹ a) ≜ σ̂²   (A.2)

and therefore that σ̂² above is the solution to (15). The previous calculation also implies that

    R̂ - σ² a aᵀ = R̂ - σ̂² a aᵀ + (σ̂² - σ²) a aᵀ ≥ R̂ - σ̂² a aᵀ   (A.3)

for all values of σ² that satisfy the constraint in (15) or (16). It follows that λ_max(R̂ - σ² a aᵀ) ≥ λ_max(R̂ - σ̂² a aᵀ), under the constraint in (16) on σ², and hence that σ̂² is the solution to the problem (16) as well.
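The chain (A.1)-(A.3) can be checked numerically in the 2x2 case, where R̂⁻¹ is available in closed form (the example matrices below are ours): the optimal σ̂² = 1/(aᵀ R̂⁻¹ a) drives det(R̂ - σ² a aᵀ) to zero, consistent with (15) and (17) sharing the same minimizer.

```python
def capon_power_2x2(Rhat, a):
    """Closed-form Capon power (A.2): sigma_hat^2 = 1 / (a^T Rhat^{-1} a),
    computed here with an explicit 2x2 matrix inverse."""
    det = Rhat[0][0] * Rhat[1][1] - Rhat[0][1] * Rhat[1][0]
    Rinv = [[Rhat[1][1] / det, -Rhat[0][1] / det],
            [-Rhat[1][0] / det, Rhat[0][0] / det]]
    quad = sum(a[i] * Rinv[i][j] * a[j] for i in range(2) for j in range(2))
    return 1.0 / quad

Rhat = [[2.0, 0.5], [0.5, 1.0]]
a = [1.0, 1.0]
s2 = capon_power_2x2(Rhat, a)
# At the optimum, Rhat - s2 * a a^T is singular, as (A.1) predicts:
M = [[Rhat[i][j] - s2 * a[i] * a[j] for j in range(2)] for i in range(2)]
print(s2, M[0][0] * M[1][1] - M[0][1] * M[1][0])
```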
REFERENCES

[1] T. W. Anderson, The Statistical Analysis of Time Series. New York, NY: Wiley, 1994.
[2] P. Stoica, L. Xu, J. Li, and Y. Xie, "Optimal correction of an indefinite estimated MA spectral density matrix," Statistics and Probability Letters, vol. 77, pp. 973–980, June 2007.
[3] P. Stoica and R. L. Moses, Spectral Analysis of Signals. Upper Saddle River, NJ: Prentice-Hall, 2005.
[4] P. Stoica, T. McKelvey, and J. Mari, "MA estimation in polynomial time," IEEE Transactions on Signal Processing, vol. 48, pp. 1999–2012, July 2000.
[5] B. Dumitrescu, I. Tabus, and P. Stoica, "On the parameterization of positive real sequences and MA parameter estimation," IEEE Transactions on Signal Processing, vol. 49, pp. 2630–2639, November 2001.
[6] T. T. Georgiou, "Decomposition of Toeplitz matrices via convex optimization," IEEE Signal Processing Letters, vol. 13, pp. 537–540, September 2006.
[7] T. L. Marzetta, "A new interpretation for Capon's maximum likelihood method of frequency-wavenumber spectrum estimation," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 31, pp. 445–449, April 1983.
[8] P. Stoica, Z. Wang, and J. Li, "Robust Capon beamforming," IEEE Signal Processing Letters, vol. 10, pp. 172–175, June 2003.
[9] J. F. Sturm, "Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones," Optimization Methods and Software, vol. 11–12, pp. 625–653, October 1999. Available: http://sedumi.ie.lehigh.edu/.
[10] J. Löfberg, "YALMIP: A toolbox for modeling and optimization in MATLAB," in Proc. 2004 IEEE International Symposium on Computer Aided Control Systems Design, Taipei, Taiwan, pp. 284–289, September 2004. Available: http://users.isy.liu.se/johanl/yalmip/.
[11] L. M. Li, "Factorization of moving-average spectral densities by state-space representations and stacking," Journal of Multivariate Analysis, vol. 96, pp. 425–438, October 2005.
Citations
More filters

Posted Content
TL;DR: This work considers problems of estimation of structured covariance matrices, and in particular of matrices with a Toeplitz structure, and advocates a specific one which represents the Wasserstein distance between the corresponding Gaussians distributions and shows that it coincides with the so-called Bures/Hellinger distance between covarianceMatrices as well.
Abstract: Author(s): Ning, Lipeng; Jiang, Xianhua; Georgiou, Tryphon | Abstract: We consider problems of estimation of structured covariance matrices, and in particular of matrices with a Toeplitz structure. We follow a geometric viewpoint that is based on some suitable notion of distance. To this end, we overview and compare several alternatives metrics and divergence measures. We advocate a specific one which represents the Wasserstein distance between the corresponding Gaussians distributions and show that it coincides with the so-called Bures/Hellinger distance between covariance matrices as well. Most importantly, besides the physically appealing interpretation, computation of the metric requires solving a linear matrix inequality (LMI). As a consequence, computations scale nicely for problems involving large covariance matrices, and linear prior constraints on the covariance structure are easy to handle. We compare this transportation/Bures/Hellinger metric with the maximum likelihood and the Burg methods as to their performance with regard to estimation of power spectra with spectral lines on a representative case study from the literature.

14 citations


Cites methods from "A new method for moving-average par..."

  • ...rent assumptions has been used to justify different methods. For instance, assuming that xˆ = x+vwhere xand vare independent leads to min T∈T n trace(Tˆ −T) | Tˆ −T≥ 0 o which is a method proposed in [18]. Then, also, assuming a “symmetric” noise contribution as in xˆ +vˆ = x+v, where the noise vectors ˆv and vare independent of xand xˆ, leads to min T∈T ,Q,Qˆ n trace(Qˆ +Q) | Tˆ +Qˆ = T+Q, Q,Qˆ ≥ 0 o...

    [...]


Proceedings ArticleDOI
27 Jun 2012
TL;DR: The metric induced by Monge-Kantorovich transportation of the respective probability measures leads to an efficient linear matrix inequality (LMI) formulation of the approximation problem and relates to approximation in the Hellinger metric.
Abstract: The problem considered in this paper is that of approximating a sample covariance matrix by one with a Toeplitz structure. The importance stems from the apparent sensitivity of spectral analysis on the linear structure of covariance statistics in conjunction with the fact that estimation error destroys the Toepliz pattern. The approximation is based on appropriate distance measures. To this end, we overview some of the common metrics and divergence measures which have been used for this purpose as well as introduce certain alternatives. In particular, the metric induced by Monge-Kantorovich transportation of the respective probability measures leads to an efficient linear matrix inequality (LMI) formulation of the approximation problem and relates to approximation in the Hellinger metric. We compare these with the maximum likelihood and the Burg method on a representative case study from the literature.

7 citations


Proceedings ArticleDOI
01 Aug 2015
TL;DR: This paper proposes a way based on K-means method to address the moving average (MA) parameters estimation issue based only on noisy observations and without any knowledge on the variance of the additive stationary white Gaussian measurement noise.
Abstract: In this paper, we propose to address the moving average (MA) parameters estimation issue based only on noisy observations and without any knowledge on the variance of the additive stationary white Gaussian measurement noise. For this purpose, the MA process is approximated by a high-order AR process and its parameters are estimated by using an errors-in-variables (EIV) approach, which also makes it possible to derive the variances of both the driving process and the additive white noise. The method is based on the Frisch scheme. One of the main difficulties in this case is to evaluate the minimal AR-process order that must be considered to have a "good" approximation of the MA process. To this end, we propose a way based on K-means method. Simulation results of the proposed method are presented and compared to existing MA-parameter estimation approaches.

4 citations


Cites methods from "A new method for moving-average par..."

  • ...Variants of the basic method have also been proposed for instance in [6] in which the purpose is to estimate the covariance matrix rather than the covariance function itself....

    [...]


Dissertation
04 Dec 2014
Abstract: La radio intelligente (RI) a ete proposee pour ameliorer l’utilisation du spectre radiofrequence. Pour cela, il s’agit de donner un acces opportuniste aux utilisateurs non licencies (nommes utilisateurs secondaires) au spectre alloue a l’utilisateur licencie (nomme utilisateur primaire). Dans cette these, notre but est de proposer un scenario specifique a la RI et de presenter des solutions a certains problemes connexes. Pour cela, nous considerons une RI emettant ses informations en “sur-couche” des utilisateurs primaires (technique overlay). Le systeme etudie est constitue d’une macro-cellule primaire et de petites cellules cognitives secondaires equipees de stations de base cooperant ensemble. Nous suggerons l’etude d’un schema de communication hybride ou une modulation “FilterBanc Multi Carrier” (FBMC) est utilisee pour les utilisateurs secondaires, alors que dans le cas des utilisateurs primaires, une modulation “Orthogonal Frequency Division Multiplexing”(OFDM) est adoptee. Ce choix est motive par les raisons suivantes: l’OFDM est utilisee dans de nombreux systemes primaires actuels large bande; ainsi lorsque l’OFDM est consideree au niveau de l’utilisateur primaire, une bande passante importante peut etre reutilisee. Concernant le systeme secondaire, bien que l’OFDM ait ete reconnue comme forme d’onde eligible aux systemes de la RI, la modulation FBMC peut etre une autre candidate capable de palier certains defauts de l’OFDM. En effet, comparee a l’OFDM,la modulation FBMC a l’avantage de reduire le niveau d’interferences de l’utilisateur secondairequi est induit par la difference de frequence des oscillateurs locaux equipant les stations de base secondaires et les utilisateurs primaires. Pour annuler ces interferences,un precodage peut etre insere au niveau des stations de base secondaires. Par consequent,nous proposons de calculer l’expression des interferences dues au systeme secondaire au niveau du recepteur primaire. 
A partir de ce resultat nous proposons d’annuler les interferences en utilisant la methode “Zero Forcing Beamforming” (ZFBF) . Afin de confirmer l’efficacite du systeme propose, nous le comparons avec un systeme fonde sur une RIutilisant une modulation OFDM a la fois au primaire et au secondaire.Toutefois, l’application de la methode ZFBF depend des canaux entre les stations de base secondaires et les utilisateurs primaires avec lesquels on souhaite s’orthogonaliser. Une estimation de canal est donc necessaire. Pour ce faire, nous proposons de modeliser le canal par un processus autoregressif (AR) et d’aborder l’estimation du canal en utilisant une sequence d’apprentissage. Les signaux recus, appeles aussi “observations”, sont perturbes par un bruit de mesure additif.

1 citations


References
More filters

Journal ArticleDOI
Jos F. Sturm1
TL;DR: This paper describes how to work with SeDuMi, an add-on for MATLAB, which lets you solve optimization problems with linear, quadratic and semidefiniteness constraints by exploiting sparsity.
Abstract: SeDuMi is an add-on for MATLAB, which lets you solve optimization problems with linear, quadratic and semidefiniteness constraints. It is possible to have complex valued data and variables in SeDuMi. Moreover, large scale optimization problems are solved efficiently, by exploiting sparsity. This paper describes how to work with this toolbox.

7,286 citations


"A new method for moving-average par..." refers background in this paper

  • ...min Q≥0 tr(R̂−R) subject to R̂−R ≥ 0; {rp = trp(Q)} (13) Similarly to (6), this is a convex problem (namely a SDP) that can be efficiently solved in polynomial time using publicly available software [9], [10]....

    [...]

  • ...The problem in Step 1 can be easily reformulated as a semi-definite program (SDP) that can be solved reliably and efficiently in polynomial time using public-domain software [9], [10]....

    [...]


Proceedings ArticleDOI
02 Sep 2004
TL;DR: Free MATLAB toolbox YALMIP is introduced, developed initially to model SDPs and solve these by interfacing eternal solvers by making development of optimization problems in general, and control oriented SDP problems in particular, extremely simple.
Abstract: The MATLAB toolbox YALMIP is introduced. It is described how YALMIP can be used to model and solve optimization problems typically occurring in systems and control theory. In this paper, free MATLAB toolbox YALMIP, developed initially to model SDPs and solve these by interfacing eternal solvers. The toolbox makes development of optimization problems in general, and control oriented SDP problems in particular, extremely simple. In fact, learning 3 YALMIP commands is enough for most users to model and solve the optimization problems

7,174 citations


"A new method for moving-average par..." refers background in this paper

  • ...min Q≥0 tr(R̂−R) subject to R̂−R ≥ 0; {rp = trp(Q)} (13) Similarly to (6), this is a convex problem (namely a SDP) that can be efficiently solved in polynomial time using publicly available software [9], [10]....


  • ...The problem in Step 1 can be easily reformulated as a semi-definite program (SDP) that can be solved reliably and efficiently in polynomial time using public-domain software [9], [10]....



Book
01 Jan 2005
TL;DR: 1. Basic Concepts. 2. Nonparametric Methods. 3. Parametric Methods for Rational Spectra.
Abstract: 1. Basic Concepts. 2. Nonparametric Methods. 3. Parametric Methods for Rational Spectra. 4. Parametric Methods for Line Spectra. 5. Filter Bank Methods. 6. Spatial Methods. Appendix A: Linear Algebra and Matrix Analysis Tools. Appendix B: Cramer-Rao Bound Tools. Appendix C: Model Order Selection Tools. Appendix D: Answers to Selected Exercises. Bibliography. References Grouped by Subject. Subject Index.

2,459 citations


"A new method for moving-average par..." refers methods in this paper

  • ...new form of covariance fitting criterion comes from a recent letter ([6]) as well as from one of the possible derivations of the Capon beamforming method in array processing [7], [8] (also [3]) - see the next section for details....


  • ...Along similar lines to the Capon formalism, in the so-called Pisarenko harmonic analysis [3], one seeks to decompose a given Toeplitz covariance R into the sum of a singular Toeplitz matrix and a matrix that corresponds to the background white noise....


  • ...Of these methods, the covariance fitting approach of [4], [5] (see also [3]) is somewhat unique in that it has, in its most refined form, an accuracy comparable with that of the MLE, and yet it obtains parameter estimates from the solution of a convex problem that can be reliably and efficiently computed in polynomial time....

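The Pisarenko decomposition quoted above can be illustrated numerically: for the theoretical Toeplitz covariance of a random-phase sinusoid in white noise, R = S + σ²I with S a singular (rank-two) Toeplitz matrix, so the smallest eigenvalue of R recovers the noise power exactly. A small numpy sketch with assumed illustrative parameter values:

```python
import numpy as np

# Assumed illustrative values: sinusoid amplitude, frequency, noise power
A, omega, sigma2, m = 1.5, 0.9, 0.3, 5

# Theoretical covariances r(p) = (A^2/2) cos(omega*p) + sigma2 * delta_p
p = np.arange(m)
r = (A**2 / 2) * np.cos(omega * p)
r[0] += sigma2

# Toeplitz covariance R = S + sigma2 * I; S has rank 2, hence is
# singular for any dimension m >= 3
R = np.array([[r[abs(i - j)] for j in range(m)] for i in range(m)])

# Smallest eigenvalue of R equals the white-noise power
noise_power = np.linalg.eigvalsh(R).min()
```

With estimated (rather than theoretical) covariances the identity holds only approximately, which is what the decomposition-based methods exploit.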


Book
01 Jan 1971
Abstract: The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences George E. P. Box & George C. Tiao Bayesian Inference in Statistical Analysis R. W. Carter Simple Groups of Lie Type William G. Cochran & Gertrude M. Cox Experimental Designs, Second Edition Richard Courant Differential and Integral Calculus, Volume I Richard Courant Differential and Integral Calculus, Volume II Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume I Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume II D. R. Cox Planning of Experiments Harold M. S. Coxeter Introduction to Modern Geometry, Second Edition Charles W. Curtis & Irving Reiner Representation Theory of Finite Groups and Associative Algebras Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume I Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume II Bruno de Finetti Theory of Probability, Volume 1 Bruno de Finetti Theory of Probability, Volume 2 W. Edwards Deming Sample Design in Business Research Amos de Shalit & Herman Feshbach Theoretical Nuclear Physics, Volume 1 --Nuclear Structure J. L. Doob Stochastic Processes Nelson Dunford & Jacob T. Schwartz Linear Operators, Part One, General Theory Nelson Dunford & Jacob T. 
Schwartz Linear Operators, Part Two, Spectral Theory--Self Adjoint Operators in Hilbert Space Nelson Dunford & Jacob T. Schwartz Linear Operators, Part Three, Spectral Operators Herman Feshbach Theoretical Nuclear Physics: Nuclear Reactions Bernard Friedman Lectures on Applications-Oriented Mathematics Gerald J. Hahn & Samuel S. Shapiro Statistical Models in Engineering Morris H. Hansen, William N. Hurwitz & William G. Madow Sample Survey Methods and Theory, Volume I--Methods and Applications Morris H. Hansen, William N. Hurwitz & William G. Madow Sample Survey Methods and Theory, Volume II--Theory Peter Henrici Applied and Computational Complex Analysis, Volume 1--Power Series--Integration--Conformal Mapping--Location of Zeros Peter Henrici Applied and Computational Complex Analysis, Volume 2--Special Functions--Integral Transforms--Asymptotics--Continued Fractions Peter Henrici Applied and Computational Complex Analysis, Volume 3--Discrete Fourier Analysis--Cauchy Integrals--Construction of Conformal Maps--Univalent Functions Peter Hilton & Yel-Chiang Wu A Course in Modern Algebra Harry Hochstadt Integral Equations Erwin O. Kreyszig Introductory Functional Analysis with Applications William H. Louisell Quantum Statistical Properties of Radiation Ali Hasan Nayfeh Introduction to Perturbation Techniques Emanuel Parzen Modern Probability Theory and Its Applications P. M. Prenter Splines and Variational Methods Walter Rudin Fourier Analysis on Groups C. L. Siegel Topics in Complex Function Theory, Volume I--Elliptic Functions and Uniformization Theory C. L. Siegel Topics in Complex Function Theory, Volume II--Automorphic and Abelian Integrals C. L. Siegel Topics in Complex Function Theory, Volume III--Abelian Functions & Modular Functions of Several Variables J. J. Stoker Differential Geometry J. J. Stoker Water Waves: The Mathematical Theory with Applications J. J. Stoker Nonlinear Vibrations in Mechanical and Electrical Systems

2,135 citations


Book ChapterDOI
01 Jan 2010
Abstract: Chapter 31 contains formulas relevant for time series analysis: 31.1. Predictions in Time Series, 31.2. Decomposition of (Economic) Time Series, 31.3. Estimation of Correlation and Spectral Characteristics, 31.4. Linear Time Series, 31.5. Nonlinear and Financial Time Series, 31.6. Multivariate Time Series, 31.7. Kalman Filter.

428 citations


"A new method for moving-average par..." refers background in this paper

  • ...(as is well-known, the “minimum-phase” condition in (3) ensures that (1) is a unique description of the power spectrum of {y(k)} [1])....

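The uniqueness role of the minimum-phase condition cited above can be seen already for an MA(1) model: a coefficient b₁ and its reciprocal (with rescaled noise power) produce identical covariances, hence identical power spectra, and restricting to |b₁| < 1 removes the ambiguity. A quick check with assumed illustrative values:

```python
# Assumed MA(1) parameters: minimum-phase model (|b| < 1) ...
b, sigma2 = 0.4, 1.0
# ... and its non-minimum-phase twin with rescaled noise power
b_alt, sigma2_alt = 1.0 / b, sigma2 * b**2

# Both parameterizations give the same covariances
# r_0 = sigma2 * (1 + b^2) and r_1 = sigma2 * b
r0, r1 = sigma2 * (1 + b**2), sigma2 * b
r0_alt, r1_alt = sigma2_alt * (1 + b_alt**2), sigma2_alt * b_alt

assert abs(r0 - r0_alt) < 1e-12 and abs(r1 - r1_alt) < 1e-12
```

Since the covariance sequence determines the spectrum, both models are observationally indistinguishable; keeping only the minimum-phase root makes the spectrum-to-parameters map one-to-one.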