Journal ArticleDOI

Statistical Theory of Passive Location Systems

01 Mar 1984-IEEE Transactions on Aerospace and Electronic Systems (IEEE)-Vol. AES-20, Iss: 2, pp 183-198
TL;DR: In this article, a derivation of the principal algorithms and an analysis of the performance of the two most important passive location systems for stationary transmitters, hyperbolic location systems and direction-finding location systems, are presented.
Abstract: A derivation of the principal algorithms and an analysis of the performance of the two most important passive location systems for stationary transmitters, hyperbolic location systems and direction-finding location systems, are presented. The concentration ellipse, the circular error probability, and the geometric dilution of precision are defined and related to the location-system and received-signal characteristics. Doppler and other passive location systems are briefly discussed.

Summary (1 min read)

III. ESTIMATOR ACCURACY

  • Unless P is a diagonal matrix, the principal axes of the hyperellipsoids are not aligned with the coordinate axes.
  • The region of integration, which is indicated below the integral signs, is the interior of a hypersphere.
  • To simplify (25), the authors rotate the coordinate axes so that they are aligned with the principal axes of the hyperellipsoid.

IV. TWO-DIMENSIONAL ESTIMATORS

  • For the estimator of a two-dimensional vector, such as position coordinates on the surface of the Earth, the bivariate covariance matrix can be expressed in the standard form of (53).
  • Suppose that new coordinates are defined by rotating the axes of the old coordinate system counterclockwise through an angle θ, as shown in Fig. 3.
  • Thus (57) represents the angular offset of one of the principal axes of the ellipse relative to the old coordinate axes.
  • The CEP is a measure of the uncertainty in the location estimator x^ relative to its mean E[x^].

V. HYPERBOLIC LOCATION SYSTEMS

  • Let the column vector R0, with components x0, y0, and z0, specify a reference point near the transmitter position.
  • Equation (12) with the above relations and substitutions gives the least squares or maximum likelihood estimator; (16) provides the covariance matrix of the estimator.
  • Let NE denote the covariance matrix of the arrival-time errors.
  • These events correspond to reference points that lie along a line passing through two of the stations.
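The hyperbolic principle behind these bullets can be sketched numerically. The station positions and transmitter position below are invented for illustration; the check simply confirms that each relative arrival time fixes a constant range difference, so the true position satisfies the corresponding hyperbola exactly.

```python
import math

# Hypothetical 2-D geometry: three stations and a transmitter (meters).
c = 3.0e8  # speed of propagation (m/s)
stations = [(0.0, 0.0), (10_000.0, 0.0), (0.0, 10_000.0)]
tx = (4_000.0, 7_000.0)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Arrival time at each station for an emission at t = 0.
arrival = [dist(tx, s) / c for s in stations]

# Relative arrival times (TDOAs), with station 0 as the master station.
tdoa = [t - arrival[0] for t in arrival[1:]]

# Each TDOA fixes a constant range difference, i.e., a hyperbola with the
# two stations as foci; the true position satisfies it with zero residual.
range_diff = [c * t for t in tdoa]
residuals = [dist(tx, stations[i + 1]) - dist(tx, stations[0]) - range_diff[i]
             for i in range(2)]
```

In a real system the measured TDOAs are noisy, so the hyperbolas no longer pass exactly through the transmitter and the least squares machinery of the paper is needed.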

VI. LOCATION USING BEARING MEASUREMENTS

  • A single bearing angle may be measured at each station of the location system.
  • The authors conclude that the symmetrical, but not necessarily linear, placement of the stations with respect to an accurately located reference leads to uncorrelated coordinate estimates.

VIl. OTHER LOCATION METHODS

  • When the receivers are moving, it may be possible to use the known receiver trajectories to enhance the accuracy of the transmitter location.
  • Three bearing measurements and two turns by an aircraft can be used to greatly reduce the effect of strong unknown biases in the measurements [9].
  • Another approach, which may be less sensitive to inaccuracies in the assumed value of f, is to measure the Doppler difference, which is defined as f_d1 - f_d2 = (f/c)(v1 cos φ1 - v2 cos φ2) (160), where the subscripts 1 and 2 refer to receivers 1 and 2.
  • Because of the need for a precise estimate of f or f_d, Doppler location systems appear to be most useful in the location of transmitters of narrowband signals.
  • To accommodate a moving transmitter, the observation interval can be decreased so that the transmitter is nearly stationary during the interval and points on the trajectory can be located.
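The Doppler-difference definition (160) quoted above can be illustrated directly. All numerical values here are hypothetical, and the angles are taken as the angles between each receiver's velocity and its line of sight to the transmitter:

```python
import math

# Sketch of the Doppler difference (160); all values are invented.
f = 100e6        # transmitted frequency (Hz)
c = 3.0e8        # speed of propagation (m/s)
v1, v2 = 250.0, 200.0                              # receiver speeds (m/s)
phi1, phi2 = math.radians(30), math.radians(120)   # velocity/line-of-sight angles

# Doppler shift seen at each moving receiver from the stationary transmitter.
fd1 = (f / c) * v1 * math.cos(phi1)
fd2 = (f / c) * v2 * math.cos(phi2)

# Doppler difference between the two receivers, as in (160).
doppler_difference = (f / c) * (v1 * math.cos(phi1) - v2 * math.cos(phi2))
```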


1. INTRODUCTION

Statistical Theory of Passive Location Systems

DON J. TORRIERI
U.S. Army Countermeasures/Counter-Countermeasures Center

Manuscript received June 21, 1983.

Author's address: U.S. Department of the Army, Countermeasures/Counter-Countermeasures Center, 2800 Powder Mill Rd., Adelphi, MD 20783.

0018-9251/84/0300-0183 $1.00 © 1984 IEEE

The position of a stationary transmitter or radiating emitter can be estimated from passive measurements of the arrival times, directions of arrival, or Doppler shifts of electromagnetic waves received at various sites. This paper presents a derivation of the principal algorithms and an analysis of the two most important passive location systems for stationary transmitters: hyperbolic location systems and direction-finding location systems [1, 2].

Hyperbolic location systems, often called time difference of arrival (TDOA) systems, locate a transmitter by processing signal arrival-time measurements at three or more stations. The measurements at the various stations are sent to a station that is designated the master station and does the processing. The arrival-time measurements at two stations are combined to produce a relative arrival time that, in the absence of noise and interference, restricts the possible transmitter location to a hyperboloid with the two stations as foci. Transmitter location is estimated from the intersections of three or more independently generated hyperboloids determined from at least four stations. If the transmitter and the stations lie in the same plane, location is estimated from the intersections of two or more hyperbolas determined from three or more stations. Fig. 1 illustrates two hyperbolas, each of which has two branches, derived from measurements at three stations. The two hyperbolas have two points of intersection. The resulting location ambiguity may be resolved by using a priori information about the location, bearing measurements at one or more of the stations, or a fourth station to generate an additional hyperbola.

Fig. 1. Intersecting hyperbolas from three stations.

Fig. 2 depicts an aircraft with a direction-finding location system that makes bearing measurements at three different points in its trajectory. The intersection of two bearing lines provides an estimate of the location of the transmitter, which may be on the surface of the Earth or airborne. In the presence of noise, more than two bearing lines will not intersect at a single point. However, the appropriate processing allows an improved estimate of the transmitter position.

Fig. 2. Bearing lines from aircraft positions.

The following three sections of this paper present the basic methods of estimation applicable to transmitter location and determine the accuracy of suitable estimators. Sections 5 and 6, respectively, consider passive location systems using arrival-time and bearing measurements. Section 7 summarizes the use of Doppler information. Since the next three sections provide the theoretical framework for the statistical analysis of any passive location system, the reader who is only interested in the applications may wish to omit this material, referring to it as necessary while reading Sections 5-7.

II. ESTIMATION METHODS

The components of an n-dimensional vector x that is to be estimated are the position coordinates in two or three dimensions and possibly other parameters such as the time of emission of the radiation.
A set of N measurements r_i, i = 1, 2, ..., N, is collected at various positions. In the absence of random measurement errors, r_i is equal to a known function f_i(x). In the presence of additive errors,

r_i = f_i(x) + n_i,   i = 1, 2, ..., N.   (1)

These N equations can be written as a single equation for N-dimensional column vectors:

r = f(x) + n.   (2)

The measurement error n is assumed to be a multivariate random vector with an N x N positive-definite covariance matrix

N = E[(n - E[n])(n - E[n])^T]   (3)

where E[ ] denotes the expected value and the superscript T denotes the transpose. If x is regarded as an unknown but nonrandom vector and n is assumed to have a zero mean and a Gaussian distribution, then the conditional density function of r given x is

p(r|x) = (2π)^(-N/2) |N|^(-1/2) exp{-(1/2)[r - f(x)]^T N^(-1) [r - f(x)]}   (4)

where |N| denotes the determinant of N and the superscript -1 denotes the inverse. Because N is symmetric and positive definite, its inverse exists. The maximum likelihood estimator is that value of x which maximizes (4). Thus the maximum likelihood estimator minimizes the quadratic form

Q(x) = [r - f(x)]^T N^(-1) [r - f(x)].   (5)

Minimization of Q(x) is a reasonable criterion for determination of an estimator even when the additive error cannot be assumed to be Gaussian. In this case, the resulting estimator is called the least squares estimator and N^(-1) is regarded as a matrix of weighting coefficients.

In general, f(x) is a nonlinear vector function. To determine a reasonably simple estimator, f(x) can be linearized by expanding it in a Taylor series about a reference point specified by the vector x0 and retaining the first two terms; that is, we use

f(x) ≈ f(x0) + G(x - x0)   (6)

where x and x0 are n x 1 column vectors and G is the N x n matrix of derivatives evaluated at x0, with (i, j) element

G_ij = ∂f_i/∂x_j evaluated at x = x0,   i = 1, ..., N;  j = 1, ..., n.   (7)

Each row of this matrix is the gradient vector of one of the components of f(x). The vector x0 could be an estimate of x determined from a previous iteration of the estimation procedure or based upon a priori information. It is assumed in the subsequent analysis that x0 is sufficiently close to x that (6) is an accurate approximation. Combining (5) and (6) gives

Q(x) = (r1 - Gx)^T N^(-1) (r1 - Gx)   (8)

where

r1 = r - f(x0) + Gx0.   (9)

To determine the necessary condition for the estimator x^ that minimizes Q(x), we calculate the gradient of Q(x), defined by

∇_x Q(x) = [∂Q/∂x1  ∂Q/∂x2  ...  ∂Q/∂xn]^T   (10)

and then solve for the x such that ∇_x Q(x) = 0. From its definition, N is a symmetric matrix; that is, N^T = N. Since (N^(-1))^T = (N^T)^(-1), it follows that (N^(-1))^T = N^(-1), which implies that N^(-1) is a symmetric matrix. Therefore,

∇_x Q(x) at x = x^ is 2 G^T N^(-1) G x^ - 2 G^T N^(-1) r1 = 0.   (11)

We assume that the matrix G^T N^(-1) G is nonsingular. Thus the solution of (11) is

x^ = (G^T N^(-1) G)^(-1) G^T N^(-1) r1
   = x0 + (G^T N^(-1) G)^(-1) G^T N^(-1) [r - f(x0)].   (12)
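The linearized least squares estimator (12) can be sketched numerically. The measurement model below (range measurements from three stations to a 2-D position) and all numbers are invented for illustration; iterating (12), with the estimate from each pass used as the next reference point x0, is the Gauss-Newton-style refinement the text describes:

```python
import numpy as np

# Invented measurement model: f_i(x) is the distance from station i to x.
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])

def f(x):
    return np.linalg.norm(stations - x, axis=1)

def jacobian(x):
    # G: N x n matrix of partial derivatives of f evaluated at x, as in (7).
    d = f(x)
    return (x - stations) / d[:, None]

N = np.diag([0.01, 0.02, 0.01])   # measurement-error covariance (assumed known)
x_true = np.array([4.0, 7.0])
r = f(x_true)                     # noise-free measurements for this check

# Iterate (12): x^ = x0 + (G^T N^-1 G)^-1 G^T N^-1 [r - f(x0)].
x0 = np.array([5.0, 5.0])         # reference point from a priori information
Ninv = np.linalg.inv(N)
for _ in range(10):
    G = jacobian(x0)
    P = np.linalg.inv(G.T @ Ninv @ G)   # estimator covariance, (16)
    x0 = x0 + P @ G.T @ Ninv @ (r - f(x0))
```

With noise-free data and a reference point reasonably close to the transmitter, the iteration converges to the true position.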
Using (12), direct calculation shows that (8) can be written in the form

Q(x) = [x - x^]^T G^T N^(-1) G [x - x^] - r1^T N^(-1) G (G^T N^(-1) G)^(-1) G^T N^(-1) r1 + r1^T N^(-1) r1   (13)

where only the first term depends upon x. Since N is symmetric and positive definite, it has positive eigenvalues. If Ne = λe, then N^(-1)e = λ^(-1)e. Thus if e is an eigenvector of N with eigenvalue λ, then e is also an eigenvector of N^(-1) with eigenvalue 1/λ. Since it is symmetric and its eigenvalues are positive, N^(-1) is positive definite. Therefore, x = x^ minimizes Q(x). The estimator of (12) is called the linearized least squares estimator.

Substituting (2) into (12) and rearranging terms, the expression for x^ can be written in the form

x^ = x + (G^T N^(-1) G)^(-1) G^T N^(-1) [f(x) - f(x0) - G(x - x0) + n]   (14)

which shows how the estimator error is affected by the linearization error and the noise. The bias of the estimator x^ is defined as b = E[x^] - x. Using (14), we obtain

b = (G^T N^(-1) G)^(-1) G^T N^(-1) {f(x) - f(x0) - G(x - x0) + E[n]}.   (15)

If f(x) is linear, as in (6), and E[n] = 0, then the least squares estimator is unbiased. If systematic errors occur in the measurements, then E[n] ≠ 0. To minimize the estimator bias due to systematic errors, the magnitude of each E[n_i] should be minimized through system calibrations. If some of the E[n_i] are known functions of various parameters and N is sufficiently large, these parameters can be made components of the vector x and estimated along with the other components of x. The bias due to the nonlinearity of f(x) can be estimated by expanding f(x) in a Taylor series about x0 and retaining second-order terms.

Let P denote the covariance matrix of x^. Equation (14) yields

P = E[(x^ - E[x^])(x^ - E[x^])^T] = (G^T N^(-1) G)^(-1).   (16)

The diagonal elements of P give the variances of the errors in the estimated components of x. Since P is part of the estimator given by (12), one can compute both estimate and covariance simultaneously. If n is zero-mean Gaussian, the maximum likelihood or least squares estimator for the linearized model is the same as the minimum variance unbiased estimator [3].
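The covariance result (16) and the unbiasedness statement (15) can be checked by simulation. The linear model, matrices, and sample counts below are invented; the point is only that the empirical covariance of the least squares estimate matches (G^T N^-1 G)^-1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo check of (16) for a linear model r = G x + n (invented G, N).
G = np.array([[1.0, 0.5], [0.2, 1.0], [1.0, 1.0], [0.3, 0.7]])
N = np.diag([0.5, 1.0, 0.8, 0.6])
Ninv = np.linalg.inv(N)
P = np.linalg.inv(G.T @ Ninv @ G)     # predicted estimator covariance, (16)
W = P @ G.T @ Ninv                    # estimator matrix from (12) with f(x) = G x

x = np.array([2.0, -1.0])
trials = 200_000
n = rng.multivariate_normal(np.zeros(4), N, size=trials)
x_hat = (G @ x + n) @ W.T             # least squares estimate for each trial

bias = x_hat.mean(axis=0) - x         # near zero: unbiased, consistent with (15)
P_emp = np.cov(x_hat, rowvar=False)   # empirical covariance, compare with P
```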
The measurement error vector n is assumed to encompass all the contributions to error, including uncertainties in the system or physical parameters, such as the station coordinates or the speed of propagation. If q is a vector of the parameters, then the measurement vector r can often be expressed as

r = f1(x, q) + n1   (17)

where f1( ) is a vector function and n1 is the random error due to causes unrelated to uncertainties in q. Let q0 denote the assumed value of q. If q0 is sufficiently close to q, then a Taylor series expansion yields

f1(x, q) ≈ f1(x, q0) + G1(q - q0)   (18)

where G1 is the matrix of derivatives with respect to q evaluated at q0. Equation (2) results from making the identifications

f(x) = f1(x, q0),   n = G1(q - q0) + n1.   (19)

If q is nonrandom, then the parameter uncertainties ultimately contribute to the bias of the least squares estimator. If q is random, then the variance and possibly the bias are affected.

Any a priori information can be incorporated into the estimation procedure in several ways. It can be used to select an accurate reference point x0 for the first iteration of the least squares estimator. If the transmitter is known to be located within a region, but the estimated position is outside this region, a logical procedure is to change the estimate to the point in the region that is closest to the original estimate. If an a priori distribution function for the transmitter position can be specified, a Bayesian estimator can be determined. However, the Bayesian estimator is usually too complex a mathematical function to yield a simple computational algorithm unless simplifying assumptions are made about the a priori distribution [4].

The location estimate can be continually refined if a sequence of measurements is taken. If successive measurements are uncorrelated, a new least squares estimate can be determined by combining new measurements with the old estimate [3]. Since measurements do not have to be stored after processing, a significant computational savings is sometimes possible.
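The sequential refinement described above can be sketched with a standard information-form recursive least squares update (a common way to combine batches, shown here on invented data rather than any system from the paper): each uncorrelated batch adds its contribution to G^T N^-1 G and G^T N^-1 r, after which the raw measurements can be discarded.

```python
import numpy as np

# Sketch of sequential refinement for a linear model with unit-variance noise.
rng = np.random.default_rng(1)
x = np.array([3.0, -2.0])

def batch(m):
    # One batch of m scalar measurements r = G x + n, with N = I.
    G = rng.standard_normal((m, 2))
    return G, G @ x + rng.standard_normal(m)

# Running information state: Lam accumulates G^T G, y accumulates G^T r.
Lam = np.zeros((2, 2))
y = np.zeros(2)
G_all, r_all = [], []
for _ in range(5):
    G, r = batch(20)
    Lam += G.T @ G
    y += G.T @ r
    G_all.append(G)
    r_all.append(r)

x_seq = np.linalg.solve(Lam, y)       # estimate after folding in all batches

# Identical to one least squares fit over all stored measurements.
G_cat, r_cat = np.vstack(G_all), np.concatenate(r_all)
x_batch = np.linalg.solve(G_cat.T @ G_cat, G_cat.T @ r_cat)
```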
III. ESTIMATOR ACCURACY

If r is a Gaussian random vector, then (12) indicates that x^ is a Gaussian random vector. Its probability density function is

p(ξ) = [(2π)^(n/2) |P|^(1/2)]^(-1) exp[-(1/2)(ξ - m)^T P^(-1) (ξ - m)]   (20)

where m = E[x^] is the mean vector, and

P = E[(x^ - m)(x^ - m)^T]   (21)

is the covariance matrix given by (16). By definition, P is symmetric and positive semidefinite. Thus it has nonnegative eigenvalues. Equation (16) indicates that P^(-1) exists and is equal to G^T N^(-1) G. Therefore, P does not have zero as an eigenvalue. Thus P is positive definite.

The loci of constant density function values are described by equations of the form

(ξ - m)^T P^(-1) (ξ - m) = K   (22)

where K is a constant that determines the size of the n-dimensional region enclosed by the surface.
In two dimensions, the surface is an ellipse; in three dimensions, it is an ellipsoid; in the general case of n dimensions, it may be considered a hyperellipsoid. Unless P is a diagonal matrix, the principal axes of the hyperellipsoids are not aligned with the coordinate axes.

The probability that x^ lies inside the hyperellipsoid of (22) is

Pe(K) = ∫∫...∫_R p(ξ) dξ1 dξ2 ... dξn   (23)

where the region of integration is

R = {ξ : (ξ - m)^T P^(-1) (ξ - m) ≤ K}.   (24)

To reduce (23) to a single integral, we perform a succession of coordinate transformations. First, we translate the coordinate system so that its origin coincides with m by making the change of variables y = ξ - m. Since the Jacobian is unity, we obtain

Pe(K) = a ∫∫...∫_R1 exp(-(1/2) y^T P^(-1) y) dy1 dy2 ... dyn   (25)

where

R1 = {y : y^T P^(-1) y ≤ K}   (26)

a = [(2π)^(n/2) |P|^(1/2)]^(-1).   (27)

To simplify (25), we rotate the coordinate axes so that they are aligned with the principal axes of the hyperellipsoid. Because P is a symmetric positive-definite matrix, so is P^(-1). Therefore, an orthogonal matrix A (with eigenvectors as columns) exists that diagonalizes P^(-1). Thus A^T = A^(-1) and

A^T P^(-1) A = diag(λ1^(-1), λ2^(-1), ..., λn^(-1))   (28)

where λ1, λ2, ..., λn are the eigenvalues of P. A rotation of axes results in new variables defined by

ζ = A^T y.   (29)

Since A^T A = I and the determinant of the product of matrices is equal to the product of the determinants of the matrices, the determinant of A^T, which is the Jacobian of the transformation, is unity. Substituting (28) and (29) into (25) and (26) yields

Pe(K) = a ∫∫...∫_R2 exp(-(1/2) Σ_{i=1}^{n} ζi²/λi) dζ1 dζ2 ... dζn   (30)

where

R2 = {ζ : Σ_{i=1}^{n} ζi²/λi ≤ K}   (31)

and the ζi are the components of ζ. Region R2 is the interior of a hyperellipsoid with principal axes of lengths 2√(Kλi), i = 1, 2, ..., n. By introducing new variables

ηi = ζi/√λi,   i = 1, 2, ..., n   (32)

we can simplify (30) further. Since the determinant of P is equal to the product of the eigenvalues of P, (27) and (30) to (32) give

Pe(K) = (2π)^(-n/2) ∫∫...∫_R3 exp(-(1/2) Σ_{i=1}^{n} ηi²) dη1 dη2 ... dηn   (33)

where

R3 = {η : Σ_{i=1}^{n} ηi² ≤ K}.   (34)

The region of integration, which is indicated below the integral signs, is the interior of a hypersphere. It is shown below that the volume of an n-dimensional hypersphere of radius ρ is

Vn(ρ) = π^(n/2) ρ^n / Γ(n/2 + 1)   (35)

where Γ( ) is the gamma function. Therefore, the differential volume between ρ and ρ + dρ is

dV = [n π^(n/2) / Γ(n/2 + 1)] ρ^(n-1) dρ   (36)

and (33) can be reduced to

Pe(K) = [n / (2^(n/2) Γ(n/2 + 1))] ∫_0^√K ρ^(n-1) exp(-ρ²/2) dρ.   (37)

For n = 1, 2, and 3, this integral can be expressed in simpler terms:

Pe(K) = erf(√(K/2)),   n = 1   (38)

Pe(K) = 1 - exp(-K/2),   n = 2   (39)

Pe(K) = erf(√(K/2)) - √(2K/π) exp(-K/2),   n = 3   (40)

where the error function is defined by

erf(x) = (2/√π) ∫_0^x exp(-t²) dt.   (41)

Equation (40) is obtained by integration by parts.
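The closed forms (38)-(40) are easy to exercise numerically, including the numerical solution for K at a specified Pe that the text describes for the concentration ellipsoid. The bisection routine and the sample values below are illustrative; for n = 2, (39) gives the exact answer K = 2 ln 2 at Pe = 1/2.

```python
import math

# Pe(K) from (38)-(40).
def Pe(K, n):
    if n == 1:
        return math.erf(math.sqrt(K / 2))
    if n == 2:
        return 1.0 - math.exp(-K / 2)
    if n == 3:
        return (math.erf(math.sqrt(K / 2))
                - math.sqrt(2 * K / math.pi) * math.exp(-K / 2))
    raise ValueError("closed forms given for n = 1, 2, 3 only")

# Solve Pe(K) = p for K by bisection (Pe is increasing in K).
def solve_K(p, n, lo=0.0, hi=50.0):
    for _ in range(200):
        mid = (lo + hi) / 2
        if Pe(mid, n) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

K2 = solve_K(0.5, 2)   # should equal 2 ln 2 by (39)
```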
To verify (35), we define the volume of a hypersphere of radius ρ by

Vn(ρ) = ∫∫...∫ over {Σ_{i=1}^{n} xi² ≤ ρ²} of dx1 dx2 ... dxn.   (42)

A change of coordinates shows that

Vn(ρ) = ρ^n Vn(1)   (43)

where Vn(1) is the volume of a unit hypersphere. Straightforward calculations give

V1(1) = 2,   V2(1) = π   (44)

where the "volumes" are a length and an area, respectively. We define the sets

B = {(x1, x2) : x1² + x2² ≤ 1}   (45)

C = {(x3, ..., xn) : Σ_{i=3}^{n} xi² ≤ 1 - x1² - x2²}.   (46)

For n ≥ 3, Fubini's theorem for interchanging the order of integrations [5] and a change of coordinates in (42) give

Vn(1) = ∫∫_B [∫...∫_C dx3 ... dxn] dx1 dx2 = ∫∫_B V_{n-2}(√(1 - x1² - x2²)) dx1 dx2.   (47)

Equation (43) and further coordinate changes yield

Vn(1) = V_{n-2}(1) ∫∫_B (1 - x1² - x2²)^((n-2)/2) dx1 dx2
      = V_{n-2}(1) ∫_0^2π ∫_0^1 (1 - r²)^((n-2)/2) r dr dθ
      = π V_{n-2}(1) ∫_0^1 x^((n-2)/2) dx = 2π V_{n-2}(1)/n.   (48)

By induction, this recursion relation and (44) imply that

V_2m(1) = (2π)^m / [2·4···(2m)],   V_{2m+1}(1) = 2(2π)^m / [1·3···(2m+1)],   m = 1, 2, ....   (49)

We can express Vn(1) in terms of a compact formula by using the properties of the gamma function: Γ(t + 1) = tΓ(t); Γ(1) = 1; Γ(1/2) = √π. We obtain

Vn(1) = π^(n/2) / Γ(n/2 + 1),   n = 1, 2, ....   (50)

Combining (43) and (50) yields (35).

If Pe is specified, say Pe = 1/2, then (37), (38), (39), or (40) can be solved numerically to determine the corresponding value of K, which in turn defines a hyperellipsoid by (22). The concentration ellipsoid corresponding to probability Pe is defined to be the particular hyperellipsoid for which Pe is the probability that x^ lies inside it. Thus the concentration ellipsoid is a multidimensional measure of accuracy for an unbiased estimator.

A scalar measure of estimator accuracy is the root-mean-square error εr, which is defined by

εr² = E[Σ_{i=1}^{n} (x^_i - x_i)²].   (51)

Expanding (51) and using (21), we obtain

εr² = tr(P) + Σ_{i=1}^{n} bi²   (52)

where tr(P) denotes the trace of P and bi = E[x^_i] - x_i denotes a component of the bias vector b.
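The hypersphere-volume results above are simple to check numerically: the compact formula (50) reproduces the low-dimension values (44) and satisfies the recursion Vn(1) = 2π V_{n-2}(1)/n from (48). The dimension range tested below is arbitrary.

```python
import math

# Unit-hypersphere volume from the compact formula (50).
def V(n):
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

# Residuals of the recursion (48), V_n(1) = 2*pi*V_{n-2}(1)/n, for n = 3..11.
checks = [abs(V(n) - 2 * math.pi * V(n - 2) / n) for n in range(3, 12)]
```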
IV. TWO-DIMENSIONAL ESTIMATORS

For the estimator of a two-dimensional vector, such as position coordinates on the surface of the Earth, the bivariate covariance matrix can be expressed as

P = [σ1²  σ12; σ12  σ2²].   (53)

A straightforward calculation yields the eigenvalues

λ1 = (1/2)[σ1² + σ2² + √((σ1² - σ2²)² + 4σ12²)]   (54)

λ2 = (1/2)[σ1² + σ2² - √((σ1² - σ2²)² + 4σ12²)]   (55)

where the positive square root is used. By definition, λ1 ≥ λ2. Suppose that new coordinates are defined by rotating the axes of the old coordinate system counterclockwise through an angle θ, as shown in Fig. 3.

Fig. 3. Concentration ellipse and coordinate axes.

A vector represented by y in the old coordinates is represented in the new coordinates by ζ = A^T y, where A is the orthogonal matrix

A = [cos θ  -sin θ; sin θ  cos θ].   (56)
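The two-dimensional eigenvalue formulas (54)-(55) and the rotation (56) can be checked against a direct eigendecomposition. The covariance entries below are invented; the angle θ is taken from the eigenvector of the larger eigenvalue, so the rotation built as in (56) diagonalizes P.

```python
import numpy as np

# Invented bivariate covariance matrix in the form (53).
s1sq, s2sq, s12 = 4.0, 1.0, 0.8
P = np.array([[s1sq, s12], [s12, s2sq]])

# Eigenvalues from (54) and (55).
root = np.sqrt((s1sq - s2sq) ** 2 + 4 * s12 ** 2)
lam1 = 0.5 * (s1sq + s2sq + root)
lam2 = 0.5 * (s1sq + s2sq - root)

# Direct eigendecomposition for comparison (descending order).
eigvals = np.sort(np.linalg.eigvalsh(P))[::-1]

# Rotation (56) through the principal-axis angle diagonalizes P.
w, vecs = np.linalg.eigh(P)                  # ascending eigenvalues
theta = np.arctan2(vecs[1, 1], vecs[0, 1])   # angle of the lam1 eigenvector
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
D = A.T @ P @ A                              # should be diag(lam1, lam2)
```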

Citations
Journal ArticleDOI
01 Nov 2007
TL;DR: Comprehensive performance comparisons including accuracy, precision, complexity, scalability, robustness, and cost are presented.
Abstract: Wireless indoor positioning systems have become very popular in recent years. These systems have been successfully used in many applications such as asset tracking and inventory management. This paper provides an overview of the existing wireless indoor positioning solutions and attempts to classify different techniques and systems. Three typical location estimation schemes of triangulation, scene analysis, and proximity are analyzed. We also discuss location fingerprinting in detail since it is used in most current system or solutions. We then examine a set of properties by which location systems are evaluated, and apply this evaluation method to survey a number of existing systems. Comprehensive performance comparisons including accuracy, precision, complexity, scalability, robustness, and cost are presented.

4,123 citations

Journal ArticleDOI
TL;DR: Using the models, the authors have shown the calculation of a Cramer-Rao bound (CRB) on the location estimation precision possible for a given set of measurements in wireless sensor networks.
Abstract: Accurate and low-cost sensor localization is a critical requirement for the deployment of wireless sensor networks in a wide variety of applications. In cooperative localization, sensors work together in a peer-to-peer manner to make measurements and then form a map of the network. Various application requirements influence the design of sensor localization systems. In this article, the authors describe the measurement-based statistical models useful to describe time-of-arrival (TOA), angle-of-arrival (AOA), and received-signal-strength (RSS) measurements in wireless sensor networks. Wideband and ultra-wideband (UWB) measurements, and RF and acoustic media are also discussed. Using the models, the authors have shown the calculation of a Cramer-Rao bound (CRB) on the location estimation precision possible for a given set of measurements. The article briefly surveys a large and growing body of sensor localization algorithms. This article is intended to emphasize the basic statistical signal processing background necessary to understand the state-of-the-art and to make progress in the new and largely open areas of sensor network localization research.

3,080 citations

Journal ArticleDOI
TL;DR: An effective technique in locating a source based on intersections of hyperbolic curves defined by the time differences of arrival of a signal received at a number of sensors is proposed and is shown to attain the Cramer-Rao lower bound near the small error region.
Abstract: An effective technique in locating a source based on intersections of hyperbolic curves defined by the time differences of arrival of a signal received at a number of sensors is proposed. The approach is noniterative and gives an explicit solution. It is an approximate realization of the maximum-likelihood estimator and is shown to attain the Cramer-Rao lower bound near the small error region. Comparisons of performance with existing techniques of beamformer, spherical-interpolation, divide and conquer, and iterative Taylor-series methods are made. The proposed technique performs significantly better than spherical-interpolation, and has a higher noise threshold than divide and conquer before performance breaks away from the Cramer-Rao lower bound. It provides an explicit solution form that is not available in the beamforming and Taylor-series methods. Computational complexity is comparable to spherical-interpolation but substantially less than the Taylor-series method.

2,202 citations

Journal ArticleDOI
TL;DR: An overview of the measurement techniques in sensor network localization and the one-hop localization algorithms based on these measurements are provided and a detailed investigation on multi-hop connectivity-based and distance-based localization algorithms are presented.

1,870 citations

Journal ArticleDOI
01 Jun 1991
TL;DR: An algorithm for, model-based localization that relies on the concept of a geometric beacon, a naturally occurring environment feature that can be reliably observed in successive sensor measurements and can be accurately described in terms of a concise geometric parameterization, is developed.
Abstract: The application of the extended Kalman filter to the problem of mobile robot navigation in a known environment is presented. An algorithm for model-based localization that relies on the concept of a geometric beacon, a naturally occurring environment feature that can be reliably observed in successive sensor measurements and can be accurately described in terms of a concise geometric parameterization, is developed. The algorithm is based on an extended Kalman filter that utilizes matches between observed geometric beacons and an a priori map of beacon locations. Two implementations of this navigation algorithm, both of which use sonar, are described. The first implementation uses a simple vehicle with point kinematics equipped with a single rotating sonar. The second implementation uses a 'Robuter' mobile robot and six static sonar transducers to provide localization information while the vehicle moves at typical speeds of 30 cm/s.

1,394 citations

References
Book
01 Jan 1979
TL;DR: In this paper, the convergence of distributions is considered in the context of conditional probability, i.e., random variables and expected values, and the probability of a given distribution converging to a certain value.
Abstract: Probability. Measure. Integration. Random Variables and Expected Values. Convergence of Distributions. Derivatives and Conditional Probability. Stochastic Processes. Appendix. Notes on the Problems. Bibliography. List of Symbols. Index.

6,334 citations

Book
01 Jan 1974
TL;DR: This is the first book on the optimal estimation that places its major emphasis on practical applications, treating the subject more from an engineering than a mathematical orientation, and the theory and practice of optimal estimation is presented.
Abstract: This is the first book on optimal estimation that places its major emphasis on practical applications, treating the subject more from an engineering than a mathematical orientation. Even so, theoretical and mathematical concepts are introduced and developed sufficiently to make the book a self-contained source of instruction for readers without prior knowledge of the basic principles of the field. The work is the product of the technical staff of The Analytic Sciences Corporation (TASC), an organization whose success has resulted largely from its applications of optimal estimation techniques to a wide variety of real situations involving large-scale systems. Arthur Gelb writes in the Foreword that "It is our intent throughout to provide a simple and interesting picture of the central issues underlying modern estimation theory and practice. Heuristic, rather than theoretically elegant, arguments are used extensively, with emphasis on physical insights and key questions of practical importance." Numerous illustrative examples, many based on actual applications, have been interspersed throughout the text to lead the student to a concrete understanding of the theoretical material. The inclusion of problems with "built-in" answers at the end of each of the nine chapters further enhances the self-study potential of the text. After a brief historical prelude, the book introduces the mathematics underlying random process theory and state-space characterization of linear dynamic systems. The theory and practice of optimal estimation is then presented, including filtering, smoothing, and prediction. Both linear and non-linear systems, and continuous- and discrete-time cases, are covered in considerable detail. New results are described concerning the application of covariance analysis to non-linear systems and the connection between observers and optimal estimators.
The final chapters treat such practical and often pivotal issues as suboptimal structure and computer loading considerations. This book is an outgrowth of a course given by TASC at a number of US Government facilities. Virtually all of the members of the TASC technical staff have, at one time and in one way or another, contributed to the material contained in the work.

6,015 citations

Book
17 Jan 2012

3,718 citations

Book
01 Jan 1971
TL;DR: In underwater sonar systems, external acoustic noise is generated by waves and wind on the water surface, by biological agents (fish, prawns, etc.), and by man-made sources such as engine noise.
Abstract: Probability. Random Processes. Narrowband Signals. Gaussian Derived Processes. Hypothesis Testing. Detection of Known Signals. Detection of Signals with Random Parameters. Multiple Pulse Detection of Signals. Detection of Signals in Colored Gaussian Noise. Estimation of Signal Parameters. Extensions. References. Bibliography. Index.

1,421 citations

Book
01 Jan 1979
TL;DR: This text provides a comprehensive treatment of estimation theory which should be suitable for graduate level engineers and is suitable for students studying estimation theory.
Abstract: Much current interest in the areas of communication and control is devoted to a study of estimation theory. This text provides a comprehensive treatment of estimation theory which should be suitable for graduate level engineers. There are nine chapters in the text: Introduction to Estimation Theory, Review of Probability Theory and Random Variables, Stochastic Processes, Gauss-Markov Processes and Stochastic Differential Equations, Decision Theory, Basic Estimation Theory, The Optimum Linear Filter, Extensions of the Optimum Linear Filter, Nonlinear Estimation. (Author)

781 citations