Statistical Theory of Passive Location Systems

DON J. TORRIERI
U.S. Army Countermeasures/Counter-Countermeasures Center
A derivation of the principal algorithms and an analysis of the performance of the two most important passive location systems for stationary transmitters, hyperbolic location systems and direction-finding location systems, are presented. The concentration ellipse, the circular error probability, and the geometric dilution of precision are defined and related to the location-system and received-signal characteristics. Doppler and other passive location systems are briefly discussed.
Manuscript received June 21, 1983.
I. INTRODUCTION

The position of a stationary transmitter or radiating emitter can be estimated from passive measurements of the arrival times, directions of arrival, or Doppler shifts of electromagnetic waves received at various sites. This paper presents a derivation of the principal algorithms and an analysis of the two most important passive location systems for stationary transmitters: hyperbolic location systems and direction-finding location systems [1, 2].
Hyperbolic location systems, often called time difference of arrival (TDOA) systems, locate a transmitter by processing signal arrival-time measurements at three or more stations. The measurements at the various stations are sent to a station that is designated the master station and does the processing. The arrival-time measurements at two stations are combined to produce a relative arrival time that, in the absence of noise and interference, restricts the possible transmitter location to a hyperboloid with the two stations as foci.
Transmitter location is estimated from the intersections of three or more independently generated hyperboloids determined from at least four stations. If the transmitter and the stations lie in the same plane, location is estimated from the intersections of two or more hyperbolas determined from three or more stations. Fig. 1 illustrates two hyperbolas, each of which has two branches, derived from measurements at three stations. The two hyperbolas have two points of intersection. The resulting location ambiguity may be resolved by using a priori information about the location, bearing measurements at one or more of the stations, or a fourth station to generate an additional hyperbola.
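As a numerical sketch of this geometry (the station and emitter coordinates below are hypothetical, not from the paper): in the absence of noise, a relative arrival time fixes the range difference to a station pair, which is precisely the defining property of a hyperbola with those stations as foci.

```python
import math

# Hypothetical planar geometry (coordinates in km, c in km/s);
# these numbers are illustrative, not from the paper.
c = 3.0e5
stations = [(0.0, 0.0), (40.0, 0.0), (20.0, 30.0)]
emitter = (25.0, 12.0)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Noiseless arrival times for a common (unknown) emission time t0 = 0.
t = [dist(emitter, s) / c for s in stations]

# The relative arrival time between stations 1 and 2 fixes the range
# difference d1 - d2 = c*(t1 - t2); every point with this range
# difference lies on one branch of a hyperbola with the two stations
# as foci, so the emitter is restricted to that branch.
dd_12 = c * (t[0] - t[1])
dd_23 = c * (t[1] - t[2])
print(dd_12, dd_23)
```

With a second station pair, a second hyperbola is obtained, and the emitter lies on an intersection of the two, as in Fig. 1.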
Fig. 2 depicts an aircraft with a direction-finding location system that makes bearing measurements at three different points in its trajectory. The intersection of two bearing lines provides an estimate of the location of the transmitter, which may be on the surface of the Earth or airborne. In the presence of noise, more than two bearing lines will not intersect at a single point. However, appropriate processing allows an improved estimate of the transmitter position.
The following three sections of this paper present the basic methods of estimation applicable to transmitter location and determine the accuracy of suitable estimators. Sections 5 and 6, respectively, consider passive location systems using arrival-time and bearing measurements. Section 7 summarizes the use of Doppler information. Since the next three sections provide the theoretical framework for the statistical analysis of any passive location system, the reader who is only interested in the applications may wish to omit this material, referring to it as necessary while reading Sections 5-7.
Author's address: U.S. Department of the Army, Countermeasures/Counter-Countermeasures Center, 2800 Powder Mill Rd., Adelphi, MD 20783.

0018-9251/84/0300-0183 $1.00 © 1984 IEEE

II. ESTIMATION METHODS
IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS, VOL. AES-20, NO. 2, MARCH 1984

Fig. 1. Intersecting hyperbolas from three stations.

Fig. 2. Bearing lines from aircraft positions.

The components of an n-dimensional vector $x$ that is to be estimated are the position coordinates in two or three dimensions and possibly other parameters, such as the time of emission of the radiation.
A set of $N$ measurements $r_i$, $i = 1, 2, \ldots, N$, is collected at various positions. In the absence of random measurement errors, $r_i$ is equal to a known function $f_i(x)$. In the presence of additive errors,

$$r_i = f_i(x) + n_i, \qquad i = 1, 2, \ldots, N. \tag{1}$$

These $N$ equations can be written as a single equation for $N$-dimensional column vectors:

$$r = f(x) + n. \tag{2}$$
The measurement error $n$ is assumed to be a multivariate random vector with an $N \times N$ positive-definite covariance matrix

$$N = E[(n - E[n])(n - E[n])^T] \tag{3}$$

where $E[\,\cdot\,]$ denotes the expected value and the superscript $T$ denotes the transpose.
If $x$ is regarded as an unknown but nonrandom vector and $n$ is assumed to have a zero mean and a Gaussian distribution, then the conditional density function of $r$ given $x$ is

$$p(r|x) = [(2\pi)^{N/2} |N|^{1/2}]^{-1} \exp\{-\tfrac{1}{2}[r - f(x)]^T N^{-1} [r - f(x)]\} \tag{4}$$
where $|N|$ denotes the determinant of $N$ and the superscript $-1$ denotes the inverse. Because $N$ is symmetric and positive definite, its inverse exists. The maximum likelihood estimator is the value of $x$ that maximizes (4). Thus the maximum likelihood estimator minimizes the quadratic form

$$Q(x) = [r - f(x)]^T N^{-1} [r - f(x)]. \tag{5}$$
Minimization of $Q(x)$ is a reasonable criterion for determination of an estimator even when the additive error cannot be assumed to be Gaussian. In this case, the resulting estimator is called the least squares estimator and $N^{-1}$ is regarded as a matrix of weighting coefficients.
In general, $f(x)$ is a nonlinear vector function. To determine a reasonably simple estimator, $f(x)$ can be linearized by expanding it in a Taylor series about a reference point specified by the vector $x_0$ and retaining the first two terms; that is, we use

$$f(x) \approx f(x_0) + G(x - x_0) \tag{6}$$

where $x$ and $x_0$ are $n \times 1$ column vectors and $G$ is the $N \times n$ matrix of derivatives evaluated at $x_0$:

$$G = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\ \vdots & & \vdots \\ \dfrac{\partial f_N}{\partial x_1} & \cdots & \dfrac{\partial f_N}{\partial x_n} \end{bmatrix}_{x = x_0}. \tag{7}$$
Each row of this matrix is the gradient vector of one of the components of $f(x)$. The vector $x_0$ could be an estimate of $x$ determined from a previous iteration of the estimation procedure or based upon a priori information. It is assumed in the subsequent analysis that $x_0$ is sufficiently close to $x$ that (6) is an accurate approximation.
Combining (5) and (6) gives

$$Q(x) = (r_1 - Gx)^T N^{-1} (r_1 - Gx) \tag{8}$$

where

$$r_1 = r - f(x_0) + G x_0. \tag{9}$$
To determine the necessary condition for the estimator $\hat{x}$ that minimizes $Q(x)$, we calculate the gradient of $Q(x)$, defined by

$$\nabla_x Q(x) = \left[\frac{\partial Q}{\partial x_1} \; \frac{\partial Q}{\partial x_2} \; \cdots \; \frac{\partial Q}{\partial x_n}\right]^T \tag{10}$$

and then solve for the $x$ such that $\nabla_x Q(x) = 0$. From its definition, $N$ is a symmetric matrix; that is, $N^T = N$. Since $(N^{-1})^T = (N^T)^{-1}$, it follows that $(N^{-1})^T = N^{-1}$, which implies that $N^{-1}$ is a symmetric matrix. Therefore,

$$\nabla_x Q(x)\big|_{x = \hat{x}} = 2 G^T N^{-1} G \hat{x} - 2 G^T N^{-1} r_1 = 0. \tag{11}$$
We assume that the matrix $G^T N^{-1} G$ is nonsingular. Thus the solution of (11) is

$$\hat{x} = (G^T N^{-1} G)^{-1} G^T N^{-1} r_1 = x_0 + (G^T N^{-1} G)^{-1} G^T N^{-1} [r - f(x_0)]. \tag{12}$$
Using (12), direct calculation shows that (8) can be written in the form
$$Q(x) = (x - \hat{x})^T G^T N^{-1} G (x - \hat{x}) - r_1^T N^{-1} G (G^T N^{-1} G)^{-1} G^T N^{-1} r_1 + r_1^T N^{-1} r_1 \tag{13}$$

where only the first term depends upon $x$.
Since $N$ is symmetric and positive definite, it has positive eigenvalues. If $Ne = \lambda e$, then $N^{-1} e = \lambda^{-1} e$. Thus if $e$ is an eigenvector of $N$ with eigenvalue $\lambda$, then $e$ is also an eigenvector of $N^{-1}$ with eigenvalue $1/\lambda$. Since it is symmetric and its eigenvalues are positive, $N^{-1}$ is positive definite. Therefore, $x = \hat{x}$ minimizes $Q(x)$. The estimator of (12) is called the linearized least squares estimator.
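A minimal numerical sketch of (12), assuming an invented measurement model (ranges from hypothetical beacon positions) purely for illustration; each pass re-linearizes about the latest estimate, using it as the reference point $x_0$ for the next iteration:

```python
import numpy as np

# Sketch of the linearized least squares estimator (12),
# x_hat = x0 + (G^T N^-1 G)^-1 G^T N^-1 [r - f(x0)], applied
# iteratively. The range-measurement model and beacon coordinates
# below are hypothetical, for illustration only.
beacons = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [80.0, 90.0]])

def f(x):
    # f_i(x): range from position x to beacon i.
    return np.linalg.norm(beacons - x, axis=1)

def G(x):
    # N x n matrix of partial derivatives df_i/dx_j evaluated at x.
    return (x - beacons) / f(x)[:, None]

x_true = np.array([30.0, 40.0])
r = f(x_true)                    # noiseless measurements
Ninv = np.linalg.inv(np.eye(len(beacons)))   # weighting matrix N^-1

x0 = np.array([50.0, 50.0])      # initial reference point
for _ in range(10):
    Gm = G(x0)
    P = np.linalg.inv(Gm.T @ Ninv @ Gm)      # covariance, cf. (16)
    x0 = x0 + P @ Gm.T @ Ninv @ (r - f(x0))
print(x0)
```

With noiseless data the iteration converges to the true position, and the matrix `P` computed along the way is the estimator covariance of (16).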
Substituting (2) into (12) and rearranging terms, the expression for $\hat{x}$ can be written in the form

$$\hat{x} = x + (G^T N^{-1} G)^{-1} G^T N^{-1} [f(x) - f(x_0) - G(x - x_0) + n] \tag{14}$$

which shows how the estimator error is affected by the linearization error and the noise.
The bias of the estimator $\hat{x}$ is defined as $b = E[\hat{x}] - x$. Using (14), we obtain

$$b = (G^T N^{-1} G)^{-1} G^T N^{-1} \{f(x) - f(x_0) - G(x - x_0) + E[n]\}. \tag{15}$$
If $f(x)$ is linear, as in (6), and $E[n] = 0$, then the least squares estimator is unbiased. If systematic errors occur in the measurements, then $E[n] \neq 0$. To minimize the estimator bias due to systematic errors, the magnitude of each $E[n_i]$ should be minimized through system calibrations. If some of the $E[n_i]$ are known functions of various parameters and $N$ is sufficiently large, these parameters can be made components of the vector $x$ and estimated along with the other components of $x$. The bias due to the nonlinearity of $f(x)$ can be estimated by expanding $f(x)$ in a Taylor series about $x_0$ and retaining second-order terms.
Let $P$ denote the covariance matrix of $\hat{x}$. Equation (14) yields

$$P = E[(\hat{x} - E[\hat{x}])(\hat{x} - E[\hat{x}])^T] = (G^T N^{-1} G)^{-1}. \tag{16}$$

The diagonal elements of $P$ give the variances of the errors in the estimated components of $x$. Since $P$ is part of the estimator given by (12), one can compute both the estimate and its covariance simultaneously.
If $n$ is zero-mean Gaussian, the maximum likelihood or least squares estimator for the linearized model is the same as the minimum variance unbiased estimator [3].
The measurement error vector $n$ is assumed to encompass all the contributions to error, including uncertainties in the system or physical parameters, such as the station coordinates or the speed of propagation. If $q$ is a vector of the parameters, then the measurement vector $r$ can often be expressed as

$$r = f_1(x, q) + n_1 \tag{17}$$

where $f_1(\,\cdot\,)$ is a vector function and $n_1$ is the random error due to causes unrelated to uncertainties in $q$. Let $q_0$ denote the assumed value of $q$.
If $q_0$ is sufficiently close to $q$, then a Taylor series expansion yields

$$f_1(x, q) \approx f_1(x, q_0) + G_1 (q - q_0) \tag{18}$$

where $G_1$ is the matrix of derivatives with respect to $q$ evaluated at $q_0$.
Equation (2) results from making the identifications

$$f(x) = f_1(x, q_0), \qquad n = G_1 (q - q_0) + n_1. \tag{19}$$
If $q$ is nonrandom, then the parameter uncertainties ultimately contribute to the bias of the least squares estimator. If $q$ is random, then the variance and possibly the bias are affected.
Any a priori information can be incorporated into the estimation procedure in several ways. It can be used to select an accurate reference point $x_0$ for the first iteration of the least squares estimator. If the transmitter is known to be located within a region, but the estimated position is outside this region, a logical procedure is to change the estimate to the point in the region that is closest to the original estimate.
If an a priori distribution function for the transmitter position can be specified, a Bayesian estimator can be determined. However, the Bayesian estimator is usually too complex a mathematical function to yield a simple computational algorithm unless simplifying assumptions are made about the a priori distribution [4].
The location estimate can be continually refined if a sequence of measurements is taken. If successive measurements are uncorrelated, a new least squares estimate can be determined by combining new measurements with the old estimate [3]. Since measurements do not have to be stored after processing, a significant computational savings is sometimes possible.
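A minimal sketch of such a combination step for a linear measurement model (this is a generic information-form recursive least squares update in the spirit of the technique cited from [3], not an algorithm given in this paper; all data below are synthetic):

```python
import numpy as np

# Information-form sequential update for a linear model r = G x + n:
# each new uncorrelated measurement block refines the old estimate
# and covariance without re-storing past measurements.
def update(x_old, P_old, G, r, N):
    P_new = np.linalg.inv(np.linalg.inv(P_old) + G.T @ np.linalg.inv(N) @ G)
    x_new = x_old + P_new @ G.T @ np.linalg.inv(N) @ (r - G @ x_old)
    return x_new, P_new

rng = np.random.default_rng(0)
x_true = np.array([1.0, -2.0])
x, P = np.zeros(2), np.eye(2) * 1e6     # weakly informative prior
for _ in range(50):
    G = rng.normal(size=(3, 2))
    noise = rng.normal(scale=0.01, size=3)
    x, P = update(x, P, G, G @ x_true + noise, np.eye(3) * 1e-4)
print(x)
```

Each call folds one block of three measurements into the running estimate; after 50 blocks the estimate is close to `x_true` and `P` has shrunk accordingly.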
III. ESTIMATOR ACCURACY
If $r$ is a Gaussian random vector, then (12) indicates that $\hat{x}$ is a Gaussian random vector. Its probability density function is

$$p(\xi) = [(2\pi)^{n/2} |P|^{1/2}]^{-1} \exp[-\tfrac{1}{2} (\xi - m)^T P^{-1} (\xi - m)] \tag{20}$$

where $m = E[\hat{x}]$ is the mean vector, and

$$P = E[(\hat{x} - m)(\hat{x} - m)^T] \tag{21}$$

is the covariance matrix given by (16).
By definition, $P$ is symmetric and positive semidefinite. Thus it has nonnegative eigenvalues. Equation (16) indicates that $P^{-1}$ exists and is equal to $G^T N^{-1} G$. Therefore, $P$ does not have zero as an eigenvalue. Thus $P$ is positive definite.
The loci of constant density function values are described by equations of the form

$$(\xi - m)^T P^{-1} (\xi - m) = K \tag{22}$$

where $K$ is a constant that determines the size of the $n$-dimensional region enclosed by the surface.
In two dimensions, the surface is an ellipse; in three dimensions, it is an ellipsoid; in the general case of $n$ dimensions, it may be considered a hyperellipsoid. Unless $P$ is a diagonal matrix, the principal axes of the hyperellipsoids are not aligned with the coordinate axes.
The probability that $\hat{x}$ lies inside the hyperellipsoid of (22) is

$$P_e(K) = \int \cdots \int_R p(\xi)\, d\xi_1\, d\xi_2 \cdots d\xi_n \tag{23}$$

where the region of integration is

$$R = \{\xi : (\xi - m)^T P^{-1} (\xi - m) \le K\}. \tag{24}$$
To reduce (23) to a single integral, we perform a succession of coordinate transformations. First, we translate the coordinate system so that its origin coincides with $m$ by making the change of variables $y = \xi - m$. Since the Jacobian is unity, we obtain

$$P_e(K) = a \int \cdots \int_{R_1} \exp\left(-\tfrac{1}{2}\, y^T P^{-1} y\right) dy_1\, dy_2 \cdots dy_n \tag{25}$$

where
$$R_1 = \{y : y^T P^{-1} y \le K\} \tag{26}$$

and

$$a = [(2\pi)^{n/2} |P|^{1/2}]^{-1}. \tag{27}$$
To simplify (25), we rotate the coordinate axes so that they are aligned with the principal axes of the hyperellipsoid. Because $P$ is a symmetric positive-definite matrix, so is $P^{-1}$. Therefore, an orthogonal matrix $A$ (with eigenvectors as columns) exists that diagonalizes $P^{-1}$. Thus $A^T = A^{-1}$ and

$$A^T P^{-1} A = \begin{bmatrix} \lambda_1^{-1} & & 0 \\ & \ddots & \\ 0 & & \lambda_n^{-1} \end{bmatrix} \tag{28}$$

where $\lambda_1, \lambda_2, \ldots, \lambda_n$ are the eigenvalues of $P$. A rotation of axes results in new variables defined by

$$\zeta = A^T y. \tag{29}$$

Since $A^T A = I$ and the determinant of a product of matrices is equal to the product of the determinants of the matrices, the determinant of $A^T$, which is the Jacobian of the transformation, is unity. Substituting (28) and (29) into (25) and (26) yields

$$P_e(K) = a \int \cdots \int_{R_2} \exp\left(-\frac{1}{2} \sum_{i=1}^{n} \frac{\zeta_i^2}{\lambda_i}\right) d\zeta_1\, d\zeta_2 \cdots d\zeta_n \tag{30}$$
where

$$R_2 = \left\{\zeta : \sum_{i=1}^{n} \frac{\zeta_i^2}{\lambda_i} \le K\right\} \tag{31}$$

and the $\zeta_i$ are the components of $\zeta$. Region $R_2$ is the interior of a hyperellipsoid with principal axes of lengths $2\sqrt{K\lambda_i}$, $i = 1, 2, \ldots, n$. By introducing new variables

$$\eta_i = \zeta_i / \sqrt{\lambda_i}, \qquad i = 1, 2, \ldots, n \tag{32}$$

we can simplify (30) further. Since the determinant of $P$ is equal to the product of the eigenvalues of $P$, (27) and (30) to (32) give

$$P_e(K) = (2\pi)^{-n/2} \int \cdots \int_{R_3} \exp\left(-\frac{1}{2} \sum_{i=1}^{n} \eta_i^2\right) d\eta_1\, d\eta_2 \cdots d\eta_n \tag{33}$$

where

$$R_3 = \left\{\eta : \sum_{i=1}^{n} \eta_i^2 \le K\right\}. \tag{34}$$

The region of integration, which is indicated below the integral signs, is the interior of a hypersphere. It is shown below that the volume of an $n$-dimensional hypersphere of radius $\rho$ is

$$V_n(\rho) = \frac{\pi^{n/2} \rho^n}{\Gamma(n/2 + 1)} \tag{35}$$

where $\Gamma(\,\cdot\,)$ is the gamma function. Therefore, the differential volume between $\rho$ and $\rho + d\rho$ is

$$dV = \frac{n \pi^{n/2} \rho^{n-1}}{\Gamma(n/2 + 1)}\, d\rho \tag{36}$$

and (33) can be reduced to

$$P_e(K) = \frac{n}{2^{n/2}\, \Gamma(n/2 + 1)} \int_0^{\sqrt{K}} \rho^{n-1} \exp\left(-\frac{\rho^2}{2}\right) d\rho. \tag{37}$$
For $n = 1$, 2, and 3, this integral can be expressed in simpler terms:

$$P_e(K) = \operatorname{erf}\left(\sqrt{K/2}\right), \qquad n = 1 \tag{38}$$

$$P_e(K) = 1 - \exp(-K/2), \qquad n = 2 \tag{39}$$

$$P_e(K) = \operatorname{erf}\left(\sqrt{K/2}\right) - \sqrt{\frac{2K}{\pi}} \exp(-K/2), \qquad n = 3 \tag{40}$$

where the error function is defined by

$$\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x \exp(-t^2)\, dt. \tag{41}$$
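These closed forms are easy to exercise numerically (an illustrative sketch; the step count and tolerance below are arbitrary choices). For $n = 2$, (39) inverts in closed form, giving $K = 2 \ln 2$ for $P_e = 1/2$; and (40) can be cross-checked against direct numerical integration of (37):

```python
import math

# Evaluate P_e(K) for n = 1, 2, 3 using (38)-(40).
def pe(K, n):
    if n == 1:
        return math.erf(math.sqrt(K / 2.0))
    if n == 2:
        return 1.0 - math.exp(-K / 2.0)
    if n == 3:
        return (math.erf(math.sqrt(K / 2.0))
                - math.sqrt(2.0 * K / math.pi) * math.exp(-K / 2.0))
    raise ValueError("closed forms given only for n = 1, 2, 3")

# For n = 2, P_e = 1/2 inverts in closed form: K = 2 ln 2.
K50 = 2.0 * math.log(2.0)

# Trapezoidal integration of (37) for n = 3, where the constant
# n / (2^(n/2) * Gamma(n/2 + 1)) equals sqrt(2/pi).
def pe3_numeric(K, steps=20000):
    h = math.sqrt(K) / steps
    s = 0.0
    for i in range(steps + 1):
        p = i * h
        w = 0.5 if i in (0, steps) else 1.0
        s += w * p * p * math.exp(-p * p / 2.0)
    return math.sqrt(2.0 / math.pi) * s * h

print(pe(K50, 2), pe(4.0, 3), pe3_numeric(4.0))
```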
Equation (40) is obtained by integration by parts. To verify (35), we define the volume of a hypersphere of radius $\rho$ by
$$V_n(\rho) = \underset{\sum x_i^2 \le \rho^2}{\int \cdots \int} dx_1\, dx_2 \cdots dx_n. \tag{42}$$
A change of coordinates shows that

$$V_n(\rho) = \rho^n V_n(1) \tag{43}$$

where $V_n(1)$ is the volume of a unit hypersphere. Straightforward calculations give

$$V_1(1) = 2, \qquad V_2(1) = \pi \tag{44}$$

where the "volumes" are a length and an area, respectively.
We define the sets

$$B = \{(x_1, x_2) : x_1^2 + x_2^2 \le 1\} \tag{45}$$

$$C = \left\{(x_3, \ldots, x_n) : \sum_{i=3}^{n} x_i^2 \le 1 - x_1^2 - x_2^2\right\}. \tag{46}$$
For $n \ge 3$, Fubini's theorem for interchanging the order of integrations [5] and a change of coordinates in (42) give

$$V_n(1) = \iint_B \left[\int \cdots \int_C dx_3 \cdots dx_n\right] dx_1\, dx_2 = \iint_B V_{n-2}\left(\sqrt{1 - x_1^2 - x_2^2}\right) dx_1\, dx_2. \tag{47}$$

Equation (43) and further coordinate changes yield

$$V_n(1) = V_{n-2}(1) \iint_B (1 - x_1^2 - x_2^2)^{(n-2)/2}\, dx_1\, dx_2 = V_{n-2}(1) \int_0^{2\pi}\!\!\int_0^1 (1 - r^2)^{(n-2)/2}\, r\, dr\, d\theta = \pi V_{n-2}(1) \int_0^1 x^{(n-2)/2}\, dx = \frac{2\pi V_{n-2}(1)}{n}. \tag{48}$$
By induction, this recursion relation and (44) imply that

$$V_{2m}(1) = \frac{(2\pi)^m}{2 \cdot 4 \cdots (2m)}, \qquad V_{2m-1}(1) = \frac{2(2\pi)^{m-1}}{1 \cdot 3 \cdots (2m - 1)}, \qquad m = 1, 2, \ldots. \tag{49}$$

We can express $V_n(1)$ in terms of a compact formula by using the properties of the gamma function: $\Gamma(t + 1) = t\Gamma(t)$; $\Gamma(1) = 1$; $\Gamma(1/2) = \sqrt{\pi}$. We obtain

$$V_n(1) = \frac{\pi^{n/2}}{\Gamma(n/2 + 1)}, \qquad n = 1, 2, \ldots. \tag{50}$$

Combining (43) and (50) yields (35).
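The recursion (48) with the starting values (44) can be checked numerically against the closed form (50):

```python
import math

# Closed form (50): V_n(1) = pi^(n/2) / Gamma(n/2 + 1).
def v_closed(n):
    return math.pi ** (n / 2.0) / math.gamma(n / 2.0 + 1.0)

# Recursion (48), V_n(1) = 2*pi*V_{n-2}(1)/n, seeded by (44).
v = {1: 2.0, 2: math.pi}
for n in range(3, 11):
    v[n] = 2.0 * math.pi * v[n - 2] / n

for n in range(1, 11):
    assert abs(v[n] - v_closed(n)) < 1e-12

print(v[3])  # 4*pi/3, the familiar unit-sphere volume
```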
If $P_e$ is specified, say $P_e = 1/2$, then (37), (38), (39), or (40) can be solved numerically to determine the corresponding value of $K$, which in turn defines a hyperellipsoid by (22). The concentration ellipsoid corresponding to probability $P_e$ is defined to be the particular hyperellipsoid for which $P_e$ is the probability that $\hat{x}$ lies inside it. Thus the concentration ellipsoid is a multidimensional measure of accuracy for an unbiased estimator.

A scalar measure of estimator accuracy is the root-mean-square error $\epsilon_r$, which is defined by

$$\epsilon_r^2 = E\left[\sum_{i=1}^{n} (\hat{x}_i - x_i)^2\right]. \tag{51}$$

Expanding (51) and using (21), we obtain

$$\epsilon_r^2 = \operatorname{tr}(P) + \sum_{i=1}^{n} b_i^2 \tag{52}$$

where $\operatorname{tr}(P)$ denotes the trace of $P$ and $b_i = E[\hat{x}_i] - x_i$ denotes a component of the bias vector $b$.
Fig. 3. Concentration ellipse and coordinate axes.
IV. TWO-DIMENSIONAL ESTIMATORS

For the estimator of a two-dimensional vector, such as position coordinates on the surface of the Earth, the bivariate covariance matrix can be expressed as

$$P = \begin{bmatrix} \sigma_1^2 & \sigma_{12} \\ \sigma_{12} & \sigma_2^2 \end{bmatrix}. \tag{53}$$

A straightforward calculation yields the eigenvalues

$$\lambda_1 = \tfrac{1}{2}\left[\sigma_1^2 + \sigma_2^2 + \sqrt{(\sigma_1^2 - \sigma_2^2)^2 + 4\sigma_{12}^2}\right] \tag{54}$$

$$\lambda_2 = \tfrac{1}{2}\left[\sigma_1^2 + \sigma_2^2 - \sqrt{(\sigma_1^2 - \sigma_2^2)^2 + 4\sigma_{12}^2}\right] \tag{55}$$

where the positive square root is used. By definition, $\lambda_1 \ge \lambda_2$. Suppose that new coordinates are defined by rotating the axes of the old coordinate system counterclockwise through an angle $\theta$, as shown in Fig. 3. A vector represented by $\gamma$ in the old coordinates is represented in the new coordinates by $\zeta = A^T \gamma$, where $A$ is the orthogonal matrix

$$A = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}. \tag{56}$$
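As a sketch of how (39), (54), and (55) combine in practice (the covariance entries below are invented for illustration): inverting (39) for a chosen probability gives $K$, and the concentration ellipse then has semiaxes $\sqrt{K\lambda_1}$ and $\sqrt{K\lambda_2}$ along its principal directions.

```python
import math

# Hypothetical bivariate covariance entries (sigma1^2, sigma12,
# sigma2^2); the numbers are illustrative only.
s1sq, s12, s2sq = 4.0, 1.5, 2.0

# Eigenvalues of P from (54) and (55); lambda1 >= lambda2.
root = math.sqrt((s1sq - s2sq) ** 2 + 4.0 * s12 ** 2)
lam1 = 0.5 * (s1sq + s2sq + root)
lam2 = 0.5 * (s1sq + s2sq - root)

# For n = 2, inverting (39) gives K for a chosen probability P_e.
Pe = 0.5
K = -2.0 * math.log(1.0 - Pe)

# Semiaxes of the concentration ellipse (half the principal-axis
# lengths 2*sqrt(K*lambda_i)).
a = math.sqrt(K * lam1)
b = math.sqrt(K * lam2)
print(lam1, lam2, a, b)
```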