
Recursive Total Least Squares: An
Alternative to Using the Discrete Kalman
Filter in Robot Navigation
Daniel L. Boley and Erik S. Steinmetz
Department of Computer Science
University of Minnesota
Minneapolis, MN 55455

Karen T. Sutherland
Department of Computer Science
University of Wisconsin La Crosse
La Crosse, WI 54601
Abstract
In the robot navigation problem, noisy sensor data must be filtered to obtain the best estimate of the robot position. The discrete Kalman filter, commonly used for prediction and detection of signals in communication and control problems, has become a popular method to reduce the effect of uncertainty from the sensor data. However, in the domain of robot navigation, sensor readings are not only uncertain, but can also be relatively infrequent compared to traditional signal processing applications. In addition, a good initial estimate of location, critical for Kalman convergence, is often not available. Hence, there is a need for a filter that is capable of converging with a poor initial estimate and many fewer readings than the Kalman filter. To this end, we propose the use of a Recursive Total Least Squares filter. This filter is easily updated to incorporate new sensor data, and in our experiments converged faster and to greater accuracy than the Kalman filter.
1 Introduction
The discrete Kalman filter, commonly used for prediction and detection of signals in communication and control problems, has become a popular method of reducing uncertainty in robot navigation. A brief summary of the Kalman filter can be found in [2] and a complete description in [9]. One of the main advantages of using the Kalman filter is that it is recursive, eliminating the necessity for storing large amounts of data. However, it requires a good initial estimate of the solution, and it assumes that the noise obeys a weighted white Gaussian distribution. The Kalman filter is guaranteed to be optimal only in that it finds the best solution in the least squares sense.

(This work was supported jointly by Minnesota Department of Transportation grant 71789-72996-173 and National Science Foundation grant CCR-9405380.)
Although originally designed as an estimator for dynamical systems, the filter is used in many applications as a static state estimator [13]. Also, because the functions involved are frequently non-linear, the extended Kalman filter (EKF), rather than the Kalman filter itself, is often used [1, 11]. In this case, the function is linearized by taking a first-order Taylor expansion, and this linear approximation is then used in the Kalman filter equations.
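For concreteness, the recursive structure described above can be sketched for the simplest case of a scalar static state (e.g., one coordinate of a stationary robot). This is an illustrative sketch, not taken from the paper; the initial guess x0, its variance p0, and the noise variances q and r0 are hypothetical parameters.

```python
import numpy as np

def kalman_1d(measurements, x0, p0, q, r0):
    """Minimal scalar Kalman filter for a static state.

    x0, p0: initial state estimate and its variance (hypothetical values)
    q:      process-noise variance (0 for a truly static state)
    r0:     measurement-noise variance (white Gaussian noise assumed)
    """
    x, p = x0, p0
    for z in measurements:
        p = p + q             # predict: variance grows by process noise
        k = p / (p + r0)      # Kalman gain
        x = x + k * (z - x)   # update estimate with the innovation
        p = (1.0 - k) * p     # update estimate variance
    return x, p

# Synthetic readings of a stationary position at 5.0 with noise std 0.5.
rng = np.random.default_rng(0)
true_pos = 5.0
zs = true_pos + rng.normal(0.0, 0.5, size=50)
est, var = kalman_1d(zs, x0=0.0, p0=10.0, q=0.0, r0=0.25)
```

With the deliberately poor initial guess x0 = 0, the estimate only approaches the true position after many readings, illustrating the convergence issue raised above.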
There are two basic problems which can occur when using either the Kalman or extended Kalman filter in robot navigation applications:

- Because the filter was developed for applications such as signal processing, it is assumed that many measurements are taken. Sensing in robot navigation, often done using camera images, is a time-consuming process. To be useful, a method must succeed with relatively few readings.

- An underlying assumption in any least squares estimation is that the entries in the data matrix are error-free [7], e.g., that the time intervals at which measurements are taken are exact. In many actual applications, the errors in the data matrix can be at least as great as the measurement errors. In such cases, the Kalman filter can give poor results.
Two additional problems occur when using the EKF:

- The linearization process itself has the potential to introduce significant error into the problem.

- The EKF is not guaranteed to be optimal or even to converge [14]. It can easily fall into a local minimum when the initial estimate of the solution is poor, often the type of situation faced by robot navigators.
Although limited modifications can be made to the Kalman approach to improve robustness to noise [12], our work in outdoor navigation [17], where measurements are expensive to obtain and have significant error inherent to the system, motivated us to look for another filtering method, preferably one which would not require numerous measurements to converge and did not assume an error-free data matrix.

[Figure 1: In an LS solution, as shown on the left, the sum of the squared vertical distances to the line of best fit is minimized. In a TLS solution, as shown on the right, the sum of the squared perpendicular distances to the line of best fit is minimized.]
As demonstrated by Mintz et al. [8], the criterion of optimality depends critically on the specific model being used. When error exists in both the measurement and the data matrix, the best solution in the least squares sense is often not as good as the best solution in the eigenvector sense, in which the sum of the squares of the perpendicular distances from the points to the line is minimized (Fig. 1). This second method is known in the statistical literature as orthogonal regression and in numerical analysis as total least squares (TLS) [18].
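The distinction illustrated in Fig. 1 can be made concrete with a small numerical sketch (synthetic data, not the paper's experiments): an ordinary LS line fit minimizes vertical residuals, while the TLS fit is read off from the right singular vector of the augmented matrix belonging to its smallest singular value.

```python
import numpy as np

# Noisy points roughly on the line y = 2x + 1, with error in BOTH
# coordinates, i.e., in the data matrix as well as the measurements.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 5.0, 40)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.3, size=x.size)
x = x + rng.normal(0.0, 0.3, size=x.size)

# Ordinary LS: minimize the sum of squared vertical distances.
A = np.column_stack([x, np.ones_like(x)])
slope_ls, icpt_ls = np.linalg.lstsq(A, y, rcond=None)[0]

# TLS: minimize the sum of squared perpendicular distances.  The right
# singular vector of (A, y) for the smallest singular value gives the
# solution x = -(v_1, ..., v_{p-1}) / v_p.
M = np.column_stack([A, y])
_, _, Vt = np.linalg.svd(M)
v = Vt[-1]                       # smallest right singular vector
slope_tls, icpt_tls = -v[:2] / v[2]
```

(Strictly speaking, TLS here also perturbs the constant column of ones; the sketch is meant only to contrast the two distance criteria.)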
In the next section we discuss the Recursive TLS algorithm; in Section 3 we present our experimental results; and in Section 4 we offer some concluding remarks.
2 Recursive Total Least Squares Algorithm
Given an overdetermined system of equations Ax = b, the TLS problem, in its simplest form, is to find the smallest perturbation to A and b that makes the system of equations compatible. Specifically, we seek a matrix E and vector f that minimize ||(E, f)||_2 such that (A + E)x = b + f for some vector x. The vector x corresponding to the optimal (E, f) is called the TLS solution. Recently, some recursive TLS filters have been developed for applications in signal processing [4, 5, 20]. Davila

[4] used a Kalman filter to obtain a fast update for the eigenvector corresponding to the smallest eigenvalue of the covariance matrix. This eigenvector was then used to solve a symmetric TLS problem for the filter. It was not explained how the algorithm might be modified for the case where the smallest eigenvalue is multiple (i.e., corresponding to a noise subspace of dimension higher than one) or variable (i.e., of unknown multiplicity). In [20], Yu described a method for the fast update of an approximate eigendecomposition of a covariance matrix. He replaced all the eigenvalues in the noise subspace with their "average", and did the same for the eigenvalues in the signal subspace, obtaining an approximation which would be accurate if the exact eigenvalues could be grouped into two clusters of known dimensions. In [5], DeGroat used this approach combined with the averaging technique used in [20], again assuming that the singular values could be grouped into two clusters. Recently, Bose et al. [3] applied Davila's algorithm to reconstruct images from noisy, undersampled frames after converting complex-valued image data into equivalent real data. All of these methods made some assumption that the singular values or eigenvalues could be well approximated by two tight clusters, one big and one small. In this paper, we present a recursive algorithm that makes very few assumptions about the distribution of the singular values.
The most common algorithms to compute the TLS solution are based on the Singular Value Decomposition (SVD), a non-recursive matrix decomposition which is computationally expensive to update. The TLS problem can be solved by the SVD using Algorithm 3.1 of [18]. The main computational cost of that algorithm occurs in the computation of the SVD, which is O(p^3) for each update. The basic solution method is sketched as follows. If v = (v_1, ..., v_p)^T is a right singular vector corresponding to the smallest singular value of (A, b), then it is well known that the TLS solution can be obtained by setting

    x = -(v_1, ..., v_{p-1})^T / v_p.

If the smallest singular value is multiple, then there are multiple TLS solutions, in which case one usually seeks the solution of smallest norm. If v_p is too small or zero, then the TLS solution may be too large or nonexistent, in which case an approximate solution of reasonable size can be obtained by using the next smallest singular value(s) [18].
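The SVD-based construction just described can be sketched in a few lines of NumPy (an illustrative sketch of the standard construction, not the paper's implementation). It also forms the rank-one correction (E, f), so the compatibility requirement (A + E)x = b + f can be checked directly.

```python
import numpy as np

def tls_solve(A, b):
    """TLS solution of A x ~ b via the SVD of the augmented matrix (A, b).

    Returns the solution x and the minimal perturbation (E, f) that makes
    (A + E) x = b + f exactly compatible (simple case: unique smallest
    singular value, v_p bounded away from zero).
    """
    M = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(M)
    v = Vt[-1]                       # right singular vector for smallest sigma
    if abs(v[-1]) < 1e-12:
        raise ValueError("v_p ~ 0: TLS solution too large or nonexistent")
    x = -v[:-1] / v[-1]
    Ef = -M @ np.outer(v, v)         # rank-one correction (E, f) = -(A,b) v v^T
    E, f = Ef[:, :-1], Ef[:, -1]
    return x, E, f

# Synthetic overdetermined system with a known solution.
rng = np.random.default_rng(2)
A = rng.normal(size=(30, 3))
b = A @ np.array([1.0, -2.0, 0.5]) + rng.normal(0.0, 0.01, size=30)
x, E, f = tls_solve(A, b)
# Compatibility check: (A + E) x equals b + f up to round-off.
residual = np.linalg.norm((A + E) @ x - (b + f))
```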
In cases such as the applications considered in this paper, where the exact TLS solution is still corrupted by external effects such as noise, it suffices to obtain an approximate TLS solution at much less cost. We seek a method that can obtain a good approximation to the TLS solution, but which admits rapid updating as new data samples arrive. One such method is the so-called ULV Decomposition, first introduced by Stewart [15] as a means to obtain an approximate SVD which can be easily updated as new data arrives, without making any a priori assumptions about the overall distribution of the singular values. The ULV Decomposition of a real n x p matrix A (where n >= p) is a triple of matrices U, L, V plus a rank index r, where A = U L V^T, V is p x p and orthogonal, L is p x p and lower triangular, U has the same shape as A with orthonormal columns, and where L has the form

    L = [ C 0 ]
        [ E F ]

where C (r x r) encapsulates the "large" singular values of A, (E, F) ((p - r) x p) approximately encapsulates the p - r smallest singular values of A, and the last p - r columns of V encapsulate the corresponding trailing right singular vectors. To solve the TLS problem, the U matrix is not required, hence we need to carry only L, V, and the effective rank r. Therefore, a given ULV Decomposition can be represented just by the triple [L, V, r].
The feature that makes this decomposition of interest is the fact that, when a new row of coefficients is appended to the A matrix, L, V, and r can be updated in only O(p^2) operations, with L restored to the standard form above, as opposed to the O(p^3) cost for an SVD. In this way, it is possible to track the leading r-dimensional "signal subspace" or the remaining "noise subspace" relatively cheaply. Details on the updating process can be found in [15, 10].
We can adapt the ULV Decomposition to solve the Total Least Squares (TLS) problem Ax ~ b, where new measurements b are continually being added, as originally proposed in [2]. The adaptation of the ULV to the TLS problem has also been analyzed independently in great detail in [19], though the recursive updating process was not discussed at length. For our specific purposes, let A be an n x (p - 1) matrix and b be an n-vector, where p is fixed and n is growing as new measurements arrive. Then, given a ULV Decomposition of the matrix (A, b) and an approximate TLS solution to Ax ~ b, our task is to find a TLS solution x^ to the augmented system A^ x^ ~ b^, where

    A^ = [ lambda A ]       b^ = [ lambda b ]
         [   a^T    ],           [   beta   ],

and lambda is an optional exponential forgetting factor [9].
The RTLS Algorithm: Start with [L, V, r], the ULV Decomposition of (A, b), and the coefficients a^T, beta for the new incoming equation a^T x = beta.
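The remaining steps of the algorithm fall outside this excerpt. As a point of reference only, the recursion it accelerates can be sketched naively: append each new (forgetting-factor-weighted) equation and re-solve TLS with a full SVD at O(p^3) per step. This is not the paper's O(p^2) ULV update; the forgetting factor lambda = 0.98 and the synthetic data below are hypothetical.

```python
import numpy as np

def rtls_naive(rows, betas, lam=1.0):
    """Naive recursive TLS baseline: after each new equation a^T x = beta,
    re-solve TLS on the forgetting-factor-weighted augmented matrix with a
    full SVD.  This costs O(p^3) per update; the ULV-based RTLS algorithm
    performs the corresponding tracking in O(p^2) per update.
    """
    M = np.empty((0, rows.shape[1] + 1))
    estimates = []
    for a, beta in zip(rows, betas):
        # Down-weight old equations by lam, then append the new row (a^T, beta).
        M = np.vstack([lam * M, np.append(a, beta)])
        if M.shape[0] >= M.shape[1]:         # wait until overdetermined
            _, _, Vt = np.linalg.svd(M)
            v = Vt[-1]                       # smallest right singular vector
            estimates.append(-v[:-1] / v[-1])
    return estimates

# Synthetic stream of equations with a known solution.
rng = np.random.default_rng(3)
x_true = np.array([0.7, -1.2])
rows = rng.normal(size=(25, 2))
betas = rows @ x_true + rng.normal(0.0, 0.01, size=25)
ests = rtls_naive(rows, betas, lam=0.98)
```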
