
TRELLIS-BASED SEARCH OF THE MAXIMUM A POSTERIORI SEQUENCE USING PARTICLE FILTERING

Tanya Bertozzi*, Didier Le Ruyet†, Gilles Rigal* and Han Vu-Thien†

* DIGINEXT, 45 Impasse de la Draille, 13857 Aix en Provence Cedex 3, France, Email: bertozzi@diginext.fr
† CNAM, 292 rue Saint Martin, 75141 Paris Cedex 3, France
ABSTRACT
For a given computational complexity, the Viterbi algorithm applied to the discrete representation of the state space provided by standard particle filtering outperforms the particle filtering. However, the computational complexity of the Viterbi algorithm is still high. In this paper, we propose to use the M and T algorithms in order to reduce the computational complexity of the Viterbi algorithm, and we show that these algorithms enable a reduction of the number of particles by up to 20%, practically without loss of performance with respect to the Viterbi algorithm.
1. INTRODUCTION
Many real systems of data analysis require the estimation of unknown quantities from measurements provided by sensors. In general, the physical phenomenon can be represented by a mathematical model, which describes the time evolution of the unknown quantities, called the hidden state, and their interactions with the observations. Often, the observations arrive sequentially in time and it is of interest to update the estimate of the hidden state at each instant. Except in a few special cases, including linear Gaussian state space models and hidden finite-state space Markov chains, it is impossible to derive an exact analytical solution to the problem of sequential estimation of the hidden state. For over thirty years, many approximation schemes have been proposed to solve this problem, and recently the approach which has received the most interest is based on particle filtering techniques [1]. These methods iteratively approximate the posterior distribution of the hidden state given the observations by weighted points, or particles, which evolve in the state space. Therefore, particle filtering gives a discrete approximation of the state space of a continuous state space model.
In [2], the estimation of the hidden state using standard particle filtering is compared to the estimation performed by the Viterbi Algorithm (VA) [3]-[4], where the trellis is built from the discrete representation of the state space provided by the particle filtering. For a given computational complexity, the VA outperforms the standard particle filtering. However, the computational complexity of this solution is still high, since the VA analyzes all the possible paths arriving at each particle.
In this paper, we propose to apply the M algorithm [5] and the T algorithm [6] in order to reduce the computational complexity of the VA built on the particle states. This paper is organized as follows. In Section II the system model is presented. The structure of the standard particle filtering is introduced in Section III. Section IV describes the VA, the M and the T algorithms built on the particle states. Finally, simulation results are given in Section V.
2. THE STATE SPACE MODEL
The standard Markovian state space model is represented by the
following expressions:
$$\begin{cases} x_k = f(x_{k-1}, w_k) \\ y_k = h(x_k, v_k), \end{cases} \quad (1)$$
where $k \geq 1$ is a discrete time index, and $w_k$ and $v_k$ are independent white noises. The functions $f$ and $h$ can involve nonlinearity and the noises $w_k$ and $v_k$ can be non-Gaussian. The first equation describes the time evolution of the hidden state $x_k$ and the second equation shows the interactions between the observation $y_k$ and the hidden state. In this paper, we consider the filtering problem, i.e., the estimation of the hidden state $x_t$ at a time $t$ from the observations $y_{1:t} = \{y_1, \cdots, y_t\}$. The estimation of the hidden state can be obtained by the Minimum Mean Square Error (MMSE) method or by the Maximum A Posteriori (MAP) method.
The MMSE solution is given by the following expectation:
$$\hat{x}_t = E[x_t | y_{1:t}]. \quad (2)$$
The calculation of (2) involves the knowledge of the filtering distribution $p(x_t | y_{1:t})$. When this distribution is multimodal, the MMSE estimate is located between the modes and is far from the true value of the hidden state. In this case, it is preferable to use the MAP method, which provides the estimate of the hidden state sequence $x_{1:t} = \{x_1, \cdots, x_t\}$:
$$\hat{x}_{1:t} = \arg\max_{x_{1:t}} p(x_{1:t} | y_{1:t}). \quad (3)$$
The calculation of (3) requires the knowledge of the posterior distribution $p(x_{1:t} | y_{1:t})$.
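To make the notation concrete, the generic model (1) is easy to simulate in code. The following Python sketch is ours, not the paper's: f, h and the noise samplers are placeholders to be supplied for a specific problem.

```python
import numpy as np

def simulate(f, h, draw_w, draw_v, x0, t):
    """Draw x_1..x_t and y_1..y_t from the state space model (1).
    f, h, draw_w and draw_v are user-supplied callables (placeholders)."""
    xs, ys = [], []
    x = x0
    for k in range(1, t + 1):
        x = f(x, draw_w())   # state transition: x_k = f(x_{k-1}, w_k)
        y = h(x, draw_v())   # observation:      y_k = h(x_k, v_k)
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)
```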
3. THE STANDARD PARTICLE FILTERING
The aim of the standard particle filtering is to approximate recursively in time the posterior distribution $p(x_{1:t} | y_{1:t})$ with weighted particles:
$$p(x_{1:t} | y_{1:t}) \approx \sum_{i=1}^{N} \tilde{w}_t^i\, \delta(x_t - x_t^i) \cdots \delta(x_1 - x_1^i), \quad (4)$$
where $N$ is the number of particles, $\tilde{w}_t^i$ is the normalized weight associated with the particle $i$ and $\delta(x_k - x_k^i)$ denotes the Dirac delta centered at $x_k = x_k^i$ for $k = 1, \cdots, t$. The iteration is achieved by evolving the particles from time 1 to time $t$, using the Sequential Importance Sampling and Resampling (SISR) methods [7]. In general, an initial distribution $p(x_0)$ of the hidden state is available. Initially, the supports $\{x_0^i;\ i = 1, \cdots, N\}$ of the particles are drawn according to the initial distribution. The evolution of the particles from time $k$ to time $k+1$ is achieved with an importance sampling distribution [8]. At each time $k$ the particles are drawn according to the importance function $\pi(x_k | x_{0:k-1}, y_{1:k})$. The importance function makes it possible to calculate recursively in time the weights associated with the particles:
$$w_k^i = w_{k-1}^i\, \frac{p(y_k | x_k^i)\, p(x_k^i | x_{k-1}^i)}{\pi(x_k^i | x_{0:k-1}^i, y_{1:k})}, \quad (5)$$
where $k \geq 1$, $i = 1, \cdots, N$ and $w_0^i = 1/N$ for all $i$. The normalized weights are given by:
$$\tilde{w}_k^i = \frac{w_k^i}{\sum_{j=1}^{N} w_k^j}. \quad (6)$$
This algorithm presents a degeneracy phenomenon. After a few iterations of the algorithm, only one particle has a normalized weight almost equal to 1 and the other weights are very close to zero. This problem of the SIS method can be eliminated by resampling the particles. A measure of the degeneracy is the effective sample size $N_{eff}$ [9]-[10], estimated by:
$$\hat{N}_{eff} = \frac{1}{\sum_{i=1}^{N} (\tilde{w}_k^i)^2}. \quad (7)$$
When $\hat{N}_{eff}$ is below a fixed threshold $N_{thres}$, the particles are resampled according to the weight distribution [7]. After each resampling task, the normalized weights are initialized to $1/N$.
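As a rough illustration of equations (5)-(7) and the resampling rule, one SISR iteration could be sketched as follows. The function names (sample_pi, log_pi, log_trans, log_lik) are our own placeholders for the densities of the model, assumed to broadcast over particle arrays; weights are kept in the log domain for numerical safety.

```python
import numpy as np

def sisr_step(particles, log_w, y, sample_pi, log_pi, log_trans, log_lik,
              n_thres, rng):
    """One SISR iteration: propagate, reweight per (5)-(6), resample if (7)
    falls below the threshold. A sketch under the assumptions above."""
    N = len(particles)
    new = np.array([sample_pi(xp, y, rng) for xp in particles])
    # Weight update (5): w_k^i = w_{k-1}^i p(y_k|x_k^i) p(x_k^i|x_{k-1}^i) / pi(...)
    log_w = log_w + log_lik(y, new) + log_trans(new, particles) \
            - log_pi(new, particles, y)
    log_w -= np.max(log_w)                 # stabilise before exponentiating
    w = np.exp(log_w)
    w_tilde = w / w.sum()                  # normalization (6)
    n_eff = 1.0 / np.sum(w_tilde ** 2)     # effective sample size (7)
    if n_eff < n_thres:                    # resample on degeneracy
        idx = rng.choice(N, size=N, p=w_tilde)
        new = new[idx]
        log_w = np.full(N, -np.log(N))     # weights reset to 1/N
    return new, log_w
```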
The optimal importance function, which minimizes the degeneracy of the SIS algorithm, is given by:
$$\pi(x_k | x_{0:k-1}, y_{1:k}) = p(x_k | x_{k-1}, y_k). \quad (8)$$
In the general case of a nonlinear non-Gaussian state space model, (8) cannot be evaluated in analytical form. It is only possible to calculate (8) exactly when the noises $w_k$ and $v_k$ are Gaussian and the function $h$ is linear. If $w_k$ and $v_k$ are Gaussian and $h$ is nonlinear, we can obtain an approximation of (8) by linearizing the function $h$ at $x_k = f(x_{k-1}, w_k)$ [7]. A simpler choice for the importance function is the prior distribution:
$$\pi(x_k | x_{0:k-1}, y_{1:k}) = p(x_k | x_{k-1}), \quad (9)$$
however, this method can be inefficient, since the state space is explored a priori, without taking the observations into account.
Using the SISR methods, we can compute an MMSE estimate of the hidden state at each time $k$:
$$\hat{x}_k = \int x_k\, p(x_k | y_{1:k})\, dx_k = \int x_k \sum_{i=1}^{N} \tilde{w}_k^i\, \delta(x_k - x_k^i)\, dx_k = \sum_{i=1}^{N} x_k^i\, \tilde{w}_k^i. \quad (10)$$
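In code, (10) reduces to a weighted average of the particle supports; a one-line sketch under the same placeholder conventions as above:

```python
import numpy as np

def mmse_estimate(particles, w_tilde):
    # (10): the MMSE estimate is the weighted mean of the particle supports.
    return np.sum(w_tilde * particles)
```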
For the MAP estimate, the maximization in (3) is performed only over the $N$ particle sequences. Applying Bayes' theorem to the posterior distribution at time $k$:
$$p(x_{1:k} | y_{1:k}) = \frac{p(y_k | x_k)\, p(x_k | x_{k-1})}{p(y_k | y_{1:k-1})}\, p(x_{1:k-1} | y_{1:k-1}), \quad (11)$$
Fig. 1. Application of the VA in a particle trellis ($N = 4$).
we observe that the posterior distribution in (3) associated with each particle can be computed iteratively:
$$\lambda_k^i = \lambda_{k-1}^i + \ln p(y_k | x_k^i) + \ln p(x_k^i | x_{k-1}^i), \quad (12)$$
where we have omitted the normalization term, identical for each particle, and $\lambda_k^i$ denotes the metric of the particle $i$ at time $k$. At time $t$, the MAP estimate coincides with the path in the state space of the particle with maximum $\lambda_t^i$.
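Equation (12) translates directly into a per-particle log-metric update. The sketch below vectorizes it over the $N$ particles, with log_lik and log_trans standing in for $\ln p(y_k | x_k^i)$ and $\ln p(x_k^i | x_{k-1}^i)$ (our placeholder names, as before):

```python
def map_metric_update(lam_prev, x_prev, x_new, y, log_lik, log_trans):
    # (12): lambda_k^i = lambda_{k-1}^i + ln p(y_k|x_k^i) + ln p(x_k^i|x_{k-1}^i),
    # applied particle by particle (numpy arrays broadcast elementwise).
    return lam_prev + log_lik(y, x_new) + log_trans(x_new, x_prev)
```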
4. COMPLEXITY REDUCTION OF THE VITERBI ALGORITHM
The VA, introduced by Viterbi in 1967 [3] and analyzed in detail by Forney in 1973 [4], is a dynamic programming algorithm which provides an iterative way of finding the most probable sequence, in the MAP sense, of hidden states of a finite-state discrete-time Markov model. It reduces the complexity of the problem by avoiding the need to examine every path through the trellis. However, in the most general case of a continuous-state space model, the VA cannot be applied. In [2], the authors have proposed to perform the VA on the discrete trellis built by a SISR technique. Each particle represents a state with a metric expressed by (12). An example of a particle trellis is represented in Fig. 1. We consider the generic transition from time $k-1$ to time $k$. At time $k$, the VA analyzes all the possible paths which reach the arrival particle $p_a$, for $p_a = 1, \cdots, N$. The metric associated with a possible path in the particle trellis from a departure particle $p_d$ at time $k-1$ to $p_a$ is given by:
$$\lambda_k^{p_a} = \lambda_{k-1}^{p_d} + \ln p(y_k | x_k^{p_a}) + \ln p(x_k^{p_a} | x_{k-1}^{p_d}). \quad (13)$$
Among these paths from all the $p_d$ to $p_a$, only the path with the maximum metric is kept. At the final instant $t$, the MAP estimate of the hidden state sequence coincides with the path of the particle with maximum metric. While the computational complexity of the SISR algorithm is proportional to the number $N$ of particles, the computational complexity of the VA is proportional to $N^2$. In [2], the authors have shown that the VA processed on a trellis of $N$ particles outperforms a SISR algorithm with $N^2$ particles. The problem is that $N^2$ can assume very high values. In this paper, we propose to reduce the computational complexity of the VA using the M and T algorithms, while keeping the same performance.
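For concreteness, the VA on the particle trellis can be sketched as below, using the transition metric (13). This is our own sketch: particles[k] holds the $N$ particle states at time $k$, the log-density helpers are the same placeholders as before and are assumed to broadcast, and the initial metric uses only the first observation term (an assumption, since the paper does not spell out the initialization).

```python
import numpy as np

def viterbi_particle_trellis(particles, ys, log_lik, log_trans):
    """MAP sequence search on a particle trellis via (13); O(t N^2)."""
    T, N = len(particles), len(particles[0])
    lam = log_lik(ys[0], particles[0])           # initial metrics (assumption)
    back = np.zeros((T, N), dtype=int)
    for k in range(1, T):
        # trans[d, a] = lambda_{k-1}^{p_d} + ln p(x_k^{p_a} | x_{k-1}^{p_d})
        trans = lam[:, None] + log_trans(particles[k][None, :],
                                         particles[k - 1][:, None])
        back[k] = np.argmax(trans, axis=0)       # best predecessor per arrival
        lam = trans[back[k], np.arange(N)] + log_lik(ys[k], particles[k])
    # Backtrack from the particle with the maximum final metric.
    path = [int(np.argmax(lam))]
    for k in range(T - 1, 0, -1):
        path.append(int(back[k][path[-1]]))
    path.reverse()
    return np.array([particles[k][i] for k, i in enumerate(path)])
```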
The M algorithm retains the M best paths, with $M$ less than the total number of states, from one iteration to the next. The T algorithm, on the other hand, keeps a variable number of paths depending on the threshold parameter $T$. First, let us adapt the M algorithm to the particle trellis built by the SISR algorithm.

Fig. 2. Application of the M algorithm in a particle trellis ($N = 4$, $M = 2$).
At time 1, we consider all the possible paths from the departure particles $p_d$ to the arrival particles $p_a$ and we retain one path for each $p_a$, as in the VA. At this point, we introduce a new step: from the $N$ arrival particles we keep the $M$ particles with the best metrics, where $M < N$. At the next time 2, the number of departure particles is $M$ and the number of arrival particles is $N$. Therefore, only $MN$ paths from $p_d$ to $p_a$ are possible. At time 2, we again retain the $M$ particles with the best metrics, and we proceed through the trellis in this way up to the final time $t$. The path of the particle with maximum metric at time $t$ represents the MAP estimate of the hidden state sequence, as in the VA. This M algorithm has a computational complexity proportional to $MN$. An example is shown in Fig. 2.
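One M-algorithm transition could be sketched as follows, under the same placeholder density functions as before: only the $M$ surviving departure particles are extended, so each step costs $O(MN)$ instead of $O(N^2)$.

```python
import numpy as np

def m_step(lam_surv, x_surv, x_new, y, log_lik, log_trans, M):
    """One M-algorithm transition: extend only the surviving departures,
    pick the best predecessor per arrival via (13), then keep the M
    arrival particles with the best metrics (a sketch)."""
    trans = lam_surv[:, None] + log_trans(x_new[None, :], x_surv[:, None])
    best_dep = np.argmax(trans, axis=0)          # best predecessor per arrival
    lam = trans[best_dep, np.arange(len(x_new))] + log_lik(y, x_new)
    keep = np.argsort(lam)[-M:]                  # M best arrivals survive
    return lam[keep], x_new[keep], best_dep[keep]
```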
Let us now consider the T algorithm. At time 1, we perform the VA. Then, among the arrival particles we determine the particle with the maximum metric. We calculate the difference between the maximum metric and the metrics of the other arrival particles. When this difference is greater than a given threshold $T$, the particle is discarded. At the next time 2, there are $N_1 \leq N$ departure particles and $N$ arrival particles. As a consequence, only $N_1 N$ paths from $p_d$ to $p_a$ have to be considered. At time 2, we retain the $N_2$ particles which survive the threshold test, and we proceed through the trellis in this way up to the final time $t$. The path of the particle with maximum metric at time $t$ represents the MAP estimate of the hidden state sequence, as in the VA. This T algorithm has a mean computational complexity proportional to $\bar{N}N$, where $\bar{N}$ is the mean number of surviving particles at each instant. An example is given in Fig. 3.
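The T-algorithm survivor test itself is a one-liner; applied after each Viterbi transition, it returns the indices of the surviving arrival particles (our sketch; $T$ is the threshold of the text).

```python
import numpy as np

def t_prune(lam, T):
    """Keep arrival particles whose metric is within T of the maximum;
    a variable number survive, hence the data-dependent complexity."""
    return np.flatnonzero(np.max(lam) - lam <= T)
```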
5. SIMULATION RESULTS
In order to compare the simulation results of the standard SISR,
Viterbi, M and T algorithms, we consider the following nonlinear
Gaussian state space model [11]-[12]-[2]:
$$\begin{cases} x_k = \dfrac{1}{2}\, x_{k-1} + 25\, \dfrac{x_{k-1}}{1 + x_{k-1}^2} + 8 \cos(1.2\,k) + w_k \\[2mm] y_k = \dfrac{x_k^2}{20} + v_k, \end{cases} \quad (14)$$
where the time index satisfies $1 \leq k \leq t$ with $t = 200$, the density of the initial hidden state $x_0$ is Gaussian with zero mean and variance 5, and $w_k$ and $v_k$ are mutually independent white Gaussian noises with zero mean and variances respectively equal to 10 and 1.

Fig. 3. Application of the T algorithm in a particle trellis ($N = 4$, $\bar{N} = 2$).
Fig. 4. Filtering distribution.
In this case, the filtering distribution $p(x_k | y_{1:k})$ can be bimodal, as illustrated in Fig. 4. This figure is obtained by applying a SISR with $N = 1000$ particles, the importance function $\pi(x_k | x_{0:k-1}, y_{1:k}) = p(x_k | x_{k-1})$ and a resampling step made at each time (bootstrap filter, [11]).
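For reference, the benchmark model (14) can be simulated directly; this sketch uses the parameters quoted above ($t = 200$, $\mathrm{Var}[x_0] = 5$, $\mathrm{Var}[w_k] = 10$, $\mathrm{Var}[v_k] = 1$) with an arbitrary seed of our choosing.

```python
import numpy as np

rng = np.random.default_rng(0)             # arbitrary seed
t = 200
x = rng.normal(0.0, np.sqrt(5.0))          # x_0 ~ N(0, 5)
xs, ys = [], []
for k in range(1, t + 1):
    x = 0.5 * x + 25.0 * x / (1.0 + x**2) + 8.0 * np.cos(1.2 * k) \
        + rng.normal(0.0, np.sqrt(10.0))   # w_k ~ N(0, 10)
    y = x**2 / 20.0 + rng.normal(0.0, 1.0) # v_k ~ N(0, 1)
    xs.append(x)
    ys.append(y)
```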
To evaluate the performance of the different algorithms described above, we use the mean of the absolute value of the filtering error over $N_r = 100$ realizations of the algorithms:
$$m_{err} = \frac{1}{N_r} \sum_{n=1}^{N_r} m_{err}(n) = \frac{1}{N_r} \sum_{n=1}^{N_r} \frac{1}{t} \sum_{k=1}^{t} |x_k(n) - \hat{x}_k(n)|, \quad (15)$$
where $m_{err}(n)$ is the mean of the absolute value of the filtering error for the realization $n$ and $x_k(n)$ is the hidden state at time $k$ for the realization $n$, and the variance of this error:
$$\sigma_{err}^2 = \frac{1}{N_r} \sum_{n=1}^{N_r} \sigma_{err}^2(n) = \frac{1}{N_r} \sum_{n=1}^{N_r} \frac{1}{t} \sum_{k=1}^{t} \left( |x_k(n) - \hat{x}_k(n)| - m_{err}(n) \right)^2, \quad (16)$$
where $\sigma_{err}^2(n)$ is the variance of the absolute value of the filtering error for the realization $n$. For the standard particle filtering, we use a SISR algorithm with an importance function calculated by linearizing the observation model (14) and a resampling threshold $N_{thres} = N/5$. We consider only the MAP estimate, which in this case gives better performance because the filtering distribution can be bimodal.
       SISR MAP algorithm       Viterbi algorithm
N      Mean      Variance      Mean      Variance
100    0.906583  3.029062      0.870553  5.982229
250    xxxx      xxxx          0.849045  5.219399
500    xxxx      xxxx          0.804020  3.936105
1000   xxxx      xxxx          0.784171  3.394146

Table 1. Performance of the SISR and Viterbi algorithms.
       M algorithm
N      M      Mean      Variance
100    90     0.878995  6.145979
       80     0.910603  6.684327
250    225    0.853668  5.273226
       200    0.871307  5.648677
500    450    0.808492  4.013840
       400    0.834472  4.514141
1000   900    0.795176  3.545520
       800    0.807538  3.827843

Table 2. Performance of the M algorithm.
The obtained results are shown in Tables 1, 2 and 3. In Table 1, the SISR algorithm and the VA have the same computational complexity, proportional to $N^2$, since the former uses $N^2$ particles. We notice that for a given computational complexity, the VA outperforms the standard particle filtering. In Table 2, the computational complexity of the M algorithm is proportional to $MN$. If $M$ is less than $N$ by 10%, we obtain almost the same performance as the VA. Nevertheless, if we reduce the number of particles by 20%, the performance degrades. In Table 3, the mean computational complexity of the T algorithm is proportional to $\bar{N}N$. We observe that the T algorithm presents better performance than the M algorithm. Applying the T algorithm, we can reduce the number of particles by up to nearly 20% practically without loss of performance with regard to the VA.

       T algorithm
N      $\bar{N}$      Mean      Variance
100    86.461960      0.873065  6.048468
       77.598141      0.886633  6.445502
250    217.145276     0.850905  5.271618
       195.566131     0.861759  5.637627
500    434.812261     0.806332  4.020476
       391.834975     0.824472  4.598820
1000   869.416231     0.786935  3.480737
       784.061206     0.810000  4.154820

Table 3. Performance of the T algorithm.
6. CONCLUSION
In this paper, we have analyzed the problem of the estimation of a nonlinear non-Gaussian hidden state, generally solved with the SISR algorithm. The particles of the SISR algorithm provide a discrete representation of the state space. If we view the particles as the states of a trellis, we can search for the most likely sequence using the VA. We have shown that for a given complexity, the VA outperforms the SISR algorithm. However, the computational complexity of this solution is still high. As a consequence, we have proposed the M and the T algorithms in order to reduce the computational complexity of the VA. We can conclude that these algorithms enable a reduction of the number of particles by up to 20%, practically without loss of performance with regard to the VA.
7. REFERENCES
[1] A. Doucet, J. F. G. de Freitas and N. J. Gordon, Sequential Monte Carlo Methods in Practice. New York: Springer-Verlag, 2001.

[2] S. Godsill, A. Doucet and M. West, "Maximum a posteriori sequence estimation using Monte Carlo particle filters," Annals of the Institute of Statistical Mathematics, Vol. 52, No. 1, pp. 82–96, 2001.

[3] A. Viterbi, "Error bounds for convolutional codes and an asymptotically optimum decoding algorithm," IEEE Trans. on Inform. Theory, Vol. 13, pp. 260–269, Apr. 1967.

[4] G. D. Forney, "The Viterbi algorithm," Proc. of the IEEE, Vol. 61, No. 3, pp. 268–278, March 1973.

[5] J. B. Anderson and S. Mohan, "Sequential coding algorithms: a survey and cost analysis," IEEE Trans. on Com., Vol. 32, pp. 169–176, Feb. 1984.

[6] S. J. Simmons, "Breadth-first trellis decoding with adaptive effort," IEEE Trans. on Com., Vol. 38, pp. 3–12, Jan. 1990.

[7] A. Doucet, S. Godsill and C. Andrieu, "On sequential Monte Carlo sampling methods for Bayesian filtering," Statistics and Computing, Vol. 10, No. 3, pp. 197–208, 2000.

[8] J. Geweke, "Bayesian inference in econometric models using Monte Carlo integration," Econometrica, Vol. 57, pp. 1317–1339, 1989.

[9] A. Kong, J. S. Liu and W. H. Wong, "Sequential imputations and Bayesian missing data problems," Journ. of the American Statistical Association, Vol. 89, pp. 278–288, 1994.

[10] J. S. Liu, "Metropolized independent sampling with comparison to rejection sampling and importance sampling," Statistics and Computing, Vol. 6, pp. 113–119, 1996.

[11] N. J. Gordon, D. J. Salmond and A. F. M. Smith, "Novel approach to nonlinear/non-Gaussian Bayesian state estimation," IEE Proc., Vol. 140(2), pp. 107–113, 1993.

[12] G. Kitagawa, "Monte Carlo filter and smoother for non-Gaussian nonlinear state space models," Journ. of Computational and Graphical Statistics, Vol. 5(1), pp. 1–25, 1996.