APPLICATION OF SPARSE EIGENVALUE TECHNIQUES TO THE SMALL SIGNAL STABILITY ANALYSIS OF LARGE POWER SYSTEMS

L. Wang        A. Semlyen
Department of Electrical Engineering
University of Toronto
Toronto, Ontario, Canada
Abstract - This paper presents two sparsity-based eigenvalue techniques - simultaneous iterations and the modified Arnoldi method - and their application to the small signal stability analysis of large power systems.

Simultaneous iterations and the modified Arnoldi method are two recently developed methods for large, sparse, unsymmetrical eigenvalue problems, and have been reported as very efficient in computing the partial eigensolution of several types of matrices, such as stochastic ones. It is shown in this paper that they can also be applied successfully to the matrices derived for small signal stability studies of power systems. An algorithm utilizing these two methods is proposed for calculating the eigenvalues around a fixed point which can be placed at will in various parts of the complex plane. Sparsity is fully preserved in the algorithm by using the augmented system state equations as the linearized power system small signal model and performing the corresponding sparsity-oriented calculations. Several applications of the algorithm are discussed and illustrated by numerical examples.

The proposed methods and algorithm have been tested on two test systems with 20 and 50 machines, respectively. The results show that they are suitable for the eigenanalysis of large power systems.

Keywords: Small signal stability, Eigenvalues, Sparse methods.
INTRODUCTION
The evaluation of the small signal stability of power systems requires the calculation of the eigenvalues of a very large unsymmetrical and nonsparse matrix. The well-known QR method is robust and converges fast [1], but cannot be implemented with sparsity techniques, so that its application is limited to relatively small power systems. On the other hand, for a large power system with thousands of state variables, it is usually required to calculate only a specific set of eigenvalues with certain features of interest, for example, local mechanical modes, inter-area modes, etc. Therefore, significant effort has been expended to develop or apply new methods with the following three basic properties:

(a) Sparsity techniques can be used
(b) A specific set of eigenvalues can be found efficiently
(c) Mathematical robustness is guaranteed, i.e. good convergence characteristics and numerical stability.
This paper was sponsored by the IEEE Power Engineering Society for presentation at the IEEE Power Industry Computer Application Conference, Seattle, Washington, May 1-5, 1989. Manuscript was published in the 1989 PICA Conference Record.
Among these, (a) is of utmost importance since it provides the possibility to handle large power systems. Several sparsity-based methods have been proposed in recent years. PEALS [2] is mainly aimed at the computation of slow inter-area oscillatory modes; the S-Method [3] is most efficient for finding the unstable modes; STEPS [4] can be used for computing the eigenvalues belonging to a small study zone; [5] gives an implementation of the inverse iterations. In addition to these methods, [6] and [7] also report special methods to solve the eigenvalue problem of large power systems.
This paper presents two sparsity-based eigenvalue techniques - simultaneous iterations and the modified Arnoldi method - and their application to the small signal stability analysis of large power systems. These two methods are mathematically well-developed and both have been proved to be very efficient in computing the dominant eigenvalues of large, sparse, unsymmetrical matrices [8,9]. The former is an extension of the classical power method with a tactically designed interaction analysis which makes the method converge reliably. The latter is similar to the well-known Lanczos method, but more reliable because appropriate modifications give it better numerical properties. Both simultaneous iterations and the modified Arnoldi method are successful in the eigenanalysis of power systems, as will be illustrated by various numerical examples.
For the small signal stability analysis, an algorithm is proposed to make the eigenvalue problem of power systems fit the two methods mentioned above. Sparsity is fully preserved in the algorithm by using the augmented system state equations as the linearized power system small signal model and performing the corresponding sparsity-oriented calculations. A simple spectral transformation - the fractional transformation - is then applied to the augmented state matrix to make dominant the eigenvalues around a specified shift point, so that a group of eigenvalues near the shift point can be computed by either of the two methods. This algorithm is most suitable for calculating a desired number of eigenvalues nearest to, or all eigenvalues within a certain distance from, the shift point. For example, if the local mechanical modes are of interest, shift points with typical frequencies between 1 and 2 Hz can be used to sequentially calculate the eigenvalues in this area.

Two test systems with 20 and 50 machines, respectively, have been chosen to test the performance of the proposed methods and algorithm. Comparisons are also made between the two eigenvalue methods and other formerly used techniques. Some means for improving the methods, as well as experience with the application of the algorithm, are discussed and illustrated by numerical examples.
SOLUTION METHODS
Sparsity-Based Eigenvalue Techniques
Since the eigenanalysis of modern power systems deals with matrices of very large dimension, sparsity techniques play a key role in the analysis. A survey of the available sparsity-based eigenvalue techniques for general unsymmetrical matrices results in the following four methods:
0885-8950/90/0500-0635$01.00 © 1990 IEEE
(a) Power method and inverse iterations
(b) Simultaneous iterations
(c) Arnoldi method
(d) Lanczos method.
The application of (a) to the eigenanalysis of power systems is reported in [4] and [5]. A proposal of using (b) on vector and array processors is presented in [10] which, however, does not contain numerical results. (d) has also been applied to this problem in [3] and [11]. We note that (a) is good only for computing one eigenvalue, or at most a few with deflation, and this is not satisfactory in most cases. (d) is a very successful method for the symmetrical eigenvalue problem, but has serious flaws in the case of unsymmetrical eigenvalue problems as, for example, the phenomenon of 'breakdown' pointed out in [12] and also experienced by the authors (see Appendix 1 for a brief discussion of the block Lanczos method). On the other hand, as far as we know, (b) has not been tried on ordinary computers for the eigenanalysis of power systems, and (c) has never been applied to these problems, but both (b) and (c) have been used successfully in some other applications, such as the partial eigensolution of stochastic matrices. Since they have generally better numerical properties, it seems that they may be the best candidates for the eigenanalysis of power systems. This is the reason why we chose them as solution methods in this study.
It is interesting to note that all four methods mentioned above belong to a class of methods known as Krylov methods [13], in which the Krylov subspace $\{\, x, \; Ax, \; \dots, \; A^{i-1}x \,\}$ is used to approach the dominant invariant subspace of a matrix $A$. There are two important and useful features of these methods. First, they are all aimed at finding a few of the dominant eigenvalues of $A$ (here dominance refers to largeness in modulus). This corresponds to the requirement that usually only a few of the eigenvalues are needed in the eigenanalysis of large power systems, although some transformation is necessary to make the required eigenvalues dominant. Second, in these methods the only operation involving $A$ is the matrix-vector multiplication $Ay$. Therefore, it is not necessary to form $A$ explicitly, provided that $Ay$ can be calculated easily. This allows us to use the augmented system state equations to preserve the full sparsity of the problem.
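Since only the product $Ay$ is needed, these methods can be driven by a matrix-free operator. A minimal NumPy sketch of this idea (the 2x2 operator and all names are invented for illustration; a real implementation would pass a sparse matvec built from the augmented equations):

```python
import numpy as np

def krylov_sequence(matvec, x0, k):
    """Build the (normalized) Krylov sequence {x, Ax, ..., A^(k-1)x}
    using only matrix-vector products; A is never formed explicitly."""
    vecs = [x0 / np.linalg.norm(x0)]
    for _ in range(k - 1):
        v = matvec(vecs[-1])
        vecs.append(v / np.linalg.norm(v))
    return np.column_stack(vecs)

# A enters only through its action on a vector
A = np.array([[4.0, 1.0],
              [0.0, 2.0]])
K = krylov_sequence(lambda y: A @ y, np.array([1.0, 1.0]), 30)
# successive products align with the dominant eigenvector (here e1)
```

The same `krylov_sequence` call would work unchanged if `matvec` were a sparse block solve instead of a dense product.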
Simultaneous Iterations
The method of simultaneous iterations was originally proposed in [14] for the symmetrical eigenvalue problem. The extension of the method to general real unsymmetric matrices is first found in [15], and then fully analyzed in [16] and in [8], which also provides a practical algorithm of the lopsided simultaneous iterations. Although the matrices dealt with in the above references are all real, the method is also applicable to general complex matrices, as demonstrated below.
Let $A \in C^{n \times n}$ have eigenvalues $\lambda_i$, with $|\lambda_1| \ge |\lambda_2| \ge \dots \ge |\lambda_n|$, and

$$\Lambda = \begin{bmatrix} \Lambda_a & 0 \\ 0 & \Lambda_b \end{bmatrix}$$

where $\Lambda_a = \mathrm{diag}(\lambda_1, \dots, \lambda_m)$ and $\Lambda_b = \mathrm{diag}(\lambda_{m+1}, \dots, \lambda_n)$. Denote the matrix of the right eigenvectors of $A$ by

$$Q = [\, Q_a \;\; Q_b \,] = [\, q_1 \dots q_m \,|\, q_{m+1} \dots q_n \,]$$

where $q_i$ is associated with $\lambda_i$. Then we have

$$A Q_a = Q_a \Lambda_a \quad \text{and} \quad A Q_b = Q_b \Lambda_b \qquad (1)$$
Assuming that we start with $m$ independent trial vectors

$$U = [\, u_1 \;\; u_2 \; \dots \; u_m \,] \in C^{n \times m}$$

we perform the multiplication

$$V = A U \qquad (2)$$

Since $U$ may be represented by

$$U = Q_a C_a + Q_b C_b \qquad (3)$$

where $C_a \in C^{m \times m}$ and $C_b \in C^{(n-m) \times m}$ are coefficient matrices, it is clear that

$$V = A U = Q_a \Lambda_a C_a + Q_b \Lambda_b C_b \qquad (4)$$
Note that in eqn.(4) the first term is more dominant than in eqn.(3), i.e. the components of $Q_b$ have been somewhat washed out in $V$. To further refine the eigenvalues in $\Lambda_a$, an interaction analysis is introduced by defining

$$G = U^H U = U^H Q_a C_a \qquad (5)$$

and

$$H = U^H V = U^H Q_a \Lambda_a C_a \qquad (6)$$

where the superscript $H$ means conjugate-transpose. Assuming that $U^H Q_a$ is non-singular, we obtain

$$G^{-1} H = C_a^{-1} (U^H Q_a)^{-1} U^H Q_a \Lambda_a C_a = C_a^{-1} \Lambda_a C_a \qquad (7)$$

or, if $B$ is the solution of

$$G B = H \qquad (8)$$

then we have

$$C_a B = \Lambda_a C_a \qquad (9)$$

which implies that $\Lambda_a$ and $C_a$ contain the approximate eigenvalues and left eigenvectors of $B$. If $P$ is the matrix of the right eigenvectors of $B$, $P = C_a^{-1}$, then

$$W = V P = Q_a \Lambda_a + Q_b \Lambda_b C_b C_a^{-1} \qquad (10)$$
gives an improved set of right eigenvectors of $A$. Taking $W$ as the new set of trial vectors, the above process can be iterated until all required eigenvalues are found. It can be readily shown (see, for example, [16] for a similar proof for the simultaneous bi-iteration method) that this method is convergent for the first $i$ eigenvalues of $A$ if

$$|\lambda_i| > |\lambda_{m+1}| \qquad (11)$$

for $i = 1, 2, \dots, m$, and the convergence rate for $\lambda_i$ is $|\lambda_{m+1}| / |\lambda_i|$.
Locking Device
It may be noticed that the matrix $G$ in eqn.(5) is Hermitian positive definite. Therefore, the Cholesky decomposition can be used to solve eqn.(8). Moreover, when one eigenvalue (say, the $i$th one) has converged, the first $i$ rows and columns of $G$ will not change in all subsequent calculations, and the Cholesky decomposition of $G$ up to the $i$th step will also remain unchanged. Thus we can 'lock' the decomposed matrix $G$ up to the $i$th row and column, and only perform the Cholesky decomposition from the $(i+1)$th step. This is the so-called 'locking device', which helps to improve the efficiency of the algorithm.
Guard Vectors
In practice, if $s$ dominant eigenvalues of $A$ are required, an $m$ larger than $s$ is usually used in this method to obtain a better convergence rate and to ensure the convergence of all $s$ eigenvalues if $|\lambda_s| = |\lambda_{s+1}|$. The additional vectors are named guard vectors in [8].
A practical question is how to decide the number of guard vectors so as to have the best computational efficiency. Unfortunately, there is no theoretical answer available. The only way to explore this is by numerical tests. Some calculations have been done on our test systems for this problem. The results are reported later, in the section on numerical results.
Fast Iteration Cycles
In [8] the idea of fast iteration cycles is introduced and proved to be very efficient for a variety of large, sparse matrices. It is basically the iteration procedure (2) with the interaction analysis omitted for a number of iterations. For power system problems, however, it seems less attractive, since the multiplication in eqn.(2) is quite expensive for large systems and, more seriously, the successive multiplications will force the vectors in $U$ to become dependent, so that the matrix $G$ is no longer positive definite, which will make the subsequent calculations very inefficient (this did happen in our test calculations). For this reason, fast iteration cycles are not considered in our algorithm.
General Procedure
We give the following algorithm, which we used in our program, as a summary of the discussion on simultaneous iterations.

(a) Set up the initial trial vectors $U^1$ with independent columns; let $i = 1$
(b) Calculate $V^i$ by eqn.(2)
(c) Calculate $G^i$ by eqn.(5) and factorize it by the Cholesky decomposition
(d) Calculate $H^i$ by eqn.(6)
(e) Solve for $B$ from eqn.(8)
(f) Perform full eigenanalysis of $B$ by the QR method, obtaining the eigenvalues $\Lambda^i = \mathrm{diag}(\lambda^i_1, \dots, \lambda^i_m)$ and the associated right eigenvectors $P^i$
(g) Compare $\Lambda^i$ with $\Lambda^{i-1}$ (with $\Lambda^0 = 0$). If all required eigenvalues have been found, exit; otherwise go on to the next step
(h) Calculate the new trial vectors $U^{i+1}$ by eqn.(10)
(i) Let $i = i + 1$ and go to (b) to perform the next iteration
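The iteration (2)-(10) can be sketched in NumPy for a small dense test matrix. This is an illustration only: all names and tolerances are our own, the test matrix is invented, and for simplicity eqn.(8) is solved with a general solver rather than the Cholesky decomposition with the locking device described above; a production code would also use a sparse matvec.

```python
import numpy as np

def simultaneous_iterations(matvec, n, m, tol=1e-8, max_iter=200):
    """Sketch of lopsided simultaneous iterations, eqns (2)-(10).
    Returns approximations to the m dominant eigenvalues of A."""
    rng = np.random.default_rng(0)
    U = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
    prev = np.zeros(m, dtype=complex)
    for _ in range(max_iter):
        V = np.column_stack([matvec(U[:, j]) for j in range(m)])  # eqn (2)
        G = U.conj().T @ U                                        # eqn (5)
        H = U.conj().T @ V                                        # eqn (6)
        B = np.linalg.solve(G, H)                                 # eqn (8)
        lam, P = np.linalg.eig(B)            # interaction analysis of B
        order = np.argsort(-np.abs(lam))     # keep dominant modes first
        lam, P = lam[order], P[:, order]
        U = V @ P                                                 # eqn (10)
        U /= np.linalg.norm(U, axis=0)       # keep columns well scaled
        if np.max(np.abs(lam - prev)) < tol:
            break
        prev = lam
    return lam

# invented 4x4 test matrix with dominant eigenvalues 5 and 3
A = np.diag([5.0, 3.0, 1.0, 0.5])
lam = simultaneous_iterations(lambda y: A @ y, 4, 2)
```

The convergence test on successive eigenvalue estimates plays the role of step (g).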
Modified Arnoldi Method
The Arnoldi method was first presented in [17]. However, because of its poor numerical properties, it was not successful before several modifications were implemented [9]. The main problems of the original Arnoldi method are loss of orthogonality and slow convergence if a number of dominant eigenvalues is needed. The latter entails in most cases the need for the full eigenanalysis of a relatively large Hessenberg matrix, which is expensive. These problems can be solved by using the complete reorthogonalization and the iterative process described in [9]. The Arnoldi method presented in [9] is also for general real matrices. The following extends it to general complex matrices, with discussions on two modifications.
Let $A \in C^{n \times n}$ and let $v_1 \in C^n$ be the starting vector with $\|v_1\|_2 = 1$. The subsequent orthonormal vectors are produced by the recursive formula

$$h_{i+1,i} \, v_{i+1} = (I - V_i V_i^H) A v_i \qquad i = 1, \dots, m \qquad (12)$$

where $h_{i+1,i}$ is chosen such that $\|v_{i+1}\|_2 = 1$, and $V_i = [\, v_1 \dots v_i \,]$. From eqn.(12) we can obtain

$$A v_i = V_i h_i' + h_{i+1,i} \, v_{i+1} \qquad (13)$$

where $h_i' = V_i^H A v_i \in C^i$. With all $m$ equations assembled, eqn.(13) becomes

$$A V_m = V_m H_m + h_{m+1,m} \, v_{m+1} e_m^T \qquad (14)$$

where $e_m^T = [\, 0 \; \dots \; 0 \; 1 \,]$ and $H_m$ is an upper Hessenberg matrix with its $i$th column equal to

$$h_i = [\, h_i'^T \;\; h_{i+1,i} \;\; 0 \; \dots \; 0 \,]^T \qquad (15)$$
Eqn.(14) can be approximated by dropping the second term on the right-hand side. Thus,

$$A V_m \approx V_m H_m \qquad (16)$$

which implies that the eigenvalues of $H_m$ are approximations of the eigenvalues of $A$. Clearly, the error depends on $h_{m+1,m}$, which vanishes when $m = n$. In fact, as $m$ increases, the eigenvalues of $H_m$ with largest and smallest modulus will gradually converge to eigenvalues of $A$. This well-known property of the Lanczos method also holds here. The approximate eigenvectors of $A$ can be readily found as

$$W = V_m P \qquad (17)$$

where $P$ is the $m \times m$ matrix of the right eigenvectors of $H_m$.
Reorthogonalization
It has been found that the original Arnoldi method as presented above behaves poorly numerically because of the loss of orthogonality of the vector series $v_i$ after a number of iterations. The natural remedy for this problem is to reorthogonalize every newly produced vector $v_{i+1}$. A modified Gram-Schmidt method [18] is used for this purpose, in which we simply replace eqn.(12) by the iterative process

$$u_i^{k+1} = (I - V_i V_i^H) \, u_i^k \qquad k = 1, 2, \dots \qquad (18)$$

with $u_i^1 = A v_i$. This process continues until the convergence condition (19) is satisfied for some $k = k_0$. Then we take

$$h_{i+1,i} = \|u_i^{k_0}\|_2 \qquad (20a)$$

$$v_{i+1} = u_i^{k_0} / h_{i+1,i} \qquad (20b)$$
In [9] a scheme of incomplete reorthogonalization was proposed. From our experience, however, it is not of great benefit because, first, by using the iterative Arnoldi method a small $m$ is used, and second, the reorthogonalization scheme (18)-(20) is very efficient, so that in most cases only one iteration is enough.
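A minimal sketch of the loop (18)-(20) in NumPy follows. The stopping test is an assumption of ours (a simple norm-drop criterion standing in for condition (19), which is not reproduced here), and the cap on passes is likewise our choice:

```python
import numpy as np

def reorthogonalize(Vi, u, max_passes=3, drop_tol=0.7):
    """Iterate u^{k+1} = (I - Vi Vi^H) u^k, eqn (18), starting from
    u^1 = A v_i.  Stop when a pass no longer removes much of the
    vector's norm (stand-in for condition (19))."""
    for _ in range(max_passes):
        norm_before = np.linalg.norm(u)
        u = u - Vi @ (Vi.conj().T @ u)
        if np.linalg.norm(u) > drop_tol * norm_before:
            break
    h = np.linalg.norm(u)          # eqn (20a): h_{i+1,i}
    return u / h, h                # eqn (20b): the next unit vector v_{i+1}

# one existing orthonormal vector; the input plays the role of A v_i
Vi = np.array([[1.0], [0.0], [0.0]])
v_next, h = reorthogonalize(Vi, np.array([1.0, 1.0, 0.0]))
```

In exact arithmetic one pass suffices, which matches the observation above that one iteration is usually enough.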
Iterative Arnoldi Method
Since the original Arnoldi method usually converges only for relatively large $m$, the complete eigenanalysis of a Hessenberg matrix of large dimension is necessary. To reduce the order of the Hessenberg matrix, the iterative Arnoldi method is introduced. Let $m$ be fixed at a moderate value. We perform the original Arnoldi method with reorthogonalization to obtain the eigenvalue and eigenvector approximations $\lambda_i$ and $w_i$ for $i = 1, \dots, m$. Then we repeat the same method but using a new starting vector, for example the one recommended in [9]:
$$v_1 = a \sum_{i=1}^{s} \|(A - \lambda_i I) w_i\|_2 \, w_i \qquad (21)$$

where $a$ is a scalar to normalize $v_1$ and $s$ is the number of eigenvalues to be found. The iteration continues until all required eigenvalues are found. It can be shown (see Appendix 2) that eqn.(21) is equal to

$$v_1 = a' V_m P^* \bar{p} \qquad (22)$$

where $a'$ is again a normalizing scalar, $P^* = [\, p_1 \dots p_s \,]$ and $\bar{p} = [\, |p_{m1}| \dots |p_{ms}| \,]^T$. Here $p_i$ is the $i$th right eigenvector of $H_m$ and $p_{mi}$ is the last element of $p_i$.
With the iterative Arnoldi method we face the problem of choosing a proper $m$. Recommendations are given later with the numerical results, which are based on our test systems.
General Procedure
In what follows, we give the algorithm for the modified Arnoldi method, in which both the complete reorthogonalization and the iterative process are used.

(a) Set up the starting vector $v_1$; let $V_1 = [\, v_1 \,]$ and $i = 1$
(b) Calculate $u_i^1 = A v_i$; let $k = 1$
(c) Calculate $u_i^{k+1}$ by eqn.(18)
(d) If condition (19) is satisfied, go on to the next step; otherwise let $k = k + 1$ and go to (c)
(e) Calculate $h_{i+1,i}$ and $v_{i+1}$ by eqns.(20a) and (20b)
(f) Calculate $h_i' = V_i^H A v_i$ and form $h_i$ by eqn.(15)
(g) If $i = m$, the matrix $H$ has been formed; go on to the next step. Otherwise let $V_{i+1} = [\, V_i \;\; v_{i+1} \,]$, $i = i + 1$, and go to (b)
(h) Perform full eigenanalysis of $H$ by the QR method. If all required eigenvalues have been found, exit; otherwise go on to the next step
(i) Calculate the new starting vector $v_1$ by eqn.(22); let $i = 1$ and go to (b)
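Steps (a)-(h), without the restart (i), can be sketched in NumPy as follows. This is an illustration under our own naming: two fixed re-orthogonalization passes stand in for the loop (c)-(d), and the test matrix is invented.

```python
import numpy as np

def arnoldi(matvec, v1, m):
    """One pass of the Arnoldi process with complete reorthogonalization,
    eqns (12)-(17); returns Ritz values and vectors of the m x m H."""
    n = v1.size
    V = np.zeros((n, m + 1), dtype=complex)
    H = np.zeros((m + 1, m), dtype=complex)
    V[:, 0] = v1 / np.linalg.norm(v1)
    for i in range(m):
        u = matvec(V[:, i])
        for _ in range(2):                  # re-orthogonalization, eqn (18)
            h = V[:, :i + 1].conj().T @ u
            u = u - V[:, :i + 1] @ h
            H[:i + 1, i] += h               # accumulate h_i' over both passes
        H[i + 1, i] = np.linalg.norm(u)     # eqn (20a)
        if H[i + 1, i] == 0:                # exact breakdown: space is invariant
            m = i + 1
            break
        V[:, i + 1] = u / H[i + 1, i]       # eqn (20b)
    lam, P = np.linalg.eig(H[:m, :m])       # eigenanalysis of Hessenberg H
    W = V[:, :m] @ P                        # Ritz vectors, eqn (17)
    return lam, W

# invented 4x4 test matrix; with m = n the Ritz values are exact
A = np.diag([6.0, 4.0, 2.0, 1.0])
lam, W = arnoldi(lambda y: A @ y, np.ones(4), 4)
```

The iterative variant of step (i) would simply wrap this function in a loop, combining the wanted Ritz vectors into a new starting vector per eqn.(21).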
APPLICATION TO POWER SYSTEMS
Power System Modeling
The linearized power system model for the small signal stability analysis is easy to derive (for example, see [1]). However, since the state matrix of a power system is in general not sparse, the direct construction of the state matrix would be impossible for large systems. Various schemes have been proposed to implement sparsity techniques [2], [3], [4], [5]. Here we adopt the method of [4], in which the augmented system state equations are used:

$$\begin{bmatrix} \dot{x} \\ 0 \end{bmatrix} = \begin{bmatrix} J_A & J_B \\ J_C & J_D \end{bmatrix} \begin{bmatrix} x \\ V \end{bmatrix} \qquad (23)$$

where $x$ is the vector of the state variables and $V$ is the vector of the system voltages. $J_A$, $J_B$, $J_C$ and $J_D$ are sparse matrices which depend on the system parameters and the operating point. It can be seen that the state matrix $A$ may be formed from eqn.(23) as

$$A = J_A - J_B J_D^{-1} J_C \qquad (24)$$

which is only of theoretical significance in this work.
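A small numeric check of eqns (23)-(24) can be written as follows; the tiny dense blocks `JA`, `JB`, `JC`, `JD` are invented stand-ins for the sparse Jacobian blocks of a real system:

```python
import numpy as np

# Invented Jacobian blocks of the augmented equations (23):
#   x' = JA x + JB V,   0 = JC x + JD V
JA = np.array([[-1.0, 0.5], [0.0, -2.0]])
JB = np.array([[1.0], [0.5]])
JC = np.array([[0.2, 0.1]])
JD = np.array([[2.0]])

# Eqn (24): eliminating V gives the (generally dense) state matrix
A = JA - JB @ np.linalg.solve(JD, JC)

def A_matvec(y):
    """Same product A @ y without ever forming A: first solve
    JD V = -JC y for the voltage part, then combine."""
    V = np.linalg.solve(JD, -JC @ y)
    return JA @ y + JB @ V

y = np.array([1.0, -1.0])
assert np.allclose(A @ y, A_matvec(y))
```

The matvec route is what preserves sparsity in practice: only the sparse blocks are factorized and applied, never the dense $A$.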
Spectral Transformation
For the small signal stability analysis of power systems, two types of eigenvalues are of special interest: the weakly-damped local mechanical modes, with frequencies between 0.8 and 2.0 Hz, and inter-area modes, with frequencies between 0.1 and 0.6 Hz. Unfortunately, these eigenvalues are usually much smaller in modulus than other eigenvalues (for example, the fast damped local modes), so that most of the sparsity-based eigenvalue algorithms cannot be applied directly. The solution of this problem is to apply a spectral transformation to the original state matrix to shift the required eigenvalues so that they become dominant in modulus. The simplest way to do this is to use the fractional transformation

$$A_t = (A - \lambda_t I)^{-1} \qquad (25)$$

which transforms the eigenvalue $\lambda_i$ of $A$ to

$$\eta_i = \frac{1}{\lambda_i - \lambda_t}$$

where $\lambda_t$ is a fixed shift. It is easy to verify that the transformation (25) transforms the eigenvalues of $A$ within the unit circle centered at $\lambda_t$ into eigenvalues of $A_t$ outside the unit circle at the origin. Thus, if the eigenvalues around some point $\lambda_t$ (say a fixed frequency) are required, the shift $\lambda_t$ can be used in eqn.(25) to magnify the eigenvalues near $\lambda_t$. Sparsity-based eigenvalue techniques can then be applied to the transformed matrix $A_t$ to find these dominant eigenvalues.
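The effect of eqn.(25) can be checked numerically on an invented diagonal spectrum (the eigenvalues below are illustrative, loosely shaped like Fig. 1, not taken from the test systems):

```python
import numpy as np

# invented spectrum: one mode near the shift, others far away
A = np.diag([-20.0, -0.3 + 11.0j, -0.9 + 10.0j, -5.0])
shift = -0.1 + 11.0j

# Fractional transformation, eqn (25)
At = np.linalg.inv(A - shift * np.eye(4))
eta = np.linalg.eigvals(At)

# eta_i = 1/(lambda_i - shift): the mode closest to the shift is now
# the one with the largest modulus, and the shift-invert map is undone
# by lambda_i = shift + 1/eta_i
closest = np.diag(A)[np.argmin(np.abs(np.diag(A) - shift))]
recovered = shift + 1.0 / eta[np.argmax(np.abs(eta))]
assert np.isclose(recovered, closest)
```

This back-substitution $\lambda_i = \lambda_t + 1/\eta_i$ is how eigenvalues of $A$ are recovered from those computed for $A_t$.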
We would like to add a short comment here on the Cayley transformation used in [3]. An advantage of the Cayley transformation is that the transformed matrix $A_t$ (or matrix $S$ as in [3]) remains real, while with the fractional transformation it becomes complex in general. However, this does not substantially increase the storage requirement for the latter. For example, if we consider a system with 1200 buses, 1400 lines, 300 machines and 3000 state variables, the increase of the storage is about 370 KB for double precision calculations, which can easily be handled by modern computers. On the other hand, by using the Cayley transformation all complex eigenvalues will be calculated twice, once for each member of a conjugate pair, and the real eigenvalues around the origin may also have to be calculated, even if only slow oscillatory modes are of interest. Therefore, the fractional transformation can provide more flexibility and better computational efficiency.
Practical Implementation
To apply the two eigenvalue techniques described in this paper, we only need to provide the matrix-vector product $Ay$, or $A_t y = (A - \lambda_t I)^{-1} y$ if the transformed matrix $A_t$ is considered. The corresponding calculation in terms of the augmented state matrix is then to solve the equation

$$\begin{bmatrix} J_A - \lambda_t I & J_B \\ J_C & J_D \end{bmatrix} \begin{bmatrix} x \\ V \end{bmatrix} = \begin{bmatrix} y \\ 0 \end{bmatrix}$$

for $x$. An algorithm for solving the above equation with sparsity techniques is given in Appendix 2 of [4].
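Under the assumption that SciPy's sparse LU is an acceptable stand-in for the sparsity-oriented solver of [4], the shifted solve can be sketched as follows; the Jacobian blocks are invented and tiny, and all names are ours:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Invented sparse Jacobian blocks of the augmented equations (23)
JA = sp.csc_matrix([[-1.0, 0.5], [0.0, -2.0]])
JB = sp.csc_matrix([[1.0], [0.5]])
JC = sp.csc_matrix([[0.2, 0.1]])
JD = sp.csc_matrix([[2.0]])
n, shift = JA.shape[0], -0.5 + 1.0j

# Augmented shifted system: [[JA - shift*I, JB], [JC, JD]] [x; V] = [y; 0]
M = sp.bmat([[JA - shift * sp.identity(n, format="csc"), JB],
             [JC, JD]], format="csc")
lu = spla.splu(M)                 # one sparse factorization, reused per y

def At_matvec(y):
    """x-part of the augmented solve equals (A - shift*I)^{-1} y."""
    rhs = np.concatenate([y, np.zeros(JD.shape[0])]).astype(complex)
    return lu.solve(rhs)[:n]

# Cross-check against the dense state matrix of eqn (24)
A = JA.toarray() - JB.toarray() @ np.linalg.solve(JD.toarray(), JC.toarray())
y = np.array([1.0, 2.0])
assert np.allclose(At_matvec(y), np.linalg.solve(A - shift * np.eye(n), y))
```

Because the factorization of the augmented matrix is reused for every vector, each Arnoldi or simultaneous-iteration step costs only one sparse triangular solve.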
Numerical Results
Description of the Test Systems
Two test systems are employed in order to examine the performance of the above two sparsity-based eigenvalue techniques applied to the eigenanalysis of large power systems. In each system, a 9th order model is used for the synchronous machine and its control systems. Loads are represented by constant impedances. The first system, T77, has 77 buses, 20 machines and 180 state variables, and the second one, T169, is a 169 bus, 50 machine system with 450 state variables. Complete eigenanalyses by the QR method have been performed for both systems to make sure the results from the new methods are correct. Fig. 1 and Fig. 2 show the eigenvalue distributions of the two systems (in both figures, eigenvalues with real parts less than -30 and with negative imaginary parts are omitted).
Fig. 1 - Eigenvalue Distribution of T77
Fig. 2 - Eigenvalue Distribution of T169
The results by the QR method show that T77 is stable, with all eigenvalues having negative real parts, and T169 is unstable, with two unstable modes (eigenvalues): 0.089017 and 1.389242. All calculations reported in this paper were performed in double precision (error tolerance = 10^{-}) on an IBM 4361 computer running CMS.
Guard Vectors
The effect of guard vectors in simultaneous iterations is explored mainly on T77 by using a shift $\lambda_t = -0.1 + j11.0$ and sequentially calculating up to 5 eigenvalues nearest to $\lambda_t$. The 5 calculated eigenvalues are: $-0.291205 + j11.181740$, $-0.299078 + j11.558411$, $-0.273586 + j10.069816$, $-0.820126 + j11.686807$ and $-0.956709 + j9.957278$. The CPU times for these calculations are shown in Fig. 3, where $s$ is the number of eigenvalues found in each calculation. It can be seen from the figure that for small $s$ (i.e. only a couple of required eigenvalues), guard vectors are not helpful in improving the computational efficiency. For medium $s$, however, one guard vector seems best. Several calculations on T169 indicate that one guard vector results in the optimal CPU time in most cases; however, two or more guard vectors are necessary to reach convergence within a proper number of iterations when (a) a large number of eigenvalues ($> 5$) is required and (b) the shift point $\lambda_t$ is in an area where eigenvalues are densely distributed. In general, we recommend that one guard vector be used for $s < 5$, and two for $s \ge 5$. For the case when both a large number of eigenvalues is required and $\lambda_t$ is in an area densely filled with eigenvalues (such as the area of inter-area modes), one should consider three or more guard vectors.
Order of the Hessenberg Matrix
To determine the proper order of the Hessenberg matrix defined in the modified Arnoldi method (i.e. the iterative Arnoldi method with complete reorthogonalization), we use the same systems as in the above section. The CPU times are given in Fig. 4, where $s$ is again the number of eigenvalues found in each calculation, and the horizontal axis is the order of the Hessenberg matrix minus $s$ (or the 'net additional order'). We see from the figure that the additional order is necessary to avoid divergence and/or slow convergence. The computational efficiency is basically the same over a wide range after the initial slow convergence period (i.e. after an additional order of about 4). This slow convergence period varies with the number of eigenvalues required and also depends on the system size. From our experience, a value of 10 to 20 for the additional order is appropriate in most cases: a smaller value for fewer required eigenvalues and smaller systems.
Fig. 3 - Effect of Guard Vectors (CPU time vs. number of guard vectors, for s = 1 to 5)
Fig. 4 - Effect of Order of the Hessenberg Matrix (CPU time vs. additional order)
Fig. 5 - Comparison of the Two Methods (o: Simultaneous Iterations, □: Modified Arnoldi Method)