
Application of sparse eigenvalue techniques to the small signal stability analysis of large power systems

01 May 1989, IEEE Transactions on Power Systems (Institute of Electrical and Electronics Engineers), Vol. 5, Iss. 2, pp. 635-642
TL;DR: In this paper, two sparsity-based eigenvalue techniques, simultaneous iterations and the modified Arnoldi method, are presented, and their application to the small signal stability analysis of large power systems is discussed.
Abstract: Two sparsity-based eigenvalue techniques, simultaneous iterations and the modified Arnoldi method, are presented and their application to the small signal stability analysis of large power systems is discussed. An algorithm utilizing these two methods is proposed for calculating the eigenvalues around a fixed point which can be placed at will in various parts of the complex plane. The sparsity is fully preserved in the algorithm by using the augmented system state equations as the linearized power system small signal model and performing the corresponding sparsity-oriented calculations. Several applications of the algorithm are discussed and illustrated by numerical examples. Comparisons are made for the two eigenvalue methods with other techniques.

Summary (5 min read)

INTRODUCTION

  • The evaluation of the small signal stability of power systems requires the calculation of the eigenvalues of a very large unsymmetrical and nonsparse matrix.
  • The well-known QR method is robust and converges fast [1] but cannot be implemented with sparsity techniques, so that its application is limited to relatively small power systems.
  • On the other hand, for a large power system with thousands of state variables, it is usually required to calculate only a specific set of eigenvalues with certain features of interest, for example, local mechanical modes, inter-area modes, etc.
  • Therefore, significant effort has been expended to develop or apply new methods with the following three basic properties: (a) sparsity techniques can be used; (b) a specific set of eigenvalues can be found efficiently; (c) mathematical robustness is guaranteed, i.e. good convergence characteristics and numerical stability.


  • Among these (a) is of utmost importance since it provides the possibility to handle large power systems.
  • This paper presents two sparsity-based eigenvalue techniques, simultaneous iterations and the modified Arnoldi method, and their application to the small signal stability analysis of large power systems.
  • The latter is a method similar to the well-known Lanczos method, but more reliable by having better numerical properties after introducing appropriate modifications.
  • This algorithm is most suitable for calculating a desired number of eigenvalues nearest to or all eigenvalues within certain distance from the shift point.
  • Two test systems with 20 and 50 machines respectively have been chosen to test the performance of the proposed methods and algorithm.

Sparsity-Based Eigenvalue Techniques

  • Since the eigenanalysis of modern power systems deals with matrices of very large dimension, sparsity techniques play a key role in the analysis.
  • A survey of the available sparsity-based eigenvalue techniques for general unsymmetrical matrices results in the following four methods: (a) power method and inverse iterations, (b) simultaneous iterations, (c) Arnoldi method, (d) Lanczos method.
  • A proposal of using (b) on vector and array processors is presented in [10] which, however, does not contain numerical results.
  • Since methods (b) and (c) have generally better numerical properties, it seems that they may be the best candidates for the eigenanalysis of power systems.
  • This is the reason why the authors choose them as solution methods in this study.

Simultaneous Iterations

  • The method of simultaneous iterations was originally proposed in [14] for the symmetrical eigenvalue problem.
  • The extension of the method to general real unsymmetric matrices is first found in [15] , and then fully analyzed in [16] and in [8] which also provides a practical algorithm of the lopsided simultaneous iterations.
  • Although the matrices dealt with in the above references are all real, the method is also applicable to general complex matrices.
  • In eqn. (4) the first term is more dominant than in eqn. (3), i.e. the components of Q_b have been somehow washed out.
  • To further refine the eigenvalues in Λ_a, an interaction analysis is introduced by defining G = U^H U and H = U^H V, where the superscript H means conjugate-transpose.

Locking Device

  • Therefore, the Cholesky decomposition can be used to solve eqn.(8).
  • Moreover, when one eigenvalue (say, the ith one) has converged, the first i rows and columns of G will not change in all subsequent calculations, and the Cholesky decomposition of G up to the ith step will also remain unchanged.
  • Thus the authors can 'lock' the decomposed matrix G up to the ith row and column, and only perform the Cholesky decomposition from the (i+1)th step.
  • This is the so-called 'locking device' which helps to improve the efficiency of the algorithm.

Guard Vectors

  • The additional vectors are named guard vectors in [8].
  • Unfortunately, there is no theoretical answer available.
  • Some calculations have been done on their test systems for this problem.
  • It can be seen from the figure that for small s (i.e. only a couple of required eigenvalues), guard vectors are not helpful in improving the computational efficiency.

Fast Iteration Cycles

  • It is basically the iteration procedure (2) with the interaction analysis omitted for a number of iterations.
  • For power system problems, however, it seems less attractive since the multiplication in eqn. (2) is quite expensive for large systems and, more seriously, the successive multiplications will force the vectors in U to become dependent so that the matrix G is no longer positive definite, which will make the subsequent calculations very inefficient (this did happen in their test calculations).
  • For this reason, the fast iteration cycles are not considered in their algorithm.

Modified Arnoldi Method

  • The Arnoldi method was first presented in [17].
  • The main problems of the original Arnoldi method are loss of orthogonality and slow convergence if a number of dominant eigenvalues is needed.
  • The latter entails in most cases the need for the full eigenanalysis of a relatively large Hessenberg matrix, which is expensive.
  • The Arnoldi method presented in [9] is also for general real matrices.
  • This is the well-known property of the Lanczos method, which also holds here.

Reorthogonalization

  • It has been found that the original Arnoldi method as mentioned above has numerically poor behavior because of the loss of orthogonality for the vector series vi after a number of iterations.
  • The natural remedy for this problem is to reorthogonalize every newly-produced vector vi+l.
  • From their experience, however, the incomplete reorthogonalization scheme proposed in [9] is not of great benefit because, first, by using the iterative Arnoldi method a small m is used, and second, the reorthogonalization scheme (18)-(20) is very efficient so that in most cases only one iteration is enough.

Iterative Arnoldi Method

  • Since the original Arnoldi method converges usually for relatively large m, the complete eigenanalysis of a Hessenberg matrix of large dimension is necessary.
  • To reduce the order of the Hessenberg matrix, the iterative Arnoldi method is introduced.
  • The iteration continues until all required eigenvalues are found.

APPLICATION TO POWER SYSTEMS Power System Modeling

  • The linearized power system model for the small signal stability analysis is easy to derive (for example, see [1]).
  • Since the state matrix of a power system is in general not sparse, the direct construction of the state matrix would be impossible for large systems.
  • JA, JB, Jc and JD are sparse matrices which depend on the system parameters and the operating point.

Spectral Transformation

  • For the small signal stability analysis of power systems, two types of eigenvalues are of special interest: the weakly-damped local mechanical modes with frequencies between 0.8 and 2.0 Hz and inter-area modes with frequencies between 0.1 and 0.6 Hz.
  • Unfortunately, these eigenvalues are usually much smaller in modulus than other eigenvalues (for example, the fast damped local modes), so that most of the sparsity-based eigenvalue algorithms can not be applied directly.
  • The solution of this problem is to apply a spectral transformation to the original state matrix to shift the required eigenvalues so that they become dominant in modulus.

A_s = (A - λ_s I)^(-1)   (25), which transforms the eigenvalue λ_i of A into 1/(λ_i - λ_s) of A_s

  • Thus, if the eigenvalues around some point λ_r (say a fixed frequency) are required, the shift λ_s = λ_r can be used in eqn. (25) to magnify the eigenvalues near λ_r. Sparsity-based eigenvalue techniques can then be applied to the transformed matrix A_s to find these dominant eigenvalues.
  • The authors would like to make a short comment here on the Cayley transformation used in [3].
  • If the authors consider a system with 1200 buses, 1400 lines, 300 machines and 3000 state variables, the increase of the storage is about 370 KB for double precision calculations, which can easily be handled by modern computers.
  • On the other hand, by using the Cayley transformation all complex eigenvalues will be calculated twice for each conjugate pair, and the real eigenvalues around the origin may also have to be calculated, even if only slow oscillatory modes are of interest.
  • Therefore, the fractional transformation can provide more flexibility and better computational efficiency.
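As a numerical illustration of the shift-invert idea in eqn. (25), the following sketch uses a small random complex matrix (an arbitrary stand-in, not one of the paper's test systems):

```python
import numpy as np

# Toy demonstration of the fractional (shift-invert) transformation
# A_s = (A - lam_s * I)^(-1) of eqn. (25).  Matrix and shift are
# arbitrary placeholders, not taken from the paper.
rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

lam_s = 0.5 + 2.0j                       # shift placed near the modes of interest
As = np.linalg.inv(A - lam_s * np.eye(n))

lam = np.linalg.eigvals(A)
mu = 1.0 / (lam - lam_s)                 # eigenvalues of the transformed matrix

# The eigenvalue of A closest to the shift has become the eigenvalue of
# largest modulus of A_s, so dominant-eigenvalue methods can now find it.
assert np.argmax(np.abs(mu)) == np.argmin(np.abs(lam - lam_s))

# An eigenvalue of A is recovered from a transformed one as lam = lam_s + 1/mu.
mu_eig = np.linalg.eigvals(As)
recovered = lam_s + 1.0 / mu_eig
assert np.allclose(np.sort_complex(recovered), np.sort_complex(lam))
```

In the actual algorithm the inverse is of course never formed explicitly; products of the form (A - λ_s I)^(-1) y are obtained by sparse factorization and substitution.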

Description of the Test Systems

  • Two test systems are employed in order to examine the performance of the above two sparsity-based eigenvalue techniques applied to the eigenanalysis of large power systems.
  • In each system, a 9th order model is used for the synchronous machine and its control systems.
  • Complete eigenanalyses by the QR method have been performed for both systems to make sure the results from the new methods are correct.

Order of the Hessenberg Matrix

  • To determine the proper order of the Hessenberg matrix defined in the modified Arnoldi method (i.e. the iterative Arnoldi method with complete reorthogonalization), the authors use the same systems as in the above section.
  • The CPU times are given in Fig. 4 where s is again the number of eigenvalues found in each calculation, and the horizontal axis is the order of the Hessenberg matrix minus s (or the 'net additional order').
  • The authors see from the figure that the additional order is necessary to avoid divergence and/or slow convergence.
  • This slow convergence period varies for different numbers of eigenvalues required and also depends on the system size.
  • From their experience, a value of 10 to 20 for the additional order is appropriate in most cases: smaller value for fewer required eigenvalues and smaller systems.

Comparison of the Two Methods

  • A comparison of the two methods, simultaneous iterations and the modified Arnoldi method, is made from Fig. 3 and Fig. 4 by taking the optimal CPU time for each s.
  • Therefore, in general the modified Arnoldi method is faster than simultaneous iterations at the price of more storage requirement because of the complete reorthogonalization and the additional order.
  • Both figures clearly indicate that two circles have covered the most weakly-damped inter-area modes, so that their purpose is fulfilled.
Comparisons with Other Methods

  • Comparisons were also attempted for the two eigenvalue methods with other formerly used methods.
  • A block Lanczos algorithm similar to the one in [3] was tried to solve the same problems, but due to the severe numerical problems the method did not converge.

Calculation of Local Mechanical Modes

  • In most applications of the eigenanalysis of modern power systems, two types of eigenvalues are of special interest: the weakly-damped local mechanical modes and inter-area modes.
  • Fig. 6 shows the distribution of the complex eigenvalues of T77, in which the eigenvalues within each of two circles are found in the calculations.
  • From the figure the authors see that indeed all local mechanical modes with frequencies from 0.76 to 2.05 Hz have been found.
  • Therefore, the authors add an additional shift point λ_s3 = -0.1 + j9.42 in the middle between λ_s1 and λ_s2 and compute the first 10 eigenvalues near λ_s3.

Calculation of Inter-area Modes

  • The authors now turn to the second type of interesting eigenvalues: the weakly-damped inter-area modes which have typical frequencies of 0.1 to 0.6 Hz.
  • The authors use again two shift points with small negative real parts and frequencies of 0.2 and 0.5 Hz.
  • Real unstable modes can be found by simply using a real shift point.
  • Note that if a system contains unstable local mechanical modes and/or unstable inter-area modes, the procedure in the above two sections can probably find them since from Fig. 6 to 9 the authors see that each circle covers also portions of the unstable area.

CONCLUSIONS

  • The paper has presented two sparsity-based eigenvalue techniques, simultaneous iterations and the modified Arnoldi method, and their application to the small signal stability analysis of large power systems.
  • Since the methods and the associated algorithm do not have any restriction in the modeling of power system components, any detailed models can be implemented in the program to make the results more practical.
  • First, the bi-orthogonality which is fundamental for the block Lanczos algorithm is very rapidly lost on a finite-precision computer due to the round-off errors.
  • The algorithm must be restarted with the new starting matrices, but it is still uncertain whether or not the breakdown will occur again.
  • The Lanczos method for the unsymmetrical eigenvalue problem is not reliable, and also not economical if complete reorthogonalization is going to be used.


APPLICATION OF SPARSE EIGENVALUE TECHNIQUES TO THE SMALL SIGNAL STABILITY ANALYSIS OF LARGE POWER SYSTEMS

L. Wang    A. Semlyen
Department of Electrical Engineering
University of Toronto
Toronto, Ontario, Canada

Abstract - This paper presents two sparsity-based eigenvalue techniques - simultaneous iterations and the modified Arnoldi method - and their application to the small signal stability analysis of large power systems.
Simultaneous iterations and the modified Arnoldi method are two recently developed methods for large, sparse unsymmetrical eigenvalue problems, and have been reported as very efficient in computing the partial eigensolution of several types of matrices, such as stochastic ones. It is shown in this paper that they can also be applied successfully to the matrices derived for small signal stability studies of power systems. An algorithm utilizing these two methods is proposed for calculating the eigenvalues around a fixed point which can be placed at will in various parts of the complex plane. The sparsity is fully preserved in the algorithm by using the augmented system state equations as the linearized power system small signal model and performing the corresponding sparsity-oriented calculations. Several applications of the algorithm are discussed and illustrated by numerical examples.

The proposed methods and algorithm have been tested on two test systems with 20 and 50 machines respectively. The results show that they are suitable for the eigenanalysis of large power systems.

Keywords: Small signal stability, Eigenvalues, Sparse methods.
INTRODUCTION

The evaluation of the small signal stability of power systems requires the calculation of the eigenvalues of a very large unsymmetrical and nonsparse matrix. The well-known QR method is robust and converges fast [1] but cannot be implemented with sparsity techniques, so that its application is limited to relatively small power systems. On the other hand, for a large power system with thousands of state variables, it is usually required to calculate only a specific set of eigenvalues with certain features of interest, for example, local mechanical modes, inter-area modes, etc. Therefore, significant effort has been expended to develop or apply new methods with the following three basic properties:

(a) Sparsity techniques can be used
(b) A specific set of eigenvalues can be found efficiently
(c) Mathematical robustness is guaranteed, i.e. good convergence characteristics and numerical stability.

This paper was sponsored by the IEEE Power Engineering Society for presentation at the IEEE Power Industry Computer Application Conference, Seattle, Washington, May 1 - 5, 1989. Manuscript was published in the 1989 PICA Conference Record.
Among these (a) is of utmost importance since it provides the possibility to handle large power systems. Several sparsity-based methods have been proposed in recent years. PEALS [2] is mainly aimed at the computation of slow inter-area oscillatory modes; the S-Method [3] is most efficient for finding the unstable modes; STEPS [4] can be used for computing the eigenvalues belonging to a small study zone; [5] gives an implementation of the inverse iterations. In addition to these methods, [6] and [7] also report special methods to solve the eigenvalue problem of large power systems.

This paper presents two sparsity-based eigenvalue techniques - simultaneous iterations and the modified Arnoldi method - and their application to the small signal stability analysis of large power systems. These two methods are mathematically well-developed and both have been proved to be very efficient in computing the dominant eigenvalues of large, sparse, unsymmetrical matrices [8,9]. The former is an extension of the classical power method with a tactically designed interaction analysis which makes the method converge reliably. The latter is a method similar to the well-known Lanczos method, but more reliable by having better numerical properties after introducing appropriate modifications. Both simultaneous iterations and the modified Arnoldi method are successful in the eigenanalysis of power systems, as will be illustrated by various numerical examples.

For the small signal stability analysis, an algorithm is proposed to make the eigenvalue problem of power systems fit the two methods mentioned above. The sparsity is fully preserved in the algorithm by using the augmented system state equations as the linearized power system small signal model and performing the corresponding sparsity-oriented calculations. A simple spectral transformation - fractional transformation - is then applied to the augmented state matrix to make dominant the eigenvalues around a specified shift point, so that a group of eigenvalues near the shift point can be computed by either of the two methods. This algorithm is most suitable for calculating a desired number of eigenvalues nearest to or all eigenvalues within certain distance from the shift point. For example, if the local mechanical modes are of interest, shift points with typical frequencies between 1 to 2 Hz can be used to sequentially calculate the eigenvalues in this area.

Two test systems with 20 and 50 machines respectively have been chosen to test the performance of the proposed methods and algorithm. Comparisons are also made for the two eigenvalue methods with other formerly used techniques. Some means for improving the methods as well as experience with the application of the algorithm are discussed and illustrated by numerical examples.
SOLUTION METHODS

Sparsity-Based Eigenvalue Techniques

Since the eigenanalysis of modern power systems deals with matrices of very large dimension, sparsity techniques play a key role in the analysis. A survey of the available sparsity-based eigenvalue techniques for general unsymmetrical matrices results in the following four methods:

(a) Power method and inverse iterations
(b) Simultaneous iterations
(c) Arnoldi method
(d) Lanczos method.

The application of (a) to the eigenanalysis of power systems is reported in [4] and [5]. A proposal of using (b) on vector and array processors is presented in [10] which, however, does not contain numerical results. (d) has also been applied to this problem in [3] and [11]. We note that (a) is good only for computing one eigenvalue, or at most a few with deflation, and this is not satisfactory in most cases. (d) is a very successful method for the symmetrical eigenvalue problem, but has serious flaws in the case of unsymmetrical eigenvalue problems as, for example, the phenomenon of 'breakdown' as pointed out in [12] and also experienced by the authors (see Appendix 1 for a brief discussion of the block Lanczos method). On the other hand, as far as we know, (b) has not been tried on ordinary computers for the eigenanalysis of power systems, and (c) has never been applied to these problems, but both (b) and (c) have been used successfully in some other applications such as the partial eigensolution of stochastic matrices. Since they have generally better numerical properties, it seems that they may be the best candidates for the eigenanalysis of power systems. This is the reason why we choose them as solution methods in this study.

It is interesting to note that all four methods mentioned above belong to a class of methods known as the Krylov method [13] in which the Krylov subspace $\{ x, Ax, \ldots, A^{i-1}x \}$ is used to approach the dominant invariant subspace of a matrix $A$. There are two important and useful features for these methods. First, they are all aimed at finding a few of the dominant eigenvalues of $A$ (here dominance refers to largeness in modulus). This corresponds to the requirement that usually only a few of the eigenvalues are needed in the eigenanalysis of large power systems, although some transformation is necessary to make the required eigenvalues dominant. Second, in these methods the only operation involving $A$ is the matrix-vector multiplication $Ay$. Therefore, it is not necessary to form $A$ explicitly, provided that $Ay$ can be calculated easily. This allows us to use the augmented system state equations to preserve the full sparsity of the problem.

0885-8950/90/0500-0635$01.00 © 1990 IEEE
Simultaneous Iterations

The method of simultaneous iterations was originally proposed in [14] for the symmetrical eigenvalue problem. The extension of the method to general real unsymmetric matrices is first found in [15], and then fully analyzed in [16] and in [8] which also provides a practical algorithm of the lopsided simultaneous iterations. Although the matrices dealt with in the above references are all real, the method is also applicable to general complex matrices, as demonstrated below.

Let $A \in C^{n \times n}$ have eigenvalues $\lambda_i$, with $|\lambda_1| \ge |\lambda_2| \ge \cdots \ge |\lambda_n|$ and

$$\Lambda = \begin{bmatrix} \Lambda_a & 0 \\ 0 & \Lambda_b \end{bmatrix}$$

where $\Lambda_a = \mathrm{diag}(\lambda_1 \ldots \lambda_m)$ and $\Lambda_b = \mathrm{diag}(\lambda_{m+1} \ldots \lambda_n)$. Denote the matrix of the right eigenvectors of $A$ by

$$Q = [\,Q_a \;\; Q_b\,] = [\,q_1 \ldots q_m \,|\, q_{m+1} \ldots q_n\,]$$

where $q_i$ is associated with $\lambda_i$. Then we have

$$A Q_a = Q_a \Lambda_a \quad \text{and} \quad A Q_b = Q_b \Lambda_b \qquad (1)$$

Assuming that we start with $m$ independent trial vectors $U = [\,u_1 \; u_2 \ldots u_m\,] \in C^{n \times m}$, perform the multiplication

$$V = A U \qquad (2)$$

Since $U$ may be represented by

$$U = Q_a C_a + Q_b C_b \qquad (3)$$

where $C_a \in C^{m \times m}$ and $C_b \in C^{(n-m) \times m}$ are coefficient matrices, it is clear that

$$V = A U = Q_a \Lambda_a C_a + Q_b \Lambda_b C_b \qquad (4)$$

Note that in eqn. (4) the first term is more dominant than in eqn. (3), i.e. the components of $Q_b$ have been somehow washed out in $V$. To further refine the eigenvalues in $\Lambda_a$, an interaction analysis is introduced by defining

$$G = U^H U \approx U^H Q_a C_a \qquad (5)$$

and

$$H = U^H V \approx U^H Q_a \Lambda_a C_a \qquad (6)$$

where the superscript $H$ means conjugate-transpose. Assuming that $U^H Q_a$ is non-singular, we obtain

$$G^{-1} H = C_a^{-1} (U^H Q_a)^{-1} U^H Q_a \Lambda_a C_a = C_a^{-1} \Lambda_a C_a \qquad (7)$$

or, if $B$ is the solution of

$$G B = H \qquad (8)$$

then we have

$$C_a B = \Lambda_a C_a \qquad (9)$$

which implies that $\Lambda_a$ and $C_a$ contain the approximate eigenvalues and left eigenvectors of $B$. If $P$ is the matrix of the right eigenvectors of $B$, $P = C_a^{-1}$, then

$$W = V P \approx Q_a \Lambda_a + Q_b \Lambda_b C_b C_a^{-1} \qquad (10)$$

gives an improved set of right eigenvectors of $A$. Taking $W$ as the new set of trial vectors, the above process can be iterated until all required eigenvalues are found. It can be readily shown (see, for example, [16] for a similar proof of the simultaneous bi-iteration method) that this method is convergent for the first $i$ eigenvalues of $A$ if

$$|\lambda_i| > |\lambda_{m+1}| \qquad (11)$$

for $i = 1, 2, \ldots, m$, and the convergence rate for $\lambda_i$ is $|\lambda_{m+1}| / |\lambda_i|$.
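The interaction analysis of eqns (2)-(10) can be checked numerically. The sketch below runs a few orthogonalized power steps to obtain a trial basis U, then performs one refinement step; the matrix is synthetic with a known, well-separated spectrum, not a power system model:

```python
import numpy as np

# One interaction-analysis refinement, eqns (2)-(10), on a synthetic
# complex matrix with known eigenvalues (illustrative only).
rng = np.random.default_rng(1)
n, m = 12, 4
vals = np.array([6+2j, -5+1j, 4-3j, 3+0.5j, 1+1j, -0.9, 0.5j, 0.3,
                 0.2, -0.1, 0.05, 0.01])
X = rng.standard_normal((n, n))
A = X @ np.diag(vals) @ np.linalg.inv(X)      # known spectrum `vals`

U = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
for _ in range(40):                           # steer U toward the dominant
    U, _ = np.linalg.qr(A @ U)                # invariant subspace

V = A @ U                                     # eqn (2)
G = U.conj().T @ U                            # eqn (5)
H = U.conj().T @ V                            # eqn (6)
B = np.linalg.solve(G, H)                     # eqn (8): G B = H
theta, P = np.linalg.eig(B)                   # eigenvalues of B approximate Λ_a
W = V @ P                                     # eqn (10): improved eigenvectors

dominant = vals[np.argsort(-np.abs(vals))][:m]
assert np.allclose(np.sort_complex(theta), np.sort_complex(dominant), atol=1e-6)
```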
Locking Device

It may be noticed that matrix $G$ in eqn. (5) is Hermitian positive definite. Therefore, the Cholesky decomposition can be used to solve eqn. (8). Moreover, when one eigenvalue (say, the ith one) has converged, the first $i$ rows and columns of $G$ will not change in all subsequent calculations, and the Cholesky decomposition of $G$ up to the ith step will also remain unchanged. Thus we can 'lock' the decomposed matrix $G$ up to the ith row and column, and only perform the Cholesky decomposition from the (i+1)th step. This is the so-called 'locking device' which helps to improve the efficiency of the algorithm.
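A minimal sketch of solving eqn. (8) by Cholesky factorization (using scipy on a small random example; the incremental 'locking' bookkeeping itself is omitted here):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# G = U^H U is Hermitian positive definite when the columns of U are
# independent, so G B = H (eqn (8)) can be solved via Cholesky.
rng = np.random.default_rng(2)
U = rng.standard_normal((10, 4)) + 1j * rng.standard_normal((10, 4))
V = rng.standard_normal((10, 4)) + 1j * rng.standard_normal((10, 4))

G = U.conj().T @ U
H = U.conj().T @ V

c, low = cho_factor(G)          # factorize once ...
B = cho_solve((c, low), H)      # ... then back-substitute

assert np.allclose(G @ B, H)
```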
Guard Vectors

In practice, if $s$ dominant eigenvalues of $A$ are required, an $m$ larger than $s$ is usually used in this method to obtain a better convergence rate and to ensure the convergence of all $s$ eigenvalues if $|\lambda_s| = |\lambda_{s+1}|$. The additional vectors are named guard vectors in [8]. A practical question is how to decide the number of guard vectors so as to have the best computational efficiency. Unfortunately, there is no theoretical answer available. The only way to explore this is by numerical tests. Some calculations have been done on our test systems for this problem. The results are reported later, in the section on numerical results.

Fast Iteration Cycles

In [8] the idea of fast iteration cycles is introduced and proved to be very efficient for a variety of large, sparse matrices. It is basically the iteration procedure (2) with the interaction analysis omitted for a number of iterations. For power system problems, however, it seems less attractive since the multiplication in eqn. (2) is quite expensive for large systems and, more seriously, the successive multiplications will force the vectors in $U$ to become dependent so that the matrix $G$ is no longer positive definite, which will make the subsequent calculations very inefficient (this did happen in our test calculations). For this reason, the fast iteration cycles are not considered in our algorithm.
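The failure mode described above is easy to reproduce. In the toy example below (a diagonal matrix with one strongly dominant eigenvalue), repeated multiplications without interaction analysis make the columns of U numerically dependent and G ill-conditioned:

```python
import numpy as np

# Repeated multiplications drive every column of U toward the dominant
# eigenvector, so G = U^T U drifts toward singularity.  (Synthetic example.)
n, m = 10, 3
A = np.diag([4.0, 1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2])
rng = np.random.default_rng(3)
U = rng.standard_normal((n, m))

conds = []
for _ in range(12):
    U = A @ U                                # fast cycle: no interaction analysis
    U = U / np.linalg.norm(U, axis=0)        # column scaling only
    conds.append(np.linalg.cond(U.T @ U))

# The condition number of G grows rapidly; once the columns are numerically
# dependent, the Cholesky factorization of G is no longer reliable.
assert conds[-1] > 1e8
```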
General Procedure

We give the following algorithm, which we used in our program, as a summary of the discussions on simultaneous iterations.

(a) Set up the initial trial vectors $U^1$ with independent columns; let $i = 1$
(b) Calculate $V^i$ by eqn. (2)
(c) Calculate $G^i$ by eqn. (5) and factorize it by the Cholesky decomposition
(d) Calculate $H^i$ by eqn. (6)
(e) Solve for $B$ from eqn. (8)
(f) Perform full eigenanalysis for $B$ by the QR method, obtaining the eigenvalues $\Lambda^i = \mathrm{diag}(\lambda_1^i \ldots \lambda_m^i)$ and the associated right eigenvectors $P^i$
(g) Compare $\Lambda^i$ with $\Lambda^{i-1}$ ($\Lambda^0 = 0$). If all required eigenvalues have been found, exit; otherwise go on to the next step
(h) Calculate the new trial vectors $U^{i+1}$ by eqn. (10)
(i) Let $i = i + 1$ and go to (b) to perform the next iteration
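A compact rendition of the procedure above, for the complex case and without the locking device or guard vectors, might look as follows (the test matrix and tolerances are illustrative only):

```python
import numpy as np

# Toy simultaneous-iterations loop; step comments cite the equation numbers.
def simultaneous_iterations(A, m, tol=1e-8, max_it=200, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    U = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
    prev = np.zeros(m, dtype=complex)              # plays the role of Λ^0 = 0
    theta = prev
    for _ in range(max_it):
        V = A @ U                                  # eqn (2)
        G = U.conj().T @ U                         # eqn (5)
        H = U.conj().T @ V                         # eqn (6)
        B = np.linalg.solve(G, H)                  # eqn (8); Cholesky in the paper
        theta, P = np.linalg.eig(B)                # full eigenanalysis of B
        order = np.argsort(-np.abs(theta))         # most dominant first
        theta, P = theta[order], P[:, order]
        if np.all(np.abs(theta - prev) <= tol * np.abs(theta)):
            break                                  # compare with previous estimates
        prev = theta
        U = V @ P                                  # eqn (10): new trial vectors
        U = U / np.linalg.norm(U, axis=0)          # rescale for numerical safety
    return theta

# verify against a matrix with a known, well-gapped spectrum
vals = np.array([5 + 1j, -4, 3j, 2, 1, 0.5, 0.1, -0.05])
X = np.random.default_rng(4).standard_normal((8, 8))
A = X @ np.diag(vals) @ np.linalg.inv(X)
found = simultaneous_iterations(A, m=3)
assert np.allclose(np.sort_complex(found), np.sort_complex(vals[:3]), atol=1e-5)
```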
Modified Arnoldi Method

The Arnoldi method was first presented in [17]. However, because of its poor numerical properties, it was not successful before implementing several modifications to it [9]. The main problems of the original Arnoldi method are loss of orthogonality and slow convergence if a number of dominant eigenvalues is needed. The latter entails in most cases the need for the full eigenanalysis of a relatively large Hessenberg matrix, which is expensive. These problems can be solved by using the complete reorthogonalization and the iterative process described in [9].

The Arnoldi method presented in [9] is also for general real matrices. The following extends it to general complex matrices with discussions on two modifications.

Let $A \in C^{n \times n}$ and $v_1 \in C^n$ the starting vector with $\|v_1\|_2 = 1$. The subsequent orthonormal vectors are produced by the recursive formula

$$h_{i+1,i} v_{i+1} = (I - V_i V_i^H) A v_i \qquad i = 1, \ldots, m \qquad (12)$$

where $h_{i+1,i}$ is chosen such that $\|v_{i+1}\|_2 = 1$, and $V_i = [\,v_1 \ldots v_i\,]$. From eqn. (12) we can obtain

$$A v_i = V_i h_i' + h_{i+1,i} v_{i+1} \qquad (13)$$

where $h_i' = V_i^H A v_i \in C^i$. For all $m$ equations assembled, eqn. (13) becomes

$$A V_m = V_m H_m + h_{m+1,m} v_{m+1} e_m^T \qquad (14)$$

where $e_m^T = [\,0 \ldots 0 \; 1\,]$ and $H_m$ is an upper Hessenberg matrix with the ith column equal to

$$h_i = \begin{bmatrix} h_i' \\ h_{i+1,i} \\ 0 \end{bmatrix} \qquad (15)$$

Eqn. (14) can be approximated by dropping the second term on the right hand side. Thus,

$$A V_m \approx V_m H_m \qquad (16)$$

which implies that the eigenvalues of $H_m$ are the approximations of the eigenvalues of $A$. Clearly, the error depends on $h_{m+1,m}$, which vanishes when $m = n$. In fact, as $m$ increases, eigenvalues of $H_m$ with largest and smallest modulus will gradually converge to the eigenvalues of $A$. This is the well-known property of the Lanczos method, which also holds here. The approximate eigenvectors of $A$ can be readily found as

$$W = V_m P \qquad (17)$$

where $P$ is the $m \times m$ matrix of the right eigenvectors of $H_m$.
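The recurrence of eqns (12)-(15) can be sketched as follows; the assertion checks the factorization identity of eqn. (14) on a random complex matrix (not a power system model):

```python
import numpy as np

# Basic Arnoldi recurrence, eqns (12)-(15), without reorthogonalization.
def arnoldi(A, v, m):
    n = A.shape[0]
    V = np.zeros((n, m + 1), dtype=complex)
    Hm = np.zeros((m + 1, m), dtype=complex)
    V[:, 0] = v / np.linalg.norm(v)
    for i in range(m):
        w = A @ V[:, i]
        Hm[: i + 1, i] = V[:, : i + 1].conj().T @ w   # h_i' = V_i^H A v_i
        w = w - V[:, : i + 1] @ Hm[: i + 1, i]        # eqn (12)
        Hm[i + 1, i] = np.linalg.norm(w)              # h_{i+1,i}
        V[:, i + 1] = w / Hm[i + 1, i]
    return V, Hm

rng = np.random.default_rng(5)
n, m = 20, 8
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
V, Hm = arnoldi(A, rng.standard_normal(n) + 0j, m)

# eqn (14): A V_m = V_m H_m + h_{m+1,m} v_{m+1} e_m^T
lhs = A @ V[:, :m]
rhs = V[:, :m] @ Hm[:m, :] + Hm[m, m - 1] * np.outer(V[:, m], np.eye(m)[m - 1])
assert np.allclose(lhs, rhs)

# Ritz values, eqn (16): the eigenvalues of H_m approximate those of A.
ritz = np.linalg.eigvals(Hm[:m, :])
```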
Reorthogonalization

It has been found that the original Arnoldi method as mentioned above has numerically poor behavior because of the loss of orthogonality for the vector series $v_i$ after a number of iterations. The natural remedy for this problem is to reorthogonalize every newly-produced vector $v_{i+1}$. A modified Gram-Schmidt method [18] is used for this purpose, in which we simply replace eqn. (12) by the iterative process

$$u_i^{k+1} = (I - V_i V_i^H) u_i^k \qquad k = 1, 2, \ldots \qquad (18)$$

with $u_i^1 = A v_i$. This process continues until the stopping condition (19) is satisfied for some $k = k_0$. Then we take

$$h_{i+1,i} = \|u_i^{k_0}\|_2 \qquad (20a)$$

$$v_{i+1} = u_i^{k_0} / h_{i+1,i} \qquad (20b)$$

In [9] a scheme of incomplete reorthogonalization was proposed. From our experience, however, it is not of great benefit because, first, by using the iterative Arnoldi method a small $m$ is used, and second, the reorthogonalization scheme (18)-(20) is very efficient so that in most cases only one iteration is enough.
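A sketch of the reorthogonalization iterate of eqns (18)-(20). The stopping test used below (accept once the projection no longer removes a large fraction of the norm) is only one plausible reading of condition (19), which did not survive in this text:

```python
import numpy as np

# Iterated projection u <- (I - V_i V_i^H) u of eqn (18).  The acceptance
# test is an assumed stand-in for the paper's condition (19).
def reorthogonalize(Vi, u, eta=0.5, max_k=3):
    for _ in range(max_k):                       # in practice one pass suffices
        norm_before = np.linalg.norm(u)
        u = u - Vi @ (Vi.conj().T @ u)           # eqn (18)
        if np.linalg.norm(u) > eta * norm_before:
            break                                # little was removed: accept
    return u

rng = np.random.default_rng(6)
Vi, _ = np.linalg.qr(rng.standard_normal((30, 6)))
u = reorthogonalize(Vi, rng.standard_normal(30))

# the result is orthogonal to every column of V_i to machine precision
assert np.max(np.abs(Vi.conj().T @ u)) < 1e-12
```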
Iterative Arnoldi Method

Since the original Arnoldi method converges usually for relatively large $m$, the complete eigenanalysis of a Hessenberg matrix of large dimension is necessary. To reduce the order of the Hessenberg matrix, the iterative Arnoldi method is introduced. Let $m$ be fixed at a moderate value. We perform the original Arnoldi method with reorthogonalization to obtain the eigenvalue and eigenvector approximations $\lambda_i$ and $w_i$ for $i = 1, \ldots, m$. Then we repeat the same method but using a new starting vector, for example the one recommended in [9]:

$$v_1 = a \sum_{i=1}^{s} \|(A - \lambda_i I) w_i\|_2 \, w_i \qquad (21)$$

where $a$ is a scalar to normalize $v_1$ and $s$ is the number of eigenvalues to be found. The iteration continues until all required eigenvalues are found. It can be shown (see Appendix 2) that eqn. (21) is equal to

$$v_1 = a' V_m P^* \bar{p} \qquad (22)$$

where $a'$ is again a normalizing scalar, $P^* = [\,p_1 \ldots p_s\,]$ and $\bar{p} = [\,|p_{m1}| \ldots |p_{ms}|\,]^T$. Here $p_i$ is the ith right eigenvector of $H_m$ and $p_{mi}$ is the last element of $p_i$.

With the iterative Arnoldi method we will have the problem of choosing a proper $m$. Recommendations are given later by the numerical results which are based on our test systems.

638
U?
General Procedure
In what follows, we give the algorithm for the modified
Arnoldi method in which both the complete reorthogonalization
and the iterative process are used.
(a) Set up the starting vector v_1; let V_1 = v_1 and i = 1
(b) Calculate u_i = A v_i; let k = 1
(c) Calculate u_i^{k+1} by eqn.(18)
(d) If condition (19) is satisfied, go on to the next step; otherwise let k = k + 1 and go to (c)
(e) Calculate h_{i+1,i} and v_{i+1} by eqn.(20a) and (20b)
(f) Calculate h_i = V_i^H A v_i and form H_i by eqn.(15)
(g) If i = m, the matrix H has been formed; go on to the next step. Otherwise let V_{i+1} = [ V_i  v_{i+1} ], i = i + 1 and go to (b)
(h) Perform a full eigenanalysis of H by the QR method. If all required eigenvalues have been found, exit; otherwise go on to the next step
(i) Calculate the new starting vector v_1 by eqn.(22); let V_1 = v_1, i = 1 and go to (b)
APPLICATION TO POWER SYSTEMS
Power System Modeling
The linearized power system model for small signal stability analysis is easy to derive (for example, see [1]). However, since the state matrix of a power system is in general not sparse, direct construction of the state matrix would be impossible for large systems. Various schemes have been proposed to implement sparsity techniques [2], [3], [4], [5]. Here we adopt the method of [4], in which the augmented system state equations are used:

[ ẋ ]   [ J_A  J_B ] [ x ]
[ 0 ] = [ J_C  J_D ] [ V ]        (23)
where x is the vector of the state variables and V is the vector of the system voltages. J_A, J_B, J_C and J_D are sparse matrices which depend on the system parameters and the operating point. It can be seen that the state matrix A may be formed from eqn.(23) as

A = J_A − J_B J_D^{-1} J_C        (24)

which is only of theoretical significance in this work.
Spectral Transformation
For the small signal stability analysis of power systems, two types of eigenvalues are of special interest: the weakly damped local mechanical modes with frequencies between 0.8 and 2.0 Hz, and the inter-area modes with frequencies between 0.1 and 0.6 Hz. Unfortunately, these eigenvalues are usually much smaller in modulus than other eigenvalues (for example, the fast damped local modes), so that most sparsity-based eigenvalue algorithms cannot be applied directly. The solution to this problem is to apply a spectral transformation to the original state matrix to shift the required eigenvalues so that they become dominant in modulus. The simplest way to do this is to use the fractional transformation

A_t = (A − λ_t I)^{-1}        (25)

which transforms the eigenvalue λ_i of A to

λ̃_i = 1 / (λ_i − λ_t)

where λ_t is a fixed shift. It is easy to verify that transformation (25) maps the eigenvalues of A within the unit circle centered at λ_t to eigenvalues of A_t outside the unit circle at the origin. Thus, if the eigenvalues around some point λ_t (say, a fixed frequency) are required, the shift λ_t can be used in eqn.(25) to magnify the eigenvalues near λ_t. Sparsity-based eigenvalue techniques can then be applied to the transformed matrix A_t to find these dominant eigenvalues.
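A small sketch of the effect of transformation (25) on a toy diagonal spectrum; the sample eigenvalues and the shift are illustrative:

```python
import numpy as np

# The fractional transformation (25) maps each eigenvalue lambda_i of A to
# 1 / (lambda_i - lambda_t), so the mode closest to the shift lambda_t
# becomes dominant in modulus.
A = np.diag([-0.3 + 11.2j, -15.0 + 3.0j, -28.0 + 0.5j])  # toy spectrum
lam_t = -0.1 + 11.0j                                     # shift near a mode

At = np.linalg.inv(A - lam_t * np.eye(3))
mu = np.linalg.eigvals(At)

# Undoing the transformation on the dominant eigenvalue of At recovers the
# mode of A closest to the shift.
closest = 1.0 / mu[np.argmax(np.abs(mu))] + lam_t
```

Here `closest` recovers the toy mode -0.3 + j11.2, since it lies nearest the shift and its transformed image has the largest modulus.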
We would like to comment briefly on the Cayley transformation used in [3]. An advantage of the Cayley transformation is that the transformed matrix A_t (or the matrix S as in [3]) remains real, while under the fractional transformation it becomes complex in general. However, this does not substantially increase the storage requirement for the latter. For example, for a system with 1200 buses, 1400 lines, 300 machines and 3000 state variables, the increase in storage is about 370 KB for double precision calculations, which can easily be handled by modern computers. On the other hand, with the Cayley transformation all complex eigenvalues are calculated twice, once for each member of a conjugate pair, and the real eigenvalues around the origin may also have to be calculated even if only slow oscillatory modes are of interest. Therefore, the fractional transformation provides more flexibility and better computational efficiency.
Practical Implementation
To apply the two eigenvalue techniques described in this paper, we only need to provide the matrix-vector product A y, or A_t y = (A − λ_t I)^{-1} y if the transformed matrix A_t is considered. The corresponding calculation in terms of the augmented state matrix is then to solve the equation

[ J_A − λ_t I  J_B ] [ x ]   [ y ]
[ J_C          J_D ] [ V ] = [ 0 ]

for x. An algorithm for solving this equation with sparsity techniques is given in Appendix 2 of [4].
Numerical Results
Description of the Test Systems
Two test systems are employed to examine the performance of the above two sparsity-based eigenvalue techniques applied to the eigenanalysis of large power systems. In each system, a 9th order model is used for the synchronous machine and its control systems. Loads are represented by constant impedances. The first system, T77, has 77 buses, 20 machines and 180 state variables; the second one, T169, is a 169 bus, 50 machine system with 450 state variables. Complete eigenanalyses by the QR method have been performed for both systems to verify that the results from the new methods are correct. Fig. 1 and Fig. 2 show the eigenvalue distributions of the two systems (in both figures, eigenvalues with real parts less than -30 and with negative imaginary parts are omitted).
Fig. 1 - Eigenvalue Distribution of T77
Fig. 2 - Eigenvalue Distribution of T169
The results from the QR method show that T77 is stable, with all eigenvalues having negative real parts, while T169 is unstable with two unstable modes (eigenvalues): 0.089017 and 1.389242.
All calculations reported in this paper were performed in double precision (error tolerance = 10^-9) on an IBM 4361 computer running CMS.
Guard Vectors
The effect of guard vectors in simultaneous iterations is explored mainly on T77, using a shift λ_t = -0.1 + j11.0 and sequentially calculating up to 5 eigenvalues nearest to λ_t. The 5 calculated eigenvalues are: -0.291205 + j11.181740, -0.299078 + j11.558411, -0.273586 + j10.069816, -0.820126 + j11.686807 and -0.956709 + j9.957278. The CPU times for these calculations are shown in Fig. 3, where s is the number of eigenvalues found in each calculation. It can be seen from the figure that for small s (i.e. only a couple of required eigenvalues), guard vectors do not help improve the computational efficiency. For medium s, however, one guard vector seems best. Several calculations on T169 indicate that one guard vector results in the optimal CPU time in most cases; however, two or more guard vectors are necessary to reach convergence within a reasonable number of iterations when (a) a large number of eigenvalues ( > 5 ) are required and (b) the shift point λ_t is in an area where eigenvalues are densely distributed.

In general, we recommend that one guard vector be used for s < 5, and two for s ≥ 5. For the case when both a large number of eigenvalues are required and λ_t is in an area densely filled with eigenvalues (such as the area of inter-area modes), one should consider three or more guard vectors.
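This rule of thumb can be encoded as a small helper; the function name and the boolean flag are our own illustration of the recommendation:

```python
def guard_vectors(s, dense_spectrum=False):
    """Rule of thumb from the tests on T77 and T169: one guard vector for
    s < 5, two for s >= 5, and three when many eigenvalues are sought in a
    densely populated region such as the inter-area band.  Illustrative
    encoding only, not part of the original algorithm."""
    if s >= 5 and dense_spectrum:
        return 3
    return 2 if s >= 5 else 1
```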
Order of the Hessenberg Matrix
To determine the proper order of the Hessenberg matrix in the modified Arnoldi method (i.e. the iterative Arnoldi method with complete reorthogonalization), we use the same systems as in the above section. The CPU times are given in Fig. 4, where s is again the number of eigenvalues found in each calculation, and the horizontal axis is the order of the Hessenberg matrix minus s (the 'net additional order'). We see from the figure that the additional order is necessary to avoid divergence and/or slow convergence. The computational efficiency is basically the same over a wide range after the initial slow convergence period (i.e. after an additional order of about 4). This slow convergence period varies with the number of eigenvalues required and also depends on the system size. From our experience, a value of 10 to 20 for the additional order is appropriate in most cases: smaller values for fewer required eigenvalues and smaller systems.
Fig. 3 - Effect of Guard Vectors (CPU time vs. number of guard vectors, curves for s = 1 to 5)
Fig. 4 - Effect of Order of the Hessenberg Matrix (CPU time vs. additional order)
Fig. 5 - Comparison of the Two Methods (simultaneous iterations vs. modified Arnoldi method)
Citations
Book
08 Oct 2008
TL;DR: Stochastic Security Analysis of Electrical Power Systems and Power System Transient Stability Analysis and Small-Signal Stability Analysis of Power Systems.
Abstract: Mathematical Model and Solution of Electric Network.- Load Flow Analysis.- Stochastic Security Analysis of Electrical Power Systems.- Power Flow Analysis in Market Environment.- HVDC and FACTS.- Mathematical Model of Synchronous Generator and Load.- Power System Transient Stability Analysis.- Small-Signal Stability Analysis of Power Systems.

248 citations

Journal ArticleDOI
P. Kundur1, G.J. Rogers1, D.Y. Wong1, L. Wang1, M.G. Lauby 
TL;DR: A package of integrated programs for small-signal stability analysis of large interconnected power systems is described, which has extensive modeling capability and uses alternative eigenvalue calculation techniques, making it suitable for the analysis of a wide range of stability and control problems.
Abstract: A package of integrated programs for small-signal stability analysis of large interconnected power systems is described. The package has extensive modeling capability and uses alternative eigenvalue calculation techniques, making it suitable for the analysis of a wide range of stability and control problems. Results of eigenvalue calculations for three power systems of differing size and complexity are presented and the accuracy, consistency and convergence of the alternative calculation methods are discussed. >

189 citations

Journal ArticleDOI
TL;DR: In this paper, the dominant poles of any specified high order transfer function are computed using a generalized Rayleigh quotient (GRL) algorithm, which retains the numerical properties of global and ultimately cubic convergence.
Abstract: This paper describes the first algorithm to efficiently compute the dominant poles of any specified high order transfer function. As the method is closely related to Rayleigh iteration (generalized Rayleigh quotient), it retains the numerical properties of global and ultimately cubic convergence. The results presented are limited to the study of low frequency oscillations in electrical power systems, but the algorithm is completely general.

121 citations

Journal ArticleDOI
TL;DR: In this article, a rotational invariance technique (TLS-ESPRIT) was proposed to mitigate the effect of colored Gaussian noise produced due to filters used for signal preprocessing.
Abstract: This paper proposes a method for online identification of modes corresponding to low-frequency oscillations in a power system. The proposed method has considered the effect of colored Gaussian noise produced due to filters used for signal preprocessing. In order to mitigate the effect of colored Gaussian noise, the paper first proposes a modified total least square estimation of signal parameters via rotational invariance techniques (TLS-ESPRIT) that utilizes first and second rotational shift invariance properties of the signal. In the next step, the modified TLS-ESPRIT utilizes the signal transformed in an orthogonal basis. The proposed method has been compared with the improved Prony, the TLS-ESPRIT and the fourth-order cumulant-based TLS-ESPRIT (4CB-TLS-ESPRIT) using a test signal for identification of the modes at different noise levels. Robustness of the proposed method is established in the presence of colored Gaussian noise through Monte Carlo simulations. Estimation of modes for a two-area power system, using the proposed method, is carried out in the present work. Comparison of the proposed method with other methods is also performed on real-time probing test data obtained from the Western Electricity Coordinating Council (WECC) network.

111 citations


Cites methods from "Application of sparse eigenvalue te..."

  • ...Traditionally, for large power systems, these modes are identified utilizing eigenvalue analysis [1]–[3], using a linearized time-invariant model around the operating point of the system....


Journal ArticleDOI
TL;DR: In this article, improved and new methodologies for the calculation of critical eigenvalues in the small-signal stability analysis of large electric power systems are presented, which augment the robustness and efficiency of existing methods and provide new alternatives.
Abstract: This paper presents improved and new methodologies for the calculation of critical eigenvalues in the small-signal stability analysis of large electric power systems. They augment the robustness and efficiency of existing methods and provide new alternatives. The procedures are implementations of Newton's method, inverse power and Rayleigh quotient iterations, equipped with implicit deflation, and restarted Arnoldi with a locking mechanism and either shift-invert or semi-complex Cayley preconditioning. The various algorithms are compared and evaluated regarding convergence, performance and applicability.

95 citations

References
Book
01 Jan 1965
TL;DR: Theoretical background Perturbation theory Error analysis Solution of linear algebraic equations Hermitian matrices Reduction of a general matrix to condensed form Eigenvalues of matrices of condensed forms The LR and QR algorithms Iterative methods Bibliography.
Abstract: Theoretical background Perturbation theory Error analysis Solution of linear algebraic equations Hermitian matrices Reduction of a general matrix to condensed form Eigenvalues of matrices of condensed forms The LR and QR algorithms Iterative methods Bibliography Index.

7,422 citations

Journal ArticleDOI
Walter Arnoldi1
TL;DR: In this paper, an interpretation of Dr. Cornelius Lanczos' iteration method, which he has named ''minimized iterations'' is discussed, expounding the method as applied to the solution of the characteristic matrix equations both in homogeneous and nonhomogeneous form.
Abstract: An interpretation of Dr. Cornelius Lanczos' iteration method, which he has named "minimized iterations", is discussed in this article, expounding the method as applied to the solution of the characteristic matrix equations both in homogeneous and nonhomogeneous form. This interpretation leads to a variation of the Lanczos procedure which may frequently be advantageous by virtue of reducing the volume of numerical work in practical applications. Both methods employ essentially the same algorithm, requiring the generation of a series of orthogonal functions through which a simple matrix equation of reduced order is established. The reduced matrix equation may be solved directly in terms of certain polynomial functions obtained in conjunction with the generated orthogonal functions, and the convergence of the solution may be observed as the order of the reduced matrix is successively increased with the order of the original matrix as a limit. The method of minimized iterations is recommended as a rapid means for determining a small number of the larger eigenvalues and modal columns of a large matrix and as a desirable alternative for various series expansions of the Fredholm problem.

1,826 citations

Journal ArticleDOI
TL;DR: It is shown that the method of Arnoldi can be successfully used for solvinglarge unsymmetric eigenproblems and bounds for the rates of convergence similar to those for the symmetric Lanczos algorithm are obtained.

515 citations


"Application of sparse eigenvalue te..." refers methods in this paper

  • ...These two methods are mathematically welldeveloped and both have been proved to be very efficient in computing the dominant eigenvalues of large, sparse, unsymmetrical matrices [8] [9]....


Journal ArticleDOI
TL;DR: Numerically stable algorithms are given for updating the GramSchmidt QR factorization of an m X n matrix A (m > n) when A is modified by a matrix of rank one, or when a row or column is inserted or deleted.
Abstract: Numerically stable algorithms are given for updating the GramSchmidt QR factorization of an m X n matrix A (m > n) when A is modified by a matrix of rank one, or when a row or column is inserted or deleted. The algorithms require O(mn) operations per update, and are based on the use of elementary two-by-two reflection matrices and the Gram-Schmidt process with reorthogonalization. An error analysis of the reorthogonalization process provides rigorous justification for the corresponding ALGOL procedures.

447 citations

Journal ArticleDOI
TL;DR: A new class of algorithms which is based on rational functions of the matrix is described, and there are also new algorithms which correspond to rational functions with several poles.

343 citations