Evaluation of Eigenvalue Routines for Large Scale Applications
V. B. Venkayya, Wright Laboratory, Wright-Patterson AFB, OH 45433-6553, USA
The NASA structural analysis (NASTRAN*) program is one of the most extensively used engineering applications programs in the world. It contains a wealth of matrix operations and numerical solution techniques, which were used to construct efficient eigenvalue routines. The purpose of this article is to examine the current eigenvalue routines in NASTRAN and to make efficiency comparisons with a more recent implementation of the block Lanczos algorithm. This eigenvalue routine is now available in several mathematics libraries as well as in several commercial versions of NASTRAN. In addition, the CRAY library maintains a modified version of this routine on their network. Several example problems, with a varying number of degrees of freedom, were selected primarily for efficiency benchmarking. Accuracy is not an issue, because they all gave comparable results. The block Lanczos algorithm was found to be extremely efficient, particularly for very large problems. © 1994 John Wiley & Sons, Inc.*

INTRODUCTION
In NASTRAN (NASTRAN Theoretical Manual,
1981) the real eigenvalue analysis module is used
to obtain structural vibration modes from the
symmetric mass and stiffness matrices, MAA and
KAA , which are generated in the program using
finite element models. Currently the user has a
choice of four methods for solving vibration
mode problems: determinant method, inverse
power method with shifts, tridiagonal method
(Givens' method), and tridiagonal reduction or
FEER method. NASTRAN provides all these
options for user convenience as well as for analysis
efficiency. For example, the Givens' method
is most appropriate when all the eigenvalues are
of equal interest. By the same token, it is not
suitable (because of the need for excessive
computer resources) when the number of degrees of
freedom is too large (greater than 300-400) unless
preceded by Guyan reduction (ASET or
OMIT). The inverse power, determinant, and

*NASTRAN without qualification refers to COSMIC
NASTRAN (or the government version) in this paper.

Shock and Vibration, Vol. 1, No. 3, pp. 201-216 (1994)
FEER methods are most suitable when only a
small subset of the eigenvalues are of interest.
These methods take advantage of the sparseness
of the mass and stiffness matrices and extract
one or a small subset of eigenvalues at a time.
The purpose of this article is to examine, in
some detail, the real eigenvalue analysis methods
currently available in NASTRAN and to make
efficiency comparisons with the block Lanczos
algorithm currently available in some commercial
versions of NASTRAN (for example, MSC
NASTRAN and UAI/NASTRAN). The accuracy
of the eigenvalues is not an issue in this
article because all the methods gave comparable
*This article is a US Government work and, as such, is in
the public domain in the United States of America.
results. Efficiency in terms of computer time is
the only issue in this benchmarking. This study
was made, for all cases, on a single platform, the
CRAY X-MP. The genesis of the block Lanczos
method in all the NASTRANs, as well as the
CRAY version, is the one implemented by
Grimes, Lewis, and Simon (1991).
The first section discusses the general form of
the eigenvalue problem for vibration modes.
Next, a mathematical formulation of the four
methods in NASTRAN is given with emphasis
on the FEER method as a precursor to the Lanczos
method. A detailed mathematical description
of the block Lanczos method is given later. Reference
is also made to the Lanczos method in
MSC NASTRAN and to its implementation by
CRAY Research, Inc. Selected frequencies are
calculated for five structures of varying complexity
using the inverse power method, the FEER
method, the MSC/NASTRAN Lanczos method,
and the CRAY Lanczos method. Finally, results
are discussed and recommendations are made for
possible implementation into NASTRAN.
EIGENVALUE PROBLEM
The general form of the eigenvalue problem for
vibration modes is

    Kx = λMx    (1)

where M and K are the symmetric mass and stiffness
matrices, the eigenvalue λ = ω² is the square
of the natural vibration frequency, and x is the
eigenvector corresponding to λ. The dimension
of the matrices K and M is n × n, where n is the
number of degrees of freedom in the analysis set.
For this paper it is assumed that K and M are at
least positive semidefinite. Thus associated with
Eq. (1) are n eigenpairs (λi, xi) such that

    Kxi = λiMxi,   i = 1, 2, . . . , n.    (2)

Properties of the eigenvectors include

    xiᵀMxj = { Mii  for i = j
             { 0    for i ≠ j    (3)

where Mii is referred to as the modal mass or
generalized mass. It is evident from Eq. (3) that
the eigenvectors are orthogonal with respect to
the mass matrix. Also the eigenvectors are
orthogonal with respect to the stiffness matrix,
that is,

    xiᵀKxj = { Kii  for i = j
             { 0    for i ≠ j    (4)

where Kii is the modal stiffness or generalized
stiffness.

The Rayleigh quotient shows that the modal
mass, Mii, and modal stiffness, Kii, are related to
the eigenvalue λi, that is,

    λi = Kii/Mii.    (5)

For eigenvectors normalized with respect to
modal mass, Eq. (3) can be written as

    xiᵀMxj = { 1  for i = j
             { 0  for i ≠ j    (6)

Now using Eq. (5), Eq. (4) can be written as

    xiᵀKxj = { λi  for i = j
             { 0   for i ≠ j.    (7)
The central issue of a real eigenvalue or normal
modes analysis is to determine the eigenvalues,
λi, and the eigenvectors, xi, which satisfy the
conditions stated by Eqs. (1)-(7). The next
sections present the important elements of the
eigenvalue methods of interest.
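The properties in Eqs. (1)-(7) are easy to verify numerically. The sketch below (NumPy; the 3-degree-of-freedom spring-mass chain is a made-up example, not one of this article's test problems) reduces the generalized problem to standard form through a Cholesky factor of M, then checks M-orthonormality, the modal stiffness relation, and the Rayleigh quotient:

```python
import numpy as np

def vibration_modes(K, M):
    # Reduce K x = lam M x to a standard symmetric problem via M = L L^T,
    # then map the eigenvectors back so that X^T M X = I.
    L = np.linalg.cholesky(M)
    Linv = np.linalg.inv(L)
    A = Linv @ K @ Linv.T              # symmetric standard-form matrix
    lam, Y = np.linalg.eigh(A)
    X = np.linalg.solve(L.T, Y)        # x = L^-T y, M-orthonormal columns
    return lam, X

# hypothetical 3-DOF spring-mass chain
K = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
M = np.diag([1.0, 1.0, 2.0])
lam, X = vibration_modes(K, M)

assert np.allclose(X.T @ M @ X, np.eye(3), atol=1e-10)    # Eq. (6)
assert np.allclose(X.T @ K @ X, np.diag(lam), atol=1e-10) # Eq. (7)
for i in range(3):                                         # Eq. (5)
    x = X[:, i]
    assert np.isclose((x @ K @ x) / (x @ M @ x), lam[i])
```

The same Cholesky reduction reappears later in the Givens' method discussion.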
EIGENVALUE EXTRACTION METHODS IN
NASTRAN
For real symmetric matrices there are four methods
of eigenvalue extraction available in NASTRAN:
the determinant method, the inverse
power method with shifts, the Givens' method of
tridiagonalization, and the tridiagonal reduction
or FEER method. Most methods of algebraic
eigenvalue extraction can be categorized as
belonging to one of two groups: transformation
methods and tracking methods. In a transformation
method the two matrices M and K are
simultaneously subjected to a series of
transformations with the object of reducing them
to a special form (diagonal or tridiagonal) from
which eigenvalues can be easily extracted. These
transformations involve pre- and postmultiplication
by elementary matrices to annihilate the
off-diagonal elements in the two matrices. This
process preserves the original eigenvalues intact
in the transformed matrices. The ratio of the
diagonal elements in the two matrices gives the
eigenvalues. In a tracking method the roots are
extracted, one at a time, by iterative procedures
applied to the dynamic matrix formed from the
original mass and stiffness matrices. In NASTRAN
the Givens' and the FEER methods are
transformation methods, while the determinant
and the inverse power methods are tracking
methods. Both tracking methods and the Givens'
method are discussed briefly in this section;
the Lanczos algorithm, the main emphasis of
this article, is outlined here and in more detail in
the next section.
Determinant Method

For the vibration problem

    Kx = λMx    (8)

the matrix of coefficients, A, has the form

    A = K − (λ0 + Λ)M    (9)

where λ0 is a constant called the shift point;
therefore the shifted value Λ = λ − λ0 replaces λ
as the eigenvalue. The determinant of A can be
expressed as a function of Λ, that is,

    D(Λ) = (Λ1 − Λ)(Λ2 − Λ) · · · (Λn − Λ)    (10)

where Λi, i = 1, 2, . . . , n, are the eigenvalues of
A. In the determinant method D(Λ) is evaluated
for trial values of Λ, selected according to an
iterative procedure, and a criterion is established
to determine when D(Λ) is sufficiently small or
when Λ is sufficiently close to an eigenvalue. The
procedure used for evaluating D(Λ) employs the
triangular decomposition

    A = LU    (11)

for an assumed value of Λ, where L is a lower
unit triangular matrix and U is an upper triangular
matrix. D(Λ) is equal to the product of the
diagonal terms of U. Once an approximate
eigenvalue, Λi, has been accepted, an eigenvector,
xi, is determined from

    LUxi = 0    (12)

by back substitution, where one of the elements
of xi is preset. Because L(Λi) is nonsingular, only
U(Λi) is needed. The determinant method may
not be efficient in some cases if more than a few
eigenvalues are desired, because of the large
number of triangular decompositions of A.

Inverse Power Method with Shifts

The inverse power method with shifts is an
iterative procedure applied directly to Eq. (1) in
the form

    (K − λ0M)x = ΛMx    (13)

where λ0 is the shift point and Λ = λ − λ0 is the
shifted eigenvalue. The iteration algorithm is
defined in the nth iteration step by

    wn = (K − λ0M)⁻¹Mxn−1,   xn = wn/cn    (14)

where cn, a scalar, is equal to that element of the
vector wn with the largest absolute value. At
convergence 1/cn converges to Λ, the shifted
eigenvalue closest to the shift point, and xn
converges to the corresponding eigenvector φ.
Note from Eq. (14) that a triangular decomposition
of the matrix K − λ0M is necessary in order
to evaluate wn.

The shift point λ0 can be changed in order to
improve the rate of convergence toward a
particular eigenvalue, or to improve accuracy and
convergence rates after several roots have been
extracted from a given shift point. Also λ0 can be
calculated such that the eigenvalues within a
desired frequency band can be found, and not
just those that have the smallest absolute value.

For calculating additional eigenvalues, the
trial vectors, xn, in Eq. (14) must be swept to
eliminate contributions due to previously found
eigenvalues that are closer to the shift point than
the current eigenvalue. An algorithm to accomplish
this is given as follows:

    xn = x̃n − Σ(i=1 to m) (φiᵀMx̃n)φi    (15)

where x̃n is the trial vector being swept, m is the
number of previously extracted eigenvalues, and
φi is defined by

    φi = xi,N/(xi,NᵀMxi,N)^1/2    (16)

where xi,N is the last eigenvector found in iterating
for the ith eigenvalue.

The inverse power method allows the user to
define a range of interest [λa, λb] on the total
frequency spectrum and to request a desired
number of eigenvalues, ND, within that range.
When ND is greater than the actual number of
eigenvalues in the range, the method guarantees
the lowest eigenvalues in the range.
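The shifted iteration and the sweeping of previously found modes can be sketched as follows (NumPy; a dense inverse stands in for the triangular decomposition of K − λ0M that the text describes, and the 3-DOF matrices are hypothetical):

```python
import numpy as np

def inverse_power_with_shift(K, M, shift, n_modes, iters=200, seed=0):
    # Shifted inverse power iteration, Eq. (14), with M-orthogonal
    # sweeping of previously found eigenvectors at every step.
    rng = np.random.default_rng(seed)
    Op = np.linalg.inv(K - shift * M)         # stand-in for a factorization
    vals, vecs = [], []
    for _ in range(n_modes):
        x = rng.standard_normal(K.shape[0])
        for _ in range(iters):
            for phi in vecs:                  # sweep out converged modes
                x = x - (phi @ (M @ x)) * phi
            w = Op @ (M @ x)                  # w_n = (K - shift*M)^-1 M x
            c = w[np.argmax(np.abs(w))]       # largest-magnitude element
            x = w / c                         # Eq. (14)
        vals.append(shift + 1.0 / c)          # unshift: lam = lam0 + 1/c
        vecs.append(x / np.sqrt(x @ (M @ x))) # M-normalize the eigenvector
    return np.array(vals), np.column_stack(vecs)

K = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
M = np.diag([1.0, 1.0, 2.0])
vals, vecs = inverse_power_with_shift(K, M, shift=0.05, n_modes=2)

# reference: dense solution of the same generalized problem
Lc = np.linalg.cholesky(M)
Li = np.linalg.inv(Lc)
ref = np.sort(np.linalg.eigvalsh(Li @ K @ Li.T))
assert np.allclose(np.sort(vals), ref[:2], atol=1e-6)
```

With the shift placed below the lowest mode, 1/cn converges to the shifted eigenvalue nearest the shift, exactly as described above.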
Givens' Method of Tridiagonalization

In the Givens' method the vibration problem as
posed by Eq. (8) is first transformed to the form

    Ax = λx    (17)

by the following procedure. The mass matrix,
M, is decomposed into upper and lower triangular
matrices such that

    M = LLᵀ.    (19)
If M is not positive definite, the decomposition in
Eq. (19) is not possible. For example, when a
lumped mass model is used, NASTRAN does
not compute rotary inertia effects. This means
that the rows and columns of the mass matrix
corresponding to the rotational degrees of freedom
are zero, resulting in a singular mass matrix.
In this case the mass matrix must be modified to
eliminate the massless degrees of freedom.
Thus Eq. (8) becomes

    Kx = λLLᵀx    (20)

which implies, after premultiplying by L⁻¹ and
postmultiplying by (Lᵀ)⁻¹, that

    [L⁻¹K(Lᵀ)⁻¹](Lᵀx) = λ(Lᵀx)    (21)

where A = L⁻¹K(Lᵀ)⁻¹. Note that applying L⁻¹ is
easy because L is triangular, and A = L⁻¹K(Lᵀ)⁻¹
is a symmetric matrix. The matrix A is then
transformed to a tridiagonal matrix, Ar, by the
Givens' method; that is, a sequence of orthogonal
transformations, Ti, is defined such that

    Ar = Tᵀr−1 · · · Tᵀ2Tᵀ1AT1T2 · · · Tr−1.    (22)

Recall that an orthogonal transformation is one
whose matrix T satisfies

    TᵀT = TTᵀ = I    (23)

the identity matrix. The eigenvalues of A are
preserved by the transformation, and if

    x = T1T2 · · · Tr−1y    (24)

then from Eq. (22), by repeatedly applying
Eq. (23),

    Ary = λy.    (25)

Equation (25) implies that y is an eigenvector of
the transformed matrix Ar. Thus x can be obtained
from y by Eq. (24).

The eigenvalues of the tridiagonal matrix, Ar,
are extracted using a modified QR algorithm. At
each iteration Ar is factored into the product

    Ar = QrRr    (26)

where Rr is an upper triangular matrix and Qr is
orthogonal. Because Qr is orthogonal,

    Ar+1 = QᵀrArQr = QᵀrQrRrQr = RrQr.    (27)

In the limit as r → ∞, because A is symmetric, Ar
will approach a diagonal matrix. Because
eigenvalues are preserved under an orthogonal
transformation, the diagonal elements of the limiting
diagonal matrix will be the eigenvalues of the
original matrix A.
To obtain the ith eigenvector, yi, of the tridiagonal
matrix, Ar, the matrix Ar − λiI is factored such
that

    Ar − λiI = LiUi    (28)

where Li is a unit triangular matrix and Ui is an
upper triangular matrix. The eigenvector yi is
then obtained by iterating on

    LiUi yi(n) = yi(n−1)    (29)

where the elements of the starting vector yi(0) are
arbitrary. Note that the solution of Eq. (29) is
easily obtained by back substitution because Ui is
banded, with at most three nonzero entries (pk,
qk, rk) in each row.

The eigenvectors of the original coefficient
matrix, A, are then obtained from Eq. (24).
Note that in the Givens' method the dimension
of A equals the dimension of Ar. The major
share of the total effort expended in this method
is in converting A to Ar. Therefore the total effort
is not strongly dependent on the number of
eigenvalues extracted.
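The QR step above can be demonstrated on a small symmetric tridiagonal matrix (NumPy; an unshifted sketch with hypothetical entries, whereas production codes add shifts and deflation):

```python
import numpy as np

def qr_iteration(A, sweeps=300):
    # A_{r+1} = Q_r^T A_r Q_r = R_r Q_r: similarity transformations that
    # drive a symmetric matrix toward diagonal form while preserving
    # its eigenvalues.
    for _ in range(sweeps):
        Q, R = np.linalg.qr(A)
        A = R @ Q
    return A

# small symmetric tridiagonal example
A = (np.diag([2.0, 3.0, 4.0])
     + np.diag([1.0, 0.5], 1)
     + np.diag([1.0, 0.5], -1))
A_lim = qr_iteration(A.copy())

off = A_lim - np.diag(np.diag(A_lim))
assert np.allclose(off, 0.0, atol=1e-8)              # off-diagonals vanish
assert np.allclose(np.sort(np.diag(A_lim)),
                   np.linalg.eigvalsh(A), atol=1e-8) # eigenvalues preserved
```

The diagonal of the limiting matrix carries the eigenvalues of the original matrix, as the text states.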
Tridiagonal Reduction or FEER Method

The tridiagonal reduction or FEER method is a
matrix reduction scheme whereby the eigenvalues
in the neighborhood of a specified point,
λ0, in the eigenspectrum can be accurately
determined from a tridiagonal eigenvalue problem
whose dimension or order is much lower than
that of the full problem. The order of the reduced
problem, m, is never greater than

    m = 2q + 10

where q is the desired number of eigenvalues. So
the power of the FEER method lies in the fact
that the size of the reduced problem is of the same
order of magnitude as the number of desired
roots, even though the actual finite element
model may have thousands of degrees of freedom.

There are five basic steps in the FEER method:

1. Equation (8) is converted to a symmetric
   inverse form

       Bx = ΛMx    (31)

   where

       Λ = 1/(λ − λ0)    (32)

   and λ0 is a shift value.
2. The tridiagonal reduction algorithm, or
   Lanczos algorithm, is used to transform Eq.
   (31) into a tridiagonal form of reduced
   order.
3. The eigenvalues of the reduced matrix are
extracted using a QR algorithm similar to
that described previously.
4. Upper and lower bounds on the extracted
eigenvalues are obtained.
5. The corresponding eigenvectors are computed
   and converted to physical form.
To implement step 1, consider Eq. (8),

    Kx = λMx.

When vibration modes are requested in the
neighborhood of a specified frequency, λ0, Eq. (8)
can be written

    (K − λ0M)x = (λ − λ0)Mx.    (33)

Let K̄ = K − λ0M and λ' = λ − λ0. Then from
Eq. (33)

    K̄x = λ'Mx    (34)

so that

    x = λ'K̄⁻¹Mx,   Mx = λ'MK̄⁻¹Mx

that is,

    MK̄⁻¹Mx = (1/λ')Mx.    (35)

Factor K̄ by Cholesky decomposition, that is,

    K̄ = LdLᵀ    (36)

where L is a lower triangular matrix and d is a
diagonal matrix. Then Eq. (35) can be written

    Bx = ΛMx

where B = M[(Lᵀ)⁻¹d⁻¹L⁻¹]M and Λ = 1/λ' =
1/(λ − λ0). Now step 1 is complete.

To implement step 2, rewrite Eq. (31) as

    B̄x = Λx

where B̄ = M⁻¹B. Now B̄ is reduced to tridiagonal
form, Ā, using the single vector Lanczos
recurrence formulas defined by

    ai,i = viᵀBvi
    v̄i+1 = B̄vi − ai,ivi − divi−1     i = 1, 2, . . . , m    (37)
    di+1 = [v̄i+1ᵀMv̄i+1]^1/2
    vi+1 = v̄i+1/di+1                 i = 1, 2, . . . , m − 1

where vector v0 = 0, v1 is a random starting
vector, and d1 = 0. The reduced tridiagonal
eigenvalue problem is now given as

    Āy = Λ̄y.    (38)
Note from Eq. (41) that Ā is an m × m matrix.

For step 3 the eigenvalues, Λ̄, and eigenvectors,
y, of Eq. (38) are obtained as described for
the Givens' method. The eigenvectors are
normalized so that yᵀy = 1.

For step 4 an error measure is computed for
each extracted eigenvalue:

    ei = |dm+1 ymi/Λ̄i|.    (43)

In Eq. (43) λ̄i is an approximation to the exact
eigenvalue λi in Eq. (8), dm+1 is calculated from
Eqs. (37), ymi is the last component of the ith
eigenvector, yi, of Ā, and Λ̄i is the ith eigenvalue
of Ā. The ith eigenvalue λ̄i is acceptable if ei is
less than or equal to a preset error tolerance.

Now step 5 is implemented for acceptable
eigenvalues. If (Λ̄, y) is an eigenpair of Eq. (38),
then

    Āy = Λ̄y    (44)

or, from Eqs. (40) and (41),

    VᵀBVy = Λ̄VᵀMVy    (45)

thus

    BVy = Λ̄MVy    (46)

that is, (Λ̄, x) with x = Vy is an eigenpair of
Eq. (31). Thus for step 5 the eigenvectors of
Eq. (31), or equivalently Eq. (8), are calculated
from

    x = Vy

and the eigenvalue λ̄ is calculated from Eq. (32),
that is,

    λ̄ = λ0 + 1/Λ̄.
The reduced tridiagonal matrix Ā of Eq. (38) has
the form

         | a11  d2                           |
         | d2   a22  d3                      |
    Ā =  |      d3   a33  d4                 |    (41)
         |           ·    ·     ·            |
         |            dm−1  am−1,m−1  dm     |
         |                  dm        amm    |
The eigenvalue Λ̄ of the reduced problem,
Eq. (38), approximates the eigenvalue Λ of
Eq. (31), and y is the corresponding eigenvector
of Ā. The Lanczos formulas generate a V matrix,
vector by vector, that is,

    V = [v1, v2, . . . , vm]    (40)

and Eqs. (37) are modified by NASTRAN such
that each vector vi+1 is reorthogonalized to all
previously computed v vectors; that is, V is
M-orthonormal:

    VᵀMV = I.    (42)
Note that in the FEER method the matrix B̄
enters the recurrence formulas, Eqs. (37), only
through the matrix-vector multiply terms B̄vi.
Therefore B̄ is not modified by the computations.
Lanczos procedures for real symmetric matrices
require only that the user provide a subroutine
that, for any given vector z, computes B̄z.
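A minimal sketch of the single vector recurrence, Eqs. (37), with the full M-reorthogonalization described above (NumPy; a dense inverse stands in for the Cholesky solve with K̄, and the 4-DOF matrices are hypothetical):

```python
import numpy as np

def feer_lanczos(K, M, shift, m, seed=0):
    # Single vector Lanczos run on Bbar = (K - shift*M)^-1 M with
    # M-orthonormal vectors and full reorthogonalization.
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    Op = np.linalg.inv(K - shift * M)         # stand-in for Cholesky solve
    V = np.zeros((n, m))
    a = np.zeros(m)
    d = np.zeros(m + 1)
    v = rng.standard_normal(n)
    V[:, 0] = v / np.sqrt(v @ (M @ v))        # M-normalized start vector
    for i in range(m):
        w = Op @ (M @ V[:, i])                # Bbar v_i
        a[i] = V[:, i] @ (M @ w)              # a_ii
        w = w - a[i] * V[:, i]
        if i > 0:
            w = w - d[i] * V[:, i - 1]
        for j in range(i + 1):                # full M-reorthogonalization
            w = w - (V[:, j] @ (M @ w)) * V[:, j]
        d[i + 1] = np.sqrt(w @ (M @ w))
        if i + 1 < m:
            V[:, i + 1] = w / d[i + 1]
    Abar = np.diag(a) + np.diag(d[1:m], 1) + np.diag(d[1:m], -1)
    Lam = np.linalg.eigvalsh(Abar)            # reduced eigenvalues
    return shift + 1.0 / Lam                  # map back to the physical ones

# hypothetical 4-DOF system; with m = n the reduced problem is exact
K = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
M = np.diag([1.0, 1.0, 1.0, 2.0])
approx = np.sort(feer_lanczos(K, M, shift=0.0, m=4))
Lc = np.linalg.cholesky(M)
Li = np.linalg.inv(Lc)
ref = np.sort(np.linalg.eigvalsh(Li @ K @ Li.T))
assert np.allclose(approx, ref, atol=1e-8)
```

In practice m is kept far smaller than n, which is the whole point of the reduction.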
BLOCK LANCZOS METHOD
Recall that the eigenvalue problem in vibration
analysis is given by Eq. (8), that is,

    Kx = λMx

where K and M are symmetric positive definite
matrices. Generally the eigenvalues of interest
are the smallest ones, but they are often poorly
separated, whereas the largest eigenvalues, which
are not of interest, have good separation. Also
convergence rates are very slow at the low end of
the spectrum and fast at the higher end.
Convergence to the desired set of eigenvalues can be
accelerated by a spectral transformation. Consider
the problem

    M(K − σM)⁻¹Mx = νMx    (47)

where σ, the shift, is a real parameter. It can be
shown that (λ, x) is an eigenpair of Eq. (8) if and
only if [1/(λ − σ), x] is an eigenpair of Eq. (47).
The spectral transformation does not change the
eigenvectors, but the eigenvalues of Eq. (47) are
related to the eigenvalues of Eq. (8) by

    ν = 1/(λ − σ).    (48)
This transformation will allow the Lanczos algo
rithm to be applied even when M is semidefinite.
Consider the effect of the spectral transformation
on a satellite problem discussed in detail in the
next section. Figure 1 shows the shape of the
transformation. Table 1 shows the effect of the
transformation using an initial shift of (T =
0.046037. Note that the smallest eight eigen
values are transformed from closely spaced ei
genvalues to eigenvalues with good separation.
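The effect can be reproduced with a toy spectrum (NumPy; the shift is the one quoted above, but the eigenvalues here are hypothetical, not the satellite's):

```python
import numpy as np

sigma = 0.046037                      # initial shift quoted in the text
lam = np.array([0.05, 0.06, 0.08, 0.20, 5.0, 40.0])  # hypothetical spectrum
nu = 1.0 / (lam - sigma)              # Eq. (48)

# the transformation is invertible: lam = sigma + 1/nu
assert np.allclose(sigma + 1.0 / nu, lam)
# ordering is reversed: the lowest modes become the largest nu
assert np.all(np.diff(nu) < 0)
# the relative gap between the two lowest modes is greatly enlarged
gap_before = (lam[1] - lam[0]) / (lam[-1] - lam[0])
gap_after = (nu[0] - nu[1]) / (nu[0] - nu[-1])
assert gap_after > 100 * gap_before
```

Clustered eigenvalues just above the shift become well separated, which is exactly what accelerates Lanczos convergence.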
Our objective is to define the spectral
transformation block Lanczos algorithm. Let us consider
first the basic block Lanczos algorithm.
Basic Block Lanczos Algorithm
Consider the Lanczos algorithm (Cullum and
Willoughby, 1985, 1991) for the eigenvalue
problem

    Hx = λx    (49)

where H is symmetric.
The block Lanczos iteration with block size p
for an n x n matrix H is given as:
Initialization:
    set Q0 = 0
    set B1 = 0
    choose R1 and orthonormalize the
    columns of R1 to obtain Q1
Lanczos Loop:
    For j = 1, 2, 3, . . .
        set Vj = HQj − Qj−1Bjᵀ
        set Aj = QjᵀVj
        set Rj+1 = Vj − QjAj
        Compute the orthogonal factorization
        Qj+1Bj+1 = Rj+1
    End Loop.
Matrices Qj, Vj, and Rj for j = 1, 2, . . . are
n × p; Aj and Bj are p × p. Aj is symmetric and Bj
is upper triangular. The block size p is the number
of column vectors of Qj. So if p = 1, then Qj is a
column vector, q. The matrix H is not explicitly
required, only a subroutine that computes Hq
for a given vector q. Aj and Bj are generalizations
of the scalars aj and dj in the ordinary Lanczos
recurrence.
The recurrence formula in the Lanczos loop
can also be written as
    Rj+1 = Qj+1Bj+1 = HQj − QjAj − Qj−1Bjᵀ.    (50)
The orthogonal factorization of the residual,
Rj+1, implies that the columns of Qj+1 are
orthonormal. Indeed it has been shown that the
combined column vectors of the matrices Q1,
Q2, . . . , Qj, called the Lanczos vectors, form an
orthonormal set.
The blocks of Lanczos vectors form an n × jp
matrix Wj, where

    Wj = [Q1, Q2, . . . , Qj].    (51)

From the algorithm itself a jp × jp block
tridiagonal matrix, Tj, is defined such that
         | A1   B2ᵀ                         |
         | B2   A2   B3ᵀ                    |
    Tj = |      B3   ·     ·                |    (52)
         |           ·     ·       Bjᵀ      |
         |            Bj−1  Aj−1            |
         |                  Bj     Aj       |
Because the matrices Bj are upper triangular, Tj is
a band matrix with half band width p + 1. The first
j formulas defined by Eq. (50) can be combined,
using Eqs. (51) and (52), into the single formula

    HWj = WjTj + Qj+1Bj+1Ejᵀ    (53)

where Ej is a jp × p matrix of zeros except that its
last p × p block is a p × p identity matrix.
Premultiplying Eq. (53) by Wjᵀ implies

    WjᵀHWj = WjᵀWjTj + WjᵀQj+1Bj+1Ejᵀ = Tj    (54)

because

    WjᵀWj = I  and  WjᵀQj+1 = 0.
Equation (54) implies that Tj is the orthogonal
projection of H onto the subspace spanned by the
columns of Wj. Also if (θ, s) is an eigenpair of Tj,
that is, Tjs = sθ, then (θ, Wjs) is an approximate
eigenpair of H. A discussion of the accuracy of
the approximation is delayed until the spectral
transformation block Lanczos algorithm is
considered. Basically the Lanczos algorithm replaces
a large and difficult eigenvalue problem
involving H by a small and easy eigenvalue
problem involving the block tridiagonal matrix Tj.
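The basic loop above can be sketched directly (NumPy; full reorthogonalization is added for numerical robustness, and the 12 × 12 random symmetric H is hypothetical):

```python
import numpy as np

def block_lanczos(H, p, steps, seed=0):
    # Basic block Lanczos loop for symmetric H with block size p.
    # Returns the Lanczos vectors W_j and the block tridiagonal T_j.
    rng = np.random.default_rng(seed)
    n = H.shape[0]
    Q_prev, B = np.zeros((n, p)), np.zeros((p, p))
    Q, _ = np.linalg.qr(rng.standard_normal((n, p)))   # orthonormal Q_1
    Qs, As, Bs = [Q], [], []
    for _ in range(steps):
        V = H @ Q - Q_prev @ B.T
        A = Q.T @ V
        R = V - Q @ A
        W = np.column_stack(Qs)
        R = R - W @ (W.T @ R)          # full reorthogonalization (robustness)
        Q_next, B_next = np.linalg.qr(R)
        As.append(A)
        Bs.append(B_next)
        Q_prev, Q, B = Q, Q_next, B_next
        Qs.append(Q)
    T = np.zeros((steps * p, steps * p))
    for i, Ai in enumerate(As):        # diagonal blocks A_j
        T[i*p:(i+1)*p, i*p:(i+1)*p] = Ai
    for i, Bi in enumerate(Bs[:-1]):   # off-diagonal blocks B_j
        T[(i+1)*p:(i+2)*p, i*p:(i+1)*p] = Bi
        T[i*p:(i+1)*p, (i+1)*p:(i+2)*p] = Bi.T
    return np.column_stack(Qs[:steps]), T

rng = np.random.default_rng(1)
S = rng.standard_normal((12, 12))
H = (S + S.T) / 2.0                    # hypothetical symmetric test matrix
W, T = block_lanczos(H, p=2, steps=4)

assert np.allclose(W.T @ W, np.eye(8), atol=1e-10)  # orthonormal Lanczos vectors
assert np.allclose(W.T @ H @ W, T, atol=1e-8)       # Eq. (54): T_j = W_j^T H W_j
```

The check at the end is exactly the projection property of Eq. (54).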
Spectral Transformation Block Lanczos
Algorithm

Because our primary consideration is vibration
problems, consider the eigenproblem posed by
Eq. (47), that is,

    M(K − σM)⁻¹Mx = νMx.

The Lanczos recurrence with block size p for
solving Eq. (47) is given by

Initialization:
    set Q0 = 0
    set B1 = 0
    choose R1 and orthonormalize the columns
    of R1 to obtain Q1 with Q1ᵀMQ1 = Ip
Lanczos Loop:
    For j = 1, 2, 3, . . .
        set Vj = (K − σM)⁻¹(MQj) − Qj−1Bjᵀ
        set Aj = Vjᵀ(MQj)
        set Rj+1 = Vj − QjAj
        Compute Qj+1 and (MQj+1) such that
        (a) Qj+1Bj+1 = Rj+1
        (b) Qj+1ᵀ(MQj+1) = Ip
    End Loop.
Note that the algorithm as written requires only
one multiplication by M per step, and no
factorization of M is required. The matrices Qj are
now M-orthogonal rather than orthogonal, that
is, QjᵀMQj = Ip. Also the Lanczos vectors are
M-orthonormal, that is,

    WjᵀMWj = I.    (55)
The recurrence formula in the Lanczos loop
can also be written as

    Qj+1Bj+1 = (K − σM)⁻¹MQj − QjAj − Qj−1Bjᵀ.    (56)

Now, as before, combining all j formulas of
Eq. (56) into one equation yields

    (K − σM)⁻¹MWj = WjTj + Qj+1Bj+1Ejᵀ    (57)

where Wj, Tj, and Ej are as defined in Eq. (53).
Premultiplying Eq. (57) by WjᵀM implies

    WjᵀM(K − σM)⁻¹MWj = WjᵀMWjTj
        + WjᵀMQj+1Bj+1Ejᵀ = Tj    (58)
so that if θ is an approximate eigenvalue of Tj,
then

    ν = σ + 1/θ    (59)

is an approximate eigenvalue of Eq. (8). Recall
that the spectral transformation does not change
the eigenvectors; therefore y = Wjs is an
approximate eigenvector for Eq. (8).
Let us examine the approximations obtained
by solving the block tridiagonal eigenvalue
problem involving the matrix Tj. Let (θ, s) be an
eigenpair of Tj, that is,

    Tjs = sθ

and let y = Wjs. Then premultiplying Eq. (57) by
M and postmultiplying by s gives

    M(K − σM)⁻¹MWjs − MWjTjs = MQj+1Bj+1Ejᵀs
    M(K − σM)⁻¹My − MWjsθ = MQj+1Bj+1Ejᵀs
    M(K − σM)⁻¹My − Myθ = MQj+1Bj+1Ejᵀs.    (60)

Recall that for any vector q, ‖q‖M⁻¹ = (qᵀM⁻¹q)^1/2
(Parlett, 1980). Therefore, taking the norm of
Eq. (60) and using Eq. (55),

    ‖M(K − σM)⁻¹My − Myθ‖M⁻¹
        = ‖MQj+1Bj+1Ejᵀs‖M⁻¹
        = ‖Bj+1Ejᵀs‖₂ ≡ βj.    (61)

Note that βj is easily computed for each
eigenvector s. It is just the norm of the p-vector
obtained by multiplying the upper triangular matrix
Bj+1 with the last p components of s.
From Grimes et al. (1991) the error in
eigenvalue approximations for the generalized
eigenproblem is given by

    |1/(λ − σ) − θ| ≤ βj.    (62)

Thus βj is a bound on how well an eigenvalue of
Tj approximates an eigenvalue of Eq. (47).
Recall that if θ is an approximate eigenvalue of
Tj, then from Eq. (48)

    ν = σ + 1/θ

is an approximate eigenvalue of Eq. (8). Consider

    |λ − ν| = |λ − σ − 1/θ|
            = |(λ − σ)/θ| |1/(λ − σ) − θ|.

For λ near σ + 1/θ, |λ − σ| ≈ 1/|θ|, so that

    |λ − ν| ≤ βj/θ².    (63)

Thus βj/θ² is a bound on how well the eigenvalues
of Eq. (47) approximate the eigenvalues of
Eq. (8).
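The algebraic step behind this mapping of the bound can be checked directly (a numerical identity check over arbitrary sampled values):

```python
import numpy as np

# Identity used above: with nu = sigma + 1/theta,
#   |lam - nu| = |(lam - sigma)/theta| * |1/(lam - sigma) - theta|
# so a residual bound beta on the transformed eigenvalue theta maps back
# to approximately beta/theta^2 on the original spectrum.
rng = np.random.default_rng(0)
for _ in range(1000):
    lam, sigma, theta = rng.uniform(0.1, 10.0, size=3)
    if abs(lam - sigma) < 1e-3:
        continue
    nu = sigma + 1.0 / theta
    lhs = abs(lam - nu)
    rhs = abs((lam - sigma) / theta) * abs(1.0 / (lam - sigma) - theta)
    assert np.isclose(lhs, rhs, rtol=1e-9)
```

Because the desired modes have large transformed eigenvalues θ, the factor 1/θ² makes the back-mapped error bound small precisely where accuracy matters.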
Analysis of Block Tridiagonal Matrix Tj

The eigenproblem for Tj is solved by reducing Tj
to a tridiagonal form and then applying the
tridiagonal QL algorithm. The eigenextraction is
accomplished in three steps. First, an orthogonal
matrix QT is found so that Tj is reduced to a
tridiagonal matrix H, that is,

    QTᵀTjQT = H.    (64)

Second, an orthogonal matrix QH is found so that
H is reduced to a diagonal matrix of eigenvalues,
Λ, that is,

    QHᵀHQH = Λ.    (65)

Third, combining Eqs. (64) and (65) gives

    (QTQH)ᵀTj(QTQH) = Λ

where QTQH is the eigenvector matrix for Tj. The
orthogonal matrices QH and QT are products of
simple orthogonal matrices, Givens' rotations,
QH1QH2 · · · QHt and QT1QT2 · · · QTt. The
algorithms used for steps 1 and 2 are standard and
numerically stable algorithms drawn from the
EISPACK collection of eigenvalue routines.
Note from Eq. (61) that only the bottom p entries
of the eigenvectors of Tj are needed for the
evaluation of the residual bound. Therefore it is
unnecessary to compute and store the whole
eigenvector matrix for Tj. Only the last p
components of the eigenvector matrix are computed.
The error bounds on the eigenvalues, Eqs. (62)
and (63), are used to determine which eigenvectors
are accurate enough to be computed. At the
conclusion of the Lanczos run the EISPACK
subroutines are used to obtain the full eigenvectors
of Tj. Then the eigenvectors for Eq. (47) are
found through the transformation x = Wjs.
Other Considerations in Implementing the
Lanczos Algorithm
The use of the block Lanczos algorithm in the
context of the spectral transformation necessitates
careful attention to a series of details:
1. the implications of M-orthogonality of the
blocks;
2. block generalization of single vector
orthogonalization schemes;
3. the effect of the spectral transformation on
orthogonality loss;
4. the interactions between the Lanczos
algorithm and the shifting strategy.
All of these issues are addressed in detail by
Grimes et al. (1986, 1991).
The block Lanczos algorithm as described in
the previous sections was developed as a general
purpose eigensolver for
MSC NASTRAN (1991
).
Boeing designed the software such that the
eigensolver was independent of the form of the
sparse matrix operations required to represent
the matrices involved and their spectral
transformations. The key operations needed were
matrix-block products, triangular block solves, and
sparse factorizations. These and the data structures
representing the matrices are isolated from
the eigensolver. Therefore, the eigensolver code
could be incorporated in different environments.
For this study we tested the block Lanczos
algorithm as incorporated in MSC NASTRAN
and as further developed by CRAY Research,
Inc. The block Lanczos algorithm in MSC uses
the factorization and solve modules that are
standard operations in MSC. The CRAY Lanczos
code uses an eigensolver with matrix factorization,
triangular solves, and matrix-vector products
from a mathematical library. For vibration
problems the code can be used with the stiffness
and mass matrices, K and M, as generated by
NASTRAN. NASTRAN is run to generate binary
files containing the K and M matrices.
These files are input files to the CRAY code,
which calculates eigenvalues, checks the
orthogonality of the eigenvectors, x, via xᵀKx,
calculates the Rayleigh quotient xᵀKx/xᵀMx to
compare with the computed eigenvalues, and
calculates the norm of the eigenvector residual.
In addition, binary eigenvalue and eigenvector
output files are suitable for input to NASTRAN
for further processing if desired.
TEST PROBLEMS
In this section several test problems were solved
using the inverse power and FEER eigenvalue
extraction methods in COSMIC NASTRAN, the
Lanczos algorithm in MSC NASTRAN, and the
Lanczos algorithm as implemented by CRAY
Research. These problems were chosen based on
the complexity of the finite element model in
terms of the kinds of elements used and the
number of degrees of freedom. All methods, as
expected, gave approximately the same numerical
results. The only criterion used to compare the
different methods was the number of seconds
needed to reach a solution, given that all problems
were solved on the same platform, a CRAY
X-MP.
Problem 1: Square Plate
A square 200 x 200 in. plate in the xy plane was
modeled with QUAD4 elements only. Five
meshes were defined. Details are given in Table
2. All elements were 1.0 in. thick. Material
properties were constant for all meshes. Each plate
was completely fixed along the x-axis and along
the y-axis at x = 200 in.
For all cases five frequencies were requested
in the interval [0, 20 Hz]. Table 3 gives the results
for the 10 x 10 plate and Table 4 gives the results
for the 50 x 50 plate. As expected within each
case, the numerical results from the different
eigenextraction techniques are approximately
the same. The differences in numerical results
between the 10 x 10 case and the 50 x 50 case
reflect the fineness of the mesh for the 50 x 50
case. Both Lanczos algorithms were run with a
fixed block size of p = 7.
Table 5 gives the CPU time in seconds on
the CRAY X-MP needed to extract five
frequencies for each case. Recall that the CRAY
Lanczos algorithm needs to obtain the mass and
stiffness matrices in binary form from NASTRAN.
Thus the time given for this algorithm is the total
time from two computer runs, that is, the time to
obtain the mass and stiffness matrices plus the
time to run the Lanczos algorithm separately.
Figure 2 is a plot of the degrees of freedom
versus the CPU time in seconds on the CRAY for
the four eigenvalue extraction techniques.
Problem 2: Intermediate Complexity Wing
A three-spar wing shown in Fig. 3 was modeled
with 88 grids and 158 elements of the following
types: 62 QUAD4, 55 Shear, 39 Rod, and 2
TRIA3. All elements varied in thickness or
crosssectional area. Material properties were
the same for all elements. The wing was completely
fixed at the root, which left 390 degrees of
freedom. Five frequencies were requested in the
interval [0, 300 Hz]. Table 6 gives the frequencies
calculated and the CPU time in seconds for the
four eigenextraction algorithms. As for Problem
1 both Lanczos algorithms were run with a fixed
block size of p = 7.
Problem 3: Radome
A composite radome shown in Fig. 4 was modeled
with 346 grids and 630 elements of the
following types: 54 TRIA2, 284 Bar, and 292
QUAD4. The QUAD4s were both isotropic and
composite, with 46 elements isotropic and 246
elements modeled as four cross-ply unsymmetric
laminates of 40, 38, 36, and 32 layers,
respectively. The radome was completely fixed at the
base, which left 1782 active degrees of freedom.
Ten frequencies were requested in the interval
[0, 100 Hz]. Table 7 gives the frequencies
calculated and the CPU time in seconds for the four
eigenextraction algorithms. Both Lanczos
algorithms were run with a fixed block size of p = 7.
Problem 4: Satellite
A satellite shown in Fig. 5 was modeled with
2295 grids and 1900 elements distributed as
shown in Table 8.
Sixteen different materials were referenced,
and 34 coordinate systems were used.

[Table 8 fragment: CONROD, SHEAR 395,
TRIA3 15, QUAD4 572, BAR 39.]

All elements varied in thickness and cross-sectional
area, and concentrated masses were added to
selected grids. The satellite has 5422 active degrees
of freedom. Fifty frequencies were requested in
the interval [0, 20 Hz]. Table 9 gives every fifth
frequency calculated and the CPU time in
seconds for the four eigenextraction algorithms.
Again both Lanczos algorithms were run with a
fixed block size of p = 7.
Problem 5: Forward Fuselage
(FS 360.0620.0)
A section of a forward fuselage from FS 360.0 to
620.0 shown in Fig. 6 was modeled with 1038
grids and 3047 elements distributed as shown in
Table 10.
Eleven different materials were referenced.
All elements varied in thickness or cross-sectional
area. The fuselage was fixed in the 1-2-3
directions at FS 620.0. The model had 6045 active
degrees of freedom. Sixty frequencies were
requested in the interval [0, 20 Hz]. Table 11 gives
every fifth frequency calculated plus the last one
and the CPU time in seconds for the four
eigenextraction algorithms. Both Lanczos
algorithms were run with a fixed block size of p = 7.

[Table 11 fragment, every fifth frequency (Hz)
and CPU time (s).
COSMIC inverse power: no solution in 3000 s.
COSMIC FEER: 0.461, 0.819, 2.093, 3.090, 5.577,
7.467, 12.247, 15.175, 16.097, 17.515, 18.183,
19.403, 22.658; 180.348 s.
MSC Lanczos: 0.462, 0.823, 2.507, 3.440, 5.546,
7.362, 10.767, 14.020, 15.682, 16.688, 17.805,
18.303, 19.063; 135.812 s.
CRAY Lanczos: same frequencies as MSC
Lanczos; 66.011 s.]
CONCLUSIONS
The current real eigenvalue analysis capability in
NASTRAN is quite extensive and adequate for
small and medium-size problems. In particular,
the FEER method's performance is reasonable, at
least for the problems tested in this paper. However,
the block Lanczos method as implemented
is more efficient for all the problems.
An analysis of the results in the preceding section
clearly shows that the block Lanczos algorithm
merits consideration for possible implementation
into NASTRAN. Comparing CPU seconds in Table 5
implies that the CRAY Lanczos method runs
9464% faster than the FEER method. Similarly,
from Tables 6, 7, 9, and 11, the CRAY Lanczos
runs 66, 54, 260, and 177% faster, respectively,
than the FEER method.
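For clarity, the "percent faster" figures quoted here are ratios of CPU times minus one. A small helper, shown with hypothetical timings rather than the paper's, illustrates the arithmetic:

```python
def percent_faster(t_slow: float, t_fast: float) -> float:
    """How much faster the t_fast run is, expressed as a
    percentage of the faster run's time."""
    return 100.0 * (t_slow / t_fast - 1.0)

# Hypothetical illustration: a 150 s run versus a 60 s run
# makes the faster run 150% faster.
print(percent_faster(150.0, 60.0))  # -> 150.0
```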
The comparisons are not nearly as striking
when we consider the CRAY Lanczos and the
MSC Lanczos. Comparing CPU seconds, the
CRAY Lanczos runs from 0.2% faster in Table 5
to 105.7% faster in Table 11. The difference in
CPU time reported for these two methods can be
attributed to two factors: algorithm enhancements
and the mathematical library on the CRAY
versus the standard mathematical modules in
MSC. The CRAY Lanczos is based on Grimes et
al. (1991), dated July 1991. The MSC Lanczos
is based on Grimes et al. (1986), dated 1986,
plus subsequent updates by MSC. All problems
were run under MSC NASTRAN Version
66a. Recent communications with Roger G.
Grimes at Boeing, one of the developers of the
shifted block Lanczos algorithm, reveal that the
Lanczos algorithm is continuously being refined
and improved.
The problems chosen to test the four
eigenextraction methods, although diverse in terms of
the number of degrees of freedom and element
distribution, were stable, with no clusters of multiple
eigenvalues. The multiple-eigenvalue problem
and its relation to the user-chosen block size,
p, is discussed in detail in Grimes et al. (1991).
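The role of the block size can be seen in a toy implementation. The sketch below is a deliberately simplified block Lanczos with full reorthogonalization, with all names and details assumed for illustration; the production algorithm of Grimes et al. adds spectral shifts, matrix inversion, and selective reorthogonalization. With a block size p at least as large as the multiplicity, a repeated eigenvalue is resolved in full:

```python
import numpy as np

def block_lanczos(A, p, steps, seed=0):
    """Simplified block Lanczos with full reorthogonalization
    (a sketch only, not the production shifted block Lanczos)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Q, _ = np.linalg.qr(rng.standard_normal((n, p)))
    blocks = [Q]
    for _ in range(steps):
        W = A @ Q
        for Qj in blocks:            # full reorthogonalization
            W -= Qj @ (Qj.T @ W)
        Q, _ = np.linalg.qr(W)
        blocks.append(Q)
    V = np.hstack(blocks)
    T = V.T @ A @ V                  # projected block-tridiagonal matrix
    return np.linalg.eigvalsh(T)     # Ritz values

# A 12 x 12 matrix with a triple eigenvalue at 1.0: block size p = 3
# (>= the multiplicity) lets the iteration resolve the whole cluster.
A = np.diag([1.0, 1.0, 1.0, 2.0, 3.0, 4.0, 5.0,
             6.0, 7.0, 8.0, 9.0, 10.0])
theta = block_lanczos(A, p=3, steps=3)
print(np.round(theta[:4], 6))  # the triple eigenvalue appears three times
```

Dropping the block structure (p = 1) would, in exact arithmetic, deliver only one copy of the repeated eigenvalue, which is why the choice of p matters for models with clustered modes.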
The authors conclude that, based on timing
results for the selected problems, the shifted
block Lanczos algorithm should be considered
for possible implementation into NASTRAN.
REFERENCES
Cullum, J., and Willoughby, R. A., 1985, "A Survey of Lanczos Procedures for Very Large Real 'Symmetric' Eigenvalue Problems," Journal of Computational and Applied Mathematics, Vols. 12, 13.
Cullum, J., and Willoughby, R. A., 1981, "Computing Eigenvalues of Very Large Symmetric Matrices: An Implementation of a Lanczos Algorithm with No Reorthogonalization," Journal of Computational Physics, Vol. 44.
Grimes, R. G., Lewis, J. G., and Simon, H., 1986, "The Implementation of a Block Shifted and Inverted Lanczos Algorithm for Eigenvalue Problems in Structural Engineering," Applied Mathematics Technical Report ETA-TR-39, Boeing Computer Services, August.
Grimes, R. G., Lewis, J. G., and Simon, H., 1991, "A Shifted Block Lanczos Algorithm for Solving Sparse Symmetric Generalized Eigenproblems," AMS-TR-166, Boeing Computer Services, July.
MSC/NASTRAN Version 67, 1991, Application Manual, CRAY (UNICOS) Edition, The MacNeal-Schwendler Corporation, November.
NASTRAN Theoretical Manual, January 1981, NASA SP-221.
Parlett, B. N., 1980, The Symmetric Eigenvalue Problem, Prentice-Hall Series in Computational Mathematics, Prentice-Hall, Inc., Englewood Cliffs, NJ.