Condition Number Estimation of Preconditioned Matrices
Noriyuki Kushida
International Data Centre, Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization, Vienna, Austria
Academic Editor: Rodrigo Huerta-Quintanilla, Cinvestav-Merida, Mexico
The present paper introduces a condition number estimation method for preconditioned matrices. The newly developed method provides reasonable results, whereas the conventional method, which is based on the Lanczos connection, gives meaningless results. The Lanczos connection based method provides the condition numbers of the coefficient matrices of systems of linear equations using information obtained through the preconditioned conjugate gradient method. Estimating the condition number of preconditioned matrices is sometimes important when describing the effectiveness of new preconditioners or selecting an adequate preconditioner. Applying the preconditioner to the coefficient matrix explicitly is the simplest method of estimation. However, this is not possible for large-scale computing, especially when computation is performed on distributed memory parallel computers, because the preconditioned matrices become dense even if the original matrices are sparse. Although the Lanczos connection method can be used to calculate the condition number of preconditioned matrices, it is not considered applicable to large-scale problems because of its sensitivity to numerical errors. Therefore, we have developed a robust and parallelizable method based on Hager's method. Feasibility studies were carried out for the diagonal scaling preconditioner and the SSOR preconditioner with a diagonal matrix, a tridiagonal matrix, and Pei's matrix. As a result, the Lanczos connection method produced approximately 10% error even for a simple problem, whereas the error of the new method was negligible. In addition, the newly developed method returned reasonable solutions in cases where the Lanczos connection method failed, namely for Pei's matrix and for matrices generated with the finite element method.
Competing Interests: The authors have declared that no competing interests exist.
Solving linear equations is considered to be the most time-consuming component of
simulation computations. As a result, a number of linear equation solvers have been developed in
order to realize more efficient simulations. Linear equation solvers can be roughly categorized
into two types: direct solvers and iterative solvers. Direct solvers have been used for
ill-conditioned problems because of their robustness. However, iterative methods
have become mainstream in recent years for the following reasons:
- The iterative method requires less memory than the direct method if the coefficient matrix of the equation is sparse. Most simulation methods, such as the finite element method and the finite difference method, use sparse matrices.
- The iterative method is more suitable for distributed memory parallel computers than the direct method. Recently, the trend in supercomputers has been toward distributed memory computers.
As a result, iterative solvers have attracted a great deal of interest. Iterative methods are of two
types: stationary methods and Krylov subspace methods. Generally, the Krylov
subspace method is more commonly used than the stationary method because the stationary
method is, in most cases, slower, even though it is often better suited to parallel computers. The conjugate gradient
method is representative of the Krylov subspace methods, and the Gauss-Seidel method is
a typical example of the stationary methods. The Krylov subspace method
is faster than the stationary method but is sometimes unstable. Furthermore, rapid
convergence is always desired. In order to address such considerations, preconditioning is applied to
the Krylov subspace method. In particular, the conjugate gradient method with
preconditioning, often called the preconditioned conjugate gradient method (PCG), is one of the
most well known iterative solvers [1].
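The PCG iteration mentioned above can be sketched as follows. This is a minimal illustration, not the implementation used in the paper: it assumes a symmetric positive definite matrix and uses a diagonal (Jacobi) preconditioner on an illustrative tridiagonal test problem.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=1000):
    """Conjugate gradient with a diagonal preconditioner M = diag(A).

    M_inv_diag holds the entries of M^{-1}; applying the preconditioner
    is then an elementwise product. A must be symmetric positive definite.
    """
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    z = M_inv_diag * r            # preconditioned residual z = M^{-1} r
    p = z.copy()                  # initial search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)     # step length
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p # new direction, A-conjugate to the old
        rz = rz_new
    return x

# Illustrative SPD test problem: 1D Laplacian (tridiagonal matrix)
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, 1.0 / np.diag(A))
```

For a sparse matrix the dominant cost per iteration is the single matrix-vector product `A @ p`, which is why PCG scales well on distributed memory machines when the preconditioner itself is cheap to apply.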
Evaluating the convergence rate of an iterative solver is sometimes important in order to
demonstrate the effectiveness of new methods or to select an adequate method. The
convergence rate of the conjugate gradient method (CG) and other Krylov subspace solvers
strongly depends on the eigenvalue distribution of the coefficient matrix; that is, the
convergence rate is better when the eigenvalues are clustered. Complete knowledge of the eigenvalue
distribution would enable complete prediction of the convergence behavior of the PCG.
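For a symmetric positive definite matrix, the spectral (2-norm) condition number is simply the ratio of the extreme eigenvalues, which can be checked numerically. A small illustration follows; the tridiagonal matrix is an arbitrary example, not a test case from the paper.

```python
import numpy as np

# Illustrative SPD matrix: 1D Laplacian (tridiagonal)
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# eigvalsh returns eigenvalues of a symmetric matrix in ascending order
eig = np.linalg.eigvalsh(A)
kappa_from_eigs = eig[-1] / eig[0]   # lambda_max / lambda_min
kappa_direct = np.linalg.cond(A, 2)  # 2-norm condition number
```

The two values agree because, for an SPD matrix, the singular values coincide with the eigenvalues, so the 2-norm condition number reduces to the eigenvalue ratio.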
However, obtaining all of the eigenvalues is difficult, and so another simple indicator, called the
condition number, is used. Generally, the condition number is easy to calculate when the coefficient
matrix is explicitly obtained. As described above, parallel computers, especially distributed
memory type parallel computers, (...truncated)