Sparse User's Guide
A Sparse Linear Equation Solver
Version 1.3a
1 April 1988
Kenneth S. Kundert
Alberto Sangiovanni-Vincentelli
Department of
Electrical Engineering and Computer Sciences
University of California, Berkeley
Berkeley, Calif. 94720
June 23, 1988
1: INTRODUCTION
Sparse1.3 is a flexible package of subroutines written in C used to
quickly and accurately solve large sparse systems of linear equations. The
package is able to handle arbitrary real and complex square matrix equa-
tions. Besides being able to solve linear systems, it is also able to
quickly solve transposed systems, find determinants, and estimate errors
due to ill-conditioning in the system of equations and instability in the
computations. Sparse also provides a test program that is able to read matrix
equations from a file, solve them, and print useful information about the
equation and its solution.
Sparse1.3 is generally as fast as or faster than other popular sparse
matrix packages when solving many matrices of similar structure. Sparse
does not require or assume symmetry and is able to perform numerical pivot-
ing to avoid unnecessary error in the solution. It handles its own memory
allocation, which allows the user to forgo the hassle of providing adequate
memory. It also has a natural, flexible, and efficient interface to the
calling program.
Sparse was originally written for use in circuit simulators and is
particularly adept at handling node- and modified-node admittance matrices.
The systems of linear equations generated in a circuit simulator stem from
solving large systems of nonlinear equations using Newton's method and
integrating large stiff systems of ordinary differential equations. However,
Sparse is also suitable for other uses; one in particular is solving the very
large systems of linear equations resulting from the numerical solution of
partial differential equations.
1.1: Features of Sparse1.3
Beyond the basic capability of being able to create, factor and solve
systems of equations, this package features several other capabilities that
enhance its utility. These features are:
o Ability to handle both real and complex systems of equations. Both
types may be resident and active at the same time. In fact, the same
matrix may alternate between being real and complex.
o Ability to quickly solve the transposed system. This feature is use-
ful when computing the sensitivity of a circuit using the adjoint
method.
o Memory for elements in the matrix is allocated dynamically, so the
size of the matrix is only limited by the amount of memory available
to Sparse and the range of the integer data type, which is used to
hold matrix indices.
o Ability to efficiently compute the condition number of the matrix and
an a posteriori estimate of the error caused by growth in the size of
the elements during the factorization.
o Much of the matrix initialization can be performed by Sparse,
providing advantages in speed and simplified coding of the calling
program.
o Ability to preorder modified node admittance matrices to enhance accu-
racy and speed.
o Ability to exploit sparsity in the right-hand side vector to reduce
unnecessary computation.
o Ability to scale matrices prior to factoring to reduce uncertainty in
the solution.
o The ability to create and build a matrix without knowing its final
size.
o The ability to add elements, and rows and columns, to a matrix after
the matrix has been reordered.
o The ability to delete rows and columns from a matrix.
o The ability to strip the fill-ins from a matrix. This can improve the
efficiency of a subsequent reordering.
o The ability to handle matrices that have rows and columns missing from
their input description.
o Ability to output the matrix in forms readable either by people or
by the Sparse package. Basic statistics on the matrix can also be
output.
o By default all arithmetic operations and number storage use double
precision. Thus, Sparse usually gives accurate results, even on
highly ill-conditioned systems. If so desired, Sparse can be easily
configured to use single precision arithmetic.
1.2: Enhancements of Sparse1.3 over Sparse1.2
Most notable of the enhancements provided by Sparse1.3 is that it is
considerably faster on dense matrices. Also, external names have been made
unique to 7 characters and the Sparse prefix sp has been prepended to all
externally accessible names to avoid conflicts. In addition, a routine
that efficiently estimates the condition number of a matrix has been added
and the code that estimates the growth in the factorization has been split
off from the actual factorization so that it is computed only when needed.
It is now possible for the user program to store information in the
matrix elements. It is also possible to provide a subroutine to Sparse
that uses that information to initialize the matrix. This can greatly sim-
plify the user's code.
Sparse1.3 has a FORTRAN interface. Routines written in FORTRAN can
access almost all of the features of Sparse1.3.
1.3: Copyright Information
Sparse1.3 has been copyrighted. Permission to use, copy, modify, and
distribute this software and its documentation for any purpose and without
fee is hereby granted, provided that the copyright notice appear in all
copies, and Sparse and the University of California, Berkeley are refer-
enced in all documentation for the program or product in which Sparse is to
be installed. The authors and the University of California make no
representations as to the suitability of the software for any purpose. It
is provided `as is', without express or implied warranty.
2: PRIMER
2.1: Solving Matrix Equations
Sparse contains a collection of C subprograms that can be used to
solve linear algebraic systems of equations. These systems are of the
form:
Ax = b
where A is an nxn matrix, x is the vector of n unknowns and b is the vector
of n right-hand side terms. Throughout this package A is denoted Matrix,
x is denoted Solution and b is denoted RHS (for right-hand side). The sys-
tem is solved using LU factorization, so the actual solution process is
broken into two steps, the factorization or decomposition of the matrix,
performed by spFactor(), and the forward and backward substitution, per-
formed by spSolve(). spFactor() factors the given matrix into upper and
lower triangular matrices independent of the right-hand side. Once this is
done, the solution vector can be determined efficiently for any number of
right-hand sides without refactoring the matrix.
This package exploits the fact that large matrices usually are sparse
by not storing or operating on elements in the matrix that are zero. Stor-
ing zero elements is avoided by organizing the matrix into an orthogonal
linked-list. Thus, accessing an element when only its indices are known
requires stepping through the list, which is slow. This task is performed
by the routine spGetElement(). It is used to initially enter data
into a matrix and to build the linked-list. Because it is common to
repeatedly solve matrices with identical zero/nonzero structure, it is pos-
sible to reuse the linked-list. Thus, the linked list is left in memory
and the element values are simply cleared by spClear() before the linked-
list is reused. To speed the entering of the element values into succes-
sive matrices, spGetElement() returns a pointer to the element in the
matrix. This pointer can then be used to place data directly into the
matrix without having to traverse through the linked-list.
The order in which the rows and columns of the matrix are factored is
very important. It directly affects the amount of time required for the
factorization and the forward and backward substitution. It also affects
the accuracy of the result. The process of choosing this order is time
consuming, but fortunately it usually only has to be done once for each
particular matrix structure encountered. When a matrix with a new
zero/nonzero structure is to be factored, it is done by using spOr-
derAndFactor(). Subsequent matrices of the same structure are factored
with spFactor(). The latter routine does not have the ability to reorder
the matrix, but it is considerably faster. An order chosen for one matrix
may prove unsuitable for subsequent factorizations. If this is known to be
true a priori, it is possible to use spOrderAndFactor() for the subsequent
factorizations, though with a noticeable speed penalty. spOrderAndFactor() monitors the
numerical stability of the factorization and will modify an existing order-
ing to maintain stability. Otherwise, an a posteriori measure of the
numerical stability of the factorization can be computed, and the matrix
reordered if necessary.
The Sparse routines allow several matrices of different structures to
be resident at once. When a matrix of a new structure is encountered, the
user calls spCreate(). This routine creates the basic frame for the
linked-list and returns a pointer to this frame. This pointer is then
passed as an argument to the other Sparse routines to indicate which matrix
is to be operated on. The number of matrices that can be kept in memory at
once is only limited by the amount of memory available to the user and the
size of the matrices. When a matrix frame is no longer needed, the memory
can be reclaimed by calling spDestroy().
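Put together, one pass through a matrix lifecycle might look like the sketch below. This is only an illustration of the call sequence described above: the header name and the exact argument lists should be checked against the Sparse1.3 distribution, and all error handling is omitted.

```c
#include "spMatrix.h"   /* exported declarations; header name assumed */

void example(void)
{
    int error;
    char *matrix = spCreate(2, 0, &error);   /* frame for a 2x2 real matrix */

    /* Build the linked list once; keep the returned pointers so later
     * loads can skip the slow list traversal. */
    double *a11 = spGetElement(matrix, 1, 1);
    double *a12 = spGetElement(matrix, 1, 2);
    double *a21 = spGetElement(matrix, 2, 1);
    double *a22 = spGetElement(matrix, 2, 2);

    double rhs[3] = { 0.0, 10.0, 12.0 };     /* vectors indexed from 1 */
    double solution[3];

    spClear(matrix);                          /* zero the element values */
    *a11 += 4.0;  *a12 += 3.0;                /* load through the pointers */
    *a21 += 6.0;  *a22 += 3.0;

    /* First factorization of a new structure chooses the ordering. */
    error = spOrderAndFactor(matrix, rhs, 0.001, 0.0, 1);
    spSolve(matrix, rhs, solution);           /* solution[1..2] now valid */

    /* Later matrices of the same structure: spClear(), reload through the
     * saved pointers, then spFactor() and spSolve(). */
    spDestroy(matrix);
}
```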
A more complete discussion of sparse systems of equations, methods for
solving them, their error mechanisms, and the algorithms used in Sparse can
be found in Kundert [kundert86]. A particular emphasis is placed on
matrices resulting from circuit simulators.
2.2: Error Control
There are two separate mechanisms that can cause errors during the
factoring and solution of a system of equations. The first is ill-
conditioning in the system. A system of equations is ill-conditioned if
the solution is excessively sensitive to disturbances in the input data,
which occurs when the system is nearly singular. If a system is ill-
conditioned then uncertainty in the result is unavoidable, even if A is
accurately factored into L and U. When ill-conditioning is a problem, the
problem as stated is probably ill-posed and the system should be reformu-
lated such that it is not so ill-conditioned. It is possible to measure
the ill-conditioning of a matrix using spCondition(). This function returns
an estimate of the reciprocal of the condition number of the matrix (K(A))
[strang80]. The condition number can be used when computing a bound on the
error in the solution using the following inequality [golub83].
	||dx|| / ||x||  <  K(A) ( ||dA|| / ||A||  +  ||db|| / ||b|| )  +  higher-order terms
where dA and db are the uncertainties in the matrix and right-hand side
vector and are assumed small.
The second mechanism that causes uncertainty is the buildup of roundoff
error. Roundoff error can become excessive if there is sufficient
growth in the size of the elements during the factorization. Growth is
controlled by careful pivoting. In Sparse, the pivoting is controlled by
the relative threshold parameter. In conventional full matrix techniques
the pivot is chosen to be the largest element in a column. When working
with sparse matrices it is important to choose pivots to minimize the
reduction in sparsity. The best pivot to retain sparsity is often not the
best pivot to retain accuracy. Thus, some compromise must be made. In
threshold pivoting, as used in this package, the best pivot to retain spar-
sity is used unless it is smaller than the relative threshold times the
largest element in the column. Thus, a relative threshold close to one
emphasizes accuracy and so produces a minimum amount of growth;
unfortunately, it also slows the factorization. A very small relative threshold
emphasizes maintenance of sparsity and so speeds the factorization, but can
result in a large amount of growth. In our experience, we have found that
a relative threshold of 0.001 seems to result in a satisfactory compromise
between speed and accuracy, though other authors suggest a more conserva-
tive value of 0.1 [duff86].
The growth that occurred during a factorization can be computed by
taking the ratio of the largest matrix element in any stage of the factori-
zation to the largest matrix element before factorization. The two numbers
are estimated using spLargestElement(). If the growth is found to be
excessive after spOrderAndFactor(), then the relative threshold should be
increased and the matrix reconstructed and refactored. Once the matrix has
been ordered and factored without suffering too much growth, the amount of
growth that occurred should be recorded. If, on subsequent factorizations,
as performed by spFactor(), the amount of growth becomes significantly
larger, then the matrix should be reconstructed and reordered using the
same relative threshold with spOrderAndFactor(). If the growth is still
excessive, then the relative threshold should be raised again.
2.3: Building the Matrix
It is not necessary to specify the size of the matrix before beginning
to add elements to it. When the compiler option EXPANDABLE is turned on it
is possible to initially specify the size of the matrix to any size equal
to or smaller than the final size of the matrix. Specifically, the matrix
size may be initially specified as zero. If this is done then, as the ele-
ments are entered into the matrix, the matrix is enlarged as needed. This
feature is particularly useful in circuit simulators because it allows the
building of the matrix as the circuit description is parsed. Note that
once the matrix has been reordered by the routines spMNA_Preorder(),
spFactor() or spOrderAndFactor() the size of the matrix becomes fixed and may no
longer be enlarged unless the compiler option TRANSLATE is enabled.
The TRANSLATE option allows Sparse to translate a non-packed set of
row and column numbers to an internal packed set. In other words, there
may be rows and columns missing from the external description of the
matrix. This feature provides two benefits. First, if two matrices are
identical in structure, except for a few missing rows and columns in one,
then the TRANSLATE option allows them to be treated identically. Simi-
larly, rows and columns may be deleted from a matrix after it has been
built and operated upon. Deletion of rows and columns is performed by the
function spDeleteRowAndCol(). Second, it allows the use of the functions
spGetElement(), spGetAdmittance(), spGetQuad(), and spGetOnes() after the
matrix has been reordered. These functions access the matrix by using row
and column indices, which have to be translated to internal indices once
the matrix is reordered. Thus, when TRANSLATE is used in conjunction with
the EXPANDABLE option, rows and columns may be added to a matrix after it
has been reordered.
Another provided feature that is useful with circuit simulators is the
ability to add elements to the matrix in row zero or column zero. These
elements will have no effect on the matrix or the results. The benefit of
this is that when working with a nodal formulation, grounded components do
not have to be treated specially when building the matrix.
2.4: Initializing the Matrix
Once a matrix has been factored, it is necessary to clear the matrix
before it can be reloaded with new values. The straightforward way of
doing that is to call spClear(), which sets the value of every element in
the matrix to zero. Sparse also provides a more flexible way to clear the
matrix. Using spInitialize(), it is possible to clear and reload at least
part of the matrix in one step.
Sparse allows the user to keep initialization information with each
structurally nonzero matrix element. Each element has a pointer that is
set and used by the user. The user can set this pointer using spInstallIn-
itInfo() and may read it using spGetInitInfo(). The function spInitial-