Sorry, aborting program
The error handler prints out the error message giving the source code
file and line number as well as the function name where the error was
raised. The relevant section of f() in file tutorial.c is:
if ( x->dim != 2 || out->dim != 2 )
error(E_SIZES,"f"); /* line 79 */
The standard routines in this system perform error checking of this
type, and also checking for undefined results such as division by zero in
the routines for solving systems of linear equations. There are also error
messages for incorrectly formatted input and end-of-file conditions.
To round off the discussion of this program, note that we have seen
interactive input of vectors. If the input file or stream is not a tty
(e.g., a file, a pipe or a device) then the input routines expect the input
to have the same form as the output for each of the data structures. Each of the
input routines (v_input(), m_input(), px_input()) skips over ``comments''
in the input data, as do the macros input() and finput(). Anything from a
`#' to the end of the line (or EOF) is considered to be a comment. For
example, the initial value problem could be set up in a file ivp.dat as:
# Initial time
0
# Final time
1
# Solution is x(t) = (cos(t),-sin(t))
# x(0) =
Vector: dim: 2
1 0
# Step size
0.1
The output of the above program with the above input (from a file) gives
essentially the same output as shown above, except that no prompts are sent
to the screen.
4. USING ROUTINES FOR LISTS OF ARGUMENTS
Some of the most common routines have variants that take a variable
number of arguments. These are the routines .._get_vars(), .._resize_vars()
and .._free_vars(). These correspond to the basic routines .._get(),
.._resize() and .._free() respectively. Also there is the
mem_stat_reg_vars() routine which registers a list of static workspace
variables. This corresponds to mem_stat_reg_list() for a single variable.
Here is an example of how to use these functions. This example also
uses the routine v_linlist() to compute a linear combination of vectors.
Note that the code is much more compact, but don't forget that these
``..._vars()'' routines usually need the address-of operator ``&'' and NULL
termination of the arguments to work correctly.
#include "matrix.h"
/* rk4 - 4th order Runge-Kutta method */
double rk4(f,t,x,h)
double t, h;
VEC *(*f)(), *x;
{
static VEC *v1, *v2, *v3, *v4, *temp;
/* do not work with NULL initial vector */
if ( x == VNULL )
error(E_NULL,"rk4");
/* ensure that v1, ..., v4, temp are of the correct size */
v_resize_vars(x->dim, &v1, &v2, &v3, &v4, &temp, NULL);
/* register workspace variables */
mem_stat_reg_vars(0, TYPE_VEC, &v1, &v2, &v3, &v4, &temp, NULL);
/* end of memory allocation */
(*f)(t,x,v1); v_mltadd(x,v1,0.5*h,temp);
(*f)(t+0.5*h,temp,v2); v_mltadd(x,v2,0.5*h,temp);
(*f)(t+0.5*h,temp,v3); v_mltadd(x,v3,h,temp);
(*f)(t+h,temp,v4);
/* now add: temp = v1+2*v2+2*v3+v4 */
v_linlist(temp, v1, 1.0, v2, 2.0, v3, 2.0, v4, 1.0, VNULL);
/* adjust x */
v_mltadd(x,temp,h/6.0,x); /* x = x+(h/6)*temp */
return t+h; /* return the new time */
}
5. A LEAST SQUARES PROBLEM
Here we need to use matrices and matrix factorisations (in particular, a
QR factorisation) in order to find the best linear least squares solution
to some data. Thus in order to solve the (approximate) equations
A*x = b,
where A is an m x n matrix (m > n) we really need to solve the optimisation
problem
min_x ||Ax-b||^2.
If we write A=QR where Q is an orthogonal m x m matrix and R is an upper
triangular m x n matrix then (we use 2-norm)
||A*x-b||^2 = ||R*x-Q^T*b||^2 = || R_1*x - Q_1^T*b||^2 + ||Q_2^T*b||^2
where R_1 is an n x n upper triangular matrix. If A has full rank then R_1
will be an invertible matrix, and the best least squares solution of A*x=b
is x= R_1^{-1}*Q_1^T*b .
These calculations can be done quite easily, as there is a QRfactor()
function available with the system. QRfactor() is declared to have the
prototype
MAT *QRfactor(MAT *A, VEC *diag);
The matrix A is overwritten with the factorisation of A ``in compact
form''; that is, while the upper triangular part of A is indeed the R
matrix described above, the Q matrix is stored as a collection of
Householder vectors in the strictly lower triangular part of A and in the
diag vector. The QRsolve() function knows and uses this compact form and
solves Q*R*x=b with the call QRsolve(A,diag,b,x), which also returns x.
Here is the code to obtain the matrix A, perform the QR factorisation,
obtain the data vector b, solve for x, and determine what the norm of the
errors ( ||Ax-b||_2 ) is.
#include "matrix2.h"
main()
{
MAT *A, *QR;
VEC *b, *x, *diag;
/* read in A matrix */
printf("Input A matrix:");
A = m_input(MNULL); /* A has whatever size is input */
if ( A->m < A->n )
{
printf("Need m >= n to obtain least squares fit");
exit(0);
}
printf("# A ="); m_output(A);
diag = v_get(A->m);
/* QR is to be the QR factorisation of A */
QR = m_copy(A,MNULL);
QRfactor(QR,diag);
/* read in b vector */
printf("Input b vector:");
b = v_get(A->m);
b = v_input(b);
printf("# b ="); v_output(b);
/* solve for x */
x = QRsolve(QR,diag,b,VNULL);
printf("Vector of best fit parameters is");
v_output(x);
/* ... and work out norm of errors... */
printf("||A*x-b|| = %g\n",
v_norm2(v_sub(mv_mlt(A,x,VNULL),b,VNULL)));
}
Note that as well as the usual memory allocation functions like m_get(),
the I/O functions like m_input() and m_output(), and the
factorise-and-solve functions QRfactor() and QRsolve(), there are also
functions for matrix-vector multiplication:
mv_mlt(MAT *A, VEC *x, VEC *out)
and also vector-matrix multiplication (with the vector on the left):
vm_mlt(MAT *A, VEC *x, VEC *out),
with out=x^T A. There are also functions to perform matrix arithmetic -
matrix addition m_add(), matrix-scalar multiplication sm_mlt(),
matrix-matrix multiplication m_mlt().
Several different sorts of matrix factorisation are supported: LU
factorisation (also known as Gaussian elimination) with partial pivoting,
provided by LUfactor() and LUsolve(). Other factorisation methods include
Cholesky factorisation, via CHfactor() and CHsolve(), and QR factorisation
with column pivoting, via QRCPfactor().
Pivoting involves permutations, which have their own PERM data structure.
Permutations can be created by px_get(), read and written by px_input() and
px_output(), multiplied by px_mlt(), inverted by px_inv() and applied to
vectors by px_vec().
The above program can be put into a file leastsq.c and compiled under Unix
using
cc -o leastsq leastsq.c meschach.a -lm
A sample session using leastsq follows:
Input A matrix:
Matrix: rows cols:5 3
row 0:
entry (0,0): 3
entry (0,1): -1
entry (0,2): 2
Continue:
row 1:
entry (1,0): 2
entry (1,1): -1
entry (1,2): 1
Continue: n
row 1:
entry (1,0): old 2 new: 2
entry (1,1): old -1 new: -1
entry (1,2): old 1 new: 1.2
Continue:
row 2:
entry (2,0): old 0 new: 2.5
....
.... (Data entry)
....
# A =
Matrix: 5 by 3
row 0: 3 -1 2
row 1: 2 -1 1.2
row 2: 2.5 1 -1.5
row 3: 3 1 1
row 4: -1 1 -2.2
Input b vector:
entry 0: old 0 new: 5
entry 1: old 0 new: 3
entry 2: old 0 new: 2
entry 3: old 0 new: 4
entry 4: old 0 new: 6
# b =
Vector: dim: 5
5 3 2 4 6
Vector of best fit parameters is
Vector: dim: 3
1.47241555 -0.402817858 -1.14411815
||A*x-b|| = 6.78938
The Q matrix can be obtained explicitly by the routine makeQ(). The Q
matrix can then be used to obtain an orthogonal basis for the range of A .
An orthogonal basis for the null space of A can be obtained by finding the
QR-factorisation of A^T .
6. A SPARSE MATRIX EXAMPLE
To illustrate the sparse matrix routines, consider the problem of
solving Poisson's equation on a square using finite differences, and
incomplete Cholesky factorisation. The actual equations to solve are
u_{i,j+1} + u_{i,j-1} + u_{i+1,j} + u_{i-1,j} - 4*u_{i,j} =
h^2*f(x_i,y_j), for i,j=1,...,N
where u_{0,j} = u_{i,0} = u_{N+1,j} = u_{i,N+1} = 0 for i,j=1,...,N and h
is the common distance between grid points.
The first task is to set up the matrix describing this system of linear
equations. The next is to set up the right-hand side. The third is to
form the incomplete Cholesky factorisation of this matrix, and finally to
use the sparse matrix conjugate gradient routine with the incomplete
Cholesky factorisation as preconditioner.
Setting up the matrix and right-hand side can be done by the following
code:
#define N 100
#define index(i,j) (N*((i)-1)+(j)-1)
......
A = sp_get(N*N,N*N,5);
b = v_get(N*N);
h = 1.0/(N+1); /* for a unit square */
......
for ( i = 1; i <= N; i++ )
for ( j = 1; j <= N; j++ )
{
if ( i < N )
sp_set_val(A,index(i,j),index(i+1,j),-1.0);
if ( i > 1 )
sp_set_val(A,index(i,j),index(i-1,j),-1.0);
if ( j < N )
sp_set_val(A,index(i,j),index(i,j+1),-1.0);
if ( j > 1 )
sp_set_val(A,index(i,j),index(i,j-1),-1.0);
sp_set_val(A,index(i,j),index(i,j),4.0);
b->ve[index(i,j)] = -h*h*f(h*i,h*j);
}
Once the matrix and right-hand side are set up, the next task is to
compute the sparse incomplete Cholesky factorisation of A. This must be
done in a different matrix, so A must be copied.
LLT = sp_copy(A);
spICHfactor(LLT);
Once that is done, the remainder is easy:
out = v_get(A->m);
......
iter_spcg(A,LLT,b,1e-6,out,1000,&num_steps);
printf("Number of iterations = %d\n",num_steps);
......
and the output can be used in whatever way desired.
For graphical output of the results, the solution vector can be copied
into a square matrix, which is then saved in MATLAB format using m_save(),
and graphical output can be produced by MATLAB.
7. HOW DO I ....?
For the convenience of the user, here are a number of tasks that people
commonly need to perform, together with how to carry out the computations
using Meschach.
7.1 .... SOLVE A SYSTEM OF LINEAR EQUATIONS ?
If you wish to solve Ax=b for x given A and b (without destroying A),
then the following code will do this:
VEC *x, *b;