
📄 pmatlib.h

📁 The archive contains the companion code for Error Correction Coding - Mathematical Methods and Algorithms (Wiley 2005) by Todd K. Moon.
📖 Page 1 of 3
void ptenprintsc(char *name, signed char ***ten, int nstack, int n, int m);
void ptenprintuc(char *name, unsigned char ***ten, int nstack, int n, int m);
void ptenprinthd(char *name, short ***ten, int nstack, int n, int m);
void ptenprintd(char *name, int ***ten, int nstack, int n, int m);
void ptenprintx(char *name, int ***ten, int nstack, int n, int m);
void ptenprintf(char *name, float ***ten, int nstack, int n, int m);
void ptenprintlf(char *name, double ***ten, int nstack, int n, int m);
void ptenprintld(char *name, long ***ten, int nstack, int n, int m);

/* Compute the determinant */
double pdetlf(double **a, int n);
float pdetf(float **a, int n);

/* Invert a matrix and return the inverse and the determinant.
   The method used is the LU decomposition (see pludcmp below).

   Numerical analysts hate to invert matrices!!  The fundamental
   rule of numerical analysis is NEVER INVERT A MATRIX.
   (The alternative is to solve directly the set of equations that you
   need to solve, using a decomposition instead of an inverse.)
   Unfortunately, most DSP algorithms are most easily expressed in
   terms of matrix inverses, so by default we invert matrices all over
   the place.

   For general equation solution, however, see the discussion preceding
   pludcmp and plubksb.

   The determinant of a is returned.  If it is nonzero, then ainv
   contains the inverse of a. */
float pmatinvf(float **a, float **ainv, int n);
double pmatinvlf(double **a, double **ainv, int n);

/* pludcmp -- compute the LU decomposition of a matrix.  That is, we
   can write
      A = LU
   where L is lower triangular and U is upper triangular.  The
   factorization is accomplished using Gauss elimination with
   pivoting.  The matrix A is overwritten in this routine.

   Once the factorization is accomplished, equations such as
      Ax = b
   can be solved readily from
      (LU)x = b
   by backsubstitution, using the triangular structure of L and U.
   Note that if b changes, we can solve the equations without
   refactorizing A!

   Also, once the LU decomposition is performed, the determinant is
   easily calculated.

   In pludcmp, a is the matrix to factorize, n is the dimension (a
   must be square), indx is an array of integers used to hold the
   pivoting information, d is used to indicate the sign change of the
   pivot, and eps indicates the smallest allowable pivot for nonsingular
   matrices.

   To solve a system of equations Ax = b, the following is the best
   way to proceed:

      pludcmpf(a, LUa, n, indx, &d, TINY);
      plubksubf(LUa, n, indx, b);
*/
void pludcmpf(float **a, float **LUa, int n, int *indx, float *d, float eps);
void pludcmplf(double **a, double **LU, int n, int *idx, double *d, double eps);

/* plubksb -- see discussion preceding pludcmp.  Perform
   backsubstitution to solve Ax = b after doing the LU decomposition on A.
      a    is the factored matrix
      n    is the size
      indx is the array of pivots obtained from pludcmp
      b    is the right-hand side of the equation */
void plubksubf(float **LUa, int n, int *indx, float *b);
void plubksublf(double **LUa, int n, int *indx, double *b);
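To make the Ax = b workflow above concrete, here is a minimal usage sketch for the double-precision pair pludcmplf/plubksublf. It is not part of the header: the stack-backed row-pointer matrices, the 0-based indexing, and the eps value of 1e-20 are assumptions made for illustration, and pmatlib's own matrix allocation routines (not shown on this page) would normally be used instead.

#include <stdio.h>
#include "pmatlib.h"   /* the header shown on this page */

int main(void)
{
    /* Row-pointer views onto local storage; this assumes pmatlib's
       double** matrices are plain 0-based row-pointer arrays. */
    double Adata[3][3] = { {4, 1, 0}, {1, 4, 1}, {0, 1, 4} };
    double LUdata[3][3];
    double *A[3]  = { Adata[0],  Adata[1],  Adata[2]  };
    double *LU[3] = { LUdata[0], LUdata[1], LUdata[2] };
    double b[3] = { 1, 2, 3 };     /* right-hand side */
    int indx[3];                   /* pivot record filled by pludcmplf */
    double d;                      /* sign change of the pivoting */
    int n = 3;

    /* Factor, then back-substitute; per the usage note above,
       b is overwritten with the solution x. */
    pludcmplf(A, LU, n, indx, &d, 1e-20);
    plubksublf(LU, n, indx, b);

    for (int i = 0; i < n; i++)
        printf("x[%d] = %g\n", i, b[i]);
    return 0;
}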
/* pmatsolve -- solve Ax = b, using ludcmp and lubksub.  Retains
   the factorization of A in internal storage, so that other
   equations can be solved by calling pmatresolve, unless a function is
   called that modifies tmat1. */
void pmatsolvef(float **A, float *x, float *b, int n);
void pmatresolvef(float *x, float *b, int n);
void pmatsolvelf(double **A, double *x, double *b, int n);
void pmatresolvelf(double *x, double *b, int n);

/* lueps is a number such that if a pivot is smaller than it, the matrix
   is determined to be singular.  A small value (such as 1.e-20) will
   probably suffice for most work.
   This number is used by all functions which use LU factorization,
   including pludcmp, pmatsolve, pmatinv, pdet, and pmineig.
   The default value is 1e-20. */
void psetlueps(double lueps);

/* pcholf -- do a Cholesky factorization.  That is, given a positive-definite
   symmetric matrix A, factor A as A = LL', where L is a lower triangular
   matrix.  A is not modified. */
/* Compute the Cholesky factorization of a symmetric matrix A */
void pcholf(float **A, float **L, int n);
void pchollf(double **A, double **L, int n);

/* Compute the LDL decomposition for a symmetric matrix,
   factoring A so that A = L*diag(D)*L' */
void pLDLlf(double **A, double **L, double *D, int n);
void pLDLf(float **A, float **L, float *D, int n);

/********************************************************************/
/* SVD decomposition:  A = UWV', where
      A is mxn
      U is mxn
      W is an nxn diagonal matrix
      V is nxn
   W is returned as an nx1 vector.  The matrices must all be allocated
   before calling the routine.
   Properties of the SVD:  V is orthogonal
                           U is column orthogonal */
void psvdf(float **a, float **u, float *w, float **v, int m, int n);
void psvdlf(double **a, double **u, double *w, double **v, int m, int n);
/********************************************************************/
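A small sketch of calling psvdlf on a 4x2 matrix, following the comment above: U, w, and V must be allocated by the caller, with U the same shape as A, w of length n, and V of size nxn. The stack-backed row-pointer layout and 0-based indexing are illustrative assumptions; pmatlib's own allocators are not shown in this excerpt.

#include <stdio.h>
#include "pmatlib.h"

int main(void)
{
    enum { M = 4, N = 2 };
    /* A is MxN; U (MxN), w (length N), and V (NxN) receive the SVD. */
    double Adata[M][N] = { {1, 0}, {0, 1}, {1, 1}, {2, -1} };
    double Udata[M][N], Vdata[N][N], w[N];
    double *A[M] = { Adata[0], Adata[1], Adata[2], Adata[3] };
    double *U[M] = { Udata[0], Udata[1], Udata[2], Udata[3] };
    double *V[N] = { Vdata[0], Vdata[1] };

    /* A = U * diag(w) * V'; all outputs are pre-allocated above. */
    psvdlf(A, U, w, V, M, N);

    for (int j = 0; j < N; j++)
        printf("singular value %d: %g\n", j, w[j]);
    return 0;
}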
/*********************************************************
   Finds eigenvalues and eigenvectors of a square matrix of dimension dim.
      Q -- the matrix whose columns are the eigenvectors.
      Lambda -- a diagonal matrix containing the eigenvalues.
      epsilon -- specifies the precision desired for the eigenvalues,
         by specifying how small the relative adjustment per iteration
         must be before the eigenvalue is considered found.  Note that
         this is relative and not absolute precision.  A precision of 5e-20,
         for instance, will require that every bit in the mantissa of the
         double-precision floating point number be the same,
         which is unreasonable.
      maxits -- specifies the maximum number of iterations before giving
         up on the desired accuracy.

   This routine uses the QR method of finding eigenvalues, without any of
   the neat tricks to speed it up.  It could be better.

   Return value: the number of iterations required.
   Q, Lambda are modified as return values also.
*********************************************************/
int peigenlf(double **A, double **Q, double *Lambda, int dim);
int peigenf(float **A, float **Q, float *Lambda, int dim);

/* Find the eigenvalues and eigenvectors of a symmetric 2x2 matrix.
   S is the matrix, v has the eigenvectors as columns, and e
   has the normalized eigenvalues. */
void eigen2lf(double **S, double **v, double *e);
void eigen2f(float **S, float **v, float *e);

/* Find the eigenvalues and eigenvectors of a symmetric 3x3 matrix.
   S is the matrix, v has the eigenvectors as columns, and e
   has the normalized eigenvalues. */
void eigen3lf(double **S, double **v, double *e);

/* houseqr -- get the QR factorization of A, storing the
   Q matrix implicitly (in terms of normalized Householder vectors)
   in QR */
void houseqrlf(double **A, double **QR, int m, int n);
void houseqrf(float **A, float **QR, int m, int n);

/* findQlf -- using the matrix QR returned by houseqr, extract Q */
void findQlf(double **QR, double **Q, int m, int n);
void findQf(float **QR, float **Q, int m, int n);

/* findQandRlf -- using the matrix QR returned by houseqr, extract Q and R */
void findQandRlf(double **QR, double **Q, double **R, int m, int n);
void findQandRf(float **QR, float **Q, float **R, int m, int n);

/********************************************************************/
void pmatpsolve1lf(double **A, double *x, double *b, int m, int n);
/* Least-squares solve of Ax = b, where A is overdetermined, using
   a QR factorization */

void pmatpresolve1lf(double *x, double *b, int m, int n);
/* Least-squares solve of Ax = b, where A is the same as in the last call
   to pmatpsolve1 (and tmat1 has not been modified) */

void applyQlf(double **QR, double *b, int m, int n);
/* Given the compact Householder representation in QR, compute
   Q'b, replacing the result in b */

void ptrisolupperlf(double **R, double *x, double *b, int n);
/* Solve the triangular system Rx = b, where R is nxn upper triangular,
   assuming that R[i][i] != 0. */

void pmatpsolve1f(float **A, float *x, float *b, int m, int n);
/* Least-squares solve of Ax = b, where A is overdetermined, using
   a QR factorization */

void pmatpresolve1f(float *x, float *b, int m, int n);
/* Least-squares solve of Ax = b, where A is the same as in the last call
   to pmatpsolve1 (and tmat1 has not been modified) */

void applyQf(float **QR, float *b, int m, int n);
/* Given the compact Householder representation in QR, compute
   Q'b, replacing the result in b */

void ptrisolupperf(float **R, float *x, float *b, int n);
/* Solve the triangular system Rx = b, where R is nxn upper triangular,
   assuming that R[i][i] != 0. */
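As a sketch of the QR least-squares interface above, the following fits an intercept and slope to four samples by solving the overdetermined system with pmatpsolve1lf. The data values, stack-backed row-pointer matrix, and 0-based indexing are illustrative assumptions; per the comment above, a later right-hand side could be handled with pmatpresolve1lf as long as the retained factorization (tmat1) has not been modified.

#include <stdio.h>
#include "pmatlib.h"

int main(void)
{
    enum { M = 4, N = 2 };
    /* Design matrix for y ~ x[0] + x[1]*t at t = 0, 1, 2, 3. */
    double Adata[M][N] = { {1, 0}, {1, 1}, {1, 2}, {1, 3} };
    double *A[M] = { Adata[0], Adata[1], Adata[2], Adata[3] };
    double b[M] = { 0.1, 1.1, 1.9, 3.2 };   /* observations */
    double x[N];                            /* least-squares coefficients */

    /* QR-based least-squares solve of the overdetermined system Ax = b. */
    pmatpsolve1lf(A, x, b, M, N);

    printf("intercept = %g, slope = %g\n", x[0], x[1]);
    return 0;
}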
/*********************************************************************
   Toeplitz stuff:
   ptoepinvlf  -- computes the inverse of a symmetric Toeplitz matrix
                  (from Golub and Van Loan, 2nd ed., p. 190)
   pdurbinlf   -- solves the Yule-Walker equations T_n y = -(r1 r2 ... rn)
                  (this is more efficient than computing the inverse using
                  ptoepinvlf and then multiplying)
   plevinsonlf -- solves the Toeplitz equation T_n x = b for an arbitrary
                  right-hand side.  (This is more efficient than computing
                  the inverse and multiplying.)
**********************************************************************/
void ptoepinvlf(double **H,      /* matrix to invert */
                double **B,      /* the inverse */
                int n);          /* matrix size */
void ptoepinvf(float **H,        /* matrix to invert */
               float **B,        /* the inverse */
               int n);           /* matrix size */

/* Solve the Yule-Walker equations  T_n y = -(r1,...,rn)' */
void pdurbinlf(double *r, double *y, int n);
void pdurbinf(float *r, float *y, int n);

/*********************************************************
   Toeplitz solution:  solve Tx = b, where T is a symmetric
   Toeplitz matrix determined by the vector r.
**********************************************************/
void plevinsonlf(double *r, double *x, double *b, int n);
void plevinsonf(float *r, float *x, float *b, int n);

/*********************************************************
   Inverse of a triangular matrix.  Computes L^-1.
   Where:  L    is rows x rows lower triangular.
           Linv is rows x rows lower triangular.
   Return value:  Linv is modified as a return value.
*********************************************************/
void ptriinvlf(double **L, double **Linv, int rows);
void ptriinvf(float **L, float **Linv, int rows);
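A brief sketch of the triangular-inverse routine declared just above: ptriinvlf fills Linv with the inverse of a lower-triangular L. The specific matrix, the stack-backed row-pointer layout, and the 0-based indexing are assumptions made for illustration; pmatlib's own allocation helpers are not shown in this excerpt.

#include <stdio.h>
#include "pmatlib.h"

int main(void)
{
    enum { ROWS = 3 };
    /* Lower-triangular L and storage for its inverse. */
    double Ldata[ROWS][ROWS]    = { {2, 0, 0}, {1, 1, 0}, {4, 2, 2} };
    double Linvdata[ROWS][ROWS] = { {0} };
    double *L[ROWS]    = { Ldata[0], Ldata[1], Ldata[2] };
    double *Linv[ROWS] = { Linvdata[0], Linvdata[1], Linvdata[2] };

    /* Compute Linv = L^-1 (ROWS x ROWS lower triangular). */
    ptriinvlf(L, Linv, ROWS);

    for (int i = 0; i < ROWS; i++)
        printf("%8.4f %8.4f %8.4f\n", Linv[i][0], Linv[i][1], Linv[i][2]);
    return 0;
}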
/*********************************************************/
/* This subroutine will attempt to determine the most significant
   eigenvalue and eigenvector of the square symmetric input matrix A.
   The rate of convergence depends on the ratio of the most significant
   eigenvalue to the second most significant eigenvalue (the larger the
   better).  The power method is used.  For more information see:
   "Matrix Computations", second edition, by Gene H. Golub and Charles F.
   Van Loan, ISBN 0-8018-3772-3 or 0-8018-3739-1 (pbk.)

   NOTE: this algorithm may not converge!!!  The return value is
   last_lamda - final_lamda.  It will somewhat reflect whether the
   algorithm has converged.

   A:  is an MxM matrix
   v:  is where the eigenvector corresponding to lamda is returned
   lamda:  is the largest eigenvalue of A
   M:  is the dimension of A and v.
   Max_iterations:  is the maximum number of iterations that the
      method will attempt before stopping.  {This guarantees that the
      algorithm will stop.}
   Error_threshold:  this is the normal stopping condition, i.e. the
      algorithm will stop when:
      (last_lamda - final_lamda) < Error_threshold  */
float pmaxeigenf(float **A, float *v, float *lamda, int M,
                 int Max_iterations, float Error_threshold);
double pmaxeigenlf(double **A, double *v, double *lamda, int M,
                   int Max_iterations, double Error_threshold);

/*********************************************************/
/* This subroutine will attempt to determine the least significant
   eigenvalue and eigenvector of the input matrix A.  The rate of
   convergence depends on the ratio of the most significant eigenvalue
   to the second most significant eigenvalue (the larger the better).
   The power method is used.  For more information see:
   "Matrix Computations", second edition, by Gene H. Golub and Charles F.
   Van Loan, ISBN 0-8018-3772-3 or 0-8018-3739-1 (pbk.)

   NOTE: this algorithm may not converge!!!  The return value is
   last_lamda - final_lamda.  It will somewhat reflect whether the
   algorithm has converged.

   A:  is an MxM matrix
   v:  is where the eigenvector corresponding to lamda is returned
   lamda:  is the largest eigenvalue of A
   M:  is the dimension of A and v.
   Max_iterations:  is the maximum number of iterations that the
      method will attempt before stopping.  {This guarantees that the
      algorithm will stop.}
   Error_threshold:  this is the normal stopping condition, i.e. the
      algorithm will stop when:
      (last_lamda - final_lamda) < Error_threshold  */
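Finally, a sketch of the power-method routine pmaxeigenlf declared earlier, estimating the dominant eigenvalue and eigenvector of a small symmetric matrix. The matrix data, iteration cap, and threshold are arbitrary illustrative choices, and the stack-backed row-pointer layout with 0-based indexing is assumed rather than taken from pmatlib's allocators; the return value is compared against the threshold only as the rough convergence indicator the comment describes.

#include <stdio.h>
#include "pmatlib.h"

int main(void)
{
    enum { M = 3 };
    /* Symmetric test matrix. */
    double Adata[M][M] = { {2, 1, 0}, {1, 3, 1}, {0, 1, 2} };
    double *A[M] = { Adata[0], Adata[1], Adata[2] };
    double v[M];        /* receives the dominant eigenvector */
    double lamda;       /* receives the dominant eigenvalue  */

    /* Run at most 1000 power iterations, stopping once successive
       eigenvalue estimates differ by less than 1e-10. */
    double resid = pmaxeigenlf(A, v, &lamda, M, 1000, 1e-10);

    printf("lambda ~ %g (last adjustment %g%s)\n",
           lamda, resid, resid < 1e-10 ? "" : ", may not have converged");
    return 0;
}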
