
svd.cc

basic linear algebra classes and applications (SVD, interpolation, multivariate optimization)
Language: C++
Page 1 of 2
// This may look like C code, but it is really -*- C++ -*-
/*
 ************************************************************************
 *
 *			  Numerical Math Package
 *	Singular Value Decomposition of a rectangular matrix
 *			     A = U * Sig * V'
 *
 * where matrices U and V are orthogonal and Sig is a diagonal matrix.
 *
 * The singular value decomposition is performed by constructing an SVD
 * object from an M*N matrix A with M>=N (that is, at least as many rows
 * as columns). Note, in case M > N, matrix Sig has to be an M*N diagonal
 * matrix. However, it has only N diagonal elements, which we store in a
 * 1:N Vector sig.
 *
 * Algorithm
 *	Bidiagonalization with Householder reflections followed by a
 * modification of a QR-algorithm. For more details, see
 *   G.E. Forsythe, M.A. Malcolm, C.B. Moler
 *   Computer methods for mathematical computations. - Prentice-Hall, 1977
 * However, in the present implementation, matrices U and V are computed
 * right away rather than delayed until after all Householder reflections.
 *
 * This code is based for the most part on an Algol68 code I wrote
 * ca. 1987
 *
 * $Id: svd.cc,v 1.4 1998/12/19 03:14:21 oleg Exp oleg $
 *
 ************************************************************************
 */

#if defined(__GNUC__)
#pragma implementation
#endif

#include <math.h>
#include "svd.h"
#include <float.h>
#include "LAStreams.h"
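/*
 * Editorial sketch, not part of the original package: what the
 * decomposition A = U * Sig * V' means element-wise. Since Sig has only
 * N non-zero (diagonal) elements sig[k], only the first N columns of U
 * matter, and A[i][j] = sum_k U[i][k] * sig[k] * V[j][k]. The flat
 * row-major arrays below are assumptions of the demo; the package
 * itself works with its own Matrix/Vector classes.
 */
static void svd_reconstruct_demo(const double* U, const double* sig,
                                 const double* V, double* A,
                                 const int M, const int N)
{
  // U is M x N here (the N columns that matter), V is N x N, A is M x N
  for(int i=0; i<M; i++)
    for(int j=0; j<N; j++)
    {
      double sum = 0;
      for(int k=0; k<N; k++)			// Sig is diagonal, so only
        sum += U[i*N+k] * sig[k] * V[j*N+k];	// index k pairs up; note
      A[i*N+j] = sum;				// V[j*N+k] = (V')[k][j]
    }
}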
/*
 *------------------------------------------------------------------------
 *				Bidiagonalization
 */

/*
 *			Left Householder Transformations
 *
 * Zero out an entire subdiagonal of the i-th column of A and compute the
 * modified A[i,i] by multiplication (UP' * A) with a matrix UP'
 *   (1)  UP' = E - UPi * UPi' / beta
 *
 * where a column-vector UPi is as follows
 *   (2)  UPi = [ (i-1) zeros, A[i,i] + Norm, vector A[i+1:M,i] ]
 * where beta = UPi' * A[,i] and Norm is the norm of a vector A[i:M,i]
 * (the sub-diag part of the i-th column of A). Note we assign the Norm
 * the same sign as that of A[i,i].
 * By construction, (1) does not affect the first (i-1) rows of A. Since
 * A[*,1:i-1] is bidiagonal (the result of the i-1 previous steps of
 * the bidiag algorithm), transform (1) doesn't affect these i-1 columns
 * either, as one can easily verify.
 * The i-th column of A is transformed as
 *   (3)  UP' * A[*,i] = A[*,i] - UPi
 * (since UPi'*A[*,i]/beta = 1 by construction of UPi and beta).
 * This means effectively zeroing out A[i+1:M,i] (the entire subdiagonal
 * of the i-th column of A) and replacing A[i,i] with -Norm. Thus the
 * modified A[i,i] is returned by the present function.
 * The other (i+1:N) columns of A are transformed as
 *   (4)  UP' * A[,j] = A[,j] - UPi * ( UPi' * A[,j] / beta )
 * Note, due to (2), only elements of rows i+1:M actually participate
 * in the above transforms; the upper i-1 rows of A are not affected.
 * As was mentioned earlier,
 *   (5)  beta = UPi' * A[,i]
 *	      = (A[i,i] + Norm)*A[i,i] + A[i+1:M,i]*A[i+1:M,i]
 *	      = ||A[i:M,i]||^2 + Norm*A[i,i] = Norm^2 + Norm*A[i,i]
 * (note the sign of the Norm is the same as that of A[i,i]).
 * For extra precision, the vector UPi (and with it Norm and beta) is
 * scaled, which does not affect (4), as is easy to see.
 *
 * To satisfy the definition
 *   (6)  SIG = U' A V
 * the result of consecutive transformations (1) over matrix A is
 * accumulated in matrix U' (which is initialized to be a unit matrix).
 * At each step, U' is left-multiplied by UP' = UP (UP is symmetric by
 * construction, see (1)). That is, U is right-multiplied by UP, that is,
 * rows of U are transformed similarly to columns of A, see eq. (4).
 * We also keep in mind that multiplication by UP at the i-th step does
 * not affect the first i-1 columns of U.
 * Note that the vector UPi doesn't have to be allocated explicitly: its
 * first i-1 components are zeros (which we can always imply in
 * computations), and the rest of the components (except UPi[i]) are the
 * same as those of A[i:M,i], the subdiagonal of A[,i]. This column,
 * A[,i], is affected only trivially as explained above, that is, we
 * don't need to carry this transformation explicitly (only A[i,i] is
 * going to be non-trivially affected, that is, replaced by -Norm, but we
 * will use sig[i] to store the result).
 *
 */

inline double SVD::left_householder(Matrix& A, const int i)
{					// Note that only UPi[i:M] matter
  IRange range = IRange::from(i - A.q_col_lwb());
  LAStreamOut UPi_str(MatrixColumn(A,i),range);
  register int j;
  REAL scale = 0;			// Compute the scaling factor
  while( !UPi_str.eof() )
    scale += abs(UPi_str.get());
  if( scale == 0 )			// If A[,i] is a null vector, no
    return 0;				// transform is required
  UPi_str.rewind();
  double Norm_sqr = 0;			// Scale UPi (that is, A[,i])
  while( !UPi_str.eof() )		// and compute its norm, Norm^2
    Norm_sqr += sqr(UPi_str.get() /= scale);
  UPi_str.rewind();
  double new_Aii = sqrt(Norm_sqr);	// new_Aii = -Norm, Norm has the
  if( UPi_str.peek() > 0 )		// same sign as Aii (that is, UPi[i])
    new_Aii = -new_Aii;
  const float beta = - UPi_str.peek()*new_Aii + Norm_sqr;
  UPi_str.peek() -= new_Aii;		// UPi[i] = A[i,i] - (-Norm)

  for(j=i+1; j<=N; j++)			// Transform i+1:N columns of A
  {
    LAStreamOut Aj_str(MatrixColumn(A,j),range);
    REAL factor = 0;
    while( !UPi_str.eof() )
      factor += UPi_str.get() * Aj_str.get();	// Compute UPi' * A[,j]
    factor /= beta;
    for(UPi_str.rewind(), Aj_str.rewind(); !UPi_str.eof(); )
      Aj_str.get() -= UPi_str.get() * factor;
    UPi_str.rewind();
  }
  for(j=1; j<=M; j++)			// Accumulate the transform in U
  {
    LAStrideStreamOut Uj_str(MatrixRow(U,j),range);
    REAL factor = 0;
    while( !UPi_str.eof() )
      factor += UPi_str.get() * Uj_str.get();	// Compute U[j,] * UPi
    factor /= beta;
    for(UPi_str.rewind(), Uj_str.rewind(); !UPi_str.eof(); )
      Uj_str.get() -= UPi_str.get() * factor;
    UPi_str.rewind();
  }
  return new_Aii * scale;		// Scale new Aii back (our new Sig[i])
}
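/*
 * Editorial sketch, not part of the original package: the same left
 * Householder step on plain C arrays, following eqs. (1)-(5) above.
 * x[0..m-1] stands for A[i:M,i] and is overwritten with the scaled
 * vector UPi; y[0..m-1] stands for the same slice of one other column
 * A[i:M,j] and receives transform (4). The return value is the new
 * A[i,i], that is, -Norm unscaled back, exactly as in the code above.
 */
static double left_householder_sketch(double x[], double y[], const int m)
{
  double scale = 0;			// Scaling factor, for extra precision
  for(int k=0; k<m; k++)
    scale += fabs(x[k]);
  if( scale == 0 )			// Null vector: no transform required
    return 0;
  double norm_sqr = 0;			// Scale x and accumulate ||x||^2
  for(int k=0; k<m; k++)
  {
    x[k] /= scale;
    norm_sqr += x[k]*x[k];
  }
  double new_xii = sqrt(norm_sqr);	// -Norm, given the same sign as x[0]
  if( x[0] > 0 )
    new_xii = -new_xii;
  const double beta = -x[0]*new_xii + norm_sqr;	// eq. (5)
  x[0] -= new_xii;			// x is now UPi of eq. (2), scaled
  double factor = 0;			// eq. (4): y -= UPi * (UPi'*y / beta)
  for(int k=0; k<m; k++)		// (the scaling of UPi cancels: it
    factor += x[k] * y[k];		// enters factor once and beta twice)
  factor /= beta;
  for(int k=0; k<m; k++)
    y[k] -= x[k] * factor;
  return new_xii * scale;		// Unscale: the new diagonal element
}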
/*
 *			Right Householder Transformations
 *
 * Zero out the i+2:N elements of a row A[i,] of matrix A by right
 * multiplication (A * VP) with a matrix VP
 *   (1)  VP = E - VPi * VPi' / beta
 *
 * where a vector-column VPi is as follows
 *   (2)  VPi = [ i zeros, A[i,i+1] + Norm, vector A[i,i+2:N] ]
 * where beta = A[i,] * VPi and Norm is the norm of a vector A[i,i+1:N]
 * (the right-diag part of the i-th row of A). Note we assign the Norm
 * the same sign as that of A[i,i+1].
 * By construction, (1) does not affect the first i columns of A. Since
 * A[1:i-1,] is bidiagonal (the result of the previous steps of
 * the bidiag algorithm), transform (1) doesn't affect these i-1 rows
 * either, as one can easily verify.
 * The i-th row of A is transformed as
 *   (3)  A[i,*] * VP = A[i,*] - VPi'
 * (since A[i,*]*VPi/beta = 1 by construction of VPi and beta).
 * This means effectively zeroing out A[i,i+2:N] (the entire right part
 * of the i-th row of A but ONE superdiag element) and replacing
 * A[i,i+1] with -Norm. Thus the modified A[i,i+1] is returned as the
 * result of the present function.
 * The other (i+1:M) rows of A are transformed as
 *   (4)  A[j,] * VP = A[j,] - VPi' * ( A[j,] * VPi / beta )
 * Note, due to (2), only elements of columns i+1:N actually participate
 * in the above transforms; the left i columns of A are not affected.
 * As was mentioned earlier,
 *   (5)  beta = A[i,] * VPi = (A[i,i+1] + Norm)*A[i,i+1]
 *				+ A[i,i+2:N]*A[i,i+2:N]
 *	      = ||A[i,i+1:N]||^2 + Norm*A[i,i+1] = Norm^2 + Norm*A[i,i+1]
 * (note the sign of the Norm is the same as that of A[i,i+1]).
 * For extra precision, the vector VPi (and with it Norm and beta) is
 * scaled, which does not affect (4), as is easy to see.
 *
 * The result of consecutive transformations (1) over matrix A is
 * accumulated in matrix V (which is initialized to be a unit matrix).
 * At each step, V is right-multiplied by VP. That is, rows of V are
 * transformed similarly to rows of A, see eq. (4). We also keep in mind
 * that multiplication by VP at the i-th step does not affect the first
 * i rows of V.
 * Note that the vector VPi doesn't have to be allocated explicitly: its
 * first i components are zeros (which we can always imply in
 * computations), and the rest of the components (except VPi[i+1]) are
 * the same as those of A[i,i+1:N], the superdiagonal of A[i,]. This row,
 * A[i,], is affected only trivially as explained above, that is, we
 * don't need to carry this transformation explicitly (only A[i,i+1] is
 * going to be non-trivially affected, that is, replaced by -Norm, but we
 * will use super_diag[i+1] to store the result).
 *
 */

inline double SVD::right_householder(Matrix& A, const int i)
{
  IRange range = IRange::from(i+1 - A.q_row_lwb());	// Note only VPi[i+1:N] matter
  LAStrideStreamOut VPi_str(MatrixRow(A,i),range);
  register int j;
  REAL scale = 0;			// Compute the scaling factor
  while( !VPi_str.eof() )
    scale += abs(VPi_str.get());
  if( scale == 0 )			// If A[i,] is a null vector, no
    return 0;				// transform is required

  VPi_str.rewind();
  double Norm_sqr = 0;			// Scale VPi (that is, A[i,])
  while( !VPi_str.eof() )		// and compute its norm, Norm^2
    Norm_sqr += sqr(VPi_str.get() /= scale);
  VPi_str.rewind();
  double new_Aii1 = sqrt(Norm_sqr);	// new_Aii1 = -Norm, Norm has the
  if( VPi_str.peek() > 0 )		// same sign as
    new_Aii1 = -new_Aii1;		// Aii1 (that is, VPi[i+1])
  const float beta = - VPi_str.peek()*new_Aii1 + Norm_sqr;
  VPi_str.peek() -= new_Aii1;		// VPi[i+1] = A[i,i+1] - (-Norm)

  for(j=i+1; j<=M; j++)			// Transform i+1:M rows of A
  {
    LAStrideStreamOut Aj_str( MatrixRow(A,j), range );
    REAL factor = 0;
    while( !VPi_str.eof() )
      factor += VPi_str.get() * Aj_str.get();	// Compute A[j,] * VPi
    factor /= beta;
    for(VPi_str.rewind(), Aj_str.rewind(); !VPi_str.eof(); )
      Aj_str.get() -= VPi_str.get() * factor;
    VPi_str.rewind();
  }
  for(j=1; j<=N; j++)			// Accumulate the transform in V
  {
    LAStrideStreamOut Vj_str( MatrixRow(V,j), range );
    REAL factor = 0;
    while( !VPi_str.eof() )
      factor += VPi_str.get() * Vj_str.get();	// Compute V[j,] * VPi
    factor /= beta;
    for(VPi_str.rewind(), Vj_str.rewind(); !VPi_str.eof(); )
      Vj_str.get() -= VPi_str.get() * factor;
    VPi_str.rewind();
  }
  return new_Aii1 * scale;		// Scale new Aii1 back
}
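/*
 * Editorial note with a sketch, not part of the original package: the
 * right transform is just the left transform applied to A'. Once a row
 * tail sits in a plain contiguous array, the column sketch above applies
 * verbatim. (The real code streams rows through LAStrideStreamOut, which
 * suggests the package stores matrices by columns, so rows are strided.)
 * Here `row` points at the n = N-i elements A[i,i+1:N] and `other` at
 * the same columns of one row j > i; both names are demo assumptions.
 */
static double right_householder_sketch(double row[], double other[], const int n)
{
  // Identical arithmetic: zeroes out row[1:n-1], returns the new row[0]
  return left_householder_sketch(row, other, n);
}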
/*
 *------------------------------------------------------------------------
 *			  Bidiagonalization
 * This method turns matrix A into a bidiagonal one. Its N diagonal
 * elements are stored in a vector sig, while the N-1 superdiagonal
 * elements are stored in a vector super_diag(2:N) (with super_diag(1)
 * being always 0). Matrices U and V store the record of orthogonal
 * Householder reflections that were used to convert A to this form. The
 * method returns the norm of the resulting bidiagonal matrix, that is,
 * the maximal column sum.
 */

double SVD::bidiagonalize(Vector& super_diag, const Matrix& _A)
{
  double norm_acc = 0;
  super_diag(1) = 0;			// No superdiag elem above A(1,1)
  Matrix A = _A;			// A being transformed
  A.resize_to(_A.q_nrows(),_A.q_ncols());	// Indexing from 1
  for(register int i=1; i<=N; i++)
  {
    const REAL& diagi = sig(i) = left_householder(A,i);
    if( i < N )
      super_diag(i+1) = right_householder(A,i);
    norm_acc = max(norm_acc,(double)abs(diagi)+abs(super_diag(i)));
  }
  return norm_acc;
}

/*
 *------------------------------------------------------------------------
 *		QR-diagonalization of a bidiagonal matrix
 *
 * After bidiagonalization we get a bidiagonal matrix J:
 *    (1)  J = U' * A * V
 * The present method turns J into a matrix JJ by applying a set of
 * orthogonal transforms
 *    (2)  JJ = S' * J * T
 * Orthogonal matrices S and T are chosen so that JJ is also a
 * bidiagonal matrix, but with superdiag elements smaller than those of
 * J. We repeat (2) until the non-diag elements of JJ become smaller
 * than EPS and can be disregarded.
 * Matrices S and T are constructed as
 *    (3)  S = S1 * S2 * S3 ... Sn, and similarly T
 * where Sk and Tk are matrices of simple rotations
 *    (4)  Sk[i,j] = i==j ? 1 : 0 for all i>k or i<k-1
 *         Sk[k-1,k-1] = cos(Phk),  Sk[k-1,k] = -sin(Phk),
 *         Sk[k,k-1] = sin(Phk),    Sk[k,k] = cos(Phk),  k=2..N
 * Matrix Tk is constructed similarly, only with angle Thk rather than
 * Phk.
 *
 * Thus left multiplication of J by Sk' can be spelled out as
 *    (5)  (Sk' * J)[i,j] = J[i,j] when i>k or i<k-1,
 *                 [k-1,j] =  cos(Phk)*J[k-1,j] + sin(Phk)*J[k,j]
 *                 [k,j]   = -sin(Phk)*J[k-1,j] + cos(Phk)*J[k,j]
 * That is, rows k-1 and k of J are replaced by their linear
 * combinations; the rest of J is unaffected. Right multiplication of J
 * by Tk similarly changes only columns k-1 and k of J.
 * Matrix T2 is chosen so that T2'J'JT2 is a QR-transform with a shift.
 * Note that multiplying J by T2 gives rise to a J[2,1] element of the
 * product J (which is below the main diagonal). Angle Ph2 is then chosen
 * so that multiplication by S2' (which combines rows 1 and 2 of J) gets
 * rid of that element. But this creates a non-zero [1,3] element. T3 is
 * made to make it disappear, but this leads to [3,2], etc.
 * In the end, Sn removes the [N,N-1] element of J and matrix S'JT
 * becomes bidiagonal again. However, because of the special choice of T2
 * (QR-algorithm), its non-diag elements are smaller than those of J.
 *
 * This process is described in more detail in
 *    J.H. Wilkinson, C. Reinsch. Linear algebra - Springer-Verlag, 1971
 *
 * If during transforms (2), JJ[N-1,N] turns 0, then JJ[N,N] is a
 * singular number (possibly with a wrong (that is, negative) sign). This
 * is a
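/*
 * Editorial sketch, not part of the original package: one plane rotation
 * Sk' of eqs. (4)-(5). To annihilate q = J[k,j0] against p = J[k-1,j0],
 * pick cos(Phk) = p/r and sin(Phk) = q/r with r = sqrt(p^2 + q^2), then
 * combine rows k-1 and k as in eq. (5). row_a and row_b stand for the
 * n touched columns of rows k-1 and k; the names are demo assumptions.
 */
static void rotation_sketch(double row_a[], double row_b[],
                            const int j0, const int n)
{
  const double p = row_a[j0], q = row_b[j0];
  const double r = sqrt(p*p + q*q);
  if( r == 0 )				// Nothing to annihilate
    return;
  const double c = p/r, s = q/r;	// cos(Phk) and sin(Phk) of eq. (4)
  for(int j=0; j<n; j++)		// eq. (5): rows k-1 and k become
  {					// linear combinations of themselves
    const double a = row_a[j], b = row_b[j];
    row_a[j] =  c*a + s*b;
    row_b[j] = -s*a + c*b;
  }
  // Post-condition: row_b[j0] == 0 and row_a[j0] == r (up to roundoff)
}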
