
aprod2aug.m

From the book "Regularization tools for training large feed-forward neural networks using Automatic Differentiation"
Language: MATLAB
function [Af] = aprod2aug(mode,m,n,f,mu,rw,w1,b1,Output_Layer2,Deriv_Layer2,...
                          df1,w2,b2,Output_Layer3,Deriv_Layer3,df2,P,T)
%Call: [Af] = aprod2aug(mode,m,n,f,mu,rw,w1,b1,Output_Layer2,Deriv_Layer2,...
%             df1,w2,b2,Output_Layer3,Deriv_Layer3,df2,P,T)
%
%This routine computes Jaug*f or Jaug'*f, where J is the Jacobian corresponding
%to a FFANN with one hidden layer.
%Jaug = ( J  )
%       (mu*I)
%It should normally be used with the routine truncLS, which is a
%modification of LSQR written by Paige and Saunders.
%
% If mode == 1 aprod2aug returns y = Jaug*f
% If mode == 2 aprod2aug returns y = Jaug'*f
%
% Description of arguments:
%**************************
% mode          = 1 or 2 (see above)
% m, n and rw   working space, not really used
% f             the m-dimensional (or n-dimensional) vector in Jaug*f (Jaug'*f)
% mu            the Tikhonov regularization parameter
% wi            weight matrix of i:th layer
% bi            bias vector of i:th layer
% dfi           delta transfer function for i:th layer
% Output_Layeri matrix of outputs from layer i
% Deriv_Layeri  matrix of derivatives corresponding to layer i
% P             r*q matrix of input vectors
% T             s2*q matrix of target vectors
%
[Fst,q]=size(P);        % # nodes in input layer (first layer);
                        % q is # samples presented to the inputs
[Snd,r]=size(w1);       % # nodes in 1st hidden layer (second layer)
if Fst ~= r
   error('In aprod2aug: Fst .ne. r');
end
[Thrd,r]=size(w2);      % # nodes in output layer (third layer)
if Snd ~= r
   error('In aprod2aug: Snd .ne. r');
end
Om=Fst*Snd+Snd;         % # weights in first hidden layer
Nm=Om+Snd*Thrd+Thrd;    % # total weights
lenf=max(size(f));
if mode == 1
   if Nm ~= lenf
      error('In aprod2aug: Nm .ne. lenf (mode == 1)');
   else
      Af=zeros(q*Thrd+Nm,1);
   end
elseif (q*Thrd+Nm) ~= lenf
   error('In aprod2aug: q*Thrd+Nm .ne. lenf (mode == 2)');
else
   Af=zeros(Nm,1);
end
TEMP=ones(1,q);
%
if mode == 1
   %Compute Jaug*f by using forward mode.
   %Unpack the vector elements such that the same kind of computation as
   %in the function evaluation can be done.
   %First hidden layer
   %for k=1:Fst,
   %   r1=(k-1)*Snd;
   %   w1(:,k)=f(r1+1:r1+Snd);
   %end
   w1=reshape(f(1:Fst*Snd),Snd,Fst);
   b1=f(Fst*Snd+1:Fst*Snd+Snd);
   O_L2=b1*TEMP+w1*P;
   S2=Deriv_Layer2 .* O_L2;
   %Output layer
   %for k=1:Snd,
   %   r1=Om+(k-1)*Thrd;
   %   w2temp(:,k)=f(r1+1:r1+Thrd);
   %end
   w2temp=reshape(f(Om+1:Om+Snd*Thrd),Thrd,Snd);
   b2=f(Om+Snd*Thrd+1:Om+Snd*Thrd+Thrd);
   O_L3=b2*TEMP+w2*S2+w2temp*Output_Layer2;
   S3=Deriv_Layer3 .* O_L3;
   Af=[S3(:); mu*f];
else % Now mode should be 2
   %Compute Jaug'*f
   e=reshape(f(1:Thrd*q),Thrd,q);
   d2=feval(df2,Output_Layer3,e);
   d1=feval(df1,Output_Layer2,d2,w2);
   [dw1,db1]=learnbp(P,d1,1);
   [dw2,db2]=learnbp(Output_Layer2,d2,1);
   Af=[dw1(:); db1(:); dw2(:); db2(:)];
   Af=Af+mu*f(q*Thrd+1:q*Thrd+Nm);
end %of if mode == 1
end
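
Since mode 1 implements the forward-mode product Jaug*f and mode 2 its adjoint Jaug'*f (reverse mode, via the old Neural Network Toolbox routine learnbp that the file itself calls), the two modes can be sanity-checked against each other with a dot-product test: for random vectors u and v, v'*(Jaug*u) must equal (Jaug'*v)'*u up to rounding. Below is a minimal sketch of such a check, assuming a tansig hidden layer, a purelin output layer, and the old toolbox delta routines deltatan and deltalin passed as df1 and df2; the layer sizes, mu, and these routine names are illustrative assumptions, not part of the original file.

% Dot-product test for aprod2aug (a sketch; the network setup, sizes, and
% the toolbox routines deltatan/deltalin/learnbp are assumptions).
Fst=3; Snd=5; Thrd=2; q=10;            % input/hidden/output sizes; # samples
P=randn(Fst,q); T=randn(Thrd,q);       % random inputs and targets
w1=randn(Snd,Fst); b1=randn(Snd,1);    % hidden-layer weights and biases
w2=randn(Thrd,Snd); b2=randn(Thrd,1);  % output-layer weights and biases
O2=tanh(w1*P+b1*ones(1,q));            % hidden-layer outputs (tansig)
D2=1-O2.^2;                            % tansig derivatives
O3=w2*O2+b2*ones(1,q);                 % linear output layer (purelin)
D3=ones(Thrd,q);                       % purelin derivatives
mu=1e-2;                               % Tikhonov parameter (test value)
Nm=Fst*Snd+Snd+Snd*Thrd+Thrd;          % total # weights, as in aprod2aug
u=randn(Nm,1);                         % random direction for Jaug*u
v=randn(q*Thrd+Nm,1);                  % random direction for Jaug'*v
Ju =aprod2aug(1,0,0,u,mu,[],w1,b1,O2,D2,'deltatan',w2,b2,O3,D3,'deltalin',P,T);
Jtv=aprod2aug(2,0,0,v,mu,[],w1,b1,O2,D2,'deltatan',w2,b2,O3,D3,'deltalin',P,T);
disp(v'*Ju-Jtv'*u)                     % should be ~0 (rounding error only)

If the displayed difference is not at rounding-error level, one of the two modes, or the packing order [w1(:); b1; w2(:); b2] of the weight vector, is inconsistent.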
