% incbp.m -- from "类神经网路─MATLAB的应用" (Neural Networks: Applications of MATLAB, example programs)
function [W1,W2,PI_vector,iter]=incbp(NetDef,W1,W2,PHI,Y,trparms)
% 
%  INCBP
%  -----
%        Recursive (=incremental) version of the backpropagation algorithm.
%
%  Given a set of corresponding input-output pairs and an initial network
%  [W1,W2,PI_vector,iter]=incbp(NetDef,W1,W2,PHI,Y,trparms) trains a
%  network with recursive backpropagation.
%
%  The activation functions must be either linear or tanh. The network
%  architecture is determined by the matrix 'NetDef' consisting of two 
%  rows. The first row specifies the hidden layer while the second
%  specifies the output layer.
%
%  E.g.,    NetDef = ['LHHHH'
%                     'LL---']
%
%  (L = Linear, H = tanh)
% 
%  Notice that the bias is included as the last column in the weight
%  matrices.
%
%  See BATBP for the batch version of the algorithm.
% 
%  INPUT:
%  NetDef: Network definition 
%  W1    : Input-to-hidden layer weights. The matrix dimension is
%          dim(W1) = [(# of hidden units) * (inputs + 1)] (the 1 is due to the bias)
%  W2    : Hidden-to-output layer weights.
%           dim(W2) = [(outputs)  *  (# of hidden units + 1)]
%  PHI   : Input vector. dim(PHI) = [(inputs)  *  (# of data)]
%  Y     : Output data. dim(Y) = [(outputs)  * (# of data)]
%  trparms : Vector containing parameters associated with the training
%             trparms = [max_iter stop_crit eta]
%             max_iter  : Max. number of iterations
%             stop_crit : Stop learning if criterion is below this value
%             eta       : Step size
%
%  OUTPUT:
%  W1, W2   : Weight matrices after training
%  PI_vector: Vector containing the criterion evaluated at each iteration.
%  iter     : # of iterations
% 
%  Programmed by : Magnus Norgaard, IAU//IMM, DTU
%  LastEditDate  : July 16, 1996
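%
%  Example (illustrative only -- the data, architecture and step size
%  below are made up, and PMNTANH is assumed to be on the path):
%
%    NetDef  = ['HH';'L-'];            % 2 tanh hidden units, 1 linear output
%    W1      = 0.1*randn(2,2);         % (# hidden) x (inputs + 1)
%    W2      = 0.1*randn(1,3);         % (outputs) x (# hidden + 1)
%    PHI     = linspace(-1,1,50);      % 1 input, 50 samples
%    Y       = sin(pi*PHI);            % corresponding target outputs
%    trparms = [500 1e-3 0.05];        % [max_iter stop_crit eta]
%    [W1,W2,PI_vector,iter] = incbp(NetDef,W1,W2,PHI,Y,trparms);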

 
%----------------------------------------------------------------------------------
%--------------             NETWORK INITIALIZATIONS                   -------------
%----------------------------------------------------------------------------------
max_iter  = trparms(1);
stop_crit = trparms(2);
eta       = trparms(3);
[outputs,N] = size(Y);
[layers,dummy] = size(NetDef);    % Number of layers (rows in NetDef)
L_hidden = find(NetDef(1,:)=='L')';   % Location of linear hidden neurons
H_hidden = find(NetDef(1,:)=='H')';   % Location of tanh hidden neurons
L_output = find(NetDef(2,:)=='L')';   % Location of linear output neurons
H_output = find(NetDef(2,:)=='H')';   % Location of tanh output neurons
[hidden,net_inputs] = size(W1);
net_inputs=net_inputs-1;
y1 = zeros(hidden,1);
delta1 = y1;
y2 = zeros(outputs,1);
delta2 = y2;
Y_train = zeros(size(Y));
PI_vector  = zeros(max_iter,1);       % Criterion (SSE/(2N)) for each iteration

%---------------------------------------------------------------------------------
%-------------                   TRAIN NETWORK                       -------------
%---------------------------------------------------------------------------------
clc;
c=fix(clock);
fprintf('Network training started at %2i.%2i.%2i\n\n',c(4),c(5),c(6));

for j=1:max_iter,
  PI=0;
  for jj=1:N,
   
% ---  Compute network output (Presentation phase)  ---
    h1 = W1(:,1:net_inputs)*PHI(:,jj) + W1(:,net_inputs+1);  
    y1(H_hidden) = pmntanh(h1(H_hidden));   % tanh hidden units
    y1(L_hidden) = h1(L_hidden);
    
    h2 = W2(:,1:hidden)*y1 + W2(:,hidden+1);
    y2(H_output) = pmntanh(h2(H_output));
    y2(L_output) = h2(L_output);

% ---  Train network  ---
    E = Y(:,jj) - y2;                      % Training error
                                           % Delta for output layer
    delta2(H_output) = (1-y2(H_output).*y2(H_output)).*E(H_output);
    delta2(L_output) = E(L_output);
                                           % delta for hidden layer
    E1 = W2(:,1:hidden)'*delta2; 
    delta1(H_hidden) = (1-y1(H_hidden).*y1(H_hidden)).*E1(H_hidden);
    delta1(L_hidden) = E1(L_hidden);
   
    W2 = W2 + eta*delta2*[y1;1]';          % Update weights between hidden and output
    W1 = W1 + eta*delta1*[PHI(:,jj);1]';   % Update weights between input and hidden
   
    Y_train(:,jj)=y2;
    PI = PI + E'*E;                        % Update performance index (SSE)
  end
  
%>>>>>>>>>>>>>>>>>>>>>>       UPDATES FOR NEXT ITERATION       <<<<<<<<<<<<<<<<<<<
  PI = PI/(2*N);
  PI_vector(j)=PI;
  fprintf('iteration # %i   PI = %f\r',j,PI)
  if PI < stop_crit, break, end         % Check if stop condition is satisfied
end
%---------------------------------------------------------------------------------
%-------------              END OF NETWORK TRAINING                  -------------
%---------------------------------------------------------------------------------
iter = j;                             % Number of iterations actually performed
PI_vector = PI_vector(1:iter);
c=fix(clock);
fprintf('\n\nNetwork training ended at %2i.%2i.%2i\n',c(4),c(5),c(6));
