
trainpso.m

Intelligent optimization algorithms: particle swarm optimization (PSO) applied to neural network training. Variants are provided for networks with no hidden layer, one hidden layer, and two hidden layers. Run DemoTrainPSO.m to see a demo. Program from: Brian Birge, NCSU.
function [a,b,c,d,e,f,g,h] = trainpso(i,j,k,l,m,n,o,p,q,r,s,t)
% trainpso.m
%
% replacement for the nnet toolbox
% neural net training functions
% trainbp, trainbpm, trainbpx, and trainlm
% This uses a particle swarm optimizer as the 
% training function
%
% Brian Birge
% rev 1.0
% 01/01/03
%
%  TRAINPSO can be called with 1, 2, or 3 sets of weights
%  and biases to train up to 3 layer feed-forward networks.
%
%  [W1,B1,W2,B2,...,TE,TR] = TRAINPSO(W1,B1,F1,W2,B2,F2,...,P,T,TP)
%    Wi - SixR weight matrix for the ith layer.
%    Bi - Six1 bias vector for the ith layer.
%    Fi - Transfer function (string) for the ith layer (can be purelin, logsig, or tansig).
%    P  - RxQ matrix of input vectors.
%    T  - SxQ matrix of target vectors.
%    TP - Training parameters (optional).
%  Returns new weights and biases and
%    Wi - new weights.
%    Bi - new biases.
%    TE - the actual number of epochs trained.
%    TR - training record: [row of errors]
%
%  Training parameters are:
%    TP(1) - Epochs between updating display, default = 100.
%    TP(2) - Maximum number of iterations (epochs) to train, default = 4000.
%    TP(3) - Sum-squared error goal, default = 0.02.
%    TP(4) - population size, default = 20
%    TP(5) - maximum particle velocity, default = 4
%    TP(6) - acceleration constant 1, default = 2
%    TP(7) - acceleration constant 2, default = 2
%    TP(8) - initial inertia weight, default = 0.9
%    TP(9) - final inertia weight (iwt), default = 0.2
%    TP(10)- epoch by which the inertia weight reaches its final value, default = 1500
%    TP(11)- maximum initial network weight absolute value, default = 100
%    TP(12)- randomization flag (flagg); the literature recommends 2, default = 2:
%            = 0, random for each epoch
%            = 1, random for each particle at each epoch
%            = 2, random for each dimension of each particle at each epoch
%    TP(13)- minimum global error gradient (if SSE(i+1)-SSE(i) stays below this
%               value over the epoch window in TP(14), terminate the run),
%               default = 1e-9
%    TP(14)- epoch window for the error gradient criterion, default = 200,
%               i.e., if the SSE does not change over 200 epochs, quit
%    TP(15) - plot flag, if =1 display is updated during training
%
%	Missing parameters and NaN's are replaced with defaults.
%
%	See also: DemoTrainPSO, NNTRAIN, BACKPROP, INITFF, SIMFF, TRAINBPX, TRAINLM.
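%
%  Example (illustrative only; the layer sizes, data, and the NNT 2.x
%  initializer INITFF used here are assumptions, not part of this file):
%
%    P = rand(2,50);                       % 2 inputs, 50 training vectors
%    T = rand(1,50);                       % 1 target per vector
%    [W1,B1,W2,B2] = initff(P,5,'tansig',T,'purelin');   % 5 hidden neurons
%    TP = [100 4000 0.02];                 % display interval, max epochs, SSE goal
%    [W1,B1,W2,B2,TE,TR] = trainpso(W1,B1,'tansig',W2,B2,'purelin',P,T,TP);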

nntwarn off;   % suppress Neural Network Toolbox compatibility warnings

% Valid call signatures: 5/6 arguments (no hidden layer), 8/9 (one hidden
% layer), 11/12 (two hidden layers); the extra argument in each pair is the
% optional training-parameter vector TP.
if all([5 6 8 9 11 12] ~= nargin)
   error('Wrong number of input arguments');
end

% Dispatch to the trainer matching the network depth.
if nargin == 5                              % no hidden layer, default TP
  [a,b,c,d] = tpso1(i,j,k,l,m);
elseif nargin == 6                          % no hidden layer, TP supplied
  [a,b,c,d] = tpso1(i,j,k,l,m,n);
elseif nargin == 8                          % one hidden layer, default TP
  [a,b,c,d,e,f] = tpso2(i,j,k,l,m,n,o,p);
elseif nargin == 9                          % one hidden layer, TP supplied
  [a,b,c,d,e,f] = tpso2(i,j,k,l,m,n,o,p,q);
elseif nargin == 11                         % two hidden layers, default TP
  [a,b,c,d,e,f,g,h] = tpso3(i,j,k,l,m,n,o,p,q,r,s);
elseif nargin == 12                         % two hidden layers, TP supplied
  [a,b,c,d,e,f,g,h] = tpso3(i,j,k,l,m,n,o,p,q,r,s,t);
end
