
demotrainpso.m

Intelligent optimization algorithms: particle swarm optimization (PSO) applied to neural network training. Covers networks with no hidden layer, one hidden layer, and two hidden layers. Run DemoTrainPSO.m to start. Program by Brian Birge, NCSU.
% DemoTrainPSO.m
% little file to test out the pso optimizer for nnet training
% trains to the XOR function
%
% note: this does *not* minimize the test set function directly;
% rather, it trains a neural net to approximate the
% test set function
%
% Brian Birge
% Rev 1.0
% 1/1/3

clear all
close all
clc
help demotrainpso

%	Training parameters are:
%    TP(1) - Epochs between updating display, default = 100.
%    TP(2) - Maximum number of iterations (epochs) to train, default = 2000.
%    TP(3) - Sum-squared error goal, default = 0.02.
%    TP(4) - population size, default = 20
%    TP(5) - maximum particle velocity, default = 4
%    TP(6) - acceleration constant 1, default = 2
%    TP(7) - acceleration constant 2, default = 2
%    TP(8) - Initial inertia weight, default = 0.9
%    TP(9) - Final inertia weight, default = 0.2
%    TP(10)- Iteration (epoch) by which inertial weight should be at final value, default = 1500
%    TP(11)- maximum initial network weight absolute value, default = 100
%    TP(12)- randomization flag (flagg), default = 2:
%                      flagg = 0, same random numbers used for each particle (different at each epoch - least random)
%                      flagg = 1, separate random numbers for each particle at each epoch
%                      flagg = 2, separate random numbers for each component of each particle at each epoch (most random)
%    TP(13)- minimum global error gradient (if SSE(i+1)-SSE(i) < gradient over a
%               certain number of epochs, terminate the run), default = 1e-9
%    TP(14)- epochs before the error gradient criterion terminates the run, default = 200
%               i.e., if the SSE improves by less than TP(13) over 200 epochs, quit the program

nntwarn off   % suppress warnings from the old Neural Network Toolbox functions used below

epdt=25;        % epochs between display updates (TP(1))
maxep=1000;     % maximum training epochs (TP(2))
reqerr=0.02;    % sum-squared error goal (TP(3))
maxneur=30;     % cap on hidden-layer size for the growth loops below
popsz=20;       % swarm (population) size (TP(4))
maxvel=4;       % maximum particle velocity (TP(5))
acnst1=2;       % acceleration constant 1 (TP(6))
acnst2=2;       % acceleration constant 2 (TP(7))
inwt1=0.9;      % initial inertia weight (TP(8))
inwt2=0.2;      % final inertia weight (TP(9))
endepoch=1500;  % epoch at which inertia reaches its final value (TP(10))
maxwt=0.05;     % maximum initial weight magnitude (TP(11)); note: far below the documented default of 100
cnt=0;          % counter for neuron architecture (set but not used in this demo)

% Training parameters, change these to experiment with PSO performance
% type help trainpso to find out what they do
TP=[epdt,maxep,reqerr,popsz,maxvel,acnst1,acnst2,inwt1,inwt2,endepoch,maxwt,2,1e-9,200];
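
% For reference, TP(5)-TP(10) parametrize the canonical PSO velocity update.
% A sketch of the standard rule (see trainpso.m for the exact form used there),
% where pos/vel/pbest/gbest stand for a particle's position, velocity, personal
% best, and the swarm's global best:
%
%   vel = w*vel + acnst1*rand*(pbest - pos) + acnst2*rand*(gbest - pos);
%   vel = max(min(vel,maxvel),-maxvel);   % clamp velocity to +/- maxvel
%   pos = pos + vel;
%
% with the inertia w decaying linearly from inwt1 to inwt2 by epoch endepoch.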

disp('---------------------------------------------------------------------------------------------------');
disp(' ');
disp('1. 1 hidden layer');
disp('2. 2 hidden layers');
disp('3. no hidden layers');
arch=input('  Pick a neural net architecture >');
disp(' ');
disp('1. Particle Swarm Optimization');
disp('2. Standard Backprop');
meth=input('  Pick training method >');
disp(' ');
disp('---------------------------------------------------------------------------------------------------');
disp(' ');

% XOR-style test set (note: these targets are the complement of XOR,
% i.e. XNOR; either version is an equivalent nonlinearly separable problem)
P=[0,0;0,1;1,0;1,1]';
T=[1;0;0;1]';
minmax=[0,1;0,1];
l1=0; % layer-1 neuron count, grown until the error goal is met
l2=1; % layer-2 neuron count, used only in the two-hidden-layer case
tr(1)=99; % arbitrary large initial error so the growth loops run at least once

if arch==3
   % "no hidden layers": a single tansig layer maps the two inputs to one output
   [w1,b1]=initff(minmax,1,'tansig');
   if meth==1
      [w1,b1,te,tr]=trainpso(w1,b1,'tansig',P,T,TP);      
   elseif meth==2
      [w1,b1,te,tr]=trainbp(w1,b1,'tansig',P,T);      
   end   
elseif arch==1
   while tr(end)>reqerr
    l1=l1+1;  % add one hidden neuron and retrain

    [w1,b1,w2,b2]=initff(minmax,l1,'tansig',1,'purelin');
    
    if meth==1
      [w1,b1,w2,b2,te,tr]=trainpso(w1,b1,'tansig',w2,b2,'purelin',P,T,TP);
    elseif meth==2
      [w1,b1,w2,b2,te,tr]=trainbp(w1,b1,'tansig',w2,b2,'purelin',P,T);   
    end    
    if l1>maxneur
       break
    end  
   end
elseif arch==2
   while tr(end)>reqerr
    % grow the two hidden layers alternately, one neuron at a time
    if l1>l2
       l2=l2+1;
    else
       l1=l1+1;
    end
    
    [w1,b1,w2,b2,w3,b3]=initff(minmax,l2,'tansig',l1,'logsig',1,'purelin');
    
    if meth==1
      [w1,b1,w2,b2,w3,b3,te,tr]=trainpso(w1,b1,'tansig',w2,b2,'logsig',w3,b3,'purelin',P,T,TP);
    elseif meth==2
      [w1,b1,w2,b2,w3,b3,te,tr]=trainbp(w1,b1,'tansig',w2,b2,'logsig',w3,b3,'purelin',P,T);
    end         
    if l1>maxneur   % l1 leads the alternating growth, so this caps both hidden layers
       break
    end
   end
end
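
% Once a run finishes, the trained net can be checked against the truth
% table. This check is not part of Birge's original demo; it assumes
% simuff, the pre-5.0 Neural Network Toolbox feedforward simulator that
% pairs with initff/trainbp:
if arch==3
   A=simuff(P,w1,b1,'tansig');
elseif arch==1
   A=simuff(P,w1,b1,'tansig',w2,b2,'purelin');
elseif arch==2
   A=simuff(P,w1,b1,'tansig',w2,b2,'logsig',w3,b3,'purelin');
end
disp('inputs / target / output (one column per pattern):');
disp([P;T;A]);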
