
% 4v.txt: multi-variable-input BP neural network, implemented in MATLAB
% Read in the training and test data
X = []; 
Y1= [];
Y2= []; 
Y3= [];
Y4= [];
A=[];
B=[];
str = {'Test4'};
Data = textread([str{1},'.txt']); 
% Read the training data
for i=1:40 
X = [X Data(i:end-41+i-10,1) Data(i:end-41+i-10,2) Data(i:end-41+i-10,3) Data(i:end-41+i-10,4)]; % one lag of all four variables per iteration (column 4, not a repeat of column 1, so the layout matches Y4 and the recursive update below)
end
Y1 = Data(41:end-10,1);
Y2 = Data(41:end-10,2);
Y3 = Data(41:end-10,3);
Y4 = Data(41:end-10,4);
A = X(end,:);   % last feature row: seed window for the recursive prediction below
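The loop above turns the raw series into supervised samples: each row of X stacks a 40-step sliding window of the four measured variables (160 features), and each target Y is taken one step past the window (the last 10 rows are held back for the recursive forecast). A minimal Python sketch of the same windowing, with a hypothetical `data` array standing in for the file contents:

```python
import numpy as np

def make_windows(data, window=40, horizon=1):
    """Stack `window` consecutive rows of all variables into one feature
    row; the target is the row `horizon` steps past the window."""
    n, k = data.shape
    X, Y = [], []
    for t in range(n - window - horizon + 1):
        X.append(data[t:t + window].ravel())      # window*k features, lag-major
        Y.append(data[t + window + horizon - 1])  # k target values
    return np.array(X), np.array(Y)

# Tiny demo: 6 time steps of 4 variables
data = np.arange(24, dtype=float).reshape(6, 4)
X, Y = make_windows(data, window=2, horizon=1)
# X has 4 rows of 8 features; Y[0] is data row 2
```

The row-major `ravel` reproduces the MATLAB layout (lag 1 of all four columns, then lag 2, and so on), which is what lets the forecast loop later drop the oldest lag with `A(5:end)`.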
%Input = Input'; 
%Output = Output'; 
X=X';
Y3=Y3';
Y2=Y2';
Y1=Y1';
Y4=Y4';

% The transposes above put one sample per column, as the NN Toolbox expects
[X,minx,maxx,Y1,miny1,maxy1] = premnmx(X,Y1);
% Normalize the data to [-1, 1]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
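`premnmx` linearly maps each row of its arguments to [-1, 1] and returns the per-row minima and maxima, so the same scaling can be applied to new inputs with `tramnmx` and inverted with `postmnmx` (both used near the end of this script). A Python sketch of what these three Toolbox functions do, assuming the row-wise min-max convention:

```python
import numpy as np

def premnmx(p):
    """Scale each row of p to [-1, 1]; return scaled data and row min/max."""
    minp = p.min(axis=1, keepdims=True)
    maxp = p.max(axis=1, keepdims=True)
    pn = 2 * (p - minp) / (maxp - minp) - 1
    return pn, minp, maxp

def tramnmx(p, minp, maxp):
    """Apply a previously computed scaling to new data."""
    return 2 * (p - minp) / (maxp - minp) - 1

def postmnmx(pn, minp, maxp):
    """Invert the scaling back to the original units."""
    return (pn + 1) * (maxp - minp) / 2 + minp

p = np.array([[0.0, 5.0, 10.0],
              [2.0, 3.0, 4.0]])
pn, minp, maxp = premnmx(p)   # each row becomes [-1, 0, 1]
```

Keeping `minp`/`maxp` around is essential here: the forecast loop must normalize each new input window with the *training* statistics, not its own.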
% Neural-network parameter settings

Para.Goal = 1;
% Target training error
Para.Epochs = 800;
% Number of training epochs
Para.LearnRate = 0.8;
% Learning rate
%====
Para.Show = 200;
% Display interval during training
Para.InRange = repmat([-1 1],size(X,1),1);
% Range of each network input
%Para.Neurons = [size(X,1)*2+1 1];
Para.Neurons = [30 1];
% Neurons in the last two layers
Para.TransferFcn= {'logsig' 'purelin'};
% Transfer function of each layer
Para.TrainFcn = 'trainlm';
% Training function
% traingd  : gradient-descent backpropagation
% traingda : gradient descent with adaptive learning rate
% traingdm : gradient descent with momentum
% traingdx : gradient descent with momentum and adaptive learning rate
Para.LearnFcn = 'learngdm';
% Learning function
Para.PerformFcn = 'sse';
% Performance (error) function
Para.InNum = size(X,1);
% Input dimensionality
Para.IWNum = Para.InNum*Para.Neurons(1);
% Number of input weights
Para.LWNum = prod(Para.Neurons);
% Number of layer weights
Para.BiasNum = sum(Para.Neurons);
% Number of biases
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Net1 = newff(Para.InRange,Para.Neurons,Para.TransferFcn,... 
Para.TrainFcn,Para.LearnFcn,Para.PerformFcn); 
% Create the network
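The `newff` call above builds a two-layer feed-forward network: 30 `logsig` hidden units and one `purelin` (linear) output. A Python sketch of that forward pass, with hypothetical randomly initialized weight matrices in place of the Toolbox's initialization:

```python
import numpy as np

def logsig(x):
    """MATLAB's logsig transfer function: 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, IW, b1, LW, b2):
    """Two-layer BP network: logsig hidden layer, purelin output.
    x: (n_in, n_samples); IW: (n_hidden, n_in); LW: (1, n_hidden)."""
    a1 = logsig(IW @ x + b1)   # hidden-layer activations in (0, 1)
    return LW @ a1 + b2        # purelin output: identity on the net input

rng = np.random.default_rng(0)
n_in, n_hidden = 160, 30               # matches Para.InNum and Para.Neurons(1)
IW = rng.standard_normal((n_hidden, n_in)) * 0.1
b1 = np.zeros((n_hidden, 1))
LW = rng.standard_normal((1, n_hidden)) * 0.1
b2 = np.zeros((1, 1))
y = forward(rng.standard_normal((n_in, 5)), IW, b1, LW, b2)  # 5 samples in, 5 outputs
```

The weight counts computed in `Para.IWNum`, `Para.LWNum`, and `Para.BiasNum` correspond to the sizes of `IW`, `LW`, and `b1`/`b2` here.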
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Net1.trainParam.show = Para.Show;
% Display interval
Net1.trainParam.goal = Para.Goal;
% Target error
Net1.trainParam.lr = Para.LearnRate;
% Learning rate
Net1.trainParam.epochs = Para.Epochs;
% Training epochs

Net1.performFcn = Para.PerformFcn;
% Performance function
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Debug
OutY1 = sim(Net1,X);
% Simulate the freshly created (untrained) network
Sse1 = sse(Y1-OutY1);
% Error of the untrained network

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 
[Net1 TR] = train(Net1,X,Y1); 
% Train the network and return it


[Y2,miny2,maxy2] = premnmx(Y2);
% Normalize the data to [-1, 1]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Neural-network parameter settings

Para.Goal = 1;
% Target training error
Para.Epochs = 800;
% Number of training epochs
Para.LearnRate = 0.8;
% Learning rate
%====
Para.Show = 200;
% Display interval during training
Para.InRange = repmat([-1 1],size(X,1),1);
% Range of each network input
%Para.Neurons = [size(X,1)*2+1 1];
Para.Neurons = [30 1];
% Neurons in the last two layers
Para.TransferFcn= {'logsig' 'purelin'};
% Transfer function of each layer
Para.TrainFcn = 'trainlm';
% Training function
% traingd  : gradient-descent backpropagation
% traingda : gradient descent with adaptive learning rate
% traingdm : gradient descent with momentum
% traingdx : gradient descent with momentum and adaptive learning rate
Para.LearnFcn = 'learngdm';
% Learning function
Para.PerformFcn = 'sse';
% Performance (error) function
Para.InNum = size(X,1);
% Input dimensionality
Para.IWNum = Para.InNum*Para.Neurons(1);
% Number of input weights
Para.LWNum = prod(Para.Neurons);
% Number of layer weights
Para.BiasNum = sum(Para.Neurons);
% Number of biases
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Net2 = newff(Para.InRange,Para.Neurons,Para.TransferFcn,... 
Para.TrainFcn,Para.LearnFcn,Para.PerformFcn); 
% Create the network
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Net2.trainParam.show = Para.Show;
% Display interval
Net2.trainParam.goal = Para.Goal;
% Target error
Net2.trainParam.lr = Para.LearnRate;
% Learning rate
Net2.trainParam.epochs = Para.Epochs;
% Training epochs

Net2.performFcn = Para.PerformFcn;
% Performance function
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Debug
OutY2 = sim(Net2,X);
% Simulate the freshly created (untrained) network
Sse2 = sse(Y2-OutY2);
% Error of the untrained network

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 
[Net2 TR] = train(Net2,X,Y2); 
% Train the network and return it
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

[Y3,miny3,maxy3] = premnmx(Y3);
% Normalize the data to [-1, 1]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Neural-network parameter settings

Para.Goal = 1;
% Target training error
Para.Epochs = 800;
% Number of training epochs
Para.LearnRate = 0.8;
% Learning rate
%====
Para.Show = 200;
% Display interval during training
Para.InRange = repmat([-1 1],size(X,1),1);
% Range of each network input
%Para.Neurons = [size(X,1)*2+1 1];
Para.Neurons = [30 1];
% Neurons in the last two layers
Para.TransferFcn= {'logsig' 'purelin'};
% Transfer function of each layer
Para.TrainFcn = 'trainlm';
% Training function
% traingd  : gradient-descent backpropagation
% traingda : gradient descent with adaptive learning rate
% traingdm : gradient descent with momentum
% traingdx : gradient descent with momentum and adaptive learning rate
Para.LearnFcn = 'learngdm';
% Learning function
Para.PerformFcn = 'sse';
% Performance (error) function
Para.InNum = size(X,1);
% Input dimensionality
Para.IWNum = Para.InNum*Para.Neurons(1);
% Number of input weights
Para.LWNum = prod(Para.Neurons);
% Number of layer weights
Para.BiasNum = sum(Para.Neurons);
% Number of biases
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Net3 = newff(Para.InRange,Para.Neurons,Para.TransferFcn,... 
Para.TrainFcn,Para.LearnFcn,Para.PerformFcn); 
% Create the network
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Net3.trainParam.show = Para.Show;
% Display interval
Net3.trainParam.goal = Para.Goal;
% Target error
Net3.trainParam.lr = Para.LearnRate;
% Learning rate
Net3.trainParam.epochs = Para.Epochs;
% Training epochs

Net3.performFcn = Para.PerformFcn;
% Performance function
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Debug
OutY3 = sim(Net3,X);
% Simulate the freshly created (untrained) network
Sse3 = sse(Y3-OutY3);
% Error of the untrained network

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 
[Net3 TR] = train(Net3,X,Y3); 
% Train the network and return it

[Y4,miny4,maxy4] = premnmx(Y4);
% Normalize the data to [-1, 1]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Neural-network parameter settings

Para.Goal = 1;
% Target training error
Para.Epochs = 800;
% Number of training epochs
Para.LearnRate = 0.8;
% Learning rate
%====
Para.Show = 200;
% Display interval during training
Para.InRange = repmat([-1 1],size(X,1),1);
% Range of each network input
%Para.Neurons = [size(X,1)*2+1 1];
Para.Neurons = [30 1];
% Neurons in the last two layers
Para.TransferFcn= {'logsig' 'purelin'};
% Transfer function of each layer
Para.TrainFcn = 'trainlm';
% Training function
% traingd  : gradient-descent backpropagation
% traingda : gradient descent with adaptive learning rate
% traingdm : gradient descent with momentum
% traingdx : gradient descent with momentum and adaptive learning rate
Para.LearnFcn = 'learngdm';
% Learning function
Para.PerformFcn = 'sse';
% Performance (error) function
Para.InNum = size(X,1);
% Input dimensionality
Para.IWNum = Para.InNum*Para.Neurons(1);
% Number of input weights
Para.LWNum = prod(Para.Neurons);
% Number of layer weights
Para.BiasNum = sum(Para.Neurons);
% Number of biases
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Net4 = newff(Para.InRange,Para.Neurons,Para.TransferFcn,... 
Para.TrainFcn,Para.LearnFcn,Para.PerformFcn); 
% Create the network
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Net4.trainParam.show = Para.Show;
% Display interval
Net4.trainParam.goal = Para.Goal;
% Target error
Net4.trainParam.lr = Para.LearnRate;
% Learning rate
Net4.trainParam.epochs = Para.Epochs;
% Training epochs

Net4.performFcn = Para.PerformFcn;
% Performance function
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Debug
OutY4 = sim(Net4,X);
% Simulate the freshly created (untrained) network
Sse4 = sse(Y4-OutY4);
% Error of the untrained network

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 
[Net4 TR] = train(Net4,X,Y4); 
% Train the network and return it
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


 
T1=sim(Net1,X);
T2=sim(Net2,X);
T3=sim(Net3,X);
T4=sim(Net4,X);
[A1]=tramnmx(A',minx,maxx);
Out1 =sim(Net1,A1); 
Out2 =sim(Net2,A1); 
Out3 =sim(Net3,A1);
Out4 =sim(Net4,A1);
temp1=postmnmx(Out1,miny1,maxy1);
temp2=postmnmx(Out2,miny2,maxy2);
temp3=postmnmx(Out3,miny3,maxy3);
temp4=postmnmx(Out4,miny4,maxy4);
B=postmnmx(T4,miny4,maxy4);
% Recursive 10-step-ahead forecast: drop the oldest lag, append the predictions
for i=2:10
B=[B temp4];
A=[A(end:end,5:end) temp1 temp2 temp3 temp4];
[A1]=tramnmx(A',minx,maxx);
Out1 =sim(Net1,A1);
Out2 =sim(Net2,A1);
Out3 =sim(Net3,A1);
Out4 =sim(Net4,A1);
temp1=postmnmx(Out1,miny1,maxy1);
temp2=postmnmx(Out2,miny2,maxy2);
temp3=postmnmx(Out3,miny3,maxy3);
temp4=postmnmx(Out4,miny4,maxy4);
end
B=[B temp4];
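The loop above forecasts 10 steps ahead recursively: each step's four denormalized predictions replace the oldest lag in the feature window and are fed back through the networks. A Python sketch of that scheme, with a hypothetical `predict` callable standing in for the four trained networks plus the `tramnmx`/`postmnmx` round trip:

```python
import numpy as np

def recursive_forecast(window, predict, steps=10, n_vars=4):
    """window: flat vector of lagged values, oldest lag first, n_vars per lag.
    predict(window) -> next n_vars values; predictions are fed back in."""
    out = []
    w = np.asarray(window, dtype=float).copy()
    for _ in range(steps):
        nxt = predict(w)
        out.append(nxt)
        # drop the oldest lag (first n_vars entries), append the prediction
        w = np.concatenate([w[n_vars:], nxt])
    return np.array(out)

# Demo with a toy "network" that echoes the most recent lag
predict = lambda w: w[-4:]
window = np.arange(8.0)          # two lags of four variables
fc = recursive_forecast(window, predict, steps=3)
```

As in the MATLAB loop, forecast errors compound: each prediction becomes part of the next input, so multi-step accuracy degrades faster than the one-step training error suggests.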
% Simulate the trained network and plot against the measured data
input = 41:1:200;
Data = Data(41:200,4);
plot(1:length(input),Data','r:',1:length(input),B,'k');
