neuralnetwork_bp_classification.m
% BP neural network for pattern classification
% Platform: Matlab 6.5
% Author: Lu Zhenbo, Naval University of Engineering
% Correspondence and collaboration are welcome; more articles and programs
% are available for download on my homepage.
% E-mail: luzhenbo@yahoo.com.cn
% Homepage: http://luzhenbo.88uu.com.cn
clc
clear
close all
%---------------------------------------------------
% Generate training and test samples; each column is one sample
P1 = [rand(3,5),rand(3,5)+1,rand(3,5)+2];
T1 = [repmat([1;0;0],1,5),repmat([0;1;0],1,5),repmat([0;0;1],1,5)];
%
% repmat - Replicate and tile an array
%   Syntax:
%     B = repmat(A,m,n)
%     B = repmat(A,[m n])
%     B = repmat(A,[m n p...])
%   B = repmat(A,m,n) creates a large matrix B consisting of an m-by-n
%   tiling of copies of A; repmat(A,n) creates an n-by-n tiling, and
%   B = repmat(A,[m n]) is equivalent to repmat(A,m,n).
%   B = repmat(A,[m n p...]) produces a multidimensional
%   (m-by-n-by-p-by-...) array composed of copies of A, which may itself
%   be multidimensional. When A is a scalar, repmat(A,m,n) produces an
%   m-by-n matrix filled with A's value; this can be much faster than
%   a*ones(m,n) when m or n is large.
%   Example: repmat replicates 12 copies of the second-order identity
%   matrix, resulting in a "checkerboard" pattern:
%     B = repmat(eye(2),3,4)
%     B =
%        1 0 1 0 1 0 1 0
%        0 1 0 1 0 1 0 1
%        1 0 1 0 1 0 1 0
%        0 1 0 1 0 1 0 1
%        1 0 1 0 1 0 1 0
%        0 1 0 1 0 1 0 1
%   The statement N = repmat(NaN,[2 3]) creates a 2-by-3 matrix of NaNs.
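%
% As used above (illustrative note, not part of the original script): T1 is
% a 3x15 one-hot target matrix built by tiling each class code into 5
% columns, e.g. repmat([1;0;0],1,5) yields
%    1 1 1 1 1
%    0 0 0 0 0
%    0 0 0 0 0
% so columns 1-5 of T1 encode class 1, columns 6-10 class 2, 11-15 class 3.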
P2 = [rand(3,5),rand(3,5)+1,rand(3,5)+2];
T2 = [repmat([1;0;0],1,5),repmat([0;1;0],1,5),repmat([0;0;1],1,5)];
%---------------------------------------------------
% Normalization
[PN1,minp,maxp] = premnmx(P1); % normalize training inputs to [-1,1]
PN2 = tramnmx(P2,minp,maxp);   % apply the same mapping to the test inputs
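% For reference (a sketch, not part of the original script): premnmx and
% tramnmx implement the row-wise linear mapping to [-1,1]
%   PN = 2*(P - minp) ./ (maxp - minp) - 1
% so the test-set normalization above is equivalent to:
% PN2_check = 2*(P2 - repmat(minp,1,size(P2,2))) ...
%               ./ repmat(maxp - minp,1,size(P2,2)) - 1;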
%---------------------------------------------------
% Network parameters
NodeNum = 10; % number of hidden-layer nodes
TypeNum = 3;  % output dimension (number of classes)
TF1 = 'tansig'; TF2 = 'purelin'; % transfer functions (default choice)
%TF1 = 'tansig';TF2 = 'logsig';
%TF1 = 'logsig';TF2 = 'purelin';
%TF1 = 'tansig';TF2 = 'tansig';
%TF1 = 'logsig';TF2 = 'logsig';
%TF1 = 'purelin';TF2 = 'purelin';
net = newff(minmax(PN1),[NodeNum TypeNum],{TF1 TF2});
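% Sanity check (illustrative, assuming the Neural Network Toolbox object
% model that ships with Matlab 6.5): the created network is 3-10-3 with the
% chosen transfer functions, which can be inspected directly:
% disp(net.layers{1}.size)        % 10 (hidden nodes)
% disp(net.layers{1}.transferFcn) % 'tansig'
% disp(net.layers{2}.transferFcn) % 'purelin'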
%---------------------------------------------------
% Specify training parameters
% net.trainFcn = 'traingd';  % gradient descent
% net.trainFcn = 'traingdm'; % gradient descent with momentum
%
% net.trainFcn = 'traingda'; % gradient descent with adaptive learning rate
% net.trainFcn = 'traingdx'; % gradient descent with momentum and adaptive learning rate
%
% (preferred for large networks - pattern recognition)
% net.trainFcn = 'trainrp';  % RPROP (resilient backpropagation); smallest memory requirement
%
% Conjugate-gradient algorithms
% net.trainFcn = 'traincgf'; % Fletcher-Reeves update
% net.trainFcn = 'traincgp'; % Polak-Ribiere update; slightly more memory than Fletcher-Reeves
% net.trainFcn = 'traincgb'; % Powell-Beale restarts; slightly more memory than Polak-Ribiere
% (preferred for large networks - function fitting, pattern recognition)
% net.trainFcn = 'trainscg'; % scaled conjugate gradient; same memory as Fletcher-Reeves, far less computation than the three above
%
% net.trainFcn = 'trainbfg'; % quasi-Newton (BFGS); more computation and memory than conjugate gradient, but faster convergence
% net.trainFcn = 'trainoss'; % one-step secant; less computation and memory than BFGS, slightly more than conjugate gradient
%
% (preferred for small and medium networks - function fitting, pattern recognition)
net.trainFcn = 'trainlm'; % Levenberg-Marquardt; largest memory requirement, fastest convergence
%
% net.trainFcn = 'trainbr'; % Bayesian regularization
%
% Five representative algorithms: 'traingdx', 'trainrp', 'trainscg', 'trainoss', 'trainlm'
%---------------------------------------------------
net.trainParam.show = 1;        % display interval during training
net.trainParam.lr = 0.3;        % learning rate - traingd, traingdm
net.trainParam.mc = 0.95;       % momentum coefficient - traingdm, traingdx
net.trainParam.mem_reduc = 10;  % compute the Hessian in blocks (Levenberg-Marquardt only)
net.trainParam.epochs = 1000;   % maximum number of training epochs
net.trainParam.goal = 1e-8;     % target mean squared error
net.trainParam.min_grad = 1e-20; % minimum gradient
net.trainParam.time = inf;      % maximum training time
%---------------------------------------------------
% Training
net = train(net,PN1,T1); % train the network on the normalized training set
%---------------------------------------------------
% Testing
Y1 = sim(net,PN1); % actual output on the training samples
Y2 = sim(net,PN2); % actual output on the test samples
Y1 = full(compet(Y1)); % winner-take-all (competitive) output
Y2 = full(compet(Y2));
% compet - Competitive transfer function
%
%   Syntax:
%     A = compet(N)
%     info = compet(code)
%
%   compet is a transfer function; transfer functions calculate a layer's
%   output from its net input. compet(N) takes one input argument,
%     N - SxQ matrix of net input (column) vectors,
%   and returns output vectors with 1 where each net input vector has its
%   maximum value, and 0 elsewhere.
%
%   compet(code) returns information about this function for these codes:
%     'deriv'  - name of derivative function
%     'name'   - full name
%     'output' - output range
%     'active' - active input range
%   compet does not have a derivative function.
%
%   Example: define a net input vector n, calculate the output, and plot
%   both with bar graphs:
%     n = [0; 1; -0.5; 0.5];
%     a = compet(n);
%     subplot(2,1,1), bar(n), ylabel('n')
%     subplot(2,1,2), bar(a), ylabel('a')
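%
% Equivalent class-index form (a sketch, not in the original script;
% vec2ind is from the same toolbox and returns the row index of the 1 in
% each one-hot column):
% Class1 = vec2ind(Y1); % 1x15 vector of predicted training-set classes
% Class2 = vec2ind(Y2); % 1x15 vector of predicted test-set classes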
%---------------------------------------------------
% Result statistics
Result = ~sum(abs(T1-Y1)) % 1 marks a correctly classified sample
Percent1 = sum(Result)/length(Result) % classification accuracy on the training set
Result = ~sum(abs(T2-Y2)) % 1 marks a correctly classified sample
Percent2 = sum(Result)/length(Result) % classification accuracy on the test set
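% Optional summary (illustrative sketch, not part of the original script):
% with one-hot T2 and Y2, entry (i,j) of T2*Y2' counts samples of true
% class i predicted as class j, i.e. a 3x3 confusion matrix for the test set:
% ConfMat = T2 * Y2';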