
📄 regression_ls_svmlab_pso_luhua.m

📁 Optimizing support vector machine parameters with particle swarm optimization
💻 MATLAB (M-file)
% regression_ls_svmlab_pso_luhua.m
% LS-SVM regression with PSO-tuned hyperparameters (gam, sig2), using the LS-SVMlab toolbox.
clc; clear; % close all
st = cputime;                     % start time of the computation

%---------------------------------------------------
% Generate training and test samples
% load soft.txt;
% PP = soft(:,1:7);
% TT = soft(:,8);
% [h,l] = size(PP);
load luhua.txt;
[row,col] = size(luhua);
j = 1;
i = 1;
while (i <= row-19)               % keep every 20th row of the raw data
    data(j,:) = luhua(i,:);
    j = j+1;
    i = i+20;
end
PP = data(:,1:5);                 % inputs: first five columns
TT = data(:,6);                   % target: sixth column
[h,l] = size(PP);
% n1 = 1:2:200;
% x1 = sin(n1*0.1);
% n2 = 2:2:200;
% x2 = sin(n2*0.1);
% xn_train = n1;                  % training samples, one sample per column
% dn_train = x1;                  % training targets, row vector
% xn_test  = n2;                  % test samples, one sample per column
% dn_test  = x2;                  % test targets, row vector

%---------------------------------------------------
% Parameter settings
ntest  = 850;
ntrain = h - ntest;
X  = PP(1:ntrain,:);
Y  = TT(1:ntrain,:);
Xt = PP(ntrain+1:h,:);
Yt = TT(ntrain+1:h,:);
% X  = xn_train';
% Y  = dn_train';
% Xt = xn_test';
% Yt = dn_test';
type   = 'f';                     % function estimation (regression)
kernel = 'RBF_kernel';
% gam  = 1;                       % regularization parameter
% sig2 = 5;                       % kernel parameter (bandwidth in the case of 'RBF_kernel')

% PSO settings: dimension 1 is gam, dimension 2 is sig2
success = 0;
PopSize = 10;
MaxIt   = 10;
iter    = 0;
ErrGoal = 0.05;
dim     = 2;
maxw = 0.8;  minw = 0.2;  w = maxw;   % inertia weight, linearly decreasing
c1 = 2;  c2 = 2;                      % acceleration coefficients
populmax1 = 1000;  populmin1 = 200;   % search range for gam
velmax1   = 100;   velmin1   = -100;
populmax2 = 10;    populmin2 = 2;     % search range for sig2
velmax2   = 2;     velmin2   = -2;
populmax = [populmax1,populmax2];
populmin = [populmin1,populmin2];
velmax   = [velmax1,velmax2];
velmin   = [velmin1,velmin2];

% Random initial positions and velocities (one particle per column)
popul1 = rand(1,PopSize)*(populmax(1)-populmin(1)) + populmin(1);
popul2 = rand(1,PopSize)*(populmax(2)-populmin(2)) + populmin(2);
popul  = [popul1',popul2']';
clear popul1 popul2;
vel1 = rand(1,PopSize)*(velmax(1)-velmin(1)) + velmin(1);
vel2 = rand(1,PopSize)*(velmax(2)-velmin(2)) + velmin(2);
vel  = [vel1',vel2']';

% Initial fitness: training SSE of the LS-SVM built from each particle
% (note: the variable "error" shadows MATLAB's built-in error function)
for m = 1:PopSize
    model = initlssvm(X,Y,type,popul(1,m),popul(2,m),kernel,'original');  % initialize model
    model = trainlssvm(model);                                            % train
    Ytr   = simlssvm(model,X);
    error(m) = sum((Ytr-Y).^2)/2;
end
ibestpos = popul;                 % personal best positions
ibestfit = error;                 % personal best fitness values
[fbestpart,g] = min(ibestfit);    % find the global best fitness
gbestfit = fbestpart;             % global best fitness
gbestpos = ibestpos(:,g);         % particle corresponding to the global best fitness

while (success==0) && (iter<MaxIt)
    iter = iter+1;
    w = maxw - (maxw-minw)*(iter-1)/MaxIt;
    for m = 1:PopSize             % replicate the global best position for each particle
        A(:,m) = gbestpos;
    end
    R1 = rand(dim,PopSize);       % random numbers
    R2 = rand(dim,PopSize);
    vel = w*vel + c1*R1.*(ibestpos-popul) + c2*R2.*(A-popul);   % velocity update
    clear A;
    for m = 1:PopSize             % velocity clamping: re-randomize out-of-range components
        for d = 1:dim
            if vel(d,m) > velmax(d)
                vel(d,m) = rand*(velmax(d)-velmin(d)) + velmin(d);
            end
            if vel(d,m) < velmin(d)
                vel(d,m) = rand*(velmax(d)-velmin(d)) + velmin(d);
            end
        end
    end
    popul = popul + vel;          % position update
    for m = 1:PopSize             % position clamping: re-randomize out-of-range components
        for d = 1:dim
            if popul(d,m) > populmax(d)
                popul(d,m) = rand*(populmax(d)-populmin(d)) + populmin(d);
            elseif popul(d,m) < populmin(d)
                popul(d,m) = rand*(populmax(d)-populmin(d)) + populmin(d);
            end
        end
    end
    for m = 1:PopSize             % evaluate fitness and update personal bests
        model = initlssvm(X,Y,type,popul(1,m),popul(2,m),kernel,'original');  % initialize model
        model = trainlssvm(model);                                            % train
        Ytr   = simlssvm(model,X);
        error(m) = sum((Ytr-Y).^2)/2;
        if error(m) < ibestfit(m)
            ibestfit(m) = error(m);
            ibestpos(:,m) = popul(:,m);
        end
    end
    [fbestpart,g] = min(error);   % update the global best position
    if fbestpart < gbestfit
        gbestfit = fbestpart;
        gbestpos = popul(:,g);
%       velg = vel(:,g);
    end
    seg0(iter) = gbestfit;        % trajectory of the historical global best fitness
%     if abs(gbestfit) <= ErrGoal % check whether the error goal ends the iteration
%         success = 1;
%     end
end

%---------------------------------------------------
% Alternative: cross-validation tuning of the parameters
% costfun = 'rcrossvalidate';
% costfun_args = {X,Y,10};
% optfun = 'gridsearch';
% model = tunelssvm(model,[],optfun,{},costfun,costfun_args);   % tune model parameters

%---------------------------------------------------
% Train and test with the PSO-optimized parameters
model = initlssvm(X,Y,type,gbestpos(1),gbestpos(2),kernel,'original');  % initialize model
model = trainlssvm(model);        % train
Ytr = simlssvm(model,X);
Yd  = simlssvm(model,Xt);         % regression on the test set

%---------------------------------------------------
% Plot the results
figure(1)
plot(1:ntrain,Y(1:ntrain),'r:',1:ntrain,Ytr(1:ntrain),'b-')
xlabel('Sample index');           % axis labels
ylabel('4CBA value');
title('4CBA training set: actual vs. predicted');
legend('Actual','LS-SVM prediction');
figure(2)
plot(1+ntrain:h,Yt(1:ntest),'r:',1+ntrain:h,Yd(1:ntest),'b-')
xlabel('Sample index');           % axis labels
ylabel('4CBA value');
title('4CBA test set: actual vs. predicted');
legend('Actual','LS-SVM prediction');

% Error measures on the training and test sets
Err_SSE1   = sqrt(sumsqr(Ytr-Y)/ntrain);    % training RMSE
Err_SSE2   = sqrt(sumsqr(Yd-Yt)/ntest);     % test RMSE
Err_ABSE1  = sum(abs(Ytr-Y))/ntrain;        % training mean absolute error
Err_ABSE2  = sum(abs(Yd-Yt))/ntest;         % test mean absolute error
Err_RELAT1 = sumsqr((Ytr-Y)./Y)/ntrain;     % training mean squared relative error
Err_RELAT2 = sumsqr((Yd-Yt)./Yt)/ntest;     % test mean squared relative error
% plot(1:length(Yt),Yt,'r+:',1:length(Yd),Yd,'bo:')
% title('+ = actual, o = predicted')

% Elapsed CPU time (no semicolon, so the value is displayed)
stp = cputime - st
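The listing above needs the LS-SVMlab toolbox (initlssvm, trainlssvm, simlssvm) on the MATLAB path and a luhua.txt data file with at least six columns. The PSO core can be exercised on its own without either. What follows is a minimal sketch, not part of the original file: it keeps the same swarm size, iteration count, search box and inertia schedule, but swaps the LS-SVM training error for a hypothetical quadratic toy fitness (fitfun) and simply clamps out-of-range positions instead of re-randomizing them as the script does.

% Minimal PSO sketch over the [200,1000] x [2,10] search box used above.
% fitfun is a placeholder; it is NOT the LS-SVM fitness from the script.
fitfun = @(p) sum((p - [500; 5]).^2);        % toy fitness, minimum at [500; 5]
nP = 10;  nIt = 10;  dim = 2;                % swarm size, iterations, dimensions
lb = repmat([200; 2], 1, nP);                % lower bounds, one column per particle
ub = repmat([1000; 10], 1, nP);              % upper bounds
pos = lb + rand(dim,nP).*(ub - lb);          % random initial positions
vel = zeros(dim,nP);                         % zero initial velocities
pbest = pos;                                 % personal best positions
pbestfit = zeros(1,nP);
for m = 1:nP, pbestfit(m) = fitfun(pos(:,m)); end
[gbestfit, g] = min(pbestfit);  gbest = pbest(:,g);
for it = 1:nIt
    w = 0.8 - 0.6*(it-1)/nIt;                % linearly decreasing inertia weight
    G = repmat(gbest, 1, nP);                % global best replicated per particle
    vel = w*vel + 2*rand(dim,nP).*(pbest - pos) + 2*rand(dim,nP).*(G - pos);
    pos = min(max(pos + vel, lb), ub);       % clamp positions to the search box
    for m = 1:nP                             % update personal bests
        f = fitfun(pos(:,m));
        if f < pbestfit(m)
            pbestfit(m) = f;  pbest(:,m) = pos(:,m);
        end
    end
    [f, g] = min(pbestfit);                  % update global best
    if f < gbestfit, gbestfit = f; gbest = pbest(:,g); end
end
gbest, gbestfit                              % display the best point found

To reproduce the script's behaviour, fitfun would be replaced by a function that trains an LS-SVM with gam = p(1) and sig2 = p(2) and returns half the training sum of squared errors, as in the main loop above.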
