
📄 sd_mlp.m

📁 An optimization and control toolbox written in MATLAB
💻 MATLAB (.m)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% For training two-layer perceptrons using steepest descent
%
% By: Kevin Passino
% Version: 2/10/99
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

clear
figure(1)
clf             % Clear a figure that will be used
figure(2)
clf             % Clear a figure that will be used

% First set the number of steps for the simulation

Nsd=1001; % One more than the number of iterations you want
time=0:Nsd-1; % For use in plotting (notice that time starts at zero in the plots
              % and the index 1 corresponds to a time of zero)

lambda=0.01; % Step size used in all update formulas

% Next, we provide several options for initializing the algorithm (comment out appropriate part of the code):

% Define the parameters for the phi function of the neural network (the index 1 corresponds to k=0)
% Also, notice that since n=1 the indexing is simplified (switched some indices below compared to earlier)

n1=25;
w(:,1)=0.1*(2*rand(n1,1)-ones(n1,1)); % Initialize uniformly, each element, on [-0.1,0.1]
b(:,1)=0.1*(2*rand(n1,1)-ones(n1,1));

% Recall that we had initialized this in our earlier studies in least squares training as:
%w(:,1)=ones(n1,1);
%temp=-6:0.5:6;
%b(:,1)=temp';

% Also, we have to pick these values for the output layer (the parameters that enter linearly):

wo(:,1)=0.1*(2*rand(n1,1)-ones(n1,1));
bo(1)=0.1*(2*rand-1); % Just a scalar

theta(:,1)=[wo(:,1)', bo(1)]';  % Load the parameters that enter linearly into one vector (for convenience)

% As another option you can run the batch least squares program and generate the
% best guess at theta then use it to initialize the algorithm in the following manner:
%load variables theta26
%theta(:,1)=theta26;

% For RLS we had also chosen these to be all zeros
%theta(:,1)=0*theta(:,1);

% Next, define the error that results from the initial choice of parameters:

x(1)=6*(-1+2*rand);  % Input data uniformly distributed on (-6,6)
z(1)=0.15*(rand-0.5)*2;  % Define the auxiliary variable
G(1)=exp(-50*(x(1)-1)^2)-0.5*exp(-100*(x(1)-1.2)^2)+atan(2*x(1))+2.15+...
    0.2*exp(-10*(x(1)+1)^2)-0.25*exp(-20*(x(1)+1.5)^2)+0.1*exp(-10*(x(1)+2)^2)-0.2*exp(-10*(x(1)+3)^2);
if x(1) >= 0
    G(1)=G(1)+0.1*(x(1)-2)^2-0.4;
end
y(1)=G(1)+z(1); % Adds in the influence of the auxiliary variable

% Next, compute the estimator output

for j=1:n1
    xbar(j,1)=b(j,1)+w(j,1)*x(1);
    phi(j,1)=1/(1+exp(-xbar(j,1)));
end
phi(n1+1,1)=1;

% The estimator output is:

yhat(1)=theta(:,1)'*phi(:,1);

epsilon(1)=y(1)-yhat(1); % Define the estimation error

% Next, start the estimator

for k=2:Nsd

    % First, generate data from the "unknown" function

    x(k)=6*(-1+2*rand);  % Input data uniformly distributed on (-6,6)
    z(k)=0.15*(rand-0.5)*2;  % Define the auxiliary variable
    G(k)=exp(-50*(x(k)-1)^2)-0.5*exp(-100*(x(k)-1.2)^2)+atan(2*x(k))+2.15+...
        0.2*exp(-10*(x(k)+1)^2)-0.25*exp(-20*(x(k)+1.5)^2)+0.1*exp(-10*(x(k)+2)^2)-0.2*exp(-10*(x(k)+3)^2);
    if x(k) >= 0
        G(k)=G(k)+0.1*(x(k)-2)^2-0.4;
    end
    y(k)=G(k)+z(k); % Adds in the influence of the auxiliary variable

    % Compute the update formulas:

    for j=1:n1

        % First, for the bias in the output layer (variables tempi used so do not have to recompute things)

        temp1=lambda*epsilon(k-1);
        bo(k)=bo(k-1)+temp1;

        % Next, for the output layer weights

        temp2=temp1*phi(j,k-1);
        wo(j,k)=wo(j,k-1)+temp2;

        % Next, for the hidden biases

        temp3=temp2*wo(j,k-1)*(1-phi(j,k-1));
        b(j,k)=b(j,k-1)+temp3;

        % Finally, for the hidden weights

        w(j,k)=w(j,k-1)+temp3*x(k);

    end

    % Compute the phi vector

    for j=1:n1
        xbar(j,k)=b(j,k)+w(j,k)*x(k);
        phi(j,k)=1/(1+exp(-xbar(j,k)));
    end
    phi(n1+1,k)=1;
    theta(:,k)=[wo(:,k)', bo(k)]';  % Load the parameters that enter linearly into one vector (for convenience)
    yhat(k)=theta(:,k)'*phi(:,k);  % The current guess of the estimator
    epsilon(k)=y(k)-yhat(k);  % Compute the estimation error (for plotting if you want)

    if k<=11  % For the first 10 iterations plot the approximator mapping
    %if k<=Nsd && k>=Nsd-9 % This is in case you want to see the last 10 map shapes, but
    %                      % also have to change the "subplot" command below

        % Next, compute the estimator mapping and plot it on the data

        xt=-6:0.05:6;
        for i=1:length(xt)
            for j=1:n1
                xbart(j,i)=b(j,k)+w(j,k)*xt(i);
                phit(j,i)=1/(1+exp(-xbart(j,i)));
            end
            phit(n1+1,i)=1;
            Fmlp25(i)=theta(:,k)'*phit(:,i);  % The current guess of the estimator
        end

        % Plot the estimator mapping after having k-1 pieces of training data

        figure(1)
        subplot(5,2,k-1)
        plot(x,y,'ko',xt,Fmlp25,'k')
        xlabel('x')
        T=num2str(k-1);
        T=strcat('y, k=',T);
        ylabel(T)
        grid
        axis([-6 6 0 4.8])
        hold on

    end

    if k<=Nsd && k>=Nsd-9 % This is in case you want to see the last 10 map shapes

        xt=-6:0.05:6;
        for i=1:length(xt)
            for j=1:n1
                xbart(j,i)=b(j,k)+w(j,k)*xt(i);
                phit(j,i)=1/(1+exp(-xbart(j,i)));
            end
            phit(n1+1,i)=1;
            Fmlp25(i)=theta(:,k)'*phit(:,i);
        end

        % Plot the estimator mapping

        figure(2)
        subplot(5,2,k-(Nsd-9)+1)
        plot(x,y,'k.',xt,Fmlp25,'k')
        xlabel('x')
        T=num2str(k-1);
        T=strcat('y, k=',T);
        ylabel(T)
        grid
        axis([-6 6 0 4.8])
        hold on

    end

end  % This is the end for the main loop

% Next, plot the output of the last approximator obtained and the unknown function
% (just to get a bigger version of the last plot produced above)

figure(3)
clf
plot(x,y,'ko',xt,Fmlp25,'k')
xlabel('x')
T=num2str(k-1);
T=strcat('y, k=',T);
ylabel(T)
title('Neural network trained with steepest descent')
grid
axis([-6 6 0 4.8])

% Note: If you move the "end" from the main loop to here, along with a pause statement,
% you can make the algorithm show you the shape of the nonlinearity at each iteration.
% This can provide some insights but can quickly get tedious so another option would
% be to provide a 3-d plot of the shape as it moves through time, or a Matlab "movie"
%pause
%end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% End of program
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
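For reference, the per-neuron update loop in the program is steepest descent on the instantaneous squared error (1/2)*epsilon(k-1)^2: temp1 is the gradient step for the output bias, temp2 multiplies in the regressor phi(j,k-1) for the output weights, and the phi(j,k-1)*(1-phi(j,k-1)) factor (split across temp2 and temp3) is the derivative of the logistic activation needed for the hidden-layer parameters. The snippet below is a minimal vectorized sketch of that same update, not part of the original program; it assumes the variable names and dimensions used in the listing (phi has n1+1 rows, with the last row fixed at 1 for the output bias).

% Hedged, vectorized restatement of the update loop above (an illustration only;
% it assumes the listing's variables, with epsilon(k-1) and phi(1:n1,k-1) coming
% from the previous sample, exactly as in the program):
e = lambda*epsilon(k-1);                  % scaled error from the previous step
s = phi(1:n1,k-1).*(1-phi(1:n1,k-1));     % logistic derivative at the previous hidden outputs
bo(k)   = bo(k-1)   + e;                            % output-layer bias
wo(:,k) = wo(:,k-1) + e*phi(1:n1,k-1);              % output-layer weights
b(:,k)  = b(:,k-1)  + e*wo(:,k-1).*s;               % hidden-layer biases
w(:,k)  = w(:,k-1)  + e*wo(:,k-1).*s*x(k);          % hidden-layer weights

With n1=25 hidden neurons and step size lambda=0.01 this computes the same arithmetic; the program keeps the element-by-element loop so each gradient term can be read off directly.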
