📄 bls_lip_mlp.m
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% For training two-layer perceptrons using batch least squares
%
% By: Kevin Passino
% Version: 2/9/99
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

clear

% First, generate the training data, G
% For the M=121 case

x=-6:0.1:6;
M=length(x)

for i=1:M,
    z(i)=0.15*(rand-0.5)*2;  % Define the auxiliary variable
    G(i)=exp(-50*(x(i)-1)^2)-0.5*exp(-100*(x(i)-1.2)^2)+atan(2*x(i))+2.15+...
        0.2*exp(-10*(x(i)+1)^2)-0.25*exp(-20*(x(i)+1.5)^2)+0.1*exp(-10*(x(i)+2)^2)-0.2*exp(-10*(x(i)+3)^2);
    if x(i) >= 0
        G(i)=G(i)+0.1*(x(i)-2)^2-0.4;
    end
    Gz(i)=G(i)+z(i);  % Adds in the influence of the auxiliary variable
%   fpoly(i)=0.6+0.1*x(i);
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% First, study the n1=2 case

n1=2

% First, form the vector Y

Y=Gz';

% Next, Phi, which involves processing x through phi

w1=[1.5]'; b1=0;
w2=[1.25]'; b2=-6;

% Initialize the Phi vector

phi1(1)=inv(1+exp(-b1-w1*x(1)));
phi2(1)=inv(1+exp(-b2-w2*x(1)));
Phi=[phi1(1), phi2(1), 1];

for i=2:M,
    phi1(i)=inv(1+exp(-b1-w1*x(i)));
    phi2(i)=inv(1+exp(-b2-w2*x(i)));
    Phi=[Phi; phi1(i), phi2(i), 1];
end

% Next, we compute the least squares estimate. Rather than using
% theta=inv(Phi'*Phi)*Phi'*Y; we will use a method in Matlab
% that can be better numerically. In particular, see the help document on
% "mldivide" that is implemented with the backslash.

theta=Phi\Y

% Note that we tested the result by plotting the resulting approximator
% mapping and it produces a reasonable result. It is for this reason
% that we trust the numerical computations, and do not seek to
% use other methods for the computation of the estimate.

% Next, compute the approximator values

for i=1:M,
    phi=[phi1(i) phi2(i) 1]';
    Fmlp(i)=theta'*phi;
end

% Next, plot the data and the approximator to compare

figure(1)
plot(x,Gz,'ko',x,Fmlp,'k')
xlabel('x(i)')
ylabel('y(i)=G(x(i),z(i)), and perceptron output')
title('Neural network approximation, trained with batch least squares')
grid
axis([min(x) max(x) 0 max(G)])

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Next, we study the case where there are n1=11 neurons
% in the hidden layer

% We use the same Y as before, but must form Phi
% which involves processing x through phi

n1=11

w(1,:)=ones(1,n1);
b=-5:1:5;

% Initialize the Phi vector

for j=1:n1
    phi(j,1)=inv(1+exp(-b(j)-w(1,j)*x(1)));
end
Phi=[phi(:,1)', 1];

for i=2:M,
    for j=1:n1
        phi(j,i)=inv(1+exp(-b(j)-w(1,j)*x(i)));
    end
    Phi=[Phi; phi(:,i)', 1];
end

% Next, we compute the least squares estimate. Rather than using
% theta=inv(Phi'*Phi)*Phi'*Y; we will use a method in Matlab
% that can be better numerically. In particular, see the help document on
% "mldivide" that is implemented with the backslash.

theta=Phi\Y

% Next, compute the approximator values

for i=1:M,
    Fmlp11(i)=theta'*[phi(:,i)', 1]';
end

% Next, plot the data and the approximator to compare

figure(2)
plot(x,Gz,'ko',x,Fmlp11,'k')
xlabel('x(i)')
ylabel('y(i)=G(x(i),z(i)), and perceptron output')
title('Neural network approximation, 11 hidden neurons')
grid
axis([min(x) max(x) 0 max(G)])

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Next, we study the case where there are n1=25 neurons
% in the hidden layer

% We use the same Y as before, but must form Phi
% which involves processing x through phi

n1=25

clear w b phi Phi theta

w(1,:)=ones(1,n1);
b=-6:0.5:6;

% Initialize the Phi vector

for j=1:n1
    phi(j,1)=inv(1+exp(-b(j)-w(1,j)*x(1)));
end
Phi=[phi(:,1)', 1];

for i=2:M,
    for j=1:n1
        phi(j,i)=inv(1+exp(-b(j)-w(1,j)*x(i)));
    end
    Phi=[Phi; phi(:,i)', 1];
end

% Next, we compute the least squares estimate. Rather than using
% theta=inv(Phi'*Phi)*Phi'*Y; we will use a method in Matlab
% that can be better numerically. In particular, see the help document on
% "mldivide" that is implemented with the backslash.

theta=Phi\Y

theta26=theta;  % For use in a later program
save variables theta26

% Next, compute the approximator values

for i=1:M,
    Fmlp25(i)=theta'*[phi(:,i)', 1]';
end

% Next, plot the data and the approximator to compare

figure(3)
plot(x,Gz,'ko',x,Fmlp25,'k')
xlabel('x(i)')
ylabel('y(i)=G(x(i),z(i)), and perceptron output')
title('Neural network approximation, 25 hidden neurons')
grid
axis([min(x) max(x) 0 max(G)])

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Next, we study the case where there are n1=121 neurons
% in the hidden layer

% We use the same Y as before, but must form Phi
% which involves processing x through phi

n1=121

clear w b phi Phi theta

w(1,:)=ones(1,n1);
b=-6:0.1:6;

% Initialize the Phi vector

for j=1:n1
    phi(j,1)=inv(1+exp(-b(j)-w(1,j)*x(1)));
end
Phi=[phi(:,1)', 1];

for i=2:M,
    for j=1:n1
        phi(j,i)=inv(1+exp(-b(j)-w(1,j)*x(i)));
    end
    Phi=[Phi; phi(:,i)', 1];
end

% Next, we compute the least squares estimate. Rather than using
% theta=inv(Phi'*Phi)*Phi'*Y; we will use a method in Matlab
% that can be better numerically. In particular, see the help document on
% "mldivide" that is implemented with the backslash.

theta=Phi\Y

% Next, compute the approximator values

for i=1:M,
    Fmlp121(i)=theta'*[phi(:,i)', 1]';
end

% Next, plot the data and the approximator to compare

figure(4)
subplot(121)
plot(x,Gz,'ko',x,Fmlp121,'k')
xlabel('x(i)')
ylabel('y(i)=G(x(i),z(i)), and perceptron output')
title('Neural network approximation, 121 hidden neurons')
grid
axis([min(x) max(x) 0 max(G)])
subplot(122)
plot(x,G,'k-.',x,Fmlp121,'k')
xlabel('x')
ylabel('y=G(x), and perceptron output')
title('Comparison to the function G(x)')
grid
axis([min(x) max(x) 0 max(G)])

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Next, we study the case where there are n1=121 neurons
% in the hidden layer, but now we train to match
% G rather than Gz

% First, form the vector Y

Y=G';

% We form Phi

n1=121

clear w b phi Phi theta

w(1,:)=ones(1,n1);
b=-6:0.1:6;

% Initialize the Phi vector

for j=1:n1
    phi(j,1)=inv(1+exp(-b(j)-w(1,j)*x(1)));
end
Phi=[phi(:,1)', 1];

for i=2:M,
    for j=1:n1
        phi(j,i)=inv(1+exp(-b(j)-w(1,j)*x(i)));
    end
    Phi=[Phi; phi(:,i)', 1];
end

% Next, we compute the least squares estimate. Rather than using
% theta=inv(Phi'*Phi)*Phi'*Y; we will use a method in Matlab
% that can be better numerically. In particular, see the help document on
% "mldivide" that is implemented with the backslash.

theta=Phi\Y

% Next, compute the approximator values

for i=1:M,
    Fmlp121(i)=theta'*[phi(:,i)', 1]';
end

% Next, plot the data and the approximator to compare

figure(5)
plot(x,G,'ko',x,Fmlp121,'k')
xlabel('x(i)')
ylabel('y(i)=G(x(i)), and perceptron output')
title('Neural network approximation, 121 hidden neurons')
grid
axis([min(x) max(x) 0 max(G)])

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% End of program
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
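
The listing builds each regressor matrix Phi with nested loops. The sketch below is not part of Passino's original file; it shows a vectorized way to rebuild the 25-neuron regressor matrix and cross-checks the backslash (mldivide) solution against the normal-equations formula the comments mention. It assumes x and Gz from the data-generation step are still in the workspace; the names n1v, bv, wv, PhiV, thetaV, and thetaNE are illustrative, not from the original program.

% Illustrative sketch (not in the original file): vectorized Phi for the
% n1=25 case, plus a comparison of mldivide with the normal equations.
n1v=25;
bv=-6:0.5:6;                        % same hidden-layer biases as the n1=25 case
wv=ones(1,n1v);                     % same input weights as the n1=25 case
Bv=repmat(bv(:),1,length(x));       % n1 x M copies of the biases
Wv=repmat(wv(:),1,length(x));       % n1 x M copies of the input weights
Xv=repmat(x,n1v,1);                 % n1 x M copies of the input samples
phiv=1./(1+exp(-Bv-Wv.*Xv));        % all logistic hidden-layer outputs at once
PhiV=[phiv', ones(length(x),1)];    % M x (n1+1) regressor matrix; last column is the output bias
thetaV=PhiV\Gz';                    % batch least squares via mldivide (backslash)
thetaNE=(PhiV'*PhiV)\(PhiV'*Gz');   % normal-equations solution, shown only for comparison
norm(thetaV-thetaNE)                % small when Phi'*Phi is well conditioned

Since this uses the same Gz realization and the same weights and biases as the n1=25 section, thetaV should essentially reproduce the theta26 computed there; the theta26 saved with "save variables theta26" can be restored in a later program with "load variables".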