%% Underdetermined Problem
% A linear neuron is trained to find a non-unique solution to an underdetermined
% problem.
%
% Copyright 1992-2002 The MathWorks, Inc.
% $Revision: 1.12 $ $Date: 2002/03/29 19:36:16 $

%%
% P defines a single 1-element input pattern (column vector). T defines the
% associated 1-element target (column vector). Note that there are infinitely
% many values of W and B such that the expression W*P+B = T is true. Problems
% with multiple solutions are called underdetermined.

P = [+1.0];
T = [+0.5];
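
%%
% (Added illustration, not part of the original demo.) Any weight/bias pair on
% the line W + B = 0.5 reproduces the target exactly. For example:

W1 = 0.5; B1 = 0.0;          % one solution: 0.5*1 + 0.0 = 0.5
W2 = 0.0; B2 = 0.5;          % another solution: 0.0*1 + 0.5 = 0.5
[W1*P+B1, W2*P+B2]           % both equal the target T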

%%
% ERRSURF calculates errors for a neuron with a range of possible weight and
% bias values. PLOTES plots this error surface with a contour plot underneath.
% The bottom of the valley in the error surface corresponds to the infinitely
% many solutions of this problem.

w_range = -1:0.2:1; b_range = -1:0.2:1;
ES = errsurf(P,T,w_range,b_range,'purelin');
plotes(w_range,b_range,ES);
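
%%
% (Added check, not part of the original demo.) Sampling points along the
% valley line w + b = 0.5 confirms they all give zero error for this pattern:

wv = -1:0.5:1;               % sample weights along the valley floor
bv = 0.5 - wv;               % matching biases on the line w + b = 0.5
ev = T - (wv*P + bv)         % every entry is zero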

%%
% MAXLINLR finds the fastest stable learning rate for training a linear network.
% NEWLIN creates a linear neuron. NEWLIN takes these arguments: 1) Rx2 matrix
% of min and max values for R input elements, 2) Number of elements in the
% output vector, 3) Input delay vector, and 4) Learning rate.

maxlr = maxlinlr(P,'bias');
net = newlin([-2 2],1,[0],maxlr);
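
%%
% (Added note.) The chosen learning rate and the network's starting point can
% be inspected directly. NEWLIN initializes the weight and bias to zero by
% default, so training begins at the origin of the error surface:

maxlr                        % fastest stable learning rate for this problem
net.IW{1,1}                  % initial weight (zero by default)
net.b{1}                     % initial bias (zero by default)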


%%
% Override the default training parameters by setting the performance goal.

net.trainParam.goal = 1e-10;

%%
% To show the path of the training, we will train only one epoch at a time and
% call PLOTEP every epoch. The plot shows a history of the training. Each dot
% represents an epoch, and the blue lines show each change made by the learning
% rule (Widrow-Hoff by default; a sketch of that update rule follows the loop
% below).

% [net,tr] = train(net,P,T);
net.trainParam.epochs = 1;                          % train one epoch per call
net.trainParam.show = NaN;                          % suppress status displays
h = plotep(net.IW{1},net.b{1},mse(T-sim(net,P)));   % plot the starting point
[net,tr] = train(net,P,T);                          % first epoch
r = tr;                                             % accumulated training record
epoch = 1;
while true
   epoch = epoch+1;
   [net,tr] = train(net,P,T);                       % train one more epoch
   if length(tr.epoch) > 1                          % an epoch actually ran
      h = plotep(net.IW{1,1},net.b{1},tr.perf(2),h);
      r.epoch = [r.epoch epoch];
      r.perf  = [r.perf tr.perf(2)];
      r.vperf = [r.vperf NaN];
      r.tperf = [r.tperf NaN];
   else
      break                                         % goal reached, stop training
   end
end
tr = r;
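
%%
% (Added sketch, not part of the original demo.) Each blue line above is one
% Widrow-Hoff (LMS) update of the form dW = lr*e*p', db = lr*e. Recomputing
% the update at the trained weights shows the rule has effectively converged:

lr = maxlr;                  % learning rate used by the network
e  = T - sim(net,P);         % error at the trained weights
dW = lr * e * P'             % weight step (near zero after convergence)
db = lr * e                  % bias step (near zero after convergence)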

%%
% Here we plot the NEWLIND solution. Note that the TRAIN (white dot) and
% NEWLIND (red circle) solutions are not the same. In fact, TRAIN will
% return a different solution for different initial conditions, while NEWLIND
% will always return the same solution.

solvednet = newlind(P,T);
hold on;
plot(solvednet.IW{1,1},solvednet.b{1},'ro')
hold off;
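
%%
% (Added sketch, assuming NEWLIND's standard least-squares formulation.) The
% direct solution can be computed with MATLAB's matrix division; compare the
% result with solvednet.IW{1,1} and solvednet.b{1}:

Q  = size(P,2);              % number of input patterns
WB = T/[P; ones(1,Q)]        % [W B] solving W*P + B = T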

%%
% The train function outputs the trained network and a history of the training
% performance (tr). Here the errors are plotted with respect to training
% epochs. Once the error reaches the goal, an adequate solution for W and B has
% been found. However, because the problem is underdetermined, this solution is
% not unique.

subplot(1,2,1);
plotperf(tr,net.trainParam.goal);


%%
% We can now test the associator with the original input, 1.0, and see whether
% it returns the target, 0.5. The result is very close to 0.5. The error can
% be reduced further, if required, by continued training with TRAIN using a
% smaller error goal (a sketch follows the call below).

p = 1.0;
a = sim(net,p)
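
%%
% (Added sketch.) If a smaller error is required, lower the goal and continue
% training from the current weights:

net.trainParam.goal = 1e-12;     % tighter performance goal
net.trainParam.epochs = 100;     % allow more than the single epoch used above
[net,tr] = train(net,P,T);
a = sim(net,p)                   % even closer to the target 0.5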
"></originalCode>
⌨️ 快捷键说明
复制代码
Ctrl + C
搜索代码
Ctrl + F
全屏模式
F11
切换主题
Ctrl + Shift + D
显示快捷键
?
增大字号
Ctrl + =
减小字号
Ctrl + -