
demolin4.html

Example program of the neural network learning process
Linear Fit of Nonlinear Problem

A linear neuron is trained to find the minimum sum-squared error linear fit to a nonlinear input/output problem.

Copyright 1992-2002 The MathWorks, Inc.
$Revision: 1.16 $  $Date: 2002/03/29 19:36:17 $

P defines four 1-element input patterns (column vectors). T defines the associated 1-element targets (column vectors). Note that the relationship between the values in P and in T is nonlinear; that is, no W and B exist such that P*W+B = T for all four pairs of P and T values below. A quick check of this claim follows the code.

P = [+1.0 +1.5 +3.0 -1.2];
T = [+0.5 +1.1 +3.0 -1.0];
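As a sanity check (an illustrative addition, not part of the original demo), fitting the best least-squares line through these four points with POLYFIT still leaves nonzero residuals, confirming that no exact linear solution exists:

% Illustrative check, not from the demo: fit the least-squares line
% T ~ w*P + b and inspect the residual error.
c = polyfit(P,T,1);           % c(1) plays the role of W, c(2) of B
residual = T - polyval(c,P)   % nonzero residuals: P*W+B = T has no exact solution
sse = sum(residual.^2)        % so the minimum sum-squared error is > 0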
ERRSURF calculates the error for a neuron over a range of possible weight and bias values. PLOTES plots this error surface with a contour plot underneath.

The best weight and bias values are those that result in the lowest point on the error surface. Note that because a perfect linear fit is not possible, the minimum error is greater than 0.

w_range = -2:0.4:2;
b_range = -2:0.4:2;
ES = errsurf(P,T,w_range,b_range,'purelin');
plotes(w_range,b_range,ES);

[Figure demolin4_img03.gif: error surface with contour plot underneath]

MAXLINLR finds the fastest stable learning rate for training a linear network. NEWLIN creates a linear neuron. NEWLIN takes these arguments: 1) an Rx2 matrix of min and max values for the R input elements, 2) the number of elements in the output vector, 3) the input delay vector, and 4) the learning rate.

maxlr = maxlinlr(P,'bias');
net = newlin([-2 2],1,[0],maxlr);
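As an aside (an illustrative addition, not shown in the demo), you can inspect the untrained network's initial weight and bias before training begins; these are the values training will adjust:

% Illustrative, not from the demo: the single linear neuron's
% starting parameters.
w0 = net.IW{1,1}   % initial input weight
b0 = net.b{1}      % initial bias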
Override the default training parameters by setting the maximum number of epochs. This ensures that training will stop.

net.trainParam.epochs = 15;

To show the path of the training, we train only one epoch at a time and call PLOTEP every epoch (code not shown here). The plot shows a history of the training. Each dot represents an epoch, and the blue lines show each change made by the learning rule (Widrow-Hoff by default); a sketch of a single Widrow-Hoff step follows the loop below.

% [net,tr] = train(net,P,T);
net.trainParam.epochs = 1;
net.trainParam.show = NaN;
h = plotep(net.IW{1},net.b{1},mse(T-sim(net,P)));
[net,tr] = train(net,P,T);
r = tr;
epoch = 1;
while epoch < 15
   epoch = epoch+1;
   [net,tr] = train(net,P,T);
   if length(tr.epoch) > 1
      h = plotep(net.IW{1,1},net.b{1},tr.perf(2),h);
      r.epoch = [r.epoch epoch];
      r.perf = [r.perf tr.perf(2)];
      r.vperf = [r.vperf NaN];
      r.tperf = [r.tperf NaN];
   else
      break
   end
end
tr = r;

[Figure demolin4_img06.gif: training path of the weight and bias over the error surface]
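For reference, here is a minimal sketch of what one Widrow-Hoff (LMS) step does for this single neuron. It is an illustration of the rule that TRAIN applies internally (via LEARNWH), not code from the demo; it assumes P, T, maxlr, and net from above:

% Minimal Widrow-Hoff (LMS) sketch for one input/target pair.
% Purely illustrative of the update rule used during training.
p = P(1);  t = T(1);
w = net.IW{1,1};  b = net.b{1};
e = t - (w*p + b);       % error of the current linear output
w = w + maxlr*e*p;       % move the weight down the error gradient
b = b + maxlr*e;         % move the bias down the error gradient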
The train function outputs the trained network and a history of the training performance (tr). Here the errors are plotted with respect to training epochs.

Note that the error never reaches 0. This problem is nonlinear, and therefore a zero-error linear solution is not possible.

plotperf(tr,net.trainParam.goal);

[Figure demolin4_img07.gif: training error versus epochs]

Now use SIM to test the associator with one of the original inputs, -1.2, and see if it returns the target, -1.0.

The result is not very close to -1.0! This is because the network is the best linear fit to a nonlinear problem.

p = -1.2;
a = sim(net, p)

a =
   -1.1803
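To see the compromise across all four patterns at once, you can simulate the whole input set and compare against the targets (a small illustrative extension, not in the original demo):

% Illustrative extension: outputs for every original input.
A = sim(net,P)    % network outputs for all four inputs
E = T - A         % residual errors of the best linear fit: all nonzero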
