
📄 demorb1.html

📁 Example program of a neural network learning process
💻 MATLAB (published from an m-file)
Radial Basis Approximation

This demo uses the NEWRB function to create a radial basis network that approximates a function defined by a set of data points.

Copyright 1992-2002 The MathWorks, Inc.
$Revision: 1.14 $  $Date: 2002/03/29 19:36:06 $

Define 21 inputs P and associated targets T.

    P = -1:.1:1;
    T = [-.9602 -.5770 -.0729  .3771  .6405  .6600  .4609 ...
          .1336 -.2013 -.4344 -.5000 -.3930 -.1647  .0988 ...
          .3072  .3960  .3449  .1816 -.0312 -.2189 -.3201];
    plot(P,T,'+');
    title('Training Vectors');
    xlabel('Input Vector P');
    ylabel('Target Vector T');

[Figure: Training Vectors, Target Vector T plotted against Input Vector P]

We would like to find a function that fits these 21 data points. One way to do this is with a radial basis network. A radial basis network is a network with two layers: a hidden layer of radial basis neurons and an output layer of linear neurons. Here is the radial basis transfer function used by the hidden layer.

    p = -3:.1:3;
    a = radbas(p);
    plot(p,a)
    title('Radial Basis Transfer Function');
    xlabel('Input p');
    ylabel('Output a');

[Figure: Radial Basis Transfer Function, Output a versus Input p]
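The transfer function plotted above is just a Gaussian: radbas(n) returns exp(-n^2), so it peaks at 1 when its input is 0 and decays toward 0 as the input moves away in either direction. A minimal sanity check, added here and not part of the original demo, assuming the standard toolbox definition:

    % Check radbas against the explicit Gaussian formula exp(-n^2).
    n = -3:.1:3;
    maxdiff = max(abs(radbas(n) - exp(-n.^2)));
    fprintf('Largest |radbas(n) - exp(-n.^2)|: %g\n', maxdiff);  % expect 0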
The weights and biases of each neuron in the hidden layer define the position and width of a radial basis function. Each linear output neuron forms a weighted sum of these radial basis functions. With the correct weight and bias values for each layer, and enough hidden neurons, a radial basis network can fit any function with any desired accuracy. Here is an example in which three radial basis functions (in blue) are scaled and summed to produce a function (in magenta).

    a2 = radbas(p-1.5);
    a3 = radbas(p+2);
    a4 = a + a2*1 + a3*0.5;
    plot(p,a,'b-',p,a2,'b--',p,a3,'b--',p,a4,'m-')
    title('Weighted Sum of Radial Basis Transfer Functions');
    xlabel('Input p');
    ylabel('Output a');

[Figure: Weighted Sum of Radial Basis Transfer Functions]

The function NEWRB quickly creates a radial basis network that approximates the function defined by P and T. In addition to the training set and targets, NEWRB takes two arguments: the sum-squared error goal and the spread constant.

    eg = 0.02; % sum-squared error goal
    sc = 1;    % spread constant
    net = newrb(P,T,eg,sc);

Output:

    NEWRB, neurons = 0, SSE = 3.69051
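NEWRB adds radial basis neurons to the hidden layer one at a time until the sum-squared error falls below the goal, so the final hidden-layer size depends on the data and the goal. A sketch for inspecting the trained network, added here as an aside; it assumes the standard network-object fields and the documented NEWRB behavior of setting every hidden-layer bias to 0.8326/spread:

    % Inspect the trained network (assumes standard network object fields).
    fprintf('Hidden neurons added: %d\n', net.layers{1}.size);
    % newrb sets each hidden-layer bias to 0.8326/sc, which makes each
    % basis function fall to 0.5 at a distance of sc from its center.
    disp(net.b{1}(1));  % expect 0.8326 when sc = 1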
To see how the network performs, replot the training set. Then simulate the network response for inputs over the same range. Finally, plot the results on the same graph.

    plot(P,T,'+');
    xlabel('Input');
    X = -1:.01:1;
    Y = sim(net,X);
    hold on;
    plot(X,Y);
    hold off;
    legend({'Target','Output'})

[Figure: network output plotted over the training vectors]
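Because the goal passed to NEWRB was eg = 0.02, the fit can be confirmed by recomputing the sum-squared error at the 21 training points. This is an added check, plain MATLAB apart from the sim call:

    % Verify that the trained network meets the sum-squared error goal.
    E = T - sim(net,P);                 % errors at the training points
    fprintf('Training SSE: %g (goal %g)\n', sum(E.^2), eg);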

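One closing note on the spread constant: sc sets how wide each basis function is relative to the spacing of the inputs. A spread that is too small produces a spiky, overfit curve between the data points, while one that is too large makes the basis functions too flat to follow the target's shape. A hedged experiment one could append to the demo (the spread values 0.01 and 100 are illustrative choices, not from the original):

    % Refit with extreme spreads to see the effect on smoothness.
    net_small = newrb(P,T,eg,0.01);     % very narrow basis functions
    net_large = newrb(P,T,eg,100);      % very wide basis functions
    X = -1:.01:1;
    plot(P,T,'+', X,sim(net_small,X),'--', X,sim(net_large,X),':');
    legend({'Target','sc = 0.01','sc = 100'});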