
📄 demopnn1.html

📁 Example program of a neural network learning process
💻 HTML
PNN Classification

This demonstration uses functions NEWPNN and SIM.

Copyright 1992-2002 The MathWorks, Inc.
$Revision: 1.9 $  $Date: 2002/03/29 19:36:07 $

Here are three two-element input vectors P and their associated classes Tc. We would like to create a probabilistic neural network that classifies these vectors properly.

    P = [1 2; 2 2; 1 1]';
    Tc = [1 2 3];
    plot(P(1,:),P(2,:),'.','markersize',30)
    for i=1:3, text(P(1,i)+0.1,P(2,i),sprintf('class %g',Tc(i))), end
    axis([0 3 0 3])
    title('Three vectors and their classes.')
    xlabel('P(1,:)')
    ylabel('P(2,:)')

[Figure demopnn1_img02.gif: Three vectors and their classes.]

First we convert the target class indices Tc to vectors T. Then we design a probabilistic neural network with NEWPNN. We use a SPREAD value of 1 because that is a typical distance between the input vectors.

    T = ind2vec(Tc);
    spread = 1;
    net = newpnn(P,T,spread);
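ind2vec and vec2ind are a matched pair of conversions: ind2vec turns a row of class indices into a sparse matrix with one indicator column per sample, and vec2ind maps such columns back to indices. A minimal illustration of that round trip for this Tc:

    full(ind2vec(Tc))     % dense view of T: the 3x3 identity, one column per sample
    vec2ind(ind2vec(Tc))  % the round trip recovers the original indices [1 2 3]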
Now we test the network on the design input vectors. We do this by simulating the network and converting its vector outputs to indices.

    A = sim(net,P);
    Ac = vec2ind(A);
    plot(P(1,:),P(2,:),'.','markersize',30)
    axis([0 3 0 3])
    for i=1:3, text(P(1,i)+0.1,P(2,i),sprintf('class %g',Ac(i))), end
    title('Testing the network.')
    xlabel('P(1,:)')
    ylabel('P(2,:)')

[Figure demopnn1_img04.gif: Testing the network.]
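Since each class here is represented by a single design vector, each design input's own first-layer neuron responds with 1 (zero distance) while the competing classes respond more weakly, so Ac should reproduce Tc exactly. A minimal check of that claim:

    isequal(Ac,Tc)              % expected to be true: every design vector is recovered
    accuracy = mean(Ac == Tc)   % fraction of correctly reclassified design vectors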
Let's classify a new vector with our network.

    p = [2; 1.5];
    a = sim(net,p);
    ac = vec2ind(a);
    hold on
    plot(p(1),p(2),'.','markersize',30,'color',[1 0 0])
    text(p(1)+0.1,p(2),sprintf('class %g',ac))
    hold off
    title('Classifying a new vector.')
    xlabel('P(1,:) and p(1)')
    ylabel('P(2,:) and p(2)')

[Figure demopnn1_img05.gif: Classifying a new vector.]

This diagram shows how the probabilistic neural network divides the input space into the three classes.

    p1 = 0:.05:3;
    p2 = p1;
    [P1,P2] = meshgrid(p1,p2);
    pp = [P1(:) P2(:)]';
    aa = sim(net,pp);
    aa = full(aa);
    m = mesh(P1,P2,reshape(aa(1,:),length(p1),length(p2)));
    set(m,'facecolor',[0 0.5 1],'linestyle','none');
    hold on
    m = mesh(P1,P2,reshape(aa(2,:),length(p1),length(p2)));
    set(m,'facecolor',[0 1.0 0.5],'linestyle','none');
    m = mesh(P1,P2,reshape(aa(3,:),length(p1),length(p2)));
    set(m,'facecolor',[0.5 0 1],'linestyle','none');
    plot3(P(1,:),P(2,:),[1 1 1]+0.1,'.','markersize',30)
    plot3(p(1),p(2),1.1,'.','markersize',30,'color',[1 0 0])
    hold off
    view(2)
    title('The three classes.')
    xlabel('P(1,:) and p(1)')
    ylabel('P(2,:) and p(2)')

[Figure demopnn1_img06.gif: The three classes.]
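SPREAD sets the width of the first layer's radial basis functions, so it controls how far each design vector's influence extends over the input space. As a rough side experiment (a sketch only; the spread values 0.1 and 3 are arbitrary illustrative choices, and the grid pp, P1, P2 from the block above is assumed to still be in the workspace), we can redesign the network and watch the class regions change:

    % Redesign the network for a narrower and a wider SPREAD and plot the
    % winning class over the grid pp defined above. Smaller spreads make
    % each design vector's influence more local; larger spreads smooth it out.
    for s = [0.1 3]
        net_s = newpnn(P,T,s);
        class_s = vec2ind(sim(net_s,pp));             % winning class per grid point
        figure
        pcolor(P1,P2,reshape(class_s,length(p1),length(p2)))
        shading flat
        hold on
        plot(P(1,:),P(2,:),'k.','markersize',30)      % the three design vectors
        hold off
        title(sprintf('Class regions with spread = %g',s))
        xlabel('P(1,:)'), ylabel('P(2,:)')
    end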
