# Parzen Probabilistic Neural Networks

Parzen probabilistic neural networks (PPNNs) are a simple type of neural network used to classify data vectors. These classifiers are based on Bayesian decision theory, where the a posteriori probability density function (posterior pdf) is estimated from the data using the Parzen window technique.

## Contents

- A brief overview of the theory of the Parzen window and PPNNs
- A first simple example with 2D data
- Adding samples to the network to increase the training

## A brief overview of the theory of the Parzen window and PPNNs

Bayesian classifiers use Bayes' rule,

$$P(\omega_i \mid x) = \frac{P(x \mid \omega_i)\,P(\omega_i)}{\sum_j P(x \mid \omega_j)\,P(\omega_j)},$$

to estimate the posterior pdf $P(\omega_i \mid x)$. Obviously, to be useful this method needs the probabilities $P(x \mid \omega_i)$ and $P(\omega_i)$: one technique is to assume a parametric form for these pdfs, another is to estimate them from the data. The Parzen window technique estimates a pdf by defining a window (with a given window size) and a kernel function on that window (for example, a hypersphere with a Gaussian truncated inside it); the estimate of the pdf is then computed by convolving the window function with the empirical sample distribution. This obviously requires that the window function have integral equal to 1 (unit hypervolume under the function) in order to preserve the scale of the estimated pdf. A PPNN is simply the composition of Parzen-window pdf estimation with Bayesian classification: a feature vector $x$ is assigned to the class $\omega_i$ for which $P(\omega_i \mid x)$ is maximum. The detailed derivations are not reported in this quick overview; they can be found in [1].
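To make the windowing idea concrete, here is a minimal sketch of a one-dimensional Parzen estimate with a Gaussian kernel; the function name `parzen1d` and the window width `h` are illustrative and are not part of the toolbox used in this demo.

```matlab
% Minimal 1-D Parzen-window density estimate (illustrative sketch; not
% part of the parzenPNN* toolbox used below). h is the window width.
function p = parzen1d(x, samples, h)
  n = numel(samples);
  % Gaussian kernel with unit integral, so the estimate is a valid pdf.
  k = exp(-(x - samples).^2 / (2*h^2)) / (sqrt(2*pi)*h);
  % Average the kernel contributions of all samples.
  p = sum(k) / n;
end
```

Evaluating `p` over a grid of `x` values reproduces the smoothed sample distribution; as more samples arrive, the estimate approaches the underlying pdf, provided `h` shrinks appropriately with the sample count.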
A PPNN is a two-layer neural network in which the input is fully connected to the first neuron layer, and the first layer is sparsely connected to the second (output) layer. The output layer is composed of c neurons, where c is the number of classes of the classifier. The weights of the first layer are trained as follows: each training sample is normalized to unit length, and each normalized sample becomes a neuron whose weight vector $w$ holds the normalized values. An input vector $x$ is dot-multiplied by the weights, giving the network activation signal $net = w^T x$. The exponential nonlinearity

$$a = e^{(net - 1)/\sigma^2}$$

is then computed to obtain the synaptic activation signal. During learning, each first-layer neuron is connected with weight 1 to the output-layer neuron of its class. During classification, the output neuron of each class sums the activation signals of all the first-layer neurons of that class; the highest output value simply selects the class of the input data.

     (w1)  (w2)  ...         OUTPUT
       \     \     \__
        |     |       \
       ( )   ( )     ( )     internal layer
       /|\   /|\     /|\
              INPUT

[1] "Pattern Classification", second edition, by Richard O. Duda, Peter E. Hart, and David G. Stork.
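Note that for unit-length vectors, $\lVert x - w\rVert^2 = 2 - 2\,w^T x$, so $(net - 1)/\sigma^2 = -\lVert x - w\rVert^2 / (2\sigma^2)$: the nonlinearity is exactly a Gaussian Parzen window centered on each stored sample. The following is a minimal sketch of the forward pass just described, not the actual implementation of `parzenPNNclassify`; the weight matrix `W`, the `labels` vector, and the function signature are assumptions made for illustration.

```matlab
% Sketch of the PPNN forward pass (illustrative; the demo itself uses
% parzenPNNclassify). W: d-by-n matrix of unit-length neuron weights,
% labels: 1-by-n class labels, x: d-by-1 input, sigma: window width.
function cls = ppnnForward(W, labels, x, sigma)
  x = x / norm(x);                    % normalize the input as well
  net = W' * x;                       % activation signals net = w'*x
  act = exp((net - 1) / sigma^2);     % exponential nonlinearity
  classes = unique(labels);
  score = zeros(numel(classes), 1);
  for i = 1:numel(classes)
    % Each output neuron sums the activations of its own class.
    score(i) = sum(act(labels == classes(i)));
  end
  [bestScore, best] = max(score);     % highest output selects the class
  cls = classes(best);
end
```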
## A first simple example with 2D data

In this simple example, three sets of points in the plane are selected in the region [1,100]×[1,100]. A PPNN is trained with these samples, and an image of the classification regions is then produced.

```matlab
% A training set for the classes 'a', 'b' and 'c' (picked interactively):
img = ones(100);
f = figure; imshow(img); [X,Y] = getpts; sa = [X,Y]'; close(f);
f = figure; imshow(img); [X,Y] = getpts; sb = [X,Y]'; close(f);
f = figure; imshow(img); [X,Y] = getpts; sc = [X,Y]'; close(f);
% The samples matrix:
S = [sa,sb,sc];
% The classification vector:
C = [repmat('a',[1,size(sa,2)]), repmat('b',[1,size(sb,2)]), repmat('c',[1,size(sc,2)])];
% Generating the network:
net = parzenPNNlearn(S,C);
% Generating the whole grid:
[X,Y] = meshgrid(1:100,1:100);
D = [X(:),Y(:)]';
% Classification of all points:
class = parzenPNNclassify(net,D);
class = reshape(class,[100,100]);
% Plotting ('b' regions black, 'a' mid-gray, 'c' white):
sep = double(class=='a') + 2*double(class=='c');
figure; imshow(sep/2); hold on;
plot(sa(1,:),sa(2,:),'r.');
plot(sb(1,:),sb(2,:),'g.');
plot(sc(1,:),sc(2,:),'b.');
```

![Classification regions after the initial training](demo_01.png)

## Adding samples to the network to increase the training

A PPNN can be improved simply by adding new samples to it: the new samples generate new neurons in the internal layer, which can therefore grow. Here is an example of improving the previously generated network.

```matlab
% Getting new samples (the old ones are displayed for reference):
f = figure; imshow(img); hold on; plot(sa(1,:),sa(2,:),'r.'); [X,Y] = getpts; nsa = [X,Y]'; close(f);
f = figure; imshow(img); hold on; plot(sb(1,:),sb(2,:),'g.'); [X,Y] = getpts; nsb = [X,Y]'; close(f);
f = figure; imshow(img); hold on; plot(sc(1,:),sc(2,:),'b.'); [X,Y] = getpts; nsc = [X,Y]'; close(f);
Sa = [sa,nsa];
Sb = [sb,nsb];
Sc = [sc,nsc];
% The new samples matrix:
nS = [nsa,nsb,nsc];
% The new classification vector:
nC = [repmat('a',[1,size(nsa,2)]), repmat('b',[1,size(nsb,2)]), repmat('c',[1,size(nsc,2)])];
% Improving the network:
net = parzenPNNimprove(net,nS,nC);
% Classification of all points:
class = parzenPNNclassify(net,D);
class = reshape(class,[100,100]);
% Plotting:
sep = double(class=='a') + 2*double(class=='c');
figure; imshow(sep/2); hold on;
plot(Sa(1,:),Sa(2,:),'r.');
plot(Sb(1,:),Sb(2,:),'g.');
plot(Sc(1,:),Sc(2,:),'b.');
```

![Classification regions after adding the new samples](demo_02.png)
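Conceptually, `parzenPNNimprove` only has to append the new normalized samples as extra internal-layer neurons. The sketch below shows that idea under the assumption of a network struct with fields `W` (stored weights) and `labels`; the actual fields of the struct returned by `parzenPNNlearn` may differ.

```matlab
% Conceptual sketch of incremental training (illustrative; the real
% net struct produced by parzenPNNlearn may be organized differently).
function net = ppnnImproveSketch(net, nS, nC)
  % Normalize each new sample column to unit length, as in learning.
  nW = nS ./ repmat(sqrt(sum(nS.^2, 1)), [size(nS,1), 1]);
  net.W = [net.W, nW];             % new neurons join the internal layer
  net.labels = [net.labels, nC];   % each keeps a weight-1 class link
end
```

Because training is just storage, no retraining pass over the old samples is needed; this is what makes PPNNs trivially incremental.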