ptos_psc.m
function [FAMbank,msum,y2,m2e,m2u,M,mdens]=ptos_psc(esec,usec,samples,N,p,wii,wij,c,rands)
% [FAMbank,msum,y2,m2e,m2u,M,mdens]=ptos_psc(esec,usec,samples,N,p,wii,wij,c,rands)
%
% Product Space Clustering (PSC) to generate FAM rules, using an adaptive
% vector quantisation (AVQ) system, realised by an autoassociative two-
% layer feedforward neural network.
%
% esec and usec are matrices with pairs (one per row) describing the
% intervals into which the spaces of the variables e and u are divided.
% samples is a matrix containing all training sample vectors.
% FAMbank is the FAM bank estimated from the training data.
% msum is a matrix of the same size as FAMbank; each entry indicates how
% many trained synaptic vectors fell in the corresponding cell.
%
% FSTB - Fuzzy Systems Toolbox for MATLAB
% Copyright (c) 1993-1996 by Olaf Wolkenhauer
% Control Systems Centre at UMIST
% Manchester M60 1QD, UK
%
% 21-May-1994

% Product space clustering
%
% Product space clustering is a form of stochastic adaptive vector
% quantisation. Adaptive vector quantisation (AVQ) systems adaptively
% quantise pattern clusters in R^n and thus estimate clusters. Stochastic
% competitive learning systems are AVQ systems.

% N  = number of nearest synaptic vectors (or "winners").
% p  = number of synaptic vectors mj defining the p columns of the synaptic
%      connection matrix M. p is also the number of competing nonlinear
%      neurons in the output field Fy.
% n  = number of inputs, i.e. linear neurons in the input neuronal field Fx.
% M  = synaptic connection matrix interconnecting the input field with the
%      output field.
% x  = matrix storing all training sample vectors, one vector per row.
%      In this function the variable is called 'samples'.
% W  = p x p matrix containing the Fy within-field synaptic connection
%      strengths. Diagonal elements wii are positive, off-diagonal
%      elements negative.
% yt = Fy neuronal activations.
%      yt equals y(t) and yt1 equals y(t+1).

% Competitive AVQ algorithm
%
% The AVQ system compares the current random sample vector x(t) in
% Euclidean distance to the p columns of the matrix M.
% 1.) Initialise the synaptic vectors: mi(0)=x(i), i=1,...,p.
% 2.) For the random sample x(t), find the closest or "winning" synaptic
%     vector mj(t) (nearest-neighbour classification). Define the N
%     synaptic vectors closest to x as "winners".
% 3.) Update the winning synaptic vector(s) mj(t) with an appropriate
%     learning algorithm; here the DCL algorithm.

FAMbank=zeros(1,length(esec));
% msum contains the number of quantisation vectors in each resulting rule cell:
msum=FAMbank;
% Density of m vectors in the cells:
mdens=zeros(length(usec),length(esec));
[nu_of_v,n]=size(samples); % nu_of_v equals the number of sample vectors.
nu_of_cells=length(esec)*length(usec);
% Observing yt(2), M(2,1) and M(2,2):
y2=zeros(1,nu_of_v);
m2e=zeros(1,nu_of_v);
m2u=m2e;
M=zeros(n,p);
%wii=1;  Positive diagonal elements - winning neurons excite themselves
%wij=-1; and inhibit all other neurons.
%        => Off-diagonal elements negative.
W=wij.*ones(p,p)+(wii+abs(wij)).*eye(p,p);
yt=zeros(1,p); yt1=yt;
% Initialisation:
% (Sample dependence avoids pathologies disturbing the nearest-neighbour
% learning.)
for i=1:p, M(:,i)=samples(i,:)'; end;
% rands=randperm(nu_of_v);
eunorm=ones(1,p); % For measuring the distance to a sample vector.
%figure
%axis([min(ESET) max(ESET) min(OSET) max(OSET)]); grid on;
%xlabel('e'); ylabel('u');
%title('distribution of quantization vectors mj(t)');
%hold on
runtime=clock;
for t=1:nu_of_v,
  % For the random sample x(t), find the closest synaptic vector mj(t):
  sample=samples(rands(t),:);
  for i=1:p,
    eunorm(i)=norm(M(:,i)-sample');
  end;
  [winnorm,col_in_M]=sort(eunorm);
  % Update the Fy neuronal activations yi with an additive model.
  % Nonnegative signal function approximating a steep binary logistic
  % sigmoid, with some constant c>0:
  %c=1;
  Syt=1./(1+exp(-c.*yt));
  yt1=yt+(sample*M)+Syt*W;
  y2(t)=yt1(2); % For an observation of yt1(2).
  % Update the winning synaptic vector(s) mj(t) with the differential
  % competitive learning (DCL) algorithm. A harmonic-series coefficient
  % for a decreasing gain sequence over the learning period would be
  % ct=1/t.
  ct=0.1*(1-(t/nu_of_v));
  for no_of_winner=1:N, % N winners mj(t) nearest to x(t)
    j=col_in_M(no_of_winner);
    % Activation difference as an approximation of the competitive signal
    % difference DSj (DCL learning):
    %Dyjt=sign(yt1(j)-yt(j));
    %M(:,j)=M(:,j)+ct*Dyjt.*(sample'-M(:,j));
    % Using the UCL learning algorithm:
    M(:,j)=M(:,j)+ct*(sample'-M(:,j));
  end; % of winning synaptic vector loop.
  yt=yt1;
  %plot(M(1,:),M(2,:),'.','EraseMode','none');
  m2e(t)=M(1,2);
  m2u(t)=M(2,2);
end; % of outer t loop over the training period.
runtime=etime(clock,runtime)

% Building the FAMbank:
% Add a FAM rule if necessary (i.e. if mj(t) fell in a FAM cell).
% (Non-overlapping sectors.)
for j=1:p,
  for co=length(esec):-1:1, % starting with PB, going down to NB
    if M(1,j)>=esec(co,1) & M(1,j)<=esec(co,2), es=co; break;
    else es=0; end;
  end;
  for co=length(usec):-1:1, % starting with PB, going down to NB
    if M(2,j)>=usec(co,1) & M(2,j)<=usec(co,2), us=co; break;
    else us=0; end;
  end;
  if es>0 & us>0, % Synaptic vector fell in a cell.
    % Increment the number of vectors in that cell:
    mdens(us,es)=mdens(us,es)+1;
  end;
end; % of testing whether synaptic vectors fit in a cell.

% Finding the cell with the most densely clustered data:
% (For other ways to decide which consequent set is used, see for example
% Kosko, Pacini: 'Adaptive Fuzzy Systems for Target Tracking' in [2].)
secvec=zeros(1,length(usec)); % For the cell specified by e, this vector
% contains the number of quantisation vectors in each cell corresponding
% to all consequences of u.
for es=1:length(esec),
  for us=1:length(usec),
    secvec(us)=mdens(us,es);
  end;
  if ~isempty(find(secvec)),
    [msum(es),FAMbank(es)]=max(secvec);
  else
    msum(es)=0; FAMbank(es)=0;
  end;
end;
% mdens is a length(usec) x length(esec) matrix.
% After building the FAMbank, the matrix msum contains the number of
% quantisation vectors in each resulting rule cell.
% The index of the consequence cell with the maximal number of quantisation
% vectors corresponds to a linguistic term. All sets in the fuzzy toolbox
% are ordered from left to right, from most negative to most positive.
% E.g. with five linguistic terms "NB" "NS" "NZ" "PS" "PB", a 1 stands for
% "NB" ("Negative Big").
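The procedure documented in the comments above — initialise mi(0)=x(i), find the nearest synaptic vector(s) for each random sample, apply the learning update with a decreasing gain ct, then count vectors per product-space cell and pick the densest u-consequent for each e-antecedent — can be sketched outside MATLAB as well. The NumPy translation below uses the simpler UCL update (as the function itself does, with the DCL variant commented out); `avq_psc` and its parameter names are illustrative, not part of the FSTB toolbox:

```python
import numpy as np

def avq_psc(samples, esec, usec, n_winners=1, p=10, lr0=0.1, seed=0):
    """Product-space clustering by competitive AVQ with a UCL update.

    samples    : (T, n) array of training vectors [e, u].
    esec, usec : (k, 2) arrays of [lo, hi] interval bounds partitioning
                 the e and u axes into non-overlapping sectors.
    Returns (fam_bank, msum, mdens); fam_bank[es] is the 1-based index of
    the winning u-consequent for e-cell es (0 if the cell is empty).
    """
    rng = np.random.default_rng(seed)
    T, n = samples.shape
    # 1) Initialise synaptic vectors with the first p samples
    #    (sample dependence avoids nearest-neighbour pathologies).
    M = samples[:p].T.copy()                      # shape (n, p)
    for t, idx in enumerate(rng.permutation(T)):
        x = samples[idx]
        # 2) Rank synaptic vectors by Euclidean distance to x(t).
        dists = np.linalg.norm(M - x[:, None], axis=0)
        winners = np.argsort(dists)[:n_winners]
        # 3) UCL update with a linearly decreasing gain sequence,
        #    mirroring ct = 0.1*(1 - t/nu_of_v) in the MATLAB code.
        ct = lr0 * (1.0 - t / T)
        M[:, winners] += ct * (x[:, None] - M[:, winners])
    # Count how many synaptic vectors fell in each (u, e) product-space cell.
    mdens = np.zeros((len(usec), len(esec)), dtype=int)
    for j in range(p):
        es = next((i for i, (lo, hi) in enumerate(esec)
                   if lo <= M[0, j] <= hi), -1)
        us = next((i for i, (lo, hi) in enumerate(usec)
                   if lo <= M[1, j] <= hi), -1)
        if es >= 0 and us >= 0:
            mdens[us, es] += 1
    # For each e-cell pick the u-consequent with the densest cluster.
    occupied = mdens.any(axis=0)
    fam_bank = np.where(occupied, mdens.argmax(axis=0) + 1, 0)
    msum = mdens.max(axis=0) * occupied
    return fam_bank, msum, mdens
```

With two tight sample clusters at (0.2, 0.2) and (0.8, 0.8) and the e and u axes each split at 0.5, the learned bank maps the low e-cell to the low u-consequent and the high e-cell to the high one, illustrating how the densest cell per antecedent column becomes the FAM rule.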