function [alpha,theta,solution,minr,t,maxerr]=...
    ganders2(MI,SG,J,tmax,stopCond,t,alpha,theta)
% GANDERS2 algorithm solving the Generalized Anderson's task.
% [alpha,theta,solution,minr,t,maxerr]=...
%    ganders2(MI,SG,J,tmax,rdelta,t,alpha,theta)
%
% GANDERS2 is an implementation of the general algorithm framework that
% finds the optimal solution of the Generalized Anderson's task (GAT).
% This implementation is similar to GANDERS.M, but the part finding the
% so-called improving direction uses a different method (VF).
%
% The GAT solves the problem of finding a separating hyperplane
% (alpha'*x = theta) between two classes such that the found solution
% minimizes the probability of misclassification. Both classes are
% described in terms of the conditional probability density functions
% p(x|k), where x is an observation and k is the class identifier (1 for
% the first and 2 for the second class). The p(x|k) for both classes has
% normal distribution, but its genuine parameters are not known; only a
% finite set of possible parameters is known.
%
% The algorithm works iteratively until the change of the solution quality
% in two consecutive steps is less than the given limit rdelta (the
% quality is the radius of the smallest ellipsoid), or until the number of
% steps the algorithm performed exceeds the given limit tmax.
%
% Input:
% (Notation: K is the number of distributions, i.e. pairs {mi_i,sigma_i},
%  and N is the dimension of the feature space.)
%
% GANDERS2(MI,SIGMA,J,tmax,rdelta)
% MI [NxK] contains K column vectors of mean values MI=[mi_1,mi_2...mi_K],
% where mi_i is i-th column vector N-by-1.
% SIGMA [N,(K*N)] contains K covariance matrices,
% SIGMA=[sigma_1,sigma_2,...sigma_K], where sigma_i is i-th matrix N-by-N.
% J [1xK] is a vector of class labels J=[label_1,label_2,...,label_K], where
% label_i is an integer 1 or 2 according to which class the pair
% {mi_i,sigma_i} describes.
% tmax [1x1] is the maximal number of steps of the algorithm. Default is inf
% (which excludes this stop condition).
% rdelta [1x1] is a positive real number (0 excludes this stop condition)
% which determines the stop condition: when the change of the solution
% quality (radius of the smallest ellipsoid) between two consecutive steps
% is less than rdelta, the algorithm exits. The fifth argument can also be
% given as a 1x2 vector [rdelta,stopT], in which case the change of the
% quality is measured over the last stopT steps (default stopT is 1).
%
% GANDERS2(MI,SIGMA,J,tmax,rdelta,t,alpha,theta) begins from the state
% given by
% t [1x1] the beginning step number.
% alpha [Nx1], theta [1x1] are state variables of the algorithm.
%
% Returns:
% alpha [Nx1] is the normal vector of the found separating hyperplane.
% theta [1x1] is the threshold of the separating hyperplane (alpha'*x=theta).
% solution [1x1] is equal to -1 if a solution does not exist,
% is equal to 0 if a solution is not found,
% is equal to 1 if a solution is found.
% minr [1x1] is the radius of the smallest ellipsoid corresponding to the
% quality of the found solution.
% t [1x1] is number of steps the algorithm performed.
% maxerr [1x1] is the probability of bad classification, computed as
% 1 - normcdf(minr).
%
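% Example:
% A minimal usage sketch (the data below are hypothetical and chosen only
% to illustrate the expected shapes of the arguments; two distributions
% per class in a 2-dimensional feature space):
%
%   MI = [ [0;0], [1;0], [5;4], [4;5] ];       % four mean vectors (N=2, K=4)
%   SG = [ eye(2), eye(2), eye(2), eye(2) ];   % four 2x2 covariance matrices
%   J  = [ 1, 1, 2, 2 ];                       % class label of each pair
%   [alpha,theta,solution,minr,t,maxerr] = ganders2(MI,SG,J,100,1e-6);
%
% The algorithm can also be warm-started from a previously returned state:
%
%   [alpha,theta,solution,minr,t,maxerr] = ...
%      ganders2(MI,SG,J,100,1e-6,t,alpha,theta);
%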
% See also OANDERS, EANDERS, GGANDERS, GANDERS
%
% Statistical Pattern Recognition Toolbox, Vojtech Franc, Vaclav Hlavac
% (c) Czech Technical University Prague, http://cmp.felk.cvut.cz
% Written Vojtech Franc (diploma thesis) 04.11.1999, 6.5.2000
% Modifications
% 24. 6.00 V. Hlavac, comments polished.
% 18. 7.00 V. Franc, comments changed.

MINEPS_TMAX=1e2;   % max # of iterations in the epsilon minimization

% default arguments setting
if nargin < 3,
   error('Not enough input arguments.');
end
if nargin < 4,
   tmax = inf;
end

%%% get stop condition: stopCond = rdelta or [rdelta,stopT]
if nargin < 5,
   stopCond = 0;
end
if length(stopCond)==2,
   stopT=stopCond(2);
else
   stopT=1;
end
deltaR=stopCond(1);

if nargin < 6,
   t=0;
end

% perform transformation
if nargin < 8,
   [alpha,MI,SG]=ctransf(0,0,MI,J,SG);
else
   [alpha,MI,SG]=ctransf(alpha,theta,MI,J,SG);
end

% get dimension N and number of distributions
N=size(MI,1);
K=size(MI,2);

% STEP (1)
if t==0,
   [alpha,alphaexists]=csvm(MI);

   if alphaexists==1,
      t=1;
   else
      % no feasible solution can be found, so exit the algorithm
      alpha=zeros(N,1);
      theta=0;
      minr=0;
      solution=-1;
      return;
   end
   tmax=tmax-1;
end

% STEP (2.1)
% find the minimal radius of all the ellipsoids
[minrs,minri]=min( (alpha'*MI)./sqrt( reshape(alpha'*SG,N,K)'*alpha )' );
minr=minrs(1);

lastminr=minr;

queueMinR=[];  % queue of the best solutions, stopT steps backward
t0=0;          % counter of iterations performed in this call of the function;
               % t is the whole number of iterations, i.e. t(end_fce)=t(begin_fce)+t0

% iteration cycle
solution=0;
while solution==0 & tmax > 0 & t~=0,

   tmax = tmax-1;

   % STEP (2.2)
   % compute contact points for each distribution
   Y0=zeros(N,K);
   for i=1:K,
      x0(:,i)=MI(:,i)-(minr/sqrt(alpha'*SG(:,(i-1)*N+1:i*N)*alpha))*SG(:,(i-1)*N+1:i*N)*alpha;
      Y0(:,i)=x0(:,i);
   end

   % find direction delta_alpha in which the error decreases
   [dalpha,dalphaexists]=csvm(Y0);

   if dalphaexists == 1,
      % STEP (3)
      % find the optimal scalar t such that
      % t = arg min max epsilon(alpha+t*dalpha, MI, SG)
      % [alpha]=mineps(MI,SG,alpha,dalpha,tmax,delta)
      alpha=mineps(MI,SG,alpha,dalpha,MINEPS_TMAX,0);
%      alpha=minepsvl(MI,SG,alpha,dalpha,MINEPS_TMAX,0);
%      alpha=minepsrt(MI,SG,alpha,dalpha);

      % STEP (2.1)
      % find the minimal radius of all the ellipsoids
      [minrs,minri]=min( (alpha'*MI)./sqrt( reshape(alpha'*SG,N,K)'*alpha )' );
      minr=minrs(1);
   end

   %% increase iteration counter
   t=t+1;
   t0=t0+1;

   %%%% Stop criterion %%%%%%%%%%%%%%%%
   if dalphaexists == 0,
      % no improving direction exists, the current solution is optimal
      solution=1;
      t=t-1;
   else
      %%% the stop condition
      if t0 > stopT,
         if (minr-queueMinR(1)) < deltaR,
            solution=1;
            t=t-1;
         end

         % a queue of the best solutions
         queueMinR=[queueMinR(2:end),minr];
      else
         queueMinR=[queueMinR,minr];
      end
   end

   % store the old value of minr
   lastminr=minr;
end

% inverse transformation
[alpha,theta]=ictransf(alpha);

% probability of classification error
maxerr=1-cdf('norm',minr,0,1);
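
% The quality measure used above can also be evaluated directly from its
% definition: for each pair {mi_i,sigma_i} the signed distance of the mean
% from the hyperplane, measured in the metric of sigma_i, is
%   r_i = s_i*(alpha'*mi_i - theta)/sqrt(alpha'*sigma_i*alpha),
% where s_i is +1 for class 1 and -1 for class 2, and the error bound is
% 1 - normcdf(min_i r_i), which matches the last line of the main function.
% The subfunction below is an illustrative sketch of this check; it is not
% part of the original toolbox, and the name gatquality and its interface
% are hypothetical.

function [minr,maxerr]=gatquality(MI,SG,J,alpha,theta)
% GATQUALITY illustrative sketch: quality of a candidate hyperplane.
%  [minr,maxerr]=gatquality(MI,SG,J,alpha,theta)
N=size(MI,1);
K=size(MI,2);
r=zeros(1,K);
for i=1:K,
   sigma=SG(:,(i-1)*N+1:i*N);     % i-th covariance matrix
   s=3-2*J(i);                    % +1 for class 1, -1 for class 2
   r(i)=s*(alpha'*MI(:,i)-theta)/sqrt(alpha'*sigma*alpha);
end
minr=min(r);                      % radius of the smallest ellipsoid
maxerr=1-cdf('norm',minr,0,1);    % probability of misclassification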