📄 mog_dd.m
%MOG_DD Mixture of Gaussians data description
%
%       W = MOG_DD(A,FRACREJ,N)
%
% Training of a mixture of Gaussians, with N Gaussians.
%
% The algorithm was inspired by NetLab (Bishop), but has been changed
% considerably. For a given number of Gaussians, the EM procedure
% optimizes the size and the placement of the Gaussians.
%
%       W = MOG_DD(A,FRACREJ,N,CTYPE)
%
% By setting CTYPE, the structure of the covariance matrices can be
% chosen. There are three possibilities:
%    CTYPE = 'sphr' : diagonal cov. matrix with equal values
%    CTYPE = 'diag' : diagonal cov. matrix
%    CTYPE = 'full' : full cov. matrix
%
% Required functions: mogEM and mogP
%
% Copyright: D.M.J. Tax, R.P.W. Duin, davidt@ph.tn.tudelft.nl
% Faculty of Applied Physics, Delft University of Technology
% P.O. Box 5046, 2600 GA Delft, The Netherlands

function W = mog_dd(a,fracrej,n,ctype,reg,numiters)

% Default parameter values:
if (nargin<6)
   numiters = 25;
end
if (nargin<5)
   reg = 0.1;
end
if (nargin<4)
   ctype = 'sphr';
end
if (nargin<3)
   n = 5;
end
if (nargin<2)
   fracrej = 0.05;
end
if (nargin<1) || isempty(a)
   % No data given: return an untrained mapping.
   W = mapping(mfilename,{fracrej,n,ctype,reg,numiters});
   W = setname(W,'Mixture of Gaussians');
   return
end

if isa(fracrej,'double')   % training
   a = +target_class(a);   % only use the target class
   [m,k] = size(a);

   % Train the mixture with EM:
   [means,covs,priors] = mogEM(a,n,ctype,reg,numiters);

   % Obtain the threshold on the training set:
   d = sum(mogP(a,means,covs,priors),2);
   thr = dd_threshold(d,fracrej);

   % and save all useful data:
   W.m = means;
   W.c = covs;
   W.p = priors;
   W.threshold = thr;
   W = mapping(mfilename,'trained',W,str2mat('target','outlier'),k,2);
   W = setname(W,'Mixture of Gaussians');
else                       % testing
   W = getdata(fracrej);   % unpack the trained mapping
   m = size(a,1);

   % Compute the mixture density and compare it with the stored threshold:
   out = sum(mogP(+a,W.m,W.c,W.p),2);
   newout = [out repmat(W.threshold,m,1)];

   W = setdat(a,newout,fracrej);
end
return
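
For reference, here is a minimal usage sketch of the mapping above. It assumes that PRTools and dd_tools are on the MATLAB path; gendatoc, target_class, and dd_error are dd_tools/PRTools helpers that are not defined in this file, and the data below are purely synthetic for illustration.

% Minimal usage sketch (assumption: PRTools and dd_tools on the path;
% gendatoc and dd_error are dd_tools helpers, data is synthetic).
xt = randn(100,2);             % target-class samples
xo = 3 + randn(40,2);          % outlier samples, used only for evaluation
a  = gendatoc(xt,xo);          % build a one-class dataset
w  = mog_dd(a,0.05,3,'full');  % train: 3 Gaussians, full covariances, 5% target rejection
e  = dd_error(a*w);            % [fraction of targets rejected, fraction of outliers accepted]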