function [mi,sigma,solution,minp,topp,N,t]=mmln(X,epsilon,tmax,t,N)
% MMLN Minimax learning for Gaussian distribution.
% [mi,sigma,solution,minp,topp,N,t]=mmln(X,epsilon,tmax,t,N)
%
% MMLN implements the Minimax learning algorithm that estimates the
% parameters of a normally distributed p.d.f. with correlated
% features. The input of the algorithm is a set of training points
% describing the class whose parameters are estimated.
% The points in the training set should be representative of the
% class. Unlike ML estimation methods, the algorithm does not
% require the points to be selected independently.
%
% Input:
% (Notation: D is dimension of feature space
% K is the number of points in the training set )
% MMLN(X,epsilon,tmax)
% X [DxK] matrix containing K training points in the D-dimensional
% feature space, X=[x_1,x_2,...,x_K].
% epsilon [1x1] determines the desired accuracy of the solution.
% The algorithm works until the difference between the upper and
% lower limit of the optimal solution is less than epsilon.
% tmax [1x1] is the maximal number of steps the algorithm will
% perform. If tmax is exceeded, the algorithm stops.
%
% MMLN(X,epsilon,tmax,t,N) begins from the state determined by
% t [1x1] initial step number.
% N matrix which contains state variables in the step t.
%
% Output:
% mi [Dx1] vector of mean values of the found statistical model.
% sigma [DxD] covariance matrix of the found statistical model.
% solution [1x1] equals 1 if the found solution has the desired
% precision, otherwise it equals 0.
% minp [1x1] lower limit of the optimal value of the objective function.
% topp [1x1] upper limit of the optimal value of the objective function.
% N [matrix] contains state variables in step t.
% t [1x1] step number at which the algorithm stopped.
%
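% Example:
%  A minimal usage sketch (hypothetical data; assumes 2-D normally
%  distributed training points):
%    X = randn(2,100);
%    [mi,sigma,solution] = mmln(X,1e-3,500);
%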
% See also MMDEMO, UNSUPER.
%
% Statistical Pattern Recognition Toolbox, Vojtech Franc, Vaclav Hlavac
% (c) Czech Technical University Prague, http://cmp.felk.cvut.cz
% Written Vojtech Franc (diploma thesis) 10.12.1999
% Modifications
% 24. 6.00 V. Hlavac, comments polished.
% default setting
if nargin < 2,
error('Not enough input arguments');
end
if nargin < 3,
tmax = inf; % iterate until a solution is found
end
if nargin < 4,
t=0;
end
% gets # of samples and dimension
K=size(X,2);
DIM=size(X,1);
% STEP (1)
if t==0,
N=ones(1,K);
end
% main cycle
solution = 0;
while solution == 0 & tmax > 0,
tmax = tmax-1;
% STEP(2),STEP (3)
% computes maximal likelihood estimation for the normal distribution
% computes # of occurrences
sumN=sum(N);
% mean value estimation, mi = sum( n(x)*x )/sum(n(x))
mi=X*N'/sumN;
% covariance matrix estimation
sigma=zeros(DIM,DIM);
for i=1:DIM,
for j=i:DIM,
% computes COV( X(i,:),X(j,:) ) weighted by the occurrences N
sigma(i,j)=sum(N.*((X(i,:)-mi(i)).*(X(j,:)-mi(j))));
% matrix is symmetric
sigma(j,i)=sigma(i,j);
end
end
sigma=sigma/(sumN-1);
% STEP (4)
% check stop condition
% computes logarithm of probability dens. function
logpx=log(normald(X,mi,sigma));
% find a sample with the minimal probability
[minp,minpinx]=min(logpx);
% upper limit of the optimal value of the objective function
topp=sum(N.*logpx)/sumN;
% stop criterion
if topp-minp < epsilon,
solution=1;
else
% STEP (5)
% add another occurrence of the point with minimal probability
N(1,minpinx)=N(1,minpinx)+1;
t=t+1;
end
end