
📄 ka.m

📁 This toolbox is for statistical pattern recognition.
💻 MATLAB
function [Alpha, bias, margin, t, flps] = ka(X,I,ker,arg,C,tmax,epsilon,Ni,mi)
% KA Kernel-Adatron algorithm solving the SVM (L1) problem.
% [Alpha,bias,margin,t,flps]=ka(X,I,ker,arg,C,tmax,epsilon,Ni,mi)
%
% The Kernel-Adatron (Campbell) solves the Support Vector problem
% with linear penalization of classification violations
% using stochastic gradient ascent.
%
% Mandatory inputs:
%  X [NxL] L training patterns in N-dimensional input space.
%  I [1xL] pattern labels, 1 for the 1st, 2 for the 2nd class.
%  ker [string] kernel identifier, see help kernel.
%  arg [...] kernel arguments, see help kernel.
%  C [real] trade-off between margin and training error.
%     Default is inf.
%
% Optional inputs:
%  tmax [uint] maximal number of iterations. Default is inf.
%  epsilon [real] stop the algorithm if abs(1 - margin) < epsilon.
%     Default is 1e-3.
%  Ni [real] multiplier of the gradient ascent. Default is
%      Ni(i) = 1/kernel(X(:,i),X(:,i),ker,arg).
%  mi [real] initial value of the threshold. Default is 0.1.
%
% Outputs:
%  Alpha [1xL] weights of the patterns X (Lagrange multipliers).
%  bias [real] bias of the decision function (hyperplane).
%  margin [real] margin in the scaled (canonical) space; the
%      optimum is margin = 1.
%  t [uint] number of iterations.
%  flps [uint] number of floating point operations used.
%
% See also SVMCLASS, SVMLEARN, KERNEL.
%
% Statistical Pattern Recognition Toolbox, Vojtech Franc, Vaclav Hlavac
% (c) Czech Technical University Prague, http://cmp.felk.cvut.cz
% Written by Vojtech Franc (diploma thesis) 23.12.1999, 5.4.2000
% Modifications
% 19-September-2001, V. Franc, comments changed.
% 8-July-2001, V. Franc, created.

flops(0);

if nargin < 4,
   error('Not enough input arguments.');
end
if nargin < 5,
   C = inf;
end
if nargin < 6,
   tmax = inf;
end
if nargin < 7,
   epsilon = 1e-3;
end

L = size(X,2);   % number of patterns
Y = itosgn(I)';  % map labels to {-1,1}

% Compute kernel matrix.
K = kernel( X, X, ker, arg );

if nargin < 8,
   % Compute gradient step.
   Ni = 1./diag(K);
elseif length(Ni) < L,
   Ni = Ni*ones(L,1);
end

if nargin < 9,
   mi = 0.1;      % initial value of the threshold
end

t = -1;           % step number
omega1 = 0;
omega2 = 0;
Z = zeros(L,1);
margin = 0;

% Step 1
Alpha = zeros(L,1);   % pattern weights

% Step 2 - execute Steps 3,4...8
while t < tmax & abs(1-margin) > epsilon,
   t = t+1;

   % Step 3
   if t == 0,
      lambda = mi;
   elseif t == 1,
      lambda = -mi;
      lambda2 = mi;
      lambda1 = -mi;
   else
      lambda = lambda1 - omega1 * ( (lambda1 - lambda2)/(omega1 - omega2) );
      lambda2 = lambda1;
      lambda1 = lambda;
   end

   % Step 4 - for i=1:L, execute Steps 5 to 6
   for i = 1:L,

      % Step 5
      Z(i) = ( Alpha .* Y )' * K(:,i);

      % Step 6
      dalpha = Ni(i)*( 1 - Z(i)*Y(i) - lambda*Y(i) );

      % Step 6.1
      Alpha(i) = Alpha(i) + dalpha;
      Alpha(i) = max([0,Alpha(i)]);
      Alpha(i) = min([Alpha(i),C]);
   end

   % Step 7
   omega = Alpha' * Y;
   omega2 = omega1;
   omega1 = omega;

   % Step 8 - compute margin
   margin = 0.5 * ( min( Z( find( Y == 1 & Alpha < C ))) - ...
                    max( Z( find( Y == -1 & Alpha < C ))) );

end % end while

bias = lambda;
Alpha = Alpha';
flps = flops;

return;
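
A minimal usage sketch, not part of the original file: it assumes the toolbox (including its kernel and itosgn helpers) is on the MATLAB path, and the toy data, the kernel identifier 'rbf', its argument 1, and the choice C = 10 are illustrative assumptions only; consult help kernel for the identifiers the toolbox actually accepts.

% Hypothetical example call of ka.m on two 2-D Gaussian clusters.
X = [randn(2,20)-1, randn(2,20)+1];      % 2xL matrix of training patterns
I = [ones(1,20), 2*ones(1,20)];          % labels: 1 for 1st class, 2 for 2nd class
[Alpha, bias, margin, t] = ka( X, I, 'rbf', 1, 10 );   % C = 10, remaining args default

The returned Alpha and bias define the kernel decision function in the usual SVM form, f(x) = sum_i Alpha(i)*Y(i)*k(x_i,x) + bias, which the routines listed under "See also" above can evaluate on new data.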
