
📄 svmmot.m

📁 Support Vector Machine Toolbox
💻 M
function [Alpha,bias,nsv,exitflag,flps,margin,trn_err]=...
    svmmot(X,I,ker,arg,C)
% SVMMOT learns SVM (L1) classifier using Matlab Optimization Toolbox.
% [Alpha,bias,nsv,eflag,flps,margin,trn_err]=svmmot(X,I,ker,arg,C)
%
% SVMMOT solves the binary Support Vector Machines problem with
%  L1 soft margin (classification violations are linearly
%  penalized) using the Matlab Optimization Toolbox.
%
%  For classification use SVMCLASS.
%
% Mandatory inputs:
%  X [NxL] L training patterns in N-dimensional space.
%  I [1xL] labels of training patterns (1 - 1st, 2 - 2nd class).
%  ker [string] identifier of kernel, see help kernel.
%  arg [...] arguments of given kernel, see help kernel.
%  C [real] trade-off between margin and training error.
%
% Outputs:
%  Alpha [1xL] Lagrange multipliers for training patterns.
%  bias [real] bias of decision function.
%  nsv [uint] number of Support Vectors, i.e. number of patterns
%     with non-zero (> epsilon) Lagrangians.
%  eflag [int] exit flag of quadprog, see 'help quadprog'.
%  flps [uint] number of used floating point operations.
%  margin [real] margin between classes.
%  trn_err [real] training error (empirical risk).
%
% See also SVMCLASS, SVM.
%
% Statistical Pattern Recognition Toolbox, Vojtech Franc, Vaclav Hlavac
% (c) Czech Technical University Prague, http://cmp.felk.cvut.cz
% Modifications
% 28-Nov-2001, V.F., used quadprog instead of qp
% 23-Oct-2001, V.F.
% 19-September-2001, V. Franc, renamed to svmmot.
% 8-July-2001, V. Franc, comments changed, bias mistake removed.
% 28-April-2001, V. Franc, flps counter added
% 10-April-2001, V. Franc, created

flops(0);   % reset counter of floating point operations

% small diagonal to make the kernel matrix positive definite
ADD_DIAG = 1e-9;

% numerical precision for SVs: if Alpha(i) > epsilon then the
% i-th pattern is a Support Vector. Default is 1e-9.
SV_LIMIT = 1e-9;

if nargin < 5,
  error('Not enough input arguments.');
end

% get dimension and number of points
[N,L] = size(X);

% labels {1,2} -> {1,-1}
Y = itosgn( I );

% compute the label-weighted kernel matrix
K = kernel(X,X,ker,arg);
K = K.*(Y'*Y);

% add small numbers on the diagonal to make K positive definite
K = K + ADD_DIAG*eye(size(K));

% transform the SVM problem to the Optimization Toolbox format:
%
% SVM (Wolfe dual) problem:
%   max  Alpha*ones(1,L) - 0.5*Alpha'*K*Alpha
%  Alpha
%
%  subject to:
%   0 <= Alpha <= C
%   Alpha*Y' = 0
%
% Quadratic programming in the Optimization Toolbox:
%   min  0.5*x'*H*x + f'*x
%    x
%
%  subject to:
%    Aeq*x = beq,
%    x <= UB,
%    LB <= x

Aeq = Y;
beq = 0;
f = -ones(L,1);       % Alpha
LB = zeros(L,1);      % 0 <= Alpha
UB = C*ones(L,1);     % Alpha <= C
x0 = zeros(L,1);      % starting point

% call the Optimization Toolbox
%[Alpha,fval,exitflag] = qp(K, f, Aeq, beq, LB, UB, x0, 1);
[Alpha,fval,exitflag] = quadprog(K, f, [], [], Aeq, beq, LB, UB, x0,...
  optimset('Display','off'));

% find the support vectors
sv_inx = find( Alpha > SV_LIMIT );
nsv = length(sv_inx);

% compute the average value of the bias from the points on the margin;
% the patterns on the margin have 0 < Alpha(i) < C
sv_margin = find( Alpha > SV_LIMIT & Alpha < (C - SV_LIMIT) );
nsv_margin = length( sv_margin );

if nsv_margin ~= 0,
   bias = sum(Y(sv_margin)'-K(sv_margin,sv_inx)*Alpha(sv_inx)...
          .*Y(sv_margin)')/nsv_margin;
else
   bias = 0;   % fallback value so the output is always defined
   disp('Bias cannot be determined - no patterns on margin.');
end

Alpha = Alpha(:)';   % Alpha will be a row vector

% compute margin between classes = 1/norm(w)
if nargout >= 6,
   margin = 1/sqrt(Alpha*K*Alpha');
end

% compute training classification error (empirical risk)
if nargout >= 7,
   K = K.*(Y'*Y);   % undo the label weighting (Y.^2 = 1)
   fpred = K*(Alpha(:).*Y(:)) + ones(length(Y),1)*bias;
   trn_err = length( find( (fpred(:).*Y(:)) < 0 ))/length(Y);
end

flps = flops;   % take the number of used floating point operations

return;
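The comments above spell out how the Wolfe dual is mapped onto quadprog's min 0.5*x'*H*x + f'*x form with box bounds and one equality constraint. The same transformation can be sketched in Python with `scipy.optimize.minimize` (SLSQP in place of quadprog); the toy data, the looser SV threshold, and the solver choice below are illustrative assumptions, not part of the toolbox:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy problem: columns are patterns, as in svmmot; labels in {+1,-1}.
X = np.array([[1.0, 2.0, -1.0, -2.0],
              [1.0, 2.0, -1.0, -2.0]])        # 2 x 4
y = np.array([1.0, 1.0, -1.0, -1.0])
C = 10.0
L = X.shape[1]

# Label-weighted linear-kernel matrix with the same tiny diagonal as svmmot.
K = (X.T @ X) * np.outer(y, y) + 1e-9 * np.eye(L)

# quadprog-style objective: min 0.5*a'*K*a + f'*a with f = -ones.
def obj(a):
    return 0.5 * a @ K @ a - a.sum()

res = minimize(obj, np.zeros(L), method="SLSQP",
               bounds=[(0.0, C)] * L,                               # 0 <= Alpha <= C
               constraints=[{"type": "eq", "fun": lambda a: a @ y}])  # Alpha*Y' = 0
alpha = res.x

# Support vectors; looser threshold than 1e-9 because SLSQP is iterative.
SV_LIMIT = 1e-4
sv = alpha > SV_LIMIT
on_margin = sv & (alpha < C - SV_LIMIT)

# Average bias over margin patterns, mirroring the MATLAB expression.
bias = np.mean(y[on_margin] - (K[np.ix_(on_margin, sv)] @ alpha[sv]) * y[on_margin])

margin = 1.0 / np.sqrt(alpha @ K @ alpha)   # 1/norm(w)
fpred = (X.T @ X) @ (alpha * y) + bias      # decision values on the training set
trn_err = np.mean(fpred * y < 0)            # empirical risk
```

Averaging the bias over all margin patterns, rather than reading it off a single support vector, smooths out the small numerical error each individual pattern carries after the QP solve.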
