<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<!-- saved from url=(0063)http://www.kernel-machines.org/blackboard_bak/messages/454.html -->
<HTML><HEAD><TITLE>Re: SMO-matlab code</TITLE>
<META http-equiv=Content-Type content="text/html; charset=iso-8859-1">
<META content="MSHTML 6.00.2900.2722" name=GENERATOR></HEAD>
<BODY bgColor=#ffffff><FONT color=#6699cc>
<H1>Re: SMO-matlab code</H1></FONT>
<HR width="100%" SIZE=7>

<CENTER>[ <A 
href="http://www.kernel-machines.org/blackboard_bak/messages/454.html#followups">Follow 
Ups</A> ] [ <A 
href="http://www.kernel-machines.org/blackboard_bak/messages/454.html#postfp">Post 
Followup</A> ] [ <A 
href="http://www.kernel-machines.org/blackboard/index.html">Message Board on 
Kernel Machines</A> ] [ <A 
href="http://www.kernel-machines.org/blackboard/faq.html">FAQ</A> ]</CENTER>
<HR width="100%" SIZE=7>

<P>Posted by <A href="mailto:diegoandresalvarez@gmx.net">Diego Andres Alvarez 
Marin</A> on April 24, 2002 at 01:25:41:
<P>In Reply to: <A 
href="http://www.kernel-machines.org/blackboard_bak/messages/416.html">SMO-matlab 
code</A> posted by chen-chia chuang on April 07, 2002 at 01:25:14:
<P>: Does anybody know where MATLAB code for the SMO<BR>: algorithm can be obtained? 
<P>Hey... take a look at the following code.
<P>This is a function I implemented for use along with Gavin Cawley's MATLAB 
Support Vector Machine Toolbox<BR>(c) September 2000.
<P>Diego Andres Alvarez.
<P>function net = train(tutor, X, Y, C, kernel, alpha_init, bias_init)<BR>% 
Train a support vector classification network, using the sequential minimal<BR>% 
optimisation algorithm.<BR>%<BR>% net = train(tutor, x, y, net);<BR>% net = 
train(tutor, x, y, C, kernel);<BR>% net = train(tutor, X, Y, C, kernel, 
alpha_init, bias_init)<BR>%<BR>% where:<BR>% tutor = tutor object<BR>% x = 
training inputs<BR>% y = training targets<BR>% C = upper bound for the non-separable case 
(optional, defaults to C=Inf)<BR>% kernel = kernel function (optional, defaults 
to kernel=linear)<BR>% net = svc object (optional)<BR>% alpha_init = initial Lagrange multipliers (optional)<BR>% 
bias_init = initial threshold (optional)<BR>%<BR>% If kernel, alpha_init or bias_init is 'NOBIAS' then no 
<BR>% threshold, b, is used and it is set to 0.
<P>% File : @quadprogsvctutor/train.m<BR>% Author : Diego Andres Alvarez 
Marin<BR>% Description : Part of an object-oriented implementation of Vapnik's 
<BR>% Support Vector Machine, as described in [1].<BR>%<BR>% References : <BR>% 
V.N. VAPNIK, "The Nature of Statistical Learning Theory", <BR>% Springer-Verlag, 
New York, ISBN 0-387-94559-8, 1995.<BR>%<BR>% Platt, J. C. (1998).<BR>% Fast 
training of support vector machines using sequential minimal<BR>% optimization. 
In Schölkopf, B., Burges, C., and Smola, A. J., editors,<BR>% Advances in Kernel 
Methods: Support Vector Learning, chapter 12, <BR>% pages 185--208. MIT Press, 
Cambridge, Massachusetts.
<P>% History : May 15/2001 - v1.00
<P>if size(Y, 2) ~= 1 | ~isreal(Y)<BR>error('y must be a real double precision 
column vector');<BR>end
<P>n = size(Y, 1);
<P>if n ~= size(X, 1)<BR>error('x and y must have the same number of 
rows');<BR>end
<P>if (nargin&lt;3 | nargin&gt;7) % check correct number of arguments<BR>help 
svc<BR>return;<BR>end;
<P>if nargin == 4 &amp; isa(C, 'svc')<BR>net = C;<BR>C = get(net,'C');<BR>kernel 
= get(net,'kernel');<BR>else<BR>if nargin &lt; 4, C = Inf; end;<BR>if nargin 
&lt; 5, kernel = linear; end;<BR>end;
<P>NOBIAS = 0;<BR>switch nargin<BR>case 5<BR>if ischar(kernel) &amp; 
strcmp(kernel,'NOBIAS')<BR>NOBIAS = 1;<BR>end;<BR>case 6<BR>if 
ischar(alpha_init) &amp; strcmp(alpha_init,'NOBIAS') <BR>NOBIAS = 
1;<BR>end;<BR>case 7<BR>if ischar(bias_init) &amp; 
strcmp(bias_init,'NOBIAS')<BR>NOBIAS = 1;<BR>end;<BR>end;
<P>if nargin == 7<BR>if n ~= size(alpha_init, 1)<BR>error('alpha must be a real 
double precision column vector with the same size as y');<BR>end<BR>if 
any(alpha_init &lt; 0)<BR>error('Alphas cannot be 
negative')<BR>end;<BR>else<BR>alpha_init = zeros(n,1); %initialise the weights to 
zero<BR>bias_init = 0; %initialise the threshold to zero<BR>end;
<P>fprintf('\n\nSequential Minimal Optimization: SVMs for 
Classification\n')<BR>fprintf( 
'~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n')
<P>tic;<BR>if NOBIAS<BR>SMO = SMOTutorNOBIAS(X, Y, C, kernel, alpha_init, 
bias_init);<BR>else<BR>SMO = SMOTutor(X, Y, C, kernel, alpha_init, 
bias_init);<BR>end; <BR>fprintf('Execution time: %4.1f seconds\n',toc);
<P>sv = X;<BR>w = (SMO.alpha.*Y)'; %weight vector<BR>net = svc(kernel, sv, w, 
SMO.bias, C);<BR>fprintf('Epochs : %d\n',SMO.epochs);<BR>fprintf('|w0|^2 : 
%f\n',sum(SMO.alpha));<BR>fprintf('Margin : %f\n',1/sqrt(sum(w.*w)));<BR>NUMSV = 
nonZeroLagrangeMultipliers;<BR>fprintf('Support Vectors : %d 
(%3.1f%%)\n\n',NUMSV,100*NUMSV/n);<BR>return;
<P>function RESULT = 
SMOTutor(x,y,C,kernel,alpha_init,bias_init)<BR>%Implementation of the Sequential 
Minimal Optimization (SMO)<BR>%training algorithm for Vapnik's Support Vector 
Machine (SVM)
<P>global SMO;
<P>[ntp,d] = size(x);<BR>%Initialising the variables<BR>SMO.epsilon = svtol(C); 
SMO.tolerance = KKTtol;<BR>SMO.x = x; SMO.y = y;<BR>SMO.C = C; SMO.kernel = 
kernel;<BR>SMO.alpha = alpha_init; SMO.bias = bias_init;<BR>SMO.ntp = ntp; 
%number of training points
<P>%CACHES:<BR>SMO.Kcache = evaluate(kernel,x,x); %kernel 
evaluations<BR>SMO.error = zeros(SMO.ntp,1); %error 
<P>if ~any(SMO.alpha) <BR>%Since all the alpha(i) are zero, fwd(i) is 
also zero<BR>SMO.error = -y;<BR>else<BR>SMO.error = fwd(1:ntp) - 
y;<BR>end;
<P>numChanged = 0; examineAll = 1;<BR>epoch = 0;
<P>%When all the data have been examined and no changes were made, the loop 
<BR>%ends. Otherwise, passes over all the data and passes over the likely 
<BR>%support vectors alternate until all support vectors are found.<BR>while (numChanged &gt; 0) 
| examineAll<BR>numChanged = 0;<BR>if examineAll<BR>%Loop over all the 
points<BR>for i = 1:ntp<BR>numChanged = numChanged + examineExample(i);<BR>end; 
<BR>else<BR>%Loop over the KKT points<BR>for i = 1:ntp<BR>%Only the points that 
violate the KKT conditions<BR>if (SMO.alpha(i)&gt;SMO.epsilon) &amp; 
(SMO.alpha(i)&lt;(SMO.C-SMO.epsilon))<BR>numChanged = numChanged + 
examineExample(i);<BR>end;<BR>end;<BR>end;<BR><BR>if (examineAll == 
1)<BR>examineAll = 0;<BR>elseif (numChanged == 0)<BR>examineAll = 
1;<BR>end;<BR><BR>epoch = epoch+1;<BR>% trerror = 1; %100*sum((error)&lt;0)/ntp; 
<BR>% fprintf('Epoch: %d, TR Error: %g%%, numChanged: %d, alpha&gt;0: %d, 
0&lt;alpha&lt;C: %d\n',...<BR>% epoch,...<BR>% trerror,...<BR>% numChanged,...<BR>% 
nonZeroLagrangeMultipliers,...<BR>% nonBoundLagrangeMultipliers);<BR><BR>%WRITE 
RESULTS TO DISK: W, B, ERROR <BR>end;<BR>SMO.epochs = epoch;<BR>RESULT = 
SMO;<BR>return;
<P>function RESULT = nonZeroLagrangeMultipliers;<BR>global SMO;<BR>RESULT = 
sum(SMO.alpha&gt;SMO.epsilon);<BR>return;
<P>function RESULT = nonBoundLagrangeMultipliers;<BR>global SMO;<BR>RESULT = 
sum((SMO.alpha&gt;SMO.epsilon) &amp; 
(SMO.alpha&lt;(SMO.C-SMO.epsilon)));<BR>return;
<P>function RESULT = fwd(n)<BR>global SMO;<BR>LN = length(n);<BR>RESULT = 
-SMO.bias + sum(repmat(SMO.y,1,LN) .* repmat(SMO.alpha,1,LN) .* 
SMO.Kcache(:,n))';<BR>return;
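<P>The fwd routine above evaluates the decision function f(x_n) = sum_i alpha_i*y_i*K(x_i, x_n) - b using the cached kernel matrix. A NumPy sketch of the same computation (illustrative only, not part of the original post):

```python
import numpy as np

def fwd(alpha, y, Kcache, bias, n):
    """Decision values f(x_n) for the column indices n, using the
    precomputed kernel matrix Kcache (Kcache[i, j] = K(x_i, x_j))."""
    # (alpha * y) is the weight vector in the dual representation
    return (alpha * y) @ Kcache[:, n] - bias
```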
<P>function RESULT = examineExample(i2)<BR>%The first-choice heuristic selects i2; examineExample then looks for a<BR>%second point (i1) in order to do an optimisation 
step with two <BR>%Lagrange multipliers.
<P>global SMO;<BR>alpha2 = SMO.alpha(i2); y2 = SMO.y(i2);
<P>if ((alpha2 &gt; SMO.epsilon) &amp; (alpha2 &lt; (SMO.C-SMO.epsilon)))<BR>e2 
= SMO.error(i2);<BR>else<BR>e2 = fwd(i2) - y2;<BR>end;
<P>% r2 &lt; 0 if point i2 lies inside the margin band (-1,+1);<BR>% otherwise r2 
is &gt; 0. Note that r2 = e2*y2 = f2*y2 - 1.
<P>r2 = e2*y2; <BR>%KKT conditions:<BR>% r2&gt;0 and alpha2==0 (well 
classified)<BR>% r2==0 and 0&lt;alpha2&lt;C (support vectors at margins)<BR>% r2&lt;0 
and alpha2==C (support vectors between margins)<BR>%<BR>% Test the KKT 
conditions for the current i2 point. <BR>%<BR>% If a point is well classified 
its alpha must be 0, and if<BR>% it is outside its margin its alpha must be C. If 
it is at the margin<BR>% its alpha must satisfy 0&lt;alpha2&lt;C.<P>%take action only 
if i2 violates the Karush-Kuhn-Tucker conditions<BR>if ((r2 &lt; -SMO.tolerance) 
&amp; (alpha2 &lt; (SMO.C-SMO.epsilon))) | ...<BR>((r2 &gt; SMO.tolerance) &amp; 
(alpha2 &gt; SMO.epsilon))<BR>% If it doesn't violate the KKT conditions then exit, 
otherwise continue.<BR><BR>%Try i2 in three ways; if successful, 
immediately return 1; <BR>RESULT = 1;<BR>% First the routine tries to find an i1 
Lagrange multiplier that<BR>% maximizes the measure |E1-E2|. The larger this value 
is, the bigger <BR>% the increase in the dual objective function.<BR>% In this first test, 
only support vectors will be tested.<BR><BR>POS = find((SMO.alpha &gt; 
SMO.epsilon) &amp; (SMO.alpha &lt; (SMO.C-SMO.epsilon)));<BR>[MAX,i1] = 
max(abs(e2 - SMO.error(POS)));<BR>if ~isempty(i1)<BR>i1 = POS(i1); %max returns an index into POS, not into the full set<BR>if takeStep(i1, i2, e2), 
return; end;<BR>end;<BR><BR>%The second heuristic chooses any Lagrange multiplier 
that is a SV and tries to optimise it<BR>for i1 = randperm(SMO.ntp)<BR>if 
(SMO.alpha(i1) &gt; SMO.epsilon) &amp; (SMO.alpha(i1) &lt; 
(SMO.C-SMO.epsilon))<BR>%if a good i1 is found, optimise<BR>if takeStep(i1, i2, 
e2), return; end;<BR>end<BR>end<BR><BR>%if both heuristics above fail, iterate 
over the whole data set <BR>for i1 = randperm(SMO.ntp)<BR>if ~((SMO.alpha(i1) &gt; 
SMO.epsilon) &amp; (SMO.alpha(i1) &lt; (SMO.C-SMO.epsilon)))<BR>if takeStep(i1, 
i2, e2), return; end;<BR>end<BR>end;<BR>end; 
<P>%no progress possible<BR>RESULT = 0;<BR>return;
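<P>The KKT-violation test that gates examineExample can be read in isolation as follows, a Python sketch not taken from the original post (tol and eps play the roles of SMO.tolerance and SMO.epsilon):

```python
def violates_kkt(alpha2, r2, C, tol=1e-3, eps=1e-3):
    """True when point i2 violates the KKT conditions.

    r2 = e2*y2, where e2 = f(x2) - y2 is the prediction error.
    A point demands optimisation when it is inside the margin while
    alpha2 can still grow (alpha2 < C), or outside the margin while
    alpha2 can still shrink (alpha2 > 0).
    """
    return ((r2 < -tol) and (alpha2 < C - eps)) or \
           ((r2 > tol) and (alpha2 > eps))
```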
<P><BR>function RESULT = takeStep(i1, i2, e2)<BR>% for a pair of alpha indexes, 
verify if it is possible to execute<BR>% the optimisation described by Platt.
<P>global SMO;<BR>RESULT = 0;<BR>if (i1 == i2), return; end;
<P>% compute upper and lower constraints, L and H, on multiplier a2<BR>alpha1 = 
SMO.alpha(i1); alpha2 = SMO.alpha(i2);<BR>x1 = SMO.x(i1); x2 = SMO.x(i2);<BR>y1 
= SMO.y(i1); y2 = SMO.y(i2);<BR>C = SMO.C; K = SMO.Kcache;
<P>s = y1*y2;<BR>if (y1 ~= y2)<BR>L = max(0, alpha2-alpha1); H = min(C, 
alpha2-alpha1+C);<BR>else<BR>L = max(0, alpha1+alpha2-C); H = min(C, 
alpha1+alpha2);<BR>end;
<P>if (L == H), return; end;
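<P>The box constraints computed just above can be sketched in Python (illustrative only; this mirrors the MATLAB branch on y1 ~= y2):

```python
def clip_bounds(alpha1, alpha2, y1, y2, C):
    """Feasible interval [L, H] for the new alpha2.

    The equality constraint y1*a1 + y2*a2 = const and the box
    0 <= a1, a2 <= C confine a2 to a segment of that line.
    """
    if y1 != y2:                      # s = y1*y2 = -1
        L = max(0.0, alpha2 - alpha1)
        H = min(C, C + alpha2 - alpha1)
    else:                             # s = y1*y2 = +1
        L = max(0.0, alpha1 + alpha2 - C)
        H = min(C, alpha1 + alpha2)
    return L, H
```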
<P>if (alpha1 &gt; SMO.epsilon) &amp; (alpha1 &lt; (C-SMO.epsilon))<BR>e1 = 
SMO.error(i1);<BR>else<BR>e1 = fwd(i1) - y1;<BR>end;
<P>%if (alpha2 &gt; SMO.epsilon) &amp; (alpha2 &lt; (C-SMO.epsilon))<BR>% e2 = 
SMO.error(i2);<BR>%else<BR>% e2 = fwd(i2) - y2;<BR>%end;
<P>%compute eta<BR>k11 = K(i1,i1); k12 = K(i1,i2); k22 = K(i2,i2);<BR>eta = 
2.0*k12-k11-k22;
<P>%recompute Lagrange multiplier for pattern i2<BR>if (eta &lt; 0.0)<BR>a2 = 
alpha2 - y2*(e1 - e2)/eta;<BR><BR>%constrain a2 to lie between L and H<BR>if (a2 
&lt; L)<BR>a2 = L;<BR>elseif (a2 &gt; H)<BR>a2 = H;<BR>end;<BR>else<BR>%When eta 
is not negative, the objective function W should be<BR>%evaluated at each end of 
the line segment. Only those terms in the<BR>%objective function that depend on 
alpha2 need be evaluated... <BR><BR>ind = find(SMO.alpha&gt;0);<BR><BR>aa2 = L; 
aa1 = alpha1 + s*(alpha2-aa2);<BR><BR>Lobj = aa1 + aa2 + 
sum((-y1*aa1/2).*SMO.y(ind).*K(ind,i1) + 
(-y2*aa2/2).*SMO.y(ind).*K(ind,i2));<BR><BR>aa2 = H; aa1 = alpha1 + 
s*(alpha2-aa2);<BR>Hobj = aa1 + aa2 + sum((-y1*aa1/2).*SMO.y(ind).*K(ind,i1) + 
(-y2*aa2/2).*SMO.y(ind).*K(ind,i2));<BR><BR>if (Lobj&gt;Hobj+SMO.epsilon)<BR>a2 
= L;<BR>elseif (Lobj&lt;Hobj-SMO.epsilon)<BR>a2 = H;<BR>else<BR>a2 = 
alpha2;<BR>end;<BR>end;
<P>if (abs(a2-alpha2) &lt; 
SMO.epsilon*(a2+alpha2+SMO.epsilon))<BR>return;<BR>end;
<P>% recompute Lagrange multiplier for pattern i1<BR>a1 = alpha1 + 
s*(alpha2-a2);
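<P>For the usual eta &lt; 0 case, the analytic two-multiplier update carried out above condenses into one function. A Python sketch (not from the post) under the same sign convention, eta = 2*k12 - k11 - k22 &lt;= 0:

```python
def smo_step(alpha1, alpha2, y1, y2, e1, e2, k11, k12, k22, L, H):
    """One analytic SMO step on the pair (i1, i2), eta < 0 case only.

    eta is the second derivative of the objective along the
    constraint line; the unconstrained optimum for a2 is
    alpha2 - y2*(e1 - e2)/eta, clipped to [L, H]. a1 is then
    recovered so that y1*a1 + y2*a2 stays constant.
    """
    eta = 2.0 * k12 - k11 - k22
    if eta >= 0.0:
        # degenerate case: the post evaluates the objective at L and H
        return alpha1, alpha2
    a2 = alpha2 - y2 * (e1 - e2) / eta
    a2 = min(max(a2, L), H)           # clip to the feasible segment
    a1 = alpha1 + y1 * y2 * (alpha2 - a2)
    return a1, a2
```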
<P>w1 = y1*(a1 - alpha1); w2 = y2*(a2 - alpha2);
<P>%update threshold to reflect change in Lagrange multipliers<BR>b1 = SMO.bias 
+ e1 + w1*k11 + w2*k12; <BR>bold = SMO.bias;
<P>if (a1&gt;SMO.epsilon) &amp; (a1&lt;(C-SMO.epsilon))<BR>SMO.bias = 
b1;<BR>else<BR>b2 = SMO.bias + e2 + w1*k12 + w2*k22;<BR>if (a2&gt;SMO.epsilon) 
&amp; (a2&lt;(C-SMO.epsilon))<BR>SMO.bias = b2;<BR>else<BR>SMO.bias = (b1 + 
b2)/2;<BR>end;<BR>end;
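<P>The threshold logic above follows Platt's rule: b1 is consistent with the KKT conditions when a1 is non-bound, b2 when a2 is, and otherwise any value between b1 and b2 works, so their midpoint is taken. A Python sketch (illustrative, not from the post):

```python
def update_bias(bias, e1, e2, w1, w2, k11, k12, k22, a1, a2, C, eps=1e-3):
    """New threshold after the pair update.

    w1 = y1*(a1 - alpha1) and w2 = y2*(a2 - alpha2) are the changes
    in the weight-vector coefficients, as in the MATLAB code.
    """
    b1 = bias + e1 + w1 * k11 + w2 * k12   # makes the new e1 zero
    b2 = bias + e2 + w1 * k12 + w2 * k22   # makes the new e2 zero
    if eps < a1 < C - eps:
        return b1
    if eps < a2 < C - eps:
        return b2
    return (b1 + b2) / 2.0
```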
<P>% update error cache using new Lagrange multipliers<BR>SMO.error = SMO.error 
+ w1*K(:,i1) + w2*K(:,i2) + bold - SMO.bias;<BR>SMO.error(i1) = 0.0; 
SMO.error(i2) = 0.0;
<P>% update vector of Lagrange multipliers<BR>SMO.alpha(i1) = a1; SMO.alpha(i2) 
= a2;
<P>%report progress made<BR>RESULT = 1;<BR>return;
<P>%*********************************************************************
<P>function RESULT = 
SMOTutorNOBIAS(x,y,C,kernel,alpha_init,bias_init);<BR>%Implementation of the 
Sequential Minimal Optimization (SMO)<BR>%training algorithm for Vapnik's 
Support Vector Machine (SVM)
<P>global SMO;
<P>[ntp,d] = size(x);<BR>%Initialising the variables<BR>SMO.epsilon = svtol(C); 
<BR>SMO.x = x; SMO.y = y;<BR>SMO.C = C; SMO.kernel = kernel;<BR>SMO.alpha = 
alpha_init; 
<P>fprintf('We will not use bias. Setting bias to zero.\n')<BR>SMO.bias = 0; 
%Implicit bias = 0
<P>SMO.ntp = ntp; %number of training points
<P>%CACHES:<BR>SMO.Kcache = evaluate(kernel,x,x); %kernel 
evaluations<BR>SMO.error = zeros(SMO.ntp,1); %error 
<P>if ~any(SMO.alpha) <BR>%Since all the alpha(i) are zero, fwd(i) is 
also zero<BR>SMO.error = -y;<BR>else<BR>SMO.error = fwd(1:ntp) - 
y;<BR>end;
<P>numChanged = 0; examineAll = 1;<BR>epoch = 0;
<P>%When all the data have been examined and no changes were made, the loop 
<BR>%ends. Otherwise, passes over all the data and passes over the likely 
<BR>%support vectors alternate until all support vectors are found.<BR>while (numChanged &gt; 0) 
| examineAll<BR>numChanged = 0;<BR>%FIRST CHOICE HEURISTIC<BR>%THE OUTER 
LOOP<BR>if examineAll<BR>%Loop over all the points<BR>for i = 
1:ntp<BR>numChanged = numChanged + examineExampleNOBIAS(i);<BR>end; 
<BR>else<BR>%Loop over the KKT points<BR>for i = 1:ntp<BR>%Only the points that 
violate the KKT conditions<BR>if (SMO.alpha(i)&gt;SMO.epsilon) &amp; 
(SMO.alpha(i)&lt;(SMO.C-SMO.epsilon))<BR>numChanged = numChanged + 
examineExampleNOBIAS(i);<BR>end;<BR>end;<BR>end;<BR><BR>if (examineAll == 
1)<BR>examineAll = 0;<BR>elseif (numChanged == 0)<BR>examineAll = 
1;<BR>end;<BR><BR>epoch = epoch+1;<BR>% trerror = 1; 
%100*sum((error)&lt;0)/ntp; <BR>% fprintf('Epoch: %d, TR Error: %g%%, 
numChanged: %d, alpha&gt;0: %d, 0&lt;alpha&lt;C: %d\n',...<BR>% epoch,...<BR>% 
trerror,...<BR>% numChanged,...<BR>% nonZeroLagrangeMultipliers,...<BR>% 
nonBoundLagrangeMultipliers);<BR><BR>%WRITE RESULTS TO DISK: W, B, ERROR 
<BR>end;<BR>SMO.epochs = epoch;<BR>RESULT = SMO;<BR>return;
<P>function RESULT = examineExampleNOBIAS(i1)<BR>RESULT = 
takeStepNOBIAS(i1);<BR>return;
<P>function RESULT = takeStepNOBIAS(i1)<BR>global SMO;
<P>alpha1 = SMO.alpha(i1); <BR>x1 = SMO.x(i1); <BR>y1 = SMO.y(i1); <BR>C = 
SMO.C; <BR>K = SMO.Kcache;
<P>if (alpha1 &gt; SMO.epsilon) &amp; (alpha1 &lt; (C-SMO.epsilon))<BR>e1 = 
SMO.error(i1);<BR>else<BR>e1 = fwd(i1) - y1;<BR>end;
<P>%constrain a1 to lie between L and H<BR>a1 = min(max(0,alpha1 - 
y1*e1/K(i1,i1)),C);<BR><BR>if (abs(a1-alpha1) &lt; 
SMO.epsilon*(a1+alpha1+SMO.epsilon))<BR>RESULT = 0;<BR>return;<BR>end;
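<P>With the bias fixed at zero, the equality constraint sum(y_i*alpha_i) = 0 disappears, so each multiplier can be optimised on its own: a single Newton step along one coordinate, clipped back into the box. A Python sketch of the update above (illustrative only):

```python
def take_step_nobias(alpha1, y1, e1, k11, C):
    """Single-coordinate update for the no-bias SVM.

    The unconstrained optimum along coordinate i1 is
    alpha1 - y1*e1/k11 (a Newton step, k11 being the curvature),
    clipped back into [0, C].
    """
    return min(max(0.0, alpha1 - y1 * e1 / k11), C)
```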
<P>% update error cache using new Lagrange multipliers<BR>SMO.error = SMO.error 
+ y1*(a1 - alpha1)*K(:,i1);<BR>SMO.error(i1) = 0.0; 
<P>% update vector of Lagrange multipliers<BR>SMO.alpha(i1) = a1; 
<P>%report progress made<BR>RESULT = 1;<BR>return;<BR><BR><BR>
<HR width="100%" SIZE=7>

<P><A name=followups>Follow Ups:</A><BR>
<UL><!--insert: 454--><!--top: 4575-->
  <LI><A 
  href="http://www.kernel-machines.org/blackboard_bak/messages/4575.html">Re: a 
  Little bug in smo matlab</A> <B>tangjingyuan</B> <I>17:53:44 05/28/04</I> (<!--responses: 4575-->0) 
  <UL><!--insert: 4575--></UL><!--end: 4575--><!--top: 847-->
  <LI><A 
  href="http://www.kernel-machines.org/blackboard_bak/messages/847.html">a 
  Little bug in smo matlab</A> <B>Diego Andres Alvarez</B> <I>02:59:51 
2/19/2003</I> (<!--responses: 847-->1) 
  <UL><!--insert: 847--></UL><!--end: 847--><!--top: 510-->
  <LI><A 
  href="http://www.kernel-machines.org/blackboard_bak/messages/510.html">Re: 
SMO-matlab code</A> <B>Rampal Singh Rana</B> <I>17:13:56 6/13/2002</I> (<!--responses: 510-->0) 
  <UL><!--insert: 510--></UL><!--end: 510--><!--top: 509-->
  <LI><A 
  href="http://www.kernel-machines.org/blackboard_bak/messages/509.html">Re: 
SMO-matlab code</A> <B>Rampal Singh Rana</B> <I>17:13:11 6/13/2002</I> (<!--responses: 509-->0) 
  <UL><!--insert: 509--></UL><!--end: 509--></LI></UL><!--end: 454--><BR>
<HR width="100%" SIZE=7>
