
📄 kernel_combination.m

📁 MATLAB 2007 Scientific Computing and Engineering Analysis code
function D = kernel_combination(K_,Y)
% D = KERNEL_COMBINATION(K,Y)
% Learn a combination of kernels given by the 3D matrix K
% Positive weights are returned in D [note that length(D) = size(K,3)]
% The kernels need to be centered and normalized

global K; K = K_; clear K_;   % Global variable to gain speed

n = length(Y);   % Number of points
d = size(K,3);   % Number of kernels

% Check the kernels
for i=1:d
  K1 = K(:,:,i);
  if abs(sum(K1(:))) > 1e-5
    error('Kernel not centered in feature space');
  end;
  if abs(trace(K1) - n) > 1e-5
    error('Kernel not normalized');
  end;
end;

D = ones(d,1) / d;     % Initialization: put equal weight to all kernels

options = optimset('display','off','LargeScale','off');

% Do the optimization:
% minimize the SVM objective function under constraints D >= 0 and sum(D) = 1.
% For that, compute the gradient and the Hessian of the SVM objective
% function with respect to D and make a constrained Newton step.
% This step is computed with quadprog (needs the Optimization Toolbox).
while 1
  D(D<1e-10) = 0;
  [w,grad,hess] = obj_fun(D,Y);
  hess = (hess+hess')/2 + 1e-5*eye(size(hess)); % Because of numerical errors
  fprintf('Obj fun = %f, L0 norm of the combination = %d    \n',w,sum(D>0));

  % Find the search direction.
  % If the problem were unconstrained, it would be hess \ grad (Newton step).
  [S,fval] = quadprog(hess,grad,[],[],ones(1,d),0,-D,[],[],options);

  if fval > -1e-5*w, break; end;  % Stop when the relative increase of the
                                  % objective function is small.

  % Backtracking if necessary [usually, lambda = 1]
  lambda = 1; non_convex = 0;
  while 1
    w1 = obj_fun(D+lambda*S,Y);
    if w1 < w, break; end;
    lambda = lambda / 2;
    if lambda < 1e-10, non_convex = 1; break; end;
  end;

  % Take the step
  D = D + lambda*S;

  if non_convex
    warning('Convexity problem; lambda too small');
    break;
  end;
end;
fprintf('\n');


function [w,grad,hess] = obj_fun(D,Y)
% Compute the SVM objective function, as well as the gradient and
% Hessian with respect to D.
% Note that alpha depends on D.
global K;
n = length(Y);
d = length(D);

% Compute the convex combination of kernels
K0 = zeros(n);
for i=1:d
  K0 = K0 + K(:,:,i)*D(i);
end;

% Standard hard margin SVM training
[alpha, b, sv] = learn(K0,Y);

% Easy way to compute the objective function (i.e. w^2)
w = sum(alpha.*Y);
if nargout == 1,
  return;
end;

% Compute the gradient and the Hessian of the margin
lsv = length(sv);
invKsv = inv([[K0(sv,sv) ones(lsv,1)]; [ones(1,lsv) 0]]);
for i=1:d
  grad(i) = - alpha'*K(1:n,:,i)*alpha;
  dalY = - invKsv(1:end-1,:)*[K(sv,sv,i)*alpha(sv); 0];
  for j=1:d
    hess(i,j) = -2 * alpha'*K(1:n,sv,j)*dalY;
  end;
end;
grad = grad';


function [beta,b,svset] = learn(K,Y)
% A simplified version of primal SVM training
% Minimize the SVM objective function by Newton
% See www.kyb.tuebingen.mpg.de/bs/people/chapelle/primal/

svset = 1:length(Y);
old_svset = [];
iter = 0;
itermax = 20;

while ~isempty(setxor(svset,old_svset)) & (iter < itermax)
  old_svset = svset;
  % An infinitely small ridge should be added, but we take it larger in
  % case the kernel matrix is ill conditioned.
  H = K(svset,svset) + 1e-3*eye(length(svset));
  H(end+1,:) = 1;                 % To take the bias into account
  H(:,end+1) = 1;
  H(end,end) = 0;
  % Find the parameters
  par = H\[Y(svset);0];
  beta = zeros(length(Y),1);
  beta(svset) = par(1:end-1)';
  b = par(end);
  % Compute the new set of support vectors
  out = K*beta + b;
  svset = find(Y.*out < 1);
  iter = iter + 1;
end;
if iter == itermax
  warning('Maximum number of Newton steps reached');
end;
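
Usage note (not part of the original file): the sketch below shows one possible way to prepare the 3D kernel array and call kernel_combination, since the function requires each kernel slice to be centered in feature space and scaled so that its trace equals n. The synthetic data and the variable names Xtrain, Ylab, Kall are illustrative assumptions only, and quadprog from the Optimization Toolbox must be on the path.

% --- Example usage sketch (assumed data, illustrative variable names) ---
n = 100;                                     % number of training points
Xtrain = randn(n,5);                         % synthetic inputs
Ylab = sign(Xtrain(:,1));                    % synthetic +/-1 labels, separable on the first feature

Klin = Xtrain*Xtrain';                       % linear kernel
sq = sum(Xtrain.^2,2);                       % squared norms of the inputs
D2 = repmat(sq,1,n) + repmat(sq',n,1) - 2*(Xtrain*Xtrain');  % pairwise squared distances
Krbf = exp(-D2);                             % Gaussian kernel with unit bandwidth

Kall = cat(3, Klin, Krbf);                   % stack kernels into an n x n x d array
H = eye(n) - ones(n)/n;                      % centering matrix
for i = 1:size(Kall,3)
  Ki = H*Kall(:,:,i)*H;                      % center in feature space (sum of entries ~ 0)
  Kall(:,:,i) = Ki * n / trace(Ki);          % normalize so that trace(Ki) = n
end

D = kernel_combination(Kall, Ylab);          % non-negative weights summing to 1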
