svc.m

Description: for face recognition
Language: MATLAB (M-file)
Page 1 of 2
%% SVC: Construct a support vector classifier.
%
% Constructs a support vector classifier based on the data points 'xo'
% (stored rowwise) and labels 'yo', which should be either -1 or 1.
%
% Returns:
%   'a'  a vector of weights for each point in 'xo'
%   'wo'
%   'bo' parameters in f(z) = wo.z + bo
%   'xs' a vector of support vectors
%   'ys' a vector of support vector labels
%
% NOTE:
%
% - Various options are not parameters; they can only be adjusted in this
%   file. See the list 'Parameters' below.
%
% - You'll need either the Matlab minimization package, which contains 'qp.m',
%   or some other quadratic minimization problem solver. Matlab's 'qp.m'
%   is not very stable, so it might give problems.
%
% - This is a rough, non-optimized version. Its only goal is to help
%   understand the algorithm.
%
% (C) 1996 David Tax & Dick de Ridder
% Faculty of Applied Physics, Delft University of Technology
% P.O. Box 5046, 2600 GA Delft, The Netherlands
% Dick de Ridder, D.deRidder@ph.tn.tudelft.nl

function [ a, wo, bo, xs, ys ] = svc (xo, yo, options)

  % Some constants.
  false          = 0;
  true           = 1;

  % Methods.
  linear         = 0;
  polynomial     = 1;
  potential      = 2;
  radial_basis   = 3;
  neural         = 4;

  % Parameters.
  num_points         = size (xo, 1);
  input_dimension    = size (xo, 2);
  a_crit             = 1.0e-8;      % The criterion for considering a == 0.
  upper_bound        = 1.0e+20;     % Upper bound for a(i)'s.
  batch_size         = num_points;  % The batch size.
  max_retries        = 10;          % Maximum number of retries before the
                                    % algorithm is restarted.
  if nargin < 3
    feature_dimension  = 5;           % The feature space dimensionality.
    method             = polynomial;  % Dot product convolution function.
    C                  = 1.0e+2;      % Weighing factor of errors made.
    % Radial basis
    rb_sigma           = 1.0;
    % Potential
    po_sigma           = 1.0;
    % Neural
    nn_v               = 1.0e-1;
    nn_c               = 1.0;
    dographics         = true;
    dographics3d       = true;
  else
    method             = options(1);  % Dot product convolution function.
    feature_dimension  = options(2);
    C                  = options(3);
    rb_sigma           = options(4);
    po_sigma           = options(5);
    nn_v               = options(6);
    nn_c               = options(7);
    dographics         = options(8);
    dographics3d       = options(9);
  end

  % Flags.
  dodebug      = false;
  throwaway    = false;

  % QP
  negdef       = 0;
  normalize    = 1;

  % Normalize: subtract mean.
  mx = mean (xo);
  for i = 1:input_dimension
    xo(:, i) = xo(:, i) - mx(i);
  end

  % Graphics: define the axes.
  mx = max (max (abs (xo))) * 1.05;
  ax = [-mx mx -mx mx];

  % We have four datasets:
  %    xorg   - the original dataset.
  %    xo     - the 'current' dataset, with processed points thrown
  %             out (keeps shrinking).
  %    x      - the current batch being processed: support vectors
  %             remaining from a previous cycle + some points randomly
  %             chosen from xo.
  %    xs     - the set of current support vectors.

  xorg = xo;
  yorg = yo;

  support_size   = 0;
  xs = []; ys = []; a = [];
  org_num_points = num_points;

  done    = false;
  retries = 0;

  % As long as there are unprocessed points....
  while ((size (xo, 1) > 0) & (done == false))

    % Prepare a new batch by copying the previously found
    % support vectors + a set of points randomly chosen from
    % xo (the set of remaining unprocessed points) into
    % x and y.
    points_left = size (xo, 1);
    num_points  = min (batch_size + support_size, ...
                       points_left + support_size);
    r = randperm (points_left);
    r = r (1, 1:min (batch_size, points_left));
    x = [xs(1:support_size, :); xo(r, :)];
    y = [ys(1:support_size)   ; yo(r)   ];

    % Initialize D.
    D = zeros (num_points, num_points);
    for i = 1:num_points
      for j = i:num_points
        if (method == linear)
          D (i, j) = y(i) * y(j) * (x(i,:) * x(j,:)');
        elseif (method == polynomial)
          D (i, j) = y(i) * y(j) * ((x(i,:) * x(j,:)' + 1)^feature_dimension);
        elseif (method == potential)
          D (i, j) = y(i) * y(j) * exp (- vectdist (x(i,:), x(j,:)') / po_sigma);
        elseif (method == radial_basis)
          D (i, j) = y(i) * y(j) * exp (- (vectdist (x(i,:), x(j,:)') / rb_sigma)^2);
        elseif (method == neural)
          D (i, j) = y(i) * y(j) * tanh (nn_v * x(i,:) * x(j,:)' + nn_c);
        else
          disp ('Error: unknown dot product convolution.')
          return
        end
        D (j, i) = D (i, j);
      end
    end

    % Now check whether D is positive semi-definite, as it is supposed
    % to be. Thanks go to A.J. Quist of TWI-SSOR for the pd_check routine.
    if (pd_check (D) ~= 1)
      i = -50;
      while (pd_check (D + (10.0^i) * eye (num_points)) ~= 1)
        i = i + 1;
      end
      fprintf ('Adding I * 10^%d to D\n', i)
      D = D + (10.0^(i)) * eye(num_points);
    end

    D = [D                     zeros(num_points,1) ; ...
         zeros(1,num_points)   1/C                 ];

    % Initialize a(i) randomly.
    rand ('seed', sum (100 * clock));
    a = [ 0.1 * rand(1, num_points) 1];

    % Initialize graphics and freeze the axes.
    if (dographics)
      % Init.
      figure(1);
      clf;
      axis (ax); axis ('square');
      hold on;

      % Plot the points.
      plot (xorg (find (yorg < 0), 1), xorg (find (yorg < 0), 2), 'y+');
      plot (xorg (find (yorg > 0), 1), xorg (find (yorg > 0), 2), 'y*');
      plot (xo   (find (yo   < 0), 1), xo   (find (yo   < 0), 2), 'g+');
      plot (xo   (find (yo   > 0), 1), xo   (find (yo   > 0), 2), 'g*');
      plot (x    (find (y    < 0), 1), x    (find (y    < 0), 2), 'co');
      plot (x    (find (y    > 0), 1), x    (find (y    > 0), 2), 'co');
      if (dodebug)
        for i = 1:num_points
          text (x(i, 1) + 0.1, x(i, 2) + 0.1, num2str (i));
        end
      end
      drawnow
      if (dodebug)
        wait ('Press any key');
      end
    end

    % Minimization procedure initialization:
    % 'qp' minimizes:   0.5 x' D x + f' x
    % subject to:       Ax <= b
    %
    % Here: one constraint is coded into the lower
    % bound of the solution (i.e., a(i) >= 0, eqn. 16),
    % the other is given in f:
    % Ax <= b with A = vector containing y(i)'s,
    % x = the solution to be found (a(i)'s) and
    % b = 0 gives a'y = 0 (eqn. 17). So we set f to
    % (-1 -1 ... -1)'.
    %
    % NB: Since we want to *maximize* eqn. 26,
    %     we minimize its negative - that's why
    %     f is negative.
    f = [-ones(num_points, 1); 0];

    % The first row in A is the equality constraint
    % ay = 0. The second to last rows are the constraints
    % given in eqn. 29: all a(i) should be smaller than
    % delta....
    A = [y'              0                    ; ...
         eye(num_points) -ones(num_points, 1) ];
    b = zeros (num_points + 1, 1);

    % One of our constraints: each a(i) should be >= 0 (the
    % lower bound, lb). The upper bound is an arbitrary
    % number.
    lb = zeros (num_points + 1, 1);
    ub = [ upper_bound * ones(num_points + 1, 1) ];

    % Initial guess. This doesn't seem to have an
    % enormous effect.
    rand ('seed', sum (100 * clock));
    p = 0.5 * rand (num_points + 1, 1);

    % Call the minimization routine. Note: this routine
    % has some built-in safety checks and returns []
    % when it decides something is definitely wrong.
    %
    % The last 1 in the call means that only the first
    % constraint is an equality constraint. See 'qp'.
    if (exist ('qld') == 3)
      a = qld (D, f, -A, b, lb, ub, p, 1);
    else
      a = qp (D, f, A, b, lb, ub, p, 1, 1, negdef, normalize)';
    end

    if (~isempty(a))
      delta = a(num_points + 1);
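The core of the QP setup above is the matrix D, built from the chosen "dot product convolution" (kernel) and the labels. As a minimal illustration, here is the polynomial-kernel case, D(i,j) = y(i)*y(j)*(x(i,:)*x(j,:)' + 1)^feature_dimension, re-expressed in plain Python; the toy data values are arbitrary and only serve the example.

```python
def poly_kernel_matrix(x, y, d):
    """Build D[i][j] = y[i]*y[j]*(x[i].x[j] + 1)**d, mirroring the
    nested i/j loop in svc.m (upper triangle computed, then mirrored)."""
    n = len(x)
    D = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            dot = sum(a * b for a, b in zip(x[i], x[j]))
            D[i][j] = y[i] * y[j] * (dot + 1.0) ** d
            D[j][i] = D[i][j]  # D is symmetric, as pd_check expects
    return D

# Toy data: three 2-D points with labels -1/+1, feature_dimension = 2.
x = [[0.0, 1.0], [1.0, 0.0], [-1.0, -1.0]]
y = [1, 1, -1]
D = poly_kernel_matrix(x, y, 2)
```

Note that, exactly as in the MATLAB loop, symmetry is obtained for free by mirroring the upper triangle; the positive semi-definiteness check (pd_check) and the diagonal regularization are still needed afterwards, since the label-weighted kernel matrix can be numerically indefinite.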
