
📄 icamf.m

📁 Independent Component Analysis source code, maximum likelihood method
💻 MATLAB (M)
📖 Page 1 of 4
logZ0=sum(sum(logZ0terms));

function [logZ0] = logZ0_mog(gamma,lambda,par)
oN = ones(size(gamma,2),1) ;
K = length(par.S(par.Sindx(1)).pi) ;
oK = ones(K,1) ;
logZ0 = 0 ;
for i = 1:length(par.Sindx) % loop over sources
    cindx = par.Sindx(i) ;
    cgamma = gamma(i,:) ; clambda = lambda(i,:) ; % 1 * N
    logpi = log(par.S(cindx).pi) ; % K * 1
    cmu = par.S(cindx).mu ;       % K * 1
    cSigma = par.S(cindx).Sigma ; % K * 1
    mu2dSigma = cmu .^ 2 ./ cSigma ;
    musqrtSigma = cmu .* sqrt(cSigma) ;
    resp = logpi(:,oN) - 0.5 * log( 1 + cSigma * clambda ) ...
       + 0.5 * (sqrt(cSigma) * cgamma + musqrtSigma(:,oN) ) .^2 ./ ...
       ( 1 + cSigma * clambda ) - 0.5 * mu2dSigma(:,oN) ; % K * N
    maxresp = max( resp, [] , 1 ) ;
    logZ0 = logZ0 + sum ( maxresp + log( sum( exp( resp - maxresp(oK,:) ) , 1 ) ) ) ;
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                        different math helper functions                     %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

function y = logdet(A)
% log(det(A)) where A is positive-definite.
% This is faster and more stable than using log(det(A)).
[U,p] = chol(A);
if p
    y = 0; % if not positive definite return 0, could also be log(eps)
else
    y = 2*sum(log(diag(U)));
end

% Phi (error) function
function y = Phi(x)
% Phi(x) = int_-infty^x Dz
z = abs(x/sqrt(2));
t = 1.0./(1.0+0.5*z);
y = 0.5*t.*exp(-z.*z-1.26551223+t.*(1.00002368+...
    t.*(0.37409196+t.*(0.09678418+t.*(-0.18628806+t.*(0.27886807+...
    t.*(-1.13520398+t.*(1.48851587+t.*(-0.82215223+t.*0.17087277)))))))));
y = (x<=0.0).*y+(x>0).*(1.0-y);

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                conjugate gradient minimizer by C. Rasmussen                %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

function [X, fX, i] = minimize(X, f, length, P1, P2, P3, P4, P5);
% Minimize a continuous differentiable multivariate function. The starting
% point is given by "X" (D by 1), and the function named in the string "f"
% must return a function value and a vector of partial derivatives. The
% Polack-Ribiere flavour of conjugate gradients is used to compute search
% directions, and a line search using quadratic and cubic polynomial
% approximations and the Wolfe-Powell stopping criteria is used together with
% the slope ratio method for guessing initial step sizes. Additionally a
% number of checks are made to make sure that exploration is taking place and
% that extrapolation will not be unboundedly large. "length" gives the length
% of the run: if it is positive, it gives the maximum number of line searches;
% if negative, its absolute value gives the maximum allowed number of function
% evaluations. You can (optionally) give "length" a second component, which
% indicates the reduction in function value to be expected in the first line
% search (defaults to 1.0). The function returns when either its length is up,
% or if no further progress can be made (i.e. we are at a minimum, or so close
% that due to numerical problems we cannot get any closer). If the function
% terminates within a few iterations, it could be an indication that the
% function value and derivatives are not consistent (i.e. there may be a bug
% in the implementation of your "f" function). The function returns the found
% solution "X", a vector of function values "fX" indicating the progress made,
% and "i", the number of iterations (line searches or function evaluations,
% depending on the sign of "length") used.
%
% Usage: [X, fX, i] = minimize(X, f, length, P1, P2, P3, P4, P5)
%
% See also: checkgrad
%
% Copyright (C) 2001 and 2002 by Carl Edward Rasmussen. Date 2002-02-13

RHO = 0.01;                            % a bunch of constants for line searches
SIG = 0.5;       % RHO and SIG are the constants in the Wolfe-Powell conditions
INT = 0.1;    % don't reevaluate within 0.1 of the limit of the current bracket
EXT = 3.0;                    % extrapolate maximum 3 times the current bracket
MAX = 20;                         % max 20 function evaluations per line search
RATIO = 100;                                      % maximum allowed slope ratio

argstr = [f, '(X'];                      % compose string used to call function
for i = 1:(nargin - 3)
  argstr = [argstr, ',P', int2str(i)];
end
argstr = [argstr, ')'];

if max(size(length)) == 2, red=length(2); length=length(1); else red=1; end
if length>0, S=['Linesearch']; else S=['Function evaluation']; end

i = 0;                                            % zero the run length counter
ls_failed = 0;                             % no previous line search has failed
fX = [];
[f1 df1] = eval(argstr);                      % get function value and gradient
i = i + (length<0);                                            % count epochs?!
s = -df1;                                        % search direction is steepest
d1 = -s'*s;                                                 % this is the slope
z1 = red/(1-d1);                                  % initial step is red/(|s|+1)

while i < abs(length)                                      % while not finished
  i = i + (length>0);                                      % count iterations?!
  X0 = X; f0 = f1; df0 = df1;                   % make a copy of current values
  X = X + z1*s;                                             % begin line search
  [f2 df2] = eval(argstr);
  i = i + (length<0);                                          % count epochs?!
  d2 = df2'*s;
  f3 = f1; d3 = d1; z3 = -z1;             % initialize point 3 equal to point 1
  if length>0, M = MAX; else M = min(MAX, -length-i); end
  success = 0; limit = -1;                             % initialize quantities
  while 1
    while ((f2 > f1+z1*RHO*d1) | (d2 > -SIG*d1)) & (M > 0)
      limit = z1;                                         % tighten the bracket
      if f2 > f1
        z2 = z3 - (0.5*d3*z3*z3)/(d3*z3+f2-f3);                 % quadratic fit
      else
        A = 6*(f2-f3)/z3+3*(d2+d3);                                 % cubic fit
        B = 3*(f3-f2)-z3*(d3+2*d2);
        z2 = (sqrt(B*B-A*d2*z3*z3)-B)/A;       % numerical error possible - ok!
      end
      if isnan(z2) | isinf(z2)
        z2 = z3/2;                  % if we had a numerical problem then bisect
      end
      z2 = max(min(z2, INT*z3),(1-INT)*z3);  % don't accept too close to limits
      z1 = z1 + z2;                                           % update the step
      X = X + z2*s;
      [f2 df2] = eval(argstr);
      M = M - 1; i = i + (length<0);                           % count epochs?!
      d2 = df2'*s;
      z3 = z3-z2;                    % z3 is now relative to the location of z2
    end
    if f2 > f1+z1*RHO*d1 | d2 > -SIG*d1
      break;                                                % this is a failure
    elseif d2 > SIG*d1
      success = 1; break;                                             % success
    elseif M == 0
      break;                                                          % failure
    end
    A = 6*(f2-f3)/z3+3*(d2+d3);                      % make cubic extrapolation
    B = 3*(f3-f2)-z3*(d3+2*d2);
    z2 = -d2*z3*z3/(B+sqrt(B*B-A*d2*z3*z3));        % num. error possible - ok!
    if ~isreal(z2) | isnan(z2) | isinf(z2) | z2 < 0   % num prob or wrong sign?
      if limit < -0.5                               % if we have no upper limit
        z2 = z1 * (EXT-1);                % then extrapolate the maximum amount
      else
        z2 = (limit-z1)/2;                                   % otherwise bisect
      end
    elseif (limit > -0.5) & (z2+z1 > limit)         % extrapolation beyond max?
      z2 = (limit-z1)/2;                                               % bisect
    elseif (limit < -0.5) & (z2+z1 > z1*EXT)       % extrapolation beyond limit
      z2 = z1*(EXT-1.0);                           % set to extrapolation limit
    elseif z2 < -z3*INT
      z2 = -z3*INT;
    elseif (limit > -0.5) & (z2 < (limit-z1)*(1.0-INT))   % too close to limit?
      z2 = (limit-z1)*(1.0-INT);
    end
    f3 = f2; d3 = d2; z3 = -z2;                  % set point 3 equal to point 2
    z1 = z1 + z2; X = X + z2*s;                      % update current estimates
    [f2 df2] = eval(argstr);
    M = M - 1; i = i + (length<0);                             % count epochs?!
    d2 = df2'*s;
  end                                                      % end of line search

  if success                                         % if line search succeeded
    f1 = f2; fX = [fX' f1]';
    fprintf('%s %6i;  Value %4.6e\r', S, i, f1);
    s = (df2'*df2-df1'*df2)/(df1'*df1)*s - df2;      % Polack-Ribiere direction
    tmp = df1; df1 = df2; df2 = tmp;                         % swap derivatives
    d2 = df1'*s;
    if d2 > 0                                      % new slope must be negative
      s = -df1;                              % otherwise use steepest direction
      d2 = -s'*s;
    end
    z1 = z1 * min(RATIO, d1/(d2-realmin));          % slope ratio but max RATIO
    d1 = d2;
    ls_failed = 0;                              % this line search did not fail
  else
    X = X0; f1 = f0; df1 = df0;  % restore point from before failed line search
    if ls_failed | i > abs(length)          % line search failed twice in a row
      break;                             % or we ran out of time, so we give up
    end
    tmp = df1; df1 = df2; df2 = tmp;                         % swap derivatives
    s = -df1;                                                    % try steepest
    d1 = -s'*s;
    z1 = 1/(1-d1);
    ls_failed = 1;                                    % this line search failed
  end
end
fprintf('\n');

function  [X, info, perf, D] = ucminf(fun,par, x0, opts, D0)
%UCMINF  BFGS method for unconstrained nonlinear optimization:
% Find  xm = argmin{f(x)} , where  x  is an n-vector and the scalar
% function  F  with gradient  g  (with elements  g(i) = DF/Dx_i )
% must be given by a MATLAB function with declaration
%           function  [F, g] = fun(x, par)
% par  holds parameters of the function.  It may be dummy.
%
% Call:  [X, info {, perf {, D}}] = ucminf(fun,par, x0, opts {,D0})
%
% Input parameters
% fun  :  String with the name of the function.
% par  :  Parameters of the function.  May be empty.
% x0   :  Starting guess for  x .
% opts :  Vector with 4 elements:
%         opts(1) :  Expected length of initial step
%         opts(2:4)  used in stopping criteria:
%             ||g||_inf <= opts(2)                     or
%             ||dx||_2 <= opts(3)*(opts(3) + ||x||_2)  or
%             no. of function evaluations exceeds  opts(4) .
%         Any illegal element in  opts  is replaced by its
%         default value,  [1  1e-4*||g(x0)||_inf  1e-8  100]
% D0   :  (optional)  If present, then approximate inverse Hessian
%         at  x0 .  Otherwise, D0 := I
% Output parameters
% X    :  If  perf  is present, then array, holding the iterates
%         columnwise.  Otherwise, computed solution vector.
% info :  Performance information, vector with 6 elements:
%         info(1:3) = final values of [f(x)  ||g||_inf  ||dx||_2]
%         info(4:5) = no. of iteration steps and evaluations of (F,g)
%         info(6) = 1 :  Stopped by small gradient
%                   2 :  Stopped by small x-step
%                   3 :  Stopped by  opts(4) .
%                   4 :  Stopped by zero step.
% perf :  (optional). If present, then array, holding
%          perf(1:2,:) = values of  f(x) and ||g||_inf
%          perf(3:5,:) = Line search info:  values of
%                        alpha, phi'(alpha), no. fct. evals.
%          perf(6,:)   = trust region radius.
% D    :  (optional). If present, then array holding the
%         approximate inverse Hessian at  X(:,end) .

%  Hans Bruun Nielsen,  IMM, DTU.  00.12.18 / 02.01.22

  % Check call
  [x n F g opts] = check(fun,par,x0,opts);
  if  nargin > 4,  D = checkD(n,D0);  fst = 0;
  else,            D = eye(n);  fst = 1;    end
  %  Finish initialization
  k = 1;   kmax = opts(4);   neval = 1;   ng = norm(g,inf);
  Delta = opts(1);
  Trace = nargout > 2;
  if  Trace
    X = x(:)*ones(1,kmax+1);
    perf = [F; ng; zeros(3,1); Delta]*ones(1,kmax+1);
  end
  found = ng <= opts(2);
  h = zeros(size(x));  nh = 0;
  ngs = ng * ones(1,3);

  while  ~found
    %  Previous values
    xp = x;   gp = g;   Fp = F;   nx = norm(x);
    ngs = [ngs(2:3) ng];
    h = D*(-g(:));   nh = norm(h);   red = 0;
    if  nh <= opts(3)*(opts(3) + nx),  found = 2;
    else
      if  fst | nh > Delta  % Scale to ||h|| = Delta
        h = (Delta / nh) * h;   nh = Delta;
        fst = 0;  red = 1;
      end
      k = k+1;
      %  Line search
      [al  F  g  dval  slrat] = softline(fun,par,x,F,g, h);
      if  al < 1  % Reduce Delta
        Delta = .35 * Delta;
      elseif   red & (slrat > .7)  % Increase Delta
        Delta = 3*Delta;
      end
      %  Update  x, neval and ||g||
      x = x + al*h;   neval = neval + dval;  ng = norm(g,inf);
      if  Trace
        X(:,k) = x(:);
        perf(:,k) = [F; ng; al; dot(h,g); dval; Delta];
      end
      h = x - xp;   nh = norm(h);
      if  nh == 0
        found = 4;
      else
        y = g - gp;   yh = dot(y,h);
        if  yh > sqrt(eps) * nh * norm(y)
          %  Update  D
          v = D*y(:);   yv = dot(y,v);
          a = (1 + yv/yh)/yh;   w = (a/2)*h(:) - v/yh;
          D = D + w*h' + h*w';
        end  % update D
        %  Check stopping criteria
        thrx = opts(3)*(opts(3) + norm(x));
        if      ng <= opts(2),              found = 1;
        elseif  nh <= thrx,                 found = 2;
        elseif  neval >= kmax,              found = 3;
%        elseif  neval > 20 & ng > 1.1*max(ngs), found = 5;
        else,   Delta = max(Delta, 2*thrx);  end
      end
    end  % Nonzero h
  end  % iteration

  %  Set return values
  if  Trace
    X = X(:,1:k);   perf = perf(:,1:k);
  else,  X = x;  end
  info = [F  ng  nh  k-1  neval  found];

% ==========  auxiliary functions  =================================

function  [x,n, F,g, opts] = check(fun,par,x0,opts0)
% Check function call
  x = x0(:);   sx = size(x);   n = max(sx);
  if  (min(sx) > 1)
      error('x0  should be a vector'), end
  [F g] = feval(fun,x,par);
  sf = size(F);   sg = size(g);
  if  any(sf-1) | ~isreal(F)
      error('F  should be a real valued scalar'), end
  if  (min(sg) ~= 1) | (max(sg) ~= n)
      error('g  should be a vector of the same length as  x'), end

  so = size(opts0);
  if  (min(so) ~= 1) | (max(so) < 4) | any(~isreal(opts0(1:4)))
      error('opts  should be a real valued vector of length 4'), end
  opts = opts0(1:4);   opts = opts(:).';
  i = find(opts <= 0);
  if  length(i)    % Set default values
    d = [1  1e-4*norm(g, inf)  1e-8  100];
    opts(i) = d(i);
  end
% ----------  end of  check  ---------------------------------------

function  D = checkD(n,D0)
% Check given inverse Hessian
  D = D0;   sD = size(D);
  if  any(sD - n)
      error(sprintf('D  should be a square matrix of size %g',n)), end
  % Check symmetry
  dD = D - D';   ndD = norm(dD(:),inf);
  if  ndD > 10*eps*norm(D(:),inf)
      error('The given D0 is not symmetric'), end
  if  ndD,  D = (D + D')/2; end  % Symmetrize
  [R p] = chol(D);
  if  p
      error('The given D0 is not positive definite'), end

function  [alpha,fn,gn,neval,slrat] = softline(fun,fpar, x,f,g, h)
% Soft line search:  Find  alpha = argmin_a{f(x+a*h)}
  % Default return values
  alpha = 0;   fn = f;   gn = g;   neval = 0;  slrat = 1;
  n = length(x);
  % Initial values
  dfi0 = dot(h,gn);
  if  dfi0 >= 0,  return, end
  fi0 = f;    slope0 = .05*dfi0;   slopethr = .995*dfi0;
  dfia = dfi0;   stop = 0;   ok = 0;   neval = 0;   b = 1;

  while   ~stop
    [fib g] = feval(fun,x+b*h,fpar);  neval = neval + 1;
    dfib = dot(g,h);
    if  b == 1, slrat = dfib/dfi0; end
    if  fib <= fi0 + slope0*b    % New lower bound
      if  dfib > abs(slopethr),  stop = 1;
      else
        alpha = b;   fn = fib;   gn = g;   dfia = dfib;
        ok = 1;   slrat = dfib/dfi0;
        if  (neval < 5) & (b < 2) & (dfib < slopethr)
          % Augment right hand end
          b = 2*b;
        else,  stop = 1; end
      end
    else,  stop = 1; end
  end

  stop = ok;  xfd = [alpha fn dfia; b fib dfib; b fib dfib];
  while   ~stop
    c = interpolate(xfd,n);
    [fic g] = feval(fun, x+c*h, fpar);   neval = neval+1;
    xfd(3,:) = [c  fic  dot(g,h)];
    if fic < fi0 + slope0*c    % New lower bound
      xfd(1,:) = xfd(3,:);   ok = 1;
      alpha = c;   fn = fic;   gn = g;  slrat = xfd(3,3)/dfi0;
    else,  xfd(2,:) = xfd(3,:);  ok = 0; end
    % Check stopping criteria
    ok = ok & abs(xfd(3,3)) <= abs(slopethr);
    stop = ok | neval >= 5 | diff(xfd(1:2,1)) <= 0;
  end  % while
%------------  end of  softline  ------------------------------

function  alpha = interpolate(xfd,n);
% Minimizer of parabola given by
% xfd(1:2,1:3) = [a fi(a) fi'(a); b fi(b) dummy]
  a = xfd(1,1);   b = xfd(2,1);   d = b - a;   dfia = xfd(1,3);
  C = diff(xfd(1:2,2)) - d*dfia;
  if C >= 5*n*eps*b    % Minimizer exists
    A = a - .5*dfia*(d^2/C);
    d = 0.1*d;   alpha = min(max(a+d, A), b-d);
  else
    alpha = (a+b)/2;
  end
%------------  end of  interpolate  --------------------------
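Both optimizers above take an objective of the form [F, g] = fun(x, ...) that returns the function value and its gradient. The sketch below is purely illustrative and not part of icamf.m: quadobj, the parameter values, and the option settings are assumptions, and minimize/ucminf are assumed to have been copied into files of their own (inside icamf.m they are private subfunctions). It shows how the two routines would typically be called on the same small quadratic problem.

% quadobj.m -- hypothetical test objective (not part of icamf.m):
% f(x) = 0.5*x'*A*x - b'*x with gradient g = A*x - b.
function [F, g] = quadobj(x, par)
F = 0.5 * x' * par.A * x - par.b' * x;
g = par.A * x - par.b;

% ---- driver script (illustrative values) ----
par.A = [3 1; 1 2];   par.b = [1; -1];  % positive definite A => unique minimum
x0 = zeros(2, 1);

% Conjugate gradients: at most 50 line searches; trailing arguments are passed
% through as P1,..., so the objective is evaluated as quadobj(X, par).
[xcg, fX, nls] = minimize(x0, 'quadobj', 50, par);

% BFGS: opts = [initial step, ||g||_inf tolerance, step tolerance, max evals].
opts = [1  1e-6  1e-10  100];
[xbfgs, info] = ucminf('quadobj', par, x0, opts);

% Both solutions should agree with the closed-form minimizer par.A \ par.b.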

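A minimal sketch for the math helpers, again assuming they have been extracted from icamf.m into files of their own. Only the struct layout read by logZ0_mog (par.Sindx plus the K x 1 fields par.S(i).pi, par.S(i).mu, par.S(i).Sigma) and the M x N shape of gamma and lambda come from the code above; every numerical value is made up. The last line checks that Phi matches the standard normal CDF, 0.5*erfc(-x/sqrt(2)), which its rational approximation reproduces to roughly seven digits.

% Toy driver for logZ0_mog and Phi (illustrative values only).
M = 2;  N = 5;  K = 3;               % sources, samples, mixture components
for i = 1:M
  par.S(i).pi    = ones(K,1)/K;      % K x 1 mixing proportions
  par.S(i).mu    = [-1; 0; 1];       % K x 1 component means
  par.S(i).Sigma = [0.5; 1; 0.5];    % K x 1 component variances
end
par.Sindx = 1:M;                     % one prior per source

gamma  = randn(M, N);                % M x N, indexed row-wise by logZ0_mog
lambda = ones(M, N);                 % positive, so log(1 + Sigma*lambda) is real

logZ0 = logZ0_mog(gamma, lambda, par)                    % scalar log-normalizer

x = -2:0.5:2;
max(abs(Phi(x) - 0.5*erfc(-x/sqrt(2))))                  % approx. 1e-7 or below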