dfp.m (from the MATLAB algorithm collection)

function [x,ev,j] = dfp (x0,tol,v,m,f)
%-----------------------------------------------------------------------
% Usage:       [x,ev,j] = dfp (x0,tol,v,m,f)
%
% Description: Use the Davidon-Fletcher-Powell method with self-scaling
%              to solve the following n-dimensional unconstrained
%              optimization problem:
%
%                 minimize: f(x)
%
% Inputs:       x0  = n by 1 vector containing initial guess
%               tol = error tolerance used to terminate search (tol >= 0)
%               v   = order of method:
%
%                        v       method
%                        --------------------
%                        1   steepest descent
%                        n   dfp method
%                        --------------------  
%
%               m   = maximum number of iterations (m >= 1)
%               f   = string specifying name of objective function:
%                     f(x). The form of f is:
%
%                        function y = f(x)
%
%                     When f is called with the n by 1 vector x, it
%                     must return the scalar value of the objective
%                     function f(x).
%
% Outputs:      x  = n by 1 solution vector
%               ev = number of scalar function evaluations performed
%               j  = number of iterations performed. If j < m, then
%                    one of the following convergence criteria was
%                    satisfied, where e = 2*sqrt(eps), eps denotes
%                    the machine epsilon, and dx = h*d is the step
%                    taken along the search direction d:
%
%                       (||df(x)/dx|| <= tol) or (||dx|| <= e*||x||)
%
% Notes:        If f(x) is quadratic, dfp should converge in at
%               most n iterations. 
%-----------------------------------------------------------------------
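
% Method sketch (standard DFP background, restated for reference):
% with dx = the step taken, dg = the resulting change in the gradient,
% r = Q*dg, b1 = dg'*Q*dg, and b2 = dx'*dg, the self-scaling DFP
% update applied on each iteration below is
%
%    Q <- (b2/b1)*(Q - (r*r')/b1) + (dx*dx')/b2
%
% where the leading factor b2/b1 is the self-scaling term.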

% Initialize

   n = length(x0);
   chkvec(x0,1,'dfp');
   tol = args (tol,0,tol,3,'dfp');
   v   = args (v,1,n,4,'dfp');
   m   = args (m,1,m,5,'dfp');
   h = 1;                               % line-search step length
   j = 0;                               % iteration counter
   k = 0;                               % iterations since last restart
   ev = 0;                              % scalar function evaluation count
   Q  = eye (n,n);                      % estimate of inverse Hessian
   dx = zeros (n,1);                    % step taken
   dg = zeros (n,1);                    % change in gradient
   g1 = zeros (n,1);                    % current gradient
   r  = zeros (n,1);                    % Q*dg
   e = 2*sqrt(eps);                     % relative step-size floor
   x = x0;
   g1 = gradfmu (f,'','',x,0);          % numerical gradient at x0
   ev = ev + 2*n;                       % each gradient is charged 2*n evaluations
   gamma = tol + 1;                     % gradient norm (forces first pass)
   s = 1;                               % step norm (forces first pass)
   err = 0;                             % line-search error flag

   hwbar = waitbar(0,'Computing Optimum: dfp');
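
% Iterate until the iteration limit is reached, the gradient norm
% falls to tol, the step becomes negligible relative to ||x||, or
% the line search fails to bracket a minimum (err is set).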
   while (j < m) && (gamma > tol) && (s > e*norm(x,inf)) && ~err

% Perform line search along d = -Qg 
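% (bracket appears to return a three-point pattern a < b < c that
% brackets a minimizer of f along d, and golden then refines the step
% length h by a golden-section search; this is inferred from usage,
% since the helper routines are not part of this listing)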

      waitbar (max(j/m,tol/gamma),hwbar)
      d = -Q*g1;
      delta = norm (d,inf);
      [a,b,c,err,eval] = bracket (delta,x,d,0,f,'','');
      ev = ev + eval;
      if ~err 
         [h,eval] = golden (x,d,a,c,e,0,f,'','');
         ev = ev + eval;
         dx = h*d; 
         x = x + dx;
         s = norm (dx,inf);
         
% Compute new gradient 

         dg = g1;
         g1 = gradfmu (f,'','',x,0);
         ev = ev + 2*n; 
         gamma = norm (g1,inf);
         dg = g1 - dg;

% Update estimate Q of inverse of Hessian matrix 

         r = Q*dg;
         b1 = dot (r,dg);
         b2 = dot (dx,dg);
         beta = b2/b1;                  % self-scaling factor
         for i = 1 : n
            for p = 1 : n
               Q(i,p) = beta*(Q(i,p) - r(i)*r(p)/b1) + dx(i)*dx(p)/b2;  
            end
         end
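
% (the element-wise loop above is equivalent to the vectorized update
%  Q = beta*(Q - (r*r')/b1) + (dx*dx')/b2; the loop form is kept to
%  match the original listing)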

% Update indices, check for restart 
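% (Q is reset to the identity every v iterations: v = 1 makes every
% search direction d = -g1, i.e. steepest descent, while v = n gives
% the DFP method with periodic restarts, matching the table in the
% header)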
   
         j = j + 1;
         k = k + 1;
         if k == v 
            k = 0;
           Q = eye(n,n);
         end
      end
   end
   close (hwbar)
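
For reference, a minimal usage sketch follows. It assumes the companion
helper routines (args, chkvec, gradfmu, bracket, golden) from the same
collection are on the MATLAB path; rosen.m is a hypothetical objective
file written only for this example.

   % rosen.m: two-variable Rosenbrock objective, in the form dfp expects
   function y = rosen(x)
      y = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;

   % driver: start at (-1.2, 1) with the full DFP update (v = n = 2)
   x0 = [-1.2; 1];
   [x,ev,j] = dfp (x0, 1e-6, 2, 200, 'rosen');

Since this f(x) is not quadratic, the n-iteration convergence noted in
the header does not apply; j reports the iterations actually used.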