be substituted at runtime without modifications to the code.

@deftypefun int gsl_multimin_fdfminimizer_iterate (gsl_multimin_fdfminimizer *@var{s})
These functions perform a single iteration of the minimizer @var{s}.  If
the iteration encounters an unexpected problem then an error code will
be returned.
@end deftypefun

@noindent
The minimizer maintains a current best estimate of the minimum at all
times.  This information can be accessed with the following auxiliary
functions,

@deftypefun {gsl_vector *} gsl_multimin_fdfminimizer_x (const gsl_multimin_fdfminimizer * @var{s})
@deftypefunx double gsl_multimin_fdfminimizer_minimum (const gsl_multimin_fdfminimizer * @var{s})
@deftypefunx {gsl_vector *} gsl_multimin_fdfminimizer_gradient (const gsl_multimin_fdfminimizer * @var{s})
These functions return the current best estimate of the location of the
minimum, the value of the function at that point and its gradient, for
the minimizer @var{s}.
@end deftypefun

@deftypefun int gsl_multimin_fdfminimizer_restart (gsl_multimin_fdfminimizer *@var{s})
This function resets the minimizer @var{s} to use the current point as a
new starting point.
@end deftypefun

@node Multimin Stopping Criteria
@section Stopping Criteria
A minimization procedure should stop when one of the following
conditions is true:

@itemize @bullet
@item
A minimum has been found to within the user-specified precision.

@item
A user-specified maximum number of iterations has been reached.

@item
An error has occurred.
@end itemize

@noindent
The handling of these conditions is under user control.  The functions
below allow the user to test the precision of the current result.

@deftypefun int gsl_multimin_test_gradient (const gsl_vector * @var{g}, double @var{epsabs})
This function tests the norm of the gradient @var{g} against the
absolute tolerance @var{epsabs}.  The gradient of a multidimensional
function goes to zero at a minimum.
The test returns @code{GSL_SUCCESS} if the following condition is
achieved,
@tex
\beforedisplay
$$
|g| < \hbox{\it epsabs}
$$
\afterdisplay
@end tex
@ifinfo
@example
|g| < epsabs
@end example
@end ifinfo
@noindent
and returns @code{GSL_CONTINUE} otherwise.  A suitable choice of
@var{epsabs} can be made from the desired accuracy in the function for
small variations in @math{x}.  The relationship between these quantities
is given by @c{$\delta f = g\,\delta x$}
@math{\delta f = g \delta x}.
@end deftypefun

@node Multimin Algorithms
@section Algorithms
There are several minimization methods available.  The best choice of
algorithm depends on the problem.  Each of the algorithms uses the value
of the function and its gradient at each evaluation point.

@deffn {Minimizer} gsl_multimin_fdfminimizer_conjugate_fr
This is the Fletcher-Reeves conjugate gradient algorithm.  The conjugate
gradient algorithm proceeds as a succession of line minimizations.  The
sequence of search directions is used to build up an approximation to the
curvature of the function in the neighborhood of the minimum.  An
initial search direction @var{p} is chosen using the gradient and line
minimization is carried out in that direction.  The accuracy of the line
minimization is specified by the parameter @var{tol}.  At the minimum
along this line the function gradient @var{g} and the search direction
@var{p} are orthogonal.  The line minimization terminates when
@c{$p\cdot g < tol |p| |g|$}
@math{dot(p,g) < tol |p| |g|}.  The
search direction is updated using the Fletcher-Reeves formula
@math{p' = g' - \beta g} where @math{\beta=-|g'|^2/|g|^2}, and
the line minimization is then repeated for the new search
direction.
@end deffn

@deffn {Minimizer} gsl_multimin_fdfminimizer_conjugate_pr
This is the Polak-Ribiere conjugate gradient algorithm.  It is similar
to the Fletcher-Reeves method, differing only in the choice of the
coefficient @math{\beta}.
Both methods work well when the evaluation
point is close enough to the minimum of the objective function that it
is well approximated by a quadratic hypersurface.
@end deffn

@deffn {Minimizer} gsl_multimin_fdfminimizer_vector_bfgs
This is the vector Broyden-Fletcher-Goldfarb-Shanno conjugate gradient
algorithm.  It is a quasi-Newton method which builds up an approximation
to the second derivatives of the function @math{f} using the difference
between successive gradient vectors.  By combining the first and second
derivatives the algorithm is able to take Newton-type steps towards the
function minimum, assuming quadratic behavior in that region.
@end deffn

@deffn {Minimizer} gsl_multimin_fdfminimizer_steepest_descent
The steepest descent algorithm follows the downhill gradient of the
function at each step.  When a downhill step is successful the step-size
is increased by a factor of two.  If the downhill step leads to a higher
function value then the algorithm backtracks and the step size is
decreased using the parameter @var{tol}.  A suitable value of @var{tol}
for most applications is 0.1.  The steepest descent method is
inefficient and is included only for demonstration purposes.
@end deffn

@node Multimin Examples
@section Examples
This example program finds the minimum of the paraboloid function
defined earlier.  The location of the minimum is offset from the origin
in @math{x} and @math{y}, and the function value at the minimum is
non-zero.  The main program is given below; it requires the example
function given earlier in this chapter.

@smallexample
int
main (void)
@{
  size_t iter = 0;
  int status;

  const gsl_multimin_fdfminimizer_type *T;
  gsl_multimin_fdfminimizer *s;

  /* Position of the minimum (1,2).
  */
  double par[2] = @{ 1.0, 2.0 @};

  gsl_vector *x;
  gsl_multimin_function_fdf my_func;

  my_func.f = &my_f;
  my_func.df = &my_df;
  my_func.fdf = &my_fdf;
  my_func.n = 2;
  my_func.params = &par;

  /* Starting point, x = (5,7) */
  x = gsl_vector_alloc (2);
  gsl_vector_set (x, 0, 5.0);
  gsl_vector_set (x, 1, 7.0);

  T = gsl_multimin_fdfminimizer_conjugate_fr;
  s = gsl_multimin_fdfminimizer_alloc (T, 2);

  gsl_multimin_fdfminimizer_set (s, &my_func, x, 0.01, 1e-4);

  do
    @{
      iter++;
      status = gsl_multimin_fdfminimizer_iterate (s);

      if (status)
        break;

      status = gsl_multimin_test_gradient (s->gradient, 1e-3);

      if (status == GSL_SUCCESS)
        printf ("Minimum found at:\n");

      printf ("%5d %.5f %.5f %10.5f\n", (int) iter,
              gsl_vector_get (s->x, 0),
              gsl_vector_get (s->x, 1),
              s->f);
    @}
  while (status == GSL_CONTINUE && iter < 100);

  gsl_multimin_fdfminimizer_free (s);
  gsl_vector_free (x);

  return 0;
@}
@end smallexample

@noindent
The initial step-size is chosen as 0.01, a conservative estimate in this
case, and the line minimization parameter is set at 0.0001.  The program
terminates when the norm of the gradient has been reduced below
0.001.
The output of the conjugate gradient program is shown below,

@example
         x       y         f
    1 4.99629 6.99072  687.84780
    2 4.98886 6.97215  683.55456
    3 4.97400 6.93501  675.01278
    4 4.94429 6.86073  658.10798
    5 4.88487 6.71217  625.01340
    6 4.76602 6.41506  561.68440
    7 4.52833 5.82083  446.46694
    8 4.05295 4.63238  261.79422
    9 3.10219 2.25548   75.49762
   10 2.85185 1.62963   67.03704
   11 2.19088 1.76182   45.31640
   12 0.86892 2.02622   30.18555
Minimum found at:
   13 1.00000 2.00000   30.00000
@end example

@noindent
Note that the algorithm gradually increases the step size as it
successfully moves downhill, as can be seen by plotting the successive
points.

@iftex
@sp 1
@center @image{multimin,4in}
@end iftex

@noindent
The conjugate gradient algorithm finds the minimum on its second
direction because the function is purely quadratic.  Additional
iterations would be needed for a more complicated function.

@node Multimin References and Further Reading
@section References and Further Reading
@noindent
A brief description of multidimensional minimization algorithms and
further references can be found in the following book,

@itemize @asis
@item C.W. Ueberhuber,
@cite{Numerical Computation (Volume 2)}, Chapter 14, Section 4.4
``Minimization Methods'', p. 325--335, Springer (1997), ISBN
3-540-62057-5.
@end itemize

@noindent
