@c min.texi
@node Minimization Stopping Parameters
@section Stopping Parameters
@cindex minimization, stopping parameters

A minimization procedure should stop when one of the following
conditions is true:

@itemize @bullet
@item
A minimum has been found to within the user-specified precision.

@item
A user-specified maximum number of iterations has been reached.

@item
An error has occurred.
@end itemize

@noindent
The handling of these conditions is under user control.  The function
below allows the user to test the precision of the current result.

@deftypefun int gsl_min_test_interval (double @var{x_lower}, double @var{x_upper}, double @var{epsabs}, double @var{epsrel})
This function tests for the convergence of the interval [@var{x_lower},
@var{x_upper}] with absolute error @var{epsabs} and relative error
@var{epsrel}.  The test returns @code{GSL_SUCCESS} if the following
condition is achieved,

@tex
\beforedisplay
$$
|a - b| < \hbox{\it epsabs} + \hbox{\it epsrel\/}\, \min(|a|,|b|)
$$
\afterdisplay
@end tex
@ifinfo
@example
|a - b| < epsabs + epsrel min(|a|,|b|)
@end example
@end ifinfo

@noindent
when the interval @math{x = [a,b]} does not include the origin.  If the
interval includes the origin then @math{\min(|a|,|b|)} is replaced by
zero (which is the minimum value of @math{|x|} over the interval).
This
ensures that the relative error is accurately estimated for minima close
to the origin.

This condition on the interval also implies that any estimate of the
minimum @math{x_m} in the interval satisfies the same condition with respect
to the true minimum @math{x_m^*},

@tex
\beforedisplay
$$
|x_m - x_m^*| < \hbox{\it epsabs} + \hbox{\it epsrel\/}\, x_m^*
$$
\afterdisplay
@end tex
@ifinfo
@example
|x_m - x_m^*| < epsabs + epsrel x_m^*
@end example
@end ifinfo

@noindent
assuming that the true minimum @math{x_m^*} is contained within the interval.
@end deftypefun

@comment ============================================================
@node Minimization Algorithms
@section Minimization Algorithms

The minimization algorithms described in this section require an initial
interval which is guaranteed to contain a minimum --- if @math{a} and
@math{b} are the endpoints of the interval and @math{x} is an estimate
of the minimum then @math{f(a) > f(x) < f(b)}.  This ensures that the
function has at least one minimum somewhere in the interval.  If a valid
initial interval is used then these algorithms cannot fail, provided the
function is well-behaved.

@deffn {Minimizer} gsl_min_fminimizer_goldensection
@cindex golden section algorithm for finding minima
@cindex minimum finding, golden section algorithm
The @dfn{golden section algorithm} is the simplest method of bracketing
the minimum of a function.  It is the slowest algorithm provided by the
library, with linear convergence.

On each iteration, the algorithm first compares the subintervals from
the endpoints to the current minimum.  The larger subinterval is divided
in a golden section (using the famous ratio @math{(3-\sqrt 5)/2 =
0.3819660}@dots{}) and the value of the function at this new point is
calculated.  The new value is used with the constraint @math{f(a') >
f(x') < f(b')} to select a new interval containing the minimum, by
discarding the least useful point.  This procedure can be continued
indefinitely until the interval is sufficiently small.
Choosing the golden section as the bisection ratio can be shown to
provide the fastest convergence for this type of algorithm.
@end deffn

@comment ============================================================
@deffn {Minimizer} gsl_min_fminimizer_brent
@cindex Brent's method for finding minima
@cindex minimum finding, Brent's method
The @dfn{Brent minimization algorithm} combines a parabolic
interpolation with the golden section algorithm.  This produces a fast
algorithm which is still robust.

The outline of the algorithm can be summarized as follows: on each
iteration Brent's method approximates the function using an
interpolating parabola through three existing points.  The minimum of the
parabola is taken as a guess for the minimum.  If it lies within the
bounds of the current interval then the interpolating point is accepted,
and used to generate a smaller interval.  If the interpolating point is
not accepted then the algorithm falls back to an ordinary golden section
step.  The full details of Brent's method include some additional checks
to improve convergence.
@end deffn

@comment ============================================================
@node Minimization Examples
@section Examples

The following program uses the Brent algorithm to find the minimum of
the function @math{f(x) = \cos(x) + 1}, which occurs at @math{x = \pi}.
The starting interval is @math{(0,6)}, with an initial guess for the
minimum of @math{2}.

@example
#include <stdio.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_min.h>

double fn1 (double x, void * params)
@{
  return cos(x) + 1.0;
@}

int
main (void)
@{
  int status;
  int iter = 0, max_iter = 100;
  const gsl_min_fminimizer_type *T;
  gsl_min_fminimizer *s;
  double m = 2.0, m_expected = M_PI;
  double a = 0.0, b = 6.0;
  gsl_function F;

  F.function = &fn1;
  F.params = 0;

  T = gsl_min_fminimizer_brent;
  s = gsl_min_fminimizer_alloc (T);
  gsl_min_fminimizer_set (s, &F, m, a, b);

  printf ("using %s method\n",
          gsl_min_fminimizer_name (s));

  printf ("%5s [%9s, %9s] %9s %10s %9s\n",
          "iter", "lower", "upper", "min",
          "err", "err(est)");

  printf ("%5d [%.7f, %.7f] %.7f %+.7f %.7f\n",
          iter, a, b,
          m, m - m_expected, b - a);

  do
    @{
      iter++;
      status = gsl_min_fminimizer_iterate (s);

      m = gsl_min_fminimizer_minimum (s);
      a = gsl_min_fminimizer_x_lower (s);
      b = gsl_min_fminimizer_x_upper (s);

      status
        = gsl_min_test_interval (a, b, 0.001, 0.0);

      if (status == GSL_SUCCESS)
        printf ("Converged:\n");

      printf ("%5d [%.7f, %.7f] "
              "%.7f %+.7f %.7f\n",
              iter, a, b,
              m, m - m_expected, b - a);
    @}
  while (status == GSL_CONTINUE && iter < max_iter);

  return status;
@}
@end example

@noindent
Here are the results of the minimization procedure.

@smallexample
bash$ ./a.out
    0 [0.0000000, 6.0000000] 2.0000000 -1.1415927 6.0000000
    1 [2.0000000, 6.0000000] 3.2758640 +0.1342713 4.0000000
    2 [2.0000000, 3.2831929] 3.2758640 +0.1342713 1.2831929
    3 [2.8689068, 3.2831929] 3.2758640 +0.1342713 0.4142862
    4 [2.8689068, 3.2831929] 3.2758640 +0.1342713 0.4142862
    5 [2.8689068, 3.2758640] 3.1460585 +0.0044658 0.4069572
    6 [3.1346075, 3.2758640] 3.1460585 +0.0044658 0.1412565
    7 [3.1346075, 3.1874620] 3.1460585 +0.0044658 0.0528545
    8 [3.1346075, 3.1460585] 3.1460585 +0.0044658 0.0114510
    9 [3.1346075, 3.1460585] 3.1424060 +0.0008133 0.0114510
   10 [3.1346075, 3.1424060] 3.1415885 -0.0000041 0.0077985
Converged:
   11 [3.1415885, 3.1424060] 3.1415927 -0.0000000 0.0008175
@end smallexample

@node Minimization References and Further Reading
@section References and Further Reading

@noindent
Further information on Brent's algorithm is available in the following
book,

@itemize @asis
@item
Richard Brent, @cite{Algorithms for minimization without derivatives},
Prentice-Hall (1973)
@end itemize
