J^{-1} \to J^{-1} - (J^{-1} df - dx) dx^T J^{-1} / dx^T J^{-1} df$$
\afterdisplay
@end tex
@ifinfo
@example
J^@{-1@} \to J^@{-1@} - (J^@{-1@} df - dx) dx^T J^@{-1@} / dx^T J^@{-1@} df
@end example
@end ifinfo
@noindent
where the vectors @math{dx} and @math{df} are the changes in @math{x}
and @math{f}.  On the first iteration the inverse Jacobian is estimated
using finite differences, as in the discrete Newton algorithm.

This approximation gives a fast update but is unreliable if the changes
are not small, and the estimate of the inverse Jacobian becomes worse
as time passes.  The algorithm has a tendency to become unstable unless
it starts close to the root.  The Jacobian is refreshed if this
instability is detected (consult the source for details).

This algorithm is not recommended and is included only for
demonstration purposes.
@end deffn

@comment ============================================================
@node Example programs for Multidimensional Root finding
@section Examples

The multidimensional solvers are used in a similar way to the
one-dimensional root finding algorithms.  This first example
demonstrates the @code{hybrids} scaled-hybrid algorithm, which does not
require derivatives.  The program solves the Rosenbrock system of
equations,

@tex
\beforedisplay
$$
f_1 (x, y) = a (1 - x),~f_2 (x, y) = b (y - x^2)
$$
\afterdisplay
@end tex
@ifinfo
@example
f_1 (x, y) = a (1 - x)
f_2 (x, y) = b (y - x^2)
@end example
@end ifinfo
@noindent
with @math{a = 1}, @math{b = 10}.  The solution of this system lies at
@math{(x,y) = (1,1)} in a narrow valley.

The first stage of the program is to define the system of equations,

@example
#include <stdlib.h>
#include <stdio.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_multiroots.h>

struct rparams
  @{
    double a;
    double b;
  @};

int
rosenbrock_f (const gsl_vector * x, void *params,
              gsl_vector * f)
@{
  double a = ((struct rparams *) params)->a;
  double b = ((struct rparams *) params)->b;

  double x0 = gsl_vector_get (x, 0);
  double x1 = gsl_vector_get (x, 1);

  double y0 = a * (1 - x0);
  double y1 = b * (x1 - x0 * x0);

  gsl_vector_set (f, 0, y0);
  gsl_vector_set (f, 1, y1);

  return GSL_SUCCESS;
@}
@end example

@noindent
The main program begins by creating the function object @code{f}, with
the arguments @code{(x,y)} and parameters @code{(a,b)}.  The solver
@code{s} is initialized to use this function, with the @code{hybrids}
method.

@example
int
main (void)
@{
  const gsl_multiroot_fsolver_type *T;
  gsl_multiroot_fsolver *s;

  int status;
  size_t iter = 0;

  const size_t n = 2;
  struct rparams p = @{1.0, 10.0@};
  gsl_multiroot_function f = @{&rosenbrock_f, n, &p@};

  double x_init[2] = @{-10.0, -5.0@};
  gsl_vector *x = gsl_vector_alloc (n);

  gsl_vector_set (x, 0, x_init[0]);
  gsl_vector_set (x, 1, x_init[1]);

  T = gsl_multiroot_fsolver_hybrids;
  s = gsl_multiroot_fsolver_alloc (T, n);
  gsl_multiroot_fsolver_set (s, &f, x);

  print_state (iter, s);

  do
    @{
      iter++;
      status = gsl_multiroot_fsolver_iterate (s);

      print_state (iter, s);

      if (status)   /* check if solver is stuck */
        break;

      status =
        gsl_multiroot_test_residual (s->f, 1e-7);
    @}
  while (status == GSL_CONTINUE && iter < 1000);

  printf ("status = %s\n", gsl_strerror (status));

  gsl_multiroot_fsolver_free (s);
  gsl_vector_free (x);
  return 0;
@}
@end example

@noindent
Note that it is important to check the return status of each solver
step, in case the algorithm becomes stuck.  If an error condition is
detected, indicating that the algorithm cannot proceed, then the error
can be reported to the user, a new starting point chosen, or a
different algorithm used.
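
One way to act on a stuck solver is to package the iteration loop into
a helper and retry it from a different starting point.  The following
is a minimal sketch under the same setup as the example above; the
function name @code{solve_from} and the retry policy are our own
illustration and are not part of GSL,

@example
/* Minimal sketch (not part of GSL): attempt to solve the
   system from the starting point in x, using the same
   iteration loop as the example above, and report the final
   solver status to the caller.  */
int
solve_from (gsl_multiroot_fsolver * s,
            gsl_multiroot_function * func,
            gsl_vector * x)
@{
  int status;
  size_t iter = 0;

  gsl_multiroot_fsolver_set (s, func, x);

  do
    @{
      iter++;
      status = gsl_multiroot_fsolver_iterate (s);

      if (status)   /* stuck: give up on this starting point */
        break;

      status = gsl_multiroot_test_residual (s->f, 1e-7);
    @}
  while (status == GSL_CONTINUE && iter < 1000);

  return status;
@}
@end example

@noindent
A caller could then invoke @code{solve_from} with a handful of
perturbed initial guesses before giving up, or switch to a different
solver type on failure.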
The intermediate state of the solution is displayed by the following
function.  The solver state contains the vector @code{s->x} which is
the current position, and the vector @code{s->f} with the corresponding
function values.

@example
int
print_state (size_t iter, gsl_multiroot_fsolver * s)
@{
  printf ("iter = %3u x = % .3f % .3f "
          "f(x) = % .3e % .3e\n",
          (unsigned int) iter,
          gsl_vector_get (s->x, 0),
          gsl_vector_get (s->x, 1),
          gsl_vector_get (s->f, 0),
          gsl_vector_get (s->f, 1));

  return 0;
@}
@end example

@noindent
Here are the results of running the program.  The algorithm is started
at @math{(-10,-5)}, far from the solution.  Since the solution is
hidden in a narrow valley the earliest steps follow the gradient of the
function downhill, in an attempt to reduce the large value of the
residual.  Once the root has been approximately located, on iteration
8, the Newton behavior takes over and convergence is very rapid.

@smallexample
iter =  0 x = -10.000  -5.000  f(x) = 1.100e+01 -1.050e+03
iter =  1 x = -10.000  -5.000  f(x) = 1.100e+01 -1.050e+03
iter =  2 x =  -3.976  24.827  f(x) = 4.976e+00  9.020e+01
iter =  3 x =  -3.976  24.827  f(x) = 4.976e+00  9.020e+01
iter =  4 x =  -3.976  24.827  f(x) = 4.976e+00  9.020e+01
iter =  5 x =  -1.274  -5.680  f(x) = 2.274e+00 -7.302e+01
iter =  6 x =  -1.274  -5.680  f(x) = 2.274e+00 -7.302e+01
iter =  7 x =   0.249   0.298  f(x) = 7.511e-01  2.359e+00
iter =  8 x =   0.249   0.298  f(x) = 7.511e-01  2.359e+00
iter =  9 x =   1.000   0.878  f(x) = 1.268e-10 -1.218e+00
iter = 10 x =   1.000   0.989  f(x) = 1.124e-11 -1.080e-01
iter = 11 x =   1.000   1.000  f(x) = 0.000e+00  0.000e+00
status = success
@end smallexample

@noindent
Note that the algorithm does not update the location on every
iteration.  Some iterations are used to adjust the trust-region
parameter, after trying a step which was found to be divergent, or to
recompute the Jacobian, when poor convergence behavior is detected.
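
To reproduce this output, the pieces shown above (the system
definition, @code{print_state} and @code{main}) can be combined into a
single file and linked against GSL.  A typical command line, assuming
the source is saved as @file{example.c} and the library is installed in
a standard location, is

@example
$ gcc -Wall example.c -lgsl -lgslcblas -lm -o example
@end example

@noindent
The exact flags vary between systems; the @code{gsl-config} script
reports the installed compiler and linker settings.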
The next example program adds derivative information, in order to
accelerate the solution.  There are two derivative functions
@code{rosenbrock_df} and @code{rosenbrock_fdf}.  The latter computes
both the function and its derivative simultaneously.  This allows the
optimization of any common terms.  For simplicity we substitute calls
to the separate @code{f} and @code{df} functions at this point in the
code below (a combined version which shares terms is sketched at the
end of this section).

@example
int
rosenbrock_df (const gsl_vector * x, void *params,
               gsl_matrix * J)
@{
  double a = ((struct rparams *) params)->a;
  double b = ((struct rparams *) params)->b;

  double x0 = gsl_vector_get (x, 0);

  double df00 = -a;
  double df01 = 0;
  double df10 = -2 * b * x0;
  double df11 = b;

  gsl_matrix_set (J, 0, 0, df00);
  gsl_matrix_set (J, 0, 1, df01);
  gsl_matrix_set (J, 1, 0, df10);
  gsl_matrix_set (J, 1, 1, df11);

  return GSL_SUCCESS;
@}

int
rosenbrock_fdf (const gsl_vector * x, void *params,
                gsl_vector * f, gsl_matrix * J)
@{
  rosenbrock_f (x, params, f);
  rosenbrock_df (x, params, J);

  return GSL_SUCCESS;
@}
@end example

@noindent
The main program now makes calls to the corresponding @code{fdfsolver}
versions of the functions,

@example
int
main (void)
@{
  const gsl_multiroot_fdfsolver_type *T;
  gsl_multiroot_fdfsolver *s;

  int status;
  size_t iter = 0;

  const size_t n = 2;
  struct rparams p = @{1.0, 10.0@};
  gsl_multiroot_function_fdf f = @{&rosenbrock_f,
                                  &rosenbrock_df,
                                  &rosenbrock_fdf,
                                  n, &p@};

  double x_init[2] = @{-10.0, -5.0@};
  gsl_vector *x = gsl_vector_alloc (n);

  gsl_vector_set (x, 0, x_init[0]);
  gsl_vector_set (x, 1, x_init[1]);

  T = gsl_multiroot_fdfsolver_gnewton;
  s = gsl_multiroot_fdfsolver_alloc (T, n);
  gsl_multiroot_fdfsolver_set (s, &f, x);

  print_state (iter, s);

  do
    @{
      iter++;
      status = gsl_multiroot_fdfsolver_iterate (s);

      print_state (iter, s);

      if (status)
        break;

      status = gsl_multiroot_test_residual (s->f, 1e-7);
    @}
  while (status == GSL_CONTINUE && iter < 1000);

  printf ("status = %s\n", gsl_strerror (status));

  gsl_multiroot_fdfsolver_free (s);
  gsl_vector_free (x);
  return 0;
@}
@end example

@noindent
The addition of derivative information to the @code{hybrids} solver
does not make any significant difference to its behavior, since it is
able to approximate the Jacobian numerically with sufficient accuracy.
To illustrate the behavior of a different derivative solver we switch
to @code{gnewton}.  This is a traditional Newton solver with the
constraint that it scales back its step if the full step would lead
"uphill".  Here is the output for the @code{gnewton} algorithm,

@smallexample
iter = 0 x = -10.000  -5.000 f(x) =  1.100e+01 -1.050e+03
iter = 1 x =  -4.231 -65.317 f(x) =  5.231e+00 -8.321e+02
iter = 2 x =   1.000 -26.358 f(x) = -8.882e-16 -2.736e+02
iter = 3 x =   1.000   1.000 f(x) = -2.220e-16 -4.441e-15
status = success
@end smallexample

@noindent
The convergence is much more rapid, but takes a wide excursion out to
the point @math{(-4.23,-65.3)}.  This could cause the algorithm to go
astray in a realistic application.  The hybrid algorithm follows the
downhill path to the solution more reliably.
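
As mentioned above, the advantage of a combined @code{fdf} routine is
that any common terms need only be evaluated once.  The Rosenbrock
system is too small for this to matter, but the pattern is shown in the
following sketch; the name @code{rosenbrock_fdf_shared} is ours and
does not appear in the original program,

@example
/* Hypothetical variant of rosenbrock_fdf (not in the original
   example): the parameters and coordinates are read once and
   shared between the function and Jacobian evaluations.  In a
   larger system, any expensive common subexpressions would be
   computed here a single time.  */
int
rosenbrock_fdf_shared (const gsl_vector * x, void *params,
                       gsl_vector * f, gsl_matrix * J)
@{
  double a = ((struct rparams *) params)->a;
  double b = ((struct rparams *) params)->b;

  double x0 = gsl_vector_get (x, 0);
  double x1 = gsl_vector_get (x, 1);

  gsl_vector_set (f, 0, a * (1 - x0));
  gsl_vector_set (f, 1, b * (x1 - x0 * x0));

  gsl_matrix_set (J, 0, 0, -a);
  gsl_matrix_set (J, 0, 1, 0.0);
  gsl_matrix_set (J, 1, 0, -2 * b * x0);
  gsl_matrix_set (J, 1, 1, b);

  return GSL_SUCCESS;
@}
@end example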
@node References and Further Reading for Multidimensional Root Finding
@section References and Further Reading
@noindent
The original version of the Hybrid method is described in the following
articles by Powell,

@itemize @asis
@item
M.J.D. Powell, "A Hybrid Method for Nonlinear Equations" (Chap 6, p
87-114) and "A Fortran Subroutine for Solving Systems of Nonlinear
Algebraic Equations" (Chap 7, p 115-161), in @cite{Numerical Methods
for Nonlinear Algebraic Equations}, P. Rabinowitz, editor.  Gordon and
Breach, 1970.
@end itemize

@noindent
The following papers are also relevant to the algorithms described in
this section,

@itemize @asis
@item
J.J. Mor@'e, M.Y. Cosnard, "Numerical Solution of Nonlinear Equations",
@cite{ACM Transactions on Mathematical Software}, Vol 5, No 1, (1979),
p 64-85

@item
C.G. Broyden, "A Class of Methods for Solving Nonlinear Simultaneous
Equations", @cite{Mathematics of Computation}, Vol 19 (1965), p 577-593

@item
J.J. Mor@'e, B.S. Garbow, K.E. Hillstrom, "Testing Unconstrained
Optimization Software", @cite{ACM Transactions on Mathematical
Software}, Vol 7, No 1 (1981), p 17-41
@end itemize
