corresponding type.  The same functions work for all solvers so that
different methods can be substituted at runtime without modifications to
the code.

@deftypefun int gsl_multiroot_fsolver_iterate (gsl_multiroot_fsolver * @var{s})
@deftypefunx int gsl_multiroot_fdfsolver_iterate (gsl_multiroot_fdfsolver * @var{s})
These functions perform a single iteration of the solver @var{s}.  If the
iteration encounters an unexpected problem then an error code will be
returned,

@table @code
@item GSL_EBADFUNC
the iteration encountered a singular point where the function or its
derivative evaluated to @code{Inf} or @code{NaN}.

@item GSL_ENOPROG
the iteration is not making any progress, preventing the algorithm from
continuing.
@end table
@end deftypefun

The solver maintains a current best estimate of the root at all times.
This information can be accessed with the following auxiliary functions,

@deftypefun {gsl_vector *} gsl_multiroot_fsolver_root (const gsl_multiroot_fsolver * @var{s})
@deftypefunx {gsl_vector *} gsl_multiroot_fdfsolver_root (const gsl_multiroot_fdfsolver * @var{s})
These functions return the current estimate of the root for the solver @var{s}.
@end deftypefun

@node Search Stopping Parameters for the multidimensional solver
@section Search Stopping Parameters
@cindex root finding, stopping parameters

A root finding procedure should stop when one of the following conditions is
true:

@itemize @bullet
@item
A multidimensional root has been found to within the user-specified precision.
@item
A user-specified maximum number of iterations has been reached.
@item
An error has occurred.
@end itemize

@noindent
The handling of these conditions is under user control.
The functions below allow the user to test the precision of the current
result in several standard ways.

@deftypefun int gsl_multiroot_test_delta (const gsl_vector * @var{dx}, const gsl_vector * @var{x}, double @var{epsabs}, double @var{epsrel})
This function tests for the convergence of the sequence by comparing the
last step @var{dx} with the absolute error @var{epsabs} and relative
error @var{epsrel} to the current position @var{x}.  The test returns
@code{GSL_SUCCESS} if the following condition is achieved,
@tex
\beforedisplay
$$
|dx_i| < \hbox{\it epsabs} + \hbox{\it epsrel\/}\, |x_i|
$$
\afterdisplay
@end tex
@ifinfo
@example
|dx_i| < epsabs + epsrel |x_i|
@end example
@end ifinfo
@noindent
for each component of @var{x} and returns @code{GSL_CONTINUE} otherwise.
@end deftypefun

@cindex residual, in nonlinear systems of equations
@deftypefun int gsl_multiroot_test_residual (const gsl_vector * @var{f}, double @var{epsabs})
This function tests the residual value @var{f} against the absolute
error bound @var{epsabs}.  The test returns @code{GSL_SUCCESS} if the
following condition is achieved,
@tex
\beforedisplay
$$
\sum_i |f_i| < \hbox{\it epsabs}
$$
\afterdisplay
@end tex
@ifinfo
@example
\sum_i |f_i| < epsabs
@end example
@end ifinfo
@noindent
and returns @code{GSL_CONTINUE} otherwise.  This criterion is suitable
for situations where the precise location of the root, @math{x}, is
unimportant provided a value can be found where the residual is small
enough.
@end deftypefun

@comment ============================================================
@node Algorithms using Derivatives
@section Algorithms using Derivatives

The root finding algorithms described in this section make use of both
the function and its derivative.  They require an initial guess for the
location of the root, but there is no absolute guarantee of convergence
-- the function must be suitable for this technique and the initial
guess must be sufficiently close to the root for it to work.
When the conditions are satisfied then convergence is quadratic.

@comment ============================================================
@cindex HYBRID algorithms for nonlinear systems
@deffn {Derivative Solver} gsl_multiroot_fdfsolver_hybridsj
@cindex HYBRIDSJ algorithm
@cindex MINPACK, minimization algorithms
This is a modified version of Powell's Hybrid method as implemented in
the @sc{hybrj} algorithm in @sc{minpack}.  Minpack was written by Jorge
J. Mor@'e, Burton S. Garbow and Kenneth E. Hillstrom.  The Hybrid
algorithm retains the fast convergence of Newton's method but will also
reduce the residual when Newton's method is unreliable.

The algorithm uses a generalized trust region to keep each step under
control.  In order to be accepted a proposed new position @math{x'} must
satisfy the condition @math{|D (x' - x)| < \delta}, where @math{D} is a
diagonal scaling matrix and @math{\delta} is the size of the trust
region.  The components of @math{D} are computed internally, using the
column norms of the Jacobian to estimate the sensitivity of the residual
to each component of @math{x}.  This improves the behavior of the
algorithm for badly scaled functions.

On each iteration the algorithm first determines the standard Newton
step by solving the system @math{J dx = - f}.  If this step falls inside
the trust region it is used as a trial step in the next stage.  If not,
the algorithm uses the linear combination of the Newton and gradient
directions which is predicted to minimize the norm of the function while
staying inside the trust region,
@tex
\beforedisplay
$$
dx = - \alpha J^{-1} f(x) - \beta \nabla |f(x)|^2
$$
\afterdisplay
@end tex
@ifinfo
@example
dx = - \alpha J^@{-1@} f(x) - \beta \nabla |f(x)|^2
@end example
@end ifinfo
@noindent
This combination of Newton and gradient directions is referred to as a
@dfn{dogleg step}.

The proposed step is now tested by evaluating the function at the
resulting point, @math{x'}.
If the step reduces the norm of the function sufficiently then it is
accepted and the size of the trust region is increased.  If the proposed
step fails to improve the solution then the size of the trust region is
decreased and another trial step is computed.

The speed of the algorithm is increased by computing the changes to the
Jacobian approximately, using a rank-1 update.  If two successive
attempts fail to reduce the residual then the full Jacobian is
recomputed.  The algorithm also monitors the progress of the solution
and returns an error if several steps fail to make any improvement,

@table @code
@item GSL_ENOPROG
the iteration is not making any progress, preventing the algorithm from
continuing.

@item GSL_ENOPROGJ
re-evaluations of the Jacobian indicate that the iteration is not
making any progress, preventing the algorithm from continuing.
@end table
@end deffn

@deffn {Derivative Solver} gsl_multiroot_fdfsolver_hybridj
@cindex HYBRIDJ algorithm
This algorithm is an unscaled version of @code{hybridsj}.  The steps are
controlled by a spherical trust region @math{|x' - x| < \delta}, instead
of a generalized region.  This can be useful if the generalized region
estimated by @code{hybridsj} is inappropriate.
@end deffn

@deffn {Derivative Solver} gsl_multiroot_fdfsolver_newton
@cindex Newton's Method for systems of nonlinear equations
Newton's Method is the standard root-polishing algorithm.  The algorithm
begins with an initial guess for the location of the solution.  On each
iteration a linear approximation to the function @math{F} is used to
estimate the step which will zero all the components of the residual.
The iteration is defined by the following sequence,
@tex
\beforedisplay
$$
x \to x' = x - J^{-1} f(x)
$$
\afterdisplay
@end tex
@ifinfo
@example
x -> x' = x - J^@{-1@} f(x)
@end example
@end ifinfo
@noindent
where the Jacobian matrix @math{J} is computed from the derivative
functions provided by @var{f}.
The step @math{dx} is obtained by solving the linear system,
@tex
\beforedisplay
$$
J \,dx = - f(x)
$$
\afterdisplay
@end tex
@ifinfo
@example
J dx = - f(x)
@end example
@end ifinfo
@noindent
using LU decomposition.
@end deffn

@comment ============================================================
@deffn {Derivative Solver} gsl_multiroot_fdfsolver_gnewton
@cindex Modified Newton's Method for nonlinear systems
@cindex GNEWTON algorithm
This is a modified version of Newton's method which attempts to improve
global convergence by requiring every step to reduce the Euclidean norm
of the residual, @math{|f(x)|}.  If the Newton step leads to an increase
in the norm then a reduced step of relative size,
@tex
\beforedisplay
$$
t = (\sqrt{1 + 6 r} - 1) / (3 r)
$$
\afterdisplay
@end tex
@ifinfo
@example
t = (\sqrt(1 + 6 r) - 1) / (3 r)
@end example
@end ifinfo
@noindent
is proposed, with @math{r} being the ratio of norms
@math{|f(x')|^2/|f(x)|^2}.  This procedure is repeated until a suitable
step size is found.
@end deffn

@comment ============================================================
@node Algorithms without Derivatives
@section Algorithms without Derivatives

The algorithms described in this section do not require any derivative
information to be supplied by the user.  Any derivatives needed are
approximated by finite differences.

@deffn {Solver} gsl_multiroot_fsolver_hybrids
@cindex HYBRIDS algorithm, scaled without derivatives
This is a version of the Hybrid algorithm which replaces calls to the
Jacobian function by its finite difference approximation.
The finite difference approximation is computed using
@code{gsl_multiroots_fdjac} with a relative step size of
@code{GSL_SQRT_DBL_EPSILON}.
@end deffn

@deffn {Solver} gsl_multiroot_fsolver_hybrid
@cindex HYBRID algorithm, unscaled without derivatives
This is a finite difference version of the Hybrid algorithm without
internal scaling.
@end deffn

@comment ============================================================
@deffn {Solver} gsl_multiroot_fsolver_dnewton
@cindex Discrete Newton algorithm for multidimensional roots
@cindex Newton algorithm, discrete
@cindex Discrete Newton algorithm for nonlinear systems
The @dfn{discrete Newton algorithm} is the simplest method of solving a
multidimensional system.  It uses the Newton iteration
@tex
\beforedisplay
$$
x \to x - J^{-1} f(x)
$$
\afterdisplay
@end tex
@ifinfo
@example
x -> x - J^@{-1@} f(x)
@end example
@end ifinfo
@noindent
where the Jacobian matrix @math{J} is approximated by taking finite
differences of the function @var{f}.  The approximation scheme used by
this implementation is,
@tex
\beforedisplay
$$
J_{ij} = (f_i(x + \delta_j) - f_i(x)) / \delta_j
$$
\afterdisplay
@end tex
@ifinfo
@example
J_@{ij@} = (f_i(x + \delta_j) - f_i(x)) / \delta_j
@end example
@end ifinfo
@noindent
where @math{\delta_j} is a step of size @math{\sqrt\epsilon |x_j|} with
@math{\epsilon} being the machine precision
(@c{$\epsilon \approx 2.22 \times 10^{-16}$}
@math{\epsilon \approx 2.22 \times 10^-16}).

The order of convergence of Newton's algorithm is quadratic, but the
finite differences require @math{n^2} function evaluations on each
iteration.
The algorithm may become unstable if the finite differences are not a
good approximation to the true derivatives.
@end deffn

@comment ============================================================
@deffn {Solver} gsl_multiroot_fsolver_broyden
@cindex Broyden algorithm for multidimensional roots
@cindex multidimensional root finding, Broyden algorithm
The @dfn{Broyden algorithm} is a version of the discrete Newton
algorithm which attempts to avoid the expensive update of the Jacobian
matrix on each iteration.  The changes to the Jacobian are also
approximated, using a rank-1 update,
@tex
\beforedisplay
$$