📄 roots.texi
字号:
\beforedisplay$$|r - r^*| < \hbox{\it epsabs} + \hbox{\it epsrel\/}\, r^*$$\afterdisplay@end tex@ifinfo@example|r - r^*| < epsabs + epsrel r^*@end example@end ifinfo@noindentassuming that the true root @math{r^*} is contained within the interval.@end deftypefun@deftypefun int gsl_root_test_delta (double @var{x1}, double @var{x0}, double @var{epsrel}, double @var{epsabs})This function tests for the convergence of the sequence @dots{}, @var{x0},@var{x1} with absolute error @var{epsabs} and relative error@var{epsrel}. The test returns @code{GSL_SUCCESS} if the followingcondition is achieved,@tex\beforedisplay$$|x_1 - x_0| < \hbox{\it epsabs} + \hbox{\it epsrel\/}\, |x_1|$$\afterdisplay@end tex@ifinfo@example|x_1 - x_0| < epsabs + epsrel |x_1|@end example@end ifinfo@noindentand returns @code{GSL_CONTINUE} otherwise.@end deftypefun@deftypefun int gsl_root_test_residual (double @var{f}, double @var{epsabs})This function tests the residual value @var{f} against the absoluteerror bound @var{epsabs}. The test returns @code{GSL_SUCCESS} if thefollowing condition is achieved,@tex\beforedisplay$$|f| < \hbox{\it epsabs}$$\afterdisplay@end tex@ifinfo@example|f| < epsabs@end example@end ifinfo@noindentand returns @code{GSL_CONTINUE} otherwise. This criterion is suitablefor situations where the precise location of the root, @math{x}, isunimportant provided a value can be found where the residual,@math{|f(x)|}, is small enough.@end deftypefun@comment ============================================================@node Root Bracketing Algorithms@section Root Bracketing AlgorithmsThe root bracketing algorithms described in this section require aninitial interval which is guaranteed to contain a root -- if @math{a}and @math{b} are the endpoints of the interval then @math{f(a)} mustdiffer in sign from @math{f(b)}. This ensures that the function crosseszero at least once in the interval. If a valid initial interval is usedthen these algorithm cannot fail, provided the function is well-behaved.Note that a bracketing algorithm cannot find roots of even degree, sincethese do not cross the @math{x}-axis.@deffn {Solver} gsl_root_fsolver_bisection@cindex bisection algorithm for finding roots@cindex root finding, bisection algorithmThe @dfn{bisection algorithm} is the simplest method of bracketing theroots of a function. It is the slowest algorithm provided bythe library, with linear convergence.On each iteration, the interval is bisected and the value of thefunction at the midpoint is calculated. The sign of this value is usedto determine which half of the interval does not contain a root. Thathalf is discarded to give a new, smaller interval containing theroot. This procedure can be continued indefinitely until the interval issufficiently small.At any time the current estimate of the root is taken as the midpoint ofthe interval.@comment eps file "roots-bisection.eps"@iftex@sp 1@center @image{roots-bisection,3.4in}@quotationFour iterations of bisection, where @math{a_n} is @math{n}th position ofthe beginning of the interval and @math{b_n} is the @math{n}th positionof the end. The midpoint of each interval is also indicated.@end quotation@end iftex@end deffn@comment ============================================================@deffn {Solver} gsl_root_fsolver_falsepos@cindex false position algorithm for finding roots@cindex root finding, false position algorithmThe @dfn{false position algorithm} is a method of finding roots based onlinear interpolation. Its convergence is linear, but it is usuallyfaster than bisection.On each iteration a line is drawn between the endpoints @math{(a,f(a))}and @math{(b,f(b))} and the point where this line crosses the@math{x}-axis taken as a ``midpoint''. The value of the function atthis point is calculated and its sign is used to determine which side ofthe interval does not contain a root. That side is discarded to give anew, smaller interval containing the root. This procedure can becontinued indefinitely until the interval is sufficiently small.The best estimate of the root is taken from the linear interpolation ofthe interval on the current iteration.@comment eps file "roots-false-position.eps"@comment @iftex@comment @image{roots-false-position,4in}@comment @quotation@comment Several iterations of false position, where @math{a_n} is @math{n}th@comment position of the beginning of the interval and @math{b_n} is the@comment @math{n}th position of the end.@comment @end quotation@comment @end iftex@end deffn@comment ============================================================@deffn {Solver} gsl_root_fsolver_brent@cindex brent's method for finding roots@cindex root finding, brent's methodThe @dfn{Brent-Dekker method} (referred to here as @dfn{Brent's method})combines an interpolation strategy with the bisection algorithm. Thisproduces a fast algorithm which is still robust.On each iteration Brent's method approximates the function using aninterpolating curve. On the first iteration this is a linearinterpolation of the two endpoints. For subsequent iterations thealgorithm uses an inverse quadratic fit to the last three points, forhigher accuracy. The intercept of the interpolating curve with the@math{x}-axis is taken as a guess for the root. If it lies within thebounds of the current interval then the interpolating point is accepted,and used to generate a smaller interval. If the interpolating point isnot accepted then the algorithm falls back to an ordinary bisectionstep.The best estimate of the root is taken from the most recentinterpolation or bisection.@end deffn@comment ============================================================@node Root Finding Algorithms using Derivatives@section Root Finding Algorithms using DerivativesThe root polishing algorithms described in this section require aninitial guess for the location of the root. There is no absoluteguarantee of convergence -- the function must be suitable for thistechnique and the initial guess must be sufficiently close to the rootfor it to work. When these conditions are satisfied then convergence isquadratic.These algorithms make use of both the function and its derivative. @deffn {Derivative Solver} gsl_root_fdfsolver_newton@cindex Newton's Method algorithm for finding roots@cindex root finding, Newton's Method algorithmNewton's Method is the standard root-polishing algorithm. The algorithmbegins with an initial guess for the location of the root. On eachiteration, a line tangent to the function @math{f} is drawn at thatposition. The point where this line crosses the @math{x}-axis becomesthe new guess. The iteration is defined by the following sequence,@tex\beforedisplay$$x_{i+1} = x_i - {f(x_i) \over f'(x_i)}$$\afterdisplay@end tex@ifinfo@examplex_@{i+1@} = x_i - f(x_i)/f'(x_i)@end example@end ifinfo@noindentNewton's method converges quadratically for single roots, and linearlyfor multiple roots.@comment eps file "roots-newtons-method.eps"@iftex@sp 1@center @image{roots-newtons-method,3.4in}@quotationSeveral iterations of Newton's Method, where @math{g_n} is the@math{n}th guess.@end quotation@end iftex@end deffn@comment ============================================================@deffn {Derivative Solver} gsl_root_fdfsolver_secant@cindex Secant Method algorithm for finding roots@cindex root finding, Secant Method algorithmThe @dfn{secant method} is a simplified version of Newton's method which doesnot require the computation of the derivative on every step.On its first iteration the algorithm begins with Newton's method, usingthe derivative to compute a first step,@tex\beforedisplay$$x_1 = x_0 - {f(x_0) \over f'(x_0)}$$\afterdisplay@end tex@ifinfo@examplex_1 = x_0 - f(x_0)/f'(x_0)@end example@end ifinfo@noindentSubsequent iterations avoid the evaluation of the derivative byreplacing it with a numerical estimate, the slope of the line throughthe previous two points,@tex\beforedisplay$$x_{i+1} = x_i - {f(x_i) \over f'_{est}} ~\hbox{where}~ f'_{est} = {f(x_{i}) - f(x_{i-1}) \over x_i - x_{i-1}}$$\afterdisplay@end tex@ifinfo@examplex_@{i+1@} = x_i f(x_i) / f'_@{est@} where f'_@{est@} = (f(x_i) - f(x_@{i-1@})/(x_i - x_@{i-1@})@end example@end ifinfo@noindentWhen the derivative does not change significantly in the vicinity of theroot the secant method gives a useful saving. Asymptotically the secantmethod is faster than Newton's method whenever the cost of evaluatingthe derivative is more than 0.44 times the cost of evaluating thefunction itself. As with all methods of computing a numericalderivative the estimate can suffer from cancellation errors if theseparation of the points becomes too small.On single roots, the method has a convergence of order @math{(1 + \sqrt5)/2} (approximately @math{1.62}). It converges linearly for multipleroots. @comment eps file "roots-secant-method.eps"@comment @iftex@comment @tex@comment \input epsf@comment \medskip@comment \centerline{\epsfxsize=5in\epsfbox{roots-secant-method.eps}}@comment @end tex@comment @quotation@comment Several iterations of Secant Method, where @math{g_n} is the @math{n}th@comment guess.@comment @end quotation@comment @end iftex@end deffn@comment ============================================================@deffn {Derivative Solver} gsl_root_fdfsolver_steffenson@cindex Steffenson's Method for finding roots@cindex root finding, Steffenson's MethodThe @dfn{Steffenson Method} provides the fastest convergence of all theroutines. It combines the basic Newton algorithm with an Aitken``delta-squared'' acceleration. If the Newton iterates are @math{x_i}then the acceleration procedure generates a new sequence @math{R_i},@tex\beforedisplay$$R_i = x_i - {(x_{i+1} - x_i)^2 \over (x_{i+2} - 2 x_{i+1} + x_i)}$$\afterdisplay@end tex@ifinfo@exampleR_i = x_i - (x_@{i+1@} - x_i)^2 / (x_@{i+2@} - 2 x_@{i+1@} + x_@{i@})@end example@end ifinfo@noindentwhich converges faster than the original sequence under reasonableconditions. The new sequence requires three terms before it can produceits first value so the method returns accelerated values on the secondand subsequent iterations. On the first iteration it returns theordinary Newton estimate. The Newton iterate is also returned if thedenominator of the acceleration term ever becomes zero.As with all acceleration procedures this method can become unstable ifthe function is not well-behaved. @end deffn@node Root Finding Examples@section ExamplesFor any root finding algorithm we need to prepare the function to besolved. For this example we will use the general quadratic equationdescribed earlier. We first need a header file (@file{demo_fn.h}) todefine the function parameters,@example@verbatiminclude examples/demo_fn.h@end example@noindentWe place the function definitions in a separate file (@file{demo_fn.c}),@example@verbatiminclude examples/demo_fn.c@end example@noindentThe first program uses the function solver @code{gsl_root_fsolver_brent}for Brent's method and the general quadratic defined above to solve thefollowing equation,@tex\beforedisplay$$x^2 - 5 = 0$$\afterdisplay@end tex@ifinfo@examplex^2 - 5 = 0@end example@end ifinfo@noindentwith solution @math{x = \sqrt 5 = 2.236068...}@example@verbatiminclude examples/roots.c@end example@noindentHere are the results of the iterations,@smallexamplebash$ ./a.out using brent method iter [ lower, upper] root err err(est) 1 [1.0000000, 5.0000000] 1.0000000 -1.2360680 4.0000000 2 [1.0000000, 3.0000000] 3.0000000 +0.7639320 2.0000000 3 [2.0000000, 3.0000000] 2.0000000 -0.2360680 1.0000000 4 [2.2000000, 3.0000000] 2.2000000 -0.0360680 0.8000000 5 [2.2000000, 2.2366300] 2.2366300 +0.0005621 0.0366300Converged: 6 [2.2360634, 2.2366300] 2.2360634 -0.0000046 0.0005666@end smallexample@noindentIf the program is modified to use the bisection solver instead ofBrent's method, by changing @code{gsl_root_fsolver_brent} to@code{gsl_root_fsolver_bisection} the slower convergence of theBisection method can be observed,@smallexamplebash$ ./a.out using bisection method iter [ lower, upper] root err err(est) 1 [0.0000000, 2.5000000] 1.2500000 -0.9860680 2.5000000 2 [1.2500000, 2.5000000] 1.8750000 -0.3610680 1.2500000 3 [1.8750000, 2.5000000] 2.1875000 -0.0485680 0.6250000 4 [2.1875000, 2.5000000] 2.3437500 +0.1076820 0.3125000 5 [2.1875000, 2.3437500] 2.2656250 +0.0295570 0.1562500 6 [2.1875000, 2.2656250] 2.2265625 -0.0095055 0.0781250 7 [2.2265625, 2.2656250] 2.2460938 +0.0100258 0.0390625 8 [2.2265625, 2.2460938] 2.2363281 +0.0002601 0.0195312 9 [2.2265625, 2.2363281] 2.2314453 -0.0046227 0.0097656 10 [2.2314453, 2.2363281] 2.2338867 -0.0021813 0.0048828 11 [2.2338867, 2.2363281] 2.2351074 -0.0009606 0.0024414Converged: 12 [2.2351074, 2.2363281] 2.2357178 -0.0003502 0.0012207@end smallexampleThe next program solves the same function using a derivative solverinstead.@example@verbatiminclude examples/rootnewt.c@end example@noindentHere are the results for Newton's method,@examplebash$ ./a.out using newton methoditer root err err(est) 1 3.0000000 +0.7639320 -2.0000000 2 2.3333333 +0.0972654 -0.6666667 3 2.2380952 +0.0020273 -0.0952381Converged: 4 2.2360689 +0.0000009 -0.0020263@end example@noindentNote that the error can be estimated more accurately by taking thedifference between the current iterate and next iterate rather than theprevious iterate. The other derivative solvers can be investigated bychanging @code{gsl_root_fdfsolver_newton} to@code{gsl_root_fdfsolver_secant} or@code{gsl_root_fdfsolver_steffenson}.@node Root Finding References and Further Reading@section References and Further Reading@noindentFor information on the Brent-Dekker algorithm see the following twopapers,@itemize @asis@itemR. P. Brent, "An algorithm with guaranteed convergence for finding azero of a function", @cite{Computer Journal}, 14 (1971) 422-425@itemJ. C. P. Bus and T. J. Dekker, "Two Efficient Algorithms with GuaranteedConvergence for Finding a Zero of a Function", @cite{ACM Transactions ofMathematical Software}, Vol. 1 No. 4 (1975) 330-345@end itemize
⌨️ 快捷键说明
复制代码
Ctrl + C
搜索代码
Ctrl + F
全屏模式
F11
切换主题
Ctrl + Shift + D
显示快捷键
?
增大字号
Ctrl + =
减小字号
Ctrl + -