On each iteration Brent's method approximates the function using an
interpolating curve.  On the first iteration this is a linear
interpolation of the two endpoints.  For subsequent iterations the
algorithm uses an inverse quadratic fit to the last three points, for
higher accuracy.  The intercept of the interpolating curve with the
@math{x}-axis is taken as a guess for the root.  If it lies within the
bounds of the current interval then the interpolating point is accepted,
and used to generate a smaller interval.  If the interpolating point is
not accepted then the algorithm falls back to an ordinary bisection
step.

The best estimate of the root is taken from the most recent
interpolation or bisection.
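
The logic of a single step can be sketched in C as follows.  This
fragment is illustrative only, not the GSL implementation; the
function name @code{brent_style_step} and the simple acceptance test
are assumptions made for the example.  It computes the intercept of
the inverse quadratic interpolation through three points and falls
back to bisection when the intercept leaves the bracketing interval,

@example
/* Illustrative interpolate-or-bisect step in the style of
   Brent's method; not the GSL implementation.  Assumes the
   three function values fa, fb, fc are distinct.  */
double
brent_style_step (double a, double fa, double b, double fb,
                  double c, double fc)
@{
  /* x-axis intercept of the inverse quadratic interpolation */
  double x = a * fb * fc / ((fa - fb) * (fa - fc))
           + b * fa * fc / ((fb - fa) * (fb - fc))
           + c * fa * fb / ((fc - fa) * (fc - fb));

  double lo = (a < b) ? a : b;
  double hi = (a < b) ? b : a;

  if (x <= lo || x >= hi)       /* intercept outside the interval */
    x = 0.5 * (a + b);          /* ordinary bisection step */

  return x;
@}
@end example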
@end deffn

@comment ============================================================

@node Root Finding Algorithms using Derivatives
@section Root Finding Algorithms using Derivatives

The root polishing algorithms described in this section require an
initial guess for the location of the root.  There is no absolute
guarantee of convergence -- the function must be suitable for this
technique and the initial guess must be sufficiently close to the root
for it to work.  When these conditions are satisfied then convergence is
quadratic.

These algorithms make use of both the function and its derivative. 

@deffn {Derivative Solver} gsl_root_fdfsolver_newton
@cindex Newton's Method algorithm for finding roots
@cindex root finding, Newton's Method algorithm

Newton's Method is the standard root-polishing algorithm.  The algorithm
begins with an initial guess for the location of the root.  On each
iteration, a line tangent to the function @math{f} is drawn at that
position.  The point where this line crosses the @math{x}-axis becomes
the new guess.  The iteration is defined by the following sequence,

@tex
\beforedisplay
$$
x_{i+1} = x_i - {f(x_i) \over f'(x_i)}
$$
\afterdisplay
@end tex
@ifinfo
@example
x_@{i+1@} = x_i - f(x_i)/f'(x_i)
@end example
@end ifinfo

@noindent
Newton's method converges quadratically for single roots, and linearly
for multiple roots.
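
As a stand-alone illustration, independent of GSL, the following
program applies this iteration to the quadratic @math{x^2 - 5 = 0}
solved in the Examples section; the starting point and stopping
tolerance are arbitrary choices made for the sketch,

@example
#include <stdio.h>
#include <math.h>

int
main (void)
@{
  double x = 5.0;                    /* arbitrary initial guess */
  int i;

  for (i = 1; i <= 10; i++)
    @{
      double fx = x * x - 5.0;       /* f(x)  = x^2 - 5 */
      double dfx = 2.0 * x;          /* f'(x) = 2x      */
      double x_new = x - fx / dfx;   /* Newton update   */

      printf ("%3d %.7f\n", i, x_new);

      if (fabs (x_new - x) < 1e-9)   /* crude stopping test */
        break;

      x = x_new;
    @}

  return 0;
@}
@end example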

@comment eps file "roots-newtons-method.eps"
@iftex
@sp 1
@center @image{roots-newtons-method,3.4in}

@quotation
Several iterations of Newton's Method, where @math{g_n} is the
@math{n}th guess.
@end quotation
@end iftex
@end deffn

@comment ============================================================

@deffn {Derivative Solver} gsl_root_fdfsolver_secant
@cindex Secant Method algorithm for finding roots
@cindex root finding, Secant Method algorithm

The @dfn{secant method} is a simplified version of Newton's method which does
not require the computation of the derivative on every step.

On its first iteration the algorithm begins with Newton's method, using
the derivative to compute a first step,

@tex
\beforedisplay
$$
x_1 = x_0 - {f(x_0) \over f'(x_0)}
$$
\afterdisplay
@end tex
@ifinfo
@example
x_1 = x_0 - f(x_0)/f'(x_0)
@end example
@end ifinfo

@noindent
Subsequent iterations avoid the evaluation of the derivative by
replacing it with a numerical estimate, the slope of the line through
the previous two points,

@tex
\beforedisplay
$$
x_{i+1} = x_i - {f(x_i) \over f'_{est}}
 ~\hbox{where}~
 f'_{est} =  {f(x_{i}) - f(x_{i-1}) \over x_i - x_{i-1}}
$$
\afterdisplay
@end tex
@ifinfo
@example
x_@{i+1@} = x_i - f(x_i)/f'_@{est@} where
 f'_@{est@} = (f(x_i) - f(x_@{i-1@})) / (x_i - x_@{i-1@})
@end example
@end ifinfo

@noindent
When the derivative does not change significantly in the vicinity of the
root the secant method gives a useful saving.  Asymptotically the secant
method is faster than Newton's method whenever the cost of evaluating
the derivative is more than 0.44 times the cost of evaluating the
function itself.  As with all methods of computing a numerical
derivative the estimate can suffer from cancellation errors if the
separation of the points becomes too small.

On single roots, the method has a convergence of order @math{(1 + \sqrt
5)/2} (approximately @math{1.62}).  It converges linearly for multiple
roots.  
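
A single secant update can be written as the following illustrative
fragment, which is not the GSL implementation; the function name
@code{secant_step} is an assumption made for the example,

@example
/* Illustrative secant update, not the GSL implementation: the
   derivative is replaced by the slope of the line through the
   two most recent points.  Assumes x != x_prev and that the
   estimated slope is nonzero.  */
double
secant_step (double x_prev, double f_prev, double x, double fx)
@{
  double slope = (fx - f_prev) / (x - x_prev);   /* f'_est */
  return x - fx / slope;                         /* secant update */
@}
@end example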

@comment eps file "roots-secant-method.eps"
@comment @iftex
@comment @tex
@comment \input epsf
@comment \medskip
@comment \centerline{\epsfxsize=5in\epsfbox{roots-secant-method.eps}}
@comment @end tex
@comment @quotation
@comment Several iterations of Secant Method, where @math{g_n} is the @math{n}th
@comment guess.
@comment @end quotation
@comment @end iftex
@end deffn

@comment ============================================================

@deffn {Derivative Solver} gsl_root_fdfsolver_steffenson
@cindex Steffenson's Method for finding roots
@cindex root finding, Steffenson's Method

The @dfn{Steffenson Method} provides the fastest convergence of all the
routines.  It combines the basic Newton algorithm with an Aitken
``delta-squared'' acceleration.  If the Newton iterates are @math{x_i}
then the acceleration procedure generates a new sequence @math{R_i},

@tex
\beforedisplay
$$
R_i = x_i - {(x_{i+1} - x_i)^2 \over (x_{i+2} - 2 x_{i+1} + x_i)}
$$
\afterdisplay
@end tex
@ifinfo
@example
R_i = x_i - (x_@{i+1@} - x_i)^2 / (x_@{i+2@} - 2 x_@{i+1@} + x_i)
@end example
@end ifinfo

@noindent
which converges faster than the original sequence under reasonable
conditions.  The new sequence requires three terms before it can produce
its first value so the method returns accelerated values on the second
and subsequent iterations.  On the first iteration it returns the
ordinary Newton estimate.  The Newton iterate is also returned if the
denominator of the acceleration term ever becomes zero.

As with all acceleration procedures this method can become unstable if
the function is not well-behaved. 
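
The acceleration formula itself can be sketched as follows.  This
illustrative fragment is not the GSL implementation, and the function
name @code{aitken_accelerate} is assumed for the example; it applies
the fallback described above when the denominator vanishes,

@example
/* Illustrative Aitken delta-squared acceleration, not the GSL
   implementation.  Given three consecutive iterates it returns
   the accelerated value R_i, falling back to the plain iterate
   when the denominator vanishes.  */
double
aitken_accelerate (double x0, double x1, double x2)
@{
  double d = x2 - 2.0 * x1 + x0;     /* denominator of R_i */

  if (d == 0.0)
    return x2;                       /* fall back to Newton iterate */

  return x0 - (x1 - x0) * (x1 - x0) / d;
@}
@end example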
@end deffn

@node Root Finding Examples
@section Examples

For any root finding algorithm we need to prepare the function to be
solved.  For this example we will use the general quadratic equation
described earlier.  We first need a header file (@file{demo_fn.h}) to
define the function parameters,

@example
@verbatiminclude examples/demo_fn.h
@end example
@noindent
We place the function definitions in a separate file (@file{demo_fn.c}),

@example
@verbatiminclude examples/demo_fn.c
@end example
@noindent
The first program uses the function solver @code{gsl_root_fsolver_brent}
for Brent's method and the general quadratic defined above to solve the
following equation,

@tex
\beforedisplay
$$
x^2 - 5 = 0
$$
\afterdisplay
@end tex
@ifinfo
@example
x^2 - 5 = 0
@end example
@end ifinfo

@noindent
with solution @math{x = \sqrt 5 = 2.236068...}

@example
@verbatiminclude examples/roots.c
@end example

@noindent
Here are the results of the iterations,

@smallexample
bash$ ./a.out 
using brent method
 iter [    lower,     upper]      root        err  err(est)
    1 [1.0000000, 5.0000000] 1.0000000 -1.2360680 4.0000000
    2 [1.0000000, 3.0000000] 3.0000000 +0.7639320 2.0000000
    3 [2.0000000, 3.0000000] 2.0000000 -0.2360680 1.0000000
    4 [2.2000000, 3.0000000] 2.2000000 -0.0360680 0.8000000
    5 [2.2000000, 2.2366300] 2.2366300 +0.0005621 0.0366300
Converged:                            
    6 [2.2360634, 2.2366300] 2.2360634 -0.0000046 0.0005666
@end smallexample

@noindent
If the program is modified to use the bisection solver instead of
Brent's method, by changing @code{gsl_root_fsolver_brent} to
@code{gsl_root_fsolver_bisection}, the slower convergence of the
bisection method can be observed,

@smallexample
bash$ ./a.out 
using bisection method
 iter [    lower,     upper]      root        err  err(est)
    1 [0.0000000, 2.5000000] 1.2500000 -0.9860680 2.5000000
    2 [1.2500000, 2.5000000] 1.8750000 -0.3610680 1.2500000
    3 [1.8750000, 2.5000000] 2.1875000 -0.0485680 0.6250000
    4 [2.1875000, 2.5000000] 2.3437500 +0.1076820 0.3125000
    5 [2.1875000, 2.3437500] 2.2656250 +0.0295570 0.1562500
    6 [2.1875000, 2.2656250] 2.2265625 -0.0095055 0.0781250
    7 [2.2265625, 2.2656250] 2.2460938 +0.0100258 0.0390625
    8 [2.2265625, 2.2460938] 2.2363281 +0.0002601 0.0195312
    9 [2.2265625, 2.2363281] 2.2314453 -0.0046227 0.0097656
   10 [2.2314453, 2.2363281] 2.2338867 -0.0021813 0.0048828
   11 [2.2338867, 2.2363281] 2.2351074 -0.0009606 0.0024414
Converged:                            
   12 [2.2351074, 2.2363281] 2.2357178 -0.0003502 0.0012207
@end smallexample
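
@noindent
The modification itself is one line.  Assuming the program selects
the solver through a pointer of type @code{gsl_root_fsolver_type}
(the variable name @code{T} is an assumption made for this sketch),
the change might read,

@example
const gsl_root_fsolver_type *T = gsl_root_fsolver_bisection;
@end example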

The next program solves the same function using a derivative solver
instead.

@example
@verbatiminclude examples/rootnewt.c
@end example
@noindent
Here are the results for Newton's method,

@example
bash$ ./a.out 
using newton method
iter        root        err   err(est)
    1  3.0000000 +0.7639320 -2.0000000
    2  2.3333333 +0.0972654 -0.6666667
    3  2.2380952 +0.0020273 -0.0952381
Converged:      
    4  2.2360689 +0.0000009 -0.0020263
@end example

@noindent
Note that the error can be estimated more accurately by taking the
difference between the current iterate and next iterate rather than the
previous iterate.  The other derivative solvers can be investigated by
changing @code{gsl_root_fdfsolver_newton} to
@code{gsl_root_fdfsolver_secant} or
@code{gsl_root_fdfsolver_steffenson}.
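
@noindent
Assuming the derivative solver is selected the same way (the variable
name @code{T} is again an assumption made for the sketch), either
change is a single line, for example,

@example
const gsl_root_fdfsolver_type *T = gsl_root_fdfsolver_secant;
@end example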

@node Root Finding References and Further Reading
@section References and Further Reading
@noindent
For information on the Brent-Dekker algorithm see the following two
papers,

@itemize @asis
@item
R. P. Brent, "An algorithm with guaranteed convergence for finding a
zero of a function", @cite{Computer Journal}, 14 (1971) 422-425

@item
J. C. P. Bus and T. J. Dekker, "Two Efficient Algorithms with Guaranteed
Convergence for Finding a Zero of a Function", @cite{ACM Transactions on
Mathematical Software}, Vol. 1 No. 4 (1975) 330-345
@end itemize
