
@deftypefun int gsl_multiroot_fsolver_iterate (gsl_multiroot_fsolver * @var{s})
@deftypefunx int gsl_multiroot_fdfsolver_iterate (gsl_multiroot_fdfsolver * @var{s})
These functions perform a single iteration of the solver @var{s}.  If the
iteration encounters an unexpected problem then an error code will be
returned,

@table @code
@item GSL_EBADFUNC
the iteration encountered a singular point where the function or its
derivative evaluated to @code{Inf} or @code{NaN}.

@item GSL_ENOPROG
the iteration is not making any progress, preventing the algorithm from
continuing.
@end table
@end deftypefun

The solver maintains a current best estimate of the root at all times.
This information can be accessed with the following auxiliary functions,

@deftypefun {gsl_vector *} gsl_multiroot_fsolver_root (const gsl_multiroot_fsolver * @var{s})
@deftypefunx {gsl_vector *} gsl_multiroot_fdfsolver_root (const gsl_multiroot_fdfsolver * @var{s})
These functions return the current estimate of the root for the solver @var{s}.
@end deftypefun

@deftypefun {gsl_vector *} gsl_multiroot_fsolver_f (const gsl_multiroot_fsolver * @var{s})
@deftypefunx {gsl_vector *} gsl_multiroot_fdfsolver_f (const gsl_multiroot_fdfsolver * @var{s})
These functions return the function value @math{f(x)} at the current
estimate of the root for the solver @var{s}.
@end deftypefun

@deftypefun {gsl_vector *} gsl_multiroot_fsolver_dx (const gsl_multiroot_fsolver * @var{s})
@deftypefunx {gsl_vector *} gsl_multiroot_fdfsolver_dx (const gsl_multiroot_fdfsolver * @var{s})
These functions return the last step @math{dx} taken by the solver
@var{s}.
@end deftypefun
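
As a minimal sketch, one iteration and the state read back through the
accessors above might look as follows (it assumes a solver @var{s} that
has already been configured with @code{gsl_multiroot_fsolver_set}),

@example
int status = gsl_multiroot_fsolver_iterate (s);

if (status)   /* GSL_EBADFUNC or GSL_ENOPROG */
  printf ("solver stalled: %s\n", gsl_strerror (status));

/* state maintained by the solver */
gsl_vector *x  = gsl_multiroot_fsolver_root (s);  /* best estimate */
gsl_vector *f  = gsl_multiroot_fsolver_f (s);     /* residual f(x) */
gsl_vector *dx = gsl_multiroot_fsolver_dx (s);    /* last step     */
@end example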

@node Search Stopping Parameters for the multidimensional solver
@section Search Stopping Parameters
@cindex root finding, stopping parameters

A root finding procedure should stop when one of the following conditions is
true:

@itemize @bullet
@item
A multidimensional root has been found to within the user-specified precision.

@item
A user-specified maximum number of iterations has been reached.

@item
An error has occurred.
@end itemize

@noindent
The handling of these conditions is under user control.  The functions
below allow the user to test the precision of the current result in
several standard ways.

@deftypefun int gsl_multiroot_test_delta (const gsl_vector * @var{dx}, const gsl_vector * @var{x}, double @var{epsabs}, double @var{epsrel})

This function tests for the convergence of the sequence by comparing the
last step @var{dx} with the absolute error @var{epsabs} and relative
error @var{epsrel} to the current position @var{x}.  The test returns
@code{GSL_SUCCESS} if the following condition is achieved,

@tex
\beforedisplay
$$
|dx_i| < \hbox{\it epsabs} + \hbox{\it epsrel\/}\, |x_i|
$$
\afterdisplay
@end tex
@ifinfo
@example
|dx_i| < epsabs + epsrel |x_i|
@end example
@end ifinfo
@noindent
for each component of @var{x} and returns @code{GSL_CONTINUE} otherwise.
@end deftypefun
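
In practice the test is usually fed directly from the solver state.  A
usage sketch (the tolerances @code{1e-10} and @code{1e-7} below are
illustrative values, not recommendations),

@example
int status = gsl_multiroot_test_delta (gsl_multiroot_fsolver_dx (s),
                                       gsl_multiroot_fsolver_root (s),
                                       1e-10, 1e-7);
if (status == GSL_SUCCESS)
  @{ /* |dx_i| is within tolerance for every component */ @}
@end example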

@cindex residual, in nonlinear systems of equations
@deftypefun int gsl_multiroot_test_residual (const gsl_vector * @var{f}, double @var{epsabs})
This function tests the residual value @var{f} against the absolute
error bound @var{epsabs}.  The test returns @code{GSL_SUCCESS} if the
following condition is achieved,

@tex
\beforedisplay
$$
\sum_i |f_i| < \hbox{\it epsabs}
$$
\afterdisplay
@end tex
@ifinfo
@example
\sum_i |f_i| < epsabs
@end example
@end ifinfo

@noindent
and returns @code{GSL_CONTINUE} otherwise.  This criterion is suitable
for situations where the precise location of the root, @math{x}, is
unimportant provided a value can be found where the residual is small
enough.
@end deftypefun
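
These tests are normally embedded in an iteration loop.  The following
sketch shows the usual pattern (solver setup omitted; the tolerance and
iteration cap are illustrative),

@example
int status;
size_t iter = 0;

do
  @{
    iter++;
    status = gsl_multiroot_fsolver_iterate (s);

    if (status)    /* solver is stuck, e.g. GSL_ENOPROG */
      break;

    status = gsl_multiroot_test_residual (gsl_multiroot_fsolver_f (s),
                                          1e-7);
  @}
while (status == GSL_CONTINUE && iter < 1000);
@end example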

@comment ============================================================

@node Algorithms using Derivatives
@section Algorithms using Derivatives

The root finding algorithms described in this section make use of both
the function and its derivative.  They require an initial guess for the
location of the root, but there is no absolute guarantee of convergence
-- the function must be suitable for this technique and the initial
guess must be sufficiently close to the root for it to work.  When the
conditions are satisfied then convergence is quadratic.


@comment ============================================================
@cindex HYBRID algorithms for nonlinear systems
@deffn {Derivative Solver} gsl_multiroot_fdfsolver_hybridsj
@cindex HYBRIDSJ algorithm
@cindex MINPACK, minimization algorithms
This is a modified version of Powell's Hybrid method as implemented in
the @sc{hybrj} algorithm in @sc{minpack}.  Minpack was written by Jorge
J. Mor@'e, Burton S. Garbow and Kenneth E. Hillstrom.  The Hybrid
algorithm retains the fast convergence of Newton's method but will also
reduce the residual when Newton's method is unreliable. 

The algorithm uses a generalized trust region to keep each step under
control.  In order to be accepted a proposed new position @math{x'} must
satisfy the condition @math{|D (x' - x)| < \delta}, where @math{D} is a
diagonal scaling matrix and @math{\delta} is the size of the trust
region.  The components of @math{D} are computed internally, using the
column norms of the Jacobian to estimate the sensitivity of the residual
to each component of @math{x}.  This improves the behavior of the
algorithm for badly scaled functions.

On each iteration the algorithm first determines the standard Newton
step by solving the system @math{J dx = - f}.  If this step falls inside
the trust region it is used as a trial step in the next stage.  If not,
the algorithm uses the linear combination of the Newton and gradient
directions which is predicted to minimize the norm of the function while
staying inside the trust region.

@tex
\beforedisplay
$$
dx = - \alpha J^{-1} f(x) - \beta \nabla |f(x)|^2
$$
\afterdisplay
@end tex
@ifinfo
@example
dx = - \alpha J^@{-1@} f(x) - \beta \nabla |f(x)|^2
@end example
@end ifinfo
@noindent
This combination of Newton and gradient directions is referred to as a
@dfn{dogleg step}.

The proposed step is now tested by evaluating the function at the
resulting point, @math{x'}.  If the step reduces the norm of the function
sufficiently then it is accepted and the size of the trust region is
increased.  If the proposed step fails to improve the solution then the
size of the trust region is decreased and another trial step is
computed.

The speed of the algorithm is increased by computing the changes to the
Jacobian approximately, using a rank-1 update.  If two successive
attempts fail to reduce the residual then the full Jacobian is
recomputed.  The algorithm also monitors the progress of the solution
and returns an error if several steps fail to make any improvement,

@table @code
@item GSL_ENOPROG
the iteration is not making any progress, preventing the algorithm from
continuing.

@item GSL_ENOPROGJ
re-evaluations of the Jacobian indicate that the iteration is not
making any progress, preventing the algorithm from continuing.
@end table

@end deffn
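
A sketch of selecting this solver (the @code{gsl_multiroot_function_fdf}
@var{fdf} and the initial guess vector @var{x} are assumed to have been
set up by the caller),

@example
gsl_multiroot_fdfsolver *s =
  gsl_multiroot_fdfsolver_alloc (gsl_multiroot_fdfsolver_hybridsj, n);

gsl_multiroot_fdfsolver_set (s, &fdf, x);  /* x is the initial guess */

/* ... iterate as shown earlier ... */

gsl_multiroot_fdfsolver_free (s);
@end example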

@deffn {Derivative Solver} gsl_multiroot_fdfsolver_hybridj
@cindex HYBRIDJ algorithm
This algorithm is an unscaled version of @code{hybridsj}.  The steps are
controlled by a spherical trust region @math{|x' - x| < \delta}, instead
of a generalized region.  This can be useful if the generalized region
estimated by @code{hybridsj} is inappropriate.
@end deffn


@deffn {Derivative Solver} gsl_multiroot_fdfsolver_newton
@cindex Newton's Method for systems of nonlinear equations

Newton's Method is the standard root-polishing algorithm.  The algorithm
begins with an initial guess for the location of the solution.  On each
iteration a linear approximation to the function @math{F} is used to
estimate the step which will zero all the components of the residual.
The iteration is defined by the following sequence,

@tex
\beforedisplay
$$
x \to x' = x - J^{-1} f(x)
$$
\afterdisplay
@end tex
@ifinfo
@example
x -> x' = x - J^@{-1@} f(x)
@end example
@end ifinfo
@noindent
where the Jacobian matrix @math{J} is computed from the derivative
functions provided by @var{f}.  The step @math{dx} is obtained by solving
the linear system,

@tex
\beforedisplay
$$
J \,dx = - f(x)
$$
\afterdisplay
@end tex
@ifinfo
@example
J dx = - f(x)
@end example
@end ifinfo
@noindent
using LU decomposition.
@end deffn
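
For illustration only (this is not the library's internal code), a
single Newton step can be written with the @code{gsl_linalg} LU
routines; @var{J}, @var{f}, @var{dx} and @var{x} are assumed to be a
workspace matrix and vectors of the appropriate size,

@example
#include <gsl/gsl_linalg.h>

gsl_permutation *p = gsl_permutation_alloc (n);
int signum;

gsl_vector_scale (f, -1.0);             /* right-hand side -f(x)  */
gsl_linalg_LU_decomp (J, p, &signum);   /* J is overwritten by LU */
gsl_linalg_LU_solve (J, p, f, dx);      /* solve J dx = -f(x)     */
gsl_vector_add (x, dx);                 /* x' = x + dx            */

gsl_permutation_free (p);
@end example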

@comment ============================================================

@deffn {Derivative Solver} gsl_multiroot_fdfsolver_gnewton
@cindex Modified Newton's Method for nonlinear systems
@cindex Newton algorithm, globally convergent
This is a modified version of Newton's method which attempts to improve
global convergence by requiring every step to reduce the Euclidean norm
of the residual, @math{|f(x)|}.  If the Newton step leads to an increase
in the norm then a reduced step of relative size,

@tex
\beforedisplay
$$
t = (\sqrt{1 + 6 r} - 1) / (3 r)
$$
\afterdisplay
@end tex
@ifinfo
@example
t = (\sqrt(1 + 6 r) - 1) / (3 r)
@end example
@end ifinfo
@noindent
is proposed, with @math{r} being the ratio of norms
@math{|f(x')|^2/|f(x)|^2}.  This procedure is repeated until a suitable step
size is found. 
@end deffn
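
The step reduction can be written as a hypothetical helper (requires
@code{math.h}; the name @code{reduced_step} is not part of GSL),

@example
/* reduced relative step size, r = |f(x')|^2 / |f(x)|^2 */
double
reduced_step (double r)
@{
  return (sqrt (1.0 + 6.0 * r) - 1.0) / (3.0 * r);
@}
@end example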

@comment ============================================================

@node Algorithms without Derivatives
@section Algorithms without Derivatives

The algorithms described in this section do not require any derivative
information to be supplied by the user.  Any derivatives needed are
approximated by finite differences.

@deffn {Solver} gsl_multiroot_fsolver_hybrids
@cindex HYBRIDS algorithm, scaled without derivatives
This is a version of the Hybrid algorithm which replaces calls to the
Jacobian function by its finite difference approximation.  The finite
difference approximation is computed using @code{gsl_multiroots_fdjac}
with a relative step size of @code{GSL_SQRT_DBL_EPSILON}.
@end deffn
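
A sketch of the derivative-free setup; @code{my_f} is a hypothetical
callback with the @code{gsl_multiroot_function} signature, and
@var{params} may be @code{NULL},

@example
gsl_multiroot_function f = @{ &my_f, n, params @};

gsl_multiroot_fsolver *s =
  gsl_multiroot_fsolver_alloc (gsl_multiroot_fsolver_hybrids, n);

gsl_multiroot_fsolver_set (s, &f, x);   /* x is the initial guess */
@end example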

@deffn {Solver} gsl_multiroot_fsolver_hybrid
@cindex HYBRID algorithm, unscaled without derivatives
This is a finite difference version of the Hybrid algorithm without
internal scaling.
@end deffn

@comment ============================================================

@deffn {Solver} gsl_multiroot_fsolver_dnewton

@cindex Discrete Newton algorithm for multidimensional roots
@cindex Newton algorithm, discrete

The @dfn{discrete Newton algorithm} is the simplest method of solving a
multidimensional system.  It uses the Newton iteration

@tex
\beforedisplay
$$
x \to x - J^{-1} f(x)
$$
\afterdisplay
@end tex
@ifinfo
@example
x -> x - J^@{-1@} f(x)
@end example
@end ifinfo
@noindent
where the Jacobian matrix @math{J} is approximated by taking finite
differences of the function @var{f}.  The approximation scheme used by
this implementation is,

@tex
\beforedisplay
$$
J_{ij} = (f_i(x + \delta_j) - f_i(x)) /  \delta_j
$$
\afterdisplay
@end tex
@ifinfo
@example
J_@{ij@} = (f_i(x + \delta_j) - f_i(x)) /  \delta_j
@end example
@end ifinfo
@noindent
where @math{\delta_j} is a step of size @math{\sqrt\epsilon |x_j|} with
@math{\epsilon} being the machine precision 
(@c{$\epsilon \approx 2.22 \times 10^{-16}$}
@math{\epsilon \approx 2.22 \times 10^-16}).
The order of convergence of Newton's algorithm is quadratic, but the
finite differences require @math{n^2} function evaluations on each
iteration.  The algorithm may become unstable if the finite differences
are not a good approximation to the true derivatives.
@end deffn
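
The approximation scheme can be sketched as follows (this is not GSL's
internal routine; @code{F} is a hypothetical helper that fills its
second argument with @math{f(x)}, and @var{fx0} is assumed to already
hold @math{f(x)}),

@example
for (j = 0; j < n; j++)
  @{
    double xj = gsl_vector_get (x, j);
    double dj = sqrt (GSL_DBL_EPSILON) * fabs (xj);
    if (dj == 0.0) dj = sqrt (GSL_DBL_EPSILON);  /* guard, an assumption */

    gsl_vector_set (x, j, xj + dj);   /* perturb component j */
    F (x, fx1);                       /* f(x + delta_j)      */
    gsl_vector_set (x, j, xj);        /* restore x           */

    for (i = 0; i < n; i++)           /* column j of J       */
      gsl_matrix_set (J, i, j,
                      (gsl_vector_get (fx1, i)
                       - gsl_vector_get (fx0, i)) / dj);
  @}
@end example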

@comment ============================================================

@deffn {Solver} gsl_multiroot_fsolver_broyden
@cindex Broyden algorithm for multidimensional roots
@cindex multidimensional root finding, Broyden algorithm

The @dfn{Broyden algorithm} is a version of the discrete Newton
algorithm which attempts to avoid the expensive update of the Jacobian
matrix on each iteration.  The changes to the Jacobian are also
approximated, using a rank-1 update.
