@c multifit.texi
@cindex nonlinear least squares fitting
@cindex least squares fitting, nonlinear

This chapter describes functions for multidimensional nonlinear
least-squares fitting.  The library provides low level components for a
variety of iterative solvers and convergence tests.  These can be
combined by the user to achieve the desired solution, with full access
to the intermediate steps of the iteration.  Each class of methods uses
the same framework, so that you can switch between solvers at runtime
without needing to recompile your program.  Each instance of a solver
keeps track of its own state, allowing the solvers to be used in
multi-threaded programs.

The header file @file{gsl_multifit_nlin.h} contains prototypes for the
multidimensional nonlinear fitting functions and related declarations.

@menu
* Overview of Nonlinear Least-Squares Fitting::
* Initializing the Nonlinear Least-Squares Solver::
* Providing the Function to be Minimized::
* Iteration of the Minimization Algorithm::
* Search Stopping Parameters for Minimization Algorithms::
* Minimization Algorithms using Derivatives::
* Minimization Algorithms without Derivatives::
* Computing the covariance matrix of best fit parameters::
* Example programs for Nonlinear Least-Squares Fitting::
* References and Further Reading for Nonlinear Least-Squares Fitting::
@end menu

@node Overview of Nonlinear Least-Squares Fitting
@section Overview
@cindex nonlinear least-squares fitting, overview

The problem of multidimensional nonlinear least-squares fitting requires
the minimization of the squared residuals of @math{n} functions,
@math{f_i}, in @math{p} parameters, @math{x_i},
@tex
\beforedisplay
$$
\Phi(x) = {1 \over 2} \sum_{i=1}^{n} f_i (x_1, \dots, x_p)^2
        = {1 \over 2} || F(x) ||^2
$$
\afterdisplay
@end tex
@ifinfo
@example
\Phi(x) = (1/2) \sum_@{i=1@}^@{n@} f_i(x_1, ..., x_p)^2
        = (1/2) || F(x) ||^2
@end example
@end ifinfo
@noindent
All algorithms proceed from an initial guess using the linearization,
@tex
\beforedisplay
$$
\psi(p) = || F(x+p) || \approx ||
F(x) + J p\, ||
$$
\afterdisplay
@end tex
@ifinfo
@example
\psi(p) = || F(x+p) || ~=~ || F(x) + J p ||
@end example
@end ifinfo
@noindent
where @math{x} is the initial point, @math{p} is the proposed step
and @math{J} is the Jacobian matrix
@c{$J_{ij} = \partial f_i / \partial x_j$}
@math{J_@{ij@} = d f_i / d x_j}.  Additional strategies are used to
enlarge the region of convergence.  These include requiring a decrease
in the norm @math{||F||} on each step or using a trust region to avoid
steps which fall outside the linear regime.

@node Initializing the Nonlinear Least-Squares Solver
@section Initializing the Solver

@deftypefun {gsl_multifit_fsolver *} gsl_multifit_fsolver_alloc (const gsl_multifit_fsolver_type * @var{T}, size_t @var{n}, size_t @var{p})
This function returns a pointer to a newly allocated instance of a
solver of type @var{T} for @var{n} observations and @var{p} parameters.

If there is insufficient memory to create the solver then the function
returns a null pointer and the error handler is invoked with an error
code of @code{GSL_ENOMEM}.
@end deftypefun

@deftypefun {gsl_multifit_fdfsolver *} gsl_multifit_fdfsolver_alloc (const gsl_multifit_fdfsolver_type * @var{T}, size_t @var{n}, size_t @var{p})
This function returns a pointer to a newly allocated instance of a
derivative solver of type @var{T} for @var{n} observations and @var{p}
parameters.
For example, the following code creates an instance of a
Levenberg-Marquardt solver for 100 data points and 3 parameters,

@example
const gsl_multifit_fdfsolver_type * T
  = gsl_multifit_fdfsolver_lmder;
gsl_multifit_fdfsolver * s
  = gsl_multifit_fdfsolver_alloc (T, 100, 3);
@end example

If there is insufficient memory to create the solver then the function
returns a null pointer and the error handler is invoked with an error
code of @code{GSL_ENOMEM}.
@end deftypefun

@deftypefun int gsl_multifit_fsolver_set (gsl_multifit_fsolver * @var{s}, gsl_multifit_function * @var{f}, gsl_vector * @var{x})
This function initializes, or reinitializes, an existing solver @var{s}
to use the function @var{f} and the initial guess @var{x}.
@end deftypefun

@deftypefun int gsl_multifit_fdfsolver_set (gsl_multifit_fdfsolver * @var{s}, gsl_multifit_function_fdf * @var{fdf}, gsl_vector * @var{x})
This function initializes, or reinitializes, an existing solver @var{s}
to use the function and derivative @var{fdf} and the initial guess
@var{x}.
@end deftypefun

@deftypefun void gsl_multifit_fsolver_free (gsl_multifit_fsolver * @var{s})
@deftypefunx void gsl_multifit_fdfsolver_free (gsl_multifit_fdfsolver * @var{s})
These functions free all the memory associated with the solver @var{s}.
@end deftypefun

@deftypefun {const char *} gsl_multifit_fsolver_name (const gsl_multifit_fsolver * @var{s})
@deftypefunx {const char *} gsl_multifit_fdfsolver_name (const gsl_multifit_fdfsolver * @var{s})
These functions return a pointer to the name of the solver.  For example,

@example
printf ("s is a '%s' solver\n",
        gsl_multifit_fdfsolver_name (s));
@end example

@noindent
would print something like @code{s is a 'lmder' solver}.
@end deftypefun

@node Providing the Function to be Minimized
@section Providing the Function to be Minimized

You must provide @math{n} functions of @math{p} variables for the
minimization algorithms to operate on.
In order to allow for general parameters the
functions are defined by the following data types:

@deftp {Data Type} gsl_multifit_function
This data type defines a general system of functions with parameters.

@table @code
@item int (* f) (const gsl_vector * @var{x}, void * @var{params}, gsl_vector * @var{f})
this function should store the vector result
@c{$f(x,\hbox{\it params})$}
@math{f(x,params)} in @var{f} for argument @var{x} and parameters @var{params},
returning an appropriate error code if the function cannot be computed.

@item size_t @var{n}
the number of functions, i.e. the number of components of the
vector @var{f}

@item size_t @var{p}
the number of independent variables, i.e. the number of components of
the vector @var{x}

@item void * @var{params}
a pointer to the parameters of the function
@end table
@end deftp

@deftp {Data Type} gsl_multifit_function_fdf
This data type defines a general system of functions with parameters and
the corresponding Jacobian matrix of derivatives,

@table @code
@item int (* f) (const gsl_vector * @var{x}, void * @var{params}, gsl_vector * @var{f})
this function should store the vector result
@c{$f(x,\hbox{\it params})$}
@math{f(x,params)} in @var{f} for argument @var{x} and parameters @var{params},
returning an appropriate error code if the function cannot be computed.

@item int (* df) (const gsl_vector * @var{x}, void * @var{params}, gsl_matrix * @var{J})
this function should store the @var{n}-by-@var{p} matrix result
@c{$J_{ij} = \partial f_i(x,\hbox{\it params}) / \partial x_j$}
@math{J_@{ij@} = d f_i(x,params) / d x_j} in @var{J} for argument @var{x}
and parameters @var{params}, returning an appropriate error code if the
function cannot be computed.

@item int (* fdf) (const gsl_vector * @var{x}, void * @var{params}, gsl_vector * @var{f}, gsl_matrix * @var{J})
This function should set the values of @var{f} and @var{J} as above,
for arguments @var{x} and parameters @var{params}.
This function provides an optimization of the separate functions for
@math{f(x)} and @math{J(x)} -- it is always faster to compute the
function and its derivative at the same time.

@item size_t @var{n}
the number of functions, i.e. the number of components of the
vector @var{f}

@item size_t @var{p}
the number of independent variables, i.e. the number of components of
the vector @var{x}

@item void * @var{params}
a pointer to the parameters of the function
@end table
@end deftp

@node Iteration of the Minimization Algorithm
@section Iteration

The following functions drive the iteration of each algorithm.  Each
function performs one iteration to update the state of any solver of the
corresponding type.  The same functions work for all solvers so that
different methods can be substituted at runtime without modifications to
the code.

@deftypefun int gsl_multifit_fsolver_iterate (gsl_multifit_fsolver * @var{s})
@deftypefunx int gsl_multifit_fdfsolver_iterate (gsl_multifit_fdfsolver * @var{s})
These functions perform a single iteration of the solver @var{s}.  If
the iteration encounters an unexpected problem then an error code will
be returned.  The solver maintains a current estimate of the best-fit
parameters at all times.  This information can be accessed with the
following auxiliary functions,
@end deftypefun

@deftypefun {gsl_vector *} gsl_multifit_fsolver_position (const gsl_multifit_fsolver * @var{s})
@deftypefunx {gsl_vector *} gsl_multifit_fdfsolver_position (const gsl_multifit_fdfsolver * @var{s})
These functions return the current position (i.e.
best-fit parameters)
of the solver @var{s}.
@end deftypefun

@node Search Stopping Parameters for Minimization Algorithms
@section Search Stopping Parameters
@cindex nonlinear fitting, stopping parameters

A minimization procedure should stop when one of the following
conditions is true:

@itemize @bullet
@item
A minimum has been found to within the user-specified precision.

@item
A user-specified maximum number of iterations has been reached.

@item
An error has occurred.
@end itemize

@noindent
The handling of these conditions is under user control.  The functions
below allow the user to test the current estimate of the best-fit
parameters in several standard ways.

@deftypefun int gsl_multifit_test_delta (const gsl_vector * @var{dx}, const gsl_vector * @var{x}, double @var{epsabs}, double @var{epsrel})
This function tests for the convergence of the sequence by comparing the
last step @var{dx} with the absolute error @var{epsabs} and relative
error @var{epsrel} to the current position @var{x}.  The test returns
@code{GSL_SUCCESS} if the following condition is achieved,
@tex
\beforedisplay
$$
|dx_i| < \hbox{\it epsabs} + \hbox{\it epsrel\/}\, |x_i|
$$
\afterdisplay
@end tex
@ifinfo
@example
|dx_i| < epsabs + epsrel |x_i|
@end example
@end ifinfo
@noindent
for each component of @var{x} and returns @code{GSL_CONTINUE} otherwise.
@end deftypefun

@cindex residual, in nonlinear systems of equations
@deftypefun int gsl_multifit_test_gradient (const gsl_vector * @var{g}, double @var{epsabs})
This function tests the residual gradient @var{g} against the absolute
error bound @var{epsabs}.  Mathematically, the gradient should be
exactly zero at the minimum.  The test returns @code{GSL_SUCCESS} if the
following condition is achieved,
@tex
\beforedisplay
$$
\sum_i |g_i| < \hbox{\it epsabs}
$$
\afterdisplay
@end tex
@ifinfo
@example
\sum_i |g_i| < epsabs
@end example
@end ifinfo
@noindent
and returns @code{GSL_CONTINUE} otherwise.
This criterion is suitable
for situations where the precise location of the minimum, @math{x},
is unimportant provided a value can be found where the gradient is small
enough.
@end deftypefun

@deftypefun int gsl_multifit_gradient (const gsl_matrix * @var{J}, const gsl_vector * @var{f}, gsl_vector * @var{g})
This function computes the gradient @var{g} of @math{\Phi(x) =
(1/2) ||F(x)||^2} from the Jacobian matrix @math{J} and the function
values @var{f}, using the formula @math{g = J^T f}.
@end deftypefun

@node Minimization Algorithms using Derivatives
@section Minimization Algorithms using Derivatives

The minimization algorithms described in this section make use of both
the function and its derivative.  They require an initial guess for the
location of the minimum.  There is no absolute guarantee of convergence
-- the function must be suitable for this technique and the initial
guess must be sufficiently close to the minimum for it to work.

@comment ============================================================
@cindex Levenberg-Marquardt algorithms
@deffn {Derivative Solver} gsl_multifit_fdfsolver_lmsder
@cindex LMDER algorithm
@cindex MINPACK, minimization algorithms
This is a robust and efficient version of the Levenberg-Marquardt
algorithm as implemented in the scaled @sc{lmder} routine in
@sc{minpack}.  Minpack was written by Jorge J. Mor@'e, Burton S. Garbow
and Kenneth E. Hillstrom.

The algorithm uses a generalized trust region to keep each step under
control.  In order to be accepted a proposed new position @math{x'} must
satisfy the condition @math{|D (x' - x)| < \delta}, where @math{D} is a
diagonal scaling matrix and @math{\delta} is the size of the trust
region.  The components of @math{D} are computed internally, using the
column norms of the Jacobian to estimate the sensitivity of the residual
to each component of @math{x}.
This improves the behavior of the
algorithm for badly scaled functions.

On each iteration the algorithm attempts to minimize the linear system
@math{|F + J p|} subject to the constraint @math{|D p| < \Delta}.  The
solution to this constrained linear system is found using the
Levenberg-Marquardt method.

The proposed step is now tested by evaluating the function at the
resulting point, @math{x'}.  If the step reduces the norm of the
function sufficiently, and follows the predicted behavior of the
function within the trust region, then it is accepted and the size of
the trust region is increased.  If the proposed step fails to improve
the solution, or differs significantly from the expected behavior within
the trust region, then the size of the trust region is decreased and
another trial step is computed.

The algorithm also monitors the progress of the solution and returns an
error if the changes in the solution are smaller than the machine
precision.  The possible error codes are,

@table @code
@item GSL_ETOLF
the decrease in the function falls below machine precision

@item GSL_ETOLX
the change in the position vector falls below machine precision