
multifit.texi

$$
\afterdisplay
@end tex
@ifinfo
@example
|dx_i| < epsabs + epsrel |x_i|
@end example
@end ifinfo
@noindent
for each component of @var{x} and returns @code{GSL_CONTINUE} otherwise.
@end deftypefun

@cindex residual, in nonlinear systems of equations
@deftypefun int gsl_multifit_test_gradient (const gsl_vector * @var{g}, double @var{epsabs})
This function tests the residual gradient @var{g} against the absolute
error bound @var{epsabs}.  Mathematically, the gradient should be
exactly zero at the minimum.  The test returns @code{GSL_SUCCESS} if the
following condition is achieved,
@tex
\beforedisplay
$$
\sum_i |g_i| < \hbox{\it epsabs}
$$
\afterdisplay
@end tex
@ifinfo
@example
\sum_i |g_i| < epsabs
@end example
@end ifinfo
@noindent
and returns @code{GSL_CONTINUE} otherwise.  This criterion is suitable
for situations where the precise location of the minimum, @math{x}, is
unimportant provided a value can be found where the gradient is small
enough.
@end deftypefun

@deftypefun int gsl_multifit_gradient (const gsl_matrix * @var{J}, const gsl_vector * @var{f}, gsl_vector * @var{g})
This function computes the gradient @var{g} of @math{\Phi(x) = (1/2) ||F(x)||^2}
from the Jacobian matrix @math{J} and the function values @var{f}, using
the formula @math{g = J^T f}.
@end deftypefun

@node Minimization Algorithms using Derivatives
@section Minimization Algorithms using Derivatives

The minimization algorithms described in this section make use of both
the function and its derivative.  They require an initial guess for the
location of the minimum.  There is no absolute guarantee of
convergence---the function must be suitable for this technique and the
initial guess must be sufficiently close to the minimum for it to work.

@comment ============================================================
@cindex Levenberg-Marquardt algorithms
@deffn {Derivative Solver} gsl_multifit_fdfsolver_lmsder
@cindex LMDER algorithm
@cindex MINPACK, minimization algorithms
This is a robust and efficient version of the Levenberg-Marquardt
algorithm as implemented in the scaled @sc{lmder} routine in
@sc{minpack}.  Minpack was written by Jorge J. Mor@'e, Burton S. Garbow
and Kenneth E. Hillstrom.

The algorithm uses a generalized trust region to keep each step under
control.  In order to be accepted a proposed new position @math{x'} must
satisfy the condition @math{|D (x' - x)| < \delta}, where @math{D} is a
diagonal scaling matrix and @math{\delta} is the size of the trust
region.  The components of @math{D} are computed internally, using the
column norms of the Jacobian to estimate the sensitivity of the residual
to each component of @math{x}.  This improves the behavior of the
algorithm for badly scaled functions.

On each iteration the algorithm attempts to minimize the linear system
@math{|F + J p|} subject to the constraint @math{|D p| < \delta}.  The
solution to this constrained linear system is found using the
Levenberg-Marquardt method.

The proposed step is then tested by evaluating the function at the
resulting point, @math{x'}.  If the step reduces the norm of the
function sufficiently, and follows the predicted behavior of the
function within the trust region, then it is accepted and the size of
the trust region is increased.  If the proposed step fails to improve
the solution, or differs significantly from the expected behavior within
the trust region, then the size of the trust region is decreased and
another trial step is computed.

The algorithm also monitors the progress of the solution and returns an
error if the changes in the solution are smaller than the machine
precision.  The possible error codes are,

@table @code
@item GSL_ETOLF
the decrease in the function falls below machine precision
@item GSL_ETOLX
the change in the position vector falls below machine precision
@item GSL_ETOLG
the norm of the gradient, relative to the norm of the function, falls
below machine precision
@end table
@noindent
These error codes indicate that further iterations are unlikely to
change the solution from its current value.
@end deffn
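To make the calling sequence concrete before the full example programs
later in this chapter, the following fragment is a minimal sketch of a
driver loop for this solver (declared in
@file{gsl/gsl_multifit_nlin.h}).  The callback names @code{my_f},
@code{my_df} and @code{my_fdf}, the initial guess @code{x_init}, the
user data @code{data}, the tolerances and the iteration cap are all
placeholders, not part of the library:

@example
gsl_multifit_function_fdf fdf;
gsl_multifit_fdfsolver *s;
int status;
size_t iter = 0;

fdf.f = &my_f;       /* residual vector f(x) */
fdf.df = &my_df;     /* Jacobian matrix J(x) */
fdf.fdf = &my_fdf;   /* computes both together */
fdf.n = n;           /* number of observations */
fdf.p = p;           /* number of parameters */
fdf.params = &data;  /* user data passed to the callbacks */

s = gsl_multifit_fdfsolver_alloc (gsl_multifit_fdfsolver_lmsder, n, p);
gsl_multifit_fdfsolver_set (s, &fdf, x_init);

do
  @{
    iter++;
    status = gsl_multifit_fdfsolver_iterate (s);

    if (status)      /* solver is stuck, e.g. GSL_ETOLF/X/G */
      break;

    status = gsl_multifit_test_delta (s->dx, s->x, 1e-4, 1e-4);
  @}
while (status == GSL_CONTINUE && iter < 500);

gsl_multifit_fdfsolver_free (s);
@end example
@noindent
The last step @code{s->dx} and current position @code{s->x} are read
directly from the solver state, so the convergence tests described above
slot straight into the loop.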
@deffn {Derivative Solver} gsl_multifit_fdfsolver_lmder
This is an unscaled version of the @sc{lmder} algorithm.  The elements
of the diagonal scaling matrix @math{D} are set to 1.  This algorithm
may be useful in circumstances where the scaled version of @sc{lmder}
converges too slowly, or the function is already scaled appropriately.
@end deffn

@node Minimization Algorithms without Derivatives
@section Minimization Algorithms without Derivatives

There are no algorithms implemented in this section at the moment.

@node Computing the covariance matrix of best fit parameters
@section Computing the covariance matrix of best fit parameters
@cindex best-fit parameters, covariance
@cindex least squares, covariance of best-fit parameters
@cindex covariance matrix, nonlinear fits

@deftypefun int gsl_multifit_covar (const gsl_matrix * @var{J}, double @var{epsrel}, gsl_matrix * @var{covar})
This function uses the Jacobian matrix @var{J} to compute the covariance
matrix of the best-fit parameters, @var{covar}.  The parameter
@var{epsrel} is used to remove linearly-dependent columns when @var{J}
is rank deficient.

The covariance matrix is given by,
@tex
\beforedisplay
$$
C = (J^T J)^{-1}
$$
\afterdisplay
@end tex
@ifinfo
@example
covar = (J^T J)^@{-1@}
@end example
@end ifinfo
@noindent
and is computed by QR decomposition of @math{J} with column-pivoting.
Any columns of @math{R} which satisfy
@tex
\beforedisplay
$$
|R_{kk}| \leq \hbox{\it epsrel}\, |R_{11}|
$$
\afterdisplay
@end tex
@ifinfo
@example
|R_@{kk@}| <= epsrel |R_@{11@}|
@end example
@end ifinfo
@noindent
are considered linearly-dependent and are excluded from the covariance
matrix (the corresponding rows and columns of the covariance matrix are
set to zero).

If the minimisation uses the weighted least-squares function
@math{f_i = (Y(x, t_i) - y_i) / \sigma_i} then the covariance
matrix above gives the statistical error on the best-fit parameters
resulting from the gaussian errors @math{\sigma_i} on the underlying
data @math{y_i}.  This can be verified from the relation
@math{\delta f = J \delta c} and the fact that the fluctuations in
@math{f} from the data @math{y_i} are normalised by @math{\sigma_i} and
so satisfy
@c{$\langle \delta f \delta f^T \rangle = I$}
@math{<\delta f \delta f^T> = I}.

For an unweighted least-squares function @math{f_i = (Y(x, t_i) - y_i)}
the covariance matrix above should be multiplied by the variance of the
residuals about the best-fit
@math{\sigma^2 = \sum (y_i - Y(x,t_i))^2 / (n-p)}
to give the variance-covariance matrix @math{\sigma^2 C}.  This
estimates the statistical error on the best-fit parameters from the
scatter of the underlying data.

For more information about covariance matrices see @ref{Fitting Overview}.
@end deftypefun
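As a sketch of typical usage, the Jacobian stored in the solver can be
passed directly to this function once the fit has converged.  The
fragment below reuses the solver @code{s} and parameter count @code{p}
from the earlier sketch and omits error checking:

@example
gsl_matrix *covar = gsl_matrix_alloc (p, p);

/* epsrel = 0.0: only exactly singular columns are dropped */
gsl_multifit_covar (s->J, 0.0, covar);

/* standard error of the i-th parameter is sqrt of covar(i,i) */
double err_0 = sqrt (gsl_matrix_get (covar, 0, 0));
@end example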
@comment ============================================================
@node Example programs for Nonlinear Least-Squares Fitting
@section Examples

The following example program fits a weighted exponential model with
background to experimental data, @math{Y = A \exp(-\lambda t) + b}.  The
first part of the program sets up the functions @code{expb_f} and
@code{expb_df} to calculate the model and its Jacobian.  The appropriate
fitting function is given by,
@tex
\beforedisplay
$$
f_i = ((A \exp(-\lambda t_i) + b) - y_i)/\sigma_i
$$
\afterdisplay
@end tex
@ifinfo
@example
f_i = ((A \exp(-\lambda t_i) + b) - y_i)/\sigma_i
@end example
@end ifinfo
@noindent
where we have chosen @math{t_i = i}.  The Jacobian matrix @math{J} is
the derivative of these functions with respect to the three parameters
(@math{A}, @math{\lambda}, @math{b}).  It is given by,
@tex
\beforedisplay
$$
J_{ij} = {\partial f_i \over \partial x_j}
$$
\afterdisplay
@end tex
@ifinfo
@example
J_@{ij@} = d f_i / d x_j
@end example
@end ifinfo
@noindent
where @math{x_0 = A}, @math{x_1 = \lambda} and @math{x_2 = b}.

@example
@verbatiminclude examples/expfit.c
@end example
@noindent
The main part of the program sets up a Levenberg-Marquardt solver and
some simulated random data.  The data uses the known parameters
(1.0,5.0,0.1) combined with gaussian noise (standard deviation = 0.1)
over a range of 40 timesteps.  The initial guess for the parameters is
chosen as (0.0, 1.0, 0.0).

@example
@verbatiminclude examples/nlfit.c
@end example
@noindent
The iteration terminates when the change in @math{x} is smaller than
0.0001, as both an absolute and relative change.  Here are the results
of running the program:

@smallexample
iter: 0 x=1.00000000 0.00000000 0.00000000 |f(x)|=117.349
status=success
iter: 1 x=1.64659312 0.01814772 0.64659312 |f(x)|=76.4578
status=success
iter: 2 x=2.85876037 0.08092095 1.44796363 |f(x)|=37.6838
status=success
iter: 3 x=4.94899512 0.11942928 1.09457665 |f(x)|=9.58079
status=success
iter: 4 x=5.02175572 0.10287787 1.03388354 |f(x)|=5.63049
status=success
iter: 5 x=5.04520433 0.10405523 1.01941607 |f(x)|=5.44398
status=success
iter: 6 x=5.04535782 0.10404906 1.01924871 |f(x)|=5.44397
chisq/dof = 0.800996
A      = 5.04536 +/- 0.06028
lambda = 0.10405 +/- 0.00316
b      = 1.01925 +/- 0.03782
status = success
@end smallexample
@noindent
The approximate values of the parameters are found correctly, and the
chi-squared value indicates a good fit (the chi-squared per degree of
freedom is approximately 1).  In this case the errors on the parameters
can be estimated from the square roots of the diagonal elements of the
covariance matrix.  If the chi-squared value shows a poor fit (i.e.
@c{$\chi^2/(n-p) \gg 1$}
@math{chi^2/dof >> 1}) then the error estimates obtained from the
covariance matrix will be too small.  In the example program the error
estimates are multiplied by
@c{$\sqrt{\chi^2/(n-p)}$}
@math{\sqrt@{\chi^2/dof@}} in this case, a common way of increasing the
errors for a poor fit.  Note that a poor fit will result from the use of
an inappropriate model, and the scaled error estimates may then be
outside the range of validity for gaussian errors.
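This scaling takes only a few lines.  The fragment below is a sketch
along those lines, reusing @code{s}, @code{covar}, @code{n} and @code{p}
from the earlier sketches; it needs @file{gsl/gsl_blas.h} for the norm
and @file{gsl/gsl_math.h} for @code{GSL_MAX_DBL}:

@example
double chi = gsl_blas_dnrm2 (s->f);   /* |f(x)| at the minimum */
double dof = n - p;

/* sqrt(chi^2/dof); errors are inflated for a poor fit but
   never shrunk for a good one */
double c = GSL_MAX_DBL (1.0, chi / sqrt (dof));

printf ("A = %.5f +/- %.5f\n",
        gsl_vector_get (s->x, 0),
        c * sqrt (gsl_matrix_get (covar, 0, 0)));
@end example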
@iftex
@sp 1
@center @image{fit-exp,3.4in}
@end iftex

@node References and Further Reading for Nonlinear Least-Squares Fitting
@section References and Further Reading

The @sc{minpack} algorithm is described in the following article,

@itemize @asis
@item
J.J. Mor@'e, @cite{The Levenberg-Marquardt Algorithm: Implementation and
Theory}, Lecture Notes in Mathematics, v630 (1978), ed G. Watson.
@end itemize

@noindent
The following paper is also relevant to the algorithms described in this
section,

@itemize @asis
@item
J.J. Mor@'e, B.S. Garbow, K.E. Hillstrom, ``Testing Unconstrained
Optimization Software'', ACM Transactions on Mathematical Software,
Vol 7, No 1 (1981), p 17--41.
@end itemize