<html>
<head><title>Netlab Reference Manual conjgrad</title></head>
<body>
<H1> conjgrad</H1>
<h2>Purpose</h2>
Conjugate gradients optimization.
<p><h2>Description</h2>
<CODE>[x, options, flog, pointlog] = conjgrad(f, x, options, gradf)</CODE>
uses a conjugate gradients algorithm to find the minimum of the function
<CODE>f(x)</CODE> whose gradient is given by <CODE>gradf(x)</CODE>.  Here
<CODE>x</CODE> is a row vector and <CODE>f</CODE> returns a scalar value.
The point at which <CODE>f</CODE> has a local minimum is returned as
<CODE>x</CODE>.  The function value at that point is returned in
<CODE>options(8)</CODE>.  A log of the function values after each cycle is
(optionally) returned in <CODE>flog</CODE>, and a log of the points visited
is (optionally) returned in <CODE>pointlog</CODE>.
<p><CODE>conjgrad(f, x, options, gradf, p1, p2, ...)</CODE> allows
additional arguments to be passed to <CODE>f()</CODE> and <CODE>gradf()</CODE>.
<p>The optional parameters have the following interpretations.
<p><CODE>options(1)</CODE> is set to 1 to display error values; this also
logs the error values in the return argument <CODE>flog</CODE>, and the
points visited in the return argument <CODE>pointlog</CODE>.  If
<CODE>options(1)</CODE> is set to 0, then only warning messages are
displayed.  If <CODE>options(1)</CODE> is -1, then nothing is displayed.
<p><CODE>options(2)</CODE> is a measure of the absolute precision required
for the value of <CODE>x</CODE> at the solution.  If the absolute difference
between the values of <CODE>x</CODE> between two successive steps is less
than <CODE>options(2)</CODE>, then this condition is satisfied.
<p><CODE>options(3)</CODE> is a measure of the precision required of the
objective function at the solution.  If the absolute difference between the
objective function values between two successive steps is less than
<CODE>options(3)</CODE>, then this condition is satisfied.  Both this and
the previous condition must be satisfied for termination.
<p><CODE>options(9)</CODE> is set to 1 to check the user defined gradient
function.
<p><CODE>options(10)</CODE> returns the total number of function evaluations
(including those in any line searches).
<p><CODE>options(11)</CODE> returns the total number of gradient evaluations.
<p><CODE>options(14)</CODE> is the maximum number of iterations; default 100.
<p><CODE>options(15)</CODE> is the precision in parameter space of the line
search; default <CODE>1e-4</CODE>.
<p><h2>Examples</h2>
An example of the use of the additional arguments is the minimization of an
error function for a neural network:
<PRE>
w = conjgrad('neterr', w, options, 'netgrad', net, x, t);
</PRE>
<p><h2>Algorithm</h2>
The conjugate gradients algorithm constructs search directions
<CODE>di</CODE> that are conjugate: i.e. <CODE>di*H*d(i-1) = 0</CODE>, where
<CODE>H</CODE> is the Hessian matrix.  This means that minimising along
<CODE>di</CODE> does not undo the effect of minimising along the previous
direction.  The Polak-Ribiere formula is used to calculate new search
directions.  The Hessian is not calculated, so there is only an
<CODE>O(W)</CODE> storage requirement (where <CODE>W</CODE> is the number of
parameters).  However, relatively accurate line searches must be used (the
default line-search precision is <CODE>1e-4</CODE>; see
<CODE>options(15)</CODE>).
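<p>For illustration only, the following fragment implements the
Polak-Ribiere recursion described above on a small quadratic test problem.
It is a minimal sketch, not the Netlab implementation: the matrix
<CODE>A</CODE>, the vector <CODE>b</CODE>, and the closed-form line search
are assumptions that hold only for a quadratic objective.
<PRE>
% Textbook Polak-Ribiere conjugate gradients on f(x) = 0.5*x*A*x' - b*x'
% (row-vector convention, as used by conjgrad).  Illustrative only.
A = [3 1; 1 2];  b = [1 1];           % assumed quadratic test problem
x = [0 0];                            % starting point (row vector)
g = x*A - b;                          % gradient of the quadratic
d = -g;                               % first direction: steepest descent
for n = 1:20
  alpha = -(g*d') / (d*A*d');         % exact line search (quadratic only)
  x = x + alpha*d;                    % step to the line minimum
  gnew = x*A - b;                     % gradient at the new point
  beta = (gnew*(gnew - g)') / (g*g'); % Polak-Ribiere coefficient
  d = -gnew + beta*d;                 % new direction, conjugate to the last
  g = gnew;
  if norm(g) &lt; 1e-8, break; end    % stop once the gradient vanishes
end
</PRE>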
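<p>As a fuller example of the calling interface, here is a hedged sketch
that sets the options described above and then calls
<CODE>conjgrad</CODE>.  The Rosenbrock pair <CODE>rosen</CODE>/<CODE>rosegrad</CODE>
is assumed to be available on the path; any objective and gradient written
for row vectors can be substituted.
<PRE>
options = zeros(1, 18);   % all-default options vector
options(1) = 1;           % display error values each cycle
options(2) = 1e-6;        % precision required of x
options(3) = 1e-6;        % precision required of f(x)
options(14) = 200;        % allow up to 200 iterations (default 100)

x0 = [-1 1];              % starting point (row vector)
[x, options, flog] = conjgrad('rosen', x0, options, 'rosegrad');
fprintf('minimum f = %g after %d function evaluations\n', ...
        options(8), options(10));
</PRE>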
<p><h2>See Also</h2>
<CODE><a href="graddesc.htm">graddesc</a></CODE>, <CODE><a href="linemin.htm">linemin</a></CODE>,
<CODE><a href="minbrack.htm">minbrack</a></CODE>, <CODE><a href="quasinew.htm">quasinew</a></CODE>,
<CODE><a href="scg.htm">scg</a></CODE>
<hr>
<b>Pages:</b> <a href="index.htm">Index</a>
<hr>
<p>Copyright (c) Ian T Nabney (1996-9)
</body></html>
