      <li> void (*USERFCN2)(mode, ndim, x, fx, gx, Hx, result):  for NLF2
      <li> void (*USERFCNLSQ0)(ndim, x, lsfx, result):  for LSQNLF
      <li> void (*USERFCNLSQ1)(mode, ndim, x, lsfx, lsgx, result):  for LSQNLF
    </ul>
</ul>

The arguments of these functions are fairly straightforward.
<em>ndim</em> is an integer that specifies the dimension of the
problem, <em>x</em> is a ColumnVector that contains the values of the
optimization variables, <em>fx</em> is the value of the objective
function at <em>x</em>, <em>gx</em> is a ColumnVector containing the
gradient of the objective function at <em>x</em>, <em>Hx</em> is a
SymmetricMatrix containing the Hessian of the objective function at
<em>x</em>, <em>mode</em> is an integer encoding of the type of
evaluation requested (i.e., function, gradient, Hessian), and
<em>result</em> is an integer encoding of the type of evaluations
available.  For the least squares operator, <em>lsfx</em> is a
ColumnVector with each entry containing the value of one of the least
squares terms, and <em>lsgx</em> is a Matrix containing the Jacobian
of the least squares operator at <em>x</em>.

The ColumnVector, Matrix, and SymmetricMatrix objects are described
in the <a href="http://robertnz.net/nm11.htm"> NEWMAT
documentation</a>.  The <a href="#example"> Example Problems</a>
demonstrate how to implement the user-defined functions.

Nonlinear constraints are quite similar in nature to the objective
function.  In fact, they are constructed using the function objects in
the <a href="#problem"> Problem Setup</a> section.
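As an illustration of the USERFCN1-style interface, here is a sketch of
an evaluator for the Rosenbrock function using the mode/result encoding
described above.  So that it stands alone, std::vector is used in place
of NEWMAT's ColumnVector, and the mode-bit values are assumptions; the
real types and constants are defined in the OPT++ headers.

```cpp
#include <cmath>
#include <vector>

// Assumed bit-flag values for the evaluation mode; the real constants
// come from the OPT++ headers.
const int NLPFunction = 1;
const int NLPGradient = 2;

// USERFCN1-style evaluator for the Rosenbrock function
//   f(x) = 100*(x2 - x1^2)^2 + (1 - x1)^2,
// with std::vector standing in for NEWMAT's ColumnVector.
void rosen1(int mode, int ndim, const std::vector<double>& x,
            double& fx, std::vector<double>& gx, int& result)
{
    if (ndim != 2) return;
    double x1 = x[0], x2 = x[1];
    result = 0;

    if (mode & NLPFunction) {            // function value requested
        fx = 100.0 * std::pow(x2 - x1 * x1, 2) + std::pow(1.0 - x1, 2);
        result |= NLPFunction;           // record what was evaluated
    }
    if (mode & NLPGradient) {            // gradient requested
        gx[0] = -400.0 * x1 * (x2 - x1 * x1) - 2.0 * (1.0 - x1);
        gx[1] =  200.0 * (x2 - x1 * x1);
        result |= NLPGradient;
    }
}
```

A USERFCN2-style function would additionally fill in the Hessian
<em>Hx</em> when the corresponding mode bit is set.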
The interfaces for the subroutines that evaluate the nonlinear
constraints, however, are slightly different from those for evaluating
the objective function.  The interfaces are as follows:

<ul>
  <li> void (*USERNLNCON0)(ndim, x, cx, result): for nonlinear
  constraints with no analytic first or second derivatives
  <li> void (*USERNLNCON1)(mode, ndim, x, cx, cgx, result): for
  nonlinear constraints with analytic or finite-difference first
  derivative, but no analytic second derivative
  <li> void (*USERNLNCON2)(mode, ndim, x, cx, cgx, cHx, result): for
  nonlinear constraints with analytic first and second derivatives
</ul>

The arguments of these functions are fairly straightforward.
<em>ndim</em> is an integer that specifies the dimension of the
problem, <em>x</em> is a ColumnVector that contains the values of the
optimization variables, <em>cx</em> is a ColumnVector with each entry
containing the value of one of the nonlinear constraints at
<em>x</em>, <em>cgx</em> is a Matrix with each column containing the
gradient of one of the nonlinear constraints at <em>x</em>,
<em>cHx</em> is an OptppArray of SymmetricMatrix with each matrix
containing the Hessian of one of the nonlinear constraints at
<em>x</em>, <em>mode</em> is an integer encoding of the type of
evaluation requested (i.e., function, gradient, Hessian), and
<em>result</em> is an integer encoding of the type of evaluations
available.  A description of OptppArray can be found in the <a
href="annotated.html"> detailed documentation</a>.  The ColumnVector
and SymmetricMatrix objects are described in the <a
href="http://robertnz.net/nm11.htm"> NEWMAT documentation</a>.  The
<a href="#example"> Example Problems</a> demonstrate how to implement
the user-defined functions.  In particular, <a href="example2.html">
Example 2</a> demonstrates the use of constraints.

\section algorithm Algorithm Setup

Once the nonlinear function (see <a href="#problem"> Problem
Setup</a>) has been set up, it is time to construct the algorithm
object.
The algorithm object defines the optimization algorithm to be used and
provides it with a pointer to the problem to be solved.  Once this is
done, any algorithmic parameters can be set, and the problem can be
solved.  The full set of algorithms provided and their constructors
can be found throughout the documentation.  We list the most common
ones here, grouped according to the type of problem expected.

<ul>
  <li> problem has no analytic derivatives (NLF0)
    <ul>
      <li> OptPDS(&nlp):  parallel direct search method; handles general
           constraints, but only bounds robustly
      <li> OptGSS(&nlp, &gsb):  generating set search method; handles
           unconstrained problems only
    </ul>
  <li> problem has analytic or finite-difference first derivative
       (NLF1 or FDNLF1)
    <ul>
      <li> OptCG(&nlp):  conjugate gradient method; handles
           unconstrained problems only
      <li> OptLBFGS(&nlp):  limited-memory quasi-Newton method for
           unconstrained problems; uses L-BFGS for Hessian approximation
      <li> OptQNewton(&nlp):  quasi-Newton method for unconstrained
           problems; uses BFGS for Hessian approximation
      <li> OptFDNewton(&nlp):  Newton method for unconstrained
           problems; uses second-order finite differences for Hessian
           approximation
      <li> OptBCQNewton(&nlp):  quasi-Newton method for
           bound-constrained problems; uses BFGS for Hessian
           approximation
      <li> OptBaQNewton(&nlp):  quasi-Newton method for
           bound-constrained problems; uses BFGS for Hessian
           approximation
      <li> OptBCEllipsoid(&nlp):  ellipsoid method for
           bound-constrained problems
      <li> OptFDNIPS(&nlp):  Newton nonlinear interior-point
           method for generally constrained problems; uses
           second-order finite differences for Hessian approximation
      <li> OptQNIPS(&nlp):  quasi-Newton nonlinear interior-point
           method for generally constrained problems; uses BFGS for
           Hessian approximation
    </ul>
  <li> problem
has analytic first and second derivatives (NLF2)
    <ul>
      <li> OptNewton(&nlp):  Newton method for unconstrained
           problems
      <li> OptBCNewton(&nlp):  Newton method for bound-constrained
           problems
      <li> OptBaNewton(&nlp):  Newton method for bound-constrained
           problems
      <li> OptNIPS(&nlp):  nonlinear interior-point method for
           generally constrained problems
    </ul>
  <li> problem has least squares function operator (LSQNLF)
    <ul>
      <li> OptDHNIPS(&nlp):  Disaggregated Hessian Newton nonlinear
           interior-point method for generally constrained problems;
           uses Gauss-Newton approximations to compute the objective
           function gradient and Hessian; uses quasi-Newton
           approximations for constraint Hessians
    </ul>
</ul>

In these constructors, <em>nlp</em> is the nonlinear function/problem
object created as described in the <a href="#problem"> Problem
Setup</a> section, and <em>gsb</em> is the generating set method
described in the <a href="gensetGuide-format.html"> Generating Set
Search</a> section.  All of the Newton methods have a choice of
globalization strategy: line search, trust region, or trust
region-PDS.  Furthermore, there are numerous algorithmic parameters
that can be set.  These can be found in the <a href="annotated.html">
detailed documentation</a> for each particular method.

Once the algorithm object has been instantiated and the desired
algorithmic parameters set, the problem can be solved by calling
that algorithm's optimize method.  Tutorial examples of how to set
up and use the optimization algorithms can be found in the <a
href="#example"> Example Problems</a>.

\section example Example Problems

In order to clarify the explanations given above, we now step through
a couple of example problems.  These examples are intended to serve as
a very basic tutorial.
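Before stepping through the full examples, the overall
construct-then-optimize flow can be sketched.  The classes below are
simplified stand-ins, not the real OPT++ API (which lives in headers
such as NLF.h and the Opt*.h files); they only mimic the pattern of
wrapping the problem in a function object, handing a pointer to an
algorithm object, setting an algorithmic parameter, and calling
optimize.  The "algorithm" here is plain gradient descent, purely for
illustration.

```cpp
#include <cmath>
#include <vector>

// Hypothetical stand-ins mimicking the OPT++ usage pattern; the real
// classes (NLF1, OptQNewton, ...) are declared in the OPT++ headers.
typedef void (*UserFcn)(int ndim, const std::vector<double>& x,
                        double& fx, std::vector<double>& gx);

struct SketchNLF1 {                 // problem object: dimension + evaluator
    int ndim;
    UserFcn fcn;
    std::vector<double> x;          // current iterate
    SketchNLF1(int n, UserFcn f, const std::vector<double>& x0)
        : ndim(n), fcn(f), x(x0) {}
};

struct SketchOpt {                  // algorithm object holding a problem pointer
    SketchNLF1* nlp;
    int max_iter = 100;             // an algorithmic parameter
    double step = 0.1;              // fixed step length, illustration only
    explicit SketchOpt(SketchNLF1* p) : nlp(p) {}
    void setMaxIter(int m) { max_iter = m; }
    void optimize() {               // plain gradient descent
        double fx;
        std::vector<double> gx(nlp->ndim);
        for (int it = 0; it < max_iter; ++it) {
            nlp->fcn(nlp->ndim, nlp->x, fx, gx);
            for (int i = 0; i < nlp->ndim; ++i)
                nlp->x[i] -= step * gx[i];
        }
    }
};

// A simple convex objective: f(x) = (x1 - 3)^2 + (x2 + 1)^2,
// minimized at (3, -1).
void quad(int ndim, const std::vector<double>& x,
          double& fx, std::vector<double>& gx)
{
    fx = std::pow(x[0] - 3.0, 2) + std::pow(x[1] + 1.0, 2);
    gx[0] = 2.0 * (x[0] - 3.0);
    gx[1] = 2.0 * (x[1] + 1.0);
}
```

The usage mirrors the tutorial pattern: construct the problem object,
construct the algorithm object with &nlp, set any parameters, and call
its optimize method.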
We recommend looking at the <a href="examples.html"> additional
examples</a> provided in the documentation in order to obtain a
broader view of the capabilities of OPT++.  In addition, the <a
href="annotated.html"> detailed documentation</a> contains a complete
list of the capabilities available.

<ul>
  <li> \ref example1
  <li> \ref example2
</ul>

<p> Previous Section:  \ref InstallDoc | Next Section:  \ref
AlternativeFunctions | Back to the <a href="index.html"> Main
Page</a> </p>

Last revised <em>July 25, 2006</em>
*/
