<html xmlns:mwsh="http://www.mathworks.com/namespace/mcode/v1/syntaxhighlight.dtd">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<!--
This HTML is auto-generated from an M-file.
To make changes, update the M-file and republish this document.
-->
<title>nonlinear_conjugate_gradient</title>
<meta name="generator" content="MATLAB 7.2">
<meta name="date" content="2006-09-28">
<meta name="m-file" content="script_nonlinear_conjugate_gradient"><style>
body {
background-color: white;
margin:10px;
}
h1 {
color: #990000;
font-size: x-large;
}
h2 {
color: #990000;
font-size: medium;
}
/* Make the text shrink to fit narrow windows, but not stretch too far in
wide windows. On Gecko-based browsers, the shrink-to-fit doesn't work. */
p,h1,h2,div.content div {
/* for MATLAB's browser */
width: 600px;
/* for Mozilla, but the "width" tag overrides it anyway */
max-width: 600px;
/* for IE */
width:expression(document.body.clientWidth > 620 ? "600px": "auto" );
}
pre.codeinput {
background: #EEEEEE;
padding: 10px;
}
@media print {
pre.codeinput {word-wrap:break-word; width:100%;}
}
span.keyword {color: #0000FF}
span.comment {color: #228B22}
span.string {color: #A020F0}
span.untermstring {color: #B20000}
span.syscmd {color: #B28C00}
pre.codeoutput {
color: #666666;
padding: 10px;
}
pre.error {
color: red;
}
p.footer {
text-align: right;
font-size: xx-small;
font-weight: lighter;
font-style: italic;
color: gray;
}
</style></head>
<body>
<div class="content">
<h1>nonlinear_conjugate_gradient</h1>
<introduction>
<p>Minimize a function using the nonlinear conjugate gradient method with a secant line search and Polak-Ribi&egrave;re updates.</p>
</introduction>
<h2>Contents</h2>
<div>
<ul>
<li><a href="#1">Syntax</a></li>
<li><a href="#2">Description</a></li>
<li><a href="#8">Input</a></li>
<li><a href="#14">Output</a></li>
<li><a href="#16">Signature</a></li>
<li><a href="#18">See also</a></li>
</ul>
</div>
<h2>Syntax<a name="1"></a></h2><pre> [x,iter,time_elapsed]=nonlinear_conjugate_gradient(start_x, NCG_parameters, gradient_function, gradient_arguments,verbose)</pre><h2>Description<a name="2"></a></h2>
<p>See nonlinear_conjugate_gradient_driver for an example of how to use this function.</p>
<p>Uses only gradient information.</p>
<p>No function evaluations needed.</p>
<p>No second derivatives needed.</p>
<p>This function implements the Polak-Ribi&egrave;re variant of the nonlinear conjugate gradient algorithm, which needs only the gradient and never evaluates the objective function itself. Because the line search uses a secant method, the algorithm also avoids computing second derivatives (the Hessian matrix).
</p>
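<p>The outer loop of this scheme can be sketched as follows. This is an illustrative outline under the conventions stated above, not the function's actual source; <tt>secant_line_search</tt> is a hypothetical helper standing in for the secant iteration governed by epsilon_secant, iter_max_secant, and sigma_0, and the real stopping and restart logic may differ in detail:</p>
<pre class="codeinput">r = -feval(gradient_function, x, gradient_arguments);  % residual = negative gradient
d = r;                                                 % first direction: steepest descent
delta_0 = r' * r;                                      % initial squared gradient norm
for iter = 1:NCG_parameters.iter_max_CG
    alpha = secant_line_search(x, d);                  % hypothetical secant line-search helper
    x     = x + alpha * d;
    r_old = r;
    r     = -feval(gradient_function, x, gradient_arguments);
    beta  = max((r' * (r - r_old)) / (r_old' * r_old), 0);  % Polak-Ribiere; restart when negative
    d     = r + beta * d;
    if r' * r &lt;= NCG_parameters.epsilon_CG^2 * delta_0      % relative gradient-norm test
        break;
    end
end
</pre>
<p>Note that the Polak-Ribi&egrave;re beta is clamped at zero, which restarts the method along the steepest-descent direction whenever the update would become negative.</p>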
<h2>Input<a name="8"></a></h2>
<div>
<ul>
<li>start_x --> d&times;1 column vector, the starting point.</li>
<li>NCG_parameters --> parameters of the algorithm (see below).</li>
<li>gradient_function --> name of the function that evaluates the gradient (returns a d&times;1 column vector).</li>
<li>gradient_arguments --> function-specific arguments passed through to gradient_function.</li>
<li>verbose --> if 1, progress is displayed.</li>
</ul>
</div>
<p>Nonlinear Conjugate Gradient parameters</p>
<div>
<ul>
<li>NCG_parameters.epsilon_CG --- CG error tolerance</li>
<li>NCG_parameters.epsilon_secant --- secant method error tolerance</li>
<li>NCG_parameters.iter_max_CG --- maximum number of CG iterations</li>
<li>NCG_parameters.iter_max_secant --- maximum number of secant method iterations</li>
<li>NCG_parameters.sigma_0 --- secant method step parameter</li>
</ul>
</div>
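<p>For example, the parameter struct might be populated as follows (the values shown are illustrative, not recommended defaults):</p>
<pre class="codeinput">NCG_parameters.epsilon_CG      = 1e-6;  % CG error tolerance
NCG_parameters.epsilon_secant  = 1e-3;  % secant method error tolerance
NCG_parameters.iter_max_CG     = 100;   % maximum outer (CG) iterations
NCG_parameters.iter_max_secant = 20;    % maximum secant iterations per line search
NCG_parameters.sigma_0         = 0.1;   % initial secant step
</pre>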
<p>The gradient function should have the form</p>
<p>[gradient]=gradient_function(x, gradient_arguments);</p>
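<p>For instance, a gradient function for the quadratic f(x) = (1/2)&thinsp;x'Ax - b'x could look like this (<tt>quadratic_gradient</tt> and the fields of <tt>gradient_arguments</tt> are illustrative names, not part of this package):</p>
<pre class="codeinput">function [gradient] = quadratic_gradient(x, gradient_arguments)
% Gradient of f(x) = 0.5*x'*A*x - b'*x
gradient = gradient_arguments.A * x - gradient_arguments.b;
</pre>
<p>Since gradient_function is the <i>name</i> of the function, it would be passed as a string:</p>
<pre class="codeinput">[x, iter, time_elapsed] = nonlinear_conjugate_gradient(start_x, NCG_parameters, 'quadratic_gradient', gradient_arguments, 1);
</pre>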
<h2>Output<a name="14"></a></h2>
<div>
<ul>
<li>x --> d&times;1 column vector, the minimizer found.</li>
<li>iter --> Number of outer iterations.</li>
<li>time_elapsed --> Time taken in seconds.</li>
</ul>
</div>
<h2>Signature<a name="16"></a></h2>
<div>
<ul>
<li><b>Author:</b> Vikas Chandrakant Raykar
</li>
<li><b>E-Mail:</b> <a href="mailto:vikas.raykar@siemens.com">vikas.raykar@siemens.com</a>, <a href="mailto:vikas@cs.umd.edu">vikas@cs.umd.edu</a></li>
<li><b>Date:</b> June 29, 2006</li>
</ul>
</div>
<h2>See also<a name="18"></a></h2>
<p><a href="nonlinear_conjugate_gradient_driver.html">nonlinear_conjugate_gradient_driver</a>, <a href="preconditioned_nonlinear_conjugate_gradient.html">preconditioned_nonlinear_conjugate_gradient</a></p>
<p class="footer"><br>
Published with wg_publish; V1.0<br></p>
</div>
</body>
</html>