maximize.hlp
{smcl}
{* 22mar2005}{...}
{cmd:help maximize}
{hline}

{title:Title}

{p2colset 5 21 23 2}{...}
{p2col: {hi:[R] maximize} {hline 2}}Details of iterative maximization{p_end}
{p2colreset}{...}


{title:Syntax}

{phang}Maximum likelihood optimization

{p 8 20 2}
{it:mle_cmd}
{it:...} [{cmd:,} {it:options}]

{phang}Set default maximum iterations

{p 8 20 2}
{cmd:set} {cmd:maxiter} {it:#} [{cmd:,} {opt perm:anently}]

{synoptset 27}
{synopthdr}
{synoptline}
{synopt:[{cmdab:no:}]{opt lo:g}}display an iteration log of the log likelihood; typically, the default{p_end}
{synopt:{opt tr:ace}}display current parameter vector in iteration log{p_end}
{synopt:{opt grad:ient}}display current gradient vector in iteration log{p_end}
{synopt:{opt hess:ian}}display current negative Hessian matrix in iteration log{p_end}
{synopt:{opt showstep}}report steps within an iteration log{p_end}
{synopt:{cmdab:shownr:tolerance}}report the current value of g*inv(H)*g' in iteration log{p_end}
{synopt:{opt tech:nique(algorithm_spec)}}maximization technique{p_end}
{synopt:{opt iter:ate(#)}}perform maximum of {it:#} iterations; default is
	{cmd:iterate(16000)}{p_end}
{synopt:{opt tol:erance(#)}}tolerance for the coefficient vector; see 
        {help maximize##tolerance:Options} for the defaults{p_end}
{synopt:{opt ltol:erance(#)}}tolerance for the log likelihood; see
        {help maximize##ltolerance:Options} for the defaults{p_end}
{synopt:{opt gtol:erance(#)}}optional tolerance for the gradient relative to
	the coefficients{p_end}
{synopt:{opt nrtol:erance(#)}}tolerance for the scaled gradient;
        {help maximize##nrtolerance:Options} for the defaults{p_end}
{synopt:{opt nonrtol:erance}}ignore the {opt nrtolerance()} option{p_end}
{synopt:{opt dif:ficult}}use a different stepping algorithm in nonconcave
	regions{p_end}
{synopt:{opt from(init_specs)}}initial values for the coefficients{p_end}
{synoptline}
{p2colreset}{...}
{p 4 6 2}
where {it:algorithm_spec} is

{p 8 8 2}
{it:algorithm} [ {it:#} [ {it:algorithm} [{it:#}] ] ... ]

{p 4 6 2}
{it:algorithm} is {c -(}{opt nr} {c |} {opt bhhh} {c |} {opt dfp} {c |} {opt bfgs}{c )-}

{p 4 6 2}
and {it:init_specs} is one of

{p 8 20 2}{it:{help matname}} [{cmd:,} {cmd:skip} {cmd:copy} ]{p_end}

{p 8 20 2}{c -(} [{it:eqname}{cmd::}]{it:name} {cmd:=} {it:#} |
	{cmd:/}{it:eqname} {cmd:=} {it:#} {c )-} [{it:...}]{p_end}

{p 8 20 2}{it:#} [{it:#} {it:...}]{cmd:,} {cmd:copy}{p_end}


{title:Description}

{pstd}
Stata has two maximum likelihood optimizers: one is used by internally
coded commands, and the other is the {helpb ml} command used by estimators
implemented as ado-files.  Both optimizers use the Newton{c -}Raphson method
with step halving (to avoid downhill steps) and special fixups when 
they encounter nonconcave regions of the likelihood.
The two optimizers are similar
but differ in the details of their implementation.  For information on
programming maximum likelihood estimators in ado-files, see {helpb ml},
{bf:[R] ml}, and
{it:{browse "http://www.stata.com/bookstore/mle.html":Maximum Likelihood Estimation with Stata,}} {browse "http://www.stata.com/bookstore/mle.html":2nd edition} (Gould, Pitblado, and Sribney 2003).

{pstd}
{cmd:set} {cmd:maxiter} specifies the default maximum number of iterations for
estimation commands that iterate.  The initial value is {cmd:16000}, and
{it:#} can be {cmd:0} to {cmd:16000}.  To change the maximum number of
iterations performed by a particular estimation command, you need not reset
{cmd:maxiter}; you can specify the {opt iterate(#)} option.  When
{opt iterate(#)} is not specified, the {cmd:maxiter} value is used.
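
{pstd}
For example (a sketch using the shipped {cmd:auto} dataset), the first
estimation command caps its own iterations at 50 via {opt iterate()} without
changing the default, whereas {cmd:set} {cmd:maxiter} changes the session-wide
default for all subsequent estimation commands:

{phang2}{cmd:. sysuse auto}{p_end}
{phang2}{cmd:. logit foreign mpg weight, iterate(50)}{p_end}
{phang2}{cmd:. set maxiter 1000}{p_end}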


{title:Maximization options}

{phang}
{opt log} and {opt nolog} specify whether an iteration log showing the
progress of the log likelihood is to be displayed.  For most commands, the log
is displayed by default, and {opt nolog} suppresses it.  For a few commands
(such as the {opt svy} maximum likelihood estimators), you must specify
{opt log} to see the log.

{phang}
{opt trace} adds to the iteration log a display of the current
parameter vector.

{phang}
{opt gradient} ({helpb ml}-programmed estimators only) adds to the
iteration log a display of the current gradient vector.

{phang}
{opt hessian} ({helpb ml}-programmed estimators only) adds to the
iteration log a display of the current negative Hessian matrix.

{phang}
{opt showstep} ({helpb ml}-programmed estimators only) adds to the
iteration log a report on the steps within an iteration.  This option was
added so that developers at StataCorp could view the stepping when they were
improving the {help ml} optimizer code.  At this point, it mainly provides
entertainment.

{phang}
{cmd:shownrtolerance} ({help ml}-programmed estimators only) adds to the
iteration log the current value of g*inv(H)*g', which is compared with the
value of {cmd:nrtolerance()} to test for convergence.  This value is only
computed and reported when all other stopping criteria have been met.

{phang}
{opt technique(algorithm_spec)}
({helpb ml}-programmed estimators only) specifies how the likelihood function
is to be maximized.  The following algorithms are currently implemented in 
{helpb ml}.  For details, see
{it:{browse "http://www.stata.com/bookstore/mle.html":Maximum Likelihood Estimation with Stata,}} {browse "http://www.stata.com/bookstore/mle.html":2nd edition} (Gould, Pitblado, and Sribney 2003).

{pmore}
        {cmd:technique(nr)} specifies Stata's modified Newton-Raphson (NR)
        algorithm.

{pmore}
        {cmd:technique(bhhh)} specifies the Berndt-Hall-Hall-Hausman (BHHH)
        algorithm.

{pmore}
        {cmd:technique(dfp)} specifies the Davidon-Fletcher-Powell (DFP) algorithm.

{pmore}
        {cmd:technique(bfgs)} specifies the Broyden-Fletcher-Goldfarb-Shanno
        (BFGS) algorithm.

{pmore}The default is {cmd:technique(nr)}.

{pmore}
    You can switch between algorithms by specifying more than one in the
    {opt technique()} option.  By default, {cmd:ml} will use an algorithm for
    five iterations before switching to the next algorithm.  To specify a
    different number of iterations, include the number after the technique in
    the option.  For example, specifying {cmd:technique(bhhh 10 nr 1000)}
    requests that {cmd:ml} perform 10 iterations using the BHHH algorithm,
    then perform 1000 iterations using the NR algorithm, and then switch back
    to BHHH for 10 iterations, and so on.  The process continues until
    convergence or until the maximum number of iterations is reached.
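
{pmore}
As a sketch (using {cmd:streg}, an {helpb ml}-programmed estimator, on the
shipped {cmd:cancer} dataset):

{phang2}{cmd:. sysuse cancer}{p_end}
{phang2}{cmd:. stset studytime, failure(died)}{p_end}
{phang2}{cmd:. streg age drug, distribution(weibull) technique(bhhh 10 nr 1000)}{p_end}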

{phang}
{opt vce(vce_spec)} specifies 
the type of variance{c -}covariance matrix.
Depending on the command, {it:vce_spec} is one of the following:
{opt oim}, {opt opg}, {opt native}, {opt bootstrap}, or {opt jackknife}.

{phang2}
{cmd:vce(oim)}, {cmd:vce(opg)}, and {cmd:vce(native)} are documented in
{help ml:[R] ml}; only {opt ml}-programmed estimators will accept these
specific options.

{phang2}
{cmd:vce(bootstrap}[{cmd:,} {it:bootstrap_options}]{cmd:)}
specifies that the standard errors and coefficient covariance matrix be
estimated using the nonparametric bootstrap; see {helpb bootstrap} for
information on {it:bootstrap_options}.

{phang2}
{cmd:vce(jackknife}[{cmd:,} {it:jackknife_options}]{cmd:)}
specifies that the standard errors and coefficient covariance matrix be
estimated using the jackknife; see {helpb jackknife} for
information on {it:jackknife_options}.

{phang}
{opt iterate(#)} specifies the maximum number of iterations.
When the number of iterations equals {cmd:iterate()}, the optimizer stops and
presents the current results; if convergence is declared before that point,
the optimizer stops as soon as it is declared.  Specifying
{cmd:iterate(0)} is useful for viewing results evaluated at the initial value
of the coefficient vector.  Specifying {cmd:iterate(0)} and {cmd:from()} 
together allows you to view results evaluated at a specified coefficient
vector; note, however, that only a few commands allow the {opt from()} option.
The default value of {opt iterate(#)} for both estimators programmed
internally and estimators programmed with {helpb ml} is the current value of
{helpb set maxiter}, which is {cmd:iterate(16000)} by default.
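
{pmore}
For instance, the following sketch displays results at user-supplied starting
values without performing any optimization (assuming the data are already
{helpb stset}; the values in {cmd:b0} are hypothetical, and the command must
accept {opt from()}):

{phang2}{cmd:. matrix b0 = (0.1, 0.2, 0.5)}{p_end}
{phang2}{cmd:. streg age drug, distribution(exponential) from(b0, copy) iterate(0)}{p_end}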


{hline}
{pstd}
Below we describe the four different types of convergence tolerances employed
by Stata estimators, and we describe the {cmd:nonrtolerance} option.  After
these descriptions, we explain how the various tolerances are used to 
determine whether the maximization algorithm has converged.

{phang}
{marker tolerance}
{cmd:tolerance(#)} specifies the tolerance for the
coefficient vector.  When the relative change in the coefficient vector from
one iteration to the next is less than or equal to {opt tolerance()}, the
{opt tolerance()} convergence criterion is satisfied.

{pmore}
{cmd:tolerance(1e-4)} is the default for estimators programmed
internally in Stata.

{pmore}
{cmd:tolerance(1e-6)} is the default for estimators
programmed with {helpb ml}.

{phang}
{marker ltolerance}
{opt ltolerance(#)} specifies the tolerance for the log
likelihood.  When the relative change in the log likelihood from one
iteration to the next is less than or equal to {opt ltolerance()}, the 
{opt ltolerance()} convergence criterion is satisfied.

{pmore}
{cmd:ltolerance(0)} is the default for estimators programmed internally in
Stata.

{pmore}
{cmd:ltolerance(1e-7)} is the default for estimators programmed with {helpb ml}.

{phang}
{opt gtolerance(#)} ({helpb ml}-programmed estimators only)
specifies the tolerance for the gradient relative to the coefficients.
When |g_i*b_i| {ul:<} {opt gtolerance()} for all parameters b_i and the
corresponding elements of the gradient g_i, the gradient tolerance criterion
is met.  By default, this criterion is not checked and so there is no
default value for the gradient tolerance.

{phang}
{marker nrtolerance}
{opt nrtolerance(#)} ({helpb ml}-programmed estimators only)
specifies the tolerance for the scaled gradient.
Convergence is declared when g*inv(H)*g' < {opt nrtolerance()}.  
{opt nrtolerance()} differs from {opt gtolerance()} in that the gradient is
scaled by the inverse of the Hessian.  The default is {cmd:nrtolerance(1e-5)}.
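
{pmore}
For example, to tighten the scaled-gradient criterion from its default of
{cmd:1e-5} (a sketch with {cmd:streg}, assuming {cmd:stset} data):

{phang2}{cmd:. streg age drug, distribution(weibull) nrtolerance(1e-8)}{p_end}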

{phang}
{opt nonrtolerance} ({helpb ml}-programmed estimators only)
specifies that the default {opt nrtolerance} criterion be turned off.

{pstd}
For Stata estimators programmed internally, convergence is declared when
either the {opt tolerance()} or the {opt ltolerance()} criterion is first
met.  No other criteria are checked.

{pstd}
For ml-programmed estimators, by default convergence is declared when the
{opt nrtolerance()} criterion and either of the {opt tolerance()} or
{opt ltolerance()} criteria have been met.   If {opt nonrtolerance} is
specified, then convergence is declared when either of the {opt tolerance()}
or {opt ltolerance()} criteria has been met.

{pstd}
If {opt gtolerance()} is specified, then the {opt gtolerance()} criterion
must be met in addition to any other required criteria in order for
convergence to be declared.{p_end}
{hline}


{phang}
{opt difficult} ({helpb ml}-programmed estimators only) specifies that
the likelihood function is likely to be difficult to maximize due to
nonconcave regions.  When the message "not concave" appears repeatedly,
{opt ml}'s standard stepping algorithm may not be working well.
{opt difficult} specifies that a different stepping algorithm be used in
nonconcave regions.  There is no guarantee that {opt difficult} will work
better than the default; sometimes it is better, and sometimes it is worse.
You should use the {opt difficult} option only when the default stepper
declares convergence and the last iteration is "not concave", or when the
default stepper is repeatedly issuing "not concave" messages and only
producing tiny improvements in the log likelihood.

{phang}
{opt from()} specifies initial values for the coefficients.  Note that
only a few estimators in Stata currently support this option. You can specify
the initial values in one of three ways: by specifying the name of a
vector containing the initial values (e.g., {cmd:from(b0)} where {cmd:b0} is a
properly labeled vector); by specifying coefficient names with the values
(e.g., {cmd:from(age=2.1 /sigma=7.4)}); or by specifying a list of values
(e.g., {cmd:from(2.1 7.4, copy)}).  {opt from()} is intended for use when
you are doing
bootstraps (see {helpb bootstrap}) and in other special situations (e.g.,
with {cmd:iterate(0)}).  Even when the values specified in {opt from()}
are close to the values that maximize the likelihood, only a few
iterations may be saved.  Poor values in {opt from()} may lead to convergence
problems.
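
{pmore}
The three forms of {opt from()} might look like this, where {cmd:b0} is a
properly labeled row vector and the parameter names are those of the model
being fit:

{phang2}{it:mle_cmd} {it:...}{cmd:, from(b0)}{p_end}
{phang2}{it:mle_cmd} {it:...}{cmd:, from(age=2.1 /sigma=7.4)}{p_end}
{phang2}{it:mle_cmd} {it:...}{cmd:, from(2.1 7.4, copy)}{p_end}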

{phang2}
{opt skip} specifies that any parameters found in the specified
initialization vector but not in the model be ignored.  The default action
is to issue an error message.

{phang2}
{opt copy} specifies that the list of values or the initialization
vector be copied into the initial-value vector by position rather than
by name.


{title:Option for set maxiter}

{phang}
{opt permanently} specifies that, in addition to making the change right now,
the {cmd:maxiter} setting be remembered and become the default setting when
you invoke Stata.


{title:Remarks}

{pstd}
Only in rare circumstances would you ever need to specify any of these
options, with the exception of {opt nolog}.  The {opt nolog} option is useful
for reducing the amount of output appearing in log files.


{title:Also see}

{psee}
Manual:  {bf:[R] maximize}

{psee}
Online:  
{helpb lrtest},
{helpb ml},
{helpb test}
{p_end}
