<!-- ============ METHOD DETAIL ========== --><A NAME="method_detail"><!-- --></A><TABLE BORDER="1" WIDTH="100%" CELLPADDING="3" CELLSPACING="0" SUMMARY=""><TR BGCOLOR="#CCCCFF" CLASS="TableHeadingColor"><TH ALIGN="left" COLSPAN="1"><FONT SIZE="+2"><B>Method Detail</B></FONT></TH></TR></TABLE>

<A NAME="initialize()"><!-- --></A><H3>initialize</H3>
<PRE>public boolean <B>initialize</B>()</PRE>
<DL><DD>Overrides the initialize() method in the base class. Initializes member data and prepares for execution of the first step. This method "resets" the algorithm.<P>
<DD><DL><DT><B>Specified by:</B><DD><CODE><A HREF="Algorithm.html#initialize()">initialize</A></CODE> in class <CODE><A HREF="Algorithm.html" title="class in <Unnamed>">Algorithm</A></CODE></DL></DD>
<DD><DL><DT><B>Returns:</B><DD>true</DL></DD></DL>
<HR>

<A NAME="run()"><!-- --></A><H3>run</H3>
<PRE>public void <B>run</B>()</PRE>
<DL><DD>Implementation of the run() method from the Runnable interface. Determines the current step and calls the appropriate method.<P>
<DD><DL><DT><B>Specified by:</B><DD><CODE>run</CODE> in interface <CODE>java.lang.Runnable</CODE><DT><B>Specified by:</B><DD><CODE><A HREF="Algorithm.html#run()">run</A></CODE> in class <CODE><A HREF="Algorithm.html" title="class in <Unnamed>">Algorithm</A></CODE></DL></DD>
<DD><DL></DL></DD></DL>
<HR>

<A NAME="trainFull()"><!-- --></A><H3>trainFull</H3>
<PRE>public boolean <B>trainFull</B>()</PRE>
<DL><DD>Trains an RVM probabilistic classifier on the input data and targets provided. The inputs and targets must have a one-to-one correspondence, and each target must be either 0 (out-of-class) or 1 (in-class). The training scheme follows section 3 of [1]. It is assumed that the class data and targets are already set when this method is called. The training algorithm for RVMs in [1] proceeds in four iterative steps:
<OL>
<LI>Prune away any weights whose hyperparameters have gone to infinity.</LI>
<LI>Estimate the most probable weights: find the weights that maximize equation (24) of [1]. The iteratively reweighted least squares (IRLS) algorithm is used to find w_mp.</LI>
<LI>Estimate the covariance of a Gaussian approximation to the posterior distribution over the weights (the posterior is what we ultimately want to model), centered at w_mp.</LI>
<LI>Estimate the hyperparameters that govern the weights. This is done by evaluating the derivative with respect to the hyperparameters and finding the maximizing values.</LI>
</OL>
Steps 1 through 4 are carried out iteratively until a suitable convergence criterion is satisfied.<P>
<DD><DL><DT><B>Returns:</B><DD>boolean value indicating status</DL></DD></DL>
<HR>

<A NAME="initFullTrain()"><!-- --></A><H3>initFullTrain</H3>
<PRE>public boolean <B>initFullTrain</B>()</PRE>
<DL><DD>Initializes the data structures in preparation for a full training pass.<P>
<DD><DL><DT><B>Returns:</B><DD>boolean value indicating status</DL></DD></DL>
<HR>

<A NAME="updateHyperparametersFull()"><!-- --></A><H3>updateHyperparametersFull</H3>
<PRE>public boolean <B>updateHyperparametersFull</B>()</PRE>
<DL><DD>Updates the hyperparameter values.<P>
<DD><DL><DT><B>Returns:</B><DD>boolean value indicating status</DL></DD></DL>
<HR>

<A NAME="irlsTrain()"><!-- --></A><H3>irlsTrain</H3>
<PRE>public boolean <B>irlsTrain</B>()</PRE>
<DL><DD>Completes one pass of IRLS training to update the weights given the currently assigned hyperparameters. In this step we find the weights that maximize equation (24) of [1]; the iteratively reweighted least squares algorithm is used to find w_mp. It proceeds as follows:
<OL TYPE="a">
<LI>Initialize.</LI>
<LI>Loop until convergence:
<OL>
<LI>compute the B matrix;</LI>
<LI>compute the Hessian and the gradient;</LI>
<LI>update the weights with the formula w_new = w - inv(Hessian) * gradient;</LI>
<LI>check for convergence.</LI>
</OL>
</LI>
</OL><P>
<DD><DL><DT><B>Returns:</B><DD>boolean value indicating status</DL></DD></DL>
<HR>

<A NAME="pruneAndUpdate()"><!-- --></A><H3>pruneAndUpdate</H3>
<PRE>public boolean <B>pruneAndUpdate</B>()</PRE>
<DL><DD>Prunes off vectors whose hyperparameters have gone to infinity and updates the working data sets.<P>
<DD><DL><DT><B>Returns:</B><DD>true</DL></DD></DL>
<HR>

<A NAME="pruneWeights()"><!-- --></A><H3>pruneWeights</H3>
<PRE>public boolean <B>pruneWeights</B>()</PRE>
<DL><DD>Prunes off vectors that attain a zero weight during training. Auxiliary data structures that are indexed according to the working set are also updated.<P>
<DD><DL><DT><B>Returns:</B><DD>true</DL></DD></DL>
<HR>

<A NAME="computeVarianceCholesky()"><!-- --></A><H3>computeVarianceCholesky</H3>
<PRE>public boolean <B>computeVarianceCholesky</B>()</PRE>
<DL><DD>Computes the diagonal elements of the inverse from the Cholesky decomposition.<P>
<DD><DL><DT><B>Returns:</B><DD>true</DL></DD></DL>
<HR>

<A NAME="computeSigma()"><!-- --></A><H3>computeSigma</H3>
<PRE>public boolean <B>computeSigma</B>()</PRE>
<DL><DD>Computes the sigma value defined on p. 218 of [1].<P>
<DD><DL><DT><B>Returns:</B><DD>true</DL></DD></DL>
<HR>

<A NAME="computeLikelihood()"><!-- --></A><H3>computeLikelihood</H3>
<PRE>public double <B>computeLikelihood</B>()</PRE>
<DL><DD>Computes the log likelihood of the weights given the data.
We would expect this value to increase as training proceeds<P><DD><DL><DT><B>Returns:</B><DD>Likelihood of weights given the data</DL></DD></DL><HR><A NAME="computeDecisionRegions()"><!-- --></A><H3>computeDecisionRegions</H3><PRE>public void <B>computeDecisionRegions</B>()</PRE><DL><DD>Computes the line of discrimination for the classification algorithms when the corresponding flags have been initialized<P><DD><DL></DL></DD></DL><HR><A NAME="computeErrors()"><!-- --></A><H3>computeErrors</H3><PRE>public void <B>computeErrors</B>()</PRE><DL><DD>Computes the classification error for the data points<P><DD><DL></DL></DD></DL><!-- ========= END OF CLASS DATA ========= --><HR><!-- ======= START OF BOTTOM NAVBAR ====== --><A NAME="navbar_bottom"><!-- --></A><A HREF="#skip-navbar_bottom" title="Skip navigation links"></A><TABLE BORDER="0" WIDTH="100%" CELLPADDING="1" CELLSPACING="0" SUMMARY=""><TR><TD COLSPAN=2 BGCOLOR="#EEEEFF" CLASS="NavBarCell1"><A NAME="navbar_bottom_firstrow"><!-- --></A><TABLE BORDER="0" CELLPADDING="0" CELLSPACING="3" SUMMARY=""> <TR ALIGN="center" VALIGN="top"> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="package-summary.html"><FONT CLASS="NavBarFont1"><B>Package</B></FONT></A> </TD> <TD BGCOLOR="#FFFFFF" CLASS="NavBarCell1Rev"> <FONT CLASS="NavBarFont1Rev"><B>Class</B></FONT> </TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="package-tree.html"><FONT CLASS="NavBarFont1"><B>Tree</B></FONT></A> </TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="deprecated-list.html"><FONT CLASS="NavBarFont1"><B>Deprecated</B></FONT></A> </TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="index-all.html"><FONT CLASS="NavBarFont1"><B>Index</B></FONT></A> </TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="help-doc.html"><FONT CLASS="NavBarFont1"><B>Help</B></FONT></A> </TD> </TR></TABLE></TD><TD ALIGN="right" VALIGN="top" ROWSPAN=3><EM></EM></TD></TR><TR><TD BGCOLOR="white" CLASS="NavBarCell2"><FONT SIZE="-2"> <A 
HREF="AlgorithmPF.html" title="class in <Unnamed>"><B>PREV CLASS</B></A> <A HREF="AlgorithmSVM.html" title="class in <Unnamed>"><B>NEXT CLASS</B></A></FONT></TD><TD BGCOLOR="white" CLASS="NavBarCell2"><FONT SIZE="-2"> <A HREF="index.html?AlgorithmRVM.html" target="_top"><B>FRAMES</B></A> <A HREF="AlgorithmRVM.html" target="_top"><B>NO FRAMES</B></A> <SCRIPT type="text/javascript"> <!-- if(window==top) { document.writeln('<A HREF="allclasses-noframe.html"><B>All Classes</B></A>'); } //--></SCRIPT><NOSCRIPT> <A HREF="allclasses-noframe.html"><B>All Classes</B></A></NOSCRIPT></FONT></TD></TR><TR><TD VALIGN="top" CLASS="NavBarCell3"><FONT SIZE="-2"> SUMMARY: NESTED | FIELD | <A HREF="#constructor_summary">CONSTR</A> | <A HREF="#method_summary">METHOD</A></FONT></TD><TD VALIGN="top" CLASS="NavBarCell3"><FONT SIZE="-2">DETAIL: FIELD | <A HREF="#constructor_detail">CONSTR</A> | <A HREF="#method_detail">METHOD</A></FONT></TD></TR></TABLE><A NAME="skip-navbar_bottom"></A><!-- ======== END OF BOTTOM NAVBAR ======= --><HR></BODY></HTML>
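The per-iteration update that irlsTrain() documents (compute the B matrix, form the Hessian and gradient, take the Newton step w_new = w - inv(Hessian) * gradient, check convergence) can be sketched on a toy problem. The class below is hypothetical and not part of AlgorithmRVM: it holds the hyperparameter alpha fixed (as the Gaussian prior precision on the weights, following equation (24) of [1]) and fits a two-weight (bias + slope) logistic model, which is the IRLS inner loop of step 2 of trainFull() reduced to a 2x2 system.

```java
// Illustrative sketch only; names here are hypothetical, not AlgorithmRVM API.
public class IrlsSketch {

    static double sigmoid(double a) { return 1.0 / (1.0 + Math.exp(-a)); }

    // Returns {w0, w1} maximizing the penalized log-likelihood
    // (equation (24) of [1] with the hyperparameter alpha held fixed).
    static double[] train(double[] x, double[] t, double alpha) {
        double w0 = 0.0, w1 = 0.0;
        for (int iter = 0; iter < 100; iter++) {
            // Gradient of the penalized log-likelihood: Phi'(t - y) - alpha*w
            double g0 = -alpha * w0, g1 = -alpha * w1;
            // Negative Hessian: Phi' B Phi + alpha*I, where B = diag(y(1-y))
            double h00 = alpha, h01 = 0.0, h11 = alpha;
            for (int n = 0; n < x.length; n++) {
                double y = sigmoid(w0 + w1 * x[n]);
                g0 += t[n] - y;
                g1 += (t[n] - y) * x[n];
                double b = y * (1.0 - y);          // entries of the B matrix
                h00 += b;
                h01 += b * x[n];
                h11 += b * x[n] * x[n];
            }
            // Newton step w_new = w - inv(Hessian) * gradient, written with
            // the closed-form inverse of the 2x2 negative Hessian.
            double det = h00 * h11 - h01 * h01;
            double d0 = (h11 * g0 - h01 * g1) / det;
            double d1 = (h00 * g1 - h01 * g0) / det;
            w0 += d0;
            w1 += d1;
            if (Math.abs(d0) + Math.abs(d1) < 1e-9) break;  // convergence check
        }
        return new double[] { w0, w1 };
    }

    public static void main(String[] args) {
        double[] x = { -2, -1, 1, 2 };
        double[] t = {  0,  0, 1, 1 };   // targets must be 0 or 1, as documented
        double[] w = train(x, t, 0.1);
        for (int n = 0; n < x.length; n++) {
            int pred = sigmoid(w[0] + w[1] * x[n]) > 0.5 ? 1 : 0;
            System.out.println("x=" + x[n] + " -> class " + pred);
        }
    }
}
```

Because the prior term alpha*I is always added to the negative Hessian, the system stays positive definite even on separable data, which is what keeps the Newton step well defined; in the full RVM, each weight gets its own alpha_i, and weights whose alpha_i grows without bound are removed by pruneAndUpdate().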