
📄 svmfusvmsmallopt.h

📁 This is SvmFu, a package for training and testing support vector machines (SVMs). It is written in C++.
//     This is a part of the SvmFu library, a library for training
//     Support Vector Machines.
//     Copyright (C) 2000  rif and MIT
//
//     Contact: rif@mit.edu
//
//     This program is free software; you can redistribute it and/or
//     modify it under the terms of the GNU General Public License as
//     published by the Free Software Foundation; either version 2 of
//     the License, or (at your option) any later version.
//
//     This program is distributed in the hope that it will be useful,
//     but WITHOUT ANY WARRANTY; without even the implied warranty of
//     MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
//     GNU General Public License for more details.
//
//     You should have received a copy of the GNU General Public
//     License along with this program; if not, write to the Free
//     Software Foundation, Inc., 59 Temple Place, Suite 330, Boston,
//     MA 02111-1307 USA

#ifndef SVMFU_SVM_SMALL_OPT_HEADER
#define SVMFU_SVM_SMALL_OPT_HEADER

#include "SvmFuSvmConstants.h"
#include "SvmFuSvmTypedefs.h"
#include "SvmFuSvmLargeOpt.h"
#include "SvmFuSvmBase.h"
#include "SvmFuSvmKernCache.h"

//! Non-chunking SMO Svm
/*!
// This class is derived from SvmFuSvmBase, and is used for solving
// small- to medium-sized optimization problems.  It doesn't necessarily
// generate the entire kernel matrix, but assumes that it has enough
// memory to do so.  It does NOT exploit symmetry, because this hurts
// cache coherence too much.
//
// The class can operate in two modes, fullCache or unbndCache.
// In fullCache mode, the entire gradient vector is computed (incrementally
// from its last value) at every step.  When unbndCache mode is entered,
// the alphas are saved, and from then on, only the alphas that were
// unbndSupVecs AT THE TIME unbndCache MODE WAS ENTERED are updated, or have
// their gradients looked at.  (A few extra vectors which were "close" to
// being unbounded may be included in this "unbounded set.")  When fullCache
// mode is reentered, all the gradients are updated using the old values
// of the alphas.
//
// Whenever a kernel row is needed, all the entries of that row are generated.
// Previously generated entries are read from other rows.  It stores the
// kernel values in an SvmKernCache object.
//
// This class can be used as a building block for solving larger problems.
*/
template <class DataPt, class KernVal>
class SvmSmallOpt : public virtual SvmBase<DataPt, KernVal> {
public:
    SvmSmallOpt(int svmSize, const IntVec y,
                DataPt *trnSetPtr,
                const KernVal (*kernProdFuncPtr)(const DataPt &pt1,
                                                 const DataPt &pt2),
                double C = 10, double tol = 10E-4, double eps = 10E-12,
                SvmKernCache<DataPt, KernVal> *kernCache = 0);

    virtual ~SvmSmallOpt();

    virtual double outputAtTrainingExample(int ex) const;
    virtual double dualObjFunc() const;

    virtual void setAlpha(int ex, double newAlpha);

    void useFullCache();
    void useUnbndCacheOnly();
    bool fullCacheOnP() const;

    void optimize();

    //! For computing b at the end of the optimization.
    void fixB(bool printInfo = true);

    //! For use by modification classes (such as SvmTransduct).
    //! You can get into trouble with this if you don't know what
    //! you're doing.
    void addToCachedGradient(int ex, double amt);

    //! Note that if eps and tol are not chosen sufficiently small,
    //! with (eps << tol <<<<< C), extreme brokenness can result.  This is
    //! the library user's responsibility.
    void setTolerance(double tol);
    double getTolerance() const;

    // This is NOT allowed.  If you want to see how the kernCache
    // changes, pass one in and keep the pointer.
    // SvmKernCache<DataPt, KernVal> *getKernCache() const;

    //! This will cause offsetVec[i] to be added whenever we calculate
    //! the output at point i.  The classic use of this is when there
    //! are SVs that aren't in the working set (call the set of such SVs J):
    //! set offset[i] = \sum_{j \in J} y_j*alpha_j*K(x_i, x_j).
    //! Note that the computation of the objective function is also
    //! affected by this.
    void setOffsetVec(const DoubleVec offsetVec); // COPY
    double getOffset(int ex) const;

    double getCachedGradient(int ex) const;

    //! This is the sum of the alphas that aren't in the working set.
    //! It's so that we can get the correct objective function when
    //! the LargeOpt builds chunks.
    void setAlphaOffset(double alphaOffset);
    double getAlphaOffset() const;

    int getStepsTaken() const;

private:
    DoubleVec oldAlphas_;
    DoubleVec cachedGradients_;

    bool takeStep(int ex1, int ex2);

    IntVec workingSet_, nonWorkingSet_, totalSet_;
    int workingSetSize_, nonWorkingSetSize_;

    bool fullCacheOnP_;

    int stepsTaken_; //!< The number of optimization steps taken.

    void setTopBottomPosNeg();
    void chooseExamples();

    double topPos_, bottomPos_, topNeg_, bottomNeg_;
    int topPosInd_, bottomPosInd_, topNegInd_, bottomNegInd_;
    int ex1_, ex2_;
    double maxVal_;
    double tol_;

    SvmKernCache<DataPt, KernVal> *kernCache_;
    bool kernCacheCreatedP_;

    double alphaOffset_;
    DoubleVec offsetVec_;

    // Moved into SvmFuSvmKernCache:
    // KernVal cachedKernProd(int ex1, int ex2);
    // const KernVal *cachedKernProdRowPtr(int ex);
    // void generateKernelRow(int ex);
    // KernVal **kernelRows_;
    // int numRowsGenerated_;
    // BoolVec kernelRowsGeneratedP_;

    // We have no interest in copies.
    SvmSmallOpt(const SvmSmallOpt&);
    SvmSmallOpt& operator=(const SvmSmallOpt&);
};

#endif // SVMFU_SVM_SMALL_OPT_HEADER
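The class comment above describes an SMO-style solver, and the private `takeStep(ex1, ex2)` method is where the classic two-variable analytic update would happen. As a standalone illustration only (this is not SvmFu code; the function and struct names below are hypothetical, and kernel values and errors are passed in directly so that no SvmFu types are needed), here is a sketch of that update: clip the new alpha2 to the feasibility box implied by the equality constraint, then recover alpha1 from it.

```cpp
#include <algorithm>
#include <cmath>

// Result of one hypothetical two-variable SMO step.
struct SmoStep {
    double a1, a2;   // updated alphas for the two chosen examples
    bool changed;    // false when the step degenerates or is below eps
};

// E1, E2 are the prediction errors f(x_i) - y_i; k11, k12, k22 are the
// three kernel values among the two points; C is the box constraint.
SmoStep smoTwoVarUpdate(double alpha1, double alpha2,
                        int y1, int y2,
                        double E1, double E2,
                        double k11, double k12, double k22,
                        double C, double eps = 1e-12) {
    // Bounds on alpha2 from 0 <= a <= C and y1*a1 + y2*a2 = const.
    double L, H;
    if (y1 != y2) {
        L = std::max(0.0, alpha2 - alpha1);
        H = std::min(C,   C + alpha2 - alpha1);
    } else {
        L = std::max(0.0, alpha1 + alpha2 - C);
        H = std::min(C,   alpha1 + alpha2);
    }
    double eta = k11 + k22 - 2.0 * k12;  // curvature along the constraint line
    if (eta <= 0.0 || L >= H)
        return {alpha1, alpha2, false};
    // Unconstrained optimum for alpha2, then clip to the box [L, H].
    double a2 = alpha2 + y2 * (E1 - E2) / eta;
    a2 = std::min(H, std::max(L, a2));
    if (std::abs(a2 - alpha2) < eps)
        return {alpha1, alpha2, false};
    // The equality constraint determines the matching change in alpha1.
    double a1 = alpha1 + y1 * y2 * (alpha2 - a2);
    return {a1, a2, true};
}
```

In the real class, a step like this would also trigger the incremental gradient update described in the fullCache/unbndCache discussion above, since every alpha change shifts the gradient of all (or all cached) examples by a kernel-weighted amount.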
