
lpboost.h

Machine learning algorithms in C++: Lemga is a C++ package which consists of classes for several learning models and generic learning algorithms.
// -*- C++ -*-
#ifndef __LEMGA_AGGREGATING_LPBOOST_H__
#define __LEMGA_AGGREGATING_LPBOOST_H__

/** @file
 *  @brief Declare @link lemga::LPBoost LPBoost@endlink class.
 *
 *  $Id: lpboost.h 2167 2005-06-09 00:05:52Z ling $
 */

#include "boosting.h"

namespace lemga {

/** @brief %LPBoost (Linear-Programming %Boosting).
 *
 *  With a similar idea to the original %LPBoost [1], which solves
 *  \f{eqnarray*}
 *          \min && -\rho + D \sum_i \xi_i \\
 *  \textrm{s.t.}&& y_i \left(\sum_t w_t h_t(x_i)\right) \ge \rho - \xi_i,\\
 *               && \xi_i \ge 0, \quad w_t \ge 0, \quad \sum_t w_t = 1.
 *  \f}
 *  we instead implement the algorithm to solve
 *  \f{eqnarray*}
 *          \min && \sum_t w_t + C \sum_i \xi_i \\
 *  \textrm{s.t.}&& y_i \left(\sum_t w_t h_t(x_i)\right) \ge 1 - \xi_i,\\
 *               && \xi_i \ge 0, \quad w_t \ge 0.
 *  \f}
 *  by column generation. Note that the dual problem is
 *  \f{eqnarray*}
 *          \max && \sum_i u_i \\
 *  \textrm{s.t.}&& \sum_i u_i y_i h_t(x_i) \le 1, \qquad (*)\\
 *               && 0 \le u_i \le C.
 *  \f}
 *  Column generation corresponds to generating the constraints (*).
 *  We actually use individual upper bounds @f$C_i@f$ proportional to the
 *  examples' initial weights.
 *
 *  If we treat @f$v_i@f$, the normalized version of @f$u_i@f$, as the
 *  sample weight, and @f$\Sigma_u = \sum_i u_i@f$ as the normalization
 *  constant, (*) is the same as
 *     @f[ \Sigma_u (1 - 2 e(h_t, v)) \le 1, @f]
 *  which means
 *     @f[ e(h_t, v) \ge \frac12 (1 - \Sigma_u^{-1}).\qquad (**) @f]
 *  Assume that we have found @f$h_1, \dots, h_T@f$ so far. Solving the dual
 *  problem with @f$T@f$ constraints of type (*) gives us @f$\Sigma_u@f$. If
 *  every remaining @f$h@f$ in @f$\cal{H}@f$ satisfies
 *     @f$ e(h, v) \ge \frac12 (1 - \Sigma_u^{-1}),@f$
 *  the duality condition tells us that even if we set @f$w=0@f$ for those
 *  remaining @f$h@f$, the solution is still optimal. Thus, we can train the
 *  weak learner with sample weight @f$v@f$ in each iteration, and terminate
 *  as soon as the best hypothesis satisfies (**).
 *
 *  [1] A. Demiriz, K. P. Bennett, and J. Shawe-Taylor. Linear programming
 *      boosting via column generation. <EM>Machine Learning</EM>,
 *      46(1-3):225-254, 2002.
 */
class LPBoost : public Boosting {
    REAL RegC;

public:
    explicit LPBoost (REAL _C = 1.0) : Boosting(false) { set_C(_C); }
    explicit LPBoost (std::istream& is) { is >> *this; }

    virtual const id_t& id () const;
    virtual LPBoost* create () const { return new LPBoost(); }
    virtual LPBoost* clone () const { return new LPBoost(*this); }

    virtual REAL train ();

    /// The regularization constant C.
    REAL C () const { return RegC; }
    /// Set the regularization constant C.
    void set_C (REAL _C) { assert(_C >= 0); RegC = _C; }
};

} // namespace lemga

#ifdef  __LPBOOST_H__
#warning "This header file may conflict with another `lpboost.h' file."
#endif
#define __LPBOOST_H__

#endif
