
📄 adaboost.cpp

📁 Machine learning algorithms written in C++. Lemga is a C++ package which consists of classes for several learning models and gener…
💻 CPP
/** @file
 *  $Id: adaboost.cpp 2508 2005-11-15 08:29:43Z ling $
 */

#include <assert.h>
#include <cmath>
#include <iostream>
#include "adaboost.h"

REGISTER_CREATOR(lemga::AdaBoost);

namespace lemga {

REAL AdaBoost::train () {
    cur_err.resize(n_samples);
    const REAL err = Boosting::train();
    cur_err.clear();
    return err;
}

REAL AdaBoost::linear_weight (const DataWgt& sw, const LearnModel& l) {
    assert(n_output() == l.n_output());

    REAL err = 0;
    for (UINT i = 0; i < n_samples; ++i) {
        if ((cur_err[i] = l.c_error(l.get_output(i), ptd->y(i))) > 0.1)
            err += sw[i];
    }
#if VERBOSE_OUTPUT
    std::cout << "Weighted classification error: " << err*100 << "%\n";
#endif

    if (err >= 0.5) return -1;

    REAL beta;
    if (err <= 0)
        beta = 1000;
    else
        beta = 1 / err - 1;
    return std::log(beta) / 2;
}

REAL AdaBoost::convex_weight (const DataWgt&, const LearnModel&) {
    std::cerr << "Please use the gradient descent methods for"
        " convex combinations\n";
    OBJ_FUNC_UNDEFINED("convex_weight");
}

/* We assume classification problem here. The density update rule is
 *      d <- d * e^(-w y f)
 * if y and f are binary, it is equivalent to say
 *      d <- d * beta for y != f, where beta = e^2w
 */
void AdaBoost::linear_smpwgt (DataWgt& sw) {
    const REAL beta = std::exp(2 * lm_wgt[n_in_agg-1]);
    REAL bw_sum = 0;
    for (UINT i = 0; i < n_samples; ++i) {
        if (cur_err[i] > 0.1)
            sw[i] *= beta;
        bw_sum += sw[i];
    }
    assert(bw_sum != 0);

    for (UINT i = 0; i < n_samples; ++i)
        sw[i] /= bw_sum;
}

} // namespace lemga
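The two routines above are the textbook AdaBoost step: linear_weight() computes the hypothesis weight alpha = log((1 - err)/err) / 2 from the weighted error (c_error is a 0/1 loss, so the "> 0.1" test simply means "misclassified"), and linear_smpwgt() multiplies the weight of each misclassified sample by beta = e^(2*alpha) and renormalizes. Below is a minimal standalone sketch of the same arithmetic, using a made-up toy error vector instead of Lemga's DataWgt/LearnModel types; it is an illustration, not Lemga code.

#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical 0/1 errors of one weak learner on 8 samples (2 mistakes),
    // starting from the uniform distribution.
    std::vector<int>    miss = {1, 0, 0, 1, 0, 0, 0, 0};
    std::vector<double> w(miss.size(), 1.0 / miss.size());

    // Weighted error: the total weight of the misclassified samples.
    double eps = 0;
    for (std::size_t i = 0; i < miss.size(); ++i)
        if (miss[i]) eps += w[i];

    // Hypothesis weight, as returned by linear_weight():
    // alpha = log(1/eps - 1) / 2 = log((1 - eps)/eps) / 2.
    const double alpha = std::log(1 / eps - 1) / 2;

    // Distribution update, as in linear_smpwgt(): multiply the weight of
    // every misclassified sample by beta = e^(2*alpha), then renormalize.
    const double beta = std::exp(2 * alpha);
    double sum = 0;
    for (std::size_t i = 0; i < miss.size(); ++i) {
        if (miss[i]) w[i] *= beta;
        sum += w[i];
    }
    for (std::size_t i = 0; i < miss.size(); ++i)
        w[i] /= sum;

    // eps = 0.25, alpha = log(3)/2 ~ 0.549, beta = 3.
    std::printf("eps = %.3f  alpha = %.3f  beta = %.3f\n", eps, alpha, beta);
    // Each of the two mistakes now carries weight 0.25, i.e. the
    // misclassified samples hold exactly half of the total mass.
    std::printf("w[0] = %.3f  w[1] = %.4f\n", w[0], w[1]);
    return 0;
}

After the update the misclassified samples carry exactly half of the total weight (eps * beta = 1 - eps), which is what forces the next weak learner to concentrate on them. The sketch omits Lemga's edge cases: linear_weight() returns -1 to reject a learner whose weighted error reaches 0.5, and caps beta at 1000 when the error is zero.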
