
aggregating.cpp

Collection: Machine learning algorithms written in C++. Lemga is a C++ package which consists of classes for several learning models and gener…
Language: C++
/** @file
 *  $Id: aggregating.cpp 2511 2005-11-23 03:34:52Z ling $
 */

#include <assert.h>
#include "aggregating.h"

namespace lemga {

/** Delete learning models stored in @a lm. This is only used in
 *  operator= and load().
 *  @note @a lm_base is not deleted since load() will need it
 *  @todo make it public under the name clear()? Or remove it
 */
void Aggregating::clear () {
    lm.clear();
    n_in_agg = 0;
}

/** @copydoc LearnModel(UINT,UINT) */
Aggregating::Aggregating ()
    : LearnModel(0,0), lm_base(), n_in_agg(0), max_n_model(0)
{ /* empty */ }

/** @note Brand new models are used in the new born object. Thus
 *  any future change to the learning models @a a will not affect
 *  this model.
 */
Aggregating::Aggregating (const Aggregating& a)
    : LearnModel(a), n_in_agg(a.n_in_agg), max_n_model(a.max_n_model)
{
    lm_base = a.lm_base->clone();
    const UINT lms = a.lm.size();
    assert(n_in_agg <= lms);
    for (UINT i = 0; i < lms; ++i)
        lm.push_back(a.lm[i]->clone());
}

/** @copydoc Aggregating(const Aggregating&) */
const Aggregating& Aggregating::operator= (const Aggregating& a) {
    if (&a == this) return *this;

    clear();
    LearnModel::operator=(a);
    lm_base = a.lm_base->clone();
    n_in_agg = a.n_in_agg;
    max_n_model = a.max_n_model;

    const UINT lms = a.lm.size();
    assert(n_in_agg <= lms);
    for (UINT i = 0; i < lms; ++i)
        lm.push_back(a.lm[i]->clone());

    return *this;
}

bool Aggregating::serialize (std::ostream& os, ver_list& vl) const {
    SERIALIZE_PARENT(LearnModel, os, vl, 1);
    if (!(os << lm.size() << ' ' << (lm_base != 0) << '\n'))
        return false;
    if (lm_base != 0)
        if (!(os << *lm_base)) return false;
    for (UINT i = 0; i < lm.size(); ++i)
        if (!(os << *lm[i])) return false;
    return true;
}

bool Aggregating::unserialize (std::istream& is, ver_list& vl, const id_t& d) {
    if (d != id() && d != empty_id) return false;
    UNSERIALIZE_PARENT(LearnModel, is, vl, 1, v);

    if (v == 0) /* Take care of _n_in and _n_out */
        if (!(is >> _n_in >> _n_out)) return false;

    UINT t3, t4;
    if (!(is >> t3 >> t4) || t4 > 1) return false;

    clear();
    if (!t4) lm_base = 0;
    else {
        if (v == 0) { /* ignore a one-line comment */
            char c; is >> c;
            assert(c == '#');
            is.ignore(100, '\n');
        }
        LearnModel* p = (LearnModel*) Object::create(is);
        if (p == 0) return false;
        lm_base = p;
    }

    for (UINT i = 0; i < t3; ++i) {
        LearnModel* p = (LearnModel*) Object::create(is);
        if (p == 0 || p->n_input() != _n_in || p->n_output() != _n_out)
            return false;
        lm.push_back(p);
    }
    n_in_agg = t3;
    return true;
}

/** @brief Set the base learning model.
 *  @todo Allowed to call when !empty()? */
void Aggregating::set_base_model (const LearnModel& blm) {
    lm_base = blm.clone();
    if (!_n_in)  _n_in  = lm_base->n_input();
    if (!_n_out) _n_out = lm_base->n_output();
    assert((blm.n_input() == n_input() || !blm.n_input()) &&
           (blm.n_output() == n_output() || !blm.n_output()));
}

/** @brief Specify the number of hypotheses used in aggregating.
 *  @return @c false if @a n is larger than size().
 *
 *  Usually all the hypotheses are used in aggregating. However, a
 *  smaller number @a n can also be specified so that only the first
 *  @a n hypotheses are used.
 */
bool Aggregating::set_aggregation_size (UINT n) {
    if (n <= size()) {
        n_in_agg = n;
        return true;
    }
    else return false;
}

void Aggregating::initialize () {
    clear();
    assert(lm_base != NULL);
    lm_base->initialize();
}

void Aggregating::set_train_data (const pDataSet& pd, const pDataWgt& pw) {
    LearnModel::set_train_data(pd, pw);
    for (UINT i = 0; i < lm.size(); ++i)
        if (lm[i] != 0)
            lm[i]->set_train_data(ptd, ptw);
}

} // namespace lemga
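For context, below is a minimal usage sketch (not part of the original file) of the Aggregating interface defined above. Only the calls that appear in this file (set_base_model, initialize, set_train_data, set_aggregation_size) are taken from the source; the concrete Aggregating subclass, the concrete base LearnModel, and the train() call are assumptions about the rest of the Lemga framework and are marked as such in the comments.

#include "aggregating.h"

using namespace lemga;

/* Sketch only: `agg` would be a concrete Aggregating subclass (e.g. a
 * boosting- or bagging-style ensemble) and `base` a concrete LearnModel;
 * train() is assumed to be declared by LearnModel and implemented by the
 * subclass. */
void train_sketch (Aggregating& agg, const LearnModel& base,
                   const pDataSet& pd, const pDataWgt& pw)
{
    agg.set_base_model(base);     // a clone of `base` becomes lm_base
    agg.initialize();             // clears stored hypotheses, re-initializes lm_base
    agg.set_train_data(pd, pw);   // also forwarded to any already-stored hypotheses
    agg.train();                  // assumed: the subclass's training loop

    // Optionally aggregate only the first 10 hypotheses; set_aggregation_size
    // returns false if fewer than 10 hypotheses are stored.
    if (!agg.set_aggregation_size(10)) {
        /* keep using all size() hypotheses */
    }
}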
