
network.h

Source code from the SimuroSot 11vs11 robot-soccer simulation group.
/*
 * Lightweight Neural Net ++
 * http://lwneuralnetplus.sourceforge.net/
 *
 * This C++ library provides the class network, which implements a
 * feed forward neural network with backpropagation.
 * You can use logistic or tanh as the sigmoidal function.
 * The library provides on-line training, momentum,
 * batch training and SuperSAB training.
 *
 * By Lorenzo Masetti <lorenzo.masetti@libero.it> and Luca Cinti <lucacinti@supereva.it>
 * Based on the lwneuralnet C library by Peter Van Rossum <petervr@debian.org>,
 * Luca Cinti and Lorenzo Masetti
 * http://lwneuralnet.sourceforge.net
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 */
#ifndef NETWORK_H
#define NETWORK_H

#define NET_LOGISTIC 0
#define NET_TANH 1

#include <stdio.h>
#include <stdexcept>
#include <iostream>
#include <vector>

using namespace std;

/*! \brief Class implementing a feed forward neural network with backpropagation
 *
 * This class implements a feed forward neural network with backpropagation.
 *
 * You can use logistic or tanh as the sigmoidal function.
 *
 * Provides on-line training, momentum,
 * batch training and SuperSAB training.
 */
class network
{
public:
  /*!\brief Public constant for the logistic function */
  static const int LOGISTIC;

  /*!\brief Public constant for the tanh function */
  static const int TANH;

  /*!\brief Constructor for a network.
   * \param activ activation function (network::LOGISTIC or network::TANH)
   * \param no_of_layers Integer.
   * \param ... Sequence of integers.
   *
   * Allocates memory for a neural network with no_of_layers layers,
   * including the input and output layer. The number of neurons in each
   * layer is given as ..., starting with the input layer and ending with
   * the output layer.
   *
   * The parameters of the network are set to default values
   * (for example, momentum is 0).
   * You can change them later through the mutator methods.
   *
   * If no_of_layers < 2, throws a runtime_error exception.
   */
  network (int activ, int no_of_layers, ...);

  /*!\brief Constructor for a network.
   * \param activ activation function (network::LOGISTIC or network::TANH)
   * \param layers vector of integers containing the number of neurons of
   *        each layer
   *
   * Allocates memory for a neural network with layers.size() layers,
   * including the input and output layer. The number of neurons in each
   * layer is given in the vector, starting with the input layer and ending
   * with the output layer.
   *
   * The parameters of the network are set to default values
   * (for example, momentum is 0).
   * You can change them later through the mutator methods.
   *
   * If layers.size() < 2, throws a runtime_error exception.
   */
  network (int activ, std::vector<int> layers);

  /*!\brief Constructor. Load a network from file.
   * \param filename Pointer to the name of the file to load.
   * \param binary If true (the default) the file is binary,
   *        otherwise it is a text file.
   *
   * If filename does not exist, throws a runtime_error exception.
   */
  network (const char *filename, bool binary = true);

  /*! \brief Copy constructor */
  network (const network & b);

  /*!\brief Destructor. Free memory allocated for a network. */
  ~network ();

  /*!\brief Assign random values to all weights in the network.
   * \param range Floating point number.
   *
   * All weights in the neural network are assigned a random value
   * from the interval [-range, range].
   */
  void randomize (double range);

  /****************************************
   * Accessors
   ****************************************/

  /*!\brief Retrieve the momentum of a network.
   * \return Momentum of the neural network.
   */
  double get_momentum () const;

  /*!\brief Retrieve the learning rate of a network.
   * \return Learning rate of the neural network.
   */
  double get_learning_rate () const;

  /*!\brief Retrieve the number of inputs of a network.
   * \return Number of neurons in the input layer of the neural network.
   */
  int get_no_of_inputs () const;

  /*!\brief Retrieve the number of outputs of a network.
   * \return Number of neurons in the output layer of the neural network.
   */
  int get_no_of_outputs () const;

  /*!\brief Retrieve the number of layers of a network.
   * \return Number of layers, including the input and output layers,
   * of the neural network.
   */
  int get_no_of_layers () const;

  /*!\brief Retrieve the number of neurons on a layer of a network.
   * \param l layer index (should be 0 <= l < get_no_of_layers())
   * \return number of neurons on layer l
   */
  int get_no_of_neurons (int l) const;

  /*!\brief Retrieve a weight of a network.
   * \param l Number of the lower layer.
   * \param nl Number of the neuron in the lower layer.
   * \param nu Number of the neuron in the next layer.
   * \return Weight connecting the neuron numbered nl in the layer
   * numbered l with the neuron numbered nu in the layer numbered l+1.
   */
  double get_weight (int l, int nl, int nu) const;

  /*!\brief Retrieve the number of patterns in batch training.
   * \return number of patterns
   */
  int get_no_of_patterns () const;

  /*!\brief Retrieve the activation function of the network
   *        (network::LOGISTIC or network::TANH).
   * \return activation function
   */
  int get_activation () const;

  /*!\brief Retrieve the output error of a network.
   * \return Output error of the neural network.
   *
   * Before calling this routine, compute() and
   * compute_output_error() should have been called to compute outputs
   * for given inputs and to actually compute the output error. This
   * routine merely returns the output error (which is stored internally
   * in the neural network).
   */
  double get_output_error () const;

  /* Accessors for the parameters of SuperSab */

  /*!\brief Retrieve the maximum learning rate allowed in SuperSab mode.
   * \return double maximum learning rate
   *
   * Values of learning rates cannot be greater than this value.
   */
  double get_max_learning_rate ();

  /*!\brief Retrieve the minimum learning rate allowed in SuperSab mode.
   * \return double minimum learning rate
   *
   * Values of learning rates cannot be less than this value.
   */
  double get_min_learning_rate ();

  /*!\brief Retrieve the factor for increasing the learning rate in SuperSab mode.
   * \return double factor for increasing the learning rate
   *
   * In SuperSab mode: if the delta at this step has the same sign as the
   * delta at the previous step, the learning rate of that weight is
   * multiplied by this value.
   */
  double get_ssab_up_factor ();

  /*!\brief Retrieve the factor for decreasing the learning rate in SuperSab mode.
   * \return double factor for decreasing the learning rate
   *
   * In SuperSab mode: if the delta at this step has the opposite sign of the
   * delta at the previous step, the learning rate of that weight is
   * multiplied by this value.
   */
  double get_ssab_down_factor ();

  /****************************************
   * Mutators
   ****************************************/

  /*!\brief Change the learning rate of a network.
   * \param learning_rate Floating point number.
   */
  void set_learning_rate (double learning_rate);

  /*!\brief Set the activation function of the network.
   * \param num_func Number of the function (network::LOGISTIC or network::TANH)
   */
  void set_activation (int num_func);

  /*!\brief Change the momentum of a network.
   * \param momentum Floating point number.
   */
  void set_momentum (double momentum);

  /* Mutators for the parameters of SuperSab training */

  /*!\brief Set the maximum learning rate allowed in SuperSab mode.
   * \param max maximum learning rate
   *
   * Values of learning rates cannot be greater than this value.
   *
   * If the previous maximum learning rate was greater than the new one
   * and SuperSab mode is active, all the learning rates are changed to make
   * them less than the new maximum.
   *
   * So, if you just want to change the default maximum learning rate,
   * call this method before begin_ssab().
   */
  void set_max_learning_rate (double max);

  /*!\brief Set the minimum learning rate allowed in SuperSab mode.
   * \param min minimum learning rate
   *
   * Values of learning rates cannot be less than this value.
   *
   * If the previous minimum learning rate was less than the new one
   * and SuperSab mode is active, all the learning rates are changed to make
   * them greater than the new minimum.
   *
   * So, if you just want to change the default minimum learning rate,
   * call this method before begin_ssab().
   */
  void set_min_learning_rate (double min);

  /*!\brief Set the factor for increasing the learning rate in SuperSab mode.
   * \param factor factor for increasing the learning rate
   *
   * In SuperSab mode: if the delta at this step has the same sign as the
   * delta at the previous step, the learning rate of that weight is
   * multiplied by this value (should be factor > 1).
   */
  void set_ssab_up_factor (double factor);

  /*!\brief Set the factor for decreasing the learning rate in SuperSab mode.
   * \param factor factor for decreasing the learning rate
   *
   * In SuperSab mode: if the delta at this step has the opposite sign of the
   * delta at the previous step, the learning rate of that weight is
   * multiplied by this value (should be 0 < factor < 1).
   */
  void set_ssab_down_factor (double factor);

  /****************************************
   * File I/O for binary files
   ****************************************/

  /*!\brief Write a network to a binary file.
   * \param filename Pointer to the name of the file to write to.
   * \return true on success, false on failure.
   */
  bool save (const char *filename) const;

  /*!\brief Read a network from a binary file.
   * \param filename Pointer to the name of the file to read from.
   *
   * If filename does not exist, or the format of
   * the file is wrong, throws a runtime_error exception.
   *
   * It is possible to import files in the old format
   * used by the C library lwneuralnet.
   */
  void load (const char *filename);

  /****************************************
   * Friendly printing
   ****************************************/

  /*!\brief Write a network to stdout in a friendly format.
   * \param show If show == true, weights are displayed.
   *
   * See also operator<<()
   */
  void friendly_print (const bool show = false) const;

  /****************************************
   * File I/O for text files
   ****************************************/

  /* Please note that the text file format is provided for compatibility
   * with the old lwneuralnet format, but it should not be used.
   *
   * NOTE FOR LWNEURALNET USERS:
   *
   * Text files containing networks created by lwneuralnet might have a
   * different format, which does not have the number of the sigmoidal
   * function as the first information. But, since in old versions the
   * logistic function was the only one provided, you can convert those
   * files with the command
   * echo 0 > newfile.net; cat oldfile.net >> newfile.net
   *
   * Starting from version 0.88, the textload method provides a solution
   * to this problem:
   * if the first number in the text file is >= 2, it is interpreted as the
   * number of layers and the function is set to logistic.
   */

  /*!\brief Write a network to stdout.
   */
  void print () const;

  /*!\brief Write a network to a text file.
   * \param filename Pointer to the name of the file to write to.
   * \return true on success, false on failure.
   */
  bool textsave (const char *filename) const;

  /*!\brief Read a network from a text file.
   * \param filename Pointer to the name of the file to read from.
   *
   * If filename does not exist, throws a runtime_error exception.
   */
  void textload (const char *filename);

  /****************************************
   * Errors
   *
   * Before calling these routines, compute() should have been called to
   * compute the outputs for a given input. These routines compare the
   * actual output of the neural network (which is stored internally in
   * the neural network) and the intended output (in target).
   ****************************************/

  /*!\brief Compute the output error of a network.
   * \param target Pointer to a sequence of floating point numbers.
   * \return Output error of the neural network.
   *
   * The return value is the square of the Euclidean distance between the
   * actual output and the target. This routine also prepares the network
   * for backpropagation training by storing (internally in the neural
   * network) the errors associated with each of the outputs.
   */
  double compute_output_error (const double *target);

  /*!\brief Compute the average error of a network.
   * \param target Pointer to a sequence of floating point numbers.
   * \return Average error of the neural network.
   *
   * The average error is defined as the average of the absolute
   * differences between outputs and targets.
   */
  double compute_average_error (const double *target) const;

  /*!\brief Compute the quadratic error of a network.
   * \param target Pointer to a sequence of floating point numbers.
   * \return Quadratic error of the neural network.
   *
   * The quadratic error is defined as
   * sqrt(sum (T_j - O_j)^2) / N, where the T_j are targets and the O_j
   * are outputs.
   */
  double compute_quadratic_error (const double *target) const;

  /*!\brief Compute the maximum error of a network.
   * \param target Pointer to a sequence of floating point numbers.
   * \return Maximum error of the neural network.
   *
   * The maximum error is defined as the maximum of the absolute
   * differences between outputs and targets.
   */
  double compute_max_error (const double *target) const;

  /****************************************
   * Evaluation and Training
   ****************************************/

  /*!\brief Compute the outputs of a network for given inputs.
   * \param input Pointer to a sequence of floating point numbers.
   * \param output Pointer to a sequence of floating point numbers, or NULL.
   *
   * Compute the outputs of a neural network for given inputs by forward
   * propagating the inputs through the layers. If output is non-NULL, the
   * outputs are copied to output (otherwise they are only stored
   * internally in the network).
   */
  void compute (const double *input, double *output);

  /*!\brief Train a network.
   *
   * Before calling this routine, compute() and
   * compute_output_error() should have been called to compute outputs
   * for given inputs and to prepare the neural network for training by
   * computing the output error. This routine performs the actual training
   * by backpropagating the output error through the layers.
   */
  void train ();

  /****************************************
   * SuperSab
   ****************************************/

  /*!\brief True if SuperSab is active.
   * \return true if SuperSab mode is active, false otherwise.
   */
  bool is_ssab_active () const;

  /*! \brief Count the number of weights of the network.
   * \return number of weights
   */
  int count_weights () const;

  /*!\brief Begin SuperSab mode, setting the nus to the learning rate of the
   *        network.
   *
   * Precondition: (! is_ssab_active()), i.e. begin_ssab was not called before.
   *
   * If is_ssab_active() and you want to reset the values of nus, use
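
The header only declares the compute() / compute_output_error() / train() cycle it documents. As a self-contained illustration of that cycle, here is a single-neuron sketch with a logistic sigmoid; the function name and the constants (learning rate, target, epoch count) are mine, not the library's:

```cpp
#include <cmath>

// Illustrative single-neuron version of the forward/backprop cycle
// documented in network.h. Not part of the library's API.
double train_single_neuron() {
    double w = 0.5, bias = 0.0;             // one weight plus a bias
    const double lr = 0.25;                 // cf. set_learning_rate()
    const double input = 1.0, target = 0.8;
    double err = 0.0;
    for (int epoch = 0; epoch < 1000; ++epoch) {
        // compute(): forward pass through the logistic sigmoid (NET_LOGISTIC)
        double out = 1.0 / (1.0 + std::exp(-(input * w + bias)));
        // compute_output_error(): squared Euclidean distance to the target
        err = (target - out) * (target - out);
        // train(): backpropagate; the logistic derivative is out * (1 - out)
        double delta = (target - out) * out * (1.0 - out);
        w    += lr * delta * input;
        bias += lr * delta;
    }
    return err;                             // cf. get_output_error()
}
```

With the real class, the doc comments above imply the same loop shape: call compute(input, output), then compute_output_error(target), then train(), repeating over the training patterns.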
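
The SuperSab accessors and mutators describe a per-weight adaptive learning rate ("nu"): multiplied by an up-factor (> 1) when the current delta has the same sign as the previous one, by a down-factor (between 0 and 1) otherwise, and clamped to [min, max]. A one-weight sketch of that rule, with illustrative default factors (this excerpt does not show the library's actual defaults):

```cpp
// Hypothetical one-weight SuperSab update; names and defaults are
// illustrative, not the library's API.
double ssab_step(double nu, double delta, double prev_delta,
                 double up = 1.2, double down = 0.5,
                 double min_rate = 1e-6, double max_rate = 10.0) {
    if (delta * prev_delta > 0.0)
        nu *= up;            // consistent gradient direction: accelerate
    else if (delta * prev_delta < 0.0)
        nu *= down;          // sign flip (overshoot): slow down
    if (nu > max_rate) nu = max_rate;   // cf. set_max_learning_rate()
    if (nu < min_rate) nu = min_rate;   // cf. set_min_learning_rate()
    return nu;
}
```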
