
network.h

From the resource collection 也是遗传算法的源代码 ("genetic algorithm source code")
/***************************************************************************
 *                        network.h  -  description
 *                           -------------------
 *   copyright            : (C) 2001, 2002 by Matt Grover
 *   email                : mgrover@amygdala.org
 ***************************************************************************/

/***************************************************************************
 *                                                                         *
 *   This program is free software; you can redistribute it and/or modify  *
 *   it under the terms of the GNU General Public License as published by  *
 *   the Free Software Foundation; either version 2 of the License, or     *
 *   (at your option) any later version.                                   *
 *                                                                         *
 ***************************************************************************/

#ifndef NETWORK_H
#define NETWORK_H

#include <amygdala/types.h>

// Check for g++ version 3.0.0 or higher
#if GCC_VERSION >= 30000
    #include <ext/hash_map>
#else
    #include <hash_map>
#endif

#include <string>
#include <queue>
#include <vector>
#include <amygdala/mpspikeinput.h>

using namespace std;

enum NEvent { NOACTION, RMSPIKE, SPIKE, INPUTSPIKE, RESPIKE };

class Neuron;
class FunctionLookup;
class Layer;
class SpikeInput;
class Synapse;

/** @class Network network.h amygdala/network.h
 * @brief This class manages the NN as it runs.
 *
 * Network acts as a container for Neurons and Layers and
 * also coordinates spike transmission between neurons.
 * Amygdala supports multi-threading through the use of the
 * MpNetwork class, which allows neural nets to be partitioned
 * across multiple Network objects. MpNetwork handles all communication
 * between Networks and Neurons running in separate threads. This
 * model will be extended in the future to allow Amygdala networks
 * to run on clustered computer systems.
 * <P>If there is a need to run two or more entirely separate
 * neural nets at the same time, it is recommended that they be put into
 * separate processes using fork() (or any other method that is useful).
 * It would be possible to run separate networks by using MpNetwork, but
 * the results may not be what is expected, since MpNetwork is designed with
 * the assumption that all MpNetwork objects belong to the same physical
 * neural net.
 * @see MpNetwork
 * @author Matt Grover
 */
class Network {
public:
    //friend void Neuron::SendSpike(AmTimeInt& now);
    friend class Neuron;
    friend void MpSpikeInput::ReadInputBuffer();

    Network();
    virtual ~Network();

    /**
     * Schedule an event. Only SPIKE and INPUTSPIKE are currently used
     * for the eventType.
     * @param eventType The action that the neuron should carry out.
     * @param eventTime Time that the event should be triggered.
     * @param reqNrn Pointer to the neuron that will execute the event. */
    void ScheduleNEvent(NEvent eventType,
                        AmTimeInt eventTime,
                        Neuron* reqNrn);

    /**
     * Schedule an event using the Neuron's ID instead of a pointer to
     * the Neuron.
     * @param eventType The action that the neuron should carry out.
     * @param eventTime Time that the event should be triggered.
     * @param reqNrnId ID of the neuron that will execute the event. */
    void ScheduleNEvent(NEvent eventType,
                        AmTimeInt eventTime,
                        AmIdInt reqNrnId);

    /** Add a neuron to the network with a pre-determined neuron ID.
     * @param nRole One of INPUTNEURON, HIDDENNEURON, or OUTPUTNEURON.
     * @param nId The desired neuron ID.
     * @return A pointer to the new neuron on success. Null pointer
     * on failure. */
    Neuron* AddNeuron(NeuronRole nRole, AmIdInt nId);

    /** Add a Neuron to the network.
     * @param nRole One of INPUTNEURON, HIDDENNEURON, or OUTPUTNEURON.
     * @param nrn Pointer to a Neuron. Network assumes ownership
     * of the pointer once it has been added to the network.
     * @return A pointer to the neuron is returned on success, a null
     * pointer on failure. */
    Neuron* AddNeuron(NeuronRole nRole, Neuron* nrn);

    /** Connect two neurons together.
     * @param preSynapticNeuron The pre-synaptic (originating) neuron ID.
     * @param postSynapticNeuron The post-synaptic (receiving) neuron ID.
     * @param weight A weight value in the range [-1, 1].
     * @param delay The spike transmission delay in microseconds. Delay values
     * will be rounded to the nearest whole multiple of the time step size.
     * @return True on success. */
    bool ConnectNeurons(AmIdInt preSynapticNeuron,
                        AmIdInt postSynapticNeuron,
                        float weight,
                        AmTimeInt delay=0);

    /** Connect two neurons together.
     * @param preSynapticNeuron The pre-synaptic (originating) neuron pointer.
     * @param postSynapticNeuron The post-synaptic (receiving) neuron pointer.
     * @param weight A weight value in the range [-1, 1].
     * @param delay The spike transmission delay in microseconds. Delay values
     * will be rounded to the nearest whole multiple of the time step size.
     * @return True on success. */
    bool ConnectNeurons(Neuron* preSynapticNeuron,
                        Neuron* postSynapticNeuron,
                        float weight,
                        AmTimeInt delay=0);

    /** Run the simulation. The simulation will continue
     * until either the event queues are completely empty
     * or the network has run for the maximum allotted time. If
     * streaming input or MP mode is in use, the simulation will
     * continue until maxRunTime has been reached even if the event
     * queues are empty.
     * @param maxRunTime Maximum time to run, in microseconds,
     * before returning. */
    void Run(AmTimeInt maxRunTime);

    /** @return Number of neurons in the network. */
    int Size() { return net.size(); }

    /** @return The number of layers in the network. */
    int LayerCount() { return layers.size(); }

    /** Iterator type for parsing through all Neurons
     * contained in the network. */
    typedef hash_map<AmIdInt, Neuron*>::const_iterator const_iterator;

    /** @return The first position of the iterator. */
    const_iterator begin() const { return net.begin(); }

    /** @return The final position of the iterator. */
    const_iterator end() const { return net.end(); }

    /** Iterator type for parsing through all Layers
     * contained in the network. */
    typedef hash_map<unsigned int, Layer*>::iterator layer_iterator;

    /** @return The first position of the layer iterator. */
    layer_iterator layer_begin() { return layers.begin(); }

    /** @return The final position of the layer iterator. */
    layer_iterator layer_end() { return layers.end(); }

    /** Add a layer to the network. The Network assumes ownership of the
     * Layer object and the Neuron objects contained within that Layer.
     * @param newLayer Pointer to the Layer being added. */
    void AddLayer(Layer* newLayer);

    /** @return The largest neuron ID currently in the network. */
    AmIdInt MaxNeuronId() { return (nextNeuronId - 1); }

    /** @return True if the Network contains Layers. */
    bool Layered() { return isLayered; }

    /** Set the size of the simulation time steps. This
     * must be set before Neurons are added.
     * @param stepSize Time step size in microseconds. Defaults
     * to 100 us. */
    static void SetTimeStepSize(AmTimeInt stepSize);

    /** @return The size of the time step in microseconds. */
    static AmTimeInt TimeStepSize() { return simStepSize; }

    /** @return The current simulation time in microseconds. */
    static AmTimeInt SimTime() { return simTime; }

    /** Get a pointer to a specific Neuron.
     * @param nId The neuron ID of the Neuron.
     * @return Pointer to the neuron with ID nId. */
    Neuron* GetNeuron(AmIdInt nId) { return net[nId]; }

    /** @return Pointer to the SpikeInput object that is being used. */
    SpikeInput* GetSpikeInput() { return spikeInput; }

    /** Specify a SpikeInput object for the Network to use.
     * Network creates a SimpleSpikeInput object in the constructor,
     * so there is no need to call this function unless a different
     * SpikeInput class is needed. Network assumes ownership of the
     * pointer, and the existing SpikeInput object is destroyed
     * when a new one is passed in.
     * @param sIn Pointer to the SpikeInput object. */
    void SetSpikeInput(SpikeInput* sIn);

    /** @return True if streaming input is in use. */
    bool StreamingInput() { return streamingInput; }

    /** Toggle the streaming input state.
     * @param streaming A value of true turns on streaming
     * input. Streaming input should be used if the input spike queue
     * cannot be filled before calling Network::Run(). */
    void StreamingInput(bool streaming) { streamingInput = streaming; }

    /** Reset the simulation time to 0. */
    static void ResetSimTime();

    /** Toggle the training mode.
     * @param tMode True if training should be enabled. */
    void SetTrainingMode(bool tMode);

    /** @return True if training is enabled. */
    bool GetTrainingMode() const { return trainingMode; }

    /** @return A pointer to the Layer with ID layerId. */
    Layer* GetLayer(AmIdInt layerId);

    /** Turn on spike batching (send all spikes to a neuron in
     * a group, rather than one at a time). This is turned on
     * automatically if spike delays are used and cannot be
     * turned off again once this function has been called. */
    void EnableSpikeBatching() { spikeDelaysOn = true; }

protected:
    /** Schedule the transmission of a spike down an axon. This
     *  may be done in order to implement spike batching or to
     *  model transmission delays. This is normally called from
     *  Neuron.
     *  @param axon The axon vector from a Neuron. A spike will
     *  be scheduled to cross each Synapse after the delay time has
     *  passed. Delay times are stored in Synapse and set when
     *  Neurons are connected together.
     *  @see Neuron::SendSpike(), Network::ConnectNeurons(). */
    void ScheduleSpikeDelay(vector<Synapse*>& axon);

    /** Increment the simTime variable to the next time step.
     * In multithreaded mode, simTime is not incremented until
     * all Nodes have called IncrementSimTime(). */
    virtual void IncrementSimTime();

    int pspLSize;                   // Size of lookup tables
    int pspLRes;                    // Lookup table timestep resolution
    static AmTimeInt simStepSize;
    int netSize;
    bool streamingInput;
    AmIdInt nextNeuronId;
    unsigned int nextLayerId;
    FunctionLookup* functionRef;    // Container for lookup tables
    SpikeInput* spikeInput;
    hash_map<AmIdInt, Neuron*> net;        // Neurons that make up the net. Key is neuron ID.
    hash_map<unsigned int, Layer*> layers; // Layer container

    /** SpikeRequest is used to keep track of scheduled spikes in the
     * event queue. The priority_queue from the STL ranks entries
     * based on the < operator (defined below). The ranking will be
     * in order of spikeTime, requestTime, and requestOrder. */
    struct SpikeRequest {
        AmTimeInt spikeTime;        // Desired time of spike
        AmTimeInt requestTime;      // Time SpikeRequest was entered in queue
        unsigned int requestOrder;  // Entry number within a given time step
        Neuron* requestor;          // Neuron scheduling the spike

        // operator< overloaded to make the priority_queue happy
        bool operator<(const SpikeRequest& sr) const
        {
            if (spikeTime != sr.spikeTime) {
                return spikeTime > sr.spikeTime;
            }
            else if (requestTime != sr.requestTime) {
                return requestTime > sr.requestTime;
            }
            else {
                return requestOrder < sr.requestOrder;
            }
        }
    };

    priority_queue<SpikeRequest> eventQ;    // Main event queue
    priority_queue<SpikeRequest> inputQ;    // Queue of inputs into the network
    vector< vector<Synapse*> > delayedSpikeQ;
    AmTimeInt currSpikeDelayOffset;
    AmTimeInt maxOffset;
    bool spikeDelaysOn;
    AmTimeInt maxSpikeDelay;
    Synapse* maxSpikeDelaySyn;
    static AmTimeInt simTime;               // Current simulation time
    static unsigned int runCount;           // Number of Network objects calling Run()
    unsigned int spikeBatchCount;

    inline void IncrementDelayOffset();

private:
    /** Call Neuron::InputSpike() for each delayed spike
     *  scheduled for the next offset. This function will
     *  increment currSpikeDelayOffset. */
    void SendDelayedSpikes();

    /** Set default network parameters. */
    void SetDefaults();

    /** Initialize the delayed spike queue. */
    void InitializeDelayedSpikeQ();

    unsigned int eventRequestCount;         // Counter for SpikeRequest.requestOrder
    bool isLayered;
    bool trainingMode;
};

#endif // NETWORK_H
