
network.c

/* network.c -- lightweight backpropagation neural network, version 0.8
 * copyright (c) 1999-2005 Peter van Rossum <petervr@users.sourceforge.net>
 * released under the GNU Lesser General Public License
 * $Id: network.c,v 1.41 2005/07/28 18:19:44 petervr Exp $
 */

/*!\file network.c
 * Lightweight backpropagation neural network.
 *
 * This is a lightweight library implementing a neural network for use
 * in C and C++ programs. It is intended for use in applications that
 * just happen to need a simple neural network and do not want to use
 * needlessly complex neural network libraries. It features multilayer
 * feedforward perceptron neural networks, sigmoidal activation function
 * with bias, backpropagation training with settable learning rate and
 * momentum, and backpropagation training in batches.
 */

#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <malloc.h>
#include <math.h>

#include "lwneuralnet.h"

/****************************************
 * Compile-time options
 ****************************************/

#define DEFAULT_MOMENTUM 0.1
#define DEFAULT_LEARNING_RATE 0.25
#define DEFAULT_WEIGHT_RANGE 1.0

/****************************************
 * Initialization
 ****************************************/

/*!\brief Assign random values to all weights in the network.
 * \param net Pointer to neural network.
 * \param range Floating point number.
 *
 * All weights in the neural network are assigned a random value
 * from the interval [-range, range].
 */
void
net_randomize (network_t *net, float range)
{
  int l, nu, nl;

  assert (net != NULL);
  assert (range >= 0.0);

  for (l = 1; l < net->no_of_layers; l++) {
    for (nu = 0; nu < net->layer[l].no_of_neurons; nu++) {
      for (nl = 0; nl <= net->layer[l - 1].no_of_neurons; nl++) {
        net->layer[l].neuron[nu].weight[nl] =
          2.0 * range * ((float) random () / RAND_MAX - 0.5);
      }
    }
  }
}

#if 0
/*!\brief Set weights of the network to 0.
 * \param net Pointer to neural network.
 */
static void
net_reset_weights (network_t *net)
{
  int l, nu, nl;

  assert (net != NULL);

  for (l = 1; l < net->no_of_layers; l++) {
    for (nu = 0; nu < net->layer[l].no_of_neurons; nu++) {
      for (nl = 0; nl <= net->layer[l - 1].no_of_neurons; nl++) {
        net->layer[l].neuron[nu].weight[nl] = 0.0;
      }
    }
  }
}
#endif

/*!\brief Set deltas of the network to 0.
 * \param net Pointer to neural network.
 */
void
net_reset_deltas (network_t *net)
{
  int l, nu, nl;

  assert (net != NULL);

  for (l = 1; l < net->no_of_layers; l++) {
    for (nu = 0; nu < net->layer[l].no_of_neurons; nu++) {
      for (nl = 0; nl <= net->layer[l - 1].no_of_neurons; nl++) {
        net->layer[l].neuron[nu].delta[nl] = 0.0;
      }
    }
  }
}

/*!\brief Enable or disable use of bias.
 * \param net Pointer to neural network.
 * \param flag Boolean.
 *
 * Disable use of bias if flag is zero; enable otherwise.
 * By default, bias is enabled.
 */
void
net_use_bias (network_t *net, int flag)
{
  int l;

  assert (net != NULL);

  if (flag != 0) {
    /* permanently set output of bias neurons to 1 */
    for (l = 0; l < net->no_of_layers; l++) {
      net->layer[l].neuron[net->layer[l].no_of_neurons].output = 1.0;
    }
  } else {
    /* permanently set output of bias neurons to 0 */
    for (l = 0; l < net->no_of_layers; l++) {
      net->layer[l].neuron[net->layer[l].no_of_neurons].output = 0.0;
    }
  }
}

/****************************************
 * Memory Management
 ****************************************/
/*!\brief [Internal] Allocate memory for the neurons in a layer of a network.
 * \param layer Pointer to layer of a neural network.
 * \param no_of_neurons Integer.
 *
 * Allocate memory for a list of no_of_neurons + 1 neurons in the specified
 * layer. The extra neuron is used for the bias.
 */
static void
allocate_layer (layer_t *layer, int no_of_neurons)
{
  assert (layer != NULL);
  assert (no_of_neurons > 0);

  layer->no_of_neurons = no_of_neurons;
  layer->neuron = (neuron_t *) calloc (no_of_neurons + 1, sizeof (neuron_t));
}

/*!\brief [Internal] Allocate memory for the weights connecting two layers.
 * \param lower Pointer to one layer of a neural network.
 * \param upper Pointer to the next layer of a neural network.
 *
 * Allocate memory for the weights connecting two layers of a neural
 * network. The neurons in these layers should previously have been
 * allocated with allocate_layer().
 */
static void
allocate_weights (layer_t *lower, layer_t *upper)
{
  int n;

  assert (lower != NULL);
  assert (upper != NULL);

  for (n = 0; n < upper->no_of_neurons; n++) {
    upper->neuron[n].weight =
      (float *) calloc (lower->no_of_neurons + 1, sizeof (float));
    upper->neuron[n].delta =
      (float *) calloc (lower->no_of_neurons + 1, sizeof (float));
  }
  /* no incoming weights for bias neurons */
  upper->neuron[n].weight = NULL;
  upper->neuron[n].delta = NULL;
}

/*!\brief Allocate memory for a network.
 * \param no_of_layers Integer.
 * \param arglist Pointer to sequence of integers.
 * \return Pointer to newly allocated network.
 *
 * Allocate memory for a neural network with no_of_layers layers,
 * including the input and output layer. The number of neurons in each
 * layer is given in arglist, with arglist[0] being the number of
 * neurons in the input layer and arglist[no_of_layers-1] the number of
 * neurons in the output layer.
 */
network_t *
net_allocate_l (int no_of_layers, const int *arglist)
{
  int l;
  network_t *net;

  assert (no_of_layers >= 2);
  assert (arglist != NULL);

  /* allocate memory for the network */
  net = (network_t *) malloc (sizeof (network_t));
  net->no_of_layers = no_of_layers;
  net->layer = (layer_t *) calloc (no_of_layers, sizeof (layer_t));
  for (l = 0; l < no_of_layers; l++) {
    assert (arglist[l] > 0);
    allocate_layer (&net->layer[l], arglist[l]);
  }
  for (l = 1; l < no_of_layers; l++) {
    allocate_weights (&net->layer[l - 1], &net->layer[l]);
  }

  /* abbreviations for input and output layer */
  net->input_layer = &net->layer[0];
  net->output_layer = &net->layer[no_of_layers - 1];

  /* default values for network constants */
  net->momentum = DEFAULT_MOMENTUM;
  net->learning_rate = DEFAULT_LEARNING_RATE;

  /* initialize weights and deltas */
  net_randomize (net, DEFAULT_WEIGHT_RANGE);
  net_reset_deltas (net);

  /* permanently set output of bias neurons to 1 */
  net_use_bias (net, 1);

  return net;
}
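/* Usage sketch, illustrative only and not part of the library source:
 * net_allocate_l() is the variant to use when the layer sizes are only
 * known at run time. The 2-3-1 topology below is an arbitrary assumption.
 *
 *   int sizes[3] = { 2, 3, 1 };    -- input, hidden, and output layer sizes
 *   network_t *net = net_allocate_l (3, sizes);
 *   ...                            -- use the network
 *   net_free (net);                -- net_free() is defined further down
 */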
/*!\brief Allocate memory for a network.
 * \param no_of_layers Integer.
 * \param ... Sequence of integers.
 * \return Pointer to newly allocated network.
 *
 * Allocate memory for a neural network with no_of_layers layers,
 * including the input and output layer. The number of neurons in each
 * layer is given as ..., starting with the input layer and ending with
 * the output layer.
 */
network_t *
net_allocate (int no_of_layers, ...)
{
  int l, *arglist;
  va_list args;
  network_t *net;

  assert (no_of_layers >= 2);

  arglist = calloc (no_of_layers, sizeof (int));
  va_start (args, no_of_layers);
  for (l = 0; l < no_of_layers; l++) {
    arglist[l] = va_arg (args, int);
  }
  va_end (args);
  net = net_allocate_l (no_of_layers, arglist);
  free (arglist);

  return net;
}

/*!\brief Free memory allocated for a network.
 * \param net Pointer to a neural network.
 */
void
net_free (network_t *net)
{
  int l, n;

  assert (net != NULL);

  for (l = 0; l < net->no_of_layers; l++) {
    if (l != 0) {
      for (n = 0; n < net->layer[l].no_of_neurons; n++) {
        free (net->layer[l].neuron[n].weight);
        free (net->layer[l].neuron[n].delta);
      }
    }
    free (net->layer[l].neuron);
  }
  free (net->layer);
  free (net);
}

/****************************************
 * Access
 ****************************************/

/*!\brief Change the momentum of a network.
 * \param net Pointer to a neural network.
 * \param momentum Floating point number.
 */
void
net_set_momentum (network_t *net, float momentum)
{
  assert (net != NULL);
  assert (momentum >= 0.0);

  net->momentum = momentum;
}

/*!\brief Retrieve the momentum of a network.
 * \param net Pointer to a neural network.
 * \return Momentum of the neural network.
 */
float
net_get_momentum (const network_t *net)
{
  assert (net != NULL);
  assert (net->momentum >= 0.0);

  return net->momentum;
}

/*!\brief Change the learning rate of a network.
 * \param net Pointer to a neural network.
 * \param learning_rate Floating point number.
 */
void
net_set_learning_rate (network_t *net, float learning_rate)
{
  assert (net != NULL);
  assert (learning_rate >= 0.0);

  net->learning_rate = learning_rate;
}

/*!\brief Retrieve the learning rate of a network.
 * \param net Pointer to a neural network.
 * \return Learning rate of the neural network.
 */
float
net_get_learning_rate (const network_t *net)
{
  assert (net != NULL);

  return net->learning_rate;
}

/*!\brief Retrieve the number of inputs of a network.
 * \param net Pointer to a neural network.
 * \return Number of neurons in the input layer of the neural network.
 */
int
net_get_no_of_inputs (const network_t *net)
{
  assert (net != NULL);

  return net->input_layer->no_of_neurons;
}

/*!\brief Retrieve the number of outputs of a network.
 * \param net Pointer to a neural network.
 * \return Number of neurons in the output layer of the neural network.
 */
int
net_get_no_of_outputs (const network_t *net)
{
  assert (net != NULL);

  return net->output_layer->no_of_neurons;
}

/*!\brief Retrieve the number of layers of a network.
 * \param net Pointer to a neural network.
 * \return Number of layers, including the input and output layers, of the
 * neural network.
 */
int
net_get_no_of_layers (const network_t *net)
{
  assert (net != NULL);

  return net->no_of_layers;
}

/*!\brief Retrieve the number of weights of a network.
 * \param net Pointer to a neural network.
 * \return The total number of weights in the neural network.
 */
int
net_get_no_of_weights (const network_t *net)
{
  int l, result;

  assert (net != NULL);

  result = 0;
  for (l = 1; l < net->no_of_layers; l++) {
    result += (net->layer[l - 1].no_of_neurons + 1)
              * net->layer[l].no_of_neurons;
  }

  return result;
}
/*!\brief Set a weight of a network.
 * \param net Pointer to a neural network.
 * \param l Number of lower layer.
 * \param nl Number of neuron in the lower layer.
 * \param nu Number of neuron in the next layer.
 * \param weight Floating point number.
 *
 * The weight connecting the neuron numbered nl in the layer
 * numbered l with the neuron numbered nu in the layer numbered l+1
 * is set to weight.
 */
void
net_set_weight (network_t *net, int l, int nl, int nu, float weight)
{
  assert (net != NULL);
  assert (0 <= l && l < net->no_of_layers - 1);
  assert (0 <= nl && nl <= net->layer[l].no_of_neurons);
  assert (0 <= nu && nu < net->layer[l + 1].no_of_neurons);

  net->layer[l + 1].neuron[nu].weight[nl] = weight;
}

/*!\brief Retrieve a weight of a network.
 * \param net Pointer to a neural network.
 * \param l Number of lower layer.
 * \param nl Number of neuron in the lower layer.
 * \param nu Number of neuron in the next layer.
 * \return Weight connecting the neuron numbered nl in the layer
 * numbered l with the neuron numbered nu in the layer numbered l+1.
 */
float
net_get_weight (const network_t *net, int l, int nl, int nu)
{
  assert (net != NULL);
  assert (0 <= l && l < net->no_of_layers - 1);
  assert (0 <= nl && nl <= net->layer[l].no_of_neurons);
  assert (0 <= nu && nu < net->layer[l + 1].no_of_neurons);

  return net->layer[l + 1].neuron[nu].weight[nl];
}

/*!\brief Retrieve a bias weight of a network.
 * \param net Pointer to a neural network.
 * \param l Number of layer.
 * \param nu Number of neuron in the layer.
 * \return Bias weight of the neuron numbered nu in the layer numbered l.
 *
 * [internal] Bias is implemented by having an extra neuron in every
 * layer. The output of this neuron is permanently set to 1. The bias
 * weight returned by this routine is simply the weight from this extra
 * neuron in the layer numbered l-1 to the neuron numbered nu in the
 * layer numbered l.
 */
float
net_get_bias (const network_t *net, int l, int nu)
{
  assert (net != NULL);
  assert (0 < l && l < net->no_of_layers);
  assert (0 <= nu && nu < net->layer[l].no_of_neurons);

  return net_get_weight (net, l - 1, net->layer[l - 1].no_of_neurons, nu);
}

/*!\brief Set a bias weight of a network.
 * \param net Pointer to a neural network.
 * \param l Number of layer.
 * \param nu Number of neuron in the layer.
 * \param weight Floating point number.
 *
 * Set the bias weight of the neuron numbered nu in the layer numbered l.
 *
 * [internal] Bias is implemented by having an extra neuron in every
 * layer. The output of this neuron is permanently set to 1. This
 * routine simply sets the weight from this extra neuron in the
 * layer numbered l-1 to the neuron numbered nu in the layer numbered l.
 */
void
net_set_bias (network_t *net, int l, int nu, float weight)
{
  assert (net != NULL);
  assert (0 < l && l < net->no_of_layers);
  assert (0 <= nu && nu < net->layer[l].no_of_neurons);

  net_set_weight (net, l - 1, net->layer[l - 1].no_of_neurons, nu, weight);
}
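The allocation and access routines above are enough for a small end-to-end example. The following sketch is illustrative only: the 2-3-1 topology and the chosen learning rate and momentum are arbitrary assumptions, and it calls no functions beyond those defined in this listing.

  #include <stdio.h>
  #include "lwneuralnet.h"

  int main (void)
  {
    /* illustrative 2-3-1 topology: 2 inputs, 3 hidden neurons, 1 output */
    network_t *net = net_allocate (3, 2, 3, 1);

    /* move the training constants away from their defaults */
    net_set_learning_rate (net, 0.5);
    net_set_momentum (net, 0.9);

    /* weight count includes bias weights: (2+1)*3 + (3+1)*1 = 13 */
    printf ("inputs %d, outputs %d, weights %d\n",
            net_get_no_of_inputs (net),
            net_get_no_of_outputs (net),
            net_get_no_of_weights (net));

    /* weight from input neuron 0 to hidden neuron 1, and that neuron's bias */
    printf ("weight %f, bias %f\n",
            net_get_weight (net, 0, 0, 1),
            net_get_bias (net, 1, 1));

    net_free (net);
    return 0;
  }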
