
fann.xml

A powerful neural network analysis program
}]]>
	</programlisting>
      </example>
    </section>
    <section id="fixed.precision">
      <title id="fixed.precision.title">Precision of a Fixed Point ANN</title>
      <para>
	The fixed point ANN is not as precise as a floating point ANN, and it furthermore approximates the sigmoid
	function by a stepwise linear function. Therefore, it is always a good idea to test the fixed point ANN after
	loading it from a file. This can be done by calculating the mean square error as described
	<link linkend="example.calc_mse">earlier</link>. There is, however, one problem with this approach: the
	training data stored in the file is in a floating point format. For this reason, the training data can be
	saved in a fixed point format from within the floating point program. This is done by the function
	<link linkend="api.fann_save_train_to_fixed"><function>fann_save_train_to_fixed</function></link>. Please note
	that this function takes the decimal point as an argument, meaning that the decimal point should be calculated
	first by using the
	<link linkend="api.fann_save_to_fixed"><function>fann_save_to_fixed</function></link> function.
      </para>
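      <para>
	The following sketch shows one way the floating point program could do this. The file names are placeholders,
	and the loading calls (<function>fann_create_from_file</function> and
	<function>fann_read_train_from_file</function>) are the usual FANN functions for reading an ANN and training
	data from a file.
      </para>
      <example id="example.fixed.save_train">
	<title id="example.fixed.save_train.title">Saving training data in fixed point (sketch)</title>
	<programlisting><![CDATA[#include "fann.h"

int main()
{
	/* placeholder file names; adjust to your own setup */
	struct fann *ann = fann_create_from_file("float.net");
	struct fann_train_data *data = fann_read_train_from_file("train.data");

	/* fann_save_to_fixed returns the decimal point chosen for the ANN */
	unsigned int decimal_point = fann_save_to_fixed(ann, "fixed.net");

	/* save the training data with the same decimal point */
	fann_save_train_to_fixed(data, "fixed_train.data", decimal_point);

	fann_destroy_train(data);
	fann_destroy(ann);
	return 0;
}]]>
	</programlisting>
      </example>
      <para>
	The fixed point program can then load <filename>fixed.net</filename> and <filename>fixed_train.data</filename>
	and calculate the mean square error as described <link linkend="example.calc_mse">earlier</link>.
      </para>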
    </section>
  </chapter>
  <chapter id="theory">
    <title id="theory.title">Neural Network Theory</title>
    <para>
      This section will briefly explain the theory of neural networks (hereafter known as NN) and artificial neural
      networks (hereafter known as ANN). For a more in depth explanation of these concepts please consult the
      literature; [<xref linkend="bib.hassoun_1995" endterm="bib.hassoun_1995.abbrev" />] has good coverage of most
      concepts of ANN and [<xref linkend="bib.hertz_1991" endterm="bib.hertz_1991.abbrev" />] describes the mathematics
      of ANN very thoroughly, while [<xref linkend="bib.anderson_1995" endterm="bib.anderson_1995.abbrev" />] has a
      more psychological and physiological approach to NN and ANN. For the pragmatic reader I (SN) can recommend
      [<xref linkend="bib.tettamanzi_2001" endterm="bib.tettamanzi_2001.abbrev" />], which has a short and easily
      understandable introduction to NN and ANN.
    </para>
    <section id="theory.neural_networks">
      <title id="theory.neural_networks.title">Neural Networks</title>
      <para>
        The human brain is a highly complicated machine capable of solving very complex problems. Although we have a
        good understanding of some of the basic operations that drive the brain, we are still far from understanding
        everything there is to know about it.
      </para>
      <para>
        In order to understand ANN, you will need a basic knowledge of how the internals of the brain work. The brain
        is part of the central nervous system and consists of a very large NN. The NN is actually quite complicated,
        so the following discussion is limited to the details needed to understand ANN.
      </para>
      <para>
        The NN is a network of connected neurons. The center of a neuron is called the nucleus. The nucleus is
        connected to other nuclei by means of the dendrites and the axon. This connection is called a synaptic
        connection.
      </para>
      <para>
        The neuron can fire electric pulses through its synaptic connections, which are received at the dendrites of
        other neurons.
      </para>
      <para>
        When a neuron receives enough electric pulses through its dendrites, it activates and fires a pulse through
        its axon, which is then received by other neurons. In this way information can propagate through the NN. The
        synaptic connections change throughout the lifetime of a neuron, and the amount of incoming pulses needed to
        activate a neuron (the threshold) also changes. This behavior allows the NN to learn.
      </para>
      <para>
        The human brain consists of around 10<superscript>11</superscript> neurons which are highly interconnected
        with around 10<superscript>15</superscript> connections
        [<xref linkend="bib.tettamanzi_2001" endterm="bib.tettamanzi_2001.abbrev" />]. These neurons activate in
        parallel in response to internal and external stimuli. The brain is connected to the rest of the nervous
        system, which allows it to receive information by means of the five senses and also allows it to control the
        muscles.
      </para>
    </section>
    <section id="theory.artificial_neural_networks">
      <title id="theory.artificial_neural_networks.title">Artificial Neural Networks</title>
      <para>
        It is not possible (at the moment) to make an artificial brain, but it is possible to make simplified
        artificial neurons and artificial neural networks. These ANNs can be made in many different ways and can try
        to mimic the brain in many different ways.
      </para>
      <para>
        ANNs are not intelligent, but they are good at recognizing patterns and at deriving simple rules for complex
        problems. They also have excellent training capabilities, which is why they are often used in artificial
        intelligence research.
      </para>
      <para>
        ANNs are good at generalizing from a set of training data. This means, for example, that an ANN given data
        about a set of animals, together with a fact telling whether or not each of them is a mammal, is able to
        predict whether an animal outside the original set is a mammal from its data. This is a very desirable
        feature of ANNs, because you do not need to know the characteristics defining a mammal; the ANN will find
        them out by itself.
      </para>
    </section>
    <section id="theory.training">
      <title id="theory.training.title">Training an ANN</title>
      <para>
        When training an ANN with a set of input and output data, we wish to adjust the weights in the ANN to make
        the ANN give the same outputs as seen in the training data. On the other hand, we do not want to make the ANN
        too specific, making it give precise results for the training data but incorrect results for all other data.
        When this happens, we say that the ANN has been over-fitted.
      </para>
      <para>
        The training process can be seen as an optimization problem, where we wish to minimize the mean square error
        of the entire set of training data. This problem can be solved in many different ways, ranging from standard
        optimization heuristics like simulated annealing, through more specialized optimization techniques like
        genetic algorithms, to gradient descent algorithms devised specifically for ANNs, like backpropagation.
      </para>
      <para>
        The most used algorithm is the backpropagation algorithm, but this algorithm has some limitations concerning
        the extent of adjustment to the weights in each iteration. This problem has been solved in more advanced
        algorithms like RPROP [<xref linkend="bib.riedmiller_1993" endterm="bib.riedmiller_1993.abbrev" />] and
        quickprop [<xref linkend="bib.fahlman_1988" endterm="bib.fahlman_1988.abbrev" />].
      </para>
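      <para>
	The short sketch below shows what this looks like in practice with FANN:
	<function>fann_train_on_file</function> adjusts the weights with the library's training algorithm until the
	mean square error drops below a desired value. The training file <filename>xor.data</filename> and the output
	file name are placeholders.
      </para>
      <example id="example.theory.training">
	<title id="example.theory.training.title">Training an ANN on a data file (sketch)</title>
	<programlisting><![CDATA[#include "fann.h"

int main()
{
	/* fully connected net: 2 inputs, one hidden layer of 3 neurons,
	   1 output, learning rate 0.7 */
	struct fann *ann = fann_create(1.0f, 0.7f, 3, 2, 3, 1);

	/* adjust the weights until the mean square error drops below 0.001,
	   for at most 100000 epochs, reporting every 1000 epochs */
	fann_train_on_file(ann, "xor.data", 100000, 1000, 0.001f);

	fann_save(ann, "trained.net");
	fann_destroy(ann);
	return 0;
}]]>
	</programlisting>
      </example>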
    </section>
  </chapter>
  <chapter id="api">
    <title id="api.title">API Reference</title>
    <para>This is a list of all functions and structures in FANN.</para>
    <section id="api.sec.create_destroy">
      <title id="api.sec.create_destroy.title">Creation, Destruction, and Execution</title>
      <refentry id="api.fann_create">
        <refnamediv>
          <refname>fann_create</refname>
          <refpurpose>Create a new artificial neural network, and return a pointer to it.</refpurpose>
        </refnamediv>
        <refsect1>
          <title>Description</title>
          <methodsynopsis>
            <type>struct fann *</type>
            <methodname>fann_create</methodname>
            <methodparam>
              <type>float</type>
              <parameter>connection_rate</parameter>
            </methodparam>
            <methodparam>
              <type>float</type>
              <parameter>learning_rate</parameter>
            </methodparam>
            <methodparam>
              <type>unsigned int</type>
              <parameter>num_layers</parameter>
            </methodparam>
            <methodparam>
              <type>unsigned int</type>
              <parameter>...</parameter>
            </methodparam>
          </methodsynopsis>
          <para>
            <function>fann_create</function> will create a new artificial neural network, and return a pointer to it.
            The <parameter>connection_rate</parameter> controls how many connections there will be in the network. If
            the connection rate is set to 1, the network will be fully connected, but if it is set to 0.5 only half
            of the connections will be set.
          </para>
          <para>
            The <parameter>num_layers</parameter> is the number of layers including the input and output layer. This
            parameter is followed by one parameter for each layer telling how many neurons there should be in the
            layer.
          </para>
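          <para>
            The sketch below illustrates the effect of <parameter>connection_rate</parameter>: the same 2-3-1
            topology is created fully connected and sparsely connected. The learning rate of 0.7 is an arbitrary
            choice for illustration.
          </para>
          <example id="example.api.fann_create">
            <title id="example.api.fann_create.title"><function>fann_create</function> example</title>
            <programlisting><![CDATA[/* fully connected: connection_rate 1.0 */
struct fann * full = fann_create(1.0f, 0.7f, 3, 2, 3, 1);

/* sparsely connected: only about half of the possible connections are set */
struct fann * sparse = fann_create(0.5f, 0.7f, 3, 2, 3, 1);

fann_destroy(full);
fann_destroy(sparse);]]>
            </programlisting>
          </example>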
          <para>This function appears in FANN &gt;= 1.0.0.</para>
        </refsect1>
      </refentry>
      <refentry id="api.fann_create_array">
        <refnamediv>
          <refname>fann_create_array</refname>
          <refpurpose>Create a new artificial neural network, and return a pointer to it.</refpurpose>
        </refnamediv>
        <refsect1>
          <title>Description</title>
          <methodsynopsis>
            <type>struct fann *</type>
            <methodname>fann_create_array</methodname>
            <methodparam>
              <type>float</type>
              <parameter>connection_rate</parameter>
            </methodparam>
            <methodparam>
              <type>float</type>
              <parameter>learning_rate</parameter>
            </methodparam>
            <methodparam>
              <type>unsigned int</type>
              <parameter>num_layers</parameter>
            </methodparam>
            <methodparam>
              <type>unsigned int *</type>
              <parameter>neurons_per_layer</parameter>
            </methodparam>
          </methodsynopsis>
          <para>
            <function>fann_create_array</function> will create a new artificial neural network, and return a pointer
            to it. It is the same as <function>fann_create</function>, only it accepts an array as its final
            parameter instead of variable arguments.
          </para>
          <para>
            <example id="example.api.fann_create_array">
              <title id="example.api.fann_create_array.title"><function>fann_create_array</function> example</title>
              <programlisting><![CDATA[unsigned int neurons_per_layer[3] = {2, 3, 1};

// The following two calls have identical results
struct fann * ann = fann_create_array(1.0f, 0.7f, 3, neurons_per_layer);
struct fann * ann2 = fann_create(1.0f, 0.7f, 3, 2, 3, 1);

fann_destroy(ann);
fann_destroy(ann2);]]>
              </programlisting>
            </example>
          </para>
          <para>This function appears in FANN &gt;= 1.0.5.</para>
        </refsect1>
      </refentry>
      <refentry id="api.fann_create_shortcut">
        <refnamediv>
          <refname>fann_create_shortcut</refname>
          <refpurpose>Create a new artificial neural network with shortcut connections, and return a pointer to it.</refpurpose>
        </refnamediv>
        <refsect1>
          <title>Description</title>
          <methodsynopsis>
            <type>struct fann *</type>
            <methodname>fann_create_shortcut</methodname>
            <methodparam>
              <type>float</type>
              <parameter>learning_rate</parameter>
            </methodparam>
            <methodparam>
              <type>unsigned int</type>
              <parameter>num_layers</parameter>
            </methodparam>
            <methodparam>
              <type>unsigned int</type>
              <parameter>...</parameter>
            </methodparam>
          </methodsynopsis>
          <para>
            <function>fann_create_shortcut</function> will create a new artificial neural network, and return a
            pointer to it. The network will be fully connected, and will furthermore have all shortcut connections
            connected.
          </para>
          <para>
            Shortcut connections are connections that skip layers. A fully connected network with shortcut
            connections is a network where all neurons are connected to all neurons in later layers, including direct
            connections from the input layer to the output layer.
          </para>
          <para>
            The <parameter>num_layers</parameter> is the number of layers including the input and output layer. This
            parameter is followed by one parameter for each layer telling how many neurons there should be in the
            layer.
          </para>
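          <para>
            The sketch below builds the same 2-3-1 topology as the <function>fann_create_array</function> example
            above, but with shortcut connections. The learning rate of 0.7 is an arbitrary choice for illustration.
          </para>
          <example id="example.api.fann_create_shortcut">
            <title id="example.api.fann_create_shortcut.title"><function>fann_create_shortcut</function> example</title>
            <programlisting><![CDATA[// 2 inputs, 3 hidden neurons, 1 output, learning rate 0.7;
// every neuron is also connected directly to all neurons in later layers
struct fann * ann = fann_create_shortcut(0.7f, 3, 2, 3, 1);

fann_destroy(ann);]]>
            </programlisting>
          </example>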
          <para>This function appears in FANN &gt;= 1.2.0.</para>
        </refsect1>
      </refentry>
      <refentry id="api.fann_create_shortcut_array">
        <refnamediv>
          <refname>fann_create_shortcut_array</refname>
          <refpurpose>Create a new artificial neural network with shortcut connections, and return a pointer to it.</refpurpose>
        </refnamediv>
        <refsect1>
          <title>Description</title>
          <methodsynopsis>
            <type>struct fann *</type>
            <methodname>fann_create_shortcut_array</methodname>
            <methodparam>
              <type>float</type>
              <parameter>learning_rate</parameter>
            </methodparam>
            <methodparam>
              <type>unsigned int</type>
              <parameter>num_layers</parameter>
            </methodparam>
            <methodparam>
              <type>unsigned int *</type>
              <parameter>neurons_per_layer</parameter>
            </methodparam>
          </methodsynopsis>
          <para>
            <function>fann_create_shortcut_array</function> will create a new artificial neural network, and return a
            pointer to it. It is the same as <function>fann_create_shortcut</function>, only it accepts an array as
            its final parameter instead of variable arguments.
          </para>
          <para>This function appears in FANN &gt;= 1.2.0.</para>
        </refsect1>
      </refentry>
      <refentry id="api.fann_destroy">
        <refnamediv>
          <refname>fann_destroy</refname>
          <refpurpose>Destroy an ANN.</refpurpose>
        </refnamediv>
        <refsect1>
          <title>Description</title>
          <methodsynopsis>
            <type>void</type>
            <methodname>fann_destroy</methodname>
            <methodparam>
              <type>struct fann *</type>
              <parameter>ann</parameter>
            </methodparam>
          </methodsynopsis>
