<?xml version='1.0' encoding='iso-8859-1'?>
<!-- $Id: fann.xml,v 1.20 2004/07/06 16:46:44 looksirdroids Exp $ -->
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN" "docbook/xml-dtd-4.1.2/docbookx.dtd">
<book>
  <bookinfo id="bookinfo">
    <title>Fast Artificial Neural Network Library</title>
    <authorgroup id="authors">
      <author>
        <firstname>Steffen</firstname>
        <surname>Nissen</surname>
      </author>
      <author>
        <firstname>Evan</firstname>
        <surname>Nemerson</surname>
      </author>
    </authorgroup>
    <copyright>
      <year>2004</year>
    </copyright>
  </bookinfo>
  <chapter id="intro">
    <title>Introduction</title>
    <para>
      fann - Fast Artificial Neural Network Library is written in ANSI C. The library
      implements multilayer feedforward ANNs, up to 150 times faster than other libraries.
      FANN supports execution in fixed point, for fast execution on systems like the iPAQ.
    </para>
    <section id="intro.dl">
      <title id="intro.dl.title">Getting FANN</title>
      <para>
        Copies of FANN can be obtained from our SourceForge project page, located at
        <ulink url="http://www.sourceforge.net/projects/fann/">http://www.sourceforge.net/projects/fann/</ulink>
      </para>
      <para>
        You can currently get FANN as source code (<filename>fann-*.tar.bz2</filename>),
        Debian packages (<filename>fann-*.deb</filename>), or RPMs
        (<filename>fann-*.rpm</filename>).
      </para>
      <para>
        FANN is available under the terms of the
        <ulink url="http://www.fsf.org/copyleft/lesser.html">GNU Lesser General Public License</ulink>.
      </para>
    </section>
    <section id="intro.install">
      <title>Installation</title>
      <section id="intro.install.rpm">
        <title>RPMs</title>
        <para>
          RPMs are a simple way to manage packages, and are used on many common Linux
          distributions such as <ulink url="http://www.redhat.com">Red Hat</ulink>,
          <ulink url="http://www.mandrake.com/">Mandrake</ulink>, and
          <ulink url="http://www.suse.com/">SuSE</ulink>.
        </para>
        <para>
          Two separate packages exist: fann, the runtime library, and fann-devel, the
          development library and header files.
        </para>
        <para>
          After downloading FANN, simply run (as root) the following command:
          <command>rpm -ivh $PATH_TO_RPM</command>
        </para>
      </section>
      <section id="intro.install.deb">
        <title>DEBs</title>
        <para>
          DEBs are packages for the <ulink url="http://www.debian.org">Debian</ulink> Linux
          distribution. Two separate packages exist: libfann1, the runtime library, and
          libfann1-dev, the development library.
        </para>
        <para>
          FANN is included in the testing distribution of Debian, so testing users can simply
          run (as root) the following command:
          <command>apt-get install libfann1 libfann1-dev</command>
        </para>
        <para>
          After downloading the FANN DEB package, simply run (as root) the following command:
          <command>dpkg -i $PATH_TO_DEB</command>
        </para>
      </section>
      <section id="intro.install.win32">
        <title>Windows</title>
        <para>
          FANN >= 1.1.0 includes a Microsoft Visual C++ 6.0 project file, which can be used
          to compile FANN for Windows. To build the library and examples with MSVC++ 6.0:
        </para>
        <!-- Thanks to Koen Tanghe for this part. -->
        <para>
          First, navigate to the MSVC++ directory in the FANN distribution and open the
          <filename>all.dsw</filename> workspace.
          In the Visual Studio menu bar, choose "Build" -> "Batch build...", select the
          project configurations that you would like to build (by default, all are selected),
          and press "Rebuild All".
        </para>
        <para>
          When the build process is complete, the library and examples can be found in the
          <filename class="directory">MSVC++\Debug</filename> and
          <filename class="directory">MSVC++\Release</filename> directories, and the release
          versions of the examples are automatically copied into the
          <filename class="directory">examples</filename> directory, where they are supposed
          to be run.
        </para>
        <!-- /Koen -->
      </section>
      <section id="intro.install.src">
        <title id="intro.install.src.title">Compiling from source</title>
        <para>
          Compiling FANN from source code follows the standard GNU autotools procedure.
          First, configure the package as you want it by typing (in the FANN directory)
          <command>./configure</command>. If you need help choosing the options you would
          like to use, try <command>./configure --help</command>.
        </para>
        <para>
          Next, you have to actually compile the library. To do this, simply type
          <command>make</command>.
        </para>
        <para>
          Finally, to install the library, type <command>make install</command>. Odds are
          you will have to be root to install, so you may need to <command>su</command> to
          root before installing. Please remember to log out of the root account immediately
          after <command>make install</command> finishes.
        </para>
        <para>
          Some people have experienced problems compiling the library with some compilers,
          especially Windows compilers, which cannot use GNU autotools. Please look through
          the <ulink url="http://sourceforge.net/forum/forum.php?forum_id=323465">help forum</ulink>
          and the <ulink url="http://sourceforge.net/mailarchive/forum.php?forum=fann-general">mailing list</ulink>
          archives for information on how these problems were solved. If you do not find any
          information there, feel free to ask questions.
        </para>
      </section>
    </section>
    <section id="intro.start">
      <title id="intro.start.title">Getting Started</title>
      <para>
        An ANN is normally run in two different modes, a training mode and an execution mode.
        Although it is possible to do this in the same program, using different programs is
        recommended.
      </para>
      <para>
        There are several reasons why it is usually a good idea to write the training and
        execution in two different programs, but the most obvious is the fact that a typical
        ANN system is only trained once, while it is executed many times.
      </para>
      <section id="intro.start.train">
        <title id="intro.start.train.title">Training</title>
        <para>
          The following is a simple program which trains an ANN with a data set and then
          saves the ANN to a file.
        </para>
        <example id="example.simple_train">
          <title id="example.simple_train.title">Simple training example</title>
          <programlisting><![CDATA[
#include "fann.h"

int main()
{
    const float connection_rate = 1;
    const float learning_rate = 0.7;
    const unsigned int num_input = 2;
    const unsigned int num_output = 1;
    const unsigned int num_layers = 3;
    const unsigned int num_neurons_hidden = 4;
    const float desired_error = 0.0001;
    const unsigned int max_iterations = 500000;
    const unsigned int iterations_between_reports = 1000;

    struct fann *ann = fann_create(connection_rate, learning_rate, num_layers,
        num_input, num_neurons_hidden, num_output);

    fann_train_on_file(ann, "xor.data", max_iterations,
        iterations_between_reports, desired_error);

    fann_save(ann, "xor_float.net");

    fann_destroy(ann);

    return 0;
}
]]></programlisting>
        </example>
        <para>
          The file xor.data, used to train the xor function:
          <literallayout class="monospaced" id="file_contents.xor.data">4 2 1
0 0
0
0 1
1
1 0
1
1 1
0</literallayout>
          The first line consists of three numbers: the first is the number of training pairs
          in the file, the second is the number of inputs, and the third is the number of
          outputs. The rest of the file is the actual training data, consisting of one line
          with inputs followed by one line with outputs, and so on.
        </para>
        <para>
          This example introduces several fundamental functions, namely
          <link linkend="api.fann_create"><function>fann_create</function></link>,
          <link linkend="api.fann_train_on_file"><function>fann_train_on_file</function></link>,
          <link linkend="api.fann_save"><function>fann_save</function></link>, and
          <link linkend="api.fann_destroy"><function>fann_destroy</function></link>.
        </para>
      </section>
      <section id="intro.start.execution">
        <title id="intro.start.execution.title">Execution</title>
        <para>
          The following example shows a simple program which executes a single input on the
          ANN. The program introduces two new functions
          (<link linkend="api.fann_create_from_file"><function>fann_create_from_file</function></link>
          and <link linkend="api.fann_run"><function>fann_run</function></link>) which were
          not used in the training procedure, as well as the <type>fann_type</type> type.
        </para>
        <example id="example.simple_exec">
          <title id="example.simple_exec.title">Simple execution example</title>
          <programlisting><![CDATA[
#include <stdio.h>
#include "floatfann.h"

int main()
{
    fann_type *calc_out;
    fann_type input[2];

    struct fann *ann = fann_create_from_file("xor_float.net");

    input[0] = 0;
    input[1] = 1;
    calc_out = fann_run(ann, input);

    printf("xor test (%f,%f) -> %f\n", input[0], input[1], *calc_out);

    fann_destroy(ann);

    return 0;
}
]]></programlisting>
        </example>
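        <para>
          To try the two examples, they must be linked against the library. Assuming the
          library has been installed as described in
          <link linkend="intro.install.src" endterm="intro.install.src.title" />, and assuming
          the programs are saved as <filename>xor_train.c</filename> and
          <filename>xor_test.c</filename> (file names chosen here purely for illustration),
          something like the following should work with gcc, since <filename>fann.h</filename>
          is matched by the fann library and <filename>floatfann.h</filename> by the
          floatfann library: <command>gcc xor_train.c -o xor_train -lfann</command> and
          <command>gcc xor_test.c -o xor_test -lfloatfann</command>.
        </para>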
      </section>
    </section>
    <section id="intro.help">
      <title id="intro.help.title">Getting Help</title>
      <para>
        If after reading the documentation you are still having problems, or have a question
        that is not covered in the documentation, please consult the fann-general mailing
        list. Archives and subscription information are available
        <ulink url="http://lists.sourceforge.net/lists/listinfo/fann-general">here</ulink>.
      </para>
    </section>
  </chapter>
  <chapter id="adv">
    <title id="adv.title">Advanced Usage</title>
    <para>
      This section describes some of the low-level functions and how they can be used to
      obtain more control of the fann library. For a full list of functions, please see the
      <link linkend="api">API Reference</link>, which has an explanation of all the fann
      library functions. Also feel free to take a look at the source code.
    </para>
    <para>
      This section describes different procedures which can help to get more power out of
      the fann library: <link linkend="adv.adj" endterm="adv.adj.title" />,
      <link linkend="adv.design" endterm="adv.design.title" />,
      <link linkend="adv.errval" endterm="adv.errval.title" />, and
      <link linkend="adv.train_test" endterm="adv.train_test.title" />.
    </para>
    <section id="adv.adj">
      <title id="adv.adj.title">Adjusting Parameters</title>
      <para>
        Several different parameters exist in an ANN. These parameters are given defaults in
        the fann library, but they can be adjusted at runtime. There is no sense in adjusting
        most of these parameters after the training, since it would invalidate the training,
        but it does make sense to adjust some of the parameters during training, as will be
        described in <link linkend="adv.train_test" endterm="adv.train_test.title" />.
        Generally speaking, these are parameters that should be adjusted before training.
      </para>
      <para>
        The learning rate is one of the most important parameters, but unfortunately it is
        also a parameter which is hard to find a reasonable default for. I (SN) have several
        times ended up using 0.7, but it is a good idea to test several different learning
        rates when training a network. It is also worth noting that the activation function
        has a profound effect on the optimal learning rate
        [<xref linkend="bib.thimm_1997" endterm="bib.thimm_1997.abbrev"/>]. The learning
        rate can be set when creating the network, but it can also be set by the
        <link linkend="api.fann_set_learning_rate"><function>fann_set_learning_rate</function></link>
        function.
      </para>
      <para>
        The initial weights are random values between -0.1 and 0.1. If other weights are
        preferred, the weights can be altered by the
        <link linkend="api.fann_randomize_weights"><function>fann_randomize_weights</function></link>
        or <link linkend="api.fann_init_weights"><function>fann_init_weights</function></link>
        function.
      </para>
      <para>
        In [<xref linkend="bib.fiesler_1997" endterm="bib.fiesler_1997.abbrev"/>], Thimm and
        Fiesler state that, "An <emphasis>(sic)</emphasis> fixed weight variance of 0.2,
        which corresponds to a weight range of [-0.77, 0.77], gave the best mean performance
        for all the applications tested in this study. This performance is similar or better
        as compared to those of the other weight initialization methods."
      </para>
      <para>
        The standard activation function is the sigmoid activation function, but it is also
        possible to use the threshold activation function. A list of the currently available
        activation functions is available in the
        <link linkend="api.sec.constants.activation" endterm="api.sec.constants.activation.title"/>
        section. The activation functions are chosen using the
        <link linkend="api.fann_set_activation_function_hidden"><function>fann_set_activation_function_hidden</function></link>
        and
        <link linkend="api.fann_set_activation_function_output"><function>fann_set_activation_function_output</function></link>
        functions.
      </para>
      <para>
        These two functions set the activation function for the hidden layers and for the
        output layer. Likewise, the steepness parameter used in the sigmoid function can be
        adjusted with the
        <link linkend="api.fann_set_activation_steepness_hidden"><function>fann_set_activation_steepness_hidden</function></link>
        and
        <link linkend="api.fann_set_activation_steepness_output"><function>fann_set_activation_steepness_output</function></link>
        functions.
      </para>
      <para>
        FANN distinguishes between the hidden layers and the output layer to allow more
        flexibility.
        This is especially a good idea for users wanting discrete output from the network,
        since they can set the activation function for the output to threshold. Please note
        that it is not possible to train a network when using the threshold activation
        function, because it is not differentiable.
      </para>
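      <para>
        As a brief sketch of how these functions fit together, the following program creates
        a network with the same topology as the training example and adjusts the parameters
        discussed above before training would begin. The specific values used here (a
        learning rate of 0.5, the [-0.77, 0.77] weight range from
        [<xref linkend="bib.fiesler_1997" endterm="bib.fiesler_1997.abbrev"/>], and a
        steepness of 0.5) are arbitrary choices for demonstration, not recommendations.
      </para>
      <example id="example.adjust_params">
        <title>Adjusting parameters before training (sketch)</title>
        <programlisting><![CDATA[
#include "fann.h"

int main()
{
    /* Same topology as the training example: 2 inputs, one hidden
       layer of 4 neurons, 1 output, fully connected (connection
       rate 1), initial learning rate 0.7. */
    struct fann *ann = fann_create(1, 0.7, 3, 2, 4, 1);

    /* The learning rate was set by fann_create, but it can be
       changed at any time before (or during) training. */
    fann_set_learning_rate(ann, 0.5);

    /* Replace the default [-0.1, 0.1] initial weights with the
       [-0.77, 0.77] range suggested by Thimm and Fiesler. */
    fann_randomize_weights(ann, -0.77, 0.77);

    /* Choose activation functions for the hidden and output layers,
       and adjust the steepness of the sigmoid. Remember that a
       network with threshold output cannot be trained. */
    fann_set_activation_function_hidden(ann, FANN_SIGMOID);
    fann_set_activation_function_output(ann, FANN_SIGMOID);
    fann_set_activation_steepness_hidden(ann, 0.5);
    fann_set_activation_steepness_output(ann, 0.5);

    /* ... train and save the network as in the training example ... */

    fann_destroy(ann);

    return 0;
}
]]></programlisting>
      </example>
    </section>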