<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<HTML><HEAD><TITLE>Advanced Usage</TITLE>
<link href="../style.css" rel="stylesheet" type="text/css">
<META NAME="GENERATOR" CONTENT="Modular DocBook HTML Stylesheet Version 1.79">
<LINK REL="HOME" TITLE="Fast Artificial Neural Network Library" HREF="index.html">
<LINK REL="PREVIOUS" TITLE="Getting Help" HREF="x100.html">
<LINK REL="NEXT" TITLE="Network Design" HREF="x141.html">
</HEAD>
<BODY CLASS="chapter" BGCOLOR="#FFFFFF" TEXT="#000000" LINK="#0000FF" VLINK="#840084" ALINK="#0000FF">
<DIV CLASS="NAVHEADER">
<TABLE SUMMARY="Header navigation table" WIDTH="100%" BORDER="0" CELLPADDING="0" CELLSPACING="0">
<TR><TH COLSPAN="3" ALIGN="center">Fast Artificial Neural Network Library</TH></TR>
<TR>
<TD WIDTH="10%" ALIGN="left" VALIGN="bottom"><A HREF="x100.html" ACCESSKEY="P">Prev</A></TD>
<TD WIDTH="80%" ALIGN="center" VALIGN="bottom"></TD>
<TD WIDTH="10%" ALIGN="right" VALIGN="bottom"><A HREF="x141.html" ACCESSKEY="N">Next</A></TD>
</TR></TABLE>
<HR ALIGN="LEFT" WIDTH="100%"></DIV>
<DIV CLASS="chapter">
<H1><A NAME="adv"></A>Chapter 2. Advanced Usage</H1>
<P> This chapter describes some of the low-level functions and how they can be used to gain more control of the fann library. For a full list of functions, please see the <A HREF="c253.html">API Reference</A>, which has an explanation of all the fann library functions. Also feel free to take a look at the source code. </P>
<P> This chapter describes different procedures that can help to get more power out of the fann library: <A HREF="c104.html#adv.adj"><I>Adjusting Parameters</I></A>, <A HREF="x141.html"><I>Network Design</I></A>, <A HREF="x148.html"><I>Understanding the Error Value</I></A>, and <A HREF="x161.html"><I>Training and Testing</I></A>. </P>
<DIV CLASS="section"><H1 CLASS="section"><A NAME="adv.adj">2.1.
Adjusting Parameters</A></H1>
<P> An ANN has several different parameters. The fann library gives these parameters defaults, but they can be adjusted at runtime. There is no sense in adjusting most of these parameters after training, since doing so would invalidate the training, but it does make sense to adjust some of the parameters during training, as will be described in <A HREF="x161.html"><I>Training and Testing</I></A>. Generally speaking, these are parameters that should be adjusted before training. </P>
<P> The learning rate is one of the most important parameters, but unfortunately it is also a parameter for which it is hard to find a reasonable default. I (SN) have several times ended up using 0.7, but it is a good idea to test several different learning rates when training a network. It is also worth noting that the activation function has a profound effect on the optimal learning rate [<A HREF="b3048.html#bib.thimm_1997"><I>Thimm and Fiesler, 1997</I></A>]. The learning rate can be set when creating the network, but it can also be set with the <A HREF="r1007.html"><CODE CLASS="function">fann_set_learning_rate</CODE></A> function. </P>
<P> The initial weights are random values between -0.1 and 0.1. If other weights are preferred, the weights can be altered with the <A HREF="r396.html"><CODE CLASS="function">fann_randomize_weights</CODE></A> or <A HREF="r421.html"><CODE CLASS="function">fann_init_weights</CODE></A> function. </P>
<P> In [<A HREF="b3048.html#bib.fiesler_1997"><I>Thimm and Fiesler, High-Order and Multilayer Perceptron Initialization, 1997</I></A>], Thimm and Fiesler state that, "An <SPAN CLASS="emphasis"><I CLASS="emphasis">(sic)</I></SPAN> fixed weight variance of 0.2, which corresponds to a weight range of [-0.77, 0.77], gave the best mean performance for all the applications tested in this study. This performance is similar or better as compared to those of the other weight initialization methods."
</P>
<P> The standard activation function is the sigmoid activation function, but it is also possible to use the threshold activation function. A list of the currently available activation functions is available in the <A HREF="r2030.html"><I>Activation Functions</I></A> section. The activation functions are chosen using the <A HREF="r1040.html"><CODE CLASS="function">fann_set_activation_function_hidden</CODE></A> and <A HREF="r1076.html"><CODE CLASS="function">fann_set_activation_function_output</CODE></A> functions. </P>
<P> These two functions set the activation function for the hidden layers and for the output layer, respectively. Likewise, the steepness parameter used in the sigmoid function can be adjusted with the <A HREF="r1112.html"><CODE CLASS="function">fann_set_activation_steepness_hidden</CODE></A> and <A HREF="r1149.html"><CODE CLASS="function">fann_set_activation_steepness_output</CODE></A> functions. </P>
<P> FANN distinguishes between the hidden layers and the output layer to allow more flexibility. This is especially useful for users who want discrete output from the network, since they can set the activation function for the output layer to threshold. Please note that it is not possible to train a network that uses the threshold activation function, because it is not differentiable. </P>
</DIV></DIV>
<DIV CLASS="NAVFOOTER">
<HR ALIGN="LEFT" WIDTH="100%">
<TABLE SUMMARY="Footer navigation table" WIDTH="100%" BORDER="0" CELLPADDING="0" CELLSPACING="0">
<TR>
<TD WIDTH="33%" ALIGN="left" VALIGN="top"><A HREF="x100.html" ACCESSKEY="P">Prev</A></TD>
<TD WIDTH="34%" ALIGN="center" VALIGN="top"><A HREF="index.html" ACCESSKEY="H">Home</A></TD>
<TD WIDTH="33%" ALIGN="right" VALIGN="top"><A HREF="x141.html" ACCESSKEY="N">Next</A></TD>
</TR>
<TR>
<TD WIDTH="33%" ALIGN="left" VALIGN="top">Getting Help</TD>
<TD WIDTH="34%" ALIGN="center" VALIGN="top"> </TD>
<TD WIDTH="33%" ALIGN="right" VALIGN="top">Network Design</TD>
</TR></TABLE></DIV>
</BODY></HTML>