    <section id="adv.design">
      <title id="adv.design.title">Network Design</title>
      <para>
	When creating a network it is necessary to define how many layers, neurons and connections it should have. If the network becomes too large, the ANN
	will have difficulty learning, and when it does learn it will tend to over-fit, resulting in poor generalization. If the network becomes too small, it
	will not be able to represent the rules needed to learn the problem and it will never reach a sufficiently low error rate.
      </para>
      <para>
	The number of hidden layers is also important. Generally speaking, if the problem is simple, one or two hidden layers are often enough, but as the
	problems get more complex, so does the need for more layers.
      </para>
      <para>
	One way of getting a large network which is not too complex is to adjust the connection_rate parameter given to
	<link linkend="api.fann_create"><function>fann_create</function></link>. If this parameter is 0.5, the constructed network will have the same number of
	neurons, but only half as many connections. It is difficult to say which problems this approach is useful for, but if you have a problem which can be
	solved by a fully connected network, it is a good idea to see if it still works after removing half the connections.
      </para>
    </section>
    <section id="adv.errval">
      <title id="adv.errval.title">Understanding the Error Value</title>
      <para>
	The mean square error value is calculated while the ANN is being trained. Some functions are implemented to use and manipulate this error value. The
	<link linkend="api.fann_get_MSE"><function>fann_get_MSE</function></link> function returns the error value and
	<link linkend="api.fann_reset_MSE"><function>fann_reset_MSE</function></link> resets it. The following explains how the mean square error value is
	calculated, to give an idea of the value's ability to reveal the quality of the training.
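	As a standalone illustration of the calculation, it can also be sketched in plain C (this sketch is not FANN library code; the names in it are chosen
	for the example only):
	<programlisting><![CDATA[#include <stdio.h>

/* Mean square error over n output neurons: for each neuron, square the
 * difference between the desired and the actual output, then average. */
static float mean_square_error(const float *desired, const float *actual,
                               unsigned int n)
{
	float sum = 0.0f;
	unsigned int i;
	for(i = 0; i < n; i++){
		float diff = desired[i] - actual[i];
		sum += diff * diff;
	}
	return sum / n;
}

int main()
{
	const float desired[2] = {1.0f, 0.0f};
	const float actual[2] = {0.9f, 0.2f};
	/* ((1 - 0.9)^2 + (0 - 0.2)^2) / 2 = (0.01 + 0.04) / 2 = 0.025 */
	printf("MSE: %f\n", mean_square_error(desired, actual, 2));
	return 0;
}]]></programlisting>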
      </para>
      <para>
	If <emphasis>d</emphasis> is the desired output of an output neuron and <emphasis>y</emphasis> is the actual output of the neuron, the square error is
	(d - y) squared. If two output neurons exist, the mean square error for these two neurons is the average of the two square errors.
      </para>
      <para>
	When training with the <link linkend="api.fann_train_on_file"><function>fann_train_on_file</function></link> function, an error value is printed. This
	error value is the mean square error for all the training data, i.e. the average of the square errors over all the training pairs.
      </para>
    </section>
    <section id="adv.train_test">
      <title id="adv.train_test.title">Training and Testing</title>
      <para>
        Normally it will be sufficient to use the <link linkend="api.fann_train_on_file"><function>fann_train_on_file</function></link> training function, but
	sometimes you want more control and have to write a custom training loop. This could be because you would like another stop criterion, or because you
	would like to adjust some of the parameters during training. A different stop criterion than the value of the combined mean square error could be that
	each of the training pairs should have a mean square error lower than a given value.
      </para>
      <example id="example.train_on_file_internals">
        <title id="example.train_on_file_internals.title">
	  The internals of the <function>fann_train_on_file</function> function, without writing the status line.
        </title>
        <programlisting><![CDATA[struct fann_train_data *data = fann_read_train_from_file(filename);

for(i = 1 ; i <= max_epochs ; i++) {
  fann_reset_MSE(ann);
  for (j = 0 ; j != data->num_data ; j++) {
    fann_train(ann, data->input[j], data->output[j]);
  }
  if ( fann_get_MSE(ann) < desired_error ) {
    break;
  }
}
fann_destroy_train(data);]]></programlisting>
      </example>
      <para>
	This piece of code introduces the <link linkend="api.fann_train"><function>fann_train</function></link> function, which trains the ANN for one
	iteration with one pair of inputs and outputs and also updates the mean square error. The
	<link linkend="api.struct.fann_train_data"><type>fann_train_data</type></link> structure is also introduced; this structure is a container for the
	training data in the file described in figure 10. The structure can be used to train the ANN, but it can also be used to test the ANN with data which
	it has not been trained with.
      </para>
      <example id="example.calc_mse">
	<title id="example.calc_mse.title">Test all of the data in a file and calculate the mean square error.</title>
	<programlisting><![CDATA[struct fann_train_data *data = fann_read_train_from_file(filename);

fann_reset_MSE(ann);
for(i = 0 ; i != data->num_data ; i++ ) {
  fann_test(ann, data->input[i], data->output[i]);
}
printf("Mean Square Error: %f\n", fann_get_MSE(ann));
fann_destroy_train(data);]]></programlisting>
      </example>
      <para>
	This piece of code introduces another useful function, <link linkend="api.fann_test"><function>fann_test</function></link>, which takes an input
	array and a desired output array as parameters and returns the calculated output. It also updates the mean square error.
      </para>
    </section>
    <section id="adv.over_fit">
      <title id="adv.over_fit.title">Avoid Over-Fitting</title>
      <para>
        With the knowledge of how to train and test an ANN, a new approach to training can be introduced. If too much training is applied to a set of data,
	the ANN will eventually over-fit, meaning that it will be fitted precisely to this set of training data and thereby lose generalization. It is often
	a good idea to test how well an ANN performs on data that it has not seen before. Testing with data not seen before can be done while training, to
	see how much training is required in order to perform well without over-fitting. The testing can either be done by hand, or an automatic test can be
	applied, which stops the training when the mean square error of the test data is no longer improving.
      </para>
    </section>
    <section id="adv.adj_train">
      <title id="adv.adj_train.title">Adjusting Parameters During Training</title>
      <para>
	If a very low mean square error is required, it can sometimes be a good idea to gradually decrease the learning rate during training, in order to
	make the adjustment of the weights more subtle. If more precision is required, it might also be a good idea to use double precision floats instead
	of standard floats.
      </para>
      <para>
	The threshold activation function is faster than the sigmoid function, but since it is not possible to train with this function, you may wish to
	consider an alternate approach:
      </para>
      <para>
	While training the ANN you could slightly increase the steepness parameter of the sigmoid function. This would make the sigmoid function steeper and
	make it look more like the threshold function. After this training session you could set the activation function to the threshold function and the
	ANN would work with this activation function. This approach will not work on all kinds of problems, but it has been successfully tested on the XOR
	function.
      </para>
    </section>
  </chapter>
  <chapter id="fixed">
    <title id="fixed.title">Fixed Point Usage</title>
    <para>
      It is possible to run the ANN with fixed point numbers (internally represented as integers).
      This option is only intended for use on computers with no floating point processor, for example, the iPAQ, but a minor performance enhancement can
      also be seen on most modern computers [<xref linkend="bib.IDS_2000" endterm="bib.IDS_2000.abbrev"/>].
    </para>
    <section id="fixed.train">
      <title id="fixed.train.title">Training a Fixed Point ANN</title>
      <para>
        The ANN cannot be trained in fixed point, which is why the training part is basically the same as for floating point numbers. The only difference is
	that you should save the ANN as fixed point. This is done by the
	<link linkend="api.fann_save_to_fixed"><function>fann_save_to_fixed</function></link> function. This function saves a fixed point version of the ANN,
	but it also does some analysis in order to find out where the decimal point should be. The result of this analysis is returned from the function.
      </para>
      <para>
	The decimal point returned from the function indicates how many bits are used for the fractional part of the fixed point numbers. If this number is
	negative, there will most likely be integer overflow when running the library with fixed point numbers, and this should be avoided. Furthermore, if
	the decimal point is too low (e.g. lower than 5), it is probably not a good idea to use the fixed point version.
      </para>
      <para>
	Please note that the inputs to networks that should be used in fixed point should be between -1 and 1.
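	Raw input values can be brought into this range with a simple linear mapping; for example (the helper below is hypothetical and not part of the FANN
	API):
	<programlisting><![CDATA[#include <stdio.h>

/* Hypothetical helper: linearly map a raw value from [min, max] to [-1, 1]. */
static float scale_input(float value, float min, float max)
{
	return 2.0f * (value - min) / (max - min) - 1.0f;
}

int main()
{
	/* A raw sensor reading in [0, 255] becomes an ANN input in [-1, 1]. */
	printf("%f\n", scale_input(0.0f, 0.0f, 255.0f));   /* prints -1.000000 */
	printf("%f\n", scale_input(127.5f, 0.0f, 255.0f)); /* prints 0.000000 */
	printf("%f\n", scale_input(255.0f, 0.0f, 255.0f)); /* prints 1.000000 */
	return 0;
}]]></programlisting>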
      </para>
      <example id="example.train_fixed">
	<title id="example.train_fixed.title">An example of a program written to support training in both fixed point and floating point numbers</title>
	<programlisting><![CDATA[#include "fann.h"
#include <stdio.h>

int main()
{
	fann_type *calc_out;
	const float connection_rate = 1;
	const float learning_rate = 0.7;
	const unsigned int num_input = 2;
	const unsigned int num_output = 1;
	const unsigned int num_layers = 3;
	const unsigned int num_neurons_hidden = 4;
	const float desired_error = 0.001;
	const unsigned int max_iterations = 20000;
	const unsigned int iterations_between_reports = 100;
	struct fann *ann;
	struct fann_train_data *data;

	unsigned int i = 0;
	unsigned int decimal_point;

	printf("Creating network.\n");
	ann = fann_create(connection_rate, learning_rate, num_layers,
		num_input, num_neurons_hidden, num_output);

	printf("Training network.\n");
	data = fann_read_train_from_file("xor.data");
	fann_train_on_data(ann, data, max_iterations, iterations_between_reports, desired_error);

	printf("Testing network.\n");
	for(i = 0; i < data->num_data; i++){
		calc_out = fann_run(ann, data->input[i]);
		printf("XOR test (%f,%f) -> %f, should be %f, difference=%f\n",
			data->input[i][0], data->input[i][1], *calc_out,
			data->output[i][0], fann_abs(*calc_out - data->output[i][0]));
	}

	printf("Saving network.\n");
	fann_save(ann, "xor_float.net");
	decimal_point = fann_save_to_fixed(ann, "xor_fixed.net");
	fann_save_train_to_fixed(data, "xor_fixed.data", decimal_point);

	printf("Cleaning up.\n");
	fann_destroy_train(data);
	fann_destroy(ann);

	return 0;
}]]></programlisting>
      </example>
    </section>
    <section id="fixed.run">
      <title id="fixed.run.title">Running a Fixed Point ANN</title>
      <para>
	Running a fixed point ANN is done much like running an ordinary ANN. The difference is that the inputs and outputs should be in fixed point
	representation.
	Furthermore, the inputs should be restricted to between -<parameter>multiplier</parameter> and <parameter>multiplier</parameter> to avoid integer
	overflow, where the <parameter>multiplier</parameter> is the value returned from
	<link linkend="api.fann_get_multiplier"><function>fann_get_multiplier</function></link>. This multiplier is the value that a floating point number
	should be multiplied by in order to become a fixed point number; likewise, the output of the ANN should be divided by this multiplier in order to be
	between zero and one.
      </para>
      <para>
	To help when using fixed point numbers, another function is provided:
	<link linkend="api.fann_get_decimal_point"><function>fann_get_decimal_point</function></link>, which returns the decimal point. The decimal point is
	the position dividing the integer and fractional parts of the fixed point number and is useful for doing operations on the fixed point inputs and
	outputs.
      </para>
      <example id="example.exec_fixed">
	<title id="example.exec_fixed.title">An example of a program written to support both fixed point and floating point numbers</title>
	<programlisting><![CDATA[#include <time.h>
#include <sys/time.h>
#include <stdio.h>
#include "fann.h"

int main()
{
	fann_type *calc_out;
	unsigned int i;
	int ret = 0;
	struct fann *ann;
	struct fann_train_data *data;

	printf("Creating network.\n");
#ifdef FIXEDFANN
	ann = fann_create_from_file("xor_fixed.net");
#else
	ann = fann_create_from_file("xor_float.net");
#endif

	if(!ann){
		printf("Error creating ann --- ABORTING.\n");
		return 0;
	}

	printf("Testing network.\n");
#ifdef FIXEDFANN
	data = fann_read_train_from_file("xor_fixed.data");
#else
	data = fann_read_train_from_file("xor.data");
#endif

	for(i = 0; i < data->num_data; i++){
		fann_reset_MSE(ann);
		calc_out = fann_test(ann, data->input[i], data->output[i]);
#ifdef FIXEDFANN
		printf("XOR test (%d, %d) -> %d, should be %d, difference=%f\n",
			data->input[i][0], data->input[i][1], *calc_out, data->output[i][0],
			(float)fann_abs(*calc_out - data->output[i][0])/fann_get_multiplier(ann));
		if((float)fann_abs(*calc_out - data->output[i][0])/fann_get_multiplier(ann) > 0.1){
			printf("Test failed\n");
			ret = -1;
		}
#else
		printf("XOR test (%f, %f) -> %f, should be %f, difference=%f\n",
			data->input[i][0], data->input[i][1], *calc_out, data->output[i][0],
			(float)fann_abs(*calc_out - data->output[i][0]));
#endif
	}

	printf("Cleaning up.\n");
	fann_destroy_train(data);
	fann_destroy(ann);

	return ret;
