
int length = bit->readbitmap( INPUT, 2 );

Bitmaps were originally created to be used in training. This means that bitmaps
can be read at runtime into a vector (see "kohen_example.cpp" or "sigmoid_example.cpp")
and then used during training to train the network. This is exactly
what is done in the examples above.

As for the bitmap file format, the following is an example of a short bitmap file:

:a:
0100
0100
0010
0100
0100
:b:
010
010
001
010
:end:

a is referenced by "bit->readbitmap( INPUT, 0 );"
INPUT will now contain the contents of "a"
b is referenced by "bit->readbitmap( INPUT, 1 );"
INPUT will now contain the contents of "b"
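
Putting this together, here is a minimal sketch of reading both bitmaps
at runtime. It uses only calls shown in this manual; the header name
"nn_utility.h" is an assumption, so substitute whatever header your copy
of nn-utility uses:

#include <stdio.h>
#include "nn_utility.h"	//assumed header name

int main(){
	//open the bitmap file shown above
	BITMAP<float> *bit = new BITMAP<float>( "bitmap.txt" );

	nn_utility_functions<float>::VECTOR INPUT;

	//read bitmap "a" (index 0); length receives the bitmap's size
	int length = bit->readbitmap( INPUT, 0 );
	printf( "read %d values\n", length );

	//read bitmap "b" (index 1), overwriting INPUT
	length = bit->readbitmap( INPUT, 1 );
	printf( "read %d values\n", length );

	delete bit;
	return 0;
}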

If you want to see more examples of bitmap files, view the examples directory and
take a look at "bitmap.txt" and "bit.txt". Also, for programs that implement this
accessory, view "sigmoid_example.cpp" and "kohen_example.cpp".

Here are some other functions you can find in the reference section that may be of help
in using bitmaps:

bit->noise( ... )    //Distort the bitmap, add noise

bit->Print( ... )    //Print the bitmap as it is seen in the bitmap file

SIGMOID_layer_you_define->sigmoid( ... )
	//this function is for bitmaps and allows modification of values:
	//it converts 0's to 0.1's and 1's to 0.9's

HOPEFIELD_layer_you_define->Hopefield( ... )
	//Hopefield modification for bitmaps: converts 0's to -1's
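
The exact signatures are in the reference section. As a rough
illustration only (plain C++, not nn-utility's API), the two conversions
described above amount to the following; "v" and "n" are hypothetical,
standing for a float array holding a bitmap and its length:

//0 -> 0.1, 1 -> 0.9 (the sigmoid modification described above)
void sigmoid_remap( float *v, int n ){
	for( int i = 0; i < n; i++ )
		v[i] = ( v[i] == 0.0F ) ? 0.1F : 0.9F;
}

//0 -> -1, 1 stays 1 (the Hopfield modification described above)
void hopfield_remap( float *v, int n ){
	for( int i = 0; i < n; i++ )
		if( v[i] == 0.0F ) v[i] = -1.0F;
}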

5.0 Hopfield Networks
-------------------------------------------------
For a fully working Hopfield network, view the
hopefield.cpp file in the examples directory. Note
that it makes use of the bitmap accessory, so you
may want to read section 4.0 if you have not already
done so. In the meantime, this section serves to direct
you to the references that will explain more about Hopfield networks.

HopefieldMultiply		(multiplies two vectors)
HopefieldAdd			(adds two matrices)
HopefieldAddTo			(multiplies and adds vectors to result)
Hopefield			(converts 0's to -1's)
sgn (Hopefield Binomial)	(returns -1 if the argument is < 0, otherwise 1)

Note that the Hopefield functions do not undergo training. The best
way to understand these functions is to view the "hopefield.cpp" file.
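
If you want a feel for the underlying math before opening the example,
here is a short, self-contained sketch (plain C++, independent of
nn-utility's API) of the classic Hopfield scheme: store a bipolar
pattern by summing outer products, then recall by repeatedly applying
sgn to the weighted sums. The pattern, probe, and size N are hypothetical:

#include <stdio.h>

#define N 4	//number of units (hypothetical)

//sgn as described above: < 0 = -1, otherwise 1
int sgn( float x ){ return ( x < 0.0F ) ? -1 : 1; }

int main(){
	//one stored pattern, already converted from 0/1 to -1/1
	int pattern[N] = { 1, -1, 1, -1 };
	float W[N][N] = { { 0.0F } };

	//store: add the outer product of the pattern with itself (zero diagonal)
	for( int i = 0; i < N; i++ )
		for( int j = 0; j < N; j++ )
			if( i != j ) W[i][j] += (float)( pattern[i] * pattern[j] );

	//recall from a noisy probe: set each unit to sgn(weighted sum of the others)
	int state[N] = { 1, 1, 1, -1 };
	for( int pass = 0; pass < 5; pass++ )
		for( int i = 0; i < N; i++ ){
			float sum = 0.0F;
			for( int j = 0; j < N; j++ ) sum += W[i][j] * (float)state[j];
			state[i] = sgn( sum );
		}

	//prints the recovered pattern: 1 -1 1 -1
	for( int i = 0; i < N; i++ ) printf( "%d ", state[i] );
	printf( "\n" );
	return 0;
}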

5.1 How to use the Topology Constructor
-----------------------------------------------
The topology constructor is a layer constructor that
allows you to create a network in a file and then
load the network into your program. The following is an
example of a "topology.txt" file:

:first
size
2 3
bias
1 2 3
weight
2 3 4
5 6 7
:second
size
3 3
bias
2 2.0 3.0
weight
2   9.0  1.0
3   0.4  0.1
4  -0.1 -.02
:end:

:first defines a simple name for the network (the name must not
include spaces).
size tells the reader that the next line defines the row x col
dimensions of the layer.
bias is optional and tells the reader that the next line is a
vector of the bias weights.
weight is optional and tells the reader that the next lines are
a matrix of the layer weights.
The file should end with an :end: tag.

So how do we read these files? It's actually quite simple. If
we wanted to read this file into the variables hidden1 and output
we would do the following:

SIGMOID *hidden1 = new SIGMOID( "topology.txt", 0 );
layer<float> *ppHidden1 = hidden1;
SIGMOID *output = new SIGMOID( "topology.txt", 1, &ppHidden1 );

The layers would then be connected and contain the weights. For a
working example of this, view the "topology_example.cpp" file.

Note also that you may define separate layers in separate files,
and hence use different filenames and indexes. Note, though,
that you CAN still connect layers from different files, as well
as connect program-defined layers to text-file (topology) defined
layers.
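
For instance, here is a minimal sketch of the mixed case, using only
calls shown in this manual. The second topology file "topology2.txt"
and the layer sizes are hypothetical:

//a program-defined hidden layer
SIGMOID *hidden1 = new SIGMOID();
hidden1->define( 2,3 );
layer<float> *ppHidden1 = hidden1;

//an output layer loaded from a second topology file and connected
//to the program-defined layer above
SIGMOID *output = new SIGMOID( "topology2.txt", 0, &ppHidden1 );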

Note also that an updated feature has been added to the topology
constructor: it implements the use of binary numbers in PNNs. It is
not required, but if you are interested, view section 5.4.2, which
includes this updated information.

5.2 Recurrent Neural Networks
------------------------------------------------
This section focuses on the creation of recurrent
networks in which each unit in each layer is connected
to itself and to every other unit in its layer, as well
as to every unit in the next layer.

So, what can you do with recurrent neural networks?
Teach a network sequences, basically. With recurrent
networks you can teach a network a pattern over time,
or patterns in which the output is dependent upon a
previously determined or input value. An example can
be found in recurrent.cpp.

The basic change between normal and recurrent networks
is that recurrent networks use matrices for input, output,
and target. The reason for this is that each row represents
an input vector at time t, where t is referenced by MATRIX[t].

So, the first step to creating a recurrent network is to
overload the GetInput and CheckTrain functions. This can
be done quite easily; however, the CheckTrain and GetInput
functions for recurrent networks are different from those of
other networks. (View recurrent.cpp for more information.)
Here are the basic overloaded class and functions:

class MINE : public nn_utility_functions<float>{
	public:
		typedef float VECTOR[NN_UTIL_SIZE];
		typedef float MATRIX[NN_UTIL_SIZE][NN_UTIL_SIZE];
	
		void GetInput( int iteration, MATRIX &send, MATRIX &target, int &inputs );
		bool CheckTrain( MATRIX output, MATRIX target, int length_of_output, int length );
}derived;

void MINE::GetInput( int iteration, MATRIX &send, MATRIX &target, int &inputs ){
	LoadVector( send[0], ... );
	LoadVector( send[1], ... );

	LoadVector( target[0], ... );
	LoadVector( target[1], ... );

	inputs = 2;
}
//inputs always refers to the number of time steps that you want the neural network to process.
//CheckTrain works in a similar way: length_of_output contains the length of the output and
//length contains the number of time steps.

So, once this has been done, what is the next step? Well, creating the networks. This is done
using a parental layer class. This is a kind of handler layer that contains an array of layers
that can be connected to one another in a recurrent format. Here is an example:

layer<float> *parent = new layer<float>();
parent->define_recurrent();
SIGMOID *one = new SIGMOID();
SIGMOID *two = new SIGMOID();

one->define( 10,5 );
two->define( 10,5 );

layer<float> *ppOne = one;
layer<float> *ppTwo = two;

parent->add( &ppOne );
parent->add( &ppTwo );

And to train "one" and "two" (the two layers belonging to "parent") we do this:

train( &parent, ITERATIONS, LEARNING_RATE );

where ITERATIONS is the number of times to train the network, and LEARNING_RATE is the
learning rate to enforce.

To FeedForward:

parent->FeedForward_recurrent( INPUT, FINAL, INPUTS );

where INPUT is the MATRIX of inputs, and FINAL is the MATRIX into which to place the output. INPUTS
is the number of rows in INPUT.

To BackPropagate:

parent->BackPropagate_recurrent( INPUT, TARGET, LEARNING_RATE, INPUTS );

where TARGET is the MATRIX of targets.

View recurrent.cpp for an example of recurrent neural networks.

5.3 How to make your code compatible with template version 1.9
--------------------------------------------------------------
Okay, as the developer of nn-utility I am sorry that you actually
have to read this section. nn-utility is not an entirely mature
library yet, and it has gone through two significant changes in
architecture: v1.6, in which a namespace was implemented, and now
v1.9, in which templates were implemented. However, if you bear
with me and read through the tips in the following section,
you can be ready to easily convert your code into a much faster,
more efficient, and more flexible system.

Okay, this section is structured in the following manner: I give an
example of how networks used to be defined, and then re-define the
same example in the new format. If you have any further questions
about the format, feel free to submit a support request; you should
receive an answer within 24 hours:
(http://sourceforge.net/projects/nn-utility)

was:	nn_utility_functions derived;
is :	nn_utility_functions<float> derived;

was:	VECTOR input, FINAL, target;
is :	nn_utility_functions<float>::VECTOR input, FINAL, target;

was:	layer *first = new layer( layer::SIGMOID, 2,3 );
is :	SIGMOID *first = new SIGMOID();
	first->define( 2,3 );

was:	train( &first, 20, 0.9 );
is :	layer<float> *ppFirst = first;
	train( &ppFirst, 20, 0.9F );

was: 	BITMAP *bit = new BITMAP( "bit.txt" );
is :	BITMAP<float> *bit = new BITMAP<float>( "bit.txt" );

was: 	class MY_CLASS : nn_utility_functions{ ... }derived;
is :	class MY_CLASS : nn_utility_functions<float> { ... }derived;

was:    multi_layer *multi = new multi_layer( layer::SIGMOID );
	multi->add( layer::SIGMOID, 10,5 );
	multi->add( layer::PNN, 10,5 );
is :	layer<float> *multi = new layer<float>();
	multi->define_recurrent();
	SIGMOID *one = new SIGMOID();
	SIGMOID *two = new SIGMOID();
	one->define( 10,5 );
	two->define( 10,5 );

	layer<float> *ppOne = one;
	layer<float> *ppTwo = two;
	multi->add( &ppOne );
	multi->add( &ppTwo );

In general, yes, things have gotten a little bit longer, but the format
is actually easier to read and easier to manipulate; for instance, in
what used to be the multi_layer class it is much easier to access
individual layers. Also, having the ability to choose a floating-point
type such as float is a great advantage because it speeds things up.

5.4 Probabilistic Neural Networks and Partially Connected Neural Nets
---------------------------------------------------------------------
If you are not new to nn-utility, it is highly recommended you read
this section of the MANUAL. Probabilistic neural networks have implemented
quite a few new features and have modified a few old ones. This section
explains further and gives details as to what was modified and implemented.

nn-utility implements Altered Probabilistic Neural Networks. Just like normal
Probabilistic Neural Networks, they do not train. They are composed of
three layers (input, pattern, and summation). The input layer is not
defined as a layer, but rather as an input vector. Its length is
equivalent to the number of factors being tested. For example, a classification
in a two-dimensional Cartesian coordinate system would require a vector of
length 2. The pattern layer contains a number of nodes or units (length)
equivalent to the size of the training data. In other words, if you are
classifying a point by comparing it to 500 other points, the pattern layer's
size is 500. The summation layer is the size of the pattern classification.
For example, in the latter example
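
The page ends here, but to make the three-layer structure concrete,
here is a short classification sketch of a standard (Parzen-window)
PNN in plain C++. It is not necessarily nn-utility's exact "altered"
variant, and the training points, labels, and kernel width sigma are
all hypothetical:

#include <stdio.h>
#include <math.h>

#define FACTORS	 2	//input layer: length of the input vector
#define PATTERNS 4	//pattern layer: one unit per training point
#define CLASSES	 2	//summation layer: one unit per class

int main(){
	//training points (the pattern layer) and their class labels
	float train_pt[PATTERNS][FACTORS] = { {0,0}, {0,1}, {5,5}, {5,6} };
	int   label[PATTERNS]             = { 0, 0, 1, 1 };

	float input[FACTORS] = { 4.5F, 5.2F };	//point to classify
	float sigma = 1.0F;			//hypothetical kernel width

	//summation layer: accumulate a Gaussian kernel of each pattern unit per class
	float sum[CLASSES] = { 0.0F };
	for( int p = 0; p < PATTERNS; p++ ){
		float dist2 = 0.0F;
		for( int f = 0; f < FACTORS; f++ ){
			float d = input[f] - train_pt[p][f];
			dist2 += d * d;
		}
		sum[ label[p] ] += (float)exp( -dist2 / ( 2.0F * sigma * sigma ) );
	}

	//decision: pick the class with the largest summed activation
	int best = 0;
	for( int c = 1; c < CLASSES; c++ )
		if( sum[c] > sum[best] ) best = c;
	printf( "class %d\n", best );	//prints "class 1" for this data
	return 0;
}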
