nn-utility MANUAL (written by Panayiotis Thomakos)

If you are reading this documentation I think it is fair
to assume that you have installed nn-utility. If you have
not you should read the INSTALL file which will guide you
through the quick and simple installation process.

This MANUAL corresponds to the v2.1 distribution of this
neural network library. The following is a table of
contents that you may skim through to get an idea of what
this documentation encompasses.

If you want to get a quick start and build some simple networks
you can jump to section 2.0 "Basics: How do I use it?" and then
to section 3.0 "Intermediate: Training a Neural Network" to learn
how to train the networks you make.

If you have used nn-utility before, it is very important that you
re-read section 2.0, or read section 5.3 on how to make your
old code compatible with v1.9. The architecture and program prototype
have changed as of version 1.9. If you are already familiar with
the new nn-utility architecture, this is not necessary.

IMPORTANT: This library has been released under the GNU
	Lesser General Public License. Please view the
	COPYRIGHT file for more details.

Contents					   approx
------------------------------------------------------------	
0.1 Updates from previous Versions
    0.1.0 Updates from version 1.0  to 1.7 	(   short  )
    0.1.1 Updates from version 1.7  to 1.8	(   short  )
    0.1.2 Updates from version 1.8  to 1.9	(   short  )
1.0 What is nn-utility?                         (  5 lines )
2.0 Basics: How do I use it?			(  5 lines )
    2.1 Program prototype			( 20 lines )
    2.2 Verify that the library works		( 20 lines )
    2.3 Creating a neural network		( 10 lines )
        2.3.1 Defining topology			( 40 lines )
	2.3.2 Feed Forward sweep		( 35 lines )
	2.3.3 Back Propagation sweep		( 20 lines )
	2.3.4 General IMPORTANT note		( 10 lines )
3.0 Intermediate: Training a Neural Network	( 10 lines )
    3.1 The GetInput Function			( 20 lines )
    3.2 The CheckTrain Function			( 10 lines )
    3.3 The train Function			( 10 lines )
4.0 Built in Accessories: Using Bitmaps		( 25 lines )
5.0 Hopfield Networks				( 10 lines )
5.1 How to use the Topology Constructor		( 20 lines )
5.2 Recurrent Neural Networks			(    ..    )
5.3 Making your code compatible with v1.9	(    ..    )
5.4 Probabilistic and Partially Connected
    Networks					(    ..    )
5.5 Defining your own Network Functions		(    ..    )
5.6 Annealing Networks				(    ..    )
5.7 Notes on Optimizing your code		(    ..    )
6.0 Reference:    The Functions and Classes	(   index  )
    This is a section of all the commands, variables, typedefs,
    classes, and concepts of nn-utility. It's alphabetically ordered.
--------------------------------------------

---------------------------------------------
0.1.0 Updates from version 1.0 to 1.7
--------------------------------------------
(UPDATE 1.2.1 ) General "Makefile" for nn-utility in the examples directory.
(UPDATE 1.2.2 ) Ability to declare and assign networks on one line.
(UPDATE 1.2.3 ) The Hopfield functions (see section 5.0).
(UPDATE 1.2a.1) Topology functions and files (5.1).
(UPDATE 1.2a.2) Basic functions of recurrent and partially connected (5.2).
(UPDATE 1.2a.3) PrintVectorExact function (see reference).
(UPDATE 1.2a.4) AddVectors function (see reference).
(UPDATE 1.3.1) GetInput function without length (see reference).
(UPDATE 1.3.2) Multi-Layer Networks class (5.3).
(UPDATE 1.3.3) ClearVector can now fill vectors (see reference).
(UPDATE 1.4.1 ) Ostream operators for layers (see reference write).
(UPDATE 1.5.1) Display changed to DisplayLAYER (see reference).
(UPDATE 1.5.2) Examples have been debugged, and now work better.
(UPDATE 1.6.0) Program architecture has changed, re-read section 2.0
(UPDATE 1.6.1) Probabilistic and Partially Connected NN's (5.4).
(UPDATE 1.6.2) SetColumn, Topology Fire, and NET.PNN (see reference).
(UPDATE 1.7.1) Recurrent Networks implemented (see 5.2).

------------------------------------------
0.1.1 Updates from version 1.7 to 1.8
-----------------------------------------
(UPDATE 1.8.1) PrintMatrix implemented (see reference)
(UPDATE 1.8.2) Overloading of PrintVector (see reference)
(UPDATE 1.8.3) Overloading LoadVector function to work with integers (see reference)
(UPDATE 1.8.4) ostream and istream operator for vectors (see reference)
(UPDATE 1.8.5) SetConstant for layers (see reference)
(UPDATE 1.8.7) Annealing networks implemented (5.6 and see reference)

------------------------------------------
0.1.2 Updates from version 1.8 to 1.9
-----------------------------------------
(UPDATE 1.9.1) All double values have been converted to templates (see reference)
(UPDATE 1.9.2) All supported neural networks are derived from layer (see reference)
(UPDATE 1.9.3) Ostream and istream are done with read and write (see reference)
(UPDATE 1.9.4) Removed << and >> operators for VECTORS (see reference)
(UPDATE 1.9.5) Notes on Optimizing your code (5.7)
(UPDATE 1.9.6) Reference is updated in alphabetical order (see reference)

1.0 What is nn-utility?
-------------------------------------------

In short, nn-utility is a neural network library for the
C++ programmer. It is entirely object oriented and focuses
on removing the tedious and confusing parts of programming neural networks.
By this I mean that network layers are easily defined, an
entire multi-layer network can be created in a few lines and
trained with two functions, and layers can be connected to one another
easily and painlessly.

nn-utility also has predefined network functions, such as Kohonen,
sigmoid, Hopfield, self-organizing feature maps, radial basis,
probabilistic, binomial, recurrent, simulated annealing, and
any user-defined function you can think of. In fact, what is
special about nn-utility is that you can define your own
functions; you do not need to depend on the predefined ones
(although you have that option).

2.0 Basics: How do I use it?
------------------------------------------

The requirements for using this library include competence in C++
and, very preferably, an understanding of pointers, even though this
is not strictly necessary. It is also highly recommended that the programmer
understand object-oriented programming; otherwise a lot of the
programming commands will seem meaningless. The library will, however,
still be usable.

To begin programming read through the subsections of section 2.0:

2.1 Program Prototype
---------------------------------------------------------

I use VI for writing C++ files; if you use another
editor, simply substitute the few editor commands that are used.

Create an empty file named "sigmoid.cpp"
$ vi sigmoid.cpp

In the "sigmoid.cpp" file type the following lines:
//----------------------------------------------
#include <nn-utility.h>

using namespace nn_utility;
nn_utility_functions<float> derived;

int main(){ return 0; }
//---------------------------------------------

This is the program prototype. (A program prototype is the minimum
code a program needs in order to run; in this case, the minimum
required to run a neural network utility program.)
Save the file and exit to the command line (:x in VI).

The first line ("#include <nn-utility.h>") directs the compiler
to include the nn-utility definitions. The second line ("using
namespace nn_utility;") tells the compiler to use the nn_utility
definitions. And the third line ("nn_utility_functions<float> derived;")
directs the compiler to create a handler for neural network
functions. The difference between v1.9 and earlier versions is
that <float> is now required. If we were creating a network in which
accuracy was necessary, we could use <double> instead. If we were
creating a network that only used integers, we could
replace the <float> with <int>.

The first step is to verify that the library actually works.
Once this has been done I will explain what each function does
and work out an example for you.

2.2 Verify that the library works
---------------------------------------------------------
To make sure that the library works, compile the program:

Compile the file and include the nn-utility library
$ g++ sigmoid.cpp -lnn-utility

If this does not work, try the following:
$ g++ sigmoid.cpp -L/usr/lib/. -lnn-utility

If this does not work, try the following:
$ g++ sigmoid.cpp -L/usr/lib/. -I/usr/include/. -lnn-utility

If this works and the program compiles skip to section 2.3, otherwise
keep reading...

If none of these work, then you have incorrectly installed nn-utility,
or you are using incorrect path names. If you adjusted the Makefile
before installing, you may need to adjust the path names
after the -I and -L flags: -I specifies the include file path, and -L
specifies the library path. If you are new to using libraries,
I highly suggest you read a tutorial about them. If you are in a hurry,
what you will need to do is locate the "nn-utility.h" and "nn-utility.a" files,
place the "nn-utility.h" file in the same directory as the "iostream.h" file,
and place the "nn-utility.a" file in the directory that holds the other
".a" library files. If you have done this correctly, the first compiling
suggestion should work.

If you still have questions, take a look at the INSTALL file. If you are still
experiencing problems, please feel free to contact me at panthomakos@users.sourceforge.net,
or post a support request on sourceforge.net/projects/nn-utility.

2.3 Creating the Network
-----------------------------------------------------

Okay, now that you have verified that the library works, how do you
create a network? The topology of the network is usually defined in the main
function, unless you wish to define it globally, a process
you can accomplish on your own. For now I am working with a
simple Feed Forward and Back-Propagating sigmoid function network.

To complete this section please make sure that you have created the
prototype file from section 2.1, because this file will be built on
in the next few steps.

2.3.1 Defining Topology
-----------------------------------------------------
In the main function of your program insert the following lines:

//--------------------------------------------
SIGMOID *hidden1 = new SIGMOID();
SIGMOID *hidden2 = new SIGMOID();
SIGMOID *output = new SIGMOID();

hidden1->define( 2,2 );
hidden2->define( 2,2 );
output->define( 2,1 );
//-------------------------------------------

By performing these steps you have created three layers. Note, however, that
there is no input layer, because the input is defined separately.
What this means is that whenever you want to present the network with an
input, you define the input as a VECTOR (which is basically an array of values)
and then use a Feed Forward function to stimulate the network, that is, to send
the input to the network.

As for the definition lines you just typed in, in C++ terms you have created
three pointers to classes. You have defined the layers as SIGMOID functions.
You have also set the row size and column size of each layer matrix (through
the use of the define function). The row size represents the number of input
nodes, and the column size represents the number of nodes in the current layer.
What does this mean? Well, layers in a network are represented by matrices,
the rows of each matrix (or each layer) represent the number of inputs the
layer is receiving. The columns of this matrix represent the number of nodes
or units in the layer. For example, the "hidden1" layer is a 2x2, which means
it has 2 input units before it and it is composed of 2 units itself. The next
layer "hidden2" is the same, since it will receive input from the "hidden1"
layer and because it has 2 units/neurons itself. The final "output" layer only
has one node in it (the column size), but receives input from "hidden2", which
has two nodes, so "output"'s row size is 2. The numbers in the matrix represent
the weights of the connections between nodes. An extra VECTOR can also be added
to the matrix to represent biased weights that always have an input of 1.

Okay, so we have defined the layers, but how do we connect them? Don't worry,
it's extremely easy. You may now connect all the layers by typing the following lines
in the main function, just after the lines you just typed:

//---------------------------------------------
layer<float> *ppHidden1 = hidden1;
layer<float> *ppHidden2 = hidden2;
layer<float> *ppOutput  = output;

derived.Insert( &ppHidden1, &ppHidden2 );
derived.Insert( &ppHidden2, &ppOutput );
//--------------------------------------------

First, we have created "buffers", which are essentially pointers to the
SIGMOID layers, so that we may manipulate them within the "derived" class. Since the
derived class was defined as "<float>", it expects all layers to be of type <float>,
but the layers we defined are of type SIGMOID. In cases like this we must define
buffers that link the SIGMOID layers to <float> layers.

What Insert does is connect in sequence: hidden1->hidden2->output. This code can
also be written in two different ways. (For more information view reference INSERT).

Right now the layer matrices do not have any values in them. You will most likely
want to put weights into these layers, so let us do that. Following the code you just
inserted, type the following lines:

//-------------------------------------------
hidden1->Setf( -2.0, 3.0, -2.0, 3.0 );
