If you have classified the 500 points into 4 categories, the summation layer is
4 in length. All nodes in the pattern layer fire only to the nodes in the summation
layer to which they belong. pnn.cpp shows a simple example of the implementation
of a PNN. Here is what you need to know:
PNNs use only a fire function, and do not backpropagate.
They can be defined in the following manner:
PNN *hidden1 = new PNN();
hidden1->define( ... );
The summation layer has one more crucial property, called a binary
number. This number represents the nodes the layer receives input from. For example,
in a 3-2 PNN, the summation layer (size 2) might have a binary number like this:
1 0 1
0 1 0
This means that the first node in the summation layer receives input from the first
and last nodes in the pattern layer. The second node in the summation layer receives
input from the second node in the pattern layer. This is also useful for creating
partially connected networks, because this binary number can be implemented for any
layer in a network. So how do you define this number?
LAYER_NAME->SetBinary( ... );
For the 3-2 PNN example above, the network would be set up like this:
PNN *pattern = new PNN();
pattern->define( 2,3 );
PNN *summation = new PNN();
summation->define( 3,2 );
summation->SetBinary( 1, 0, 1,
0, 1, 0 );
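Since the binary number can be set on any layer, the same call can make an ordinary
layer partially connected. Here is a minimal sketch, assuming a SIGMOID layer accepts
the generic two-integer define and the same SetBinary call (neither is shown for
SIGMOID in this manual, so treat both as assumptions):
SIGMOID<float> *partial = new SIGMOID<float>();
partial->define( 3, 2 );
partial->SetBinary( 1, 0, 1,   // first node listens only to inputs 1 and 3
                    0, 1, 0 ); // second node listens only to input 2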
Okay, so we have PNNs under our belt. The next section, 5.5, describes a new
feature implemented in version 1.6 of the nn-utility library.
5.5 Defining your own Network Functions
-------------------------------------------------
If you have gotten this far, you should have a good
understanding of the nn-utility architecture and
functionality. Defining your own network functions
is quite simple: derive a layer class, much as you
would derive an nn_utility_functions class for
training, except with layers this time:
class MY_LAYER : public layer<float>{
public:
    float function( VECTOR input, VECTOR weights, double bias, int length,
                    bool output );
    void UpdateLayer( VECTOR target, VECTOR &new_target, int length,
                      float learning_rate, VECTOR &biases, VECTOR input, MATRIX &weights,
                      VECTOR result, int rows, int cols, bool output );
};
Finally, the code can be inserted by creating the functions like this:
float MY_LAYER::function( VECTOR input, VECTOR weights, double bias, int length, bool output ){
//.... insert your code here....
return ....;
}
void MY_LAYER::UpdateLayer( VECTOR target, VECTOR &new_target, int length, float learning_rate,
VECTOR &biases, VECTOR input, MATRIX &weights, VECTOR result, int rows, int cols, bool output ){
// .... insert your code here....
}
If you want a layer to use these changes, be sure to define the
layer using MY_LAYER instead of layer:
MY_LAYER *hidden = new MY_LAYER();
hidden->define( 2,3 );
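As a rough illustration, a hypothetical MY_LAYER fire function implementing a
simple linear threshold might look like this (the body is only a sketch, and it
assumes VECTOR elements can be indexed with []):
float MY_LAYER::function( VECTOR input, VECTOR weights, double bias, int length, bool output ){
    double sum = bias;                   // start from the bias
    for( int i = 0; i < length; i++ )
        sum += input[i]*weights[i];      // weighted sum of the inputs
    return ( sum > 0.0 ) ? 1.0f : 0.0f;  // fire 1 if the signal is strong enough
}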
That's all there is to it. If you have questions, post a support request at
http://sourceforge.net/projects/nn-utility
or email
panthomakos@users.sourceforge.net
5.6 Annealing Networks
---------------------------------------------------
Simulated annealing networks are usually used to find a function
minimum. For instance, given a function f(x), where x is a vector of length 2,
find a local minimum of f(x). If f(x) is defined as a network, then the minimum
of f(x) can be found through an annealing network. nn-utility now implements
Binomial Simulated Annealing Networks, meaning that local minima of
binomial functions can be found.
How? Here are the basic things you need to know:
When training an annealing network you do not adjust the network weights,
because the weights define the function. Instead you adjust the input, because
the input determines where the local minimum lies.
So here is an example:
Given a function that can be defined by a network:
WIDROW_HOFF<float> *hidden = new WIDROW_HOFF<float>();
hidden->define( 2, 2, 2.0, 2.0, -1.0, -1.0 );
the minima of this function can be derived in the following manner (or something
of this sort):
VECTOR IN;
derived.LoadVector( IN, 2, 0, 1 ); //create a starting input that will be changed later on
hidden->set_annealing( IN, 2, 1.0, 0.8, 2 );
//create an annealing network that starts training at input value IN, of length 2,
//starts at a temperature (range) value of 1.0, and multiplies the temperature by 0.8
//every 2 times the input has changed.
layer<float> *ppHidden = hidden;
train( &ppHidden, 1000, 0.1, true );
//train the annealing network
Take another look at the annealing function in the reference section.
This example can be found in examples/annealing.cpp.
5.7 Optimizing your code
---------------------------------------------------
Here are some quick tips on how to optimize your code.
NN_UTIL_SIZE is the maximum number of nodes in a
layer. If you are only using 10 nodes, for instance,
change the definition of NN_UTIL_SIZE in "nn-utility.h"
to 10, then recompile the library.
RECURRENT refers to the maximum number of inputs
or layers you can have in a recurrent network. If you
need more connections, change this number's definition
(found in nn-utility.h) to the maximum number you
need, and recompile the library.
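For example, if both values are plain preprocessor definitions in your copy of
"nn-utility.h" (an assumption; check the header before editing), the edits are just:
#define NN_UTIL_SIZE 10  // largest layer you actually use
#define RECURRENT    4   // most recurrent inputs/layers you need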
6.0 Reference: The Functions and Classes
-------------------------------------------------
The reference is alphabetized. The format is quite simple:
KEYWORD <TYPE> : explanation
Where KEYWORD is the word you are looking for, as
in an index, and where TYPE is
<v> = variable
<f> = function
<t> = typedef/constant
<i> = idea/concept
<c> = class
When you encounter a <T>, it means the item is defined as a template.
In other words, replace <T> with int, float, or double.
(see: ... ) is a series of suggestions of other topics you
may be interested in if you found what you were looking for
in the current keyword.
add <f> : Used in recurrent networks to add a layer to a parent layer.
USE: PARENT_LAYER->add( &CHILD_LAYER );
(see: RECURRENT, layer)
AddVectors <f> : Adds two vectors together by adding each element to its corresponding element.
USE: nn_utility_functions<T>::AddVectors( DESTINATION, SOURCE, SOURCE, LENGTH );
integer: LENGTH
(see: VECTOR, nn_utility_functions)
annealing <f, i> : annealing is actually a function; see ANNEALING for information on the
concept of simulated annealing neural networks. annealing is usually called by the
train function. It performs one iteration on a simulated annealing network.
USE: LAYER->annealing( INPUT_VECTOR, INPUT_LENGTH, OUTPUT_VECTOR, OUTPUT_LENGTH );
integer: INPUT_LENGTH, OUTPUT_LENGTH
(see: annealing, annealing_*, layer)
ANNEALING <i> : There is no ANNEALING class. Annealing networks are any kind of network that
attempts to find a function minimum through the use of simulated annealing.
The concept of simulated annealing is finding a global minimum over a range by
randomly selecting points and remembering the lowest point. The range is then
gradually reduced until a minimum is found. The advantage of this approach
is that it allows the network to become worse for a short amount of time,
hence allowing it to try other possibilities.
(see: annealing, annealing_*, layer)
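The accept-worse-for-a-while rule described above can be sketched outside the
library. Assuming f is a one-dimensional function being minimized (all names
here are illustrative, not library internals):
#include <cstdlib>
#include <cmath>
double anneal_step( double (*f)(double), double &x, double temperature ){
    // pick a random neighbor within the current temperature range
    double candidate = x + temperature*( 2.0*rand()/RAND_MAX - 1.0 );
    double energy = f( candidate ) - f( x );  // change in energy (annealing_energy)
    // always accept an improvement; accept a worse point with
    // probability exp( -energy/temperature ) (annealing_probability)
    if( energy < 0.0 || exp( -energy/temperature ) > (double)rand()/RAND_MAX )
        x = candidate;
    return f( x );
}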
annealing_alpha <v> : The variable which determines how the annealing temperature is updated:
(annealing_temperature) = (annealing_alpha)*(annealing_temperature)
(see: annealing, annealing_*)
annealing_energy <v> : The change in energy between the last output and the current output of an
annealing network at any given time iteration.
(see: annealing, annealing_*)
annealing_input <v> : The current input being presented to an annealing network.
(see: annealing, annealing_*)
annealing_length <v> : The length of the input being presented to the annealing network.
(see: annealing, annealing_*)
annealing_M <v> : Interval of iterations at which to adjust the temperature (see annealing_alpha).
(see: annealing, annealing_*)
annealing_on <v> : Boolean value of whether or not simulated annealing is activated.
(see: annealing, annealing_*)
annealing_probability <v> : exp(-(annealing_energy)/(annealing_temperature)) at a time iteration.
(see: annealing, annealing_*)
annealing_result <v> : Result of a feed forward sweep on an annealing network.
(see: annealing, annealing_*)
annealing_temperature <v> : Range of the annealing function to check.
(see: annealing, annealing_*)
annealing_update <v> : Counter to the number of iterations before annealing_M is reached.
(see: annealing, annealing_*)
BackPropagate <f> : Performs a back propagation sweep on any layer. It is also used by the
train function during each iteration in a back propagation network.
USE: LAYER_NAME->BackPropagate( INPUT_VECTOR, TARGET_VECTOR, LEARNING_RATE, INPUT_LENGTH );
T: LEARNING_RATE
integer: INPUT_LENGTH
(see: FeedForward, layer)
binary_fire <v> : Boolean: Whether or not to perform a partially connected network modification on
the current feed forward sweep.
(see: PNN, binary_number)
binary_number <v> : Bit string defining nodes in a layer which will receive input from specific nodes
in the preceding layer.
(see: PNN, binary_fire)
BINOMIAL <c, i> : The binomial function outputs 0 or 1, depending on whether the
signal is strong enough. The BINOMIAL class implements this network function.
It does not, however, include a back propagation algorithm.
(see: function, WIDROW_HOFF)
BITMAP <c> : Class used to read text-defined binary grids. It is generally used to train
neural networks in pattern recognition of images, numbers, and letters that
are defined using bitmaps.
(see: filename, readbitmap, noise, Print)
buffers <i>: These are used to perform function operations that are otherwise not possible.
For instance, to pass a SIGMOID function to DisplayLAYER, we must create
a buffer, because DisplayLAYER belongs to nn_utility_functions<T>:
layer<T> *ppLAYER = LAYER;
nn_utility_functions<T>::DisplayLAYER( &ppLAYER );
(see: layer, SIGMOID, nn_utility_functions, DisplayLAYER)
CheckTrain <f> : Defined by the user to determine if a network should be trained on a given
iteration. The return type is a boolean value for whether or not to train.
There are two formats, one for normal networks and one for recurrent networks.
USE: nn_utility_functions<T>::
CheckTrain( VECTOR_OUTPUT, VECTOR_TARGET, INTEGER_LENGTH );
CheckTrain( MATRIX_OUTPUT, MATRIX_TARGET, FINAL_LENGTH, INPUT_LENGTH );
integer: FINAL_LENGTH, INPUT_LENGTH
(the MATRIX version is for recurrent networks)
(see: GetInput, nn_utility_functions)
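As a sketch, a user-defined CheckTrain in a hypothetical MY_FUNCTIONS class
derived from nn_utility_functions<float> might keep training only while some
output misses its target by more than a tolerance (requires <cmath> for fabs,
and assumes VECTOR supports []; the 0.1 tolerance is illustrative):
bool MY_FUNCTIONS::CheckTrain( VECTOR output, VECTOR target, int length ){
    for( int i = 0; i < length; i++ )
        if( fabs( output[i] - target[i] ) > 0.1 )  // still off by more than the tolerance
            return true;                           // keep training
    return false;                                  // all outputs close enough
}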
ClearVector <f> : Clears a vector by setting every element to 0, or sets every element to
any other value you select.
USE: nn_utility_functions<T>::
ClearVector( VECTOR_NAME, INTEGER_LENGTH );
ClearVector( VECTOR_NAME, INTEGER_LENGTH, T_VALUE_TOSET );
(see: VECTOR, nn_utility_functions)
col <v> : Variable found in the layer class. This variable stores the number of
columns in the layer matrix.
(see: row, layer)
CopyVector <f> : Copies the contents of one vector into another.
USE: nn_utility_functions<T>::
CopyVector( VECTOR_DESTINATION, VECTOR_SOURCE, INTEGER_LENGTH );
(see: VECTOR, nn_utility_functions)
CreateVector <f> : Used by the layer class to create a vector from a matrix column.
USE: LAYER_NAME->CreateVector( VECTOR_DESTINATION, INTEGER_COLUMN_NUMBER );
(see: VECTOR)
define <f> : Used to define a layer's size, or to read a layer from a topology file.
USE: LAYER_NAME->define( INTEGER_ROWS, INTEGER_COLUMNS );
LAYER_NAME->define( STRING_FILENAME, INTEGER_IDENTIFICATION );
LAYER_NAME->define( STRING_FILENAME, INTEGER_IDENTIFICATION, LAYER_PREVIOUS );
INTEGER_IDENTIFICATION is the number of the layer in the topology file. If there
are N layers in the topology file, INTEGER_IDENTIFICATION = 0...(N-1)
(see: definef, definei, topology, layer)