definef <f> : Used to define a layer's size and floating-point elements.
USE: LAYER_NAME->definef( INTEGER_ROW, INTEGER_COLUMNS, VALUES, ... );
VALUES, ... is a series of values of type T that fill a ROWxCOLUMNS matrix.
All the values must be of float or double type.
(see: define, definei, layer)
definei <f> : Used to define a layer's size and integer elements.
USE: LAYER_NAME->definei( INTEGER_ROW, INTEGER_COLUMNS, VALUES, ... );
VALUES, ... is a series of values of type T that fill a ROWxCOLUMNS matrix.
All the values must be of integer type.
(see: define, definef, layer)
define_recurrent <f> : Used to define a layer as a parent "handle" layer of a recurrent network.
USE: LAYER_NAME->define_recurrent();
(see: layer, RECURRENT)
DisplayLAYER <f> : Used to screen dump the values in a layer matrix in a pleasing format.
USE: nn_utility_functions<T>::DisplayLAYER( &LAYER_FLOAT );
You must use a buffer to perform this operation, for instance:
layer<float> *ppLAYER_FLOAT = LAYER_FLOAT;
nn_utility_functions<T>::DisplayLAYER( &ppLAYER_FLOAT );
(only needed if LAYER_FLOAT has not already been declared as a layer<float> pointer).
(see: layer, buffers, nn_utility_functions)
FeedForward <f> : Used to perform a feed forward sweep on a layer and any layers connected
after it. The recurrent version works only on networks that have been
defined as recurrent networks.
USE: LAYER_NAME->FeedForward( VECTOR_INPUT, VECTOR_FINAL );
LAYER_NAME->FeedForward( MATRIX_INPUT, MATRIX_FINAL, INTEGER_INPUTS );
VECTOR_FINAL will be replaced with the result of the feed forward sweep.
MATRIX_FINAL will be replaced with the result of the feed forward sweep.
(see: BackPropagate, layer, define_recurrent)
FeedForward_recurrent <f> : Performs a feed forward sweep on a recurrent network, however this
function is automatically called if a layer has been defined as a recurrent
layer.
USE: LAYER_NAME->FeedForward_recurrent( MATRIX_INPUT, MATRIX_FINAL, INTEGER_INPUTS );
MATRIX_FINAL will be replaced with the result of the feed forward sweep.
(see: BackPropagate_recurrent, layer, define_recurrent)
filename <v> : Used in BITMAP to store the name of the file that contains the bitmap definitions.
(see: BITMAP)
FILE_NAME_SIZE <t> : Defined in nn-utility.h. Is the maximum number of characters "filename" can be.
(see: filename, BITMAP)
function <f> : This is the virtual function of the layer class that performs the network
function. It can be overloaded, and is overloaded in any supported network
type, for instance SIGMOID overloads this function with a sigmoid function.
If you overload a layer and create your own network function, this is the
function you must overload and create a new body for. The return type is of
type T.
USE: LAYER_NAME->function( VECTOR_INPUT, VECTOR_WEIGHT, T_BIAS,
INTEGER_LENGTH, BOOLEAN_OUTPUT );
VECTOR_WEIGHT is the weight vector of the individual node.
VECTOR_INPUT is the vector of inputs being passed to the individual node.
T_BIAS is of type T and is the bias weight of the node.
BOOLEAN_OUTPUT is true if the layer is an output layer.
(see: UpdateLayer, layer, FeedForward)
GetInput <f> : Used to train a network. To use it, derive a class from nn_utility_functions
and create a body for GetInput. Then, when the class is used to train a network,
the input is received from this function. It can be used with MATRICES for
recurrent networks, or VECTORS for non-recurrent networks.
USE: nn_utility_functions<T>::
GetInput( INTEGER_ITERATION, VECTOR_SEND, VECTOR_TARGET );
GetInput( INTEGER_ITERATION, MATRIX_SEND, MATRIX_TARGET, INTEGER_INPUTS );
SEND is the MATRIX or VECTOR to send to the network.
ITERATION is the iteration of training the network is at.
TARGET is the target for the SEND that you set.
INTEGER_INPUTS is the number of inputs in MATRIX_SEND.
If you define this function body, you do not set INTEGER_ITERATION, but rather
use INTEGER_ITERATION to determine what pattern you want to present the network
with. This way you can present different patterns at different iterations.
(see: CheckTrain, RECURRENT, nn_utility_functions)
history <v> : A MATRIX used in recurrent networks to hold the history of feed forward sweeps
through different time steps of the training of the network.
(see: RECURRENT, layer)
Hopefield <f, i> : The idea behind a Hopefield network is to store a pattern and recall it when
cued. It is auto-associative and should be able to recall a pattern even if it
is noisy. To read more about this, see HOPEFIELD. The Hopefield function is
a member of the HOPEFIELD class; it switches a bitmap's values
from 0's and 1's to -1's and 1's, so as to make the bitmap compatible with
Hopefield networks.
USE: HOPEFIELD_OR_SGN_LAYER->Hopefield( VECTOR_BITMAP, INTEGER_LENGTH );
(see: HOPEFIELD, layer, BITMAP)
HOPEFIELD <c, i> : See Hopefield for the <i>. The HOPEFIELD class allows for the creation of an
auto-associative Hopefield network, however without it's feed forward function.
To implement a function for a Hopefield network, you must either use the
SGN class, or derive a HOPEFIELD class and create your own function.
(see: Hopefield, WIDROW_HOFF, layer)
HopefieldAdd <f> : Used in the HOPEFIELD class to update the layer matrix by adding two matrices. This
function has become almost obsolete, however it remains in case any user may need
to access such a feature.
USE: HOPEFIELD_OR_SGN_LAYER->HopefieldAdd( &LAYER_SOURCE, &LAYER_SOURCE );
(see: HOPEFIELD, layer)
HopefieldAddTo <f> : Used in the HOPEFIELD class to add an auto-associative pattern to the Hopefield
network. Takes the two vectors that are passed, multiplies them, and adds them
to the layer matrix.
USE: HOPEFIELD_OR_SGN_LAYER->HopefieldAddTo( VECTOR_SOURCE1, VECTOR_SOURCE2 );
VECTOR_SOURCE1 is the actual pattern, VECTOR_SOURCE2 is conventionally the
auto-associative pattern for VECTOR_SOURCE1.
(see: HOPEFIELD, layer)
HopefieldMultiply <f> : Used in the HOPEFIELD class to multiply two vectors and store the result in the layer's
matrix. When creating a Hopefield network, this function is used to pass the
first auto-associative-pattern vector pair (see HopefieldAddTo), because it
replaces the entire matrix.
USE: HOPEFIELD_OR_SGN_LAYER->HopefieldMultiply( VECTOR_SOURCE1, VECTOR_SOURCE2 );
(see HopefieldAddTo for descriptions of VECTOR_SOURCE1 and VECTOR_SOURCE2).
When used in conjunction with HopefieldAddTo:
LAYER->HopefieldMultiply( VECTOR_A, ASSOCIATIVE_VECTOR_A );
LAYER->HopefieldAddTo( VECTOR_B, ASSOCIATIVE_VECTOR_B );
LAYER->HopefieldAddTo( VECTOR_C, ASSOCIATIVE_VECTOR_C );
(see: HOPEFIELD, layer, HopefieldAddTo )
Insert <f> : Adds a layer to the end of a multi-layer "list". Layers are connected in a
linked list format, so that once you use this command, a new layer is added
to the end of the linked list of a multi-layer network.
USE: nn_utility_functions<T>::
Insert( &ppLAYER_ALREADY_NETWORK, &ppLAYER_TO_ADD );
Notice that you must use buffers to pass the layers.
(see: buffers, InsertAfter, layer, nn_utility_functions)
InsertAfter <f> : Adds a layer right after another layer in a multi-layer network.
USE: nn_utility_functions<T>::
InsertAfter( &ppLAYER_FIRST, &ppLAYER_SECOND );
Notice that you must use buffers to pass the layers.
(see: buffers, Insert, layer, nn_utility_functions)
KOHEN <c, i> : A KOHEN network is usually self-organizing, however the KOHEN class allows you
to create a non-self-organizing KOHEN network. KOHEN networks learn by calculating
the closest distance between an input and a neural unit. In other words, the unit
that is closest to the input will be updated to "remember" that input pattern. In
KOHEN networks this is done with supervision.
(see: KOHEN_SOFM, layer)
KOHEN_SOFM <c, i> : A KOHEN self-organizing feature map is created using the KOHEN_SOFM class. The idea
is that the network finds the node or nodes within a radius that are closest to the
input pattern. It then updates these individual nodes to be even closer to the
input. The final idea is that after training the network will find its own
classifying pattern for the inputs it has been presented.
(see: KOHEN, layer)
layer <c, i> : A layer is what multi-layer networks are made up of. They can act on their own
as well. They can perform feed forward and back propagation sweeps. In
nn-utility layer is one of the two most used classes, simply because you
can't create a network without it. It has been derived into supported types as
well: (SIGMOID, KOHEN, KOHEN_SOFM, WIDROW_HOFF, HOPEFIELD, SGN, PNN, and
RADIAL_BASIS). These types each define their own "function" and "UpdateLayer".
(see: matrix, Predefined Functions, layers, RECURRENT, nn_utility_functions,
FeedForward, BackPropagate, function, UpdateLayer)
layers[RECURRENT] <v> : Is used within a layer class to define a recurrent network.
(see: RECURRENT, layer, add, define_recurrent)
length <v> : Used in a recurrent network. "length" holds the length of the recurrent network.
(see: RECURRENT, add)
LoadVectorf <f> : Loads a vector with a series of float or double values.
USE: nn_utility_functions<T>::
LoadVectorf( VECTOR_TO_LOAD, SIZE_OF_ARGUMENTS, ARGUMENTS, ... );
ARGUMENTS is a list of type T (floating or double) arguments that should
be loaded into VECTOR_TO_LOAD. SIZE_OF_ARGUMENTS is an integer with the
number of arguments that are passed.
(see: LoadVectori, VECTOR, CopyVector, ClearVector)
LoadVectori <f> : Loads a vector with a series of integer values.
USE: nn_utility_functions<T>::
LoadVectori( VECTOR_TO_LOAD, SIZE_OF_ARGUMENTS, ARGUMENTS, ... );
ARGUMENTS is a list of type T (integer) arguments that should be loaded
into VECTOR_TO_LOAD. SIZE_OF_ARGUMENTS is an integer with the number of
arguments that are passed.
(see: LoadVectorf, VECTOR, CopyVector, ClearVector)
matrix <v> : The MATRIX inside layer that holds the weight values of the layer.
(see: MATRIX, layer)
MATRIX <t> : typedef T MATRIX[NN_UTIL_SIZE][NN_UTIL_SIZE];
Is used in any transaction that defines or uses a matrix.
(see: matrix, layer, NN_UTIL_SIZE)
Merge <f> : Merges two vectors together to form an individual vector. This function is
used in recurrent networks when two inputs have to be merged and presented
to an individual node simultaneously.
USE: LAYER_NAME->Merge( DESTINATION, SOURCE1, LENGTH1, SOURCE2, LENGTH2 );
DESTINATION, SOURCE1, and SOURCE2 are VECTORs
LENGTH1, and LENGTH2 are integers for the lengths of SOURCE1 and SOURCE2 respectively.
(see: VECTOR, RECURRENT, layer)
network_recurrent <v> : Boolean value that determines whether or not a network should act as a recurrent
network.
(see: layer, define_recurrent, RECURRENT)
Next <v> : Pointer to a class "layer" that defines the next layer in a multi-layer network.
(see: Previous, layer)
NULL <t> : Constant defined in iostream.
(see: NULL_VECTOR)
NULL_VECTOR <t> : Constant defined for recurrent networks when half of the input to a node
is of NULL type. This is the NULL equivalent in VECTOR form.
(see: NULL, VECTOR, layer, RECURRENT)
nn_utility_functions <c> : One of the two largest classes in nn-utility. This class contains the
training and vector handling functions. It inserts and adds layers. It takes
care of a lot of work that does not need to be repeated in the layer class.
(see: VECTOR, MATRIX, layer)
NN_UTIL_SIZE <t> : The maximum size of a VECTOR and MATRIX row/column. Is defined in nn-utility.h.
Can be adjusted for more efficiency, but doing so requires re-compiling the library.
(see: VECTOR, MATRIX)
noise <f> : Adds noise/static to a bitmap.
USE: BITMAP_HANDLER->noise( VECTOR_BITMAP, INTEGER_LENGTH, TYPE, AMPLITUDE );
BITMAP_HANDLER->noise( VECTOR_BITMAP, INTEGER_LENGTH, AMPLITUDE );
The lower the AMPLITUDE, the greater the chance of more noise. AMPLITUDE ranges
from 0 to any integer number. TYPE is used to define what kind of noise to add.
TYPE = 0 means only add noise next to 1's.
TYPE = 1 means add noise everywhere.
The second form of noise simply changes random characters to 1.
(see: BITMAP)
PNN <c, i> : PNN is a class for handling Probabilistic Neural Networks. Probabilistic Neural
Nets are usually used to determine whether a point belongs to one group or another.
The first layer receives input from the dimensions of the point. Each node in
the first layer is a point of the preset-data (the data that defines the groups).
The second layer defines the groups, and each point in the first layer
that belongs to a group in the second layer is connected only to the group it
belongs to. View the MANUAL section on Probabilistic Neural Networks for more
information on how to partially connect layers.
(see: SetBinary, layer)
Predefined Functions <i> : There are a series of derived layer classes that allow you to perform certain
neural network functions. See the following for more information on each class and
network type: annealing, BINOMIAL, SIGMOID, KOHEN, KOHEN_SOFM, RECURRENT,
RADIAL_BASIS, SGN, PNN, HOPEFIELD, and WIDROW_HOFF.
Previous <v> : Pointer in class "layer" that points to the previous layer in a multi-layer network.
(see: Next, layer)
Print <f> : Print is found in two instances. The first is for the class layer, and the second
is for the BITMAP class. In the layer class the function prints the layer in a
pleasing format. In the BITMAP class, the function prints the bitmap as it was
read from the bitmap file.
USE: LAYER_NAME->Print();
BITMAP_HANDLER_NAME->Print( VECTOR_BITMAP, INTEGER_LINE, INTEGER_LENGTH );
INTEGER_LINE is the length of an individual line in the bitmap.
INTEGER_LENGTH is the length of the entire bitmap.
(see: BITMAP, layer)
PrintMatrix <f> : Uses a screen dump to print the values in a matrix in a pleasing format.
USE: nn_utility_functions<T>::
PrintMatrix( STRING, MATRIX_TO_PRINT, INTEGER_ROWS, INTEGER_COLUMNS );
PrintMatrix( MATRIX_TO_PRINT, INTEGER_ROWS, INTEGER_COLUMNS );
STRING is used if you want to precede the printing of a matrix with some
kind of word or phrase.
(see: matrix, MATRIX, nn_utility_functions)
PrintVector <f> : Uses a screen dump to print the values in a vector in a pleasing format.
USE: nn_utility_functions<T>::
PrintVector( STRING, VECTOR_TO_PRINT, INTEGER_LENGTH );
PrintVector( VECTOR_TO_PRINT, INTEGER_LENGTH );
STRING is used if you want to precede the printing of the vector with some kind
of a word or phrase.
(see: VECTOR, PrintVectorExact, nn_utility_functions)
PrintVectorExact <f> : Performs the exact same function as PrintVector, but prints the values with greater precision.
USE: nn_utility_functions<T>::
PrintVectorExact( VECTOR_TO_PRINT, INTEGER_LENGTH );
(see: VECTOR, PrintVector, nn_utility_functions)
RADIAL_BASIS <c, i> : RADIAL_BASIS is a derived layer class for radial basis networks. Radial basis
networks are based on three layers. The idea is that a non-linear problem can
be cast into a high-dimensional space and most likely be solved. The first mapping
is non-linear and the second is linear. Usually the number of hidden units