                                 NNUGA
                  Neural Network Using Genetic Algorithms

                     By Omri Weisman and Ziv Pollack
                A final project in the course Robotics 95

--------------------------------------------------------------------------------

Using Genetic Algorithms in Computer Learning

    Essentially, Genetic Algorithms (GA) are a method of "breeding" computer
    programs and solutions to optimization or search problems by means of
    simulated evolution. Processes loosely based on natural selection,
    crossover, and mutation are repeatedly applied to a population of binary
    strings which represent potential solutions. Over time, the number of
    above-average individuals increases and fitter individuals are created,
    until a good solution to the problem at hand is found.

    In NNUGA, we use a GA to find the solution to a classification problem
    with a neural network (NN). The neural network is a structure which
    responds with True (1) or False (0) to a given input vector. We are
    trying to "teach" our neural network to correctly classify a set of
    input vectors, which can be thought of as learning a concept. We then
    expect that when the neural network is presented with a vector P not
    from this set, it will tend to generalize, responding with an output
    similar to the target outputs of the training vectors closest to the
    previously unseen input vector P.

    Our genetic algorithm works with not one but multiple populations, all
    of which evolve separately most of the time, except that once every
    several generations we allow the different populations to mix with
    each other.

The Neural Network

    Our neural network has 2 inputs and 1 output. It is constructed from 6
    neurons in the first layer and 1 neuron in a second layer.
    In the first layer we have 2 neurons with atan transfer functions:

                    atan( P * W + b )

    2 with linear transfer functions:

                    P * W + b

    and 2 with hardlim transfer functions:

                    P * W + b > 0

    where P is the input vector presented to the network, W is the neuron's
    vector of weights and b is its bias.

    The output function of the neuron in the second layer is:

                    A * W + b > 0

    where A is the vector of outputs of the first-layer neurons, W is the
    vector of weights and b is the bias. The output of this neuron is also
    the output of the network.

The Learning Method

    As stated earlier, the neural network learns using genetic algorithms.
    This means that the weights and biases of all the neurons are joined
    to create a single vector. A certain set of such vectors constitutes
    correct solutions to the classification problem at hand, and we hope
    to find one of them using an evolutionary algorithm.

    The Canonical GA (pseudo code):

            choose initial population
            evaluate each individual's fitness
            repeat
                    select individuals to reproduce
                    mate pairs at random
                    apply crossover operator
                    apply mutation operator
                    evaluate each individual's fitness
            until terminating condition

    The learning loop terminates either when a satisfactory solution is
    found, or when the number of generations passes a preset limit,
    suggesting that a complete solution will not be found with this set of
    individuals.

    In our case, we have several separate populations, all of which evolve
    separately.
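The canonical loop above can be sketched in C. This is only an illustrative stand-in, not the code in nnuga.c: the sizes (`GENES`, `POP`), the mutation rate, the tournament selection, and the toy `evaluate` function (closeness to an all-ones vector instead of classification accuracy on the training points) are all assumptions made for the sake of a self-contained example.

```c
#include <stdlib.h>

#define GENES    8      /* length of the joined weight/bias vector (toy size) */
#define POP      40     /* individuals per population (assumed) */
#define MUT_RATE 0.05   /* per-gene mutation probability (assumed) */

typedef struct { double gene[GENES]; double fitness; } Individual;

static double frand(void) { return (double)rand() / RAND_MAX; }

/* Toy fitness stand-in: closeness to an all-ones vector.  nnuga.c
   instead scores how well the encoded network classifies the
   training points.  Higher is fitter; 0 is optimal here. */
static double evaluate(const double *g)
{
    double err = 0.0;
    for (int i = 0; i < GENES; i++)
        err += (g[i] - 1.0) * (g[i] - 1.0);
    return -err;
}

/* Single-point crossover: prefix from parent a, suffix from parent b. */
static void crossover(const Individual *a, const Individual *b, Individual *c)
{
    int cut = rand() % GENES;
    for (int i = 0; i < GENES; i++)
        c->gene[i] = (i < cut) ? a->gene[i] : b->gene[i];
}

/* Mutation: occasionally perturb a gene by a small random amount. */
static void mutate(Individual *ind)
{
    for (int i = 0; i < GENES; i++)
        if (frand() < MUT_RATE)
            ind->gene[i] += frand() - 0.5;
}

/* Tournament selection: the fitter of two random individuals. */
static const Individual *pick(const Individual *pop)
{
    const Individual *a = &pop[rand() % POP], *b = &pop[rand() % POP];
    return a->fitness > b->fitness ? a : b;
}

/* Run the canonical loop for a fixed number of generations and
   return the best fitness seen. */
double run_ga(int generations)
{
    Individual pop[POP], next[POP];
    for (int i = 0; i < POP; i++) {
        for (int j = 0; j < GENES; j++)
            pop[i].gene[j] = 4.0 * frand() - 2.0;   /* random start */
        pop[i].fitness = evaluate(pop[i].gene);
    }
    double best = pop[0].fitness;
    for (int g = 0; g < generations; g++) {
        for (int i = 0; i < POP; i++) {
            crossover(pick(pop), pick(pop), &next[i]);
            mutate(&next[i]);
            next[i].fitness = evaluate(next[i].gene);
            if (next[i].fitness > best) best = next[i].fitness;
        }
        for (int i = 0; i < POP; i++) pop[i] = next[i];
    }
    return best;
}
```

In NNUGA each of the separate populations runs a loop like this one; the inter-population cross-over mentioned below is the only extra ingredient.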
    Once every several generations we allow cross-over between different
    populations.

Limitations

    It can happen that although the NN could theoretically solve a certain
    classification problem, our system does not return a correct solution.
    This is because of the random nature of the algorithm and its reliance
    on natural selection, mutation and cross-over: a particular sequence
    of events that would lead to a correct solution may simply never
    occur, and thus a solution will not be found. However, by using
    several unrelated populations we have decreased the probability of
    this happening, since if one population has poor individuals the
    solution may still be found in another.

Our Implementation

    We implemented a network with 2 inputs. The input for the neural
    network can be taken from a graphical user interface, by clicking on
    points on a board. A click with the left mouse button generates a '+'
    sign on the board, marking a point where the NN should respond with
    'True'. A click with the right mouse button generates a '-' sign,
    marking a point where the NN should respond with 'False'. When enough
    points have been entered, the user can click on 'Start', which
    introduces these points as inputs to the NN, has it learn them, and
    shows a group of lines corresponding to the division of the plane
    into regions of opposite network response.
    While the learning takes place, a textual indication of the learning
    process is printed on the standard output: the fitness of the best
    individual in each population at each generation, and a schematic
    textual division of the plane once every 50 generations, to allow the
    user to inspect the progress.

Source Code

    To run the program, save all these files in a directory, type 'make',
    and use 'nnuga' to start the system. The program should run on any
    platform which supports Tk, though we tested it on UNIX SunOS only.

       o  nnuga    - The Tk interface (8K).
       o  nnuga.c  - The actual Neural Network, C code (18K).
       o  readme   - The readme file (very much like this file) (6K).
       o  makefile - The (trivial) makefile (<1K).

    Note: This code is free and is in the public domain. You may copy,
    use, and change it as much as you like. However, the writers take no
    responsibility for the results of using this program.

--------------------------------------------------------------------------------
Last updated August 13th 1995
Created by Omri Weisman
