...methods by Bernd Fritzke, which is available in HTML [3] and in PostScript format [4].

[3] http://www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research/gsn/JavaPaper
[4] ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/pub/software/NN/DemoGNG/sclm.ps.gz

5.4.1 LBG, LBG-U

  Number of Signals - The number of input signals (only discrete distributions).
  LBG-U             - Switches from LBG to LBG-U and back.

5.4.2 Hard Competitive Learning

  Variable  - Switches from a constant to a variable learning rate.
  epsilon   - This value (ε) determines the extent to which the winner is adapted
              towards the input signal (constant learning rate).
  epsilon_i - epsilon initial (ε_i).
  epsilon_f - epsilon final (ε_f).
  t_max     - The simulation ends if the number of input signals exceeds this value (t_max).

[Figure 8: Overview of the available probability distributions: UNI, UNIT, Rectangle, Circle, Ring, Small Spirals, Large Spirals, HiLo Density, Discrete, Move & Jump, Move, Jump]

The variable learning rate is determined according to

  ε(t) = ε_i (ε_f / ε_i)^(t / t_max).

5.4.3 Neural Gas

  lambda_i  - lambda initial (λ_i).
  lambda_f  - lambda final (λ_f).
  epsilon_i - epsilon initial (ε_i).
  epsilon_f - epsilon final (ε_f).
  t_max     - The simulation ends if the number of input signals exceeds this value (t_max).

The reference vectors are adjusted according to

  Δw_i = ε(t) · h_λ(k_i(ξ, A)) · (ξ − w_i)

with the following time dependencies:

  λ(t) = λ_i (λ_f / λ_i)^(t / t_max)
  ε(t) = ε_i (ε_f / ε_i)^(t / t_max)
  h_λ(k) = exp(−k / λ(t)).

5.4.4 Competitive Hebbian Learning

This implementation requires no model-specific parameters. In general one would have a maximum number of time steps (t_max).

5.4.5 Neural Gas with Competitive Hebbian Learning

  lambda_i  - lambda initial (λ_i).
  lambda_f  - lambda final (λ_f).
  epsilon_i - epsilon initial (ε_i).
  epsilon_f - epsilon final (ε_f).
  t_max     - The simulation ends if the number of input signals exceeds this value (t_max).
  edge_i    - Initial value for time-dependent edge aging (T_i).
  edge_f    - Final value for time-dependent edge aging (T_f).

Edges with an age larger than the maximal age T(t) are removed, whereby

  T(t) = T_i (T_f / T_i)^(t / t_max).

The reference vectors are adjusted according to

  Δw_i = ε(t) · h_λ(k_i(ξ, A)) · (ξ − w_i)

with the following time dependencies:

  λ(t) = λ_i (λ_f / λ_i)^(t / t_max)
  ε(t) = ε_i (ε_f / ε_i)^(t / t_max)
  h_λ(k) = exp(−k / λ(t)).

(A code sketch of this adaptation rule is given after Section 5.4.6 below.)

5.4.6 Growing Neural Gas, Growing Neural Gas with Utility

  No new Nodes     - No new nodes will be inserted.
  Utility          - Switches from GNG to GNG-U and back. The value (k) determines the
                     deletion of the unit with the smallest utility U_i (i := arg min_c U_c)
                     if the utility value falls below a certain fraction of the error
                     variable E_q: k < E_q / U_i.
  Lambda           - If the number of input signals generated so far is an integer
                     multiple of this value (λ), insert a new node.
  max. Edge Age    - Remove edges with an age larger than this value (a_max). If this
                     results in nodes having no emanating edges, remove them as well.
  Epsilon winner   - Move the winner node towards the input signal by this fraction
                     of the total distance (ε_b).
  Epsilon neighbor - Move the neighbors of the winner node towards the input signal
                     by this fraction of the total distance (ε_n).
  alpha            - Decrease the error variables of the nodes neighboring the newly
                     inserted node by a fraction of this size (α).
  beta             - Decrease the error variables of all nodes by a fraction of this size (β).
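The adaptation rule used by Neural Gas (Section 5.4.3) and Neural Gas with Competitive Hebbian Learning (Section 5.4.5) can be sketched compactly in Java. This is only an illustration under assumptions, not code from the DemoGNG sources: the names (NeuralGasStep, adapt, rankByDistance) are invented, and the rank function k_i(ξ, A) is realized by sorting all reference vectors by their distance to the input signal ξ.

```java
import java.util.Arrays;
import java.util.Comparator;

// Hypothetical sketch of one Neural Gas adaptation step; not DemoGNG code.
public class NeuralGasStep {

    /** Moves every reference vector w[i] towards the input signal xi by
     *  eps * h_lambda(k_i), where k_i is the rank of w[i] in the distance
     *  ordering (0 = nearest reference vector). */
    static void adapt(double[][] w, double[] xi, double eps, double lambda) {
        Integer[] order = rankByDistance(w, xi);
        for (int rank = 0; rank < order.length; rank++) {
            double[] wi = w[order[rank]];
            double h = Math.exp(-rank / lambda);      // h_λ(k) = exp(−k/λ(t))
            for (int d = 0; d < wi.length; d++)
                wi[d] += eps * h * (xi[d] - wi[d]);   // Δw_i = ε(t)·h_λ(k_i)·(ξ − w_i)
        }
    }

    /** Node indices sorted by squared Euclidean distance to xi. */
    static Integer[] rankByDistance(double[][] w, double[] xi) {
        Integer[] idx = new Integer[w.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        Arrays.sort(idx, Comparator.comparingDouble((Integer a) -> sqDist(w[a], xi)));
        return idx;
    }

    static double sqDist(double[] a, double[] b) {
        double s = 0.0;
        for (int d = 0; d < a.length; d++) s += (a[d] - b[d]) * (a[d] - b[d]);
        return s;
    }
}
```

Because λ(t) decays over time, h_λ concentrates more and more on the winner, so the step gradually approaches pure hard competitive learning.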
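The two bookkeeping rules of Section 5.4.6 (insert a node every λ input signals; in GNG-U, delete the least useful node) can be sketched as well. This is a hedged reading, not the DemoGNG implementation: all names are invented, and E_q is taken here to be the largest accumulated error variable, which the parameter description does not spell out.

```java
// Hypothetical sketch of the GNG / GNG-U bookkeeping from Section 5.4.6.
class GngBookkeeping {
    double[] error;    // accumulated error E_c of each node
    double[] utility;  // utility U_c of each node (GNG-U only)

    /** A new node is inserted whenever the number of input signals
     *  generated so far is an integer multiple of lambda. */
    boolean shouldInsert(long signalCount, int lambda) {
        return signalCount % lambda == 0;
    }

    /** GNG-U deletion criterion k < E_q / U_i: returns the index of the
     *  node with the smallest utility if that utility has fallen below a
     *  fraction 1/k of the error variable E_q, otherwise -1. */
    int nodeToDelete(double k) {
        int i = argMin(utility);   // i := arg min_c U_c
        int q = argMax(error);     // assumption: q is the maximum-error node
        return (k < error[q] / utility[i]) ? i : -1;
    }

    static int argMin(double[] a) {
        int best = 0;
        for (int j = 1; j < a.length; j++) if (a[j] < a[best]) best = j;
        return best;
    }

    static int argMax(double[] a) {
        int best = 0;
        for (int j = 1; j < a.length; j++) if (a[j] > a[best]) best = j;
        return best;
    }
}
```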
5.4.7 Self-Organizing Map

  Grid size - Determines the form of the grid and the number of nodes.
  epsilon_i - epsilon initial (ε_i).
  epsilon_f - epsilon final (ε_f).
  sigma_i   - sigma initial (σ_i).
  sigma_f   - sigma final (σ_f).
  t_max     - The simulation ends if the number of input signals exceeds this value (t_max).

ε(t) and σ(t) are determined according to

  ε(t) = ε_i (ε_f / ε_i)^(t / t_max)
  σ(t) = σ_i (σ_f / σ_i)^(t / t_max).

5.4.8 Growing Grid

  No new Nodes - No new nodes will be inserted.
  lambda_g     - This parameter (λ_g) indicates how many adaptation steps on average
                 are done per node before new nodes are inserted (growth phase).
  lambda_f     - This parameter (λ_f) indicates how many adaptation steps on average
                 are done per node before new nodes are inserted (fine-tuning phase).
  epsilon_i    - epsilon initial (ε_i).
  epsilon_f    - epsilon final (ε_f).
  sigma        - This parameter (σ) determines the width of the bell-shaped
                 neighborhood interaction function.

In the fine-tuning phase the time-dependent learning rate ε(t) is determined according to

  ε(t) = ε_i (ε_f / ε_i)^(t / t_max).
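All time-dependent parameters in the models above (ε(t), λ(t), T(t), σ(t)) follow the same exponential interpolation between an initial and a final value. A minimal helper makes the shared pattern explicit; this is an illustrative sketch with invented names and example values, not part of DemoGNG.

```java
// g(t) = g_i · (g_f / g_i)^(t / t_max): the schedule shared by all models above.
final class Schedule {

    /** Interpolates exponentially from `initial` (at t = 0) to `fin` (at t = tMax). */
    static double decay(double initial, double fin, long t, long tMax) {
        return initial * Math.pow(fin / initial, (double) t / tMax);
    }

    public static void main(String[] args) {
        long tMax = 40000;                 // hypothetical simulation length
        for (long t = 0; t <= tMax; t += 10000)
            // epsilon decaying from 0.5 to 0.05 (example values only)
            System.out.printf("t=%5d  eps=%.4f%n", t, decay(0.5, 0.05, t, tMax));
    }
}
```

The same call computes λ(t), T(t), and σ(t) by swapping in the corresponding initial and final values.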
6 Wishlist

This is a list of projects for DemoGNG. Bug reports and requests which cannot be fixed or honored right away will be added to this list. If you have some time for Java hacking, you are encouraged to try to provide a solution to one of the following problems. It might be a good idea to send a mail first, though.

  * Change looping in the run procedure.
  * New method k-means (MacQueen, 1967).
  * Redesign of the user interface (JDK 1.2).
  * Supervised learning.
  * Tune the main Makefile.

Please send any comments or suggestions you might have, along with any bugs that you may encounter, to:

  Hartmut S. Loos
  Hartmut.Loos@neuroinformatik.ruhr-uni-bochum.de

References

Fritzke, B. (1994). Fast learning with incremental RBF networks. Neural Processing Letters, 1(1):2-5.

Fritzke, B. (1995a). A growing neural gas network learns topologies. In Tesauro, G., Touretzky, D. S., and Leen, T. K., editors, Advances in Neural Information Processing Systems 7, pages 625-632. MIT Press, Cambridge, MA.

Fritzke, B. (1995b). Growing grid - a self-organizing network with constant neighborhood range and adaptation strength. Neural Processing Letters, 2(5):9-13.

Fritzke, B. (1997a). A self-organizing network that can follow non-stationary distributions. In ICANN'97: International Conference on Artificial Neural Networks, pages 613-618. Springer.

Fritzke, B. (1997b). The LBG-U method for vector quantization - an improvement over LBG inspired from neural networks. Neural Processing Letters, 5(1).

Kohonen, T. (1982). Self-organized formation of topologically correct feature maps. Biological Cybernetics, 43:59-69.

Linde, Y., Buzo, A., and Gray, R. M. (1980). An algorithm for vector quantizer design. IEEE Transactions on Communication, COM-28:84-95.

MacQueen, J. (1967). Some methods for classification and analysis of multivariate observations. In LeCam, L. and Neyman, J., editors, Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pages 281-297, Berkeley. University of California Press.

Martinetz, T. M. (1993). Competitive Hebbian learning rule forms perfectly topology preserving maps. In ICANN'93: International Conference on Artificial Neural Networks, pages 427-434, Amsterdam. Springer.

Martinetz, T. M. and Schulten, K. J. (1991). A "neural-gas" network learns topologies. In Kohonen, T., Mäkisara, K., Simula, O., and Kangas, J., editors, Artificial Neural Networks, pages 397-402. North-Holland, Amsterdam.

Martinetz, T. M. and Schulten, K. J. (1994). Topology representing networks. Neural Networks, 7(3):507-522.

7 Change log

19.10.1998: Version 1.5 released. These are the most important changes from v1.0 to v1.5:
19.10.1998: Updated the manual.
17.07.1998: LBG: Marked nodes which haven't moved since the last update.
16.07.1998: GNG-U: Fixed a small bug regarding 'beta' for utility.
15.07.1998: Non-stationary probability distributions.
23.10.1997: Fixed: nodes without neighbors are not deleted in GNG-U (but in GNG).
13.10.1997: Trifles in GNG-U.
29.09.1997: Fixed a small bug in method LBG.
28.09.1997: New method LBG-U (Fritzke, 1997b).
27.09.1997: New method GNG-U (Fritzke, 1997a).
10.03.1997: Version 1.3 released.
09.03.1997: Proofread the manual.
08.03.1997: Changed the order of the models (model menu and manual).
07.03.1997: GNG: added two variables (alpha, beta).
06.03.1997: Tuned the code (20% faster).
05.03.1997: Cleaned up the code/deleted DEBUG statements.
04.03.1997: Updated the manual.
03.03.1997: Added a new button 'Random Init'.
02.03.1997: Made some improvements concerning LBG.
28.02.1997: Version 1.2 released.
26.02.1997: Version 1.1 skipped for PR reasons :-).
24.02.1997: Added description of the model-specific options to the manual.
24.02.1997: Improved the variable learning rate for Hard Competitive Learning.
21.02.1997: Changed eta to epsilon.
20.02.1997: First draft of the manual for v1.1.
18.02.1997: Updated the manual.
17.02.1997: Made some screen dumps for the manual.
30.10.1996: Some improvements and a bugfix concerning the error values.
28.10.1996: New method Self-Organizing Map (SOM).
23.10.1996: Redesigned the user interface. I have made minor changes only. The next major version 2.0 will have a completely new interface.
22.10.1996: Added a close button to the error graph.
21.10.1996: In the method Growing Grid a percentage counter appears in the fine-tuning phase. At 100% the calculation stops.
17.10.1996: Added a new button 'White'. It switches the background of the drawing area to white. This is useful for making hardcopies of the screen.
17.10.1996: New distribution UNIT.
16.10.1996: Fixed a bug: Now the Voronoi diagram/Delaunay triangulation is also available for Growing Grid.
15.10.1996: New method Growing Grid.
14.10.1996: Added a new class GridNodeGNG to gain a grid covering the nodes.
09.10.1996: Added a new menu for speed. Now it is possible to switch to an individual speed depending on the machine and/or browser. (On some machines no interaction was possible with the default value. On the other hand, why wait longer than necessary?)
04.10.1996: Added an error graph. (The source of this class was written by Christian Kurzke and Ningsui Chen. I have only made minor changes to this class. Many thanks to Christian Kurzke and Ningsui Chen for this nice graph class.)
30.09.1996: Added a new frame in a separate window for the error graph (new class GraphGNG).
30.09.1996: New checkboxes to turn the nodes/edges on and off.
26.09.1996: Fixed again all known bugs.
25.09.1996: Finished the discrete distribution (mainly for LBG).
24.09.1996: Default no sound.
20.09.1996: Fixed all known bugs. (Can you find some unknown ones?)
19.09.1996: Prepared for a new distribution (a discrete one with 500 fixed signals).
10.09.1996: New method LBG (some minor bugs are known, but none concerning the method itself).
03.09.1996: Renamed class PanelGNG to a more fitting name (ComputeGNG).
02.09.1996: Split the source file DemoGNG.java. Now each class has its own file.
01.09.1996: Inserted more comments.
30.08.1996: Cleaned up the code to compute the Voronoi diagram/Delaunay triangulation.
07.08.1996: Added Delaunay triangulation (press checkbutton Delaunay)!
06.08.1996: Now you can display Voronoi diagrams for each method (press checkbutton Voronoi)!
21.06.1996: Version 1.0 released.