%\caption{Overview of the available probability distributions}
%\end{center}
%\end{figure}

\subsection{Model Specific Options}

This region shows the model specific parameters. Each time a new model
is selected, the necessary parameters are displayed. For a complete
description of the models and their parameters see the technical report
\htmladdnormallink{\textit{Some Competitive Learning Methods}}{http://www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research/gsn/JavaPaper}
by Bernd Fritzke, which is available in
\htmladdnormallinkfoot{HTML}{http://www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research/gsn/JavaPaper}
and in
\htmladdnormallinkfoot{Postscript format}{ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/pub/software/NN/DemoGNG/sclm.ps.gz}.

%% Change font in description from bold to typewriter
%\renewcommand{\descriptionlabel}[1]{\hspace{\labelsep}\fbox{\textbf{#1}}}

\subsubsection{\simImDGNG{LBG}{LBG_7}, \simDGNG{LBG-U}{LBG-U_7}}
\begin{description}
\item[Number of Signals] The number of input signals (discrete
  distributions only).
\item[LBG-U] Switches from LBG to LBG-U and back.
\end{description}

\subsubsection{\simImDGNG{Hard Competitive Learning}{HCL_5}}
\begin{description}
\item[Variable] Switches from a constant to a variable learning rate.
\item[epsilon] This value ($\epsilon$) determines the extent to which
  the winner is adapted towards the input signal (constant learning rate).
\item[epsilon\_i] epsilon initial ($\epsilon_i$).
\item[epsilon\_f] epsilon final ($\epsilon_f$).
\item[t\_max] The simulation ends once the number of input signals
  exceeds this value ($t_{max}$).
\end{description}
The variable learning rate is determined according to
\begin{displaymath}
  \epsilon(t) = \epsilon_i(\epsilon_f/\epsilon_i)^{t/t_{\rm max}}.
\end{displaymath}

\subsubsection{\simImDGNG{Neural Gas}{NG_3}}
\begin{description}
\item[lambda\_i] lambda initial ($\lambda_i$).
\item[lambda\_f] lambda final ($\lambda_f$).
\item[epsilon\_i] epsilon initial ($\epsilon_i$).
\item[epsilon\_f] epsilon final ($\epsilon_f$).
\item[t\_max] The simulation ends once the number of input signals
  exceeds this value ($t_{max}$).
\end{description}
The reference vectors are adjusted according to
\begin{displaymath}
  \Delta \bw_i = \epsilon(t) \cdot h_\lambda(k_i(\bxi,\A)) \cdot (\bxi - \bw_i)
\end{displaymath}
with the following time dependencies:
\begin{displaymath}
  \lambda(t) = \lambda_i (\lambda_f/\lambda_i)^{t/t_{\rm max}},
\end{displaymath}
\begin{displaymath}
  \epsilon(t) = \epsilon_i(\epsilon_f/\epsilon_i)^{t/t_{\rm max}},
\end{displaymath}
\begin{displaymath}
  h_\lambda(k) = \exp(-k/\lambda(t)).
\end{displaymath}
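The following Java sketch illustrates one Neural Gas adaptation step
built directly from the formulas above. It is not taken from the
DemoGNG source; the class, the \texttt{Node} type, and all method
names are hypothetical.
\begin{verbatim}
import java.util.Arrays;

// Illustrative sketch only -- names are not from the DemoGNG source.
class NeuralGasStep {
    static class Node { double[] w; Node(double[] w) { this.w = w; } }

    // exponential schedule g(t) = g_i * (g_f/g_i)^(t/t_max),
    // used above for both lambda(t) and epsilon(t)
    static double decay(double gi, double gf, int t, int tMax) {
        return gi * Math.pow(gf / gi, (double) t / tMax);
    }

    // squared Euclidean distance between a reference vector and x
    static double dist2(Node n, double[] x) {
        double s = 0;
        for (int d = 0; d < x.length; d++) {
            double diff = n.w[d] - x[d];
            s += diff * diff;
        }
        return s;
    }

    // one step: rank all nodes by distance to the signal x, then move
    // the node with rank k by eps * exp(-k/lambda) * (x - w)
    static void adapt(Node[] nodes, double[] x, int t, int tMax,
                      double li, double lf, double ei, double ef) {
        double lambda = decay(li, lf, t, tMax);  // lambda(t)
        double eps = decay(ei, ef, t, tMax);     // epsilon(t)
        Arrays.sort(nodes,
            (a, b) -> Double.compare(dist2(a, x), dist2(b, x)));
        for (int k = 0; k < nodes.length; k++) {
            double h = Math.exp(-k / lambda);    // h_lambda(k)
            for (int d = 0; d < x.length; d++)
                nodes[k].w[d] += eps * h * (x[d] - nodes[k].w[d]);
        }
    }
}
\end{verbatim}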
\subsubsection{\simImDGNG{Competitive Hebbian Learning}{CHL_0}}

This implementation requires no model specific parameters. In general
one would have a maximum number of time steps ($t_{max}$).

\subsubsection{\simImDGNG{Neural Gas with Competitive Hebbian Learning}{NGwCHL_2}}
\begin{description}
\item[lambda\_i] lambda initial ($\lambda_i$).
\item[lambda\_f] lambda final ($\lambda_f$).
\item[epsilon\_i] epsilon initial ($\epsilon_i$).
\item[epsilon\_f] epsilon final ($\epsilon_f$).
\item[t\_max] The simulation ends once the number of input signals
  exceeds this value ($t_{max}$).
\item[edge\_i] Initial value for time-dependent edge aging ($T_i$).
\item[edge\_f] Final value for time-dependent edge aging ($T_f$).
\end{description}
Edges whose age exceeds the maximal age $T(t)$ are removed, where
\begin{displaymath}
  T(t) = T_i(T_f/T_i)^{t/t_{\rm max}}.
\end{displaymath}
The reference vectors are adjusted according to
\begin{displaymath}
  \Delta \bw_i = \epsilon(t) \cdot h_\lambda(k_i(\bxi,\A)) \cdot (\bxi - \bw_i)
\end{displaymath}
with the same time dependencies as for Neural Gas:
\begin{displaymath}
  \lambda(t) = \lambda_i (\lambda_f/\lambda_i)^{t/t_{\rm max}},
\end{displaymath}
\begin{displaymath}
  \epsilon(t) = \epsilon_i(\epsilon_f/\epsilon_i)^{t/t_{\rm max}},
\end{displaymath}
\begin{displaymath}
  h_\lambda(k) = \exp(-k/\lambda(t)).
\end{displaymath}
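A minimal Java sketch of this time-dependent edge pruning is shown
below; the \texttt{Edge} type and the method names are hypothetical
and not taken from the applet source.
\begin{verbatim}
import java.util.List;

// Illustrative sketch only -- names are not from the DemoGNG source.
class EdgeAgingSketch {
    static class Edge { int from, to, age; }

    // maximal edge age T(t) = T_i * (T_f/T_i)^(t/t_max)
    static double maxAge(double ti, double tf, int t, int tMax) {
        return ti * Math.pow(tf / ti, (double) t / tMax);
    }

    // drop every edge whose age exceeds the current threshold T(t)
    static void pruneEdges(List<Edge> edges, int t, int tMax,
                           double ti, double tf) {
        double threshold = maxAge(ti, tf, t, tMax);
        edges.removeIf(e -> e.age > threshold);
    }
}
\end{verbatim}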
\subsubsection{\simImDGNG{Growing Neural Gas}{GNG_3}, \simImDGNG{Growing Neural Gas with Utility}{GNG-U_10}}
\begin{description}
\item[No new Nodes] No new nodes will be inserted.
\item[Utility] Switches from GNG to GNG-U and back. The value ($k$)
  controls the deletion of the unit with the smallest utility $U_i$
  ($i := \arg\min_c U_c$): this unit is removed as soon as its utility
  falls below a certain fraction of the error variable $E_q$, i.e.\ if
  $k < E_q/U_i$ (see the sketch after this list).
\item[Lambda] If the number of input signals generated so far is an
  integer multiple of this value ($\lambda$), insert a new node.
\item[max.~Edge Age] Remove edges with an age larger than this value
  ($a_{max}$). If this results in nodes having no emanating edges,
  remove them as well.
\item[Epsilon winner] Move the winner node towards the input signal by
  this fraction of the total distance ($\epsilon_b$).
\item[Epsilon neighbor] Move the neighbors of the winner node towards
  the input signal by this fraction of the total distance ($\epsilon_n$).
\item[alpha] Decrease the error variables of the nodes neighboring the
  newly inserted node by a fraction of this size ($\alpha$).
\item[beta] Decrease the error variables of all nodes by a fraction of
  this size ($\beta$).
\end{description}
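The deletion test of GNG-U can be illustrated with the following Java
sketch. It assumes that $E_q$ denotes the largest error variable in
the network; the \texttt{Node} type and all names are hypothetical,
not taken from the DemoGNG source.
\begin{verbatim}
// Illustrative sketch only -- names are not from the DemoGNG source.
// Assumption: E_q is the error variable of the node with maximum error.
class UtilityCheck {
    static class Node { double error, utility; }

    // returns the index of the node to delete (the one with the
    // smallest utility U_i) if k < E_q / U_i holds, or -1 otherwise
    static int nodeToDelete(Node[] nodes, double k) {
        int iMinU = 0, qMaxE = 0;
        for (int i = 1; i < nodes.length; i++) {
            if (nodes[i].utility < nodes[iMinU].utility) iMinU = i;
            if (nodes[i].error > nodes[qMaxE].error) qMaxE = i;
        }
        double eq = nodes[qMaxE].error;
        double ui = nodes[iMinU].utility;
        return (k < eq / ui) ? iMinU : -1;
    }
}
\end{verbatim}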
\subsubsection{\simImDGNG{Self-Organizing Map}{SOM_8}}
\begin{description}
\item[Grid size] Determines the shape of the grid and the number of nodes.
\item[epsilon\_i] epsilon initial ($\epsilon_i$).
\item[epsilon\_f] epsilon final ($\epsilon_f$).
\item[sigma\_i] sigma initial ($\sigma_i$).
\item[sigma\_f] sigma final ($\sigma_f$).
\item[t\_max] The simulation ends once the number of input signals
  exceeds this value ($t_{max}$).
\end{description}
$\epsilon(t)$ and $\sigma(t)$ are determined according to
\begin{displaymath}
  \epsilon(t) = \epsilon_i(\epsilon_f/\epsilon_i)^{t/t_{\rm max}},
\end{displaymath}
\begin{displaymath}
  \sigma(t) = \sigma_i(\sigma_f/\sigma_i)^{t/t_{\rm max}}.
\end{displaymath}

\subsubsection{\simImDGNG{Growing Grid}{GG_8}}
\begin{description}
\item[No new Nodes] No new nodes will be inserted.
\item[lambda\_g] This parameter ($\lambda_g$) indicates how many
  adaptation steps on average are done per node before new nodes are
  inserted (\textit{growth phase}).
\item[lambda\_f] This parameter ($\lambda_f$) indicates how many
  adaptation steps on average are done per node in the
  \textit{fine-tuning phase}.
\item[epsilon\_i] epsilon initial ($\epsilon_i$).
\item[epsilon\_f] epsilon final ($\epsilon_f$).
\item[sigma] This parameter ($\sigma$) determines the width of the
  bell-shaped neighborhood interaction function.
\end{description}
In the fine-tuning phase the time-dependent learning rate $\epsilon(t)$
is determined according to
\begin{displaymath}
  \epsilon(t) = \epsilon_i(\epsilon_f/\epsilon_i)^{t/t_{\rm max}}.
\end{displaymath}

\section{Wishlist}

This is a list of projects for DemoGNG. Bug reports and requests which
cannot be fixed or honored right away will be added to this list. If
you have some time for Java hacking, you are encouraged to try to
provide a solution to one of the following problems. It might be a good
idea to send a mail first, though.
\begin{itemize}
\item Change looping in the \texttt{run} procedure.
\item New method $k$-means~\cite{MacQueen67}.
\item Redesign of the user interface (JDK 1.2).
\item Supervised learning.
\item Tune the main \texttt{Makefile}.
\end{itemize}
Please send any comments or suggestions you might have, along with any
bugs that you may encounter, to:
\newline \newline
Hartmut S. Loos\newline
\htmladdnormallink{Hartmut.Loos@neuroinformatik.ruhr-uni-bochum.de}{mailto:Hartmut.Loos@neuroinformatik.ruhr-uni-bochum.de}

\newpage
%\hfill
\addcontentsline{toc}{section}{References}
%\bibliographystyle{alpha}
\bibliographystyle{apalike}
\bibliography{DemoGNG}

\newpage
\section{Change log}
\begin{verbatim}
19.10.1998: Version 1.5 released.
\end{verbatim}
These are the most important changes from v1.0 to v1.5:
\begin{verbatim}
19.10.1998: Updated the manual.
17.07.1998: LBG: Marked nodes which haven't moved since the last update.
16.07.1998: GNG-U: Fixed a small bug regarding 'beta' for utility.
15.07.1998: Non-stationary probability distributions.
23.10.1997: Fixed: nodes without neighbors are not deleted in GNG-U
            (but in GNG).
13.10.1997: Trifles in GNG-U.
29.09.1997: Fixed a small bug in method LBG.
28.09.1997: New method LBG-U (Fritzke 1997).
27.09.1997: New method GNG-U (Fritzke 1997).
10.03.1997: Version 1.3 released.
09.03.1997: Proofread the manual.
08.03.1997: Changed the order of the models (model menu and manual).
07.03.1997: GNG: added two variables (alpha, beta).
06.03.1997: Tuned the code (20% faster).
05.03.1997: Cleaned up the code/deleted DEBUG statements.
04.03.1997: Updated the manual.
03.03.1997: Added a new button 'Random Init'.
02.03.1997: Made some improvements concerning LBG.
28.02.1997: Version 1.2 released.
26.02.1997: Version 1.1 skipped for PR reasons :-).
24.02.1997: Added description of the model specific options to the manual.
24.02.1997: Improved the variable learning rate for Hard Competitive
            Learning.
21.02.1997: Changed eta to epsilon.
20.02.1997: First draft of the manual for v1.1.
18.02.1997: Updated the manual.
17.02.1997: Made some screen dumps for the manual.
30.10.1996: Some improvements and a bugfix concerning the error values.
28.10.1996: New method Self-Organizing Map (SOM).
23.10.1996: Redesigned the user interface. I have made minor changes
            only. The next major version 2.0 will have a completely new
            interface.
22.10.1996: Added a close button to the error graph.
21.10.1996: In the method Growing Grid a percentage counter appears in
            the fine-tuning phase. At 100% the calculation stops.
17.10.1996: Added a new button 'White'. It switches the background of
            the drawing area to white. This is useful for making
            hardcopies of the screen.
17.10.1996: New distribution UNIT.
16.10.1996: Fixed a bug: now the Voronoi diagram/Delaunay triangulation
            is also available for Growing Grid.
15.10.1996: New method Growing Grid.
14.10.1996: Added a new class GridNodeGNG to obtain a grid covering the
            nodes.
09.10.1996: Added a new menu for speed. Now it is possible to switch to
            an individual speed depending on the machine and/or browser.
            (On some machines no interaction was possible with the
            default value. On the other hand, why wait longer than
            necessary?)
04.10.1996: Added an error graph. (The source of this class was written
            by Christian Kurzke and Ningsui Chen. I have only made minor
            changes to this class. Many thanks to Christian Kurzke and
            Ningsui Chen for this nice graph class.)
30.09.1996: Added a new frame in a separate window for the error graph
            (new class GraphGNG).
30.09.1996: New checkboxes to turn the nodes/edges on and off.
26.09.1996: Fixed again all known bugs.
25.09.1996: Finished the discrete distribution (mainly for LBG).
24.09.1996: Default no sound.
20.09.1996: Fixed all known bugs (can you find some unknown ones?).
19.09.1996: Prepared for a new distribution (a discrete one with 500
            fixed signals).
10.09.1996: New method LBG (some minor bugs are known, but none
            concerning the method itself).
03.09.1996: Renamed class PanelGNG to a more convenient name (ComputeGNG).
02.09.1996: Split the source file DemoGNG.java. Now each class has its
            own file.
01.09.1996: Inserted more comments.
30.08.1996: Cleaned up the code to compute the Voronoi diagram/Delaunay
            triangulation.
07.08.1996: Added Delaunay triangulation (press checkbutton Delaunay)!
06.08.1996: Now you can display Voronoi diagrams for each method (press
            checkbutton Voronoi)!
21.06.1996: Version 1.0 released.
\end{verbatim}
\end{document}