\centerline{Probability Distribution}
\end{minipage}\hfill
\begin{minipage}[b]{0.6cm}
  \mynewsepsf{eps/Nodes.ps} \hfill
  \centerline{Nodes}
\end{minipage}\hfill
\begin{minipage}[b]{0.6cm}
  \mynewsepsf{eps/Display.ps} \hfill
  \centerline{Display}
\end{minipage}\hfill
\begin{minipage}[b]{2cm}
  \mynewsepsf{eps/Speed.ps}
  \centerline{Speed}
\end{minipage}
\caption{General pull-down menus}
\end{figure}

\begin{tabular}{lp{10cm}}
prob. Distrib.  &       Selects one of the available probability
                        distributions. The chosen distribution is
                        displayed in the drawing area. The following
                        distributions are provided: the uniform ones
                        \textbf{Rectangle}, \textbf{Circle} and
                        \textbf{Ring}; the clustered uniform ones
                        \textbf{UNI}, \textbf{Small Spirals},
                        \textbf{Large Spirals} and \textbf{UNIT};
                        the non-uniform \textbf{HiLo Density},
                        which consists of a small and a large
                        rectangle, each of which receives 50\% of
                        the signals; and the discrete distribution
                        \textbf{Discrete}, which consists of 500
                        data vectors.\\
(max.) Nodes    &       Selects the number of nodes (\textbf{Growing
                        Neural Gas} and \textbf{Growing Grid}: maximum
                        number of nodes). \\
Display         &       Selects the update interval for the display.
\\
Speed           &       Selects an individual speed depending on the
                        machine and/or browser. A slow speed gives good
                        interaction with the program but slower
                        execution; a fast speed gives fast execution at
                        the cost of sluggish interaction. The most
                        suitable setting depends on your local hardware
                        and software.
\end{tabular}

\newcommand{\pwidc}{0.32}
\begin{figure}[htb]
\begin{minipage}[b]{\pwidc\textwidth}
  \mynewshepsf{eps/UNI.ps}{2cm}
  \centerline{UNI}
  \medskip
\end{minipage}\hfill
\begin{minipage}[b]{\pwidc\textwidth}
  \mynewshepsf{eps/SmallSpirals.ps}{2cm}
  \centerline{Small Spirals}
  \medskip
\end{minipage}\hfill
\begin{minipage}[b]{\pwidc\textwidth}
  \mynewshepsf{eps/Unit.ps}{2cm}
  \centerline{UNIT}
  \medskip
\end{minipage}\hfill
\begin{minipage}[b]{\pwidc\textwidth}
  \mynewshepsf{eps/Rectangle.ps}{2.5cm}
  \centerline{Rectangle}
  \medskip
\end{minipage}\hfill
\begin{minipage}[b]{\pwidc\textwidth}
  \mynewshepsf{eps/LargeSpirals.ps}{2.5cm}
  \centerline{Large Spirals}
  \medskip
\end{minipage}\hfill
\begin{minipage}[b]{\pwidc\textwidth}
  \mynewshepsf{eps/HiLoDensity.ps}{2.5cm}
  \centerline{HiLo Density}
  \medskip
\end{minipage}\hfill
\begin{minipage}[b]{\pwidc\textwidth}
  \mynewshepsf{eps/Circle.ps}{3.5cm}
  \centerline{Circle}
  \medskip
\end{minipage}\hfill
\begin{minipage}[b]{\pwidc\textwidth}
  \mynewshepsf{eps/Ring.ps}{3.5cm}
  \centerline{Ring}
  \medskip
\end{minipage}\hfill
\begin{minipage}[b]{\pwidc\textwidth}
  \mynewshepsf{eps/Discrete.ps}{3.5cm}
  \centerline{Discrete}
  \medskip
\end{minipage}
\caption{Overview of the available probability distributions}
\end{figure}

\subsection{Model Specific Options}

This region shows the model specific parameters. Each time a new model
is selected, the necessary parameters are displayed. For a complete
description of the models and their parameters, see the technical report
\htmladdnormallink{\textit{Some Competitive Learning Methods}}{http://www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research/gsn/JavaPaper}
by Bernd Fritzke, which is available in HTML format \newline
(\htmladdnormallink{http://www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research/gsn/JavaPaper}{http://www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research/gsn/JavaPaper})
and in PostScript format \newline
(\htmladdnormallink{ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/pub/software/NN/DemoGNG/sclm.ps.gz}{ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/pub/software/NN/DemoGNG/sclm.ps.gz}).

%% Change font in description from bold to typewriter
%\renewcommand{\descriptionlabel}[1]{\hspace{\labelsep}\fbox{\textbf{#1}}}

\subsubsection{Growing Neural Gas}

\begin{description}
\item[No new Nodes] No new nodes will be inserted.
\item[Lambda] If the number of input signals generated so far is an
  integer multiple of this value ($\lambda$), insert a new node.
\item[max.~Edge Age] Remove edges with an age larger than this value
  ($a_{max}$). If this results in nodes having no emanating edges,
  remove them as well.
\item[Epsilon winner] Move the nearest node towards the input signal by a
  fraction of this value ($\epsilon_b$).
\item[Epsilon neighbor] Move the neighbors of the nearest node towards the
  input signal by a fraction of this value ($\epsilon_n$).
\end{description}
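One adaptation step of Growing Neural Gas, as described by the
parameters above, can be sketched as follows. This is an illustrative
Python sketch under simplified assumptions (the applet itself is
written in Java); the flat-list data structures and two-dimensional
signals are hypothetical, and node insertion every $\lambda$ signals is
omitted for brevity.

```python
import random

# Illustrative Growing Neural Gas step (simplified assumptions; the
# DemoGNG applet itself is Java).  Parameter names follow the manual:
EPS_B, EPS_N = 0.05, 0.006   # Epsilon winner / Epsilon neighbor
A_MAX = 88                   # max. Edge Age

nodes = [[random.random(), random.random()] for _ in range(2)]
edges = {}                   # (i, j) with i < j  ->  edge age

def dist2(w, x):
    return sum((wi - xi) ** 2 for wi, xi in zip(w, x))

def gng_step(x):
    # find the nearest (s1) and second-nearest (s2) node to the signal
    s1, s2 = sorted(range(len(nodes)), key=lambda i: dist2(nodes[i], x))[:2]
    # move the winner towards the input signal by a fraction EPS_B,
    # its topological neighbors by EPS_N; age the winner's edges
    nodes[s1] = [w + EPS_B * (xi - w) for w, xi in zip(nodes[s1], x)]
    for (i, j) in list(edges):
        if s1 in (i, j):
            n = j if i == s1 else i
            nodes[n] = [w + EPS_N * (xi - w) for w, xi in zip(nodes[n], x)]
            edges[(i, j)] += 1
    # connect winner and runner-up with a fresh (age 0) edge
    edges[min(s1, s2), max(s1, s2)] = 0
    # remove edges older than A_MAX; in the applet a new node would
    # also be inserted every lambda signals (omitted here)
    for e in [e for e, age in edges.items() if age > A_MAX]:
        del edges[e]
```

With only two nodes this sketch merely tracks the signal stream; in the
applet the insertion step grows the network towards the distribution.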
\subsubsection{Hard Competitive Learning}

\begin{description}
\item[Variable] Switches from a constant to a variable learning rate.
\item[epsilon] This value ($\epsilon$) determines the extent to which the
  winner is adapted towards the input signal (constant learning rate).
\item[epsilon\_i] epsilon initial ($\epsilon_i$).
\item[epsilon\_f] epsilon final ($\epsilon_f$).
\item[t\_max] The simulation ends when the number of input signals
  exceeds this value ($t_{\rm max}$).
\end{description}
The variable learning rate is determined according to
\begin{displaymath}
  \epsilon(t) = \epsilon_i(\epsilon_f/\epsilon_i)^{t/t_{\rm max}}.
\end{displaymath}

\subsubsection{Neural Gas}

\begin{description}
\item[lambda\_i] lambda initial ($\lambda_i$).
\item[lambda\_f] lambda final ($\lambda_f$).
\item[epsilon\_i] epsilon initial ($\epsilon_i$).
\item[epsilon\_f] epsilon final ($\epsilon_f$).
\item[t\_max] The simulation ends when the number of input signals
  exceeds this value ($t_{\rm max}$).
\end{description}
The reference vectors are adjusted according to
\begin{displaymath}
  \Delta \bw_i = \epsilon \cdot h_\lambda(k_i(\bxi,\A)) \cdot (\bxi - \bw_i)
\end{displaymath}
with the following time dependencies:
\begin{displaymath}
  \lambda(t) = \lambda_i (\lambda_f/\lambda_i)^{t/t_{\rm max}},
\end{displaymath}
\begin{displaymath}
  \epsilon(t) = \epsilon_i(\epsilon_f/\epsilon_i)^{t/t_{\rm max}},
\end{displaymath}
\begin{displaymath}
  h_\lambda(k) = \exp(-k/\lambda(t)).
\end{displaymath}
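The adaptation rule and its rank function $h_\lambda$ can be sketched
in Python (an illustrative sketch, not the applet's Java source; the
flat-list representation of the reference vectors is an assumption):

```python
import math

def neural_gas_step(ws, x, eps, lam):
    """One Neural Gas adaptation step: each reference vector w_i moves
    towards the input signal x by eps * h_lambda(k_i), where k_i is the
    rank of w_i when all vectors are sorted by distance to x, and
    h_lambda(k) = exp(-k / lambda)."""
    d2 = lambda w: sum((wi - xi) ** 2 for wi, xi in zip(w, x))
    ranked = sorted(range(len(ws)), key=lambda i: d2(ws[i]))
    for k, i in enumerate(ranked):
        h = math.exp(-k / lam)   # winner (k = 0) gets the full step
        ws[i] = [wi + eps * h * (xi - wi) for wi, xi in zip(ws[i], x)]
```

The winner receives the full step $\epsilon$; vectors farther away (in
rank, not distance) receive exponentially smaller steps controlled by
$\lambda$, which is what distinguishes Neural Gas from plain
winner-take-all learning.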
\subsubsection{Neural Gas with Competitive Hebbian Learning}

\begin{description}
\item[lambda\_i] lambda initial ($\lambda_i$).
\item[lambda\_f] lambda final ($\lambda_f$).
\item[epsilon\_i] epsilon initial ($\epsilon_i$).
\item[epsilon\_f] epsilon final ($\epsilon_f$).
\item[t\_max] The simulation ends when the number of input signals
  exceeds this value ($t_{\rm max}$).
\item[edge\_i] Initial value for time-dependent edge aging ($T_i$).
\item[edge\_f] Final value for time-dependent edge aging ($T_f$).
\end{description}
Edges with an age larger than the maximal age $T(t)$ are removed, where
\begin{displaymath}
  T(t) = T_i(T_f/T_i)^{t/t_{\rm max}}.
\end{displaymath}
The reference vectors are adjusted according to
\begin{displaymath}
  \Delta \bw_i = \epsilon \cdot h_\lambda(k_i(\bxi,\A)) \cdot (\bxi - \bw_i)
\end{displaymath}
with the following time dependencies:
\begin{displaymath}
  \lambda(t) = \lambda_i (\lambda_f/\lambda_i)^{t/t_{\rm max}},
\end{displaymath}
\begin{displaymath}
  \epsilon(t) = \epsilon_i(\epsilon_f/\epsilon_i)^{t/t_{\rm max}},
\end{displaymath}
\begin{displaymath}
  h_\lambda(k) = \exp(-k/\lambda(t)).
\end{displaymath}

\subsubsection{Competitive Hebbian Learning}

This model requires no model specific parameters.

\subsubsection{LBG}

\begin{description}
\item[Number of Signals] The number of input signals (only discrete
  distributions).
\end{description}

\subsubsection{Growing Grid}

\begin{description}
\item[No new Nodes] No new nodes will be inserted.
\item[lambda\_g] This parameter ($\lambda_g$) indicates how many
  adaptation steps on average are done per node before new nodes are
  inserted (\textit{development phase}).
\item[lambda\_f] This parameter ($\lambda_f$) indicates how many
  adaptation steps on average are done per node before new nodes are
  inserted (\textit{fine-tuning phase}).
\item[epsilon\_i] epsilon initial ($\epsilon_i$).
\item[epsilon\_f] epsilon final ($\epsilon_f$).
\item[sigma] This parameter ($\sigma$) determines the width of the
  bell-shaped neighborhood interaction function.
\end{description}
The time-dependent learning rate $\epsilon(t)$ is determined according to
\begin{displaymath}
  \epsilon(t) = \epsilon_i(\epsilon_f/\epsilon_i)^{t/t_{\rm max}}.
\end{displaymath}
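The same exponential interpolation between an initial and a final value
recurs throughout this section: $\epsilon(t)$, $\lambda(t)$,
$\sigma(t)$ and the edge-age limit $T(t)$ all have the form
$v_i(v_f/v_i)^{t/t_{\rm max}}$. A minimal Python sketch (the helper
name is ours, not the applet's; the parameter values in the test are
arbitrary examples, not the applet's defaults):

```python
def decay(v_i, v_f, t, t_max):
    """Exponential interpolation v(t) = v_i * (v_f / v_i) ** (t / t_max);
    equals v_i at t = 0 and v_f at t = t_max, moving monotonically
    between them on a logarithmic scale."""
    return v_i * (v_f / v_i) ** (t / t_max)
```

At the halfway point $t = t_{\rm max}/2$ the schedule passes through
the geometric mean $\sqrt{v_i v_f}$, which is why the decay looks
linear on a log axis.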
\subsubsection{Self-Organizing Map}

\begin{description}
\item[Grid size] Determines the shape of the grid and the number of nodes.
\item[epsilon\_i] epsilon initial ($\epsilon_i$).
\item[epsilon\_f] epsilon final ($\epsilon_f$).
\item[sigma\_i] sigma initial ($\sigma_i$).
\item[sigma\_f] sigma final ($\sigma_f$).
\item[t\_max] The simulation ends when the number of input signals
  exceeds this value ($t_{\rm max}$).
\end{description}
$\epsilon(t)$ and $\sigma(t)$ are determined according to
\begin{displaymath}
  \epsilon(t) = \epsilon_i(\epsilon_f/\epsilon_i)^{t/t_{\rm max}},
\end{displaymath}
\begin{displaymath}
  \sigma(t) = \sigma_i(\sigma_f/\sigma_i)^{t/t_{\rm max}}.
\end{displaymath}

\section{Wishlist}

This is a list of projects for DemoGNG. Bug reports and requests which
cannot be fixed or honored right away will be added to this list. If you
have some time for Java hacking, you are encouraged to try to provide a
solution to one of the following problems. It might be a good idea to
send a mail first, though.
\begin{itemize}
\item Change looping in the \texttt{run}-procedure.
\item New switch to determine the initial position of the nodes.
\item New method LBG-U (Fritzke 1997 \cite{Fritzke97a}).
\item LBG: Mark nodes which haven't moved since the last update.
\item New method $k$-means (MacQueen 1967 \cite{MacQueen67}).
\item Redesign of the user interface (JDK 1.1).
\item Supervised learning.
\item Tune the main \texttt{Makefile}.
\end{itemize}
Please send any comments or suggestions you might have, along with
any bugs that you may encounter, to:\newline
\newline
Hartmut S.
Loos\newline
\htmladdnormallink{loos@neuroinformatik.ruhr-uni-bochum.de}{mailto:loos@neuroinformatik.ruhr-uni-bochum.de}

\newpage
\addcontentsline{toc}{section}{References}
\bibliographystyle{alpha}
%\bibliographystyle{apalike}
\bibliography{DemoGNG}

\newpage
\section{Change log}

\begin{verbatim}
28.02.1997: Version 1.2 released.
\end{verbatim}
These are the most important changes from v1.0 to v1.2:
\begin{verbatim}
26.02.1997: Version 1.1 skipped for PR reasons :-).
24.02.1997: Added description of the model specific options to
            the manual.
24.02.1997: Improved the variable learning rate for Hard
            Competitive Learning.
21.02.1997: Changed eta to epsilon.
20.02.1997: First draft of the manual for v1.1.
18.02.1997: Updated the manual.
17.02.1997: Made some hardcopies for the manual.
30.10.1996: Some improvements and a bugfix concerning the error
            values.
28.10.1996: New method Self-Organizing Map (SOM).
23.10.1996: Redesigned the user interface. I have made minor
            changes only. The next major version 2.0 will have
            a completely new interface.
22.10.1996: Added a close button to the error graph.
21.10.1996: In the method Growing Grid a percentage counter appears
            in the fine-tuning phase. At 100% the calculation stops.
17.10.1996: Added a new button 'White'. It switches the background
            of the drawing area to white. This is useful for making
            hardcopies of the screen.
17.10.1996: New distribution UNIT.
16.10.1996: Fixed a bug: Now the Voronoi diagram/Delaunay
            triangulation is also available for Growing Grid.
15.10.1996: New method Growing Grid.
14.10.1996: Added a new class GridNodeGNG to gain a grid covering
            the nodes.
09.10.1996: Added a new menu for speed. Now it is possible to switch
            to an individual speed depending on the machine and/or
            browser. (On some machines no interaction was possible
            with the default value. On the other hand, why wait
            longer than necessary?)
04.10.1996: Added an error graph. (The source of this class was
            written by Christian Kurzke and Ningsui Chen. I have
            only made minor changes to this class. Many thanks to
            Christian Kurzke and Ningsui Chen for this nice
            graph class.)
30.09.1996: Added a new frame in a separate window for the error
            graph (new class GraphGNG).
30.09.1996: New checkboxes to turn the nodes/edges on and off.
26.09.1996: Fixed again all known bugs.
25.09.1996: Finished the discrete distribution (mainly for LBG).
24.09.1996: Default no sound.
20.09.1996: Fixed all known bugs (can you find some unknown ones?).
19.09.1996: Prepared for a new distribution (a discrete one with 500
            fixed signals).
10.09.1996: New method LBG (some minor bugs are known, but none
            concerning the method itself).
03.09.1996: Renamed class PanelGNG to a more convenient name
            (ComputeGNG).
02.09.1996: Split the source file DemoGNG.java. Now each class has
            its own file.
01.09.1996: Inserted more comments.
30.08.1996: Cleaned up the code to compute the Voronoi
            diagram/Delaunay triangulation.
07.08.1996: Added Delaunay triangulation (press checkbutton
            Delaunay)!
06.08.1996: Now you can display Voronoi diagrams for each method
            (press checkbutton Voronoi)!
21.06.1996: Version 1.0 released.
\end{verbatim}
\end{document}
