-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Sorry for the bad quality of this README file. It was generated from the
dvi-file of the documentation. Please have a look at the corresponding
html- or PostScript file. Thank you, Hartmut Loos.
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

                                 DemoGNG v1.5

                  Hartmut S. Loos [1] and Bernd Fritzke [2]

     [1] Systems Biophysics
         Institute for Neural Computation
         Ruhr-Universität Bochum
         http://www.neuroinformatik.ruhr-uni-bochum.de/ini/PEOPLE/loos

     [2] Neural Computation Group
         Artificial Intelligence Institute
         Computer Science Department
         Technical University Dresden
         http://pikas.inf.tu-dresden.de/~fritzke

                               October 19, 1998

Abstract

DemoGNG, a Java applet, implements several methods related to competitive
learning. It is possible to experiment with the methods using various data
distributions and to observe the learning process. A common terminology is
used to make it easy to compare one method with another. Thanks to the Java
programming language, the implementations run on a very large number of
platforms without the need for compilation or local adaptation. Hopefully,
experimentation with the models will increase intuitive understanding of
them and make it easier to judge their particular strengths and weaknesses.

Contents

1 Introduction
2 System Requirements
3 Installation
4 Starting the Java Applet
5 Manual
  5.1 Network Model
  5.2 Drawing Area
  5.3 General Options
    5.3.1 Buttons
    5.3.2 Checkboxes
    5.3.3 Pull-Down Menus
  5.4 Model Specific Options
    5.4.1 LBG, LBG-U
    5.4.2 Hard Competitive Learning
    5.4.3 Neural Gas
    5.4.4 Competitive Hebbian Learning
    5.4.5 Neural Gas with Competitive Hebbian Learning
    5.4.6 Growing Neural Gas, Growing Neural Gas with Utility
    5.4.7 Self-Organizing Map
    5.4.8 Growing Grid
6 Wishlist
References
7 Change log

List of Figures

1 Overview
2 Pull-down menu with all available models
3 Example screenshots of the available models
4 Drawing Area
5 Options for all models
6 Some examples for activated checkboxes
7 General pull-down menus
8 Overview of the available probability distributions

1 Introduction

In the area of competitive learning a rather large number of models exist
which have similar goals but differ considerably in the way they work. A
common goal of these algorithms is to distribute a certain number of vectors
in a possibly high-dimensional space.
The distribution of these vectors should reflect (in one of several possible
ways) the distribution of input signals, which in general is not given
explicitly but only through sample vectors.

2 System Requirements

To run DemoGNG you need a Web browser with Java support (e.g. Netscape 2.0
or newer, Microsoft's Internet Explorer, ...) or, alternatively, an
appletviewer such as the one from Sun Microsystems' Java Developers Kit
(JDK).

3 Installation

Download the compressed tar version from
ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/pub/software/NN/DemoGNG/DemoGNG-1.5.tar.gz,
save it to a file, say DemoGNG-1.5.tar.gz, and then extract the files with

  % gunzip DemoGNG-1.5.tar.gz
  % tar xvf DemoGNG-1.5.tar

Alternatively, you can download DemoGNG in zip format from
ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/pub/software/NN/DemoGNG/DemoGNG-1.5.zip.
After uncompressing and unpacking you should have the following:

  COPYING              copy of the GNU General Public License
  Changes              change history for DemoGNG
  ComputeGNG.java      Java source for class ComputeGNG
  DemoGNG.java         Java source for class DemoGNG
  EdgeGNG.java         Java source for class EdgeGNG
  EdgeVoronoi.java     Java source for class EdgeVoronoi
  FPoint.java          Java source for class FPoint
  GraphGNG.java        Java source for class GraphGNG
  GridNodeGNG.java     Java source for class GridNodeGNG
  HalfEdgeVoronoi.java Java source for class HalfEdgeVoronoi
  LineGNG.java         Java source for class LineGNG
  ListElem.java        Java source for class ListElem
  ListGNG.java         Java source for class ListGNG
  NodeGNG.java         Java source for class NodeGNG
  SelGraphics.java     Java source for class SelGraphics
  SiteVoronoi.java     Java source for class SiteVoronoi
  DemoGNGcode.html     starting page for the source documentation
  GenerateHTML.tcl     a Tcl script to generate some HTML pages with
                       different parameters
  Makefile             Makefile for Unix make
  README               the ASCII version of this document
  SwitchGNG.html       the main HTML page
  audio/               subdirectory containing audio files
  doc/                 subdirectory containing the javadoc-generated
                       documentation of DemoGNG with the needed images
  tex/                 subdirectory containing the LaTeX source of the
                       manual in DVI, PS and HTML format

Type

  % make world

to generate some necessary files (all these files are already included in
the archive):

  *.html       needed for SwitchGNG.html
  *.class      compiled classes
  doc/*.html   javadoc-generated HTML files

4 Starting the Java Applet

Start the Web browser with the main HTML page. Example:

  % netscape SwitchGNG.html

Important: Make sure you have Java support enabled.

Alternatively, you can use an appletviewer with one of the generated HTML
files. Example:

  % appletviewer GNG.html

Important: The appletviewer requires an applet embedded in the Web page, so
you must choose one of the generated HTML files.

5 Manual

After the Java applet has been launched, the DemoGNG main window appears.

[Figure 1: Overview]

This window can be divided into four regions:

1. Network Model
2. Drawing Area
3. General Options
4. Model Specific Options

5.1 Network Model

[Figure 2: Pull-down menu with all available models]

The first part shows the current algorithm. To select another algorithm or
to restart the current one, click on the pull-down menu and choose the
desired algorithm.
The following algorithms are available:

- LBG (Linde et al., 1980)
- LBG-U (Fritzke, 1997b)
- Hard Competitive Learning (standard algorithm)
  - with constant learning rate
  - with decreasing learning rate
- Neural Gas (Martinetz and Schulten, 1991)
- Competitive Hebbian Learning (Martinetz and Schulten, 1991;
  Martinetz, 1993)
- Neural Gas with Competitive Hebbian Learning (Martinetz and Schulten,
  1991; Martinetz and Schulten, 1994)
- Growing Neural Gas (Fritzke, 1994; Fritzke, 1995a)
- Growing Neural Gas with Utility (Fritzke, 1997a)
- Self-Organizing Map (Kohonen, 1982)
- Growing Grid (Fritzke, 1995b)

[Figure 3: Example screenshots of the available models: LBG/LBG-U, Hard
Competitive Learning, Neural Gas, Competitive Hebbian Learning, Neural Gas
with CHL, Growing Neural Gas/GNG-U, Self-Organizing Map, Growing Grid]

5.2 Drawing Area

[Figure 4: Drawing Area]

The drawing area shows the network for the selected algorithm. The geometric
figure in the background of the area reflects the probability distribution
(see Section 5.3.3 for more information). In every phase of the algorithm
you can select a node and drag it to an arbitrary location within the
drawing area. Additional information is displayed in the corners of this
region:

  upper left   number of input signals generated so far
  upper right  version number
  lower left   number of nodes
  lower right  some additional information

5.3 General Options

[Figure 5: Options for all models]

This region contains the non-specific parameters common to all algorithms.
There are three kinds of interface elements: buttons, checkboxes and
pull-down menus.

5.3.1 Buttons

  Start  Starts or continues a stopped run.
  Stop   Stops a calculation, e.g. to modify parameters or move nodes
         (these modifications can also be done while the simulator runs).
  Reset  Resets the algorithm but leaves all parameters unchanged (to
         restart an algorithm with default parameters see Section 5.1).

5.3.2 Checkboxes

[Figure 6: Some examples for activated checkboxes: teach mode, Voronoi
diagram, error graph]

  Teach        Toggles teach mode. In teach mode the algorithm is slowed
               down and more information is displayed, depending on the
               algorithm (default: off).
  Signals      Toggles the display of signals. The most recently generated
               signals are shown (default: off).
  Voronoi      Toggles the display of the Voronoi diagram (default: off).
  Delaunay     Toggles the display of the Delaunay triangulation
               (default: off).
  Error Graph  Toggles the error graph. It is displayed in a separate
               window and shows the mean square error of the selected
               model (default: off).
  Nodes        Toggles the display of nodes (default: on).
  Edges        Toggles the display of edges (default: on).
  Random Init  If this is switched on, initial node positions can lie
               outside the region of positive probability density
               (default: off).
  White        Switches the background of the drawing area to white. This
               is useful for making hardcopies of the screen
               (default: off).
  Sound        Toggles sound (default: off).

5.3.3 Pull-Down Menus

[Figure 7: General pull-down menus: probability distribution, (max.) nodes,
display, speed]

  prob. Distrib.  Selects one of the available probability distributions.
                  The chosen distribution is displayed in the drawing area
                  (see Section 5.2 for more information). The following
                  distributions are provided: the convex uniform ones
                  Rectangle, Circle and Ring; the clustered uniform ones
                  UNI, Small Spirals, Large Spirals and UNIT; the
                  non-uniform HiLo Density, which consists of a small and
                  a large rectangle, each of which receives 50% of the
                  signals.
                  The discrete distribution Discrete, which consists of
                  500 data vectors generated from a number of Gaussian
                  kernels. And the non-stationary uniform distributions
                  Move & Jump, Move, Jump and Right MouseB. The first
                  three distributions move automatically; for the last
                  one, click the right mouse button to place the
                  distribution where you want, or hold the right mouse
                  button and the distribution will follow the mouse
                  pointer.
  (max.) Nodes    Selects the number of nodes (Growing Neural Gas and
                  Growing Grid: maximum number of nodes).
  Display         Selects the update interval for the display.
  Speed           Selects an individual speed depending on the machine
                  and/or browser. Select a slow speed for good interaction
                  with the program and slower program execution; select a
                  fast speed for slow interaction and fast program
                  execution. The most suitable setting depends on your
                  local hard- and software.

[Figure 8: Overview of the available probability distributions: UNI, Small
Spirals, UNIT, Rectangle, Large Spirals, HiLo Density, Circle, Ring,
Discrete, Move & Jump, Move, Jump]

5.4 Model Specific Options

This region shows the model-specific parameters. Each time a new model is
selected, the necessary parameters are displayed. For a complete description
of the models and their parameters see the technical report "Some
Competitive Learning Methods" by Bernd Fritzke, which is available in HTML
(http://www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research/gsn/JavaPaper)
and in PostScript format
(ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/pub/software/NN/DemoGNG/sclm.ps.gz).

5.4.1 LBG, LBG-U

  Number of Signals  The number of input signals (only discrete
                     distributions).
  LBG-U              Switches from LBG to LBG-U and back.

5.4.2 Hard Competitive Learning

  Variable   Switches from a constant to a variable learning rate.
  epsilon    This value (epsilon) determines the extent to which the
             winner is adapted towards the input signal (constant
             learning rate).
  epsilon_i  epsilon initial.
  epsilon_f  epsilon final.
  t_max      The simulation ends when the number of input signals exceeds
             this value.

The variable learning rate is determined according to

  epsilon(t) = epsilon_i * (epsilon_f / epsilon_i)^(t / t_max).
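To make the schedule concrete, here is a minimal Java sketch of hard
competitive learning with the variable learning rate; the same exponential
schedule reappears in Neural Gas (Section 5.4.3), the Self-Organizing Map
(Section 5.4.7) and Growing Grid (Section 5.4.8). The class name and the
example parameter values are assumptions for illustration only and are not
taken from the DemoGNG sources; two-dimensional signals are assumed.

  // Sketch: hard competitive learning with a decaying learning rate.
  public class HclSketch {
      static final double EPSILON_I = 0.5;   // assumed epsilon initial
      static final double EPSILON_F = 0.005; // assumed epsilon final
      static final int T_MAX = 40000;        // assumed number of signals

      // epsilon(t) = epsilon_i * (epsilon_f / epsilon_i)^(t / t_max)
      static double epsilon(int t) {
          return EPSILON_I * Math.pow(EPSILON_F / EPSILON_I,
                                      (double) t / T_MAX);
      }

      // One adaptation step: move only the winner (the node nearest to
      // the input signal xi) towards xi by the fraction epsilon(t).
      static void adapt(double[][] w, double[] xi, int t) {
          int winner = 0;
          double best = Double.MAX_VALUE;
          for (int i = 0; i < w.length; i++) {
              double dx = w[i][0] - xi[0], dy = w[i][1] - xi[1];
              double d2 = dx * dx + dy * dy;
              if (d2 < best) { best = d2; winner = i; }
          }
          double eps = epsilon(t);
          for (int d = 0; d < w[winner].length; d++)
              w[winner][d] += eps * (xi[d] - w[winner][d]);
      }
  }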
5.4.3 Neural Gas

  lambda_i   lambda initial.
  lambda_f   lambda final.
  epsilon_i  epsilon initial.
  epsilon_f  epsilon final.
  t_max      The simulation ends when the number of input signals exceeds
             this value.

The reference vectors are adjusted according to

  delta w_i = epsilon(t) * h_lambda(k_i(xi, A)) * (xi - w_i)

where k_i(xi, A) is the rank of node i when all nodes of the network A are
ordered by distance to the input signal xi (the nearest node has rank 0),
with the following time dependencies:

  lambda(t)   = lambda_i * (lambda_f / lambda_i)^(t / t_max)
  epsilon(t)  = epsilon_i * (epsilon_f / epsilon_i)^(t / t_max)
  h_lambda(k) = exp(-k / lambda(t)).
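The adaptation step can be sketched in Java as follows. This is a minimal
illustration of the rank-based update above, assuming two-dimensional
signals; all names and example parameter values are assumptions, not the
actual DemoGNG classes.

  // Sketch: one Neural Gas adaptation step.
  public class NeuralGasSketch {
      static final double EPS_I = 0.5, EPS_F = 0.005;      // assumed
      static final double LAMBDA_I = 30.0, LAMBDA_F = 0.01; // assumed
      static final int T_MAX = 40000;                       // assumed

      // Shared schedule g(t) = g_i * (g_f / g_i)^(t / t_max).
      static double decay(double gi, double gf, int t) {
          return gi * Math.pow(gf / gi, (double) t / T_MAX);
      }

      // Adapt all reference vectors w towards the input signal xi.
      static void adapt(double[][] w, double[] xi, int t) {
          double eps = decay(EPS_I, EPS_F, t);
          double lambda = decay(LAMBDA_I, LAMBDA_F, t);

          // Determine the rank of every node: 0 for the node nearest
          // to xi, 1 for the second nearest, and so on.
          int n = w.length;
          int[] order = new int[n];
          double[] d2 = new double[n];
          for (int i = 0; i < n; i++) {
              order[i] = i;
              double dx = w[i][0] - xi[0], dy = w[i][1] - xi[1];
              d2[i] = dx * dx + dy * dy;
          }
          for (int a = 1; a < n; a++) {   // insertion sort by distance
              int idx = order[a];
              int b = a;
              while (b > 0 && d2[order[b - 1]] > d2[idx]) {
                  order[b] = order[b - 1];
                  b--;
              }
              order[b] = idx;
          }

          // Move the node of rank k by eps * exp(-k / lambda) towards xi.
          for (int k = 0; k < n; k++) {
              double h = Math.exp(-k / lambda);
              double[] wi = w[order[k]];
              for (int dim = 0; dim < wi.length; dim++)
                  wi[dim] += eps * h * (xi[dim] - wi[dim]);
          }
      }
  }

For clarity the ranking is done with a full sort; Neural Gas only needs the
ranks, so any sorting strategy works.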
5.4.4 Competitive Hebbian Learning

This implementation requires no model-specific parameters. In general one
would have a maximum number of time steps (t_max).

5.4.5 Neural Gas with Competitive Hebbian Learning

  lambda_i   lambda initial.
  lambda_f   lambda final.
  epsilon_i  epsilon initial.
  epsilon_f  epsilon final.
  t_max      The simulation ends when the number of input signals exceeds
             this value.
  edge_i     Initial value T_i for time-dependent edge aging.
  edge_f     Final value T_f for time-dependent edge aging.

Edges with an age larger than the maximal age T(t) are removed, whereby

  T(t) = T_i * (T_f / T_i)^(t / t_max).

The reference vectors are adjusted exactly as for Neural Gas (see
Section 5.4.3):

  delta w_i = epsilon(t) * h_lambda(k_i(xi, A)) * (xi - w_i)

with the time dependencies

  lambda(t)   = lambda_i * (lambda_f / lambda_i)^(t / t_max)
  epsilon(t)  = epsilon_i * (epsilon_f / epsilon_i)^(t / t_max)
  h_lambda(k) = exp(-k / lambda(t)).

5.4.6 Growing Neural Gas, Growing Neural Gas with Utility

  No new Nodes      No new nodes will be inserted.
  Utility           Switches from GNG to GNG-U and back. The value (k)
                    controls the deletion of the unit with the smallest
                    utility U_i (i := arg min_c U_c): the unit is removed
                    when its utility falls below a certain fraction of the
                    error variable E_q, i.e. when k < E_q / U_i (where q
                    denotes the unit with the largest error variable).
  Lambda            If the number of input signals generated so far is an
                    integer multiple of this value (lambda), insert a new
                    node.
  max. Edge Age     Remove edges with an age larger than this value
                    (a_max). If this results in nodes having no emanating
                    edges, remove those nodes as well.
  Epsilon winner    Move the winner node towards the input signal by this
                    fraction of the total distance (epsilon_b).
  Epsilon neighbor  Move the neighbors of the winner node towards the
                    input signal by this fraction of the total distance
                    (epsilon_n).
  alpha             Decrease the error variables of the nodes neighboring
                    the newly inserted node by this fraction (alpha).
  beta              Decrease the error variables of all nodes by this
                    fraction (beta).
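The insertion and deletion bookkeeping of GNG-U can be sketched as follows.
This is a minimal illustration of the criteria above, assuming per-node
error and utility variables; the class layout, the stubbed operations and
the example values of k and lambda are assumptions, not the DemoGNG
implementation.

  // Sketch: GNG-U insertion trigger and utility-based deletion.
  public class GngUtilitySketch {
      double[] error   = new double[100]; // accumulated error E_c per node
      double[] utility = new double[100]; // utility U_c per node
      int numNodes = 2;                   // GNG starts with two nodes
      double k = 3.0;                     // assumed utility factor
      int lambda = 600;                   // assumed insertion interval

      // Called once per input signal; t counts the signals so far.
      void afterSignal(int t) {
          // Insert a new node whenever t is a multiple of lambda.
          if (t % lambda == 0 && numNodes < error.length) {
              // GNG rule: insert between the node with the largest error
              // and its worst neighbor (stub here).
          }
          // GNG-U deletion criterion: remove the unit i with the smallest
          // utility when k < E_q / U_i, q being the maximum-error unit.
          int i = argMin(utility), q = argMax(error);
          if (utility[i] > 0 && k < error[q] / utility[i]) {
              // remove node i and its emanating edges (stub here)
          }
      }

      int argMin(double[] a) {
          int best = 0;
          for (int c = 1; c < numNodes; c++) if (a[c] < a[best]) best = c;
          return best;
      }

      int argMax(double[] a) {
          int best = 0;
          for (int c = 1; c < numNodes; c++) if (a[c] > a[best]) best = c;
          return best;
      }
  }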
5.4.7 Self-Organizing Map

  Grid size  Determines the form of the grid and the number of nodes.
  epsilon_i  epsilon initial.
  epsilon_f  epsilon final.
  sigma_i    sigma initial.
  sigma_f    sigma final.
  t_max      The simulation ends when the number of input signals exceeds
             this value.

epsilon(t) and sigma(t) are determined according to

  epsilon(t) = epsilon_i * (epsilon_f / epsilon_i)^(t / t_max)
  sigma(t)   = sigma_i * (sigma_f / sigma_i)^(t / t_max).

5.4.8 Growing Grid

  No new Nodes  No new nodes will be inserted.
  lambda_g      This parameter (lambda_g) indicates how many adaptation
                steps on average are done per node before new nodes are
                inserted (growth phase).
  lambda_f      This parameter (lambda_f) indicates how many adaptation
                steps on average are done per node before new nodes are
                inserted (fine-tuning phase).
  epsilon_i     epsilon initial.
  epsilon_f     epsilon final.
  sigma         This parameter (sigma) determines the width of the
                bell-shaped neighborhood interaction function.

In the fine-tuning phase the time-dependent learning rate epsilon(t) is
determined according to

  epsilon(t) = epsilon_i * (epsilon_f / epsilon_i)^(t / t_max).

6 Wishlist

This is a list of projects for DemoGNG. Bug reports and requests which
cannot be fixed or honored right away will be added to this list. If you
have some time for Java hacking, you are encouraged to provide a solution
to one of the following problems. It might be a good idea to send a mail
first, though.

- Change looping in the run procedure.
- New method k-means (MacQueen, 1967).
- Redesign of the user interface (JDK 1.2).
- Supervised learning.
- Tune the main Makefile.

Please send any comments or suggestions you might have, along with any bugs
you may encounter, to:

  Hartmut S. Loos
  Hartmut.Loos@neuroinformatik.ruhr-uni-bochum.de

References

Fritzke, B. (1994). Fast learning with incremental RBF networks. Neural
  Processing Letters, 1(1):2-5.

Fritzke, B. (1995a). A growing neural gas network learns topologies. In
  Tesauro, G., Touretzky, D. S., and Leen, T. K., editors, Advances in
  Neural Information Processing Systems 7, pages 625-632. MIT Press,
  Cambridge, MA.

Fritzke, B. (1995b). Growing grid - a self-organizing network with constant
  neighborhood range and adaptation strength. Neural Processing Letters,
  2(5):9-13.

Fritzke, B. (1997a). A self-organizing network that can follow
  non-stationary distributions. In ICANN'97: International Conference on
  Artificial Neural Networks, pages 613-618. Springer.

Fritzke, B. (1997b). The LBG-U method for vector quantization - an
  improvement over LBG inspired from neural networks. Neural Processing
  Letters, 5(1).

Kohonen, T. (1982). Self-organized formation of topologically correct
  feature maps. Biological Cybernetics, 43:59-69.

Linde, Y., Buzo, A., and Gray, R. M. (1980). An algorithm for vector
  quantizer design. IEEE Transactions on Communication, COM-28:84-95.

MacQueen, J. (1967). Some methods for classification and analysis of
  multivariate observations. In LeCam, L. and Neyman, J., editors,
  Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics
  and Probability, volume 1, pages 281-297, Berkeley. University of
  California Press.

Martinetz, T. M. (1993). Competitive Hebbian learning rule forms perfectly
  topology preserving maps. In ICANN'93: International Conference on
  Artificial Neural Networks, pages 427-434, Amsterdam. Springer.

Martinetz, T. M. and Schulten, K. J. (1991). A "neural-gas" network learns
  topologies. In Kohonen, T., Mäkisara, K., Simula, O., and Kangas, J.,
  editors, Artificial Neural Networks, pages 397-402. North-Holland,
  Amsterdam.

Martinetz, T. M. and Schulten, K. J. (1994). Topology representing
  networks. Neural Networks, 7(3):507-522.

7 Change log

19.10.1998: Version 1.5 released. These are the most important changes from
v1.0 to v1.5:

19.10.1998: Updated the manual.
17.07.1998: LBG: Marked nodes which haven't moved since the last update.
16.07.1998: GNG-U: Fixed a small bug regarding 'beta' for utility.
15.07.1998: Non-stationary probability distributions.
23.10.1997: Fixed: nodes without neighbors are not deleted in GNG-U (but
            in GNG).
13.10.1997: Trifles in GNG-U.
29.09.1997: Fixed a small bug in method LBG.
28.09.1997: New method LBG-U (Fritzke 1997).
27.09.1997: New method GNG-U (Fritzke 1997).
10.03.1997: Version 1.3 released.
09.03.1997: Proofread the manual.
08.03.1997: Changed the order of the models (model menu and manual).
07.03.1997: GNG: added two variables (alpha, beta).
06.03.1997: Tuned the code (20% faster).
05.03.1997: Cleaned up the code/deleted DEBUG statements.
04.03.1997: Updated the manual.
03.03.1997: Added a new button 'Random Init'.
02.03.1997: Made some improvements concerning LBG.
28.02.1997: Version 1.2 released.
26.02.1997: Version 1.1 skipped for PR reasons :-).
24.02.1997: Added description of the model specific options to the manual.
24.02.1997: Improved the variable learning rate for Hard Competitive
            Learning.
21.02.1997: Changed eta to epsilon.
20.02.1997: First draft of the manual for v1.1.
18.02.1997: Updated the manual.
17.02.1997: Made some screen dumps for the manual.
30.10.1996: Some improvements and a bugfix concerning the error values.
28.10.1996: New method Self-Organizing Map (SOM).
23.10.1996: Redesigned the user interface. I have made minor changes only.
            The next major version 2.0 will have a completely new
            interface.
22.10.1996: Added a close button to the error graph.
21.10.1996: In the method Growing Grid a percentage counter appears in the
            fine-tuning phase. At 100% the calculation stops.
17.10.1996: Added a new button 'White'. It switches the background of the
            drawing area to white. This is useful for making hardcopies of
            the screen.
17.10.1996: New distribution UNIT.
16.10.1996: Fixed a bug: Now the Voronoi diagram/Delaunay triangulation is
            also available for Growing Grid.
15.10.1996: New method Growing Grid.
14.10.1996: Added a new class GridNodeGNG to gain a grid covering the
            nodes.
09.10.1996: Added a new menu for speed. Now it is possible to switch to an
            individual speed depending on the machine and/or browser. (On
            some machines no interaction was possible with the default
            value. On the other hand, why wait longer than necessary?)
04.10.1996: Added an error graph. (The source of this class was written by
            Christian Kurzke and Ningsui Chen. I have only made minor
            changes to this class. Many thanks to Christian Kurzke and
            Ningsui Chen for this nice graph class.)
30.09.1996: Added a new frame in a separate window for the error graph
            (new class GraphGNG).
30.09.1996: New checkboxes to turn the nodes/edges on and off.
26.09.1996: Fixed again all known bugs.
25.09.1996: Finished the discrete distribution (mainly for LBG).
24.09.1996: Default no sound.
20.09.1996: Fixed all known bugs (can you find some unknown ones?).
19.09.1996: Prepared for new distribution (a discrete one with 500 fixed
            signals).
10.09.1996: New method LBG (some minor bugs are known, but none concerning
            the method).
03.09.1996: Renamed class PanelGNG to a more convenient name (ComputeGNG).
02.09.1996: Split the source file DemoGNG.java. Now each class has its own
            file.
01.09.1996: Inserted more comments.
30.08.1996: Cleaned up the code to compute the Voronoi diagram/Delaunay
            triangulation.
07.08.1996: Added Delaunay triangulation (press checkbutton Delaunay)!
06.08.1996: Now you can display Voronoi diagrams for each method (press
            checkbutton Voronoi)!
21.06.1996: Version 1.0 released.