<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"><html><head> <title></title></head>
<body text="#000000" bgcolor="#ffffff" link="#0000ee" vlink="#551a8b" alink="#ff0000">
<center><table border="1" cellspacing="0" cellpadding="0" height="15%" bgcolor="#fdf5e6"> <tbody> <tr> <td><b><font size="+4"> Adaline, (Optimal) Perceptron and Backpropagation</font></b><img src="../../../images/java_duke_small.gif" height="38" width="49" align="middle"> </td> </tr> </tbody></table></center>
<h3>Introduction</h3>
<p>Single-layer neural networks can be trained using various learning algorithms. The best-known algorithms are the Adaline, Perceptron and Backpropagation algorithms for supervised learning. The first two are specific to single-layer neural networks, while the third generalizes to multi-layer perceptrons.</p>
<h3>Credits</h3>
<p>The applet was written by <a href="http://diwww.epfl.ch/lami/team/michel">Olivier Michel</a>. This page was written by <a href="http://diwww.epfl.ch/lami/team/herrmann/">Alix Herrmann</a>.<br>The optimal perceptron was added by <a href="http://www.subindex.de/tobias">Tobias Denninger</a>.<br></p>
<h3> <hr width="100%">Presentation</h3>
<p>Let's consider a single-layer neural network with <i>b</i> inputs and <i>c</i> outputs:</p>
<ul>
<li> <i>W</i><sub>ij</sub> = weight from input i to unit j in the output layer; <i>W</i><sub>j</sub> is the vector of all the weights of the j-th neuron in the output layer.</li>
<li> <i>I</i><sup>p</sup> = input vector (pattern p) = (<i>I</i><sub>1</sub><sup>p</sup>, <i>I</i><sub>2</sub><sup>p</sup>, ..., <i>I</i><sub>b</sub><sup>p</sup>).</li>
<li> <i>T</i><sup>p</sup> = target output vector (pattern p) = (<i>T</i><sub>1</sub><sup>p</sup>, <i>T</i><sub>2</sub><sup>p</sup>, ..., <i>T</i><sub>c</sub><sup>p</sup>).</li>
<li> <i>A</i><sup>p</sup> = actual output vector (pattern p) = (<i>A</i><sub>1</sub><sup>p</sup>, <i>A</i><sub>2</sub><sup>p</sup>, ..., <i>A</i><sub>c</sub><sup>p</sup>).</li>
<li>
<i>g()</i> = sigmoid activation function: <i>g(a)</i> = [1 + exp(-<i>a</i>)]<sup>-1</sup></li> </ul>
<h3> <hr width="100%">Theory</h3>
<p>Click on each topic to learn more. Then scroll down to the applet.</p>
<ul>
<li> <a href="theory.html#Supervised_learning">Supervised learning</a></li>
<li> <a href="theory.html#Adaline">Adaline learning</a></li>
<li> <a href="theory.html#Perceptron">Perceptron learning</a></li>
<li> <a href="theory.html#Pocket">Pocket algorithm</a></li>
<li> <a href="theory.html#Backpropagation">Backpropagation</a></li>
<li> <a href="theory.html#optimal">Optimal Perceptron</a></li>
<li> <a href="theory.html#reading">Further reading</a></li>
</ul>
<hr width="100%">
<h3>Applet</h3>
<p>This applet allows you to compare the different learning algorithms. The network implemented here has two inputs and a single output neuron. In this tutorial, you will train it to classify 2-dimensional data points into two categories.</p>
<p>Click <a href="instructions.html">here</a> to see the instructions. You may find it helpful to open the instructions in a separate browser window, so you can view them at the same time as the applet window.</p>
<center><table border="2" cellspacing="0" cellpadding="0"> <tbody> <tr align="center" valign="CENTER"> <td align="center" valign="CENTER" nowrap="nowrap" bgcolor="#c0c0c0"><applet code="SimplePerceptronApplet.class" codebase="../classes" width="520" height="400"> <param name="applet_mode" value="perceptron"> </applet><br> </td> </tr> </tbody><caption align="bottom"><br> </caption> </table></center>
<hr width="100%">
<h3>Questions</h3>
<ol>
<li> <b>Ideal case</b>: place 10 red points (class 1) and 10 blue points (class 0) in two similar, distinct, and linearly separable clusters.</li>
<ul>
<li> Compare the speed of convergence of the four algorithms.
Which one is the fastest?</li>
<li> Which values of the learning rate provide the best results?</li>
</ul>
<li> <b>Different cluster dispersions</b>: place 20 red points (class 1) in a very narrow cluster (strongly correlated points) and 5 blue points (class 0) in a very wide cluster, in such a way that the classes are linearly separable.</li>
<ul>
<li> Compare the performance of the four algorithms on this problem. Which one is the best?</li>
<li> Which values of the learning rate provide the best results?</li>
</ul>
<li> <b>Imperfectly separable case:</b> place 10 red points (class 1) and 10 blue points (class 0) in two similar, linearly separable clusters. Then, place an additional blue point inside the red cluster.</li>
<ul>
<li> Compare the behavior of the perceptron with the behavior of the pocket algorithm.</li>
<li> Which values of the learning rate give the best results?</li>
</ul>
<li> For which kind of problem is the Adaline algorithm the best?</li>
<li> For which kind of problem is the Backpropagation algorithm the best?</li>
<li> For which kind of problem is the Perceptron algorithm the best?</li>
<li> For which kind of problem is the Pocket algorithm the best?</li>
</ol>
<br></body></html>
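The applet's Java source is not shown on this page, so as a companion the perceptron learning rule it animates can be sketched in a few lines of Python for the same network shape (two inputs, one threshold output neuron). This is a minimal illustration under stated assumptions, not the applet's actual implementation; the function names and the toy clusters below are invented for the sketch:

```python
def train_perceptron(points, labels, lr=0.1, epochs=100):
    """Perceptron rule for a 2-input, 1-output threshold unit.

    For each pattern p: actual output A = 1 if W . I + b > 0 else 0,
    and on a mistake the weights move by lr * (T - A) * I.
    """
    w = [0.0, 0.0]   # weights W_1, W_2
    b = 0.0          # bias term
    for _ in range(epochs):
        mistakes = 0
        for (x1, x2), t in zip(points, labels):
            a = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0  # actual output A
            if a != t:
                mistakes += 1
                w[0] += lr * (t - a) * x1   # W <- W + lr * (T - A) * I
                w[1] += lr * (t - a) * x2
                b += lr * (t - a)
        if mistakes == 0:   # every pattern classified correctly: converged
            break
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Two linearly separable clusters, as in the "ideal case" of question 1
red = [(2.0, 2.0), (2.5, 3.0), (3.0, 2.5)]         # class 1
blue = [(-2.0, -2.0), (-2.5, -3.0), (-3.0, -2.5)]  # class 0
points = red + blue
labels = [1, 1, 1, 0, 0, 0]
w, b = train_perceptron(points, labels)
```

For linearly separable data such as this, the perceptron convergence theorem guarantees that the loop terminates with all patterns correctly classified; on the imperfectly separable case of question 3, the plain rule keeps updating forever, which is what the pocket algorithm is designed to handle.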