<td valign="top" width="25%">1</td>
<td valign="top" width="25%">-1</td>
</tr>
<tr>
<td valign="top" width="25%">1</td>
<td valign="top" width="25%">1</td>
<td valign="top" width="25%">-1</td>
<td valign="top" width="25%">1</td>
</tr>
</table>
<P><I>W</I><SUB>2</SUB> = [ <I>I</I><SUB>2</SUB><SUP>*t</SUP> x <I>I</I><SUB>2</SUB><SUP>*</SUP> ] = (1,-1,-1,-1)<SUP>t</SUP> x (1,-1,-1,-1) =
<table border="0" cellpadding="7" cellspacing="0" width="96">
<tr>
<td valign="top" width="25%">1</td>
<td valign="top" width="25%">-1</td>
<td valign="top" width="25%">-1</td>
<td valign="top" width="25%">-1</td>
</tr>
<tr>
<td valign="top" width="25%">-1</td>
<td valign="top" width="25%">1</td>
<td valign="top" width="25%">1</td>
<td valign="top" width="25%">1</td>
</tr>
<tr>
<td valign="top" width="25%">-1</td>
<td valign="top" width="25%">1</td>
<td valign="top" width="25%">1</td>
<td valign="top" width="25%">1</td>
</tr>
<tr>
<td valign="top" width="25%">-1</td>
<td valign="top" width="25%">1</td>
<td valign="top" width="25%">1</td>
<td valign="top" width="25%">1</td>
</tr>
</table>
<P><I>W</I><SUB>3</SUB> =<I> </I>[ <I>I</I><SUB>3</SUB><SUP>*t</SUP> x <I>I</I><SUB>3</SUB><SUP>*</SUP> ] = (-1,1,-1,1)<SUP>t</SUP> x (-1,1,-1,1) =
<table border="0" cellpadding="7" cellspacing="0" width="96">
<tr>
<td valign="top" width="25%">1</td>
<td valign="top" width="25%">-1</td>
<td valign="top" width="25%">1</td>
<td valign="top" width="25%">-1</td>
</tr>
<tr>
<td valign="top" width="25%">-1</td>
<td valign="top" width="25%">1</td>
<td valign="top" width="25%">-1</td>
<td valign="top" width="25%">1</td>
</tr>
<tr>
<td valign="top" width="25%">1</td>
<td valign="top" width="25%">-1</td>
<td valign="top" width="25%">1</td>
<td valign="top" width="25%">-1</td>
</tr>
<tr>
<td valign="top" width="25%">-1</td>
<td valign="top" width="25%">1</td>
<td valign="top" width="25%">-1</td>
<td valign="top" width="25%">1</td>
</tr>
</table>
</FONT>
</BLOCKQUOTE>
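<P>Before we sum these matrices, it may help to see the outer-product step in code. The following is a minimal C sketch, assuming 4 neurodes as in the example; the names <I>Bipolar()</I> and <I>OuterProduct()</I> are illustrative, not taken from the Listing 3.0 simulator:
<PRE>
#define NUM_NEURODES 4

/* map a binary input bit (0/1) to its bipolar form (-1/+1) */
int Bipolar(int bit)
{
    return (bit == 0) ? -1 : 1;
}

/* Wi = Ii*t x Ii*, the outer product of the bipolar exemplar with itself */
void OuterProduct(int Wi[NUM_NEURODES][NUM_NEURODES],
                  int input[NUM_NEURODES])
{
    int row, col;

    for (row = 0; row &lt; NUM_NEURODES; row++)
        for (col = 0; col &lt; NUM_NEURODES; col++)
            Wi[row][col] = Bipolar(input[row]) * Bipolar(input[col]);
}
</PRE>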
<P>Then we add <I>W</I><SUB>1</SUB> + <I>W</I><SUB>2</SUB> + <I>W</I><SUB>3</SUB>, resulting in:
<BLOCKQUOTE>
<FONT COLOR=RED>
<P><I>W</I><SUB>(1+2+3)</SUB> =
<table border="0" cellpadding="7" cellspacing="0" width="96">
<tr>
<td valign="top" width="25%">3</td>
<td valign="top" width="25%">-1</td>
<td valign="top" width="25%">-1</td>
<td valign="top" width="25%">-1</td>
</tr>
<tr>
<td valign="top" width="25%">-1</td>
<td valign="top" width="25%">3</td>
<td valign="top" width="25%">-1</td>
<td valign="top" width="25%">3</td>
</tr>
<tr>
<td valign="top" width="25%">-1</td>
<td valign="top" width="25%">-1</td>
<td valign="top" width="25%">3</td>
<td valign="top" width="25%">-1</td>
</tr>
<tr>
<td valign="top" width="25%">-1</td>
<td valign="top" width="25%">3</td>
<td valign="top" width="25%">-1</td>
<td valign="top" width="25%">3</td>
</tr>
</table>
</FONT>
</BLOCKQUOTE>
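<P>In code, this summation is just the outer-product step accumulated over every exemplar. Building on the hypothetical sketch above:
<PRE>
/* W = W1 + W2 + ... computed by summing one outer product per
   exemplar; num_inputs would be 3 for our example */
void SumCorrelations(int W[NUM_NEURODES][NUM_NEURODES],
                     int inputs[][NUM_NEURODES], int num_inputs)
{
    int i, row, col;

    /* start from the zero matrix */
    for (row = 0; row &lt; NUM_NEURODES; row++)
        for (col = 0; col &lt; NUM_NEURODES; col++)
            W[row][col] = 0;

    /* add each Ii*t x Ii* term */
    for (i = 0; i &lt; num_inputs; i++)
        for (row = 0; row &lt; NUM_NEURODES; row++)
            for (col = 0; col &lt; NUM_NEURODES; col++)
                W[row][col] += Bipolar(inputs[i][row]) * Bipolar(inputs[i][col]);
}
</PRE>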
<P>Zeroing out the main diagonal gives us the final weight matrix:
<BLOCKQUOTE>
<FONT COLOR=RED>
<P><I>W</I> =
<table border="0" cellpadding="7" cellspacing="0" width="96">
<tr>
<td valign="top" width="25%">0</td>
<td valign="top" width="25%">-1</td>
<td valign="top" width="25%">-1</td>
<td valign="top" width="25%">-1</td>
</tr>
<tr>
<td valign="top" width="25%">-1</td>
<td valign="top" width="25%">0</td>
<td valign="top" width="25%">-1</td>
<td valign="top" width="25%">3</td>
</tr>
<tr>
<td valign="top" width="25%">-1</td>
<td valign="top" width="25%">-1</td>
<td valign="top" width="25%">0</td>
<td valign="top" width="25%">-1</td>
</tr>
<tr>
<td valign="top" width="25%">-1</td>
<td valign="top" width="25%">3</td>
<td valign="top" width="25%">-1</td>
<td valign="top" width="25%">0</td>
</tr>
</table>
</FONT>
</BLOCKQUOTE>
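<P>In code, zeroing the diagonal is one assignment per neurode, since a neurode has no connection to itself:
<PRE>
/* zero the main diagonal of the summed correlation matrix */
void ZeroDiagonal(int W[NUM_NEURODES][NUM_NEURODES])
{
    int i;

    for (i = 0; i &lt; NUM_NEURODES; i++)
        W[i][i] = 0;
}
</PRE>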
<P>That's it; now we are ready to rock. Let's input our original vectors and see the results. To do this we simply matrix multiply each input by the weight matrix and then process each output value with our activation function <I>f</I><SUB>h</SUB><I>(x)</I>. Here are the results:
<BLOCKQUOTE>
<FONT COLOR=RED>
<P><I>I</I><SUB>1</SUB><I> x W</I> = (-1,-1,0,-1)
and <I>f</I><SUB>h</SUB><I>(</I>(-1,-1,0,-1)<I>)</I> = (0,0,1,0)
<P><I>I</I><SUB>2</SUB><I> x W</I> = (0,-1,-1,-1)
and <I>f</I><SUB>h</SUB><I>(</I>(0,-1,-1,-1)<I>) = </I>(1,0,0,0)
<P><I>I</I><SUB>3</SUB><I> x W</I> = (-2,3,-2,3)
and <I>f</I><SUB>h</SUB><I>(</I>(-2,3,-2,3)<I>) = </I>(0,1,0,1)
</FONT>
</BLOCKQUOTE>
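<P>The recall step is just a vector-matrix multiply followed by the step function. Here is a sketch; the threshold matches the results above, where a net sum of 0 still fires the neurode:
<PRE>
/* step activation: fh(x) = 1 if x >= 0, else 0 */
int Step(int x)
{
    return (x >= 0) ? 1 : 0;
}

/* output = fh(input x W), with binary input and output vectors */
void Recall(int W[NUM_NEURODES][NUM_NEURODES],
            int input[NUM_NEURODES], int output[NUM_NEURODES])
{
    int row, col, sum;

    for (col = 0; col &lt; NUM_NEURODES; col++)
    {
        sum = 0;
        for (row = 0; row &lt; NUM_NEURODES; row++)
            sum += input[row] * W[row][col];
        output[col] = Step(sum);
    }
}
</PRE>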
<P>The inputs were perfectly recalled, and they should be, since they are all orthogonal. As a final example, let's assume that our input (visual, auditory, etc.) is a little noisy and contains a single error. Let's take <I>I</I><SUB>3</SUB> = (0,1,0,1) and add some noise to <I>I</I><SUB>3</SUB>, resulting in <I>I</I><SUB>3</SUB><SUP>noise</SUP> = (0,1,1,1). Now let's see what happens if we feed this noisy vector to the Hopfield net:
<BLOCKQUOTE>
<FONT COLOR=RED>
<P><I>I</I><SUB>3</SUB><SUP>noise</SUP> x <I>W</I>
= (-3, 2, -2, 2) and <I>f</I><SUB>h</SUB><I>(</I>(-3,2,-2, 2)<I>)
= </I>(0,1,0,1)
</FONT>
</BLOCKQUOTE>
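<P>If you'd like to reproduce this result yourself, here is a short test driver that strings the hypothetical sketches above together and feeds the noisy vector through the net:
<PRE>
#include &lt;stdio.h&gt;

int main(void)
{
    int inputs[3][NUM_NEURODES] = { {0,0,1,0},    /* I1 */
                                    {1,0,0,0},    /* I2 */
                                    {0,1,0,1} };  /* I3 */
    int noisy[NUM_NEURODES] = {0,1,1,1};          /* I3 with one bit flipped */
    int W[NUM_NEURODES][NUM_NEURODES];
    int output[NUM_NEURODES];
    int i;

    SumCorrelations(W, inputs, 3);  /* W(1+2+3)            */
    ZeroDiagonal(W);                /* final weight matrix */
    Recall(W, noisy, output);

    /* prints 0 1 0 1 -- the original I3 is recalled */
    for (i = 0; i &lt; NUM_NEURODES; i++)
        printf("%d ", output[i]);
    printf("\n");

    return 0;
}
</PRE>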
<P>Amazingly enough, the original vector is recalled. This is very cool. So we might have a memory that is filled with bit patterns that look like trees (oaks, weeping willows, spruces, redwoods, etc.). If we then input another tree that is similar to, say, a weeping willow, but hasn't been entered into the net, our net will (hopefully) output a weeping willow, indicating that this is what it "thinks" it looks like. This is one of the strengths of associative memories: we don't have to teach it every possible input, just enough to give it a good idea. Inputs that are <I>"close"</I> will usually converge to an actual trained input. This is the basis for image and voice recognition systems. Don't ask me where the heck the "tree" analogy came from. Anyway, to complete our study of neural nets, I have included a final Hopfield autoassociative simulator that allows you to create nets with up to 16 neurodes. It is similar to the Hebb Net, but you must use a step activation function, and your input exemplars must be bipolar while training and binary while associating (running). Listing 3.0 contains the code for the simulator.
<BLOCKQUOTE>
<SPAN CLASS="maintext-2"><FONT COLOR="#000088"><I>Listing 3.0 - A Hopfield Autoassociative Memory Simulator (in neuralnet.zip).</I></FONT></SPAN>
</BLOCKQUOTE>
<H1>Brain Dead...</H1>
<P>Well, that's all we have time for. I was hoping to get to the <I>Perceptron</I> network, but oh well. I hope that you now have an idea of what neural nets are and how to create some working computer programs to model them. We covered basic terminology and concepts, some mathematical foundations, and finished up with some of the more prevalent neural net structures. However, there is still so much more to learn about neural nets. We need to cover <I>Perceptrons</I>, <I>Fuzzy Associative Memories</I> or <I>FAMs</I>, <I>Bidirectional Associative Memories</I> or <I>BAMs</I>, <I>Kohonen Maps</I>, <I>Adalines</I>, <I>Madalines</I>, <I>Backpropagation networks</I>, <I>Adaptive Resonance Theory networks</I>, and more.