v6 = (1,1,0), v7 = (1,1,1)

will potentially do poorly as inputs, since v6 · v7 is non-zero (in a binary system it is 1). The next question is, "can we measure this orthogonality?" The answer is yes. In the binary vector system there is a measure called Hamming distance, which measures the n-dimensional distance between binary bit vectors. It is simply the number of bits that differ between two vectors. For example, the vectors:
v0 = (0,0,0), v1 = (0,0,1)

have a Hamming distance of 1, while the vectors

v2 = (0,1,0), v4 = (1,0,0)

have a Hamming distance of 2.
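To make the measure concrete, here is a minimal sketch in C (not one of the article's listings) that counts the differing bits between two vectors packed into unsigned integers, so that v2 = (0,1,0) is 0x2 and v4 = (1,0,0) is 0x4:

// HAMMING DISTANCE SKETCH ///////////////////////////////////////////////////////
// counts the number of bits that differ between two packed binary bit vectors

#include <stdio.h>

unsigned int hamming_distance(unsigned int a, unsigned int b)
{
unsigned int diff  = a ^ b;   // 1 wherever the two vectors disagree
unsigned int count = 0;

while (diff)
    {
    count += diff & 1;        // tally the lowest bit
    diff >>= 1;               // then shift the rest down
    } // end while

return count;

} // end hamming_distance

int main(void)
{
printf("d(v0,v1)=%u\n", hamming_distance(0x0, 0x1));   // prints 1
printf("d(v2,v4)=%u\n", hamming_distance(0x2, 0x4));   // prints 2
return 0;
} // end main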
We can use Hamming distance as the measure of orthogonality in binary bit vector systems, and it can help us determine whether our input vectors are going to have a lot of overlap. Determining orthogonality with general vector inputs is harder, but the concept is the same. That's all the time we have for concepts and terminology, so let's jump right in and see some actual neural nets that do something; hopefully by the end of the article you will be able to use them in your game's AI. We are going to cover neural nets used to perform logic functions, classify inputs, and associate inputs with outputs.
Figure 5.0 - The McCulloch-Pitts Neurode.
[Image: Image8.gif]
Pure Logic Mr. Spock

The first artificial neural networks were created in 1943 by McCulloch and Pitts. Their neural networks were composed of a number of neurodes and were typically used to compute simple logic functions such as AND, OR, XOR, and combinations of them. Figure 5.0 is a representation of a basic McCulloch-Pitts neurode with 2 inputs. If you are an electrical engineer, you will immediately see a close resemblance between McCulloch-Pitts neurodes and transistors or MOSFETs. In any case, McCulloch-Pitts neurodes do not have biases and have the simple activation function f_mp(x) equal to:
Eq. 5.0

f_mp(x) = 1, if x ≥ θ
          0, if x < θ

The MP (McCulloch-Pitts) neurode functions by summing the products of the inputs Xi and the weights wi and applying the result Ya to the activation function f_mp(x). The early research of McCulloch and Pitts focused on creating complex logical circuitry with the neurode models. In addition, one of the rules of the neurode model is that it takes one time step for a signal to travel from neurode to neurode; this helps model the biological nature of neurons more closely. Let's take a look at some examples of MP neural nets that implement basic logic functions. The logical AND function has the following truth table:
Table 1.0 - Truth Table for Logical AND.

X1    X2    Output
0     0     0
0     1     0
1     0     0
1     1     1
Figure 6.0 - Basic Logic Functions Implemented with McCulloch-Pitts Nets.
[Image: Image9.gif]
We can model this with a two-input MP neural net with weights w1 = 1, w2 = 1, and θ = 2. This neural net is shown in Figure 6.0a. As you can see, all input combinations work correctly. For example, if we try the inputs X1 = 1, X2 = 0, then the activation will be:

X1*w1 + X2*w2 = (1)*(1) + (0)*(1) = 1.0

If we apply 1.0 to the activation function f_mp(x), then the result is 0, which is correct. As another example, if we try the inputs X1 = 1, X2 = 1, then the activation will be:

X1*w1 + X2*w2 = (1)*(1) + (1)*(1) = 2.0

If we input 2.0 to the activation function f_mp(x), then the result is 1.0, which is correct. The other cases will work also. The OR function is similar, but its threshold θ is changed to 1.0 instead of the 2.0 used for the AND. You can try running through the truth table yourself to see the results.
The XOR network is a little different: it really has 2 layers, in a sense, because the results of the pre-processing layer are further processed by the output neurode. This is a good example of why a neural net needs more than one layer to solve certain problems. XOR is a common problem used to test a neural net's performance. In any case, XOR is not linearly separable in a single layer; it must be broken down into smaller problems whose results are then added together. Let's take a look at XOR as the final example of MP neural networks. The truth table for XOR is as follows:
Table 2.0 - Truth Table for Logical XOR.

X1    X2    Output
0     0     0
0     1     1
1     0     1
1     1     0
Figure 7.0 - Using the XOR Function to Illustrate Linear Separability.
[Image: Image10.gif]
XOR is true only when its inputs differ. This is a problem because the input pairs (0,0) and (1,1) map to one output while (0,1) and (1,0) map to the other; as Figure 7.0 shows, XOR is not linearly separable, and there is no way to separate the proper responses with a single straight line. The point is that we can separate the proper responses with 2 lines, and that is just what 2 layers do. The first layer pre-processes, or solves part of the problem, and the remaining layer finishes up. Referring to Figure 6.0c, we see that the weights are w1 = 1, w2 = -1, w3 = 1, w4 = -1, w5 = 1, w6 = 1. The network works as follows: layer one computes, in parallel, whether X1 and X2 are opposites; the results for either case, (0,1) or (1,0), are fed to layer two, which sums them up and fires if either is true. In essence we have created the logic function:

z = ((X1 AND NOT X2) OR (NOT X1 AND X2))
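Here is a quick sketch of that two-layer net in C. Since the exact wiring of Figure 6.0c isn't reproduced in the text, the weight signs below simply follow the logic function above: one layer-one neurode computes (X1 AND NOT X2) with weights (1,-1), the other computes (NOT X1 AND X2) with weights (-1,1), and the output neurode ORs them with weights (1,1). A threshold of θ = 1 for all three neurodes is an assumption, not taken from the figure.

// TWO-LAYER McCULLOCH-PITTS XOR SKETCH //////////////////////////////////////////

#include <stdio.h>

// Eq. 5.0: the MP activation function with threshold theta
float f_mp(float x, float theta)
{
return (x >= theta) ? 1.0f : 0.0f;
} // end f_mp

int main(void)
{
float theta = 1.0f;   // assumed threshold for all three neurodes
int   x1, x2;         // binary inputs

for (x1 = 0; x1 <= 1; x1++)
    for (x2 = 0; x2 <= 1; x2++)
        {
        // layer one: detect the (1,0) and (0,1) cases in parallel
        float z1 = f_mp( 1.0f*x1 + -1.0f*x2, theta);   // X1 AND NOT X2
        float z2 = f_mp(-1.0f*x1 +  1.0f*x2, theta);   // NOT X1 AND X2

        // layer two: OR the partial results together
        float z = f_mp(1.0f*z1 + 1.0f*z2, theta);

        printf("%d XOR %d = %.0f\n", x1, x2, z);
        } // end for

return 0;
} // end main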
If you would like to experiment with the basic McCulloch-Pitts neurode, Listing 1.0 is a complete 2-input, single-neurode simulator.

Listing 1.0 - A McCulloch-Pitts Logic Neurode Simulator.
</FONT><B><FONT FACE="Arial" SIZE=1><P>// McCULLOCH PITTS SIMULATOR /////////////////////////////////////////////////////</P>
<P>&nbsp;</P>
<P>// INCLUDES //////////////////////////////////////////////////////////////////////</P>
<P>&nbsp;</P>
<P>#include &lt;conio.h&gt;</P>
<P>#include &lt;stdlib.h&gt;</P>
<P>#include &lt;malloc.h&gt;</P>
<P>#include &lt;memory.h&gt;</P>
<P>#include &lt;string.h&gt;</P>
<P>#include &lt;stdarg.h&gt;</P>
<P>#include &lt;stdio.h&gt;</P>
<P>#include &lt;math.h&gt;</P>
<P>#include &lt;io.h&gt;</P>
<P>#include &lt;fcntl.h&gt;</P>
<P>&nbsp;</P>
<P>// MAIN //////////////////////////////////////////////////////////////////////////</P>
<P>&nbsp;</P>
<P>void main(void)</P>
<P>{</P>
<P>float&#9;threshold,&#9;// this is the theta term used to threshold the summation </P>
<P>&#9;w1,w2,&#9;&#9;// these hold the weights</P>
<P>&#9;x1,x2,&#9;&#9;// inputs to the neurode&#9;&#9;</P>
<P>&#9;y_in,&#9;&#9;// summed input activation</P>
<P>&#9;y_out;&#9;&#9;// final output of neurode</P>
<P>&#9;&#9;</P>
<P>printf("\nMcCulloch-Pitts Single Neurode Simulator.\n");</P>
<P>printf("\nPlease Enter Threshold?");</P>
<P>scanf("%f",&amp;threshold);</P>
<P>&nbsp;</P>
<P>printf("\nEnter value for weight w1?");</P>
<P>scanf("%f",&amp;w1);</P>
<P>&nbsp;</P>
<P>printf("\nEnter value for weight w2?");</P>
<P>scanf("%f",&amp;w2);</P>
<P>&nbsp;</P>
<P>printf("\n\nBegining Simulation:");</P>
<P>&nbsp;</P>
<P>// enter main event loop</P>
<P>while(1)</P>
<P>&#9;{</P>
<P>&#9;printf("\n\nSimulation Parms: threshold=%f, W=(%f,%f)\n",threshold,w1,w2);</P>
<P>&nbsp;</P>
<P>&#9;// request inputs from user</P>
<P>&#9;printf("\nEnter input for X1?");</P>
<P>&#9;scanf("%f",&amp;x1);</P>
<P>&nbsp;</P>
<P>&#9;printf("\nEnter input for X2?");</P>
<P>&#9;scanf("%f",&amp;x2);</P>
<P>&nbsp;</P>
<P>&#9;// compute activation</P>
<P>&#9;y_in = x1*w1 + x2*w2;</P>
<P>&nbsp;</P>
<P>&#9;// input result to activation function (simple binary step)</P>
<P>&#9;if (y_in&gt;=threshold)</P>
<P>&#9;&#9;y_out = (float)1.0;</P>
<P>&#9;else</P>
<P>&#9;&#9;y_out = (float)0.0;</P>
<P>&nbsp;</P>
<P>&#9;// print out result</P>
<P>&#9;printf("\nNeurode Output is %f\n",y_out);</P>
<P>&nbsp;</P>
<P>&#9;// try again</P>
<P>&#9;printf("\nDo you wish to continue Y or N?");</P>
<P>&#9;char ans[8];</P>
<P>&#9;scanf("%s",ans);</P>
<P>&#9;if (toupper(ans[0])!='Y')</P>
<P>&#9;&#9;break;</P>
<P>&#9;&#9;</P>
<P>&#9;} // end while</P>
<P>&nbsp;</P>
<P>printf("\n\nSimulation Complete.\n");</P>
<P>&nbsp;</P>
<P>} // end main</P>
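To reproduce the AND net of Figure 6.0a with the simulator, enter a threshold of 2.0 and weights of 1.0 for both w1 and w2; for the OR net, keep the same weights and enter a threshold of 1.0 instead.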
That finishes up our discussion of the basic building block invented by McCulloch and Pitts; now let's move on to more contemporary neural nets, such as those used to classify input vectors.
Figure 8.0 - The Basic Neural Net Model Used for Discussion.
[Image: Image11.gif]
Classification and "Image" Recognition

At this point we are ready to start looking at real neural nets that have some girth to them! To segue into the following discussions of Hebbian and Hopfield neural nets, we are going to analyze a generic neural net structure that illustrates a number of concepts, such as linear separability, bipolar representations, and the analogy that neural nets have with memories. Let's begin by taking a look at Figure 8.0, which shows the basic neural net model we are going to use. As you can see, it is a single-node net with 3 inputs, including the bias, and a single output. We are going to see if we can use this network to solve the logical AND function that we solved so easily with McCulloch-Pitts neurodes.
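Before plugging in numbers, here is a minimal sketch in C of how the net in Figure 8.0 evaluates, under the assumption that the bias b acts as a weight on a constant input of 1 that is added into the activation sum, letting it play the role of the threshold θ from before:

// SINGLE NEURODE WITH BIAS (FIGURE 8.0) /////////////////////////////////////////

#include <stdio.h>

// forward pass: bias plus weighted inputs through a binary step at 0
float neurode(float x1, float x2, float w1, float w2, float b)
{
float y_in = b + x1*w1 + x2*w2;   // bias counts as an always-on input

return (y_in >= 0.0f) ? 1.0f : 0.0f;
} // end neurode

int main(void)
{
int x1, x2;   // binary inputs

// with w1 = w2 = 1 and b = -2, the bias plays the role of the
// theta = 2 threshold from the McCulloch-Pitts AND net
for (x1 = 0; x1 <= 1; x1++)
    for (x2 = 0; x2 <= 1; x2++)
        printf("%d AND %d = %.0f\n", x1, x2,
               neurode((float)x1, (float)x2, 1.0f, 1.0f, -2.0f));

return 0;
} // end main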
Let's start by using bipolar representations, so all 0's are replaced with -1's and 1's are left alone (that is, each bit x is mapped to 2x - 1). The truth table for logical AND using bipolar inputs and outputs is shown below:
Table 3.0 - Truth Table for Logical AND in Bipolar Format.