	   printf("\nEnter value for weight w1?");
	   scanf("%f",&w1);

	   printf("\nEnter value for weight w2?");
	   scanf("%f",&w2);

	   printf("\n\nBegining Simulation:");

	   // enter main event loop

	   while(1)
	   {
	      printf("\n\nSimulation Parms: threshold=%f, W=(%f,%f)\n",threshold,w1,w2);

	      // request inputs from user
	      printf("\nEnter input for X1?");
	      scanf("%f",&x1);

	      printf("\nEnter input for X2?");
	      scanf("%f",&x2);

	      // compute activation
	      y_in = x1*w1 + x2*w2;

	      // input result to activation function (simple binary step)
	      if(y_in >= threshold)
	         y_out = (float)1.0;
	      else
	         y_out = (float)0.0;

	      // print out result
	      printf("\nNeurode Output is %f\n",y_out);

	      // try again
	      printf("\nDo you wish to continue Y or N?");
	      char ans[8];
	      scanf("%7s",ans);   // limit the read to 7 chars + NUL so ans can't overflow

	      if(toupper(ans[0])!='Y')
	         break;
	   } // end while

	   printf("\n\nSimulation Complete.\n");
	} // end main
</DIV></PRE>
</BLOCKQUOTE>

<P>That finishes up our discussion of the basic building block invented by McCulloch and Pitts. Now let's move on to more contemporary neural nets, such as those used to classify input vectors.

<BLOCKQUOTE>
<SPAN CLASS="maintext-2"><FONT COLOR="#000088"><I>Figure 8.0 - The Basic Neural Net Model Used for Discussion.</I></font></SPAN>
<P ALIGN=CENTER><IMG SRC="xneuralnet/Image21.jpg" tppabs="http://www.gamedev.net/reference/articles/xneuralnet/Image21.jpg" width="330" height="211">
</BLOCKQUOTE>

<H1>Classification and "Image" Recognition</H1>
<P>At this point we are ready to start looking at real neural nets that have some girth to them! To segue into the following discussions on <I>Hebbian</I> and <I>Hopfield</I> neural nets, we are going to analyze a generic neural net structure that will illustrate a number of concepts such as linear separability, bipolar representations, and the analogy that neural nets have with memories. Let's begin by taking a look at Figure 8.0, which is the basic neural net model we are going to use. As you can see, it is a single node net with 3 inputs (including the bias) and a single output. We are going to see if we can use this network to solve the logical <I>AND</I> function that we solved so easily with McCulloch-Pitts neurodes.
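<P>To make the structure concrete, here is a minimal C sketch of the net in Figure 8.0: two input weights, a bias weight riding on a constant input of 1.0, and the weighted sum that gets fed to the activation function. The struct and function names here are my own, not part of the article's original listings.

<BLOCKQUOTE><PRE>
// single neurode from Figure 8.0: two inputs plus a bias input fixed at 1.0
typedef struct
{
   float w1, w2;   // input weights
   float b;        // bias weight, applied to the constant input B = 1.0
} NEURODE;

// weighted sum that gets passed on to the activation function
float neurode_sum(NEURODE *n, float x1, float x2)
{
   return x1*n->w1 + x2*n->w2 + (float)1.0*n->b;
} // end neurode_sum
</PRE></BLOCKQUOTE>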
<P>Let's start by first using bipolar representations, so all 0's are replaced with -1's and 1's are left alone. The truth table for logical <I>AND</I> using bipolar inputs and outputs is shown below:

<BLOCKQUOTE>
<SPAN CLASS="maintext-2"><FONT COLOR="#000088"><I>Table 3.0 - Truth Table for Logical AND in Bipolar Format.</I></FONT></SPAN>
<P><table border="1" cellpadding="2">
    <tr>
        <td><I>X1</I></td>
        <td><I>X2</I></td>
        <td><I>Output</I></td>
    </tr>
    <tr>
        <td>-1</td>
        <td>-1</td>
        <td>-1</td>
    </tr>
    <tr>
        <td>-1</td>
        <td>1</td>
        <td>-1</td>
    </tr>
    <tr>
        <td>1</td>
        <td>-1</td>
        <td>-1</td>
    </tr>
    <tr>
        <td>1</td>
        <td>1</td>
        <td>1</td>
    </tr>
</table>
</BLOCKQUOTE>
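<P>As a side note, converting between the binary form used earlier and this bipolar form is just a linear remapping: <I>bipolar</I> = 2*<I>binary</I> - 1 sends 0 to -1 and leaves 1 alone. A tiny sketch in C (the helper names are mine, not from the article):

<BLOCKQUOTE><PRE>
// map a binary {0,1} value to bipolar {-1,+1} and back again
float binary_to_bipolar(float x)  { return (float)2.0*x - (float)1.0;   }
float bipolar_to_binary(float x)  { return (x + (float)1.0)/(float)2.0; }
</PRE></BLOCKQUOTE>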

<P>And here is the activation function <I>f</I><SUB>c</SUB><I>(x)</I>
that we will use:


<BLOCKQUOTE>
<SPAN CLASS="maintext-2"><FONT COLOR="#000088"><I>Eq. 6.0</I></FONT></SPAN>
<FONT COLOR=RED>
<P><I>f</I><SUB>c</SUB><I>(x)</I> = 1, if <I>x</I> &gt;= <font face="Symbol">q</font><br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;-1, if <I>x</I> &lt; <font face="Symbol">q</font>
</FONT>
</BLOCKQUOTE>

<P>Notice that the function is a step function with bipolar outputs. Before we continue, let me plant a seed in your mind: the bias and the threshold end up doing the same thing. They give us another degree of freedom in our neurons, making the neurons respond in ways that can't be achieved without them. You will see this shortly.
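<P>In code, Eq. 6.0 is nothing more than a comparison against the threshold. Here is a minimal sketch of the bipolar step activation (the function name is my own; the earlier listing used an inline if/else for the binary version):

<BLOCKQUOTE><PRE>
// bipolar step activation, Eq. 6.0:
//    f_c(x) = +1 if x >= theta, -1 otherwise
float bipolar_step(float x, float theta)
{
   if(x >= theta)
      return (float)1.0;
   else
      return (float)-1.0;
} // end bipolar_step
</PRE></BLOCKQUOTE>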
<P>The single neurode net in Figure 8.0 is going to perform a classification for us. It is going to tell us if our input is in one class or another. For example, is this image a tree or <I>not</I> a tree? Or, in our case, is this input (which just happens to be the logic for an <I>AND</I>) in the +1 class or the -1 class? This is the basis of most neural nets and the reason I was belaboring linear separability. We need to come up with a linear partitioning of space that maps our inputs and outputs so that there is a solid delineation of space that separates them. Thus, we need to come up with the correct weights and a bias that will do this for us. But how do we do this? Do we just use trial and error, or is there a methodology? The answer is that there are a number of training methods to teach a neural net. These training methods work on various mathematical premises and can be proven, but for now we're just going to pull some values out of the hat that work. These exercises will lead us into the learning algorithms and more complex nets that follow.
<P>All right, we are trying to find weights <I>w</I><SUB>i</SUB> and bias <I>b</I> that give us the correct result when the various inputs are fed to our network with the given activation function <I>f</I><SUB>c</SUB><I>(x).</I> Let's write down the activation summation of our neurode and see if we can infer any relationship between the weights and the inputs that might help us. Given the inputs <I>X</I><SUB>1</SUB> and <I>X</I><SUB>2</SUB> with weights <I>w</I><SUB>1</SUB> and <I>w</I><SUB>2</SUB> along with <I>B</I>=1 and bias <I>b</I>, we have the following formula:

<BLOCKQUOTE>
<SPAN CLASS="maintext-2"><FONT COLOR="#000088"><I>Eq. 7.0</I></FONT></SPAN>
<FONT COLOR=RED>
<P><I>X</I><SUB>1</SUB>*<I>w</I><SUB>1</SUB> + <I>X</I><SUB>2</SUB>*<I>w</I><SUB>2</SUB>
+ <I>B</I>*<I>b</I>=<font face="Symbol">q</font>
</FONT>
</BLOCKQUOTE>

<P>Since <I>B</I> is always equal to 1.0 the equation simplifies to:

<BLOCKQUOTE>
<FONT COLOR=RED>
<P><I>X</I><SUB>1</SUB>*<I>w</I><SUB>1</SUB> + <I>X</I><SUB>2</SUB>*<I>w</I><SUB>2</SUB>
+ <I>b</I>=<font face="Symbol">q</font>

<P><I>X</I><SUB>2</SUB>*<I>w</I><SUB>2</SUB> = -<I>X</I><SUB>1</SUB>*<I>w</I><SUB>1</SUB> + (<font face="Symbol">q</font> - <I>b</I>) (moving the other terms to the right-hand side)

<P><I>X</I><SUB>2</SUB> = -<I>X</I><SUB>1</SUB>*<I>w</I><SUB>1</SUB>/<I>w</I><SUB>2</SUB> + (<font face="Symbol">q</font> - <I>b</I>)/<I>w</I><SUB>2</SUB> (solving in terms of <I>X</I><SUB>2</SUB>)
</FONT>

<P><SPAN CLASS="maintext-2"><FONT COLOR="#000088"><I>Figure 9.0 - Mathematical Decision Boundaries Generated by Weights, Bias, and <font face="Symbol">q</font></I></FONT></SPAN>
<P ALIGN=CENTER><IMG SRC="xneuralnet/Image22.jpg" tppabs="http://www.gamedev.net/reference/articles/xneuralnet/Image22.jpg" width="581" height="373">
</BLOCKQUOTE>

<P>What is this entity? It's a line! And if the left hand side, (<I>X</I><SUB>1</SUB>*<I>w</I><SUB>1</SUB> + <I>X</I><SUB>2</SUB>*<I>w</I><SUB>2</SUB> + <I>b</I>), is greater than or equal to <font face="Symbol">q</font>, then the neurode will fire and output 1; otherwise the neurode will output -1. So the line is a decision boundary. Figure 9.0a illustrates this. Referring to the figure, you can see that the slope of the line is <I>-w</I><SUB><I>1</I></SUB><I>/w</I><SUB><I>2</I></SUB> and the <I>X</I><SUB><I>2</I></SUB> intercept is (<font face="Symbol">q</font> - <I>b</I>)/<I>w</I><SUB>2</SUB>. Now can you see why we can get rid of <font face="Symbol">q</font>? It only appears inside a constant, and we can always scale <I>b</I> to take up any loss, so we will assume that <font face="Symbol">q</font> = 0, and the resulting equation is:

<BLOCKQUOTE>
<FONT COLOR=RED>
<P><I>X</I><SUB>2</SUB> = -<I>X</I><SUB>1</SUB>*<I>w</I><SUB>1</SUB><I>/w</I><SUB>2</SUB>
- <I>b/w</I><SUB>2</SUB>
</FONT>
</BLOCKQUOTE>
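<P>To make the geometry easy to play with, the sketch below prints the slope and <I>X</I><SUB>2</SUB> intercept of the decision boundary for a given set of weights and bias, assuming <font face="Symbol">q</font> = 0 as above. The function name is my own, not from the article's original code:

<BLOCKQUOTE><PRE>
#include &lt;stdio.h&gt;

// print the decision boundary X2 = -(w1/w2)*X1 - b/w2, with theta = 0
void print_boundary(float w1, float w2, float b)
{
   if(w2 == (float)0.0)
   {
      // degenerate case: the boundary is the vertical line X1 = -b/w1
      printf("vertical boundary at X1 = %f\n", -b/w1);
      return;
   }

   printf("slope = %f, X2 intercept = %f\n", -w1/w2, -b/w2);
} // end print_boundary
</PRE></BLOCKQUOTE>

<P>With the weights chosen below (<I>w</I><SUB>1</SUB> = <I>w</I><SUB>2</SUB> = 1, <I>b</I> = -1), this reports a slope of -1 and an intercept of 1, which matches the boundary derived next.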

<P>What we want to find are weights <I>w</I><SUB><I>1</I></SUB> and <I>w</I><SUB>2</SUB> and a bias <I>b</I> so that the resulting line separates our outputs, classifying them into distinct partitions without overlap. This is the key to linear separability. Figure 9.0b shows a number of decision boundaries that will suffice, so we can pick any of them. Let's pick the simplest values, which would be:

<BLOCKQUOTE>
<FONT COLOR=RED>
<P><I>w</I><SUB>1</SUB> = <I>w</I><SUB>2</SUB> = 1

<P><I>b</I> = -1
</FONT>
</BLOCKQUOTE>

<P>With these values our decision boundary becomes:

<BLOCKQUOTE>
<FONT COLOR=RED>
<P><I>X</I><SUB>2</SUB> = -<I>X</I><SUB>1</SUB>*<I>w</I><SUB>1</SUB><I>/w</I><SUB>2</SUB>
- <I>b/w</I><SUB>2</SUB> -&gt; <I>X</I><SUB>2</SUB> = -1*<I>X</I><SUB>1</SUB>
+ 1
</FONT>
</BLOCKQUOTE>

<P>The slope is -1 and the <I>X</I><SUB>2</SUB> intercept is 1. If we plug the input vectors for the logical <I>AND</I> into this equation and use the <I>f</I><SUB>c</SUB><I>(x)</I> activation function, then we will get the correct outputs. For example, if <I>X</I><SUB>2</SUB> + <I>X</I><SUB>1</SUB> - 1 &gt;= 0 then fire the neurode and output 1, else output -1. Let's try it with our <I>AND</I> inputs and see what we come up with, first with a quick code check and then in Table 4.0:
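<P>Here is that check in C: it runs all four bipolar input pairs through the activation sum <I>X</I><SUB>1</SUB>*<I>w</I><SUB>1</SUB> + <I>X</I><SUB>2</SUB>*<I>w</I><SUB>2</SUB> + <I>b</I> with <I>w</I><SUB>1</SUB> = <I>w</I><SUB>2</SUB> = 1, <I>b</I> = -1 and the bipolar step function. This is a sketch of my own, not one of the article's original listings:

<BLOCKQUOTE><PRE>
#include &lt;stdio.h&gt;

int main(void)
{
   // the four bipolar input pairs for logical AND
   float inputs[4][2] = { {-1,-1}, {-1,1}, {1,-1}, {1,1} };
   float w1 = (float)1.0, w2 = (float)1.0, b = (float)-1.0;
   int i;

   for(i=0; i < 4; i++)
   {
      float x1 = inputs[i][0], x2 = inputs[i][1];

      // compute activation (theta = 0) and apply the bipolar step
      float y_in  = x1*w1 + x2*w2 + b;
      float y_out = (y_in >= (float)0.0) ? (float)1.0 : (float)-1.0;

      printf("X1=%2.0f X2=%2.0f sum=%2.0f output=%2.0f\n", x1, x2, y_in, y_out);
   } // end for

   return 0;
} // end main
</PRE></BLOCKQUOTE>

<P>Only the (1,1) pair produces +1; the other three fall below the boundary and produce -1, which is exactly the bipolar <I>AND</I> shown in Table 4.0.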

<BLOCKQUOTE>
<SPAN CLASS="maintext-2"><FONT COLOR="#000088"><I>Table 4.0 - Truth Table for Bipolar AND with decision boundary.</I></FONT></SPAN>

<P><table border="1" cellpadding="2">
    <tr>
        <td><I>Input </I></td>
        <td><I>X1</I></td>
        <td><I>X2</I></td>
        <td><I>Output (X2+X1-1)</I></td>
    </tr>
    <tr>
