<HTML>
<HEAD>
<META name=vsisbn content="0849398010">
<META name=vstitle content="Industrial Applications of Genetic Algorithms">
<META name=vsauthor content="Charles Karr; L. Michael Freeman">
<META name=vsimprint content="CRC Press">
<META name=vspublisher content="CRC Press LLC">
<META name=vspubdate content="12/01/98">
<META name=vscategory content="Web and Software Development: Artificial Intelligence: Other">
<TITLE>Industrial Applications of Genetic Algorithms: Tuning Bama Optimized Recurrent Neural Networks Using Genetic Algorithms</TITLE>
<!-- HEADER -->
<STYLE type="text/css">
<!--
A:hover {
color : Red;
}
-->
</STYLE>
<META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW">
<!--ISBN=0849398010//-->
<!--TITLE=Industrial Applications of Genetic Algorithms//-->
<!--AUTHOR=Charles Karr//-->
<!--AUTHOR=L. Michael Freeman//-->
<!--PUBLISHER=CRC Press LLC//-->
<!--IMPRINT=CRC Press//-->
<!--CHAPTER=11//-->
<!--PAGES=217-221//-->
<!--UNASSIGNED1//-->
<!--UNASSIGNED2//-->
</HEAD>
<BODY>
<CENTER>
<TABLE BORDER>
<TR>
<TD><A HREF="215-217.html">Previous</A></TD>
<TD><A HREF="../ewtoc.html">Table of Contents</A></TD>
<TD><A HREF="221-225.html">Next</A></TD>
</TR>
</TABLE>
</CENTER>
<P><BR></P>
<P><FONT SIZE="+1"><B><I>Simple GA</I></B></FONT></P>
<P>GAs are optimization algorithms inspired by biological evolution [2]. They have been shown to be very effective at function optimization, efficiently searching large and complex spaces to find near-global optima. A simple GA uses three operators in its quest for improved solutions: reproduction, crossover, and mutation. These operators are implemented by performing the basic tasks of copying binary strings, exchanging portions of strings, and generating random numbers, respectively.
</P>
<P>Reproduction is a process in which strings with high performance indexes receive a correspondingly large number of copies in the new population. For instance, in roulette wheel reproduction, each string is given a number of copies proportional to its fitness. The probability of reproduction selection is defined as in Equation (11.1).</P>
<P ALIGN="CENTER"><IMG SRC="images/11-01d.jpg"></P>
<P>where
</P>
<TABLE WIDTH="100%"><TR>
<TD WIDTH="10%"><I>P<SUB>select</SUB></I>
<TD WIDTH="90%">= Probability of a string being reproduced, and
<TR>
<TD><I>f<SUB>i</SUB></I>
<TD>= fitness value of an individual string.
</TABLE>
<P>Reproduction drives a population toward highly fit regions of the search space.
</P>
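<P>As a concrete illustration, the following Python sketch implements roulette wheel reproduction in the spirit of Equation (11.1); the strings, fitness values, and function names are illustrative examples and do not come from the chapter.</P>
<PRE>
import random

def roulette_select(population, fitnesses):
    """Pick one string with probability proportional to its fitness (cf. Equation 11.1)."""
    total = sum(fitnesses)
    pick = random.uniform(0.0, total)
    running = 0.0
    for string, fitness in zip(population, fitnesses):
        running += fitness
        if running >= pick:
            return string
    return population[-1]  # guard against floating-point round-off

# Fitter strings tend to receive more copies in the mating pool.
population = ["0110010011", "1010001110", "1110110001"]
fitnesses  = [1.0, 3.0, 6.0]
mating_pool = [roulette_select(population, fitnesses) for _ in range(len(population))]
</PRE>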
<P>Crossover provides a mechanism for information exchange between high-performance strings. Crossover can be achieved in three steps:</P>
<DL>
<DD><B>1.</B> Select two new strings from the mating pool of strings that were produced by reproduction.
<DD><B>2.</B> Select a position for the crossing site.
<DD><B>3.</B> Exchange all characters following the crossing site.
</DL>
<P>An example of a crossover is shown in Figure 11.1. The binary-coded strings A and B, each of length 10, are crossed at the third position, producing new strings A’ and B’; a small code sketch of this operation follows the figure.
</P>
<P><A NAME="Fig1"></A><A HREF="javascript:displayWindow('images/11-01.jpg',450,135)"><IMG SRC="images/11-01t.jpg"></A>
<BR><A HREF="javascript:displayWindow('images/11-01.jpg',450,135)"><FONT COLOR="#000077"><B>Figure 11.1</B></FONT></A> Example of GA crossover.</P>
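<P>A minimal Python sketch of single-point crossover following the three steps above; the bit strings are arbitrary examples, not the ones shown in Figure 11.1.</P>
<PRE>
import random

def one_point_crossover(parent_a, parent_b, site=None):
    """Exchange all characters following the crossing site (steps 2 and 3 above)."""
    if site is None:
        site = random.randint(1, len(parent_a) - 1)  # step 2: pick a crossing site
    child_a = parent_a[:site] + parent_b[site:]      # step 3: swap the tails
    child_b = parent_b[:site] + parent_a[site:]
    return child_a, child_b

# Two strings of length 10 crossed at the third position, as in the example above.
a, b = "1011000111", "0001110100"
a_new, b_new = one_point_crossover(a, b, site=3)
</PRE>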
<P>Mutation enhances a GA’s ability to find near-optimal solutions by providing a mechanism for inserting missing genetic material into the population. Mutation consists of the occasional alteration of the value at a particular string position. This procedure ensures against the permanent loss of a particular value at any bit position.
</P>
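<P>A corresponding sketch of bit-flip mutation; the mutation probability of 0.01 is an illustrative choice, not a value given in the chapter.</P>
<PRE>
import random

def mutate(string, p_mutation=0.01):
    """Occasionally flip a bit, reintroducing genetic material that may have been lost."""
    bits = []
    for bit in string:
        if random.random() < p_mutation:
            bits.append("1" if bit == "0" else "0")  # flip this position
        else:
            bits.append(bit)
    return "".join(bits)
</PRE>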
<P>Together, reproduction, crossover, and mutation provide the ingredients necessary for an effective GA. This simple GA model is employed in the current study.</P>
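<P>Putting the three operators together, one generation of the simple GA might be sketched as below; this assumes the roulette_select, one_point_crossover, and mutate helpers sketched earlier, and the probabilities are illustrative rather than values used in the study.</P>
<PRE>
import random

def next_generation(population, fitness_fn, p_crossover=0.7, p_mutation=0.01):
    """One generation of a simple GA: reproduction, then crossover, then mutation."""
    fitnesses = [fitness_fn(s) for s in population]
    pool = [roulette_select(population, fitnesses) for _ in population]  # reproduction
    new_population = []
    for i in range(0, len(pool) - 1, 2):
        a, b = pool[i], pool[i + 1]
        if random.random() < p_crossover:                                # crossover
            a, b = one_point_crossover(a, b)
        new_population.extend([mutate(a, p_mutation), mutate(b, p_mutation)])
    if len(pool) % 2 == 1:               # keep an odd-sized population the same size
        new_population.append(mutate(pool[-1], p_mutation))
    return new_population
</PRE>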
<P><FONT SIZE="+1"><B><I>Recurrent NN</I></B></FONT></P>
<P>Recurrent Neural Networks (RNNs) are experiencing increasing popularity because of their inherently dynamic nature. In an RNN, the outputs of individual neurons are fed back as inputs to other neurons. The general structure of an RNN with BP learning is shown in Figure 11.2. In this figure, the circles represent the neurons of the RNN and the arrows represent RNN connections. Each connection has its own strength, commonly called a weight. In typical BP learning, the weights are adjusted so that the desired output values are obtained from the RNN input. The RNN input can be any crisp values that are related to the RNN output values. RNN input values are usually obtained from the environmental state, and the outputs are the predicted action or consequence of the input. Therefore, like most NNs, RNNs attempt to capture the relationship between input and output values inherent in a particular problem. As shown in Figure 11.2, an RNN neuron has connections from every other neuron to its left at time period <I>t</I>. Also, every neuron has connections from itself and from every other neuron to its right at time <I>t</I>+<I>dt</I>. Here, time has no meaning in the physical environment; it is used exclusively to mark iterations through an NN learning cycle.
</P>
<P><A NAME="Fig2"></A><A HREF="javascript:displayWindow('images/11-02.jpg',500,330)"><IMG SRC="images/11-02t.jpg"></A>
<BR><A HREF="javascript:displayWindow('images/11-02.jpg',500,330)"><FONT COLOR="#000077"><B>Figure 11.2</B></FONT></A> Recurrent NN with BP learning structure.</P>
<P>The operation of an RNN with BP learning consists of two parts: (1) a forward pass and (2) a backward pass. The primary role of the forward pass is to predict an output response from a given input; the outputs are a definite function of the inputs. When an individual neuron receives an input, the input passes through an activation function within the neuron and generates a neuron output. The activation function can take many forms. Generally it is a nonlinear function, but its only true limitation is that it must be differentiable. In this study, sigmoidal functions (Figure 11.3) are used. Equations 11.2 to 11.5 are necessary to accomplish a forward pass. Equation 11.2 represents the inputs U, which are received by the RNN. These inputs are multiplied by weights in Equation 11.3, and the outputs are processed by <IMG SRC="images/11-01i.jpg"> in Equations 11.4 and 11.5, where <IMG SRC="images/11-02i.jpg"> is the activation function. The effect of a bias in the neurons is achieved by assuming that the first input is always unity and is connected to all the other neurons. An illustrative sketch of the forward pass follows Figure 11.3.</P>
<P ALIGN="CENTER"><IMG SRC="images/11-02d.jpg"></P>
<P ALIGN="CENTER"><IMG SRC="images/11-03d.jpg"></P>
<P ALIGN="CENTER"><IMG SRC="images/11-04d.jpg"></P>
<P ALIGN="CENTER"><IMG SRC="images/11-05d.jpg"></P>
<P>where
</P>
<TABLE WIDTH="100%"><TR>
<TD WIDTH="10%"><I>t</I>
<TD WIDTH="90%">= current time frame,
<TR>
<TD><I>t-1</I>
<TD>= previous time frame,
<TR>
<TD><I>U(t)</I>
<TD>= net inputs,
<TR>
<TD><I>x(t)</I>
<TD>= neuronal activations,
<TR>
<TD><I>Y(t)</I>
<TD>= net output,
<TR>
<TD><IMG SRC="images/11-03i.jpg">
<TD>= activation function,
<TR>
<TD><I>W<SUB>ij</SUB></I>
<TD>= weight connecting the <I>i</I><SUP><SMALL>th</SMALL></SUP> neuron to the <I>j</I><SUP><SMALL>th</SMALL></SUP> neuron,
<TR>
<TD><I>m</I>
<TD>= number of inputs,
<TR>
<TD><I>h</I>
<TD>= number of hidden neurons,
<TR>
<TD><I>n</I>
<TD>= number of outputs,
<TR>
<TD><I>N</I>
<TD>= total number of neurons (<I>m</I>+<I>h</I>+<I>n</I>).
</TABLE>
<P><A NAME="Fig3"></A><A HREF="javascript:displayWindow('images/11-03.jpg',450,352)"><IMG SRC="images/11-03t.jpg"></A>
<BR><A HREF="javascript:displayWindow('images/11-03.jpg',450,352)"><FONT COLOR="#000077"><B>Figure 11.3</B></FONT></A> Sigmoidal activation function <IMG SRC="images/11-04i.jpg">.</P>
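<P>The forward pass of Equations 11.2 to 11.5 can be sketched in Python as below. Because the equations appear only as images here, the sketch follows a common fully recurrent formulation (current inputs and previous activations feeding every neuron through a sigmoidal activation); the exact indexing and dimensions in the chapter's equations may differ, and all names and sizes are illustrative.</P>
<PRE>
import numpy as np

def sigmoid(s):
    """Sigmoidal activation function, as in Figure 11.3."""
    return 1.0 / (1.0 + np.exp(-s))

def forward_pass(W, u_t, x_prev, n_outputs):
    """One forward step of a fully recurrent network.

    W         : weight matrix, one row per neuron, one column per (input + previous activation)
    u_t       : external inputs U(t); u_t[0] is held at 1.0 so it acts as the bias input
    x_prev    : neuronal activations x(t-1) from the previous time frame
    n_outputs : number of output neurons n; the net output Y(t) is read from the last n activations
    """
    z = np.concatenate([u_t, x_prev])   # inputs at time t together with activations at t-1
    x_t = sigmoid(W @ z)                # neuronal activations x(t)
    y_t = x_t[-n_outputs:]              # net output Y(t)
    return x_t, y_t

# Example with 3 inputs (the first being the bias), 4 hidden neurons, and 1 output neuron.
num_inputs, num_hidden, num_outputs = 3, 4, 1
num_units = num_hidden + num_outputs
rng = np.random.default_rng(0)
W = rng.uniform(-0.5, 0.5, size=(num_units, num_inputs + num_units))
x = np.zeros(num_units)
x, y = forward_pass(W, np.array([1.0, 0.2, 0.7]), x, n_outputs=num_outputs)
</PRE>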
<P>The second fundamental operation, the backward pass, is where the learning or adaptation occurs. In a backward pass, errors associated with the RNN’s performance are used to adjust the weights associated with the connections. Here, the error is typically a sum of squared errors between the desired output (for the given input) and the output actually produced by the RNN. This adaptation (BP learning in this chapter) implies a modification of the RNN structure and its parameters based on repeated exposure (epoch) to the environment, or to input-output pairs collected from the environment. Equations 11.2 to 11.6 are necessary to accomplish a backward pass. RNN learning involves an evaluation of RNN performance. The performance is measured via an error that is computed using Equation 11.6: the differences between the RNN outputs (<I>Y<SUB>t</SUB>(t)</I>) and the desired RNN outputs (<I>d<SUB>t</SUB>(t)</I>) are squared and summed:</P>
<P ALIGN="CENTER"><IMG SRC="images/11-06d.jpg"></P>
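<P>A small sketch of the error measure described by Equation 11.6, written as a sum of squared differences between the network outputs and the desired outputs; whether the equation carries an additional 1/2 factor or a particular index convention cannot be read from the image, so this is an approximation in spirit only.</P>
<PRE>
import numpy as np

def sum_squared_error(y_t, d_t):
    """Sum of squared differences between RNN outputs Y(t) and desired outputs d(t)."""
    diff = np.asarray(d_t) - np.asarray(y_t)
    return float(np.sum(diff ** 2))

# Example: error for a single two-output time frame.
error = sum_squared_error(y_t=[0.8, 0.1], d_t=[1.0, 0.0])
</PRE>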
<P><BR></P>
<CENTER>
<TABLE BORDER>
<TR>
<TD><A HREF="215-217.html">Previous</A></TD>
<TD><A HREF="../ewtoc.html">Table of Contents</A></TD>
<TD><A HREF="221-225.html">Next</A></TD>
</TR>
</TABLE>
</CENTER>
<hr width="90%" size="1" noshade>
<div align="center">
<font face="Verdana,sans-serif" size="1">Copyright © <a href="/reference/crc00001.html">CRC Press LLC</a></font>
</div>
<!-- all of the reference materials (books) have the footer and subfoot reveresed -->
<!-- reference_subfoot = footer -->
<!-- reference_footer = subfoot -->
</BODY>
</HTML>
<!-- END FOOTER -->