A Genetic Algorithm for Function Optimization: A Matlab Implementation 
     Christopher R. Houck  North Carolina State University 
     Jeffery A. Joines  North Carolina State University 
     Michael G. Kay  North Carolina State University 
A genetic algorithm implemented in Matlab is presented. Matlab is used for the following reasons: it provides many built-in auxiliary functions useful for function optimization; it is completely portable; and it is efficient for numerical computations. The genetic algorithm toolbox developed is tested on a series of non-linear, multi-modal, non-convex test problems and compared with results using simulated annealing. The genetic algorithm using a float representation is found to be superior to both a binary genetic algorithm and simulated annealing in terms of efficiency and quality of solution. The use of the genetic algorithm toolbox, as well as the code itself, is presented in the paper.
Categories and Subject Descriptors: G.1 [Numerical Analysis]: Optimization---Unconstrained 
Optimization, nonlinear programming, gradient methods 
General Terms: Optimization, Algorithms 
Additional Key Words and Phrases: genetic algorithms, multimodal nonconvex functions, Matlab 
1. INTRODUCTION 
Algorithms for function optimization are generally limited to convex regular functions. However, many functions are multi-modal, discontinuous, and nondifferentiable. Stochastic sampling methods have been used to optimize these functions. Whereas traditional search techniques use characteristics of the problem to determine the next sampling point (e.g., gradients, Hessians, linearity, and continuity), stochastic search techniques make no such assumptions: the next sampled points are determined by stochastic sampling/decision rules rather than by a set of deterministic decision rules.
Genetic algorithms have been used to solve difficult problems with objective functions that do not possess ``nice'' properties such as continuity, differentiability, or satisfaction of the Lipschitz condition [Davis 1991; Goldberg 1989; Holland 1975; Michalewicz 1994]. These algorithms maintain and manipulate a family, or population, of solutions and implement a ``survival of the fittest'' strategy in their search for better solutions. This provides an implicit as well as explicit parallelism that allows for the exploitation of several promising areas of the solution space at the same time. The implicit parallelism is due to the schema theory developed by Holland, while the explicit parallelism arises from the manipulation of a population of points---the evaluation of the fitness of these points is easy to accomplish in parallel.
Section 2 presents the basic genetic algorithm, and in Section 3 the GA is tested on several multi-modal functions and shown to be an efficient optimization tool. Finally, Section 4 briefly describes the code and presents the list of parameters of the Matlab implementation.
2. GENETIC ALGORITHMS 
Genetic algorithms search the solution space of a function through the use of simulated evolution, i.e., the survival of the fittest strategy. In general, the fittest individuals of any population tend to reproduce and survive to the next generation, thus improving successive generations. However, inferior individuals can, by chance, survive and also reproduce. Genetic algorithms have been shown to solve linear and nonlinear problems by exploring all regions of the state space and exponentially exploiting promising areas through mutation, crossover, and selection operations applied to individuals in the population [Michalewicz 1994]. A more complete discussion of genetic algorithms, including extensions and related topics, can be found in the books by Davis [Davis 1991], Goldberg [Goldberg 1989], Holland [Holland 1975], and Michalewicz [Michalewicz 1994]. A genetic algorithm (GA) is summarized in Fig. 1, and each of the major components is discussed in detail below.
(1) Supply a population P_0 of N individuals and respective function values.
(2) i <- 1
(3) P'_i <- selection_function(P_{i-1})
(4) P_i <- reproduction_function(P'_i)
(5) evaluate(P_i)
(6) i <- i + 1
(7) Repeat steps 3-6 until termination.
(8) Print out the best solution found.

Fig. 1. A simple genetic algorithm.

The use of a genetic algorithm requires the determination of six fundamental issues: chromosome representation, selection function, the genetic operators making up the reproduction function, the creation of the initial population, termination criteria, and the evaluation function. The rest of this section describes each of these issues.
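The toolbox itself is written in Matlab; purely as an illustration of the loop in Fig. 1, a minimal sketch in Python might look as follows. The function names `evaluate`, `init_pop`, `select`, and `reproduce` are hypothetical placeholders for the six components discussed below, not toolbox routines.

```python
def genetic_algorithm(evaluate, init_pop, select, reproduce, max_gen=100):
    """Sketch of Fig. 1: select, reproduce, evaluate, repeat; track the best."""
    pop = init_pop()                               # step 1: initial population
    fitness = [evaluate(x) for x in pop]
    best = max(zip(fitness, pop))                  # best (fitness, individual)
    for _ in range(max_gen):                       # steps 2, 6, 7
        parents = select(pop, fitness)             # step 3: selection function
        pop = reproduce(parents)                   # step 4: genetic operators
        fitness = [evaluate(x) for x in pop]       # step 5: evaluation function
        best = max(best, max(zip(fitness, pop)))
    return best                                    # step 8: best solution found
```

Any concrete choice of selection and reproduction functions (such as those described in Sections 2.2 and 2.3) can be plugged into this skeleton.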
2.1 Solution Representation 
For any GA, a chromosome representation is needed to describe each individual in the population of interest. The representation scheme determines how the problem is structured in the GA and also determines the genetic operators that are used. Each individual or chromosome is made up of a sequence of genes from a certain alphabet. An alphabet could consist of binary digits (0 and 1), floating point numbers, integers, symbols (e.g., A, B, C, D), matrices, etc. In Holland's original design, the alphabet was limited to binary digits. Since then, problem representation has been the subject of much investigation. It has been shown that more natural representations are more efficient and produce better solutions [Michalewicz 1994]. One useful representation of an individual or chromosome for function optimization involves genes or variables from an alphabet of floating point numbers with values within the variables' upper and lower bounds. Michalewicz [Michalewicz 1994] has done extensive experimentation comparing real-valued and binary GAs and shows that the real-valued GA is an order of magnitude more efficient in terms of CPU time. He also shows that a real-valued representation moves the problem closer to the problem representation, which offers higher precision with more consistent results across replications [Michalewicz 1994].
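Under a float representation, creating a population is simply a matter of sampling each gene uniformly within its variable's bounds. As an illustrative Python sketch (the toolbox's own initialization routines are Matlab functions):

```python
import random

def init_population(n, bounds):
    """Build n real-valued chromosomes; gene i is drawn from U(a_i, b_i),
    where bounds[i] = (a_i, b_i) are the variable's lower/upper bounds."""
    return [[random.uniform(a, b) for (a, b) in bounds] for _ in range(n)]
```

For example, `init_population(50, [(-5.12, 5.12)] * 3)` creates 50 three-variable individuals.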
2.2 Selection Function 
The selection of individuals to produce successive generations plays an extremely important role in a genetic algorithm. A probabilistic selection is performed based upon the individual's fitness such that the better individuals have an increased chance of being selected to reproduce into the next generation. An individual in the population can be selected more than once, with all individuals in the population having a chance of being selected. There are several schemes for the selection process: roulette wheel selection and its extensions, scaling techniques, tournament, elitist models, and ranking methods [Goldberg 1989; Michalewicz 1994].
A common selection approach assigns a probability of selection, P_j, to each individual j based on its fitness value. A series of N random numbers is generated and compared against the cumulative probability, C_i = sum_{j=1}^{i} P_j, of the population. The appropriate individual, i, is selected and copied into the new population if C_{i-1} < U(0,1) <= C_i. Various methods exist to assign probabilities to individuals: roulette wheel, linear ranking, and geometric ranking.
Roulette wheel, developed by Holland [Holland 1975], was the first selection method. The probability, P_i, for each individual is defined by:

P[Individual i is chosen] = F_i / sum_{j=1}^{PopSize} F_j,    (1)

where F_i equals the fitness of individual i. The use of roulette wheel selection limits
the genetic algorithm to maximization since the evaluation function must map the solutions to a fully ordered set of values on R^+. Extensions, such as windowing and scaling, have been proposed to allow for minimization and negativity.
Ranking methods only require the evaluation function to map the solutions to a partially ordered set, thus allowing for minimization and negativity. Ranking methods assign P_i based on the rank of solution i when all solutions are sorted. Normalized geometric ranking [Joines and Houck 1994] defines P_i for each individual by:

P[Selecting the ith individual] = q'(1 - q)^(r-1),    (2)

where
q  = the probability of selecting the best individual,
r  = the rank of the individual, where 1 is the best,
P  = the population size,
q' = q / (1 - (1 - q)^P).

Tournament selection, like the ranking methods, only requires the evaluation function to map solutions to a partially ordered set; however, it does not assign probabilities. Tournament selection works by selecting j individuals randomly, with replacement, from the population, and inserting the best of the j into the new population. This procedure is repeated until N individuals have been selected.
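To make the selection schemes concrete, the following is an illustrative Python sketch of roulette wheel probabilities (equation 1), normalized geometric ranking probabilities (equation 2), the cumulative-probability draw, and tournament selection. The names and structure are ours, not the toolbox's Matlab interface.

```python
import random

def roulette_probs(fitness):
    """Eq. (1): P_i = F_i / sum_j F_j; requires positive fitness values."""
    total = sum(fitness)
    return [f / total for f in fitness]

def geometric_probs(pop_size, q=0.1):
    """Eq. (2): P_r = q'(1-q)^(r-1), rank r = 1 is the best individual,
    with q' = q / (1 - (1-q)^P).  Assumes the population is sorted
    best-first before the cumulative draw."""
    q_prime = q / (1 - (1 - q) ** pop_size)
    return [q_prime * (1 - q) ** (r - 1) for r in range(1, pop_size + 1)]

def cumulative_pick(pop, probs, n):
    """Draw n individuals: compare U(0,1) against C_i = sum_{j<=i} P_j."""
    cum, c = [], 0.0
    for p in probs:
        c += p
        cum.append(c)
    out = []
    for _ in range(n):
        u = random.random()
        out.append(next((x for x, ci in zip(pop, cum) if u <= ci), pop[-1]))
    return out

def tournament_select(pop, fitness, n, j=2):
    """Pick j individuals at random with replacement; keep the best of the j.
    Repeat until n individuals have been selected."""
    out = []
    for _ in range(n):
        idx = [random.randrange(len(pop)) for _ in range(j)]
        out.append(pop[max(idx, key=lambda i: fitness[i])])
    return out
```

Note that both probability vectors sum to one, so the same cumulative draw serves roulette wheel and ranking selection.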
2.3 Genetic Operators 
Genetic Operators provide the basic search mechanism of the GA. The operators are used to create new solutions based on existing solutions in the population. There are two basic types of operators: crossover and mutation. Crossover takes two individuals and produces two new individuals while mutation alters one individual to produce a single new solution. The application of these two basic types of operators and their derivatives depends on the chromosome representation used. 
Let X and Y be two m-dimensional row vectors denoting individuals (parents) from the population. For X and Y binary, the following operators are defined: binary mutation and simple crossover.
Binary mutation flips each bit in every individual in the population with probability p_m according to equation 3:

x'_i = { 1 - x_i,  if U(0,1) < p_m
       { x_i,      otherwise            (3)

Simple crossover generates a random number r from a uniform distribution between 1 and m and creates two new individuals according to equations 4 and 5:

x'_i = { x_i,  if i < r
       { y_i,  otherwise                (4)

y'_i = { y_i,  if i < r
       { x_i,  otherwise                (5)
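For illustration only, equations 3-5 can be sketched in Python (the toolbox implements them as Matlab functions):

```python
import random

def binary_mutation(x, pm):
    """Eq. (3): flip each bit independently with probability pm."""
    return [1 - xi if random.random() < pm else xi for xi in x]

def simple_crossover(x, y):
    """Eqs. (4)-(5): choose a cut point r uniformly from 1..m and
    exchange the tails of the two parents."""
    r = random.randint(1, len(x))
    x2 = [xi if i < r else yi for i, (xi, yi) in enumerate(zip(x, y))]
    y2 = [yi if i < r else xi for i, (xi, yi) in enumerate(zip(x, y))]
    return x2, y2
```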
Operators for real-valued representations, i.e., an alphabet of floats, were developed by Michalewicz [Michalewicz 1994]. For real X and Y, the following operators are defined: uniform mutation, non-uniform mutation, multi-non-uniform mutation, boundary mutation, simple crossover, arithmetic crossover, and heuristic crossover. Let a_i and b_i be the lower and upper bound, respectively, for each variable i.
Uniform mutation randomly selects one variable, j, and sets it equal to a uniform random number U(a_i, b_i):

x'_i = { U(a_i, b_i),  if i = j
       { x_i,          otherwise        (6)

Boundary mutation randomly selects one variable, j, and sets it equal to either its lower or upper bound, where r = U(0,1):

x'_i = { a_i,  if i = j and r < 0.5
       { b_i,  if i = j and r >= 0.5
       { x_i,  otherwise                (7)
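Equations 6 and 7 can be sketched in Python as follows (illustrative only; variable bounds are passed as a list of (a_i, b_i) pairs):

```python
import random

def uniform_mutation(x, bounds):
    """Eq. (6): reset one randomly chosen gene j to U(a_j, b_j)."""
    j = random.randrange(len(x))
    x2 = list(x)
    x2[j] = random.uniform(*bounds[j])
    return x2

def boundary_mutation(x, bounds):
    """Eq. (7): snap one randomly chosen gene j to its lower bound
    if r < 0.5, otherwise to its upper bound."""
    j = random.randrange(len(x))
    x2 = list(x)
    x2[j] = bounds[j][0] if random.random() < 0.5 else bounds[j][1]
    return x2
```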
Non-uniform mutation randomly selects one variable, j, and sets it equal to a non-uniform random number:

x'_i = { x_i + (b_i - x_i) f(G),  if i = j and r_1 < 0.5
       { x_i - (x_i - a_i) f(G),  if i = j and r_1 >= 0.5
       { x_i,                     otherwise               (8)

where

f(G) = (r_2 (1 - G/Gmax))^b,                              (9)

r_1, r_2 = uniform random numbers between (0,1),
G = the current generation,
Gmax = the maximum number of generations,
b = a shape parameter.

The multi-non-uniform mutation operator applies the non-uniform operator to all of the variables in the parent X.
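An illustrative Python sketch of equations 8 and 9 (the toolbox versions are Matlab functions) shows the key property: f(G) shrinks toward 0 as G approaches Gmax, so the operator searches broadly early and fine-tunes late.

```python
import random

def _nonuniform_step(xi, a_i, b_i, G, Gmax, b):
    """Eqs. (8)-(9): move toward a bound by f(G) = (r2*(1 - G/Gmax))**b."""
    r1, r2 = random.random(), random.random()
    f = (r2 * (1.0 - G / Gmax)) ** b
    return xi + (b_i - xi) * f if r1 < 0.5 else xi - (xi - a_i) * f

def nonuniform_mutation(x, bounds, G, Gmax, b=3.0):
    """Eq. (8): perturb one randomly chosen variable j."""
    j = random.randrange(len(x))
    x2 = list(x)
    x2[j] = _nonuniform_step(x[j], bounds[j][0], bounds[j][1], G, Gmax, b)
    return x2

def multi_nonuniform_mutation(x, bounds, G, Gmax, b=3.0):
    """Apply the non-uniform step to every variable of the parent."""
    return [_nonuniform_step(xi, a, bb, G, Gmax, b)
            for xi, (a, bb) in zip(x, bounds)]
```

Because each step moves the gene a fraction f(G) of the distance to one of its bounds, the mutated value always stays within [a_i, b_i].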
Real-valued simple crossover is identical to the binary version presented above in equations 4 and 5. Arithmetic crossover produces two complementary linear combinations of the parents, where r = U(0,1):

X' = r X + (1 - r) Y                              (10)
Y' = (1 - r) X + r Y                              (11)

Heuristic crossover produces a linear extrapolation of the two individuals; it is the only operator that utilizes fitness information. A new individual, X', is created using equation 12, where r = U(0,1) and X is better than Y in terms of fitness. If X' is infeasible, i.e., feasibility equals 0 as given by equation 14, then generate a new random number r and create a new solution using equation 12, otherwise stop. To ensure halting, after t failures, let the children equal the parents and stop.

X' = X + r(X - Y)                                 (12)
Y' = X                                            (13)

feasibility = { 1,  if x'_i >= a_i and x'_i <= b_i for all i
              { 0,  otherwise                     (14)
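As a final illustration, equations 10-14 can be sketched in Python (again, names and structure are ours rather than the Matlab toolbox's):

```python
import random

def arithmetic_crossover(x, y):
    """Eqs. (10)-(11): two complementary linear combinations, r = U(0,1)."""
    r = random.random()
    x2 = [r * xi + (1 - r) * yi for xi, yi in zip(x, y)]
    y2 = [(1 - r) * xi + r * yi for xi, yi in zip(x, y)]
    return x2, y2

def feasible(x, bounds):
    """Eq. (14): true iff every gene lies within its bounds."""
    return all(a <= xi <= b for xi, (a, b) in zip(x, bounds))

def heuristic_crossover(x, y, bounds, t=3):
    """Eqs. (12)-(13): extrapolate past the fitter parent x (x must be
    better than y); retry up to t times on infeasibility, then let the
    children equal the parents."""
    for _ in range(t):
        r = random.random()
        child = [xi + r * (xi - yi) for xi, yi in zip(x, y)]  # Eq. (12)
        if feasible(child, bounds):
            return child, list(x)                             # Eq. (13): Y' = X
    return list(x), list(y)   # after t failures, children equal parents
```

The arithmetic children are convex combinations of the parents, so for bounded variables they are always feasible; only the heuristic extrapolation needs the feasibility check.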
