… of objects to be queued for subsequent investigation.

Figure 5 illustrates the algorithm. It begins by assigning the initial object to the queue of objects and entering the search loop. Inside the loop the algorithm repeatedly expands the last set of objects checked into its children by applying each operator in O. The set of children is then sorted so that better objects are at the front of the queue. Only the best b elements are transferred into the processing queue to be expanded in the next iteration.

    function Beam-Search(root, beam-length);
    begin
      obj-queue := root;
      while not(Tester(head(obj-queue)))
      begin
        for obj in obj-queue do
          for opr in O do push(opr(obj), next-queue);
        sort(next-queue);
        obj-queue := null;
        for i from 1 to beam-length do
          push(pop(next-queue), obj-queue);
      end;
      return head(obj-queue);
    end;

Figure 5: Algorithm for beam search. obj-queue and next-queue are standard LIFO queues with standard operators head, push and pop. The sort routine is assumed to use Tester to determine the ordering so that the objects closest to being solutions are at the front of the queue. The space is assumed to be a tree.

For a given b << |S|, beam search does not generate all positions of the space. At some point the search must prune some object from the search, and thus no child of that node is ever investigated. Thus beam search does not possess the completeness criterion for a good generator. However, it does make use of information during the search to manipulate how it traverses the space. It would appear that beam search trades the completeness of breadth-first search for efficiency and gains the ability to make an informed traversal of the space. In search spaces where the evaluation function is a good indicator of the chance that the desired structure is a child of the current object, beam search is an extremely efficient search method.
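To make Figure 5 concrete, here is a minimal runnable sketch in Python. Plain lists and slicing stand in for the figure's queue operators, and lower scores are taken to mean "closer to a solution"; the operator set, Tester and evaluation function in the toy run at the end are hypothetical illustrations, not part of the original figure.

    from typing import Callable, List, Sequence, TypeVar

    Obj = TypeVar("Obj")

    def beam_search(root: Obj,
                    beam_length: int,
                    operators: Sequence[Callable[[Obj], Obj]],
                    tester: Callable[[Obj], bool],
                    score: Callable[[Obj], float]) -> Obj:
        """Beam search over a tree-structured space, after Figure 5."""
        obj_queue: List[Obj] = [root]
        while not tester(obj_queue[0]):
            # Expand every queued object with every operator in O.
            next_queue = [opr(obj) for obj in obj_queue for opr in operators]
            # Sort so the objects closest to being solutions come first
            # (here, lower score means closer)...
            next_queue.sort(key=score)
            # ...and retain only the best beam-length candidates.
            obj_queue = next_queue[:beam_length]
        return obj_queue[0]

    # Toy run: reach 42 from 1 with the hypothetical operator set
    # {n+1, 2n, 3n}, using distance to the target as the evaluation.
    ops = [lambda n: n + 1, lambda n: 2 * n, lambda n: 3 * n]
    print(beam_search(1, 3, ops, lambda n: n == 42, lambda n: abs(42 - n)))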
2.4.2  Principles of Standard Weak Methods

Weak methods are not intended to be used directly as solutions, but as foundations from which strong methods can be built (Rich 1982). This is because:

    [A weak method's] efficiency when applied to particular problems is often highly dependent on the way they exploit domain-specific knowledge since in and of themselves they are unable to overcome the combinatorial explosion to which search processes are so vulnerable. (Rich 1982, p. 55)

This strong association between weak and strong methods is designed so that, as additional knowledge is engineered out of the task, it can be used to avoid investigating unpromising objects, thus increasing the efficiency of the search (Winston 1983). This is chiefly accomplished through using the information from the evaluation function more explicitly, through modifying the operator set which defines the connectivity of the space, or through adding heuristic functions that estimate the potential of a solution appearing below the current object.

The association of more efficient search with additional knowledge suggests that these methods encode a minimal or non-existent commitment to credit assignment. In breadth-first and depth-first search, credit assignment is ignored. Each of these methods assumes that stronger approaches for deciding what components of an object to alter during the search come from sources other than the basic search space traversal. In hill climbing and beam search, a simple heuristic for credit assignment is used: try all available operators from the current position(s) in the search space and keep the best of the resulting objects. This is best described as a greedy approach to credit assignment.
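This greedy policy is compact enough to state directly. The following sketch is one plausible rendering of hill climbing as just described, not an algorithm given in the text; the operator set and scoring function in the toy run are hypothetical.

    from typing import Callable, Sequence, TypeVar

    Obj = TypeVar("Obj")

    def hill_climb(start: Obj,
                   operators: Sequence[Callable[[Obj], Obj]],
                   score: Callable[[Obj], float]) -> Obj:
        """Greedy credit assignment: try every operator on the current
        object, keep the single best child, and stop when no child
        improves on its parent."""
        current = start
        while True:
            children = [opr(current) for opr in operators]
            best = min(children, key=score)
            if score(best) >= score(current):
                return current  # local optimum: no operator improves the object
            current = best

    # Toy run: minimize (x - 7)^2 with the operator set {x+1, x-1}.
    ops = [lambda x: x + 1, lambda x: x - 1]
    print(hill_climb(0, ops, lambda x: (x - 7) ** 2))  # prints 7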
Another interesting property of weak methods is that they generally apply each operator available to the current object before determining how to proceed. Thus, these methods expand every child associated with a particular node before focusing on a new node of the space. Given this, in order for these methods to be tractable and perform better than random search, the connectivity of the space must be relatively sparse. This observation dictates several properties of the operators that organize the space. First, the number of operators must be small; otherwise the number of children would be a significant portion of the space and the organization provided by the operators would be worthless. In addition, each operator must be one-to-one, i.e. it must map from a single point in the space to at most one other point in the space, since an operator is applied only once to an object. This implies that the operators used by weak methods must be deterministic.

These properties of weak methods reflect the fact that these search methods are inspired by observations of human problem solving. In that environment, a human's activities are necessarily sequentially ordered and appear to be deterministic, especially when solving problems that are routine. The characterization is that of collecting the possible choices available at the current time, thinking about what to do given this collection of choices and making a decision of how to proceed. Such a model is sufficient for problems where knowledge is readily available and the number of choices from a particular position in the search space is on average small.

2.4.3  Representation of Task Environment in Strong Methods

The task environment is represented in the knowledge that is entered into the weak method to make it strong. If the problem solver's task description is incomplete or the nuances of the task environment eluded the knowledge engineer, then the task description may not be sufficient to solve the problem. This explicit knowledge leads to the problems reviewed in the previous chapter and eventually reduces AI to the search for the "perfect" task environment representation. This representation serves as a model of the task environment and provides an interpretation of the incoming information. If the wrong model of the task is selected, the complexity of the problem in terms of the algorithm can be greater than necessary (Kolen and Pollack 1993, 1994).

Models of the task environment in strong methods often take the form of a knowledge-base within the problem solver. These knowledge-bases encode the task model in any number of representations. The important aspect of the knowledge in the knowledge-base is that it is specific to the task. This means that the motivation for a particular component of the knowledge-base comes solely from the programmer's modeling of the task environment. Taken in total, the complete knowledge-base represents a model of the task environment that is internal to the problem solver. Without the motivations of the task environment, the task-specific knowledge that comprises the task model appears as arbitrary associations.
2.5  Evolutionary Algorithms

In general, the weak methods described in the previous sections are generalizations inspired by observations of human problem solving. However, there are many sources of "intelligence" and "search" that can be used to inspire novel search algorithms. One alternative set has come from analogies to physical processes. For instance, Kirkpatrick, Gelatt and Vecchi (1983) use an analogy to physical settling systems for simulated annealing. They show that this method can be used to quickly acquire near optimal solutions for difficult, even NP-Hard, problems. Another physical model, called elastic networks (Durbin and Willshaw 1985), uses spring equations and a model of gravity to iteratively approach near optimal solutions for similarly difficult problems. Still another method involving a variant of neural networks, called a Hopfield network (Hopfield 1982; Hopfield and Tank 1985), has been shown to perform in a similar manner. Each of these methods centers on an optimization model that is searching for a minimal "energy" state. Such settling methods work well even when the search space is extremely large and convoluted.

Another possibility is to model the process that gave rise to the intelligent activity of humans as a search and learning method. Analogies to evolution for problem solving, search, function optimization and learning have been proposed under the names genetic algorithms (Holland 1975; Goldberg 1989a), evolutionary programming (Fogel, Owens and Walsh 1966; Fogel 1992), evolution strategies (Rechenberg 1973; Schwefel 1981; Bäck, Hoffmeister and Schwefel 1991) and others. Figure 6 shows the generic algorithmic shell for an evolutionary algorithm. This algorithm receives as input an array of search space objects, called a population, and the number of objects in the array. Like beam search (Rich 1982), this algorithm limits the number of objects that it can retain from one iteration of inspection to another.[1] In general, the initial population is generated to be uniformly distributed in the search space.

    function Evolutionary-Algorithm(population, size);
    begin
      for i from 1 to size do
        fitness[i] := evaluate(population[i]);
      while not(Tester(bestof(population, fitness))) do
      begin
        Select(population, fitness);
        Reproduce(population);
        for i from 1 to size do
          fitness[i] := evaluate(population[i]);
      end;
      return bestof(population, fitness);
    end;

Figure 6: Generic algorithm for evolutionary algorithms.

On each iteration of the search, each of the objects in the population is evaluated and the best is checked to see if it fulfills the criteria for halting by the Tester function. If not, a subset of the population is selected, based on their fitness, to be the basis of the ensuing population by the Selection function. Copies of these objects replace objects in the population that were not selected for reproduction so that the population size remains constant. Next, the objects in the population are manipulated by applying the reproductive operators. Objects from the previous population are called parents while objects created by applying the reproduction operators to the parents are called offspring. Each cycle of select-modify-evaluate is called a generation of the search.

[1] Thanks to Greg Saunders for bringing this to my attention.
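A minimal executable rendering of Figure 6 in Python follows. Since, as discussed below, the evolutionary algorithms differ precisely in how selection and reproduction are realized, the Select and Reproduce steps here are illustrative stand-ins (fitness-proportionate resampling and a unary mutation operator), and the max_generations cap is an added safeguard not present in the figure; the OneMax toy run is likewise hypothetical.

    import random
    from typing import Callable, List, TypeVar

    Obj = TypeVar("Obj")

    def evolutionary_algorithm(population: List[Obj],
                               evaluate: Callable[[Obj], float],
                               tester: Callable[[Obj], bool],
                               mutate: Callable[[Obj], Obj],
                               max_generations: int = 1000) -> Obj:
        """Generic shell after Figure 6, with stand-in Select/Reproduce."""
        size = len(population)
        fitness = [evaluate(obj) for obj in population]
        for _ in range(max_generations):
            best = max(range(size), key=lambda i: fitness[i])
            if tester(population[best]):
                break
            # Select: fitness-proportionate resampling (assumes fitness >= 0,
            # not all zero); copies of fitter parents replace the unselected,
            # so the population size stays constant.
            population = random.choices(population, weights=fitness, k=size)
            # Reproduce: parents yield offspring via the reproductive operator.
            population = [mutate(parent) for parent in population]
            fitness = [evaluate(obj) for obj in population]
        best = max(range(size), key=lambda i: fitness[i])
        return population[best]

    # Toy run: the OneMax problem, evolving a 10-bit string of all ones.
    def flip_one_bit(bits: List[int]) -> List[int]:
        i = random.randrange(len(bits))
        return bits[:i] + [1 - bits[i]] + bits[i + 1:]

    pop = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]
    print(evolutionary_algorithm(pop, sum, lambda b: sum(b) == 10, flip_one_bit))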
The various evolutionary algorithms differ on the model of evolution assumed, how the surviving population members are selected, the types of fitness functions used and specific features that are more definitional. The remainder of this chapter explores these distinctions between evolutionary algorithms and tries to sort out the essential differences from those that are non-essential.

2.5.1  Evolutionary Models

The three major variations of evolutionary algorithms differ chiefly in their assumed evolutionary models, i.e., what aspects of evolution are important to model to achieve analogous computational effects (Fogel 1994). Genetic algorithms (GA) (Holland 1975; Goldberg 1989a), evolution strategies and evolutionary programming model evolution at different abstract levels. Genetic algorithms emphasize the acquisition of structures. This leads to modeling evolution at the granularity of genetics, i.e. at the level of manipulating the object's stru…
