and thus is not directly accessible to the problem solver, all knowledge required for a task must be provided a priori. If the explicit knowledge is incomplete, the system is brittle. When the explicit knowledge grows too large, it must be indexed by task-specific features for quick retrieval. If the problem solver is to learn, a method of credit assignment for the chosen representation of the task must be provided.

In order to avoid the above problems of AI techniques, the reliance on explicit knowledge must be reduced. This involves the identification of new techniques that can be fully defined independently of the task. In other words, in order to remove the task-specific knowledge, a problem solver must use task-independent representations and operations for problem solving and must acquire whatever task-specific knowledge is necessary to solve the problem while problem solving.

Clearly, traditional weak methods are inapplicable for the definition of a task-independent problem solver since they invariably assume task-specific knowledge and operators to complete their definition. Further, they provide no mechanism to gather knowledge during the problem solving process. Explanation-based learning (EBL) methods (De Jong and Mooney 1986; Minton et al. 1989; Schank and Leake 1989) are a contemporary learning paradigm for the identification of task-specific knowledge during problem solving.
These systems require a task-specific domain theory from which to construct explanations about single examples of problem solving and deduce appropriate rules. Like most other learning methods, the objective of EBL is merely to automate the acquisition of explicit task-specific knowledge. In other words, the broad goal of learning methods is merely to automate the transduction of knowledge from the task environment into the problem solver. While this reduces the brittleness of the knowledge base, it does not address the other problems associated with explicit knowledge described above. In fact, simply appealing to learning as a solution to the problems of explicit knowledge, as is generally done, is inconsequential if the methods of learning rely on task-specific knowledge to perform.

The hypothesis of this dissertation is that the problems that arise from the usage of explicit knowledge indicate that literal knowledge level models may be an inappropriate level at which to describe a general computational intelligence, and as a result AI must investigate techniques that do not rely on internal task models to perform general problem solving. Such methods must be generally applicable, avoid relying on explicit knowledge, and allow the same sorts of processing as in strong methods, but without an internal task model. Evolutionary algorithms provide such features.

1.4  Evolutionary Algorithms

Evolutionary algorithms are techniques for search and optimization inspired by natural evolution.
These techniques include genetic algorithms (Holland 1975; Goldberg 1989a), evolutionary programming (Fogel, Owens and Walsh 1966; Fogel 1992a) and evolution strategies (Rechenberg 1973; Schwefel 1981; Bäck, Hoffmeister and Schwefel 1991), and are distinguished by their parallel investigation of several positions of a search space by manipulating a population of problem solutions simultaneously. In this dissertation, these techniques are generalized into a novel weak method, termed the evolutionary weak method, that bears comparison to standard weak methods but encodes some interesting differences.

Evolutionary weak methods provide several beneficial attributes for exploring task-independent problem solving. First, they model the task environment as a separate evaluation function, called a fitness function, that maps an individual of the population into a real number. The problem solver only sees the result of this function as feedback for the members of the population. Such minimal feedback provides a strong separation between the task environment and the problem solver. This problem set-up is akin to system identification or reinforcement learning. Second, the operators in evolutionary algorithms are representation-specific rather than task-specific.
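A minimal sketch of this separation, under assumed details (a bit-string representation and a "one-max" stand-in task, neither of which is specific to this dissertation): the problem solver applies a representation-specific operator to an individual and receives only a scalar back from the task environment.

```python
import random

# Stand-in task environment (an assumption made for illustration): the
# problem solver never inspects this definition, only the scalar it returns.
def fitness(bits):
    return sum(bits)  # "one-max": reward the number of 1 bits

# Representation-specific operator: defined for bit strings in general,
# with no reference to what the bits mean to any particular task.
def mutate(bits, rate=0.1):
    return [b ^ 1 if random.random() < rate else b for b in bits]

individual = [random.randint(0, 1) for _ in range(16)]
feedback = fitness(individual)  # the only signal crossing the separation
variant = mutate(individual)
```

Note that `mutate` would apply unchanged to any other task that happened to encode its solutions as bit strings; only `fitness` would differ.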
In other words, the operators are defined for a specific representation, such as bit strings or LISP expressions, independently of how the task environment interprets that representation. The simplicity and task-independent nature of these algorithms ensures their general applicability across many problem types.

The most important distinction between standard weak methods and evolutionary algorithms is a task-independent empirical approach to local credit assignment, which is termed empirical credit assignment. Empirical credit assignment works by creating novel structural variations in the population through probabilistic application of the representation-specific operators. Subsequent interaction between the population and the task environment, represented as the fitness function, determines the viability of the modifications and the abstract features they encode. While the search progresses, the population preserves the best solutions found and attempts additional manipulations. As the population becomes filled with progressively better members, the search is constrained into areas of the search space that are dense with features previously found to be applicable to the task. Thus, empirical credit assignment allows evolutionary algorithms to adapt their search in the problem space dynamically. As a result, an evolutionary algorithm that solves the task appears to an observer to contain explicit task knowledge.
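The cycle just described (probabilistic variation, evaluation against the task environment, preservation of the best members) can be sketched as a minimal loop. The bit-string encoding, truncation-style survival, and parameter values below are illustrative assumptions, not the specific algorithms studied in later chapters:

```python
import random

def evolve(fitness, length=16, pop_size=16, generations=60, rate=0.05):
    """Minimal evolutionary loop. Nothing task-specific appears here;
    all knowledge of the task is hidden inside `fitness`."""
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Create novel structural variations through probabilistic
        # application of a representation-specific operator (bit-flip).
        offspring = [[b ^ 1 if random.random() < rate else b for b in parent]
                     for parent in population]
        # Interaction with the task environment determines which
        # variations remain viable: empirical credit assignment.
        survivors = sorted(population + offspring, key=fitness, reverse=True)
        population = survivors[:pop_size]  # preserve the best solutions found
    return population[0]
```

Run against the one-max stand-in, `evolve(fitness=sum)` drifts toward the all-ones string even though the loop itself never states what the task is; the apparent "knowledge" of the goal lives entirely in the accumulated outcomes of evaluation.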
Because no explicit knowledge exists in the evolutionary algorithm, this knowledge must emerge from the interaction of the simple problem solver and the task environment.

1.5  Emergent Intelligence

Based on the abilities of empirical credit assignment, this dissertation offers an alternative to the explicit representation of task knowledge in a problem solver that has potential to impact the central problems of knowledge-based AI. Emergent intelligence (EI) relies on the following two assumptions about computational problem solving:

1. The task environment itself is often a more concise representation for knowledge specific to the task than any internal representation of explicit knowledge.

2. Direct interaction of a simple problem solver with the task environment permits the task environment's inherent constraints to be expressed naturally in the problem solver during the problem solving process. As a result, pertinent task-specific knowledge emerges from the interaction of the problem solver with the innate constraints of the task environment.

Emergent intelligence avoids each of the problems associated with explicit knowledge described above by removing the source of the problems: the explicit knowledge.
Since knowledge pertinent to the specific problem solving situation emerges from interaction with the task environment, there is no need to formulate any task-specific indexing scheme, credit assignment or acquisition method. The knowledge that emerges is dependent on the particular task being solved, since it emerges from direct interaction with the task environment, so small changes in the task result in small changes in the knowledge that emerges rather than brittleness. To paraphrase an old saying, one task environment is worth a thousand heuristics (or more).

1.6  Demonstrations of Emergent Intelligence

The experiments within this dissertation illustrate four distinct examples of emergent intelligence exhibited by evolutionary algorithms. The examples cover the acquisition of task-specific connectionist network architectures, task-directed component manipulation while inducing finite-state machines, the acquisition of modular computer programs that create appropriate high-level abstractions, and implicit goal-directed progression. Each of these effects is shown to emerge from the interaction of a simple evolutionary algorithm and the task environment without the assistance of explicit knowledge. This section describes how each of these problems is addressed as an instance of emergent intelligence.
Currently, there is no known general solution for inducing both an appropriate architecture and parametric values for a neural network. Current methods either assume an architecture as given and learn only the weights, or use approximate heuristics that force a task into a restricted architecture rather than fit an architecture to the task. The difficulty with inducing neural network architectures is that they are highly specific to the task being solved. This is especially true in the case of recurrent networks. Chapter 4 of this dissertation describes a program, GNARL, originally described in Angeline, Saunders and Pollack (1994), that induces both the topology and parametric values for recurrent neural networks. This is done without explicit knowledge that describes what constitutes an appropriate architecture for the task. GNARL begins with only a population of randomly generated networks and an evaluation function that rates a network's performance on a pre-specified task. The results show GNARL is able to induce networks in comparable time and with better generalization than a method that assumes a fixed topology.

Chapter 5 describes an instance of emergent intelligence where the interaction of the evolutionary algorithm with the task environment determines which components of finite state machines (FSMs) should be manipulated while solving a simple control task. The program is similar to GNARL except that it also employs two additional operators that preserve randomly selected components from further manipulation.
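A hypothetical sketch of what such protection operators might look like (the actual operators are defined in Chapter 5; the dictionary-based FSM encoding and parameter values here are assumptions made for illustration):

```python
import random

def protect_random_component(fsm):
    """Mark one randomly chosen transition as exempt from manipulation."""
    key = random.choice(list(fsm["transitions"]))
    fsm["protected"].add(key)
    return fsm

def mutate_transitions(fsm, states, rate=0.2):
    """Reassign unprotected transitions at random; protected ones stay frozen."""
    for key in fsm["transitions"]:
        if key not in fsm["protected"] and random.random() < rate:
            fsm["transitions"][key] = random.choice(states)
    return fsm

# An FSM as (state, input) -> next-state entries, plus a set of frozen keys.
fsm = {"transitions": {("A", "0"): "B", ("A", "1"): "A",
                       ("B", "0"): "A", ("B", "1"): "B"},
       "protected": set()}
```

Note that the operator itself never judges whether a frozen component was worth keeping; that judgment is made empirically, because individuals that froze useful components out-compete those that froze harmful ones.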
The appropriateness of the protected component is determined through future interactions of the population with the task environment. In essence, the knowledge of which components to protect and which to manipulate emerges naturally as a result of the interaction with the simple control task. An interesting side-effect of the program is that it induces the FSMs more quickly with the added random operators than without. The experiments of this chapter also appear in Angeline and Pollack (1993c).

Another program, GLiB, described in Chapter 6, induces modular computer programs without explicit knowledge of what makes a beneficial module in general or for a specific task. Angeline and Pollack (1993a; 1992) describe initial explorations with this program. GLiB uses operators that randomly select portions of a program to be defined as a new module. As in GNARL's determination of appropriate architectures, the validity of a module is induced naturally through future interactions with the task environment. The resulting modules reflect the presence of knowledge that is a natural constraint of the task environment. GLiB's modules are thus highly dependent on the task. An interesting side effect of this program is that the modules represent task-specific abstractions from the initial general programming language.
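One way such a module-creating operator might look (a hedged sketch only: GLiB's actual representation and operators are detailed in Chapter 6, and the nested-list program encoding and naming scheme here are assumptions): a randomly chosen subexpression is recorded as a named module and replaced by an opaque reference, removing its internals from further manipulation.

```python
import random

def subtrees(expr, path=()):
    """Yield (path, subtree) for every compound subexpression."""
    if isinstance(expr, list):
        yield path, expr
        for i, child in enumerate(expr):
            yield from subtrees(child, path + (i,))

def compress_random_subtree(program, modules):
    """Turn a random non-root subtree into a named, opaque module."""
    candidates = [(p, s) for p, s in subtrees(program) if p]  # skip the root
    if not candidates:
        return program
    path, body = random.choice(candidates)
    name = "module%d" % len(modules)
    modules[name] = body            # record the module's definition
    node = program
    for i in path[:-1]:
        node = node[i]
    node[path[-1]] = name           # the subtree is now a single unit
    return program
```

Because the replaced subtree now moves as a single unit under the ordinary search operators, the sketch also illustrates the point above: the abstraction changes not just the representation but what the operators can subsequently manipulate.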
This automatic process aids in the discovery of larger structures by abstracting not only the representation but the search operators as well.

Given that the task environment is a crucial component of a problem solver using emergent intelligence, Chapter 7 explores the implications of differing task environments on problem solving. Here, it is shown, again using GLiB, that introducing direct competition in the evaluation of population members leads to a natural progression towards complex solutions. This is interpreted as emergent goal-directed behavior since explicit knowledge of the goal and associated subgoals is absent from GLiB. This chapter is an expansion of the work originally published in Angeline and Pollack (1993b).

1.7  Scope of this Dissertation

This work is concerned with the latter of the two goals attributed to AI above, namely creating programs that perform as well as or better than humans on tasks, but not necessarily using humans as a model. I will be content to demonstrate that explicit knowledge, while sufficient, is not necessary for a problem solver to perform intelligently on a task. The determination that the reduced reliance on explicit knowledge actually alleviates the various problems described above is left for future work.

Because of the insistence on no explicit task-specific knowledge, the techniques presented in this dissertation do not perform as well as other techniques that use task knowledge when it is available. This is not a limitation of the technique but a direct result of the assumed knowledge-poor experimental paradigm.
Further, there is no a priori reason why the techniques investigated here cannot be integrated with explicit task knowledge. This has not been done in this work since such knowledge would only obfuscate the effects of the techniques under investigation and their ability to address problems where no explicit knowledge is available.

Other limitations of the following work stem from its opportunistic nature. The basic evolutionary mechanism is extremely opportunistic, and typically exploits organizations that are challenging to analyze. This is similar to the difficulty of analyzing the distributed representations (Hinton, McClelland and Rumelhart 1986; Sejnowski and Rosenberg 1987; Smolensky 1988) of some connectionist networks trained by simple gradient descent methods such as backprop (Rumelhart, Hinton and Williams 1986). As a result, the decompositions or organizations acquired by these techniques do not relate well to the normative solutions preferred by humans. In addition, indirect routes of verification are often needed to demonstrate an effect.

While the opportunistic nature of these methods presents difficulties in fully interpreting the results of an experiment, it may ultimately pose an advantage for automatic problem solving.
By shedding the preconceptions of humans and becoming more opportunistic, the resulting structures may provide computational advantages that our human-centric preconceptions otherwise obfuscate.

1.8  Contributions of this Work

Instead of relying on the analysis of a task to supply explicit knowledge for its solution
