chapter8.ps
within an intelligent agent, it was proposed that the interaction of a simple agent with the environment could also give rise to intelligent behavior without associated knowledge being explicitly represented within the agent or the environment. Thus, the knowledge that determines the intelligent behavior emerges out of the interaction of the simple agent and the environment, a computational theme that goes back at least to Simon (1969) and is the foundation of situated activity (Greeno 1993; Clancey 1993). The techniques proposed in this dissertation extend this work by demonstrating that interactions between a general problem solver and the task environment can also determine task-specific modifications of the problem solver's internal representations. I dubbed this method of AI emergent intelligence and I argued that at the knowledge level, a knowledge-based AI agent and an agent whose knowledge emerges are indistinguishable. Because emergent intelligent systems appear similar to knowledge-based AI methods at the knowledge level but contain no explicit task knowledge, the knowledge in these systems must emerge at the abstract level from the interaction of the simple problem solver with the task environment. Following the identification and characterization of emergent intelligence, the next four chapters demonstrated this concept through experimentation.

Chapter 4 described an experiment where an evolutionary algorithm was able to induce an appropriate neural network architecture for several tasks without task-specific knowledge. I argued that the heuristics typically used by connectionist researchers to induce an architecture are overly simplistic in their approach. GNARL, the evolutionary algorithm used in these experiments, created appropriate architectures for several tasks while inducing the parametric values simultaneously. GNARL was shown to induce complete networks for several language learning tasks in comparable time to a second method that only induced weights for a designed architecture. In these experiments, the generalization of the networks created by GNARL was consistently better. In another experiment, GNARL induced networks to perform a simple control task. These experiments demonstrated that two networks induced by GNARL might solve the same task in much the same manner although their method of achieving the solution differs significantly.

The experiments of Chapter 4 illustrate emergent intelligence at a simple and straightforward level. Appropriate architectures for the networks are induced by repeatedly manipulating them with representation-specific operators and testing them in the actual task environment.
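The simultaneous search over architectures and parameters can be illustrated with a minimal sketch. This is my own illustration, not GNARL's actual code: the dict representation, the mutation rates, and the `temperature` severity parameter are all assumptions made for the example.

```python
import random

# Illustrative sketch (not GNARL's implementation): a network is a set of
# hidden nodes plus weighted links, and a single mutation step may change
# both the parameters and the architecture.

def mutate(network, temperature=1.0):
    """Return a structurally and parametrically perturbed copy of `network`.

    `network` is a dict: {"nodes": [...], "links": {(src, dst): weight}}.
    `temperature` scales mutation severity, loosely following the idea
    that poorer performers should be perturbed more strongly.
    """
    net = {"nodes": list(network["nodes"]),
           "links": dict(network["links"])}

    # Parametric mutation: jitter every weight with Gaussian noise.
    for edge in net["links"]:
        net["links"][edge] += random.gauss(0.0, 0.1 * temperature)

    # Structural mutation: occasionally add a node, or add/delete a link,
    # so the architecture itself is part of the search space.
    if random.random() < 0.2 * temperature:
        net["nodes"].append(max(net["nodes"], default=0) + 1)
    if len(net["nodes"]) >= 2 and random.random() < 0.2 * temperature:
        src, dst = random.sample(net["nodes"], 2)
        net["links"][(src, dst)] = random.gauss(0.0, 1.0)
    if net["links"] and random.random() < 0.1 * temperature:
        del net["links"][random.choice(list(net["links"]))]
    return net
```

A single operator application can thus change what the network is, not merely what its weights are, which is what allows architecture and parameters to be induced together.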
When a network architecture appears that provides better problem solving accuracy than the average ability of the current population, it and its variants receive increasingly more of the population space and hence more subsequent testing. Because there was no task-specific knowledge available within the problem solver to determine an appropriate network architecture, this knowledge emerged at the more abstract knowledge level without explicit instantiation in the environment or the problem solver.

Chapter 5 furthered the experiments of Chapter 4 by adding two representation-specific operators that protected components of the FSAs evolving in the population. The added operators made only syntactic manipulations of the solutions rather than the typical semantic manipulations. The intention was that the evolutionary algorithm would determine which components of the FSA representation were crucial to solving the task and protect them from manipulation by the other reproduction operators. Experimental results showed that given the same experimental conditions, the evolutionary algorithm with the additional operators identified solutions more quickly.
Rather than requiring an automated analysis of each FSA to determine which components were crucial to solving the task and thus should be protected from manipulation, the interaction of the evolutionary algorithm and the task environment allowed the pertinent knowledge to emerge dynamically.

In Chapter 6, automatic task decomposition was investigated, without an explicit description of the properties of an appropriate decomposition inside the problem solver. Experiments in this chapter evolved modular LISP programs using GLiB, an evolutionary algorithm that evolves a population of LISP programs, to learn both the legal moves and a competitive strategy for solving the Towers of Hanoi problem and playing Tic-Tac-Toe against a rule-based expert. Again, two additional reproduction operators were introduced into the basic evolutionary algorithm that probabilistically selected and defined functions from the programs in the population using only syntactic modifications to the evolving LISP programs. The appropriateness of each randomly defined function was determined by its subsequent proliferation through the population. Beneficial functions were propagated to many population members while inappropriate modules died out with the programs that used them. The contents of the modules induced by the process appeared to be applicable to a broad number of situations within the limitations of representation and performance.
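The purely syntactic character of this module acquisition can be sketched as follows. This is a hypothetical simplification, not GLiB's code: nested Python lists stand in for LISP s-expressions, and the `compress` operator and `mod0`-style names are my own illustrative inventions.

```python
import random

# Hypothetical sketch of syntactic module acquisition: "compress" freezes
# a randomly chosen subexpression into a named module and splices in a
# call to it -- no analysis of what the subtree means is performed.

def subexpressions(expr, path=()):
    """Yield (path, subtree) pairs for every compound subexpression."""
    if isinstance(expr, list):
        yield path, expr
        for i, child in enumerate(expr[1:], start=1):
            yield from subexpressions(child, path + (i,))

def compress(program, modules, rng=random):
    """Replace one random subexpression with a call to a new module."""
    candidates = [(p, s) for p, s in subexpressions(program) if p]
    if not candidates:
        return program
    path, subtree = rng.choice(candidates)
    name = "mod%d" % len(modules)
    modules[name] = subtree          # the module's body is frozen as-is
    node = program
    for i in path[:-1]:
        node = node[i]
    node[path[-1]] = [name]          # splice in a call to the module
    return program
```

Whether the frozen module was worth defining is decided only afterwards, by whether programs that call it proliferate through the population.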
An interesting side effect of these experiments is that induced functions defined task-specific representational abstractions. Each new function, when appropriate for the task, defined a more abstract level of representation above the original language that was tailored to the task. In these experiments, both the applicability of a particular module for solving the task and the abstraction it embodied emerged from the simple problem solver interacting with the task environment.

The final set of experiments, described in Chapter 7, took a closer look at the role of the fitness function in evolutionary algorithms. Standard objective fitness functions are usually simple evaluation functions. However, the numerical value returned by these functions is often arbitrary and is occasionally a liability when an accurate objective function is not easily provided for the task. In addition, evaluation functions are often designed to provide a specific gradient to ensure the success of the learning process. In the experiments of this chapter, the fitness function used in Chapter 6 was varied for the Tic-Tac-Toe task to observe the difference in quality in the discovered modular LISP programs.
The various experts used to evaluate the evolving programs included one that chose its position at random, one that played an optimal strategy for the task, and one identical to the expert used in the experiments of Chapter 6. A fourth fitness function was also tested that used no expert but instead played the population members against one another in a competitive tournament to determine relative rankings. This evaluation function merely identified which of the two programs passed to it was the better player. The results showed that the competitive tournament induced LISP programs that were more robust players than each of the fitness functions that used rule-based experts. I argued that this resulted from an ecology of strategies developing in the population, which provided a pressure for the evolving LISP programs to generalize their performance over a wider array of strategies. Equivalent generalization was not apparent in the programs evolved using the single, non-deterministic rule-based strategies in the fitness function. Thus, a progression toward the implied goal of optimal Tic-Tac-Toe performance emerged from the interaction between population members in the environment without an explicit gradient supplied in the fitness function.

From the experimental results presented in this dissertation, some features of emergent intelligence as implemented by evolutionary algorithms can be identified.
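The competitive-tournament evaluation summarized above can be sketched in miniature. This is an illustration under my own simplifying assumptions, not the experimental code: the only primitive available is a pairwise comparison, and a player's rank is just its win count against the rest of the population.

```python
# Sketch of relative, tournament-based fitness: no absolute score or
# gradient exists anywhere; each pairing only reports which of the two
# players is better.

def tournament_ranks(population, beats):
    """Rank players by round-robin wins.

    `beats(a, b)` returns True when player a beats player b; with real
    evolving game players it would play out a game of Tic-Tac-Toe.
    """
    wins = [0] * len(population)
    for i in range(len(population)):
        for j in range(i + 1, len(population)):
            if beats(population[i], population[j]):
                wins[i] += 1
            else:
                wins[j] += 1
    # Higher win count means a better relative rank within this
    # particular population -- there is no objective fitness value.
    return sorted(range(len(population)), key=lambda i: -wins[i])
```

Because every opponent is itself evolving, the pressure on each player shifts as the population's ecology of strategies shifts, which is the mechanism argued to produce the more robust players.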
First, the dynamics of evolutionary algorithms create a race condition that favors portions of the search space that are dense with near-solutions. For an offspring to be retained as a parent in the subsequent generation, it must perform the task about as well as an average member of the population. Thus, an offspring's performance must be comparable to its parent's. The offspring must also be within the search neighborhood of the parent, where the search neighborhood is defined relative to the reproduction operators. For subsequent offspring to be equally viable, they must again be competitive with the average abilities of the population. Thus, the reproduction operators must have again mapped to a comparable solution within the same search neighborhood. Eventually, a section of the search space begins to fill the population and the search is concentrated in that area. However, it is not necessarily the case that only a single area of the search space is represented in the population. Islands of distinct areas in the search space that are all dense with near-solutions can also develop. The relative amount of space occupied by a particular area of the solution space will be proportional to the density of solutions in the neighborhood.
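The retention dynamic just described can be sketched as a single steady-state step. This is illustrative only, not the dissertation's algorithm: the replace-the-worst rule and the average-fitness threshold are my own simplifications of "about as well as an average member".

```python
import random

# Sketch of the race for representational space: an offspring survives
# only if it is competitive with the population average, so lineages
# persist only in regions dense with near-solutions.

def step(population, fitness, mutate, rng=random):
    """One steady-state step of the retention dynamic."""
    avg = sum(fitness(p) for p in population) / len(population)
    parent = rng.choice(population)
    child = mutate(parent)      # child lies in the parent's operator neighborhood
    if fitness(child) >= avg:   # "about as well as an average member"
        # The region around good parents gradually claims more of the
        # population, and hence more credit and future manipulation.
        worst = min(range(len(population)),
                    key=lambda i: fitness(population[i]))
        population[worst] = child
    return population
```

Because a retained child can only displace a below-average member, the population's average ability never decreases, and dense regions of near-solutions steadily crowd out the rest.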
The race, then, is for representational space in the population and thus for assigned credit and future manipulation. As demonstrated in Chapters 5 and 6, this feature of evolutionary algorithms can be exploited to produce beneficial task-specific syntactic organizations that speed up the acquisition of solutions and determine appropriate task-specific modularizations and high-level abstractions without the use of explicit knowledge.

A second feature of emergent intelligence as implemented by evolutionary algorithms is the opportunistic nature of the acquired structures. In each of the experiments of this dissertation, the structures that evolved displayed little similarity to a human problem solver's preconceptions of how to solve the task. The solutions often took advantage of representational constructions that would be too subtle or too complex for a human to design. This was evident in the experiments with GNARL and the Towers of Hanoi experiment with GLiB. As the representation language became more general, the resulting structures became increasingly exploitative of non-standard representational techniques. For instance, in the Tic-Tac-Toe experiments with GLiB of Chapters 6 and 7, the <test> portion of the conditional in the representational language often contained numerous side effects.
This invariably led to representations that were non-normative for the problem and extremely cumbersome to describe. As mentioned previously, these representations appear similar in spirit to the distributed representations of connectionist networks (Hinton, Rumelhart and McClelland 1986; Sejnowski and Rosenberg 1987; Smolensky 1988), which are also constructed using an opportunistic mechanism. This property is likely to be present in all implementations of emergent intelligence.

Thirdly, emergent intelligence relies on the task environment, implemented as the fitness function in evolutionary algorithms, to work as a filter for organizations that are limited relative to the rest of the population. The abstract features and opportunistic structures within the population are not induced for their objective truth but instead for their relative usefulness in solving the task and their ability to bias future search toward increasingly better solutions. The more useful a structure or feature is in the context of the current population, the more variations of it will be explored in subsequent iterations of the search. This is distinct from the general optimization approach of some intelligent systems. Rather than finding the optimal construction, these methods merely locate one that is useful, i.e., sufficient.
If there are idiosyncrasies to the structure within some tolerance, they are worked around. Often even these idiosyncrasies turn out to be useful, as shown in the Towers of Hanoi experiment of Chapter 6. Hence these techniques are closer to satisficing systems than function optimizers. If optimality is a concern, it can be a criterion inserted into the fitness function for problems when satisficing is unacceptable.

In the future, studies which further illuminate the differences between knowledge-based AI and emergent intelligence are needed. In this dissertation, I claimed that the ills of knowledge-based AI originated from its commitment to explicit knowledge. I then described and demonstrated a specialization of AI that avoided this commitment and still appeared intelligent. This removes the need for explicit knowledge in the problem solver and hence the associated difficulties. However, I did not demonstrate that emergent intelligence can be used to solve the same problems as knowledge-based AI more quickly or efficiently, or that it truly circumvents the classic problems of knowledge-based AI systems. I have no doubt that there are problems for which emergent intelligent techniques are beneficial and others for which they are not. The distinctions between these two classes of problems should provide information about both the limitations of knowledge-based AI methodology and the practice of emergent intelligence.
Similarly, it is clear that not all the knowledge necessary to solve every problem will always be adequately accessible through emergent mechanisms. The distinctions between what types of knowledge are accessible and what types are not should also provide significant insight into these techniques and possibly the nature of human intelligence. Other future work should include the benefits and limitations of inserting optimality constraints into the fitness function and the identification of a theory of environmental feedback.

Knowledge-based AI techniques assume that the knowledge level is the appropriate level at which to model intelligent agents. This level of description is certainly convenient for testing a model of intelligence to ensure that it is consistent with itself and the observations that gave rise to it. But the symbolic level favored by knowledge-based AI is not necessarily the best level at which to formulate a computational intelligence. This straightforward approach to modeling intelligence presents numerous computational difficulties when realistic problems are modeled, due to the transduction of information from the task environment to the problem solver.
Opportunistic methods that dynamically acquire pertinent task information from the environment provide a more general methodology for constructing intelligent systems effectively and circumvent our biases as designers and observers. Emergent intelligence suggests a class of techniques that avoid the modeling commitments of knowledge-based AI while still appearing "as if" they contain equivalent resources. Clarifying the distinctions between knowledge-based AI systems and emergent intelligent systems that perform equivalently, such as those described in this dissertation, will provide significant insight into the nature of human and computational intelligence, the limitations and benefits of computational modeling, and the biases of our scientific observations.