This principle dictates that if the agent contains knowledge that is applicable in the current context and will satisfy one of its goals, then a rational agent will apply the knowledge to achieve the goal.

The simplest manner in which to model observed intelligent behavior is to take the observations at the knowledge level at face value and implement them directly at the symbol level. All that is needed is a representation for knowledge and goals and an algorithmic version of the principle of rationality. Knowledge and goals are extracted through analysis of the task environment. After the knowledge and goals are extracted, weak methods can serve as an appropriate principle of rationality, assuming that control knowledge appropriate for the task is added for efficiency. Newell echoes this modeling approach by suggesting that the knowledge level can be reduced to the symbol level. In this reduction, the knowledge and goals at the knowledge level correspond simply to data structures in the program:

    When [AI researchers] say, as we often do in explaining an action of a program, that the "program knows K" ... we mean that there is some structure in the program that we view as holding K... (Newell 1981, p. 15)

    The discovery, development and elaboration of [the symbol level] to predicting the behavior of an intelligent agent has been what AI has been all about... (Newell 1981, p. 12)

I label such models of intelligence, where there is a direct correspondence between the knowledge described at the knowledge level and the content of the representations at the symbol level, as *literal knowledge level models* of intelligent behavior. Such models represent the majority of knowledge-based AI systems.

It is important to note that what is observed at the knowledge level may not accurately reflect the process that gave rise to the observations. In other words, there may be a disparity between the *apparent* knowledge of the agent as seen at the knowledge level and the *actual* knowledge contained in the implementation of the agent at the symbol level. Knowledge-based AI has long confused the implementation of intelligence within an agent with its knowledge level description, choosing to equate the two to simplify modeling (Chandrasekaran, Goel and Allemang 1989). Situated activists and connectionists have noticed a similar confusion in terms like *plan*, *symbol* and *representation* as used in knowledge-based AI systems (Winograd and Flores 1986; Hinton, McClelland and Rumelhart 1986; Smolensky 1988; Greeno and Moore 1993; Clancey 1993).
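The reduction Newell describes is easy to make concrete: the agent "knows K" because K is literally a data structure, and the principle of rationality is just an algorithm over that structure. A minimal sketch of such a literal knowledge level model, in Python; the rules, feature names, and function are my own illustration, not from the text:

```python
# A literal knowledge level model in miniature: "knowledge" is a list of
# condition/conclusion structures, and rationality is an algorithm that
# applies any applicable knowledge serving one of the agent's goals.
# (All rule content here is invented for illustration.)

def rational_step(knowledge, goals, context):
    """Apply the first piece of knowledge applicable in `context`
    that satisfies one of the agent's goals."""
    for rule in knowledge:              # "the program knows K" == there is
        if rule["if"] <= context:       # a structure we view as holding K
            if rule["then"] & goals:
                return rule["then"]     # act on the applicable knowledge
    return set()

knowledge = [
    {"if": {"hungry", "has_food"}, "then": {"eat"}},
    {"if": {"tired"},              "then": {"sleep"}},
]
print(rational_step(knowledge, {"eat"}, {"hungry", "has_food"}))  # {'eat'}
```

Here the knowledge-level description ("the agent wants to eat and knows how") corresponds one-for-one to symbol-level structures, which is exactly the correspondence the term *literal* is meant to capture.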
Brooks (1986, 1991) makes similar points in his approach to modeling intelligent behavior by constructing robots devoid of explicit representations. All of these confusions and criticisms, in my opinion, can be traced to knowledge-based AI's dependence on literal knowledge level models of intelligence.

1.2  The Problems of Explicit Knowledge

In a literal knowledge level model, the knowledge to solve the problem is *explicitly* represented within the problem solving agent in a knowledge-base using some representation. This dissertation defines *explicit knowledge* to be any manner of domain theory, domain knowledge, world model, heuristics, knowledge-base, or other form of information, *regardless of representation*, that is specific to the task being solved and included *explicitly within* the problem solver. Explicit knowledge is the basis of strong methods and is usually assumed as a given for AI tasks. It often serves as the representation of the associated "world" knowledge about the task. In other words, explicit knowledge is a representation of the task environment that is internal to the problem solver.
The quality of the problem solving and search methods that are standard in AI is chiefly determined by the quality of the explicit knowledge they contain. It is no wonder that the phrase "in the knowledge lies the power," coined by Edward Feigenbaum, was a popular catch phrase for knowledge-based AI through the 1980s.

Every action of an agent using a literal knowledge level model is explicitly accountable to some component within the agent's knowledge-base. This reduction from knowledge-level observations to complete accountability within the representations of the agent leads to classic problems in artificial intelligence. In order for problem solving ability to improve on a given task when using knowledge-based AI techniques, the quality of the representation of the task environment within the problem solver must improve. For a more accurate problem solver, more accurate knowledge must be provided. If knowledge of the task environment is incomplete, then the ability of the problem solver is also incomplete. This direct relationship between quantity of knowledge and quality of ability in strong methods leads to several classic difficulties in AI: scaling of method, credit assignment, the knowledge acquisition bottleneck, the knowledge indexing problem and the selection of appropriate representations. The next few subsections review each of these problems.
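The direct relationship just described can be caricatured in a few lines. In this sketch (my own, not the dissertation's; the "domain theory" entries are invented), the strong method is nothing but a lookup into its explicit knowledge, so every gap in the knowledge is immediately a gap in ability:

```python
# A strong method whose ability is exactly its explicit knowledge:
# problems outside the hand-provided domain theory simply go unsolved.
# (Hypothetical goal/action pairs for illustration.)

def strong_solver(domain_theory, problem):
    """Answer `problem` only if the explicit knowledge covers it."""
    return domain_theory.get(problem)   # None signals incomplete knowledge

partial_theory = {
    "clear the top of B": "unstack A from B",
    "free the gripper":   "put down A",
}
print(strong_solver(partial_theory, "free the gripper"))  # covered
print(strong_solver(partial_theory, "build a tower"))     # None: not covered
```

Improving this solver means improving `partial_theory`, which is precisely where the difficulties reviewed below arise.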
1.2.1  Credit Assignment Problem

The *credit assignment problem* (Minsky 1963) highlights the issue of how to convert feedback about problem solving into information about how to manipulate a knowledge structure internal to the problem solver. There are two forms of the credit assignment problem: global and local. The *global credit assignment problem* is the determination that there is in fact an error in the internal structure. Typically, explicit goals or evaluation functions within the problem solver determine the global correctness of its internal structure. In the case of explicit knowledge, global credit assignment is determined by its inability to correctly solve the problem at hand. The *local credit assignment problem* is the identification of individual components of the internal structure that are in fact erroneous. Often, it is the local credit assignment problem that poses the most difficulty in constructing a problem solver.

Knowledge-based AI's answer to the local credit assignment problem is to add explicit knowledge to the problem solver that describes how to identify the faulty structure (e.g. Winston 1975; Minton et al. 1989; Schank and Leake 1989). This knowledge is often task-specific and relates the feedback from the task environment directly to faulty components.

1.2.2  The Knowledge Acquisition Bottleneck

The knowledge acquisition problem is typically cast as the difficulty in determining how a program should interact with an expert so that the expert's knowledge can be incorporated into the problem solver (Hayes-Roth et al. 1983). This creates a bottleneck in populating the method with the knowledge required to solve the task. The knowledge acquisition bottleneck can be generalized to the problem of extracting knowledge from any task representation external to the problem solver and incorporating it into the problem solver. The root of this problem lies in the transduction from the external task representation into the format of the internal environment. As in the local credit assignment problem above, knowledge-based AI techniques simply supply explicit knowledge internal to the problem solver that describes how to acquire knowledge from the given domain (e.g. Davis 1979).

1.2.3  Memory (Knowledge) Indexing Problem

In methods such as Case-Based Reasoning (CBR) (Schank 1982; Kolodner 1989) and Explanation-Based Learning (EBL) (DeJong and Mooney 1986; Minton et al. 1989; Schank and Leake 1989) knowledge is stored in memory as "experience" and retrieved when the situation suggests its appropriateness.
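The store-and-retrieve cycle can be sketched as follows. This is only an illustration under invented feature names; real CBR indexing schemes are far richer, but the task-specific commitment to a set of salient features is the same:

```python
# Sketch of index-based retrieval: experiences are filed under
# task-specific feature indices so a query touches only a small,
# relevant slice of a large memory rather than scanning all of it.
from collections import defaultdict

class ExperienceMemory:
    def __init__(self, salient_features):
        self.salient = salient_features      # the task-specific choice;
        self.index = defaultdict(list)       # changing it forces reindexing

    def store(self, situation, experience):
        for f in situation & self.salient:   # index only salient features
            self.index[f].append(experience)

    def retrieve(self, situation):
        hits = []
        for f in situation & self.salient:   # no full-memory scan
            hits.extend(self.index[f])
        return hits

mem = ExperienceMemory(salient_features={"rush_hour", "rain"})
mem.store({"rain", "tuesday"}, "leave early")
mem.store({"rush_hour"}, "take the train")
print(mem.retrieve({"rain", "monday"}))      # ['leave early']
```

Note that `salient_features` is fixed per task: a new task with different salient features requires the whole experience-base to be refiled, which is the reindexing cost discussed below.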
This can be generalized to the problem of indexing any large knowledge-base where only a small percentage of the knowledge is relevant at any one time (Schank 1987). To search a large knowledge-base each time knowledge is required for a problem would be prohibitive. CBR and EBL researchers often assume an indexing scheme for the knowledge that uses features salient to the task. The identification of these features is task dependent to allow all similar knowledge to be retrieved with a specific index. Also, the similarity metric is often task-specific, forcing an experience-base to be reindexed when the task changes.

1.2.4  The Problem of Scaling

A simple relation of AI systems is that as the accuracy of the explicit knowledge improves, the accuracy of the problem solver also improves. However, to become an order of magnitude more accurate, the quantity of explicit knowledge or processing required by the problem solver often increases exponentially or worse. Such a discrepancy of scaling between the accuracy of the problem solver and the algorithmic effort is the chief difficulty faced in AI techniques.

For a complex problem the quantity of internal knowledge required to represent the task environment sufficiently may be prohibitive to extract and/or provide explicitly. The popularity of blocksworld tasks (e.g. Winograd 1972; Waltz 1975) was chiefly due to their concisely represented task environment.
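The scaling worry is easy to quantify even in this concise domain: the number of distinct blocksworld configurations grows faster than exponentially with the number of blocks. A counting sketch (my own arithmetic, not from the text), under the usual assumptions that blocks are distinct, order within a stack matters, and the stacks on the table are unordered:

```python
# How fast does even a "concise" task environment grow?  Order the n
# blocks in a sequence (n! ways), cut the sequence into k stacks
# (comb(n-1, k-1) ways), then divide by k! because the stacks on the
# table are unordered; sum over k.
from math import comb, factorial

def blocksworld_states(n):
    """Configurations of n distinct blocks arranged into stacks on a table."""
    return sum(comb(n - 1, k - 1) * factorial(n) // factorial(k)
               for k in range(1, n + 1))

for n in (1, 2, 3, 4, 10):
    print(n, blocksworld_states(n))   # 1, 3, 13, 73, then already > 10**7
```

So a knowledge-base that explicitly covers every situation is feasible only while n stays tiny, which is the scaling discrepancy in miniature.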
The finiteness of the situations that could arise allowed a complete encoding as explicit knowledge. This allows a *closed-world assumption* to restrict the knowledge needed to solve the problems. The extreme of the blocksworld approach requires a complete and hence enormous encyclopedic knowledge-base of "common sense" internal to a general problem solver (Lenat, Prakash and Shepherd 1986).

1.2.5  Representation Design

The volume of information required to solve some complex problems often forces researchers to spend a significant amount of time determining an appropriate representation that reduces the explicit knowledge to a manageable level. Often this is done to compensate for inadequacies inherent in the technique itself (e.g. McClelland and Rumelhart 1986b). Given that a particular technique has a specific computational ability, the game becomes whether the task can be represented in such a way that the limited technique can solve it.
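The fitting of task to technique can be seen in miniature: the same facts, re-represented, let a much cheaper procedure answer the same query. A toy sketch (the graph and all names are invented for illustration):

```python
# The same connectivity knowledge under two representations: an edge
# list forces a scan per query, while a prebuilt adjacency index lets a
# single lookup -- a "limited technique" -- answer the same question.

edges = [("a", "b"), ("b", "c"), ("a", "c")]       # representation 1

def connected_scan(edges, x, y):
    return (x, y) in edges or (y, x) in edges      # linear scan each query

adjacency = {}                                     # representation 2
for x, y in edges:
    adjacency.setdefault(x, set()).add(y)
    adjacency.setdefault(y, set()).add(x)

def connected_lookup(adjacency, x, y):
    return y in adjacency.get(x, set())            # one hash lookup

print(connected_scan(edges, "a", "c"), connected_lookup(adjacency, "a", "c"))
```

The catch, as the text goes on to argue, is that someone had to know the task and the technique well enough to build the second representation in the first place.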
This has led to a number of representations being proposed for problem solving, including production systems (Newell and Simon 1981), predicate calculus (Nilsson 1980), fuzzy logic (Zadeh 1965), connectionist networks (Rumelhart and McClelland 1986), semantic networks (Brachman 1979), frames (Minsky 1975), conceptual structures (Sowa 1984), scripts (Schank and Abelson 1977), functional representations (Sembugamoorthy and Chandrasekaran 1986), the multiple representations of generic tasks (Chandrasekaran 1986) and others. This representationalist perspective on AI problem solving adheres to the philosophy "Given the right task representation, magic happens."

There is a significant problem with manipulating the task representation for maximal computational benefit. The directed design of the problem solver's representation assumes knowledge of the task, of the algorithm to solve it, and of how the task is best represented given that algorithm. This forces the human programmer to remain in the problem solving loop in order to determine an adequate representation before problem solving. However, it is often said that the true intelligence in solving a problem comes in the determination of an appropriate representation for problem solving (Newell 1981, 1982; Reeke and Edelman 1988).

1.3  The Root of the Problems

All of the limitations of AI methods that use explicit knowledge stem from the fact that the task environment is represented internally in the problem solver.
The explicit knowledge, by virtue of its task-specific nature, is a representation or model of the external task environment that is *directly* available to the problem solver. Because this model is internal, it is manipulatable, can be rearranged and augmented, and can be employed using only weak methods if necessary. But once the commitment to include explicit knowledge is made, then the classic problems of AI arise. Because the task environment is external
