http://www.cs.utexas.edu/users/ai-lab/dream.html
and not delayed until the machine can learn from books how to carry on a discourse. <P>I like Marvin Minsky's suggestion that the ability of a program to learn should be proportional to what it already knows. Such a program, when and if it is achieved, can be exploited in a dramatic (frightening?) way. <P>Causality is another important research area in AI. As our intelligent programs, such as expert systems, begin to fail, we want to move from "shallow" (statistical) rules toward "reasoning from basic principles". Several research programs are pushing in this direction. I believe the key here is to move toward basic principles a step at a time, and not to basic principles in one step. For example, knowledge of actions can be classified by levels of causality. I will try to explain this by first giving an example. If one holds an object in his hand five feet above the ground and releases it, it will <P><OL><LI> Fall toward the ground <P> <LI> Fall toward the ground with increasing velocity <P> <LI> Fall, with its height y, in feet, governed by the equation <br> y = 5 - 16.1t^2 <br> with t measured in seconds <P><LI> Fall according to Newton's law of attraction <P><LI> Same as (3) but also accounting for air friction <P><LI> Fall according to the laws of general relativity <P></OL>For most applications the first answer is enough: "If you release it, it will fall". For example, we might say to a child, "if you drop that rock it will hurt your foot". This might be called the "shallow" level. Deeper levels give information that is more and more precise, but at a higher cost. <P>A human could never get anything done operating continuously at the third level, let alone the fifth, and neither could an expert system. The early expert systems tended to operate at the first level of causality, the shallow level. This was fortunate because it allowed these programs to exhibit a great deal of expertise for minimum cost. Such successes of expert systems have been of great value to the field of AI.
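<P>The cost/precision tradeoff among these causality levels can be made concrete with a small sketch (Python is used here purely for illustration; the function names are invented, and only levels 1 and 3 are modeled):

```python
import math

def falls(released):
    """Level 1, the 'shallow' rule: if you release it, it will fall."""
    return released

def height_level3(t):
    """Level 3: height in feet of an object dropped from 5 feet,
    ignoring air friction, per y = 5 - 16.1 t^2 (t in seconds)."""
    return 5.0 - 16.1 * t ** 2

# Level 3 answers questions level 1 cannot, e.g. when the object lands:
# solve 5 - 16.1 t^2 = 0 for t.
t_ground = math.sqrt(5.0 / 16.1)   # about 0.56 seconds
```

The shallow rule is nearly free to apply, while the level-3 model demands arithmetic and a time variable; this is the rising cost the text describes.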
They not only helped build confidence in the AI researchers that worthwhile accomplishments are possible, but also promised financial returns in the near term which can help pay for further research and development. So operating at the shallow level is not bad at all when it works. <P>The problem comes when that level is not adequate, when deeper causal reasoning is needed. And it is at this point that our machines need to be directed one step deeper in the causality chain. <P>Using causality properly, then, does not mean jumping to the deepest causal level, but rather working down through levels as needed. I believe that the recent work on qualitative reasoning is a correct step in this direction. But to make it work properly, an overall knowledge structure, governed partly by common sense, is needed to control the process. <P>These new Super Expert Systems (for the coming decade) will absorb a large percentage of the research and development effort over the next several years, and rightfully so. I mean expert systems which have been endowed with: large structured knowledge bases; ability to reason through various causality levels (preferring the shallowest but resorting to deeper levels as needed); limited ability to learn automatically from experience and to accumulate knowledge by analogy; truth maintenance systems; enhanced human interfacing to facilitate knowledge acquisition from experts and for ease of use; etc. <P>These super expert systems will evolve into the "thinking" part (as opposed to the moving and sensory parts) of our dreamed-of intelligent machines of the future. Later versions will have enhanced ability to learn (e.g., learning directly from machine-readable text), and reason by analogy, and much more. <P>Now let me list some of the areas that I feel will dominate AI research over the next decade. I have discussed most of these already and now list one more: automatic reasoning.
<P> <H3>IMPORTANT RESEARCH AREAS</H3><UL><LI> Large Structured Knowledge Bases <br> Knowledge Representation <br> Knowledge Storage, Retrieval, and Use <P><LI> Expert Systems Technology (A Large Effort) <P><LI> Machine Learning <br> Controlled by knowledge structures <P><LI> Causality <br> By depth levels <P><LI> Human Interfacing <br> Natural Language Processing <br> Speech Recognition and Generation <P><LI> Automatic Reasoning <br> Analogical Reasoning <br> Common Sense Reasoning, Default Reasoning <P></UL>I have not tried to be complete in this listing and have not even mentioned some important areas such as robotics, automatic programming, and planning. <P>Automatic reasoning is another area of research that is becoming increasingly important for a number of reasons. Earlier expert systems required only modest inferencing power because they operated on rules at the shallowest levels. But as we reach toward deeper causality, the reasoning component is challenged to handle the switching of levels and the added complexity of the deeper levels. In this, as always, knowledge plays a crucial role. <P>Also the emergence of logic as a basis for programming languages (PROLOG, LOGLISP, PARLOG, etc.), and as a means for storing knowledge (in logic databases, logic-based rules for expert systems), has suddenly placed a new load on our automatic reasoning programs (our provers). Thus we see the great interest in "Kilolip" machines, which perform thousands of logical inferences per second. Such high performance will not only be needed for Horn-clause problems, such as the use of PROLOG, but also for reasoning in first order logic, and even in modal logic and higher order logic. Thus the renewed interest in Automated Theorem Proving.
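<P>The kind of Horn-clause inference such machines were built to accelerate can be illustrated with a naive propositional forward chainer (a toy sketch, not a description of any actual Kilolip hardware; the rule encoding here is invented for illustration):

```python
def forward_chain(facts, rules):
    """Naive forward chaining over propositional Horn clauses.
    Each rule is a pair (body, head): once every atom in body
    is known, head becomes known.  Iterates to a fixed point."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return known

# Two Horn clauses: wet :- rain, outside.   cold :- wet.
rules = [(("rain", "outside"), "wet"),
         (("wet",), "cold")]
derived = forward_chain({"rain", "outside"}, rules)
```

Each successful rule firing is one "logical inference"; a Kilolip-class machine would perform thousands of such steps per second, and the quadratic scan in this toy loop shows why raw inference speed mattered.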
<P>It will be interesting to see whether the new concepts for handling Horn clauses and first order logic, which are expected to produce "raw horsepower" in the Megalip range, will be enough to cope with the load that will be imposed by the next generation of applications, or whether these methods will have to be "spiced" with special reasoning-knowledge-units for speeding up proofs for particular applications. In any event, automatic reasoning research should become more relevant in the near future. <P>Let me not be misunderstood. General purpose reasoning machines (theorem provers) alone are not enough. Knowledge is still the key. But the requirements for reasoning about knowledge will be intensified and partly satisfied by these new high speed provers that are beginning to appear. <P>I have great faith that the AI community is headed generally in the right direction. About half of the new crop of graduate students admitted to the Ph.D. program in Computer Science at the University of Texas this year selected AI as their preferred field of study. This preference for AI seems to be duplicated throughout the world, and we are talking about some of the very best students. These young people hold in their hands the future of this discipline. The power and influence of the earlier pioneers will wane as these new researchers emerge. <P>I urge these new students and all new researchers to set themselves a vision of the future and to have the courage to make major new departures, to question the old and get on with the new. There is much to learn from us, and we have pointed generally in the right direction, but the major gains are yet to be made. <P>I personally favor the bold approach over the timid. And there are certain bold experiments that have to be made. One such effort was the Mechanical Translation (MT) work of the early 1960's. Some have called it a failure, but I do not! It had to be tried. It seems rather obvious now that you cannot have MT without language understanding.
That awareness was made much clearer by these earlier experiments -- they helped focus research in the important area of Natural Language Processing. And look how exciting that has become and where it has led, even to the resurgence of MT! A similar story could be told about early speech recognition -- quality speech recognition is not possible without language understanding. Early experiments with Perceptrons represent another such example. In all these cases a lot of work compensated somewhat for the lack of a great idea. (The "Shakey" robot project at SRI is another example, but in that case the value of the early work is widely appreciated.) <P>The point I want to make is this: when you have what looks like a good idea, give it your "best shot"; waste a little money to get some early feedback. Don't take "forever" to study the problem, because that is even more expensive (and less exciting). Of course, this strategy (this scientific method) requires character on the part of the researcher. He/she must be willing to analyze those experiments, reformulate theories, and press on. Otherwise, that person does not qualify for the work and should not be entrusted with research funds. <P>I was recently reading about Thomas Edison and his team at the time they developed a successful light bulb. He started with what he thought was a good idea and plowed ahead. He was brash, he was cocky, he bragged about what they would do (build a widely usable electric light bulb), and his early ideas were wrong. I believe that they were lucky, even with all their brilliance; it could have taken years. But this is another example, like Language Translation, where an early expensive failure returned information that helped finalize a successful solution. <P>I believe that AI is in a position today where these kinds of bold experiments are needed (but not the bold bragging). They need to be conducted by men and women with character, with wisdom and persistence enough to succeed.
<P>Another concern I have is the "flash in the pan" researcher: the person with a limited theory who makes a trivial application of it, or none at all, gets no useful feedback, builds a program that cannot surprise him in any way, and leaves it to others to prove and extend his work. His fragment had better be pretty brilliant if anything is to become of it. More likely a real researcher will rediscover the fragment as part of a larger effort and absorb most of the credit. We might recall that most AI pioneers are well known for what they did, not what they theorized. <P>What is the most important characteristic of a good researcher? Answer: he does good research. Successful people somehow find a way to succeed; others fail. Of course, native intelligence is an important ingredient, but it alone is not enough. An equally important characteristic is the ability (and inclination) to combine theory and experiment. <P>So again I would say to young people: Set a dream. Set a goal (your part of bringing about that dream). Tool up: education, employment, facilities. Pursue it with vigor -- and impatience; want it today. I've never seen a content researcher who was worth his salary. Don't be easily deterred by those who don't have your insight and training. Work hard, provide momentum, don't give up easily. Don't spend too much time extolling the work of others; you will never be properly recognized or satisfied until you make your own personal contribution. Compare and compete. These are rules for a researcher in any field. My conviction is that the field of AI is worth your finest efforts. <P>I have told you about my dream, have offered advice for young researchers, and have offered my opinion on important areas of AI research. But of all the predictions that I could make, the one that I'm most sure about is that we will again be surprised. <P><hr>Acknowledgement.
I want to thank Doug Lenat, Mary Shepherd, Clive Dawson, Joe Scullion, Hassan Ait-Kaci, Elaine Rich, Dick Martin, and Dick Hill for helpful comments. The title is obviously due to Martin Luther King.<hr>