<title>Symbol Emergence & Symbol Grounding</title>
<h1 align=center>Symbol Emergence & Symbol Grounding</h1>
<h2 align=center>Meaning and Communication in Man and Machine</h2>
<h3 align=center>December 16, 1995<br><a href="mailto:chaput@cs.utexas.edu">Harold Cliff Chaput</a></h3>

<h4>Introduction</h4>

<p>The artificial intelligence debate has mainly centered on the representation problem. On one side is classical AI, maintaining that intelligence is a matter of symbol processing. The other side usually consists of connectionists, claiming that systems that model the brain (i.e. neural systems) are more likely to approach the functionality of the mind.</p>

<p>While the debate continues to sputter on, recent events have brought into question whether symbol manipulation is necessary at all for some intelligent behavior. It is becoming clear that some sophisticated actions can be accomplished with very little high-level computation, sometimes with no symbolic processing at all. Yet we also know that certain cognitive tasks are performed symbolically. Symbol manipulation, and with it representational symbols, seems to be a necessary component of some parts of intelligence, but not of all of them. It also does not appear to be sufficient for creating intelligence.</p>

<p>What has been missing from this dialog is a discussion of how symbols and symbol manipulation come about in the human mind. How do symbols, or categories, emerge from our cognitive apparatus? How do we understand categories? And what relation does symbolic thought have with non-symbolic cognition? Several efforts in linguistics, cognitive psychology, philosophy and computer science have shed some light on this area. They reveal some aspects of the link between connectionism and representationism.</p>

<p>These issues lie at the heart of communication: the ability to convey meaning with symbols, the ability to evoke thought without symbols, and the ability to translate between the two. Communication is the golden fleece of AI, both connectionist and representationist: nothing better exposes the complexities and subtleties of intelligence, yet it remains completely out of reach. Perhaps a philosophical look at communication will point the way towards a possible implementation.</p>

<p>In this paper, I plan to discuss these issues, which arose while I was implementing a program for non-symbolic communication, RobotMap. After a brief overview of representational and connectionist AI, I'll talk about some activity in robotics that has made people think differently about the problem. Then, drawing from different fields, I'll discuss the implications these ideas have on cognition and communication, wrapping up with a look at RobotMap and what it reveals about symbol grounding and symbol emergence.</p>

<h4>Natural Language Processing, Representation-Style</h4>

<p>The bulk of existing artificial intelligence code is representational. That is, artificial intelligence is realized by representing a problem as a set of symbols which the computer can then manipulate to find a correct and optimal solution. Chess programs sort through possible moves and look at symbols that represent future states of the chess board; theorem provers take predicate logic statements (fortunately already in symbolic form) and churn through them with manipulation rules to find new theorems in the form of novel symbolic constructs. The symbols that the computer manipulates necessarily correspond to something in the problem domain. Therefore, answers which arise as a set of symbols can then be translated back into the problem domain, e.g. a chess move or a new theorem.</p>

<p>As such, the essential design of a natural language processor on a computer is to take a sentence, break it down into its symbols, and convert those symbols into an internal representation. Usually, this representation is some form of predicate logic, which the computer uses to derive meaning from the sentence. For example, if we take the sentence:</p>

<blockquote>Jack kissed Jill.</blockquote>

<p>We can take what we know about the syntactic structure of English and produce the following syntactic analysis:</p>

<pre>
 (S (NP "Jack")
    (VP (V "kissed")
        (NP "Jill.")))
</pre>

<p>Looking up the word "kiss" (of which "kissed" is the past tense) in our lexicon, we know that kiss takes an agent (the kisser) and a theme (the kissee). Given that information, we can build the following predicate logic form:</p>

<pre>
 (PAST s1 KISS ACTION
       (AGENT s1 (NAME j1 PERSON "Jack"))
       (THEME s1 (NAME j2 PERSON "Jill")))
</pre>
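<p>To make this pipeline concrete, here is a minimal sketch in Python, assuming a toy one-verb lexicon and a grammar that handles only "Name Verb Name." sentences; the names in it (parse, to_logic, LEXICON) are illustrative inventions, not part of any parser discussed here.</p>

<pre>
# A minimal sketch of the pipeline above: a toy lexicon and a one-pattern
# grammar turn "Jack kissed Jill." into a parse tree and a logic form.
# All names here are illustrative, not drawn from any real system.

LEXICON = {
    "kissed": {"base": "KISS", "tense": "PAST", "roles": ("AGENT", "THEME")},
}
NAMES = {"Jack": "j1", "Jill": "j2"}   # toy symbol table for proper nouns

def parse(sentence):
    """Parse a 'Name Verb Name.' sentence into (S (NP ...) (VP (V ...) (NP ...)))."""
    subj, verb, obj = sentence.rstrip(".").split()
    return ("S", ("NP", subj), ("VP", ("V", verb), ("NP", obj)))

def to_logic(tree, event="s1"):
    """Convert the parse tree into a predicate-logic form like the one above."""
    (_, (_, subj), (_, (_, verb), (_, obj))) = tree
    entry = LEXICON[verb]
    return (entry["tense"], event, entry["base"], "ACTION",
            ("AGENT", event, ("NAME", NAMES[subj], "PERSON", subj)),
            ("THEME", event, ("NAME", NAMES[obj], "PERSON", obj)))

tree = parse("Jack kissed Jill.")
print(tree)            # ('S', ('NP', 'Jack'), ('VP', ('V', 'kissed'), ('NP', 'Jill')))
print(to_logic(tree))  # ('PAST', 's1', 'KISS', 'ACTION', ...)
</pre>

<p>A real system would add morphological analysis (mapping "kissed" onto "kiss") and a vastly larger grammar and lexicon, but the data flow is the same: string to parse tree to logical form.</p>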
<p>This straightforward approach of converting sentences into predicate logic works with a great deal of success, and gives the computer something to analyze and manipulate. But this approach quickly ran into some problems. For example, consider the following couplet:</p>

<blockquote>Jane saw the bike through the store window. She wanted it.</blockquote>

<p>Suppose now that we want our parser to identify the antecedent of "She". Well, we know that the word "She" refers to an animate female object, and that there is only one of those in the previous sentence ("Jane"), so we assign "Jane" as the referent of "She". For our parser to do this, we now must include extra information in our lexicon, like gender, animacy, etc. So far so good. Now, what about "it"? Well, "it" is a genderless inanimate object, of which there are three: "bike", "store", and "window". Our parser is stuck, and on a problem that people can answer with little effort. (A sketch of this feature-filtering approach, and of exactly where it gets stuck, follows the examples below.) We might be tempted to create a hack for this and say that the pronoun "it" refers to the object of the previous sentence, "bike." But this hack would fail quite quickly:</p>

<blockquote>Jane saw the bike through the store window. She pressed her nose up against it.</blockquote>

<p>In order for our parser to interpret this sentence, we need to tell it about pressing noses and wanting bikes and looking through store windows, etc. This can be done with scripts that describe these relationships [Schank and Abelson 1977], but now our implementation is quite daunting: in order to understand natural language in an unconstrained setting, we would need to be armed with a battery of scripts to parse some of the simplest texts. The number of scripts needed is tremendous, and the exceptions are quite numerous. To illustrate, consider how our parser would make sense out of some of the following examples:</p>

<ul>
<li>Teacher: Billy, can you name a color in the rainbow?<br>Billy: Blue.<br>Teacher: Very good. Janet, can you name a color in the rainbow?<br>Janet: Blue.<br>Teacher: That's the color that Billy used. [Implication: choose another color.]</li>
<li>A: Teheran's in Turkey isn't it, teacher?<br>B: And New York's in France, I suppose.</li>
<li>A: Is the Pope Jewish?</li>
<li>A: Doesn't the teacher for this class suck?<br>B: Hmm... How about them Cubs?</li>
<li>Johnny: Hey Sally, let's play Sonic.<br>Mother: How is your homework coming along, Johnny?</li>
</ul>
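<p>Here, as promised, is a minimal sketch of that feature-filtering approach to antecedent resolution, assuming a small hand-built feature lexicon; every name in it (FEATURES, PRONOUNS, candidates) is an illustrative invention. It resolves "She" uniquely, and it gets stuck on "it" exactly where our parser did:</p>

<pre>
# A minimal sketch of feature-based pronoun resolution: keep only the
# prior nouns whose gender and animacy features agree with the pronoun's.
# The lexicon entries and helper names are illustrative only.

FEATURES = {
    "Jane":   {"gender": "female", "animate": True},
    "bike":   {"gender": None, "animate": False},
    "store":  {"gender": None, "animate": False},
    "window": {"gender": None, "animate": False},
}

PRONOUNS = {
    "She": {"gender": "female", "animate": True},
    "it":  {"gender": None, "animate": False},
}

def candidates(pronoun, prior_nouns):
    """Return every prior noun whose features agree with the pronoun's."""
    want = PRONOUNS[pronoun]
    return [n for n in prior_nouns
            if FEATURES[n]["gender"] == want["gender"]
            and FEATURES[n]["animate"] == want["animate"]]

prior = ["Jane", "bike", "store", "window"]
print(candidates("She", prior))  # ['Jane']: unique, so resolved
print(candidates("it", prior))   # ['bike', 'store', 'window']: stuck
</pre>

<p>The filter is trivially right about "She" and irredeemably stuck on "it": nothing in gender or animacy distinguishes a bike from a store window, and choosing among them takes knowledge of wanting, looking, and windows.</p>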
<p>This problem has been termed the "common-sense problem," because the solutions to these problems of ambiguity lie in our knowledge of common sense. Ironically, it is these problems, which are easy for people to understand, that have stopped representational AI in its tracks. There is currently a representational solution to this problem underway, the CYC project under Doug Lenat, which plans to catalog all of common sense as a set of logic statements. One cannot help but wonder if the solution to natural language understanding has to be this convoluted and difficult.</p>

<h4>Undermining Symbols</h4>

<p>The philosophical argument against intelligence as symbol processing has come in two major flavors. One is that symbol crunching is not sufficient for describing intelligence. This position is demonstrated by Searle's well-worn thought experiment of the Chinese Room [Searle 1980], which essentially says that a system that translates Chinese to English would know Chinese no more than a calculator knows math (i.e. not at all). This claim is highly controversial, and many rejoinders have been launched against it. For our purposes, the most useful is the response that knowing something is a matter of degree. A person who has memorized "The Waste Land" certainly knows the poem, but in a different way and perhaps to a lesser degree than an author who has written a biography of T. S. Eliot, or a student of 20th century American poetry. In this way, we might be more comfortable with the idea that a calculator might know math... just not very well.</p>

<p>The other argument against representational AI is that some knowledge cannot be captured by a set of rules. In particular, Dreyfus and Dreyfus claim that common-sense knowledge is particularly prone to this, being based on "whole patterns" of experience [Dreyfus and Dreyfus 1988]. I think this is closer to the matter, because it concedes that the logical approach might not work all the time. The Dreyfuses say that it's a mistake to think "that there must be a theory for every domain." They claim that proficiency is achieved by "similarity recognition" in the domain. We will see that this view is psychologically supportable, and provides some insight into the problem.</p>

<p>Researchers in computer science have been looking at non-symbolic computation since 1943, when McCulloch and Pitts first considered neurons as logic elements [McCulloch & Pitts 1943]. In 1957, Rosenblatt developed the Perceptron model [Rosenblatt 1958], which saw great success for the next ten years. Connectionism, a sister to representationism, held a lot of promise and excelled at problems that representational AI struggled with [Rosenblatt 1960a, 1960b, 1962; Steinbuch 1961; Widrow 1962; Grossberg 1968] (and, predictably, vice versa). But this approach to the problem of artificial intelligence was brought to an abrupt halt in 1969 with Minsky and Papert's "Perceptrons" [Minsky & Papert 1969]. Their book forecast an inevitable ceiling in connectionism, and convinced a generation of researchers to abandon the field. Interest did not pick up again until the 1980s, and connectionism is only now regaining acceptance in the field of computer science.</p>
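<p>To make this history concrete, here is a minimal sketch of Rosenblatt's learning rule for a single threshold unit; the function names and data sets are mine, not drawn from the sources cited. The unit converges on a linearly separable function like OR, but no weights for a single unit of this kind compute XOR, which is the sort of ceiling Minsky and Papert analyzed:</p>

<pre>
# A minimal sketch of Rosenblatt's perceptron learning rule for a single
# threshold unit with two inputs. Names and data sets are illustrative.

def train_perceptron(samples, epochs=20, rate=0.1):
    """samples: list of ((x1, x2), target) pairs with targets 0 or 1."""
    w1 = w2 = bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
            err = target - out
            w1 += rate * err * x1        # Rosenblatt's update rule
            w2 += rate * err * x2
            bias += rate * err
    return w1, w2, bias

def accuracy(weights, samples):
    w1, w2, bias = weights
    hits = sum((1 if w1 * x1 + w2 * x2 + bias > 0 else 0) == t
               for (x1, x2), t in samples)
    return hits / len(samples)

OR_DATA  = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
XOR_DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print(accuracy(train_perceptron(OR_DATA),  OR_DATA))   # 1.0: linearly separable
print(accuracy(train_perceptron(XOR_DATA), XOR_DATA))  # below 1.0: no line separates XOR
</pre>

<p>Multi-layer networks trained with methods developed later escape this particular limit, which is part of why interest revived in the 1980s.</p>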
<p>Lately, connectionist systems have been used to model neurology, learn complex systems, and even manipulate symbols. Chalmers trained a neural net that takes active verbs as inputs and produces passive verbs as outputs [Chalmers 1990], which many would classify as a symbolic task.</p>

<p>Consequently, connectionism has come under greater scrutiny as an approach to artificial intelligence. While it provides some architectural features that representational AI is lacking