http://www.cs.utexas.edu/users/ml/abstracts.html

This data set contains WWW-pages collected from computer science departments of various universities.
<blockquote>
This paper presents results on using a new inductive logic programming method
called FOIDL to learn the past tense of English verbs. The past-tense task has
been widely studied in the context of the symbolic/connectionist debate.
Previous papers have presented results using various neural-network and
decision-tree learning methods. We have developed a technique for learning a
special type of Prolog program called a <em>first-order decision list</em>,
defined as an ordered list of clauses each ending in a cut. FOIDL is based on
FOIL (Quinlan, 1990) but employs intensional background knowledge and avoids
the need for explicit negative examples. It is particularly useful for problems
that involve rules with specific exceptions, such as the past-tense task. We
present results showing that FOIDL learns a more accurate past-tense generator
from significantly fewer examples than all other previous methods.
</blockquote>
<a href="file://ftp.cs.utexas.edu/pub/mooney/papers/foidl-bkchapter-95.ps.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p>
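<p>To make the representation concrete: a first-order decision list behaves like an
ordered rule list in which the first matching clause fires and the cut discards the
alternatives, so specific exceptions sit in front of a general default. The sketch
below is a hand-written Python analogue with invented toy rules for the past-tense
task; it illustrates the representation only and is not output learned by FOIDL.</p>
<pre>
# Hand-written toy rules, ordered from most specific to most general;
# the first rule whose test succeeds determines the answer (the "cut").
PAST_TENSE_RULES = [
    (lambda s: s == "go",         lambda s: "went"),           # irregular exception
    (lambda s: s.endswith("eep"), lambda s: s[:-3] + "ept"),   # sleep -> slept, keep -> kept
    (lambda s: s.endswith("e"),   lambda s: s + "d"),          # bake -> baked
    (lambda s: True,              lambda s: s + "ed"),         # default: regular "-ed"
]

def past_tense(stem):
    for test, transform in PAST_TENSE_RULES:
        if test(stem):               # first match wins; later rules are never tried
            return transform(stem)

for w in ["go", "sleep", "bake", "walk"]:
    print(w, "->", past_tense(w))    # went, slept, baked, walked
</pre>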
<!-- =========================================================================== -->
<a name="dolphin-ewsp-95.ps.Z"></a>
<b><li>Hybrid Learning of Search Control for Partial-Order Planning<br></b>
Tara A. Estlin and Raymond J. Mooney<br>
<cite>New Directions in AI Planning</cite>, M. Ghallab and A. Milani, Eds.,
IOS Press, 1996, pp. 129-140.<p>
<blockquote>
This paper presents results on applying a version of the DOLPHIN search-control
learning system to speed up a partial-order planner. DOLPHIN integrates
explanation-based and inductive learning techniques to acquire effective
clause-selection rules for Prolog programs. A version of the UCPOP
partial-order planning algorithm has been implemented as a Prolog program, and
DOLPHIN is used to automatically learn domain-specific search-control rules
that help eliminate backtracking. The resulting system is shown to produce
significant speedup in several planning domains.
</blockquote>
<a href="file://ftp.cs.utexas.edu/pub/mooney/papers/dolphin-ewsp-95.ps.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p>
<!-- =========================================================================== -->
<a name="banner-icnn-96.ps.Z"></a>
<b><li>Revising Bayesian Network Parameters Using Connectionist Methods<br></b>
Sowmya Ramachandran and Raymond J. Mooney<br>
<cite>Proceedings of the International Conference on Neural Networks
(ICNN-96)</cite>, Special Session on Knowledge-Based Artificial Neural
Networks, Washington, DC, June 1996.<p>
<blockquote>
The problem of learning Bayesian networks with hidden variables is known to be
hard. Even the simpler task of learning just the conditional probabilities of a
Bayesian network with hidden variables is hard. In this paper, we present an
approach that learns the conditional probabilities of a Bayesian network with
hidden variables by transforming it into a multi-layer feedforward neural
network (ANN). The conditional probabilities are mapped onto weights in the
ANN, which are then learned using standard backpropagation techniques. To avoid
the problem of exponentially large ANNs, we focus on Bayesian networks with
noisy-or and noisy-and nodes. Experiments on real-world classification problems
demonstrate the effectiveness of our technique.
</blockquote>
<a href="file://ftp.cs.utexas.edu/pub/mooney/papers/banner-icnn-96.ps.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p>
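<p>The core idea of the paper above can be sketched in a few lines: the parameters of
a noisy-or node are treated as trainable weights and fit by gradient descent on a
cross-entropy loss, just as one would train a small neural network. The code below is
a minimal stand-in (plain NumPy, invented function names, a single node with a leak
term) rather than the system described in the paper, which handles whole networks with
hidden variables.</p>
<pre>
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def noisy_or(q, x):
    """P(child = 1 | parents = x) = 1 - prod_i (1 - q_i)^x_i  for binary x."""
    return 1.0 - np.prod((1.0 - q) ** x, axis=-1)

def fit_noisy_or(X, y, steps=2000, lr=0.5):
    """Learn noisy-or parameters by gradient descent on the cross-entropy loss."""
    X = np.hstack([X, np.ones((len(X), 1))])   # always-on column acts as a leak term
    theta = rng.normal(scale=0.1, size=X.shape[1])
    for _ in range(steps):
        q = sigmoid(theta)                     # keep each parameter in (0, 1)
        p = np.clip(noisy_or(q, X), 1e-6, 1 - 1e-6)
        dL_dp = (p - y) / (p * (1.0 - p)) / len(X)     # d cross-entropy / d p
        # dp/dtheta_j = x_j * (1 - p) * q_j, from the product form and q = sigmoid(theta)
        grad = (dL_dp * (1.0 - p))[:, None] * X * q[None, :]
        theta -= lr * grad.sum(axis=0)
    return sigmoid(theta)                      # last entry is the learned leak

# Example: recover the parameters of a 3-parent noisy-or node from sampled data.
true_q = np.array([0.8, 0.5, 0.2, 0.05])       # last entry is the leak probability
X = rng.integers(0, 2, size=(5000, 3))
p_true = noisy_or(true_q, np.hstack([X, np.ones((5000, 1))]))
y = (p_true > rng.random(5000)).astype(float)
print(np.round(fit_noisy_or(X, y), 2))         # roughly recovers true_q
</pre>
<p>Because the product form has one parameter per parent, the corresponding network
stays linear in the number of parents, which is the point of restricting attention to
noisy-or and noisy-and nodes.</p>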
<!-- =========================================================================== -->
<a name="qdocs-dissertation-95.ps.Z"></a>
<b><li>Qualitative Multiple-Fault Diagnosis of Continuous Dynamic Systems Using Behavioral Modes<br></b>
Siddarth Subramanian<br>
Ph.D. Thesis, Department of Computer Sciences, University of Texas at Austin, August 1995.<p>
<blockquote>
As systems like chemical plants, power plants, and automobiles get more
complex, online diagnostic systems are becoming increasingly important. One way
to rein in the complexity of describing and reasoning about such large systems
is to describe them using qualitative rather than quantitative models.<p>
Model-based diagnosis is a class of diagnostic techniques that use direct
knowledge about how a system functions instead of expert rules detailing causes
for every possible set of symptoms of a broken system. Our research builds on
standard methods for model-based diagnosis and extends them to the domain of
complex dynamic systems described using qualitative models.<p>
We motivate and describe our algorithm for diagnosing faults in a dynamic
system given a qualitative model and a sequence of qualitative states. The main
contributions of this algorithm include a method for propagating dependencies
while solving a general constraint satisfaction problem, and a method for
verifying the compatibility of a behavior with a model across time. The
algorithm can diagnose multiple faults and uses models of faulty behavior, or
behavioral modes.<p>
We then demonstrate these techniques using an implemented program called QDOCS
and test it on some realistic problems. Through our experiments with a model of
the reaction control system (RCS) of the space shuttle and with a
level-controller for a reaction tank, we show that QDOCS achieves the best
balance of generality, accuracy, and efficiency among known systems.<p>
</blockquote>
<a href="file://ftp.cs.utexas.edu/pub/mooney/papers/qdocs-dissertation-95.ps.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p>
<!-- =========================================================================== -->
<a name="chill-dissertation-95.ps.Z"></a>
<b><li>Using Inductive Logic Programming to Automate the Construction of Natural Language Parsers<br></b>
John M. Zelle<br>
Ph.D. Thesis, Department of Computer Sciences, University of Texas at Austin, August 1995.<p>
<blockquote>
Designing computer systems to understand natural language input is a difficult
task. In recent years there has been considerable interest in corpus-based
methods for constructing natural language parsers. These empirical approaches
replace hand-crafted grammars with linguistic models acquired through automated
training over language corpora. A common thread among such methods to date is
the use of propositional or probabilistic representations for the learned
knowledge. This dissertation presents an alternative approach based on
techniques from a subfield of machine learning known as inductive logic
programming (ILP). ILP, which investigates the learning of relational
(first-order) rules, provides an empirical method for acquiring knowledge
within traditional, symbolic parsing frameworks.<p>
This dissertation details the architecture, implementation, and evaluation of
CHILL, a computer system for acquiring natural language parsers by training
over corpora of parsed text. CHILL treats language acquisition as the learning
of search-control rules within a logic program that implements a shift-reduce
parser. Control rules are induced using a novel ILP algorithm which handles
difficult issues arising in the induction of search-control heuristics. Both
the control-rule framework and the induction algorithm are crucial to CHILL's
success.<p>
The main advantage of CHILL over propositional counterparts is its flexibility
in handling varied representations. CHILL has produced parsers for various
analyses including case-role mapping, detailed syntactic parse trees, and a
logical form suitable for expressing first-order database queries. All of these
tasks are accomplished within the same framework, using a single, general
learning method that can acquire new syntactic and semantic categories for
resolving ambiguities.<p>
Experimental evidence from both artificial and real-world corpora demonstrates
that CHILL learns parsers as well as or better than previous artificial
neural-network or probabilistic approaches on comparable tasks. In the database
query domain, which goes beyond the scope of previous empirical approaches, the
learned parser outperforms an existing hand-crafted system. These results
support the claim that ILP techniques as implemented in CHILL represent a
viable alternative, with significant potential advantages over neural-network,
propositional, and probabilistic approaches to empirical parser construction.<p>
</blockquote>
<a href="file://ftp.cs.utexas.edu/pub/mooney/papers/chill-dissertation-95.ps.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p>
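<p>The control-rule framework described in the dissertation above can be pictured with
a small stand-in: a deterministic shift-reduce parser whose only choice point (shift,
or reduce with some rule) is resolved by a control function over the stack and the
remaining input. In CHILL that function is a set of induced Prolog clauses; the Python
sketch below substitutes a hand-written greedy stand-in and an invented toy grammar,
purely for illustration.</p>
<pre>
GRAMMAR = [            # toy grammar, invented for illustration
    ("NP", ["Det", "N"]),
    ("VP", ["V", "NP"]),
    ("S",  ["NP", "VP"]),
]
LEXICON = {"the": "Det", "dog": "N", "cat": "N", "chased": "V"}

def control(stack, remaining):
    """Stand-in for learned control rules: return ('reduce', rule) or ('shift', None)."""
    for lhs, rhs in GRAMMAR:
        if [cat for cat, _ in stack[-len(rhs):]] == rhs:
            # A learned rule could also condition on `remaining` (the lookahead);
            # this naive stand-in always reduces greedily.
            return ("reduce", (lhs, rhs))
    return ("shift", None)

def parse(words):
    stack, remaining = [], [(LEXICON[w], w) for w in words]
    while remaining or len(stack) > 1:
        action, rule = control(stack, remaining)
        if action == "shift":
            if not remaining:
                raise ValueError("parse failed")
            stack.append(remaining.pop(0))
        else:
            lhs, rhs = rule
            children = stack[len(stack) - len(rhs):]
            del stack[len(stack) - len(rhs):]
            stack.append((lhs, children))
    return stack[0]

print(parse("the dog chased the cat".split()))
# ('S', [('NP', ...), ('VP', [('V', 'chased'), ('NP', ...)])])
</pre>
<p>Replacing the hand-written control function with induced clauses is exactly the
learning problem the dissertation addresses; the parser skeleton itself stays fixed.</p>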
<!-- =========================================================================== -->
<a name="chill-compling-95.ps.Z"></a>
<b><li>An Inductive Logic Programming Method for Corpus-based Parser Construction<br></b>
John M. Zelle and Raymond J. Mooney<br>
Submitted to <cite>Computational Linguistics</cite>.<p>
<blockquote>
In recent years there has been considerable research into corpus-based methods
for parser construction. A common thread in this research has been the use of
propositional representations for learned knowledge. This paper presents an
alternative approach based on techniques from a subfield of machine learning
known as inductive logic programming (ILP). ILP, which investigates the
learning of relational (first-order) rules, provides a way of using empirical
methods to acquire knowledge within traditional, symbolic parsing frameworks.
We describe a novel method for constructing deterministic Prolog parsers from
corpora of parsed sentences. We also discuss several advantages of this
approach compared to propositional alternatives and present experimental
results on learning complete parsers using several corpora, including the ATIS
corpus from the Penn Treebank.
</blockquote>
<a href="file://ftp.cs.utexas.edu/pub/mooney/papers/chill-compling-95.ps.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p>
<!-- =========================================================================== -->
<a name="chill-ijcai-nll-95.ps.Z"></a>
<b><li>A Comparison of Two Methods Employing Inductive Logic Programming for Corpus-based Parser Construction<br></b>
John M. Zelle and Raymond J. Mooney<br>
<cite>Working Notes of the IJCAI-95 Workshop on New Approaches to Learning for
Natural Language Processing</cite>, pp. 79-86, Montreal, Quebec, August 1995.<p>
<blockquote>
This paper presents results from recent experiments with CHILL, a corpus-based
parser acquisition system. CHILL treats grammar acquisition as the learning of
search-control rules within a logic program. Unlike many current corpus-based
approaches that use propositional or probabilistic learning algorithms, CHILL
uses techniques from inductive logic programming (ILP) to learn relational
representations. The reported experiments compare CHILL's performance to that
of a more naive application of ILP to parser acquisition. The results show that
ILP techniques, as employed in CHILL, are a viable alternative to propositional
methods and that the control-rule framework is fundamental to CHILL's success.
</blockquote>
<a href="file://ftp.cs.utexas.edu/pub/mooney/papers/chill-ijcai-nll-95.ps.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p>
<!-- =========================================================================== -->
<a name="chill-ifoil-ml-95.ps.Z"></a>
<b><li>Inducing Logic Programs without Explicit Negative Examples<br></b>
John M. Zelle, Cynthia A. Thompson, Mary Elaine Califf, and Raymond J. Mooney<br>
<cite>Proceedings of the Fifth International Workshop on Inductive Logic
Programming</cite>, Leuven, Belgium, September 1995.<p>
<blockquote>
This paper presents a method for learning logic programs without explicit
negative examples by exploiting an assumption of <i>output completeness</i>. A
mode declaration is supplied for the target predicate, and each training input
is assumed to be accompanied by all of its legal outputs. Any other outputs
generated by an incomplete program implicitly represent negative examples;
however, large numbers of ground negative examples never need to be generated.
This method has been incorporated into two ILP systems, CHILLIN and IFOIL, both
of which use intensional background knowledge. Tests on two natural language
acquisition tasks, case-role mapping and past-tense learning, illustrate the
advantages of the approach.
</blockquote>
<a href="file://ftp.cs.utexas.edu/pub/mooney/papers/chill-ifoil-ml-95.ps.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p>
<!-- =========================================================================== -->
<a name="wolfie-ack-95.ps.Z"></a>
<b><li>Acquisition of a Lexicon from Semantic Representations of Sentences<br></b>
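<p>The output-completeness assumption from the <i>Inducing Logic Programs without
Explicit Negative Examples</i> entry above can be illustrated directly: if every
training input is listed with all of its legal outputs, then any extra output produced
by a candidate program is an implicit negative example, and no ground negatives ever
have to be enumerated. The sketch below uses invented past-tense data and a
deliberately over-general candidate; it mirrors the idea only, not the CHILLIN or
IFOIL implementations.</p>
<pre>
# Each training input is listed with *all* of its legal outputs.
POSITIVES = {
    "walk":  {"walked"},
    "dream": {"dreamed", "dreamt"},   # both outputs are legal
    "go":    {"went"},
}

def candidate_program(stem):
    """A hypothetical over-general candidate clause: always append 'ed'."""
    return {stem + "ed"}

def implicit_negatives(program, positives):
    """Outputs the program produces that are not among the listed legal outputs."""
    negatives = {}
    for stem, legal in positives.items():
        extra = program(stem) - legal
        if extra:
            negatives[stem] = extra
    return negatives

print(implicit_negatives(candidate_program, POSITIVES))
# {'go': {'goed'}} -- evidence that the candidate clause is too general
</pre>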
