
📄 http://www.cs.utexas.edu/users/ml/abstracts.html

📁 This data set contains WWW-pages collected from computer science departments of various universities
💻 HTML
📖 Page 1 of 5
<!WA66><a href="file://ftp.cs.utexas.edu/pub/mooney/papers/chill-ml-94.ps.Z"><!WA67><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p>

<!-- =========================================================================== -->
<a name="chill-aaai-94.ps.Z"></a>
<b> <li> Inducing Deterministic Prolog Parsers From Treebanks: A Machine Learning Approach </b> <br>
John M. Zelle and Raymond J. Mooney <br>
<cite> Proceedings of the Twelfth National Conference on AI</cite>, pp. 748-753, Seattle, WA, July 1994. (AAAI-94) <p>
<blockquote>This paper presents a method for constructing deterministic, context-sensitive, Prolog parsers from corpora of parsed sentences. Our approach uses recent machine learning methods for inducing Prolog rules from examples (inductive logic programming). We discuss several advantages of this method compared to recent statistical methods and present results on learning complete parsers from portions of the ATIS corpus.</blockquote>
<!WA68><a href="file://ftp.cs.utexas.edu/pub/mooney/papers/chill-aaai-94.ps.Z"><!WA69><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p>

<!-- =========================================================================== -->
<a name="ilp-ebl-sigart-94.ps.Z"></a>
<b> <li> Integrating ILP and EBL </b> <br>
Raymond J. Mooney and John M. Zelle <br>
<cite> SIGART Bulletin</cite>, Volume 5, Number 1, Jan. 1994, pp. 12-21. <p>
<blockquote>This paper presents a review of recent work that integrates methods from Inductive Logic Programming (ILP) and Explanation-Based Learning (EBL). ILP and EBL methods have complementary strengths and weaknesses, and a number of recent projects have effectively combined them into systems with better performance than either of the individual approaches. In particular, integrated systems have been developed for guiding induction with prior knowledge (ML-SMART, FOCL, GRENDEL), refining imperfect domain theories (FORTE, AUDREY, Rx), and learning effective search-control knowledge (AxA-EBL, DOLPHIN).</blockquote>
<!WA70><a href="file://ftp.cs.utexas.edu/pub/mooney/papers/ilp-ebl-sigart-94.ps.Z"><!WA71><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p>

<!-- =========================================================================== -->
<a name="neither-informatica-94.ps.Z"></a>
<b> <li> Extending Theory Refinement to M-of-N Rules </b> <br>
Paul T. Baffes and Raymond J. Mooney <br>
<cite> Informatica</cite>, 17 (1993), pp. 387-397. <p>
<blockquote>In recent years, machine learning research has started addressing a problem known as <em>theory refinement</em>. The goal of a theory refinement learner is to modify an incomplete or incorrect rule base, representing a domain theory, to make it consistent with a set of input training examples. This paper presents a major revision of the EITHER propositional theory refinement system. Two issues are discussed. First, we show how run time efficiency can be greatly improved by changing from an exhaustive scheme for computing repairs to an iterative greedy method. Second, we show how to extend EITHER to refine M-of-N rules. The resulting algorithm, NEITHER (New EITHER), is more than an order of magnitude faster and produces significantly more accurate results with theories that fit the M-of-N format. To demonstrate the advantages of NEITHER, we present experimental results from two real-world domains.</blockquote>
<!WA72><a href="file://ftp.cs.utexas.edu/pub/mooney/papers/neither-informatica-94.ps.Z"><!WA73><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p>

<!-- =========================================================================== -->
<a name="lab-masters-93.ps.Z"></a>
<b> <li> Inductive Learning for Abductive Diagnosis </b> <br>
Cynthia Thompson <br>
M.A. Thesis, Department of Computer Sciences, University of Texas at Austin, 1993. <p>
<blockquote>A new system for learning by induction, called LAB, is presented. LAB (Learning for ABduction) learns abductive rules based on a set of training examples. Our goal is to find a small knowledge base which, when used abductively, diagnoses the training examples correctly, in addition to generalizing well to unseen examples. This is in contrast to past systems, which inductively learn rules that are used deductively. Abduction is particularly well suited to diagnosis, in which we are given a set of symptoms (manifestations) and we want our output to be a set of disorders which explain why the manifestations are present. Each training example is associated with potentially multiple categories, instead of one, as is the case with typical learning systems. Building the knowledge base requires a choice between multiple possibilities, and the number of possibilities grows exponentially with the number of training examples. One method of choosing the best knowledge base is described and implemented. The final system is experimentally evaluated, using data from the domain of diagnosing brain damage due to stroke. It is compared to other learning systems and a knowledge base produced by an expert. The results are promising: the rule base learned is simpler than the expert knowledge base and the rules learned by one of the other systems, and the accuracy of the learned rule base in predicting which areas are damaged is better than that of all the other systems as well as the expert knowledge base.</blockquote>
<!WA74><a href="file://ftp.cs.utexas.edu/pub/mooney/papers/lab-masters-93.ps.Z"><!WA75><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p>

<!-- =========================================================================== -->
<a name="assert-proposal-93.ps.Z"></a>
<b> <li> Learning to Model Students: Using Theory Refinement to Detect Misconceptions </b> <br>
Paul T. Baffes <br>
Ph.D. proposal, Department of Computer Sciences, University of Texas at Austin, 1993. <p>
<blockquote>A new student modeling system called ASSERT is described which uses domain-independent learning algorithms to model unique student errors and to automatically construct bug libraries. ASSERT consists of two learning phases. The first is an application of theory refinement techniques for constructing student models from a correct theory of the domain being tutored. The second learning cycle automatically constructs the bug library by extracting common refinements from multiple student models, which are then used to bias future modeling efforts. Initial experimental data will be presented which suggests that ASSERT is a more effective modeling system than other induction techniques previously explored for student modeling, and that the automatic bug library construction significantly enhances subsequent modeling efforts.</blockquote>
<!WA76><a href="file://ftp.cs.utexas.edu/pub/mooney/papers/assert-proposal-93.ps.Z"><!WA77><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p>

<!-- =========================================================================== -->
<a name="dolphin-chill-proposal-93.ps.Z"></a>
<b> <li> Learning Search-Control Heuristics for Logic Programs: Applications to Speedup Learning and Language Acquisition </b> <br>
John M. Zelle <br>
Ph.D. proposal, Department of Computer Sciences, University of Texas at Austin, 1993. <p>
<blockquote>This paper presents a general framework, learning search-control heuristics for logic programs, which can be used to improve both the efficiency and accuracy of knowledge-based systems expressed as definite-clause logic programs. The approach combines techniques of explanation-based learning and recent advances in inductive logic programming to learn clause-selection heuristics that guide program execution. Two specific applications of this framework are detailed: dynamic optimization of Prolog programs (improving efficiency) and natural language acquisition (improving accuracy). In the area of program optimization, a prototype system, DOLPHIN, is able to transform some intractable specifications into polynomial-time algorithms, and outperforms competing approaches in several benchmark speedup domains. A prototype language acquisition system, CHILL, is also described. It is capable of automatically acquiring semantic grammars, which uniformly incorporate syntactic and semantic constraints to parse sentences into case-role representations. Initial experiments show that this approach is able to construct accurate parsers which generalize well to novel sentences and significantly outperform previous approaches to learning case-role mapping based on connectionist techniques. Planned extensions of the general framework and the specific applications, as well as plans for further evaluation, are also discussed.</blockquote>
<!WA78><a href="file://ftp.cs.utexas.edu/pub/mooney/papers/dolphin-chill-proposal-93.ps.Z"><!WA79><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p>

<!-- =========================================================================== -->
<a name="dolphin-ijcai-93.ps.Z"></a>
<b> <li> Combining FOIL and EBG to Speed-Up Logic Programs </b> <br>
John M. Zelle and Raymond J. Mooney <br>
<cite> Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence</cite>, pp. 1106-1111, Chambery, France, 1993. (IJCAI-93) <p>
<blockquote>This paper presents an algorithm that combines traditional EBL techniques and recent developments in inductive logic programming to learn effective clause selection rules for Prolog programs. When these control rules are incorporated into the original program, significant speed-up may be achieved. The algorithm is shown to be an improvement over competing EBL approaches in several domains. Additionally, the algorithm is capable of automatically transforming some intractable algorithms into ones that run in polynomial time.</blockquote>
<!WA80><a href="file://ftp.cs.utexas.edu/pub/mooney/papers/dolphin-ijcai-93.ps.Z"><!WA81><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p>

<!-- =========================================================================== -->
<a name="neither-ijcai-93.ps.Z"></a>
<b> <li> Symbolic Revision of Theories With M-of-N Rules </b> <br>
Paul T. Baffes and Raymond J. Mooney <br>
<cite> Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence</cite>, pp. 1135-1140, Chambery, France, 1993. (IJCAI-93) <p>
<blockquote>This paper presents a major revision of the EITHER propositional theory refinement system. Two issues are discussed. First, we show how run time efficiency can be greatly improved by changing from an exhaustive scheme for computing repairs to an iterative greedy method. Second, we show how to extend EITHER to refine M-of-N rules. The resulting algorithm, NEITHER (New EITHER), is more than an order of magnitude faster and produces significantly more accurate results with theories that fit the M-of-N format. To demonstrate the advantages of NEITHER, we present preliminary experimental results comparing it to EITHER and various other systems on refining the DNA promoter domain theory.</blockquote>
<!WA82><a href="file://ftp.cs.utexas.edu/pub/mooney/papers/neither-ijcai-93.ps.Z"><!WA83><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p>

<!-- =========================================================================== -->
<a name="chill-aaai-93.ps.Z"></a>
<b> <li> Learning Semantic Grammars With Constructive Inductive Logic Programming </b> <br>
John M. Zelle and Raymond J. Mooney <br>
<cite> Proceedings of the Eleventh National Conference of the American Association for Artificial Intelligence</cite>, pp. 817-822, Washington, D.C., July 1993. (AAAI-93) <p>
<blockquote>Automating the construction of semantic grammars is a difficult and interesting problem for machine learning. This paper shows how the semantic-grammar acquisition problem can be viewed as the learning of search-control heuristics in a logic program. Appropriate control rules are learned using a new first-order induction algorithm that automatically invents useful syntactic and semantic categories. Empirical results show that the learned parsers generalize well to novel sentences and out-perform previous approaches based on connectionist techniques.</blockquote>
<!WA84><a href="file://ftp.cs.utexas.edu/pub/mooney/papers/chill-aaai-93.ps.Z"><!WA85><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p>

<!-- =========================================================================== -->
<a name="rapture-connsci-94.ps.Z"></a>
<b> <li> Combining Connectionist and Symbolic Learning to Refine Certainty-Factor Rule-Bases </b> <br>
J. Jeffrey Mahoney and Raymond J. Mooney <br>
<cite> Connection Science</cite>, 5 (1993), pp. 339-364. (Special issue on Architectures for Integrating Neural and Symbolic Processing) <
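
Two of the abstracts above (the Informatica and IJCAI-93 NEITHER papers) refine theories containing M-of-N rules, i.e. rules that fire when at least M of their N antecedents hold. The short Python sketch below only illustrates that evaluation under simple assumptions; the rule, feature names, and example are hypothetical and are not taken from the EITHER/NEITHER systems.

    def m_of_n_satisfied(m, antecedents, example):
        # An M-of-N rule fires when at least m of its n antecedents hold.
        # 'antecedents' is a list of (feature, required_value) pairs and
        # 'example' maps feature names to observed values. Hypothetical
        # illustration only; not code from the EITHER/NEITHER systems.
        hits = sum(1 for feature, value in antecedents
                   if example.get(feature) == value)
        return hits >= m

    # A hypothetical 2-of-3 rule and a single example.
    rule = [("minus_35", "ttgaca"), ("minus_10", "tataat"), ("conformation", "ok")]
    example = {"minus_35": "ttgaca", "minus_10": "tataat", "conformation": "bad"}
    print(m_of_n_satisfied(2, rule, example))  # True: 2 of the 3 antecedents hold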
