
http://www.cs.utexas.edu/users/ml/ilp.html

This data set contains WWW pages collected from computer science departments of various universities.
<a href="file://ftp.cs.utexas.edu/pub/mooney/papers/chill-dissertation-95.ps.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p><! ===========================================================================><a name="chill-compling-95.ps.Z"></a><b><li>An Inductive Logic Programming Method for Corpus-based Parser Construction<br></b>John M. Zelle and Raymond J. Mooney<br>Submitted to <cite>Computational Linguistics</cite><p><blockquote>In recent years there has been considerable research into corpus-based methods for parser construction. A common thread in this research has been the use of propositional representations for learned knowledge. This paper presents an alternative approach based on techniques from a subfield of machine learning known as inductive logic programming (ILP). ILP, which investigates the learning of relational (first-order) rules, provides a way of using empirical methods to acquire knowledge within traditional, symbolic parsing frameworks. We describe a novel method for constructing deterministic Prolog parsers from corpora of parsed sentences. We also discuss several advantages of this approach compared to propositional alternatives and present experimental results on learning complete parsers using several corpora including the ATIS corpus from the Penn Treebank.</blockquote><a href="file://ftp.cs.utexas.edu/pub/mooney/papers/chill-compling-95.ps.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p><! ===========================================================================><a name="chill-ijcai-nll-95.ps.Z"></a><b><li>A Comparison of Two Methods Employing Inductive Logic Programming for Corpus-based Parser Construction<br></b>John M. Zelle and Raymond J.
Mooney<br><cite>Working Notes of the IJCAI-95 Workshop on New Approaches to Learning for Natural Language Processing</cite>, pp. 79-86, Montreal, Quebec, August 1995.<p><blockquote>This paper presents results from recent experiments with CHILL, a corpus-based parser acquisition system. CHILL treats grammar acquisition as the learning of search-control rules within a logic program. Unlike many current corpus-based approaches that use propositional or probabilistic learning algorithms, CHILL uses techniques from inductive logic programming (ILP) to learn relational representations. The reported experiments compare CHILL's performance to that of a more naive application of ILP to parser acquisition. The results show that ILP techniques, as employed in CHILL, are a viable alternative to propositional methods and that the control-rule framework is fundamental to CHILL's success.</blockquote><a href="file://ftp.cs.utexas.edu/pub/mooney/papers/chill-ijcai-nll-95.ps.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p><! ===========================================================================><a name="chill-ifoil-ml-95.ps.Z"></a><b><li>Inducing Logic Programs without Explicit Negative Examples<br></b>John M. Zelle, Cynthia A. Thompson, Mary Elaine Califf, and Raymond J. Mooney<br><cite>Proceedings of the Fifth International Workshop on Inductive Logic Programming</cite>, Leuven, Belgium, September 1995.<p><blockquote>This paper presents a method for learning logic programs without explicit negative examples by exploiting an assumption of <i>output completeness</i>. A mode declaration is supplied for the target predicate and each training input is assumed to be accompanied by all of its legal outputs.
Any other outputs generated by an incomplete program implicitly represent negative examples; however, large numbers of ground negative examples never need to be generated. This method has been incorporated into two ILP systems, CHILLIN and IFOIL, both of which use intensional background knowledge. Tests on two natural language acquisition tasks, case-role mapping and past-tense learning, illustrate the advantages of the approach.</blockquote><a href="file://ftp.cs.utexas.edu/pub/mooney/papers/chill-ifoil-ml-95.ps.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p><! ===========================================================================><a name="foidl-jair-95.ps.Z"></a><b><li>Induction of First-Order Decision Lists: Results on Learning the Past Tense of English Verbs<br></b>Raymond J. Mooney and Mary Elaine Califf<br><cite>Journal of Artificial Intelligence Research</cite>, 3 (1995), pp. 1-24.<p><blockquote>This paper presents a method for inducing logic programs from examples that learns a new class of concepts called first-order decision lists, defined as ordered lists of clauses each ending in a cut. The method, called FOIDL, is based on FOIL but employs intensional background knowledge and avoids the need for explicit negative examples. It is particularly useful for problems that involve rules with specific exceptions, such as learning the past tense of English verbs, a task widely studied in the context of the symbolic/connectionist debate. FOIDL is able to learn concise, accurate programs for this problem from significantly fewer examples than previous methods (both connectionist and symbolic).</blockquote><a href="file://ftp.cs.utexas.edu/pub/mooney/papers/foidl-jair-95.ps.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p><!
===========================================================================><a name="chill-ml-94.ps.Z"></a><li><b>Combining Top-Down and Bottom-Up Techniques in Inductive Logic Programming</b><br>John M. Zelle, Raymond J. Mooney, and Joshua B. Konvisser<br><cite>Proceedings of the Eleventh International Workshop on Machine Learning</cite>, pp. 343-351, Rutgers, NJ, July 1994. (ML-94)<p><blockquote>This paper describes a new method for inducing logic programs from examples which attempts to integrate the best aspects of existing ILP methods into a single coherent framework. In particular, it combines a bottom-up method similar to GOLEM with a top-down method similar to FOIL. It also includes a method for predicate invention similar to CHAMP and an elegant solution to the "noisy oracle" problem which allows the system to learn recursive programs without requiring a complete set of positive examples. Systematic experimental comparisons to both GOLEM and FOIL on a range of problems are used to clearly demonstrate the advantages of the approach.</blockquote><a href="file://ftp.cs.utexas.edu/pub/mooney/papers/chill-ml-94.ps.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p><! ===========================================================================><a name="chill-aaai-94.ps.Z"></a><b><li>Inducing Deterministic Prolog Parsers from Treebanks: A Machine Learning Approach</b><br>John M. Zelle and Raymond J. Mooney<br><cite>Proceedings of the Twelfth National Conference on AI</cite>, pp. 748-753, Seattle, WA, July 1994. (AAAI-94)<p><blockquote>This paper presents a method for constructing deterministic, context-sensitive, Prolog parsers from corpora of parsed sentences. Our approach uses recent machine learning methods for inducing Prolog rules from examples (inductive logic programming).
We discuss several advantages of this method compared to recent statistical methods and present results on learning complete parsers from portions of the ATIS corpus.</blockquote><a href="file://ftp.cs.utexas.edu/pub/mooney/papers/chill-aaai-94.ps.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p><! ===========================================================================><a name="ilp-ebl-sigart-94.ps.Z"></a><b><li>Integrating ILP and EBL</b><br>Raymond J. Mooney and John M. Zelle<br><cite>SIGART Bulletin</cite>, Volume 5, Number 1, January 1994, pp. 12-21.<p><blockquote>This paper presents a review of recent work that integrates methods from Inductive Logic Programming (ILP) and Explanation-Based Learning (EBL). ILP and EBL methods have complementary strengths and weaknesses, and a number of recent projects have effectively combined them into systems with better performance than either of the individual approaches. In particular, integrated systems have been developed for guiding induction with prior knowledge (ML-SMART, FOCL, GRENDEL), refining imperfect domain theories (FORTE, AUDREY, Rx), and learning effective search-control knowledge (AxA-EBL, DOLPHIN).</blockquote><a href="file://ftp.cs.utexas.edu/pub/mooney/papers/ilp-ebl-sigart-94.ps.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p><! ===========================================================================><a name="dolphin-ijcai-93.ps.Z"></a><b><li>Combining FOIL and EBG to Speed-Up Logic Programs</b><br>John M. Zelle and Raymond J. Mooney<br><cite>Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence</cite>, pp. 1106-1111, Chambery, France, 1993. (IJCAI-93)<p><blockquote>This paper presents an algorithm that combines traditional EBL techniques and recent developments in inductive logic programming to learn effective clause selection rules for Prolog programs.
When these control rules are incorporated into the original program, significant speed-up may be achieved. The algorithm is shown to be an improvement over competing EBL approaches in several domains. Additionally, the algorithm is capable of automatically transforming some intractable algorithms into ones that run in polynomial time.</blockquote><a href="file://ftp.cs.utexas.edu/pub/mooney/papers/dolphin-ijcai-93.ps.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p><! ===========================================================================><a name="chill-aaai-93.ps.Z"></a><b><li>Learning Semantic Grammars with Constructive Inductive Logic Programming</b><br>John M. Zelle and Raymond J. Mooney<br><cite>Proceedings of the Eleventh National Conference of the American Association for Artificial Intelligence</cite>, pp. 817-822, Washington, D.C., July 1993. (AAAI-93)<p><blockquote>Automating the construction of semantic grammars is a difficult and interesting problem for machine learning. This paper shows how the semantic-grammar acquisition problem can be viewed as the learning of search-control heuristics in a logic program. Appropriate control rules are learned using a new first-order induction algorithm that automatically invents useful syntactic and semantic categories. Empirical results show that the learned parsers generalize well to novel sentences and out-perform previous approaches based on connectionist techniques.</blockquote><a href="file://ftp.cs.utexas.edu/pub/mooney/papers/chill-aaai-93.ps.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p><! ===========================================================================><a name="dolphin-mlw-92.ps.Z"></a><b><li>Speeding-up Logic Programs by Combining EBG and FOIL</b><br>John M. Zelle and Raymond J.
Mooney<br><cite>Proceedings of the 1992 Machine Learning Workshop on Knowledge Compilation and Speedup Learning</cite>, Aberdeen, Scotland, July 1992.<p><blockquote>This paper presents an algorithm that combines traditional EBL techniques and recent developments in inductive logic programming to learn effective clause selection rules for Prolog programs. When these control rules are incorporated into the original program, significant speed-up may be achieved. The algorithm produces not only EBL-like speed-up of problem solvers, but is capable of automatically transforming some intractable algorithms into ones that run in polynomial time.</blockquote><a href="file://ftp.cs.utexas.edu/pub/mooney/papers/dolphin-mlw-92.ps.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p><! ===========================================================================><a name="forte-mlj-94.ps.Z"></a><b><li>Refinement of First-Order Horn-Clause Domain Theories</b><br>Bradley L. Richards and Raymond J. Mooney<br><cite>Machine Learning</cite>, 19(2) (1995), pp. 95-131.<p><blockquote>Knowledge acquisition is a difficult and time-consuming task, and as error-prone as any human activity. The task of automatically improving an existing knowledge base using learning methods is addressed by a new class of systems performing <i>theory refinement</i>. Until recently, such systems were limited to propositional theories. This paper presents a system, FORTE (First-Order Revision of Theories from Examples), for refining first-order Horn-clause theories. Moving to a first-order representation opens many new problem areas, such as logic program debugging and qualitative modelling, that are beyond the reach of propositional systems. FORTE uses a hill-climbing approach to revise theories. It identifies possible errors in the theory and calls on a library of operators to develop possible revisions.
The best revision is implemented, and the process repeats until no further revisions are possible. Operators are drawn from a variety of sources, including propositional theory refinement, first-order induction, and inverse resolution. FORTE has been tested in several domains including logic programming and qualitative modelling.</blockquote><a href="file://ftp.cs.utexas.edu/pub/mooney/papers/forte-mlj-94.ps.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p><! ===========================================================================><hr><address><a href="http://www.cs.utexas.edu/users/estlin/">estlin@cs.utexas.edu</a></address>
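The first-order decision lists in the FOIDL paper above are ordered lists of clauses each ending in a cut: the first clause that matches fires, so specific exceptions can sit ahead of more general defaults. A minimal sketch of that control regime for the past-tense task, using invented rules rather than anything FOIDL actually learned:

```python
# Illustrative decision list for English past tense (hypothetical rules,
# not FOIDL's learned program). Rules are tried in order and the first
# match fires, mimicking Prolog clauses that each end in a cut (!).

RULES = [
    # (condition on the stem, transformation) -- exceptions come first
    (lambda w: w == "go",       lambda w: "went"),          # irregular exception
    (lambda w: w.endswith("e"), lambda w: w + "d"),         # dance -> danced
    (lambda w: w.endswith("y") and w[-2] not in "aeiou",
                                lambda w: w[:-1] + "ied"),  # carry -> carried
    (lambda w: True,            lambda w: w + "ed"),        # general default rule
]

def past_tense(word: str) -> str:
    """Fire the first applicable rule, like a clause ending in a cut."""
    for cond, action in RULES:
        if cond(word):
            return action(word)
    raise ValueError(word)
```

Because later rules are only reached when earlier ones fail, the default `+ed` rule never has to enumerate its exceptions, which is why this representation suits problems with "rules plus exceptions" structure.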
