http://www.cs.utexas.edu/users/ml/theory-rev.html
based on the models automatically generated by ASSERT performed significantly better on a post-test than students who received simple reteaching.</blockquote>
<a href="file://ftp.cs.utexas.edu/pub/mooney/papers/assert-dissertation-94.tar.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p>
<! ===========================================================================>
<a name="rapture-ml-94.ps.Z"></a><b> <li> Comparing Methods For Refining Certainty Factor Rule-Bases </b> <br>
J. Jeffrey Mahoney and Raymond J. Mooney <br>
<cite> Proceedings of the Eleventh International Workshop on Machine Learning</cite>, pp. 173-180, Rutgers, NJ, July 1994. (ML-94) <p>
<blockquote>This paper compares two methods for refining uncertain knowledge bases using propositional certainty-factor rules. The first method, implemented in the RAPTURE system, employs neural-network training to refine the certainties of existing rules but uses a symbolic technique to add new rules. The second method, based on the one used in the KBANN system, initially adds a complete set of potential new rules with very low certainty and allows neural-network training to filter and adjust these rules. Experimental results indicate that the former method results in significantly faster training and produces much simpler refined rule bases with slightly greater accuracy.</blockquote>
<a href="file://ftp.cs.utexas.edu/pub/mooney/papers/rapture-ml-94.ps.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p>
<! ===========================================================================>
<a name="rapture-isiknh-94.ps.Z"></a><b> <li> Modifying Network Architectures For Certainty-Factor Rule-Base Revision </b> <br>
J. Jeffrey Mahoney and Raymond J. Mooney <br>
<cite> Proceedings of the International Symposium on Integrating Knowledge and Neural Heuristics 1994</cite>, pp. 75-85, Pensacola, FL, May 1994.
(ISIKNH-94) <p>
<blockquote>This paper describes RAPTURE --- a system for revising probabilistic rule bases that converts symbolic rules into a connectionist network, which is then trained via connectionist techniques. It uses a modified version of backpropagation to refine the certainty factors of the rule base, and uses ID3's information-gain heuristic (Quinlan) to add new rules. Work is currently under way on improved techniques for modifying network architectures, including adding hidden units using the UPSTART algorithm (Frean). A case is made, via comparison with fully-connected connectionist techniques, for keeping the rule base as close to the original as possible, adding new input units only as needed.</blockquote>
<a href="file://ftp.cs.utexas.edu/pub/mooney/papers/rapture-isiknh-94.ps.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p>
<! ===========================================================================>
<a name="neither-informatica-94.ps.Z"></a><b> <li> Extending Theory Refinement to M-of-N Rules </b> <br>
Paul T. Baffes and Raymond J. Mooney <br>
<cite> Informatica</cite>, 17 (1993), pp. 387-397. <p>
<blockquote>In recent years, machine learning research has started addressing a problem known as <em>theory refinement</em>. The goal of a theory refinement learner is to modify an incomplete or incorrect rule base, representing a domain theory, to make it consistent with a set of input training examples. This paper presents a major revision of the EITHER propositional theory refinement system. Two issues are discussed. First, we show how run-time efficiency can be greatly improved by changing from an exhaustive scheme for computing repairs to an iterative greedy method. Second, we show how to extend EITHER to refine M-of-N rules. The resulting algorithm, NEITHER (New EITHER), is more than an order of magnitude faster and produces significantly more accurate results with theories that fit the M-of-N format.
To demonstrate the advantages of NEITHER, we present experimental results from two real-world domains.</blockquote>
<a href="file://ftp.cs.utexas.edu/pub/mooney/papers/neither-informatica-94.ps.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p>
<! ===========================================================================>
<a name="assert-proposal-93.ps.Z"></a><b> <li> Learning to Model Students: Using Theory Refinement to Detect Misconceptions </b> <br>
Paul T. Baffes <br>
Ph.D. proposal, Department of Computer Sciences, University of Texas at Austin, 1993. <p>
<blockquote>A new student modeling system called ASSERT is described which uses domain-independent learning algorithms to model unique student errors and to automatically construct bug libraries. ASSERT consists of two learning phases. The first is an application of theory refinement techniques for constructing student models from a correct theory of the domain being tutored. The second learning cycle automatically constructs the bug library by extracting common refinements from multiple student models, which are then used to bias future modeling efforts. Initial experimental data will be presented which suggests that ASSERT is a more effective modeling system than other induction techniques previously explored for student modeling, and that the automatic bug library construction significantly enhances subsequent modeling efforts.</blockquote>
<a href="file://ftp.cs.utexas.edu/pub/mooney/papers/assert-proposal-93.ps.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p>
<! ===========================================================================>
<a name="neither-ijcai-93.ps.Z"></a><b> <li> Symbolic Revision of Theories With M-of-N Rules </b> <br>
Paul T. Baffes and Raymond J. Mooney <br>
<cite> Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence</cite>, pp. 1135-1140, Chambery, France, 1993.
(IJCAI-93) <p>
<blockquote>This paper presents a major revision of the EITHER propositional theory refinement system. Two issues are discussed. First, we show how run-time efficiency can be greatly improved by changing from an exhaustive scheme for computing repairs to an iterative greedy method. Second, we show how to extend EITHER to refine M-of-N rules. The resulting algorithm, NEITHER (New EITHER), is more than an order of magnitude faster and produces significantly more accurate results with theories that fit the M-of-N format. To demonstrate the advantages of NEITHER, we present preliminary experimental results comparing it to EITHER and various other systems on refining the DNA promoter domain theory.</blockquote>
<a href="file://ftp.cs.utexas.edu/pub/mooney/papers/neither-ijcai-93.ps.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p>
<! ===========================================================================>
<a name="rapture-connsci-94.ps.Z"></a><b> <li> Combining Connectionist and Symbolic Learning to Refine Certainty-Factor Rule-Bases </b> <br>
J. Jeffrey Mahoney and Raymond J. Mooney <br>
<cite> Connection Science</cite>, 5 (1993), pp. 339-364. (Special issue on Architectures for Integrating Neural and Symbolic Processing) <p>
<blockquote>This paper describes RAPTURE --- a system for revising probabilistic knowledge bases that combines connectionist and symbolic learning methods. RAPTURE uses a modified version of backpropagation to refine the certainty factors of a Mycin-style rule base, and it uses ID3's information-gain heuristic to add new rules. Results on refining three actual expert knowledge bases demonstrate that this combined approach generally performs better than previous methods.</blockquote>
<a href="file://ftp.cs.utexas.edu/pub/mooney/papers/rapture-connsci-94.ps.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p>
<!
===========================================================================>
<a name="forte-mlj-94.ps.Z"></a><b> <li> Refinement of First-Order Horn-Clause Domain Theories </b> <br>
Bradley L. Richards and Raymond J. Mooney <br>
<cite> Machine Learning</cite>, 19, 2 (1995), pp. 95-131. <p>
<blockquote>Knowledge acquisition is a difficult and time-consuming task, and as error-prone as any human activity. The task of automatically improving an existing knowledge base using learning methods is addressed by a new class of systems performing <i>theory refinement</i>. Until recently, such systems were limited to propositional theories. This paper presents a system, FORTE (First-Order Revision of Theories from Examples), for refining first-order Horn-clause theories. Moving to a first-order representation opens many new problem areas, such as logic program debugging and qualitative modelling, that are beyond the reach of propositional systems. FORTE uses a hill-climbing approach to revise theories. It identifies possible errors in the theory and calls on a library of operators to develop possible revisions. The best revision is implemented, and the process repeats until no further revisions are possible. Operators are drawn from a variety of sources, including propositional theory refinement, first-order induction, and inverse resolution. FORTE has been tested in several domains including logic programming and qualitative modelling.</blockquote>
<a href="file://ftp.cs.utexas.edu/pub/mooney/papers/forte-mlj-94.ps.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p>
<! ===========================================================================>
<a name="rapture-mlw-92.ps.Z"></a><b> <li> Combining Symbolic and Neural Learning to Revise Probabilistic Theories </b> <br>
J. Jeffrey Mahoney and Raymond J. Mooney <br>
<cite> Proceedings of the 1992 Machine Learning Workshop on Integrated Learning in Real Domains</cite>, Aberdeen, Scotland, July 1992.
<p>
<blockquote>This paper describes RAPTURE --- a system for revising probabilistic theories that combines symbolic and neural-network learning methods. RAPTURE uses a modified version of backpropagation to refine the certainty factors of a Mycin-style rule base, and it uses ID3's information-gain heuristic to add new rules. Results on two real-world domains demonstrate that this combined approach performs as well as or better than previous methods.</blockquote>
<a href="file://ftp.cs.utexas.edu/pub/mooney/papers/rapture-mlw-92.ps.Z"><img align=top src="http://www.cs.utexas.edu/users/ml/paper.xbm"></a><p>
<! ===========================================================================>
<a name="assert-cogsci-92.ps.Z"></a><b> <li> Using Theory Revision to Model Students and Acquire Stereotypical Errors </b> <br>
Paul T. Baffes and Raymond J. Mooney <br>
<cite> Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society</cite>, pp. 617-622, Bloomington, IN, July 1992. <p>
<blockquote>
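<p>Several of the RAPTURE abstracts above refer to refining the certainty factors of a Mycin-style rule base. As background for readers unfamiliar with that formalism, here is a minimal sketch of the standard MYCIN combination function for two positive certainty factors; this is textbook MYCIN, not code from the papers:</p>

```python
def combine_cf(cf1, cf2):
    """Combine two positive MYCIN certainty factors (each in [0, 1]).

    Evidence accumulates monotonically: the combined belief is never
    less than either input, and never exceeds 1.
    """
    return cf1 + cf2 * (1 - cf1)

# Two rules supporting the same conclusion with CFs 0.6 and 0.5:
print(round(combine_cf(0.6, 0.5), 4))  # 0.8
```

<p>It is these per-rule certainty values, attached to each rule's conclusion, that a system like RAPTURE tunes with its modified backpropagation instead of tuning arbitrary network weights.</p>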
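<p>The NEITHER papers above extend theory refinement to M-of-N rules, in which a conclusion follows when at least M of the rule's N antecedents are satisfied. A minimal illustration of that semantics (the feature names are hypothetical, loosely styled after the DNA promoter theory, and are not taken from the papers):</p>

```python
def m_of_n_fires(m, antecedents, facts):
    """An M-of-N rule fires when at least m of its antecedents hold."""
    return sum(a in facts for a in antecedents) >= m

# A 2-of-3 rule: conclude `promoter` if 2 of {contact, conformation, minus_35} hold.
body = ["contact", "conformation", "minus_35"]
print(m_of_n_fires(2, body, {"contact", "minus_35"}))  # True
print(m_of_n_fires(2, body, {"conformation"}))         # False
```

<p>An ordinary conjunctive rule is the special case M = N, so M-of-N theories strictly generalize the propositional rule bases that EITHER handled.</p>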