
http://www.cs.utexas.edu/users/ml/theory-rev.html

This data set contains WWW-pages collected from computer science departments of various universities.
Student modeling has been identified as an important component to the long-term development of Intelligent Computer-Aided Instruction (ICAI) systems. Two basic approaches have evolved to model student misconceptions. One uses a static, predefined library of user bugs which contains the misconceptions modeled by the system. The other uses induction to learn student misconceptions from scratch. Here, we present a third approach that uses a machine learning technique called theory revision. Using theory revision allows the system to automatically construct a bug library for use in modeling while retaining the flexibility to address novel errors.
file://ftp.cs.utexas.edu/pub/mooney/papers/assert-cogsci-92.ps.Z

A Preliminary PAC Analysis of Theory Revision
Raymond J. Mooney
March 1992
Computational Learning Theory and Natural Learning Systems, Vol. 3, T. Petsche, S. Judd, and S. Hanson, Eds., MIT Press, 1995, pp. 43-53.
This paper presents a preliminary analysis of the sample complexity of theory revision within the framework of PAC (Probably Approximately Correct) learnability theory. By formalizing the notion that the initial theory is "close" to the correct theory, we show that the sample complexity of an optimal propositional Horn-clause theory revision algorithm is $O((\ln 1/\delta + d \ln (s_0 + d + n)) / \epsilon)$, where $d$ is the syntactic distance between the initial and correct theories, $s_0$ is the size of the initial theory, $n$ is the number of observable features, and $\epsilon$ and $\delta$ are the standard PAC error and probability bounds. The paper also discusses the problems raised by the computational complexity of theory revision.
file://ftp.cs.utexas.edu/pub/mooney/papers/pac-bkchapter-94.ps.Z

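For quick reference, the bound quoted in the abstract above can be restated as a display formula, with $m$ denoting the number of training examples required (this is simply the abstract's own expression rewritten):

    $m = O\left(\frac{\ln(1/\delta) + d \ln(s_0 + d + n)}{\epsilon}\right)$

The bound is essentially linear in the syntactic distance $d$ (up to a logarithmic factor) and in $1/\epsilon$, but only logarithmic in the size of the initial theory and in the number of observable features, which formalizes the intuition that a nearly correct initial theory should require few examples to repair.
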
Automated Debugging of Logic Programs via Theory Revision
Raymond J. Mooney & Bradley L. Richards
Proceedings of the Second International Workshop on Inductive Logic Programming, Tokyo, Japan, June 1992.
This paper presents results on using a theory revision system to automatically debug logic programs. FORTE is a recently developed system for revising function-free Horn-clause theories. Given a theory and a set of training examples, it performs a hill-climbing search in an attempt to minimally modify the theory to correctly classify all of the examples. FORTE makes use of methods from propositional theory revision, Horn-clause induction (FOIL), and inverse resolution. The system has been successfully used to debug logic programs written by undergraduate students for a programming languages course.
file://ftp.cs.utexas.edu/pub/mooney/papers/forte-ilp-92.ps.Z

Batch versus Incremental Theory Refinement
Raymond J. Mooney
Proceedings of the AAAI Spring Symposium on Knowledge Assimilation, Stanford, CA, March 1992.
Most existing theory refinement systems are not incremental. However, any theory refinement system whose input and output theories are compatible can be used to incrementally assimilate data into an evolving theory. This is done by continually feeding its revised theory back in as its input theory. An incremental batch approach, in which the system assimilates a batch of examples at each step, seems most appropriate for existing theory revision systems. Experimental results with the EITHER theory refinement system demonstrate that this approach frequently increases efficiency without significantly decreasing the accuracy or the simplicity of the resulting theory. However, if the system produces bad initial changes to the theory based on only a small amount of data, these bad revisions can "snowball" and result in an overall decrease in performance.
file://ftp.cs.utexas.edu/pub/mooney/papers/either-aaaisymp-92.ps.Z

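To make the two abstracts above concrete, here is a minimal Python sketch of (a) a generic hill-climbing revision step of the kind the FORTE abstract describes, and (b) the incremental batch loop that feeds each revised theory back in as the input theory for the next batch. This is an illustrative sketch only, not FORTE or EITHER code; Theory, Example, propose_revisions, accuracy, and the function names are hypothetical stand-ins for the corresponding pieces of a real revision system.

    from typing import Callable, Iterable, List, Sequence

    Theory = List[str]   # hypothetical stand-in: a Horn-clause theory as a list of rule strings
    Example = dict       # hypothetical stand-in: one pre-classified training instance


    def hill_climb_revise(
        theory: Theory,
        examples: Sequence[Example],
        propose_revisions: Callable[[Theory, Sequence[Example]], Iterable[Theory]],
        accuracy: Callable[[Theory, Sequence[Example]], float],
    ) -> Theory:
        """Greedy hill-climbing revision: repeatedly apply whichever candidate
        revision most improves accuracy on the examples, stopping at a local
        optimum or once every example is classified correctly."""
        current, score = theory, accuracy(theory, examples)
        while score < 1.0:
            scored = [(accuracy(candidate, examples), candidate)
                      for candidate in propose_revisions(current, examples)]
            if not scored:
                break
            best_score, best = max(scored, key=lambda pair: pair[0])
            if best_score <= score:
                break  # no candidate revision improves the theory any further
            current, score = best, best_score
        return current


    def incremental_batch_refinement(
        initial_theory: Theory,
        batches: Iterable[Sequence[Example]],
        revise: Callable[[Theory, Sequence[Example]], Theory],
    ) -> Theory:
        """Incremental batch assimilation: the revised theory produced for one
        batch is fed back in as the input theory for the next batch."""
        theory = initial_theory
        for batch in batches:
            theory = revise(theory, batch)
        return theory

A pure batch system would instead call the revision step once over the entire example set; the abstract above compares these two regimes empirically.
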
A Multistrategy Approach to Theory Refinement
Raymond J. Mooney & Dirk Ourston
Machine Learning: A Multistrategy Approach, Vol. IV, R. S. Michalski & G. Tecuci (eds.), pp. 141-164, Morgan Kaufmann, San Mateo, CA, 1994.
This chapter describes a multistrategy system that employs independent modules for deductive, abductive, and inductive reasoning to revise an arbitrarily incorrect propositional Horn-clause domain theory to fit a set of preclassified training instances. By combining such diverse methods, EITHER is able to handle a wider range of imperfect theories than other theory revision systems while guaranteeing that the revised theory will be consistent with the training data. EITHER has successfully revised two actual expert theories, one in molecular biology and one in plant pathology. The results confirm the hypothesis that using a multistrategy system to learn from both theory and data gives better results than using either theory or data alone.
file://ftp.cs.utexas.edu/pub/mooney/papers/either-bkchapter-94.ps.Z

Theory Refinement Combining Analytical and Empirical Methods
Dirk Ourston and Raymond J. Mooney
Artificial Intelligence, 66 (1994), pp. 311-344.
This article describes a comprehensive approach to automatic theory revision. Given an imperfect theory, the approach combines explanation attempts for incorrectly classified examples in order to identify the failing portions of the theory. For each theory fault, correlated subsets of the examples are used to inductively generate a correction. Because the corrections are focused, they tend to preserve the structure of the original theory. Because the system starts with an approximate domain theory, in general fewer training examples are required to attain a given level of performance (classification accuracy) compared to a purely empirical system. The approach applies to classification systems employing a propositional Horn-clause theory. The system has been tested in a variety of application domains, and results are presented for problems in the domains of molecular biology and plant disease diagnosis.
file://ftp.cs.utexas.edu/pub/mooney/papers/either-aij-94.ps.Z

Improving Shared Rules in Multiple Category Domain Theories
Dirk Ourston and Raymond J. Mooney
Proceedings of the Eighth International Machine Learning Workshop, pp. 534-538, Evanston, IL, June 1991.
This paper presents an approach to improving the classification performance of a multiple category theory by correcting intermediate rules which are shared among the categories. Using this technique, the performance of a theory in one category can be improved through training in an entirely different category. Examples of the technique are presented and experimental results are given.
file://ftp.cs.utexas.edu/pub/mooney/papers/either-td-ml-91.ps.Z

Constructive Induction in Theory Refinement
Raymond J. Mooney and Dirk Ourston
Proceedings of the Eighth International Machine Learning Workshop, pp. 178-182, Evanston, IL, June 1991.
This paper presents constructive induction techniques recently added to the EITHER theory refinement system. These additions allow EITHER to handle arbitrary gaps at the "top," "middle," and/or "bottom" of an incomplete domain theory. Intermediate concept utilization employs existing rules in the theory to derive higher-level features for use in induction. Intermediate concept creation employs inverse resolution to introduce new intermediate concepts in order to fill gaps in a theory that span multiple levels. These revisions allow EITHER to make use of imperfect domain theories in the ways typical of previous work in both constructive induction and theory refinement. As a result, EITHER is able to handle a wider range of theory imperfections than does any other existing theory refinement system.
file://ftp.cs.utexas.edu/pub/mooney/papers/either-ci-ml-91.ps.Z

Theory Refinement with Noisy Data
Raymond J. Mooney and Dirk Ourston
Technical Report AI 91-153, Artificial Intelligence Lab, University of Texas at Austin, March 1991.
This paper presents a method for revising an approximate domain theory based on noisy data. The basic idea is to avoid making changes to the theory that account for only a small amount of data. This method is implemented in the EITHER propositional Horn-clause theory revision system. The paper presents empirical results on artificially corrupted data to show that this method successfully prevents over-fitting. In other words, when the data is noisy, performance on novel test data is considerably better than revising the theory to completely fit the data. When the data is not noisy, noise processing causes no significant degradation in performance. Finally, noise processing increases efficiency and decreases the complexity of the resulting theory.
file://ftp.cs.utexas.edu/pub/mooney/papers/either-tr-91.ps.Z

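The noise-handling idea in the last abstract above, avoiding revisions that account for only a small amount of data, can be sketched as a simple support filter over candidate revisions. This is a rough illustration under assumed interfaces, not EITHER's actual criterion; filter_noisy_revisions, examples_fixed, and the min_support threshold are hypothetical.

    from typing import Callable, List, Sequence, TypeVar

    TheoryT = TypeVar("TheoryT")
    ExampleT = TypeVar("ExampleT")


    def filter_noisy_revisions(
        candidates: Sequence[TheoryT],
        misclassified: Sequence[ExampleT],
        examples_fixed: Callable[[TheoryT, Sequence[ExampleT]], int],
        min_support: int = 3,
    ) -> List[TheoryT]:
        """Discard candidate revisions that account for only a small amount of
        data: keep a revision only if it corrects at least `min_support` of the
        currently misclassified examples.  The threshold is illustrative; a real
        system might scale it with the noise level or the size of the data set."""
        return [revision for revision in candidates
                if examples_fixed(revision, misclassified) >= min_support]
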
estlin@cs.utexas.edu
