From: fervvac (高远), Board: DataMining
Subject: Re: What is meta-learning? How should it be translated?
Site: Nanjing University Lily BBS (Wed Apr 10 12:20:52 2002), on-site post
Thanks for the essay. Together with daniel's authoritative explanation, meta-learning
in this example denotes the process of learning the parameters/structure of a
super-classifier given a set of classifiers.
Just some quick comments:
1. I know little about AI, but I remember a faculty member telling me that the problem
of combining multiple classifiers has been studied extensively, and that such
combinations work well in practice for a small number of classifiers.
2. The weighted classifiers seem simple, perhaps a bit rough? I would expect
w_k not to be constant; I doubt that a set of constant w_k leads to
optimality (one can easily construct a counter-example).
3. A related, and more important, issue is how to evaluate the result: if
optimality is achieved, what is the objective function?
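On point 2, here is a minimal sketch of one such counter-example (my own construction, not from any post here; it assumes a regression setting with squared error): two base predictors f1(x) = x and f2(x) = -x trying to fit the target |x|. Any constant combination w1*f1 + w2*f2 = (w1 - w2)*x is linear in x and so can never match |x|, while an input-dependent weighting recovers it exactly.

```python
import numpy as np

x = np.linspace(-1, 1, 201)
target = np.abs(x)
f1, f2 = x, -x                            # two base predictors

# Best constant weights: least squares over w1*f1 + w2*f2 = (w1 - w2)*x,
# which is linear in x and cannot equal |x| -- the error stays positive.
F = np.stack([f1, f2], axis=1)
w, *_ = np.linalg.lstsq(F, target, rcond=None)
mse_const = np.mean((F @ w - target) ** 2)

# Input-dependent weights: use f2 where x < 0 and f1 where x >= 0 -> exact.
gated = np.where(x < 0, f2, f1)
mse_gated = np.mean((gated - target) ** 2)

print(mse_const > 0.0, mse_gated == 0.0)  # constant weights fail, gating is exact
```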
Just some provocative questions from a layman, hoping to raise some network
traffic on this board, :p.
【 Quoting the post by lucky (乐凯): 】
: I don't know how to translate it, maybe (Yuan(2) Xue(2) Xi(2)). But I can give
: you some ideas about it.
:
: Suppose you have several classifiers c1, c2, ..., ck (called base classifiers) for
: a learning problem; each one will make a classification decision when seeing a
: query instance. Meta-learning technology tries to learn a meta-classifier c based
: on c1-ck that outperforms any single base classifier.
:
: There are several kinds of meta-learning strategy: combiner, arbiter, and
: multi-level. For example, the combiner takes the labels produced by each base
: classifier, together with the real label, as a training example for the
: meta-learner, which can use any learning algorithm to learn from these examples.
:
: Stacked generalization and stacked regression are good meta-learning methods
: that have been verified effective in many empirical studies. The idea of stacked
: regression is like this: suppose there is a training instance q; each base
: learner ci makes a prediction fi(q) (here we assume the predictions are
: real-valued, but it also works for categorical data), and we try to combine the
: base learners by weighting them. The prediction made by the meta-learner is
: w1 * f1(q) + w2 * f2(q) + ... + wk * fk(q). w1, w2, ..., wk are the weights fo
: (remainder of quotation omitted ... ...)
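For concreteness, the weighted combination in the quoted stacked-regression description can be sketched in Python. This is my own illustration, not from the original post: it assumes squared error as the objective and fits the weights w1..wk by ordinary least squares on a set of base-learner predictions against the true values.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=200)                      # held-out true values
# Base learners' predictions fi(q): here simulated as the target plus
# noise of increasing size, so the learners differ in quality.
F = np.stack([y + rng.normal(scale=s, size=200)
              for s in (0.1, 0.5, 1.0)], axis=1)

# Fit w1..wk by minimizing ||F w - y||^2 (ordinary least squares).
w, *_ = np.linalg.lstsq(F, y, rcond=None)
meta_pred = F @ w                             # w1*f1(q) + ... + wk*fk(q)

mse_meta = np.mean((meta_pred - y) ** 2)
mse_each = np.mean((F - y[:, None]) ** 2, axis=0)
print(mse_meta <= mse_each.min())             # the combiner is at least as
                                              # good as any single learner
                                              # on the fitting set
```

The inequality holds on the fitting set because each single base learner corresponds to a unit-vector choice of w, which least squares is free to pick; whether it generalizes is exactly the evaluation question raised in point 3 above.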
--
※ Source: Nanjing University Lily BBS bbs.nju.edu.cn [FROM: 饮水思源BBS]