From: GzLi (笑梨), Board: DataMining
Subject: [Compilation] What is meta-learning? How should it be translated?
Posted at: 南京大学小百合站 (Tue Apr 16 11:46:04 2002), internal post

luchu (luchu) wrote on Mon Apr  8 14:01:02 2002:

Heh, I'm just a newbie, but I'm in a rush to write a paper, so could the experts please help?



xlcy (all nothing) wrote on Mon Apr  8 21:25:28 2002:

Roughly it means knowledge discovery based on knowledge; I'm not sure how to translate it.


Axiao (阿肖期待涅磐中) wrote on Tue Apr  9 10:34:59 2002:

"Meta" comes from Greek, meaning "beside" or "after". As a prefix it denotes a higher or newer stage of development of the word that follows: a state that has risen above, or is more complex than, the original one.

So perhaps it could be translated as "深入学习" (in-depth learning)?



fervvac (高远) wrote on Tue Apr  9 10:46:10 2002:

An excellent study on the origins of words (prefixes in this example), :p

From what I perceive, meta-XXX is the "schema" of XXX, with XXX being one of its instances. This understanding, however, is only valid in the context of databases. So is meta-learning about the way to learn something? For example, the study of meta-learning might include how we (human beings) learn a new thing.



bravewei (bravewei) wrote on Tue Apr  9 16:04:32 2002:

I saw an article on "metacomputing", rendered as 元计算技术 (http://www.st121.com.cn/products/article/20010627-1.htm). The site is easy to reach and free from the education network. See whether the literal translation can be borrowed; as for the actual content, you'll have to work that out yourself.



eastcamel (快乐加班工) wrote on Tue Apr  9 19:29:21 2002:

That one seems related to grid computing, though, and doesn't have much to do with data mining.


lucky (乐凯) wrote on Wed Apr 10 06:38:31 2002:

I don't know how to translate it, maybe 元学习 (Yuan(2) Xue(2) Xi(2)). But I can give you some ideas about it.


Suppose you have several classifiers c1, c2, ..., ck (called base classifiers) for a learning problem; each one makes a classification decision when it sees a query instance. Meta-learning tries to learn a meta-classifier c based on c1..ck that outperforms any single base classifier.


There are several kinds of meta-learning strategies: combiner, arbiter, multi-level. For example, the combiner takes the labels produced by each base classifier, together with the real label, as a training example for the meta-learner. The meta-learner can use any learning algorithm to learn from these training examples.
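
To make the combiner strategy concrete, here is a minimal sketch in Python (scikit-learn is my choice of library; the dataset, the three base classifiers, and the logistic-regression meta-learner are all illustrative assumptions, not details from the post):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
# Hold out separate data for the meta-level, so the meta-learner never
# sees base-classifier predictions on the base classifiers' own training set.
X_base, X_meta, y_base, y_meta = train_test_split(X, y, test_size=0.5,
                                                  random_state=0)

base_learners = [DecisionTreeClassifier(random_state=0),
                 GaussianNB(),
                 KNeighborsClassifier()]
for clf in base_learners:
    clf.fit(X_base, y_base)

# Combiner: each meta-level training example is the vector of labels
# predicted by the base classifiers, paired with the real label.
Z_meta = np.column_stack([clf.predict(X_meta) for clf in base_learners])
meta_learner = LogisticRegression().fit(Z_meta, y_meta)

def combiner_predict(X_query):
    """Classify by feeding the base classifiers' labels to the meta-learner."""
    Z = np.column_stack([clf.predict(X_query) for clf in base_learners])
    return meta_learner.predict(Z)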


Stacked generalization and stacked regression are good meta-learning methods which have been verified effective in many empirical studies. The idea of stacked regression is like this: suppose there is a training instance q, and each base learner ci makes a prediction fi(q) (here we assume the classifiers predict real values, but it also works for categorical data), and we try to combine the base classifiers by weighting them. The prediction made by the meta-learner is

w1 * f1(q) + w2 * f2(q) + ... + wk * fk(q),

where w1, w2, ..., wk are the weights for the base classifiers, and they sum up to 1. So the key problem here is how to learn the weights. Stacked regression uses linear least squares to do it, under a non-negative weight assumption. Cross-validation is also important in it.
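
Here is a small sketch of that weight-learning step, assuming regressors from scikit-learn as base learners and SciPy's non-negative least squares; rescaling the NNLS solution so the weights sum to 1 is one simple reading of the constraints described above, not necessarily the exact procedure the post intends:

import numpy as np
from scipy.optimize import nnls
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_predict
from sklearn.linear_model import Ridge
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=10.0,
                       random_state=0)
base_learners = [Ridge(),
                 DecisionTreeRegressor(random_state=0),
                 KNeighborsRegressor()]

# Cross-validation keeps each column of F "honest": every fi(q) comes
# from a model that never saw q during its training.
F = np.column_stack([cross_val_predict(m, X, y, cv=5)
                     for m in base_learners])

# Linear least squares min ||F w - y|| under the non-negativity assumption,
w, _ = nnls(F, y)
# then rescaled so the weights sum to 1 (my simplification).
w = w / w.sum()

for m in base_learners:
    m.fit(X, y)  # refit the base learners on all the data

def stacked_predict(X_query):
    """w1*f1(q) + w2*f2(q) + ... + wk*fk(q)."""
    preds = np.column_stack([m.predict(X_query) for m in base_learners])
    return preds @ w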


Boosting and Bagging are other very famous meta-learning methods.
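
Bagging, for instance, fits in a few lines: train each base learner on a bootstrap resample of the training set and take a majority vote (the decision-tree base learner and every other concrete choice below are illustrative assumptions):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
rng = np.random.default_rng(0)

# Each tree is trained on a bootstrap resample (drawn with replacement).
trees = []
for _ in range(25):
    idx = rng.integers(0, len(X), size=len(X))
    trees.append(DecisionTreeClassifier().fit(X[idx], y[idx]))

def bagged_predict(X_query):
    """Majority vote over the bootstrap-trained trees (binary 0/1 labels)."""
    votes = np.stack([t.predict(X_query) for t in trees])
    return (votes.mean(axis=0) > 0.5).astype(int)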


You can check the following site on this topic, which provides links to many good papers.

http://www.ai.univie.ac.at/oefai/ml/metal/metal-bib.html




daniel (飞翔鸟) wrote on Wed Apr 10 11:52:38 2002:

元学习.

The journal Machine Learning will publish a special issue on this topic next year. In general, it means learning on learning, or learning about learning, hehe.


fervvac (高远) wrote on Wed Apr 10 12:20:52 2002:

Thanks for the essay. Together with daniel's authoritative explanation, meta-learning in this example denotes the process of learning the parameters/structure of a super-classifier given a set of classifiers.

Just some quick comments:
1. I know little about AI, but I remember one faculty member telling me that the problem of combining multiple classifiers has been extensively studied, and that such combinations work well in practice for a small number of classifiers.

2. The weighted classifiers seem simple, or a little bit rough? I expect w_k is not a constant. I doubt that a set of constant w_k will lead to optimality; one can easily construct a counter-example (e.g. two classifiers each accurate only on a different region of the input space, so no fixed weighting beats switching between them per instance).

3. A related, but most important, issue is how to evaluate the result, or, if optimality is achieved, what is the objective function?

Just some provocative questions from a layman, hoping to raise some network traffic on this board, :p.



lucky (乐凯) wrote on Wed Apr 10 15:25:35 2002:

I am not working on AI either. Let me try to answer the questions from what I know :)

1. Yes, there are many previous works on this topic, and it is still an active research problem. Combining classifiers, i.e. the stacking technique, is only one kind of meta-learning. Boosting and Bagging have also been explored quite a lot.

2. The weights are not constants. They must be learned from the base classifiers; that's why it's meta-learning.

It's also quite simple, which we often favor.

3. Optimality shouldn't be the objective. The accuracy of the meta-learner should be evaluated on some test data different from the training data.

Another link on combining multiple classifiers:

http://iris.usc.edu/Vision-Notes/bibliography/pattern566.html#Multiple%20Classifiers,%20Combining%20Classifiers,%20Combinations






daniel (飞翔鸟) wrote on Wed Apr 10 18:48:28 2002:

Note that meta-learning is not the same concept as combining classifiers or, more generally, ensembles. The term itself has some ambiguity, and the machine learning community has not reached agreement on it up to now. Although some researchers include ensembles in the scope of meta-learning, there are researchers who do not agree. Nowadays combining classifiers is more commonly regarded as a subdomain of ensemble learning. As for meta-learning, a popularly accepted reading is to learn from learning, learn for learning, or learn on learning. Roughly speaking, it means to get something out of the learning process, or to capture the nature of some specific learning paradigm, and then use that to guide future learning.

Some words written by Christophe Giraud-Carrier, a leading expert on meta-learning, may help in understanding the notion. Hope you enjoy it:

"Discovering new algorithms (or versions thereof) has occupied much of the research of the past decade with reasonable success. Despite empirical studies comparing various algorithms, however, much remains to be learned about what makes a particular algorithm work well (or not) in a particular domain. There is a need to formulate or acquire such meta-knowledge, and make consistent use of it.

"Although the term meta-learning has been ascribed different meanings by different researchers, ...... meta-learning is defined as any attempt to learn from the learning process itself. The goal is to understand how learning itself can become flexible and/or adaptable, taking into account the domain or task under study."
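
To ground that definition, here is a toy sketch of "learning from the learning process itself" in the algorithm-selection sense covered by the METAL bibliography linked earlier: describe each past dataset with a few meta-features, record which algorithm did best on it, then train a meta-level model to recommend an algorithm for a new dataset. Every concrete choice below (the meta-features, the candidate algorithms, scikit-learn itself) is my own illustrative assumption:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

candidates = {"tree": DecisionTreeClassifier(random_state=0),
              "nb": GaussianNB(),
              "knn": KNeighborsClassifier()}

def meta_features(X, y):
    """Crude dataset descriptors (illustrative): size, dimension, class balance."""
    return [len(X), X.shape[1], np.bincount(y).max() / len(y)]

# Meta-level training data: one (meta-features, best algorithm) pair
# per "past" dataset; synthetic datasets stand in for real experience.
M, best = [], []
for seed in range(20):
    X, y = make_classification(n_samples=200 + 40 * seed,
                               n_features=5 + seed % 10,
                               random_state=seed)
    scores = {name: cross_val_score(clf, X, y, cv=3).mean()
              for name, clf in candidates.items()}
    M.append(meta_features(X, y))
    best.append(max(scores, key=scores.get))

# The meta-learner maps a dataset description to a recommended algorithm,
# guiding future learning rather than doing the learning itself.
recommender = DecisionTreeClassifier(random_state=0).fit(M, best)

X_new, y_new = make_classification(n_samples=500, n_features=8, random_state=99)
print(recommender.predict([meta_features(X_new, y_new)])[0])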
    


daniel (飞翔鸟) wrote on Wed Apr 10 19:27:18 2002:

I forgot to mention that ensemble learning, such as combining classifiers, can be included in the research of meta-learning only if the emphasis is put on exploring which kinds of combining schemes are better for a specific learner or task. The design of concrete combining methods cannot be regarded as meta-learning. As for stacked generalization, it is a specific ensemble learning method in which the first-level learners are sometimes called meta-learners. But it just sounds like meta-learning, and has little relation to meta-learning.

