published papers---but not until at least a year after the cool people know all about it. Which means that the cool people have a year's head start on working with new ideas.


How do the cool people find out about a new idea? Maybe they hear about it at a conference; but much more likely, they got it through the Secret Paper Passing Network. Here's how it works. Jo Cool gets a good idea. She throws together a half-assed implementation and it sort of works, so she writes a draft paper about it. She wants to know whether the idea is any good, so she sends copies to ten friends and asks them for comments on it. They think it's cool, so as well as telling Jo what's wrong with it, they lend copies to their friends to Xerox. Their friends lend copies to their friends, and so on. Jo revises it a bunch a few months later and sends it to AAAI. Six months later, it first appears in print in a cut-down five-page version (all that the AAAI proceedings allow). Jo eventually gets around to cleaning up the program and writes a longer revised version (based on the feedback on the AAAI version) and sends it to the AI Journal. AIJ has almost two years' turn-around time, what with reviews and revisions and publication delay, so Jo's idea finally appears in journal form three years after she had it---and almost that long after the cool people first found out about it. So cool people hardly ever learn about their subfield from published journal articles; those come out too late.


You, too, can be one of the cool people. Here are some heuristics for getting connected:


There's a bunch of electronic mailing lists that discuss AI subfields like connectionism or vision. Get yourself on the ones that seem interesting.

Whenever you talk about an idea you've had with someone who knows the field, they are likely not to give an evaluation of your idea, but to say, ``Have you read X?'' Not a test question, but a suggestion about something to read that will probably be relevant. If you haven't read X, get the full reference from your interlocutor, or better yet, ask to borrow and Xerox his copy.

When you read a paper that excites you, make five copies and give them to people you think will be interested in it. They'll probably return the favor.

The lab has a number of on-going informal paper discussion groups on various subfields. These meet every week or two to discuss a paper that everyone has read.

Some people don't mind if you read their desks. That is, the papers they intend to read soon are heaped there and turn over pretty regularly. You can look over them and see if there's anything that looks interesting. Be sure to ask before doing this; some people do mind. Try people who seem friendly and connected.

Similarly, some people don't mind your browsing their filing cabinets. There are people in the lab who are into scholarship and whose cabinets are quite comprehensive. This is often a faster and more reliable way to find papers than using the school library.

Whenever you write something yourself, distribute copies of a draft of it to people who are likely to be interested. (This has a potential problem: plagiarism is rare in AI, but it does happen. You can put something like ``Please do not photocopy or quote'' on the front page as a partial prophylactic.) Most people don't read most of the papers they're given, so don't take it personally when only a few of the copies you distribute come back with comments on them. If you go through several drafts---which for a journal article you should---few readers will read more than one of them. Your advisor is expected to be an exception.

When you finish a paper, send copies to everyone you think might be interested. Don't assume they'll read it in the journal or proceedings spontaneously. Internal publication series (memos and technical reports) are even less likely to be read.

The more different people you can get connected with, the better. Try to swap papers with people from different research groups, different AI labs, different academic fields. Make yourself the bridge between two groups of interesting people working on related problems who aren't talking to each other, and suddenly reams of interesting papers will flow across your desk.

When a paper cites something that looks interesting, make a note of it. Keep a log of interesting references. Go to the library every once in a while and look the lot of them up. You can intensively work backward through a ``reference graph'' of citations when you are hot on the trail of an interesting topic. A reference graph is a web of citations: paper A cites papers B and C, B cites C and D, C cites D, and so on (a small illustrative sketch appears at the end of this list). Papers that you notice cited frequently are always worth reading. Reference graphs have weird properties. One is that often there are two groups of people working on the same topic who don't know about each other. You may find yourself close to closure on searching a graph and suddenly find your way into another whole section. This happens when there are different schools or approaches. It's very valuable to understand as many approaches as possible---often more so than understanding one approach in greater depth.

Hang out. Talk to people. Tell them what you're up to and ask what they're doing. (If you're shy about talking to other students about your ideas, say because you feel you haven't got any, then try talking to them about the really good---or unbelievably foolish---stuff you've been reading. This leads naturally into the topic of what one might do next.) There's an informal lunch group that meets in the seventh floor playroom around noon every day. People tend to work nights in our lab, and so go for dinner in loose groups. Invite yourself along.

If you interact with outsiders much---giving demos or going to conferences---get a business card. Make it easy to remember your name.

At some point you'll start going to scientific conferences. When you do, you will discover the fact that almost all the papers presented at any conference are boring or silly. (There are interesting reasons for this that aren't relevant here.) Why go to them then? To meet people in the world outside your lab. Outside people can spread the news about your work, invite you to give talks, tell you about the atmosphere and personalities at a site, introduce you to people, help you find a summer job, and so forth. How to meet people? Walk up to someone whose paper you've liked, say ``I really liked your paper'', and ask a question.

Get summer jobs away at other labs. This gives you a whole new pool of people to get connected with who probably have a different way of looking at things. One good way to get summer jobs at other labs is to ask senior grad students how. They're likely to have been places that you'd want to go and can probably help you make the right connections.
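
As promised above, here is a minimal sketch of the ``reference graph'' idea, written in Python. It is not part of the original advice: the paper names and the citations table are invented for illustration, and a real log would hold actual references. The point is only that chasing citations backward from a paper you liked, and counting how often each earlier paper turns up, makes the frequently-cited papers float to the top of your reading list.

    from collections import Counter, deque

    # Hypothetical reference graph, invented for illustration: each paper
    # maps to the papers it cites (a real log would hold actual references).
    citations = {
        "paper A": ["paper B", "paper C"],
        "paper B": ["paper C", "paper D"],
        "paper C": ["paper D"],
        "paper D": [],
    }

    def chase_references(start, citations):
        """Walk backward through the citation graph from `start`, counting
        how often each earlier paper is cited along the way."""
        counts = Counter()
        seen = {start}
        queue = deque([start])
        while queue:
            paper = queue.popleft()
            for cited in citations.get(paper, []):
                counts[cited] += 1          # frequently cited papers bubble up
                if cited not in seen:
                    seen.add(cited)
                    queue.append(cited)
        return counts.most_common()

    print(chase_references("paper A", citations))
    # [('paper C', 2), ('paper D', 2), ('paper B', 1)]

Papers with high counts are the ones the reference-graph heuristic says are always worth reading; and if you find two clusters of heavily-cited papers that never cite each other, you may have stumbled onto two schools working on the same topic without knowing about each other.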

4. Learning other fields

It used to be the case that you could do AI without knowing anything except AI, and some people still seem to do that. But increasingly, good research requires that you know a lot about several related fields. Computational feasibility by itself doesn't provide enough constraint on what intelligence is about. Other related fields give other forms of constraint, for example experimental data, which you can get from psychology. More importantly, other fields give you new tools for thinking and new ways of looking at what intelligence is about. Another reason for learning other fields is that AI does not have its own standards of research excellence, but has borrowed from other fields. Mathematics takes theorems as progress; engineering asks whether an object works reliably; psychology demands repeatable experiments; philosophy rigorous arguments; and so forth. All these criteria are sometimes applied to work in AI, and adeptness with them is valuable in evaluating other people's work and in deepening and defending your own.


Over the course of the six or so years it takes to get a PhD at MIT, you can get a really solid grounding in one or two non-AI fields, read widely in several more, and have at least some understanding of the lot of them. Here are some ways to learn about a field you don't know much about:


Take a graduate course. This is the solidest approach, but often not an efficient way to go about it.

Read a textbook. Not a bad approach, but textbooks are usually out of date, and generally have a high ratio of words to content.

Find out what the best journal in the field is, maybe by talking to someone who knows about it. Then skim the last few years' worth and follow the reference trees. This is usually the fastest way to get a feel of what is happening, but can give you a somewhat warped view.

Find out who's most famous in the field and read their books. 

Hang out with grad students in the field. 

Go to talks. You can find announcements for them on departmental bulletin boards.

Check out departments other than MIT's. MIT will give you a very skewed view of, for example, linguistics or psychology. Compare the Harvard course catalog. Drop by the graduate office over there, read the bulletin boards, pick up any free literature.

Now for the subjects related to AI you should know about.


Computer science is the technology we work with. The introductory graduate courses you are required to take will almost certainly not give you an adequate understanding of it, so you'll have to learn a fair amount by reading beyond them. All the areas of computer science---theory, architectures, systems, languages, etc.---are relevant.

Mathematics is probably the next most important thing to know. It's critical to work in vision and robotics; for central-systems work it usually isn't directly relevant, but it teaches you useful ways of thinking. You need to be able to read theorems, and an ability to prove them will impress most people in the field. Very few people can learn math on their own; you need a gun at your head in the form of a course, and you need to do the problem sets, so being a listener is not enough. Take as much math as you can early, while you still can; other fields are more easily picked up later.

Computer science is grounded in discrete mathematics: algebra, graph theory, and the like. Logic is very important if you are going to work on reasoning. It's not used that much at MIT, but at Stanford and elsewhere it is the dominant way of thinking about the mind, so you should learn enough of it that you can make and defend an opinion for yourself. One or two graduate courses in the MIT math department are probably enough. For work in perception and robotics, you need continuous as well as discrete math. A solid background in analysis, differential geometry and topology will provide often-needed skills. Some statistics and probability is just generally useful.


· Cognitive psychology mostly shares a worldview with AI, but practitioners have rather different goals and do experiments instead of writing programs. Everyone needs to know something about this stuff. Molly Potter teaches a good graduate intro course at MIT.


· Developmental psychology is vital if you are going to do learning work. It's also more generally useful, in that it gives you some idea about which things should be hard and easy for a human-level intelligence to do. It also suggests models for cognitive architecture. For example, work on child language acquisition puts substantial constraints on linguistic processing theories. Susan Carey teaches a good graduate intro course at MIT.


· ``Softer'' sorts of psychology like psychoanalysis and social psychology have affected AI less, but have significant potential. They give you very different ways of thinking about what people are. Social ``sciences'' like sociology and anthropology can serve a similar role; it's useful to have a lot of perspectives. You're on your own for learning this stuff. Unfortunately, it's hard to sort out what's good from bad in these fields without a connection to a competent insider. Check out Harvard: it's easy for MIT students to cross-register for Harvard classes.


· Neuroscience tells us about human computational hardware. With the recent rise of computational neuroscience and connectionism, it's had a lot of influence on AI. MIT's Brain and Behavioral Sciences department offers good courses on vision (Hildreth, Poggio, Richards, Ullman), motor control (Hollerbach, Bizzi), and general neuroscience (9.015, taught by a team of experts).


· Linguistics is vital if you are going to do natural language work. Besides that, it exposes a lot of constraint on cognition in general. Linguistics at MIT is dominated by the Chomsky school. You may or may not find this to your liking. Check out George Lakoff's recent book Women, Fire, and Dangerous Things as an example of an alternative research program.


· Engineering, especially electrical engineering, has been taken as a domain by a lot of AI research, especially at MIT. No accident; our lab puts a lot of stock in building programs that clearly do something, like analyzing a circuit. Knowing EE is also useful when it comes time to build a custom chip or debug the power supply on your Lisp machine.


· Physics can be a powerful influence for people interested in perception and robotics.


· Philosophy is the hidden framework in which all AI is done. Most work in AI takes implicit philosophical positions without knowing it. It's better to know what your positions are. Learning philosophy also teaches you to make and follow certain sorts of arguments that are used in a lot of AI papers. Philosophy can be divided up along at least two orthogonal axes. Philosophy is usually philosophy of something; philosophy of mind and language are most relevant to AI. Then there are schools. Very broadly, there are two very different superschools: analytic and Continental philosophy. Analytic philosophy of mind for the most part shares a worldview with most people in AI. Continental philosophy has a very different way of seeing which takes some getting used to. It has been used by Dreyfus to argue that AI is impossible. More recently, a few researchers have seen it as compatible with AI and as providing an alternative approach to the problem. Philosophy at MIT is of the analytical sort, and of a school that has been heavily influenced by Chomsky's work in linguistics.


This all seems like a lot to know about, and it is. There's a trap here: thinking ``if only I knew more X, this problem would be easy,'' for all X. There's always more to know that could be relevant. Eventually you have to sit down and solve the problem.


5. Notebooks
