From: yaomc (白头翁&山东大汉), Board: DataMining
Title: Introduction to artificial neural networks in SAS
Site: Nanjing University Lily BBS (Mon Mar 25 15:28:30 2002), local post

This excerpt comes from the help documentation for the Neural Network node in SAS Enterprise Miner.

Introduction 


Artificial neural networks were originally developed by researchers 
who were trying to mimic the neurophysiology of the human brain. By 
combining many simple computing elements (neurons or units) into a 
highly interconnected system, these researchers hoped to produce complex
 phenomena such as intelligence. In recent years, neural network 
researchers have incorporated methods from statistics and numerical 
analysis into their networks. While there is considerable controversy 
over whether artificial neural networks are really intelligent, there is
 no doubt that they have developed into very useful statistical models.
 More specifically, feedforward neural networks are a class of 
flexible nonlinear regression, discriminant, and data reduction models.
 By detecting complex nonlinear relationships in data, neural networks 
can help to make predictions about real-world problems. 
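As a sketch of what "flexible nonlinear regression" means here, the following fragment (plain Python, not SAS code; all weights and names are invented for illustration) computes the output of a one-hidden-layer feedforward network: a weighted sum of nonlinear (tanh) transformations of weighted sums of the inputs.

```python
import math

def feedforward(x, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a one-hidden-layer feedforward network:
    y = w_out . tanh(W_hidden x + b_hidden) + b_out."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

# Tiny illustration: 2 inputs, 3 hidden units, 1 output (made-up weights).
w_hidden = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
b_hidden = [0.0, 0.1, -0.1]
w_out = [1.0, -0.5, 0.25]
b_out = 0.2
y = feedforward([1.0, 2.0], w_hidden, b_hidden, w_out, b_out)
```

Because the hidden-layer weights enter through a nonlinear squashing function, adjusting them lets the network approximate a wide range of input-output relationships, which is the sense in which such models detect complex nonlinear relationships in data.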

Neural networks are especially useful for prediction problems where:

- no mathematical formula is known that relates inputs to outputs;

- prediction is more important than explanation;

- there is a large amount of training data.


Common applications of neural networks include credit risk assessment, 
direct marketing, and sales prediction. 

The Neural Network node provides a variety of feedforward networks 
that are commonly called backpropagation or backprop networks. This 
terminology causes much confusion. Strictly speaking, backpropagation 
refers to the method for computing the error gradient for a 
feedforward network, a straightforward application of the chain rule 
of elementary calculus. By extension, backprop refers to various 
training methods that use backpropagation to compute the gradient. By 
further extension, a backprop network is a feedforward network trained 
by any of various gradient-descent techniques. Standard backprop is a 
euphemism for the generalized delta rule, the training technique that 
was popularized by Rumelhart, Hinton, and Williams in 1986 and which 
remains the most widely used supervised training method for 
feedforward neural nets. Standard backprop is also one of the most 
difficult to use, tedious, and unreliable training methods. Unlike the 
other training methods in the Neural Network node, standard backprop 
comes in two varieties: 


- Batch backprop, like conventional optimization techniques, reads the entire data set, updates the weights, reads the entire data set, updates the weights, and so on.

- Incremental backprop reads one case, updates the weights, reads one case, updates the weights, and so on.


Batch backprop is one of the slowest training methods. Although the 
Neural Network node provides an option for batch backprop, it is 
recommended that you never use it for serious work. Incremental backprop
 can be useful for very large, redundant data sets, if you are skilled 
at setting the learning rate and momentum appropriately. 
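The difference between the two schedules can be sketched in code. This illustrative example (plain Python, not SAS code; the toy data, learning rate, and momentum values are invented) fits a single weight by gradient descent on squared error, using the learning-rate and momentum parameters that standard backprop requires:

```python
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, target) pairs, roughly y = 2x
lr, momentum = 0.01, 0.9

def grad(w, x, t):
    # derivative of 0.5 * (w*x - t)**2 with respect to w, for one case
    return (w * x - t) * x

# Batch backprop: one weight update per full pass over the data.
w, velocity = 0.0, 0.0
for epoch in range(200):
    g = sum(grad(w, x, t) for x, t in data) / len(data)
    velocity = momentum * velocity - lr * g
    w += velocity
w_batch = w

# Incremental backprop: one weight update per case.
w, velocity = 0.0, 0.0
for epoch in range(200):
    for x, t in data:
        velocity = momentum * velocity - lr * grad(w, x, t)
        w += velocity
w_incremental = w
```

With a well-chosen learning rate, both schedules approach the least-squares weight (about 2.04 here), but the incremental version performs many cheap updates per pass, which is why it can pay off on very large, redundant data sets; badly chosen learning rate or momentum makes either one diverge or crawl, which is the tuning burden the text describes.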

Fortunately, there is no need to suffer through the slow convergence and
 the tedious tuning of standard backprop. Much of the neural network 
research literature is devoted to attempts to speed up backprop. Most of
 these methods are inconsequential; two that are effective are Quickprop
 and RPROP, both of which are available in the Neural Network node. In 
addition, the Neural Network node provides a variety of conventional 
methods for nonlinear optimization that have been developed by numerical
 analysts over the past several centuries and that are usually faster 
and more reliable than the algorithms from the neural network 
literature. 
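For a sense of why a method like RPROP is easier to tune than standard backprop, here is a minimal sketch of the Rprop- variant (illustrative only, not the SAS implementation; the function name is invented, and the 1.2/0.5 adaptation factors are the commonly cited defaults): each weight keeps its own step size, which grows while the gradient's sign is stable and shrinks when it flips, so no global learning rate needs hand-tuning.

```python
def rprop_minimize(grad, w, steps=100, delta=0.1,
                   eta_plus=1.2, eta_minus=0.5,
                   delta_max=50.0, delta_min=1e-6):
    # Rprop- sketch: the step size adapts from the SIGN of the gradient only,
    # so the raw gradient magnitude never enters the update.
    prev_sign = 0
    for _ in range(steps):
        g = grad(w)
        sign = (g > 0) - (g < 0)
        if sign * prev_sign > 0:       # same sign as last step: accelerate
            delta = min(delta * eta_plus, delta_max)
        elif sign * prev_sign < 0:     # sign flipped: overshot the minimum, back off
            delta = max(delta * eta_minus, delta_min)
        w -= sign * delta              # step by the adapted size, against the gradient
        prev_sign = sign
    return w

# Minimize (w - 3)^2 starting far away; only gradient signs are consulted.
w = rprop_minimize(lambda w: 2 * (w - 3), -5.0)
```

The step size accelerates across the flat approach, then halves on each overshoot, homing in on the minimum without any user-supplied learning rate or momentum.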

Up until the early 1990s, neural networks were often viewed as 
alternatives to statistical methods. Some researchers made outlandish 
claims that neural networks could be used to analyze data with no 
expertise required on the part of the analyst. These unjustifiable 
claims, combined with the unreliability of early algorithms such as 
standard backprop, led to a backlash in which many people, especially 
statisticians, dismissed neural networks as entirely worthless for 
data analysis. But in recent years, it has been widely recognized that 
many kinds of neural networks are statistical methods, and that when 
neural networks are trained via reliable methods such as conventional 
optimization techniques or Bayesian learning, the results are just as 
valid as those obtained by many nonlinear or nonparametric statistical 
methods. 

Neural networks, like other statistical methods, cannot magically create information out of nothing: the rule "garbage in, garbage out" still applies. The predictive ability of a neural network depends in part on
the quality of the training data. It is also important for the analyst 
to have some knowledge of the subject matter, especially for selecting 
inputs and choosing an appropriate error function. Experienced neural 
network users typically try several architectures to determine the 
best network for a specific data set. The design process and the 
training process are both iterative. 

--

Welcome to http://datamining.bbs.lilybbs.net.

※ Source: Nanjing University Lily BBS bbs.nju.edu.cn [FROM: 202.204.36.15]
