Employee training system. First make sure that VC6.0 or later is installed on your machine. To build the executable, open the .dsw file (the system opens it with VC by default), choose Set Active Configuration from the Build menu, set the Project Configuration to Win32 Release, and then build the project to obtain the executable. To connect the database, follow the method in Chapter 2: add the Microsoft Access database train.mdb as an ODBC data source and set the data source name to train; only then can the application connect to the database and access it normally. In addition, if you write your own .chm help file, simply place it in the same directory as the executable. To modify the source code, open the corresponding .h and .cpp files. The naming convention used in this example is roughly: DIALOG_***** for management dialog resources, *****Info for data-entry window resources, and *****Set for the classes corresponding to result sets.
Upload time: 2014-01-03
Upload user: luopoguixiong
In this demo, I use the EM algorithm with a Rauch-Tung-Striebel smoother and an M step, which I've recently derived, to train a two-layer perceptron, so as to classify medical data (kindly provided by Steve Roberts and Will Penny from EE, Imperial College). The data and simulations are described in: Nando de Freitas, Mahesan Niranjan and Andrew Gee, "Nonlinear State Space Estimation with Neural Networks and the EM algorithm". After downloading the file, type "tar -xf EMdemo.tar" to uncompress it. This creates the directory EMdemo containing the required m-files. Go to this directory, load matlab5 and type "EMtremor". The figures will then show you the simulation results, including ROC curves, likelihood plots, decision boundaries with error bars, etc. WARNING: Do make sure that you monitor the log-likelihood and check that it is increasing. Due to numerical errors, it might show glitches for some data sets.
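As an illustration of that warning, the following MATLAB sketch checks that a per-iteration log-likelihood trace is non-decreasing; the variable name loglik is a placeholder for illustration, not something the EMdemo scripts are guaranteed to define:
% 'loglik' is a hypothetical vector of per-iteration log-likelihood values.
drops = find(diff(loglik) < 0);              % iterations where the log-likelihood decreased
if isempty(drops)
    disp('Log-likelihood is non-decreasing, as expected for EM.');
else
    fprintf('Log-likelihood dropped at iteration(s): %s\n', num2str(drops(:)'));
end
plot(loglik); xlabel('EM iteration'); ylabel('log-likelihood');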
Tags: Rauch-Tung-Striebel algorithm smoother which
Upload time: 2016-04-15
Upload user: zhenyushaw
JaNet: Java Neural Network Toolkit. Summary: A well-documented toolkit for designing and training neural networks, and a Java library for inclusion in third-party programs. Description: The jaNet package is a Java neural network toolkit which you can use to design, test, train and optimize a neural network for your own application. You can then include your saved network in your program using the jaNet.backprop package. The extensive documentation is only in French for the moment, but an English translation is planned. The Java source code is released under the GPL, and can be compiled with the JDK, Symantec Cafe or MS Visual J
Tags: documented designing training Network
Upload time: 2016-04-15
Upload user: zhanditian
A basic introduction to neural networks, covering the network architectures and notation defined by the toolbox and the basic functions used to build a neural network, such as new, init, adapt and train. Taking the backpropagation network as an example, it explains the principles of backpropagation networks and the basic procedure for applying them.
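As a minimal sketch of how those functions fit together (illustrative data and layer sizes, using the older newff-style toolbox syntax rather than any code from this upload):
P = [0 0 1 1; 0 1 0 1];                         % training inputs, one column per sample
T = [0 1 1 0];                                  % training targets (XOR)
net = newff(minmax(P), [4 1], {'tansig','logsig'}, 'traingd');   % create a two-layer network
net = init(net);                                % reinitialize weights and biases
net.trainParam.epochs = 500;                    % training options
net.trainParam.goal   = 1e-3;
net = train(net, P, T);                         % batch backpropagation training
Y = sim(net, P)                                 % outputs of the trained network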
Tags: neural network
Upload time: 2013-12-16
Upload user: 奇奇奔奔
This is pre-read ORL and YALE face database data. After loading it with load, the variable train holds the training samples and the variable test holds the test samples.
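A minimal sketch of loading such a file, assuming a placeholder file name faces.mat (the actual file name in this upload is not stated):
S = load('faces.mat');          % placeholder name; loads the saved variables into a struct
train_samples = S.train;        % training samples
test_samples  = S.test;         % test samples
size(train_samples)
size(test_samples)
% Note: loading a variable named 'train' directly into the workspace would shadow
% the Neural Network Toolbox function train, which is why the struct form is used here.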
Upload time: 2014-01-04
Upload user: zhangyigenius
Face Recognition Library ======================== An advanced face recognition DLL with two functions: train and Recognize. It uses a neural-network back-propagation algorithm with additional AI tools for image optimization. The library works well even with low-resolution webcam images and requires the user to align to a mirror frame on screen. Complete source code with a video-capture and feature-extraction kit for registered users. Please register here for only $299 with source code: http://www.research-lab.com/facerecognitionorder.htm (c) www.research-lab.com
Tags: Recognition recognition Advanced Library
Upload time: 2017-04-25
Upload user: 784533221
A practical guide to the MATLAB neural network toolbox. Chapter 1 gives a basic introduction to neural networks. Chapter 2 covers the network architectures and notation defined by the toolbox and the basic functions used to build a neural network, such as new, init, adapt and train. Chapter 3 takes the backpropagation network as an example to explain the principles of backpropagation networks and the basic procedure for applying them.
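To complement batch training with train, here is a minimal sketch of incremental, sample-by-sample learning with adapt on a linear layer created with newlin (illustrative values only; older toolbox syntax, not code from the guide itself):
P = {1 2 3 4};                       % sequential inputs (cell array = one time step per cell)
T = {2 4 6 8};                       % sequential targets (y = 2x)
net = newlin([0 5], 1, 0, 0.01);     % one linear neuron, input range [0 5], learning rate 0.01
[net, Y, E] = adapt(net, P, T);      % weights are updated after each sample in turn
Y                                    % network outputs produced during adaptation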
Upload time: 2017-05-07
Upload user: zhyiroy
An artificial neural network classifier, implemented under VS2005; the training and test data are read in from train.txt and test.txt.
Upload time: 2017-05-21
Upload user: 洛木卓
The train method for SVMs, which users doing SVM classification can call directly.
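The listing does not show the exact calling convention, but as an illustration, a typical MATLAB SVM training call via the LIBSVM interface looks like the sketch below (whether this upload wraps LIBSVM is an assumption):
% Toy two-class problem: labels as a column vector, one feature row per sample.
labels   = [ones(20,1); -ones(20,1)];
features = [randn(20,2) + 1; randn(20,2) - 1];
model = svmtrain(labels, features, '-s 0 -t 2 -c 1 -g 0.5');    % C-SVC with an RBF kernel (LIBSVM options)
[predicted, accuracy, ~] = svmpredict(labels, features, model); % evaluate on the training data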
Tags: svm
Upload time: 2015-05-05
Upload user: saberxun
% Generate the training sample set
clear all; clc;
P=[110 0.807 240 0.2 15 1 18 2 1.5;
   110 2.865 240 0.1 15 2 12 1 2;
   110 2.59  240 0.1 12 4 24 1 1.5;
   220 0.6   240 0.3 12 3 18 2 1;
   220 3     240 0.3 25 3 21 1 1.5;
   110 1.562 240 0.3 15 3 18 1 1.5;
   110 0.547 240 0.3 15 1 9  2 1.5];
%  0 1.318 300 0.1 15 2 18 1 2];      % extra row fragment, left commented out
T=[54248  162787 168380 314797;
   28614  63958  69637  82898;
   86002  402710 644415 328084;
   230802 445102 362823 335913;
   60257  127892 76753  73541;
   34615  93532  80762  110049;
   56783  172907 164548 144040];
%  @907 117437 120368 130179];        % extra, partially garbled row fragment, left commented out
m=max(max(P));
n=max(max(T));
P=P'/m;                               % normalize inputs and targets to [0,1]
T=T'/n;
%-------------------------------------------------------------------------%
pr(1:9,1)=0;                          % range matrix for the 9 input components
pr(1:9,2)=1;
bpnet=newff(pr,[12 4],{'logsig','logsig'},'traingdx','learngdm');
% Build a BP network with 12 hidden neurons and 4 output neurons
% transferFcn 'logsig': sigmoid transfer function in the hidden layer
% transferFcn 'logsig': sigmoid transfer function in the output layer
% trainFcn 'traingdx': gradient-descent backpropagation with momentum and adaptive learning rate
% learnFcn 'learngdm': gradient-descent weight/bias learning with momentum
bpnet.trainParam.epochs=1000;         % maximum number of training epochs (1000)
bpnet.trainParam.goal=0.001;          % stop when the error goal 0.001 is reached
bpnet.trainParam.show=10;             % display training progress every 10 epochs
bpnet.trainParam.lr=0.05;             % learning rate 0.05
bpnet=train(bpnet,P,T);
%-------------------------------------------------------------------------
p=[110 1.318 300 0.1 15 2 18 1 2];    % new sample to evaluate
p=p'/m;                               % normalize with the same factor as the training inputs
r=sim(bpnet,p);
R=r'*n;                               % rescale the network output back to the original range
display(R);
Upload time: 2016-05-28
Upload user: shanqiu