
W2

  • Batch version of the back-propagation algorithm

    Batch version of the back-propagation algorithm. Given a set of corresponding input-output pairs and an initial network, [W1,W2,critvec,iter]=batbp(NetDef,W1,W2,PHI,Y,trparms) trains the network with backpropagation. The activation functions must be either linear or tanh. The network architecture is defined by the matrix NetDef, which consists of two rows: the first row specifies the hidden layer and the second specifies the output layer.

    Tags: back-propagation corresponding input-output algorithm

    Uploaded: 2016-12-26

    Uploader: exxxds
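The toolbox's batbp itself is not reproduced here; as a rough illustration of what batch back-propagation does for a two-layer tanh/linear network, here is a minimal NumPy sketch. The names W1, W2, PHI, Y and the criterion vector follow the entry above; the learning rate, bias handling, and everything else are assumptions of this sketch, not the toolbox's implementation.

```python
import numpy as np

def batbp_sketch(W1, W2, PHI, Y, eta=0.01, iters=500):
    """Minimal batch back-propagation for a two-layer network:
    tanh hidden layer (weights W1), linear output layer (weights W2).
    PHI: inputs (n_in x N), Y: targets (n_out x N).
    Biases are handled by appending a row of ones. Illustrative only."""
    critvec = []
    N = PHI.shape[1]
    X = np.vstack([PHI, np.ones((1, N))])        # augmented input
    for _ in range(iters):
        H = np.tanh(W1 @ X)                      # hidden-layer activations
        Ha = np.vstack([H, np.ones((1, N))])     # augmented hidden output
        Yhat = W2 @ Ha                           # linear output layer
        E = Y - Yhat                             # batch error
        critvec.append(0.5 * np.sum(E**2) / N)   # SSE training criterion
        dW2 = E @ Ha.T / N                       # output-layer gradient step
        dH = (W2[:, :-1].T @ E) * (1 - H**2)     # back-prop through tanh
        dW1 = dH @ X.T / N
        W2 = W2 + eta * dW2                      # plain gradient descent
        W1 = W1 + eta * dW1
    return W1, W2, critvec
```

Calling it on a small input-output set should produce a decreasing criterion vector, mirroring the critvec returned by batbp.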

  • This function calculates Akaike's final prediction error estimate of the average generalization error

    This function calculates Akaike's final prediction error estimate of the average generalization error. [FPE,deff,varest,H] = fpe(NetDef,W1,W2,PHI,Y,trparms) produces the final prediction error estimate (fpe), the effective number of weights in the network if the network has been trained with weight decay, an estimate of the noise variance, and the Gauss-Newton Hessian.

    Tags: generalization calculates prediction function

    Uploaded: 2014-12-03

    Uploader: maizezhen
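The FPE statistic itself is simple to compute. A sketch of the textbook formula FPE = (N + d)/(N - d) · V, where N is the number of samples, d the (effective) number of weights, and V the training criterion; the 1/(2N) criterion scaling is an assumption here and may differ from the toolbox's fpe():

```python
import numpy as np

def fpe_estimate(errors, n_weights):
    """Akaike's final prediction error:
        FPE = (N + d) / (N - d) * V,
    with V = SSE / (2N) (an assumed criterion scaling), N the number
    of prediction errors, and d the effective number of weights."""
    errors = np.asarray(errors, dtype=float)
    N = errors.size
    V = float(np.sum(errors**2)) / (2 * N)   # training criterion
    return (N + n_weights) / (N - n_weights) * V
```

The (N + d)/(N - d) factor inflates the training error to account for the optimism of fitting d parameters on N samples.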

  • Train a two-layer neural network with the Levenberg-Marquardt method

    Train a two-layer neural network with the Levenberg-Marquardt method. If desired, it is possible to use regularization by weight decay. Pruned (i.e., not fully connected) networks can also be trained. Given a set of corresponding input-output pairs and an initial network, [W1,W2,critvec,iteration,lambda]=marq(NetDef,W1,W2,PHI,Y,trparms) trains the network with the Levenberg-Marquardt method. The activation functions can be either linear or tanh. The network architecture is defined by the matrix NetDef, which has two rows: the first row specifies the hidden layer and the second row specifies the output layer.

    Tags: Levenberg-Marquardt desired network neural

    Uploaded: 2016-12-26

    Uploader: jcljkh
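The core of any Levenberg-Marquardt trainer, including the lambda returned by marq above, is the damped Gauss-Newton step with an adaptive damping factor. A generic sketch (not the toolbox's marq; the accept/reject factors of 0.5 and 2 are assumptions), for a residual Jacobian J and residual vector e:

```python
import numpy as np

def marq_step(J, e, w, loss_fn, lam):
    """One Levenberg-Marquardt update: solve (J'J + lam*I) dw = J'e.
    If the step lowers the loss, accept it and decrease lam (more
    Gauss-Newton-like); otherwise keep w and increase lam (more like
    small-step gradient descent). Generic illustrative sketch."""
    A = J.T @ J + lam * np.eye(J.shape[1])   # damped GN normal matrix
    dw = np.linalg.solve(A, J.T @ e)         # proposed weight step
    w_new = w + dw
    if loss_fn(w_new) < loss_fn(w):
        return w_new, lam * 0.5              # accept, trust the model more
    return w, lam * 2.0                      # reject, increase damping
```

For small lam this reduces to a Gauss-Newton step; for large lam it approaches gradient descent with step 1/lam, which is what makes the method robust far from a minimum.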

  • This function calculates Akaike's final prediction error estimate of the average generalization error (for NNARX/NNOE/NNARMAX models)

    This function calculates Akaike's final prediction error estimate of the average generalization error for network models generated by NNARX, NNOE, NNARMAX1+2, or their recursive counterparts. [FPE,deff,varest,H] = nnfpe(method,NetDef,W1,W2,U,Y,NN,trparms,skip,Chat) produces the final prediction error estimate (fpe), the effective number of weights in the network if it has been trained with weight decay, an estimate of the noise variance, and the Gauss-Newton Hessian.

    Tags: generalization calculates prediction function

    Uploaded: 2016-12-26

    Uploader: 脚趾头
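The "effective number of weights" (deff) that nnfpe returns for weight-decay-trained networks can be illustrated from the eigenvalues of the Gauss-Newton Hessian. The formula below is MacKay's common definition, deff = Σ λᵢ/(λᵢ + α); the toolbox's exact expression may differ, so treat this as an assumption-laden sketch:

```python
import numpy as np

def effective_weights(J, alpha):
    """Effective number of weights under weight decay alpha, from the
    Gauss-Newton Hessian R = J'J:
        deff = sum_i  lam_i / (lam_i + alpha)
    (MacKay's definition; assumed here, not necessarily the toolbox's).
    Directions with lam_i >> alpha count as ~1 full parameter, while
    directions dominated by the decay term count as ~0."""
    R = J.T @ J                        # Gauss-Newton Hessian approximation
    lam = np.linalg.eigvalsh(R)        # its eigenvalues (all >= 0)
    return float(np.sum(lam / (lam + alpha)))
```

With alpha → 0 and a full-rank Hessian, deff approaches the raw weight count, recovering the ordinary FPE.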

  • A simplified knapsack problem: fill a knapsack to exactly weight T

    A simplified knapsack problem: a knapsack can hold a total weight of T, and there are n items with weights W1, W2, …, Wn. Can some of these n items be chosen and placed in the knapsack so that their total weight is exactly T? If solutions exist, output all of them; otherwise report that there is no solution.

    Tags: knapsack problem

    Uploaded: 2017-01-16

    Uploader: tianyi223
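This exact-sum variant is a subset-sum search; since all solutions are required, a backtracking enumeration (rather than a yes/no dynamic program) fits the problem statement. A minimal sketch, assuming positive integer weights:

```python
def exact_knapsack(weights, T):
    """Enumerate every subset of items whose weights sum exactly to T.
    Returns a list of index tuples; an empty list means 'no solution'.
    Backtracking: at each item, try taking it, then try skipping it."""
    solutions = []

    def search(i, chosen, remaining):
        if remaining == 0:                 # exact fit: record this subset
            solutions.append(tuple(chosen))
            return
        if i == len(weights) or remaining < 0:
            return                         # dead end: overweight or no items left
        chosen.append(i)                   # branch 1: take item i
        search(i + 1, chosen, remaining - weights[i])
        chosen.pop()                       # branch 2: skip item i
        search(i + 1, chosen, remaining)

    search(0, [], T)
    return solutions
```

For weights (1, 2, 3, 4) and T = 5 this reports both subsets {W1, W4} and {W2, W3}, as the problem's "give all solutions" requirement demands.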

  • A traveler has a knapsack that can carry at most m kilograms

    A traveler has a knapsack that can carry at most m kilograms. There are n items whose weights are W1, W2, …, Wn and whose values are C1, C2, …, Cn. With only one of each item available, find the maximum total value the traveler can obtain.

    Tags: travel

    Uploaded: 2017-01-22

    Uploader: 奇奇奔奔
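This is the classic 0/1 knapsack problem, solved by dynamic programming over the capacity. A minimal sketch, assuming integer weights:

```python
def knapsack_01(m, weights, values):
    """0/1 knapsack: capacity m kg, one of each item; returns the
    maximum total value. best[c] = best value achievable with
    capacity c. Iterating c downward ensures each item is used
    at most once. O(n*m) time, O(m) space."""
    best = [0] * (m + 1)
    for w, v in zip(weights, values):
        for c in range(m, w - 1, -1):        # descending capacities
            best[c] = max(best[c], best[c - w] + v)
    return best[m]
```

The descending inner loop is the whole trick: scanning capacities upward instead would let the same item be counted repeatedly, turning this into the unbounded knapsack.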

  • A Huffman tree, also called an optimal binary tree

    A Huffman tree, also called an optimal binary tree, is the binary tree with the minimum weighted path length. The weighted path length of a tree is the sum over all leaf nodes of the leaf's weight multiplied by its path length to the root (with the root at level 0, a leaf's path length to the root equals its level number). It is written WPL = W1*L1 + W2*L2 + W3*L3 + … + Wn*Ln, where the n weights Wi (i = 1, 2, …, n) form a binary tree with n leaf nodes and the corresponding leaf path lengths are Li (i = 1, 2, …, n). It can be proved that the Huffman tree has the minimum WPL.

    Tags: binary tree

    Uploaded: 2017-06-08

    Uploader: wang5829
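The WPL of the Huffman tree can be computed without building the tree explicitly: each greedy merge of the two smallest subtree weights pushes every leaf below them one level deeper, so adding the merged sum at each step accumulates exactly Σ Wi*Li. A minimal sketch using a binary heap:

```python
import heapq

def huffman_wpl(weights):
    """Weighted path length WPL = W1*L1 + ... + Wn*Ln of the Huffman
    tree built on the given leaf weights. Each merge of the two
    smallest subtree weights adds their sum to the WPL, because every
    leaf in the merged subtrees gains one level of depth."""
    heap = list(weights)
    heapq.heapify(heap)
    wpl = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)       # two smallest subtree weights
        b = heapq.heappop(heap)
        wpl += a + b                  # all leaves below get one level deeper
        heapq.heappush(heap, a + b)   # merged subtree rejoins the pool
    return wpl
```

For weights (1, 2, 3, 4) the leaves end up at depths 3, 3, 2, 1, giving WPL = 1*3 + 2*3 + 3*2 + 4*1 = 19, which matches the merge-sum accumulation.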

  • Using a CMAC network to identify a 0.1572 Hz harmonic

    Use a CMAC network to identify a harmonic with frequency 0.1572 Hz, choosing A* = 4 association units, so that the output weight is Wa = W1 + W2 + W3 + W4.

    Tags: 0.1572 CMAC Hz network

    Uploaded: 2014-02-16

    Uploader: cazjing
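The setup above (A* = 4 active cells, output Wa = W1 + W2 + W3 + W4) can be sketched as a minimal one-dimensional CMAC. The cell count, learning rate, and training schedule below are assumptions of this illustration, not taken from the uploaded code:

```python
import numpy as np

def train_cmac(t, y, n_cells=100, a_star=4, eta=0.5, epochs=50):
    """Minimal 1-D CMAC: quantize the input onto n_cells, activate
    a_star consecutive association cells, and output the sum of their
    weights (with a_star = 4, Wa = W1 + W2 + W3 + W4). The LMS update
    spreads each error equally over the active cells."""
    w = np.zeros(n_cells + a_star)                  # association weights
    lo, hi = t.min(), t.max()
    idx = ((t - lo) / (hi - lo) * (n_cells - 1)).astype(int)
    for _ in range(epochs):
        for i, target in zip(idx, y):
            cells = slice(i, i + a_star)            # 4 overlapping cells
            err = target - w[cells].sum()           # Wa = sum of 4 weights
            w[cells] += eta * err / a_star          # distribute correction
    pred = np.array([w[i:i + a_star].sum() for i in idx])
    return w, pred
```

Training it on y(t) = sin(2π · 0.1572 · t) over one period reproduces the harmonic closely at the training points; the overlap of neighboring receptive fields is what gives the CMAC its local generalization.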

  • Batch perceptron algorithm

    MATLAB code for the batch perceptron algorithm:

        w1 = [1,0.1,1.1;  1,6.8,7.1;   1,-3.5,-4.1; 1,2.0,2.7;  1,4.1,2.8;
              1,3.1,5.0;  1,-0.8,-1.3; 1,0.9,1.2;   1,5.0,6.4;  1,3.9,4.0];
        W2 = [1,7.1,4.2;  1,-1.4,-4.3; 1,4.5,0.0;   1,6.3,1.6;  1,4.2,1.9;
              1,1.4,-3.2; 1,2.4,-4.0;  1,2.5,-6.1;  1,8.4,3.7;  1,4.1,-2.2];
        w3 = [1,-3.0,-2.9; 1,0.5,8.7;  1,2.9,2.1;   1,-0.1,5.2; 1,-4.0,2.2;
              1,-1.3,3.7; 1,-3.4,6.2;  1,-4.1,3.4;  1,-5.1,1.6; 1,1.9,5.1];
        figure;
        plot(w3(:,2), w3(:,3), 'ro');
        hold on;
        plot(W2(:,2), W2(:,3), 'b+');
        W = [W2; -w3];                  % augmented, sign-normalized samples
        a = [0,0,0];
        k = 0;                          % step counter
        y = zeros(1, size(W,1));        % margins of the samples
        while any(y <= 0)
            k = k + 1;
            y = a * transpose(W);       % y(i) <= 0 marks a misclassified sample
            a = a + sum(W(find(y <= 0), :), 1);   % batch update of a
            if k >= 250
                break
            end
        end
        if k < 250
            disp(['a = ', num2str(a)])
            disp(['k = ', num2str(k)])
        else
            disp('No convergence within 250 steps; terminating.')
        end
        % Decision boundary: x2 = -a(2)*x1/a(3) - a(1)/a(3)
        xmin = min(min(w3(:,2)), min(W2(:,2)));   % bounds of the two plotted
        xmax = max(max(w3(:,2)), max(W2(:,2)));   % classes (original used the unplotted w1)
        x = xmin-1 : xmax+1;
        y = -a(2)*x/a(3) - a(1)/a(3);
        plot(x, y)

    Tags: batch processing matlab algorithm

    Uploaded: 2016-11-07

    Uploader: a1241314660

  • DIP SSOP QFN TQFP TO SOP SOT common-chip 2D/3D Altium Designer (AD19) footprint libraries

    Altium Designer (AD19) footprint and 3D library collection for common DIP, SSOP, QFN, TQFP, TO, SOP, and SOT chip packages; 33 library files in total. PCB Library: .PcbLib. Date: 2021/1/4. Time: 17:10:26. Component Count: 200. Footprints included:

    DIP4, DIP6, DIP8, DIP8_MH, DIP14, DIP14_MH, DIP16, DIP16_MH, DIP18, DIP18_MH, DIP20, DIP20_MH, DIP20_TP, DIP22, DIP22_MH, DIP24, DIP24_MH, DIP24L, DIP24L_MH, DIP28, DIP28L, DIP40, DIP40_MH, DIP40_TP, DIP48

    LQFP32 7X7, LQFP44 10X10, LQFP48 7X7, LQFP52 10X10, LQFP64 7x7, LQFP64 10x10, LQFP64 14x14, LQFP80 12x12, LQFP80 14x14, LQFP100 14x14, LQFP120 14x14, LQFP128 14x14, LQFP128 14x20, LQFP144 20X20, LQFP176 24x24, LQFP208 28x28, LQFP216 24x24, LQFP256 28x28

    SOP4, SOP4-W2.54, SOP8, SOP8-W2.54, SOP8W, SOP10, SOP14, SOP16, SOP16-W2.54, SOP16N, SOP16W, SOP18, SOP18W, SOP20, SOP20Z, SOP22, SOP24, SOP28, SOP30, SSOP8, SSOP14, SSOP16, SSOP20, SSOP24, SSOP28, SSOP48

    TQFP32 5x5, TQFP32 7x7, TQFP40 5x5, TQFP44 10x10, TQFP48 7x7, TQFP52 10X10, TQFP64 7x7, TQFP64 10x10, TQFP64 14x14, TQFP80 12x12, TQFP80 14x14, TQFP100 14x14, TQFP120 14x14, TQFP128 14x14, TQFP128 20x20, TQFP144 20x20, TQFP176 20x20

    Tags: dip ssop qfn tqfp to sop sot chip

    Uploaded: 2022-03-05

    Uploader: ibeikeleilei