
weight

  • Because we do not truncate and shift the convolved input sequence, the delay of the desired output…

    Because we do not truncate and shift the convolved input sequence, the delay of the desired output sequence with respect to the convolved input sequence need only be the delay introduced by the ideal weight vector centred at n=5 (a comment from the uploaded MATLAB code; a small sketch of the idea follows this entry).

    Tags: the convolved truncate sequence

    Upload time: 2015-12-27

    Uploaded by: www240697738
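
    The alignment that comment describes can be shown with a few lines of NumPy. This is a minimal sketch under assumed values (an ideal weight vector whose main tap sits at n=5, a random input sequence); the variable names are illustrative and not taken from the original file.

      import numpy as np

      # Minimal sketch: when the convolved input is not truncated or shifted,
      # the desired sequence only needs to be delayed by the delay of the
      # ideal weight vector, whose centre tap is at n = 5.
      rng = np.random.default_rng(0)

      ideal_weights = np.zeros(11)
      ideal_weights[5] = 1.0                 # ideal weight vector centred at n = 5
      ideal_weights[4] = ideal_weights[6] = 0.5

      x = rng.standard_normal(100)           # input sequence
      y = np.convolve(x, ideal_weights)      # convolved input, full length, not truncated

      delay = 5                              # delay introduced by the weight vector's centre
      desired = np.concatenate([np.zeros(delay), x])   # desired output = input delayed by 5

      # The convolved sequence and the delayed desired sequence now line up
      # sample by sample, which a high correlation confirms.
      n = min(len(y), len(desired))
      print(np.corrcoef(y[:n], desired[:n])[0, 1])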

  • Solving the 0/1 knapsack problem with branch and bound…

    Solving the 0/1 knapsack problem with branch and bound. 1. Problem description: there are N items and a knapsack that can hold a total weight of TOT; each item I has a weight (weight) and a value (Value). An item can only be packed whole or left out, and the task is to decide which items to pack so that the total value of the items in the knapsack is maximized. 2. Design and analysis: the decision to take or skip each item forms a solution tree, with the left subtree meaning the item is packed and the right subtree meaning it is not; the optimal solution is found by searching this solution tree, and nodes whose upper bound cannot meet the requirement are killed (pruned). A minimal sketch follows this entry.

    Tags: TOT, branch, knapsack problem

    Upload time: 2016-02-09

    Uploaded by: 我们的船长
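
    A minimal sketch of the branch-and-bound scheme described above, assuming a best-first search whose node bound is the fractional (greedy) relaxation; the item data at the bottom and all names are illustrative, not taken from the uploaded code.

      import heapq

      def knapsack_branch_and_bound(weights, values, TOT):
          """0/1 knapsack by branch and bound: each level of the solution tree
          decides whether item i is packed (left branch) or skipped (right branch);
          nodes whose upper bound cannot beat the best value found so far are killed."""
          # Sort items by value density so the fractional bound is easy to compute.
          order = sorted(range(len(weights)), key=lambda i: values[i] / weights[i], reverse=True)
          w = [weights[i] for i in order]
          v = [values[i] for i in order]
          n = len(w)

          def bound(i, cur_w, cur_v):
              # Optimistic upper bound: fill the remaining capacity greedily,
              # allowing a fraction of the last item (fractional relaxation).
              cap = TOT - cur_w
              b = cur_v
              while i < n and w[i] <= cap:
                  cap -= w[i]
                  b += v[i]
                  i += 1
              if i < n:
                  b += v[i] * cap / w[i]
              return b

          best = 0
          # Best-first search: max-heap ordered by bound (negated for heapq).
          heap = [(-bound(0, 0, 0), 0, 0, 0)]   # (-bound, level, cur_w, cur_v)
          while heap:
              neg_b, i, cur_w, cur_v = heapq.heappop(heap)
              if -neg_b <= best or i == n:      # node cannot improve on best: kill it
                  continue
              # Left child: pack item i (only if it fits).
              if cur_w + w[i] <= TOT:
                  best = max(best, cur_v + v[i])
                  heapq.heappush(heap, (-bound(i + 1, cur_w + w[i], cur_v + v[i]),
                                        i + 1, cur_w + w[i], cur_v + v[i]))
              # Right child: skip item i.
              heapq.heappush(heap, (-bound(i + 1, cur_w, cur_v), i + 1, cur_w, cur_v))
          return best

      print(knapsack_branch_and_bound([2, 3, 4, 5], [3, 4, 5, 6], TOT=8))  # -> 10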

  • Computes the BER vs. Eb/No curve for a convolutional encoding / soft-decision Viterbi decoding scheme…

    Computes the BER vs. Eb/No curve for a convolutional encoding / soft-decision Viterbi decoding scheme assuming BPSK. A brute-force Monte Carlo approach is unsatisfactory (it takes too long) for finding the BER curve. The computation uses a quasi-analytic (QA) technique that relies on an approximate estimate of the information-bit Weight Enumerating Function (WEF), obtained from a simulation of the convolutional encoder. Once the WEF is estimated, the analytic formula for the BER is used (see the sketch after this entry).

    Tags: convolutional Computes encoding decision

    Upload time: 2013-12-23

    Uploaded by: 咔乐坞
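
    Once the information-bit WEF coefficients B_d are known, the quasi-analytic BER for soft-decision Viterbi decoding with BPSK on an AWGN channel is usually taken from the union bound, Pb ≈ sum over d of B_d * Q(sqrt(2*d*R*Eb/N0)). The sketch below evaluates that formula with hypothetical WEF coefficients chosen purely for illustration; in the scheme described above the coefficients would come from the encoder simulation.

      import math

      def q_func(x):
          # Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))
          return 0.5 * math.erfc(x / math.sqrt(2.0))

      def ber_union_bound(ebno_db, rate, wef):
          """Quasi-analytic BER estimate for soft-decision Viterbi decoding with BPSK:
          Pb ~ sum_d B_d * Q(sqrt(2 * d * R * Eb/N0)), where B_d is the
          information-bit weight enumerating coefficient at distance d."""
          ebno = 10.0 ** (ebno_db / 10.0)
          return sum(bd * q_func(math.sqrt(2.0 * d * rate * ebno)) for d, bd in wef.items())

      # Hypothetical WEF {distance d: B_d}, for illustration only.
      wef = {10: 36, 12: 211, 14: 1404, 16: 11633}
      for ebno_db in range(0, 7):
          print(ebno_db, "dB ->", ber_union_bound(ebno_db, rate=0.5, wef=wef))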

  • Swarm intelligence algorithms are based on natural behaviors. Particle swarm optimization (PSO)…

    Swarm intelligence algorithms are based on natural behaviors. Particle swarm optimization (PSO) is a stochastic search and optimization tool. Changes in the PSO parameters, namely the inertia weight and the cognitive and social acceleration constants, affect the performance of the search process. This paper presents a novel method to dynamically change the values of these parameters during the search: adaptive critic design (ACD) is applied to adjust the PSO parameters as the search proceeds. A plain PSO loop showing where such an update would plug in is sketched after this entry.

    Tags: intelligence optimization algorithms behaviors

    Upload time: 2014-01-08

    Uploaded by: lgnf
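
    The sketch below is a plain PSO loop in which the inertia weight w and the acceleration constants c1, c2 are recomputed every iteration; a simple linear schedule stands in for the adaptive-critic (ACD) update described above, so this shows only where such an update would plug in, not the paper's method. All names and values are illustrative.

      import numpy as np

      def sphere(x):
          return float(np.sum(x ** 2))   # simple test objective

      def pso(obj, dim=5, n_particles=20, iters=100, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.uniform(-5, 5, (n_particles, dim))
          v = np.zeros_like(x)
          pbest, pbest_val = x.copy(), np.array([obj(p) for p in x])
          gbest = pbest[np.argmin(pbest_val)].copy()

          for t in range(iters):
              # Inertia weight and acceleration constants are set each iteration.
              # Here a linear schedule stands in for the adaptive-critic (ACD)
              # update described above, which would choose these values dynamically.
              w = 0.9 - 0.5 * t / iters
              c1 = c2 = 2.0
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = x + v
              vals = np.array([obj(p) for p in x])
              improved = vals < pbest_val
              pbest[improved], pbest_val[improved] = x[improved], vals[improved]
              gbest = pbest[np.argmin(pbest_val)].copy()
          return gbest, float(np.min(pbest_val))

      print(pso(sphere))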

  • Constructing a Huffman tree…

    Constructing a Huffman tree. A Huffman tree has no node of degree one; it is a strict binary tree, so with n leaf nodes a one-dimensional array of length 2n-1 is needed to store the tree's nodes. (1) The n leaf nodes initially carry only a weight; to build the non-leaf nodes, find among ht[i] (ht[1]~ht[n-1]) the two nodes ht[s1] and ht[s2] with the smallest ht[i].weight; this is what the function Select(int n, int &s1, int &s2, HTNode *ht) does. (2) Call Select and make ht[s1] and ht[s2] the left and right subtrees of ht[l], i.e. the parent of ht[s1] and ht[s2] is ht[l]; the weight of the new root is the sum of the weights of its left and right subtrees: ht[l].weight = ht[s1].weight + ht[s2].weight. A minimal transliteration follows this entry.

    Tags: node

    Upload time: 2016-06-13

    Uploaded by: ztj182002
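
    A minimal Python transliteration of the procedure described above (0-based indices instead of the 1-based ht[1]..ht[2n-1] of the original, and a dict in place of HTNode); the example weights are illustrative.

      def select(ht, upto):
          """Return the indices s1, s2 of the two smallest-weight nodes among
          ht[0..upto-1] that do not yet have a parent (the role of Select above)."""
          candidates = [i for i in range(upto) if ht[i]["parent"] == -1]
          s1, s2 = sorted(candidates, key=lambda i: ht[i]["weight"])[:2]
          return s1, s2

      def build_huffman(weights):
          """Build a Huffman tree for n leaf weights in an array of length 2n-1."""
          n = len(weights)
          ht = [{"weight": w, "parent": -1, "left": -1, "right": -1} for w in weights]
          ht += [{"weight": 0, "parent": -1, "left": -1, "right": -1} for _ in range(n - 1)]
          for l in range(n, 2 * n - 1):
              s1, s2 = select(ht, l)
              ht[s1]["parent"] = ht[s2]["parent"] = l     # parent of both is the new node
              ht[l]["left"], ht[l]["right"] = s1, s2
              ht[l]["weight"] = ht[s1]["weight"] + ht[s2]["weight"]   # sum of subtree weights
          return ht

      tree = build_huffman([5, 29, 7, 8, 14, 23, 3, 11])
      print(tree[-1]["weight"])   # root weight = sum of all leaf weights (100)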

  • Like the basic neuron in biology…

    Like the basic neuron in biology, an artificial neural network also has basic neurons. Each neuron has a specific number of inputs, and a weight is set for each input; the weight is an indicator of the importance of the data arriving on that input. The neuron then computes a net value, which is the sum of all inputs multiplied by their weights. Each neuron also has its own threshold, and when the net value exceeds the threshold… A minimal sketch of such a neuron follows this entry.

    Tags:

    Upload time: 2014-06-05

    Uploaded by: luke5347
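
    A minimal sketch of the neuron just described: the net value is the weighted sum of the inputs, and the usual convention (the text above is cut off before stating it) is that the neuron outputs 1 when the net value exceeds its threshold. The input and weight values are illustrative.

      def neuron(inputs, weights, threshold):
          """Basic artificial neuron: each input has a weight indicating its
          importance; the net value is the sum of inputs times their weights,
          and the output is 1 when the net value exceeds the neuron's threshold."""
          net_value = sum(x * w for x, w in zip(inputs, weights))
          return 1 if net_value > threshold else 0

      # A neuron computing a simple AND-like decision (illustrative values).
      print(neuron([1, 1], [0.6, 0.6], threshold=1.0))   # 1.2 > 1.0 -> fires (1)
      print(neuron([1, 0], [0.6, 0.6], threshold=1.0))   # 0.6 <= 1.0 -> 0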

  • A UI package for J2ME

    A UI package designed for J2ME; good-looking and practical. Light weight UI toolkit.

    Tags: j2me

    Upload time: 2013-12-20

    Uploaded by: kristycreasy

  • Write a Java program…

    Write a Java program that defines a transport class Transport whose member fields are the speed (pace) and the load capacity (load). The car class Vehicle is a subclass of Transport and adds the number of wheels (wheels) and the car's weight (weight). The airplane class Airplane is a subclass of Transport and adds the model (enginertype) and the number of engines (enginers). Each class has a method that prints all of its data. A sketch of the hierarchy follows this entry.

    Tags: Java, write, program

    Upload time: 2016-11-16

    Uploaded by: miaochun888
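
    The assignment above targets Java; the outline below only mirrors the described hierarchy in Python so the structure is easy to see, keeping the field names pace, load, wheels, weight, enginertype and enginers from the statement. It is an illustrative sketch, not a solution in the required language.

      class Transport:
          """Base transport class with speed (pace) and load capacity (load)."""
          def __init__(self, pace, load):
              self.pace = pace
              self.load = load

          def show(self):
              print(f"pace={self.pace}, load={self.load}")

      class Vehicle(Transport):
          """Car: adds the number of wheels and the car's weight."""
          def __init__(self, pace, load, wheels, weight):
              super().__init__(pace, load)
              self.wheels = wheels
              self.weight = weight

          def show(self):
              super().show()
              print(f"wheels={self.wheels}, weight={self.weight}")

      class Airplane(Transport):
          """Airplane: adds the model (enginertype) and number of engines (enginers)."""
          def __init__(self, pace, load, enginertype, enginers):
              super().__init__(pace, load)
              self.enginertype = enginertype
              self.enginers = enginers

          def show(self):
              super().show()
              print(f"enginertype={self.enginertype}, enginers={self.enginers}")

      Vehicle(pace=120, load=2.0, wheels=4, weight=1.5).show()
      Airplane(pace=900, load=80.0, enginertype="A320", enginers=2).show()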

  • This function calculates Akaike's final prediction error estimate of the average generalization error…

    This function calculates Akaike's final prediction error estimate of the average generalization error. [FPE,deff,varest,H] = fpe(NetDef,W1,W2,PHI,Y,trparms) produces the final prediction error estimate (fpe), the effective number of weights in the network if the network has been trained with weight decay, an estimate of the noise variance, and the Gauss-Newton Hessian. A numerical sketch of the underlying formula follows this entry.

    Tags: generalization calculates prediction function

    Upload time: 2014-12-03

    Uploaded by: maizezhen
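
    A common form of the regularized final prediction error is FPE ≈ (N + d_eff)/(N - d_eff) * SSE/N, where d_eff is the effective number of weights under weight decay, computed from the Gauss-Newton Hessian. The sketch below evaluates that form on synthetic numbers; it mirrors the quantities the toolbox's fpe returns but is not its implementation, and the decay handling is a simplifying assumption.

      import numpy as np

      def fpe_estimate(residuals, hessian, decay):
          """Sketch of a regularized final prediction error estimate:
          d_eff = sum_i lam_i / (lam_i + decay), with lam_i the eigenvalues of the
          Gauss-Newton Hessian, and FPE = (N + d_eff) / (N - d_eff) * SSE / N."""
          N = len(residuals)
          sse = float(np.sum(residuals ** 2))
          lam = np.linalg.eigvalsh(hessian)
          d_eff = float(np.sum(lam / (lam + decay)))
          noise_var = sse / max(N - d_eff, 1e-12)     # estimate of the noise variance
          fpe = (N + d_eff) / (N - d_eff) * sse / N
          return fpe, d_eff, noise_var

      # Tiny synthetic example (illustrative only).
      rng = np.random.default_rng(1)
      res = rng.standard_normal(200) * 0.1
      H = np.diag([50.0, 10.0, 1.0, 0.1])             # stand-in Gauss-Newton Hessian
      print(fpe_estimate(res, H, decay=0.5))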

  • Train a two-layer neural network with the Levenberg-Marquardt method…

    Train a two-layer neural network with the Levenberg-Marquardt method. If desired, it is possible to use regularization by weight decay; pruned (i.e. not fully connected) networks can also be trained. Given a set of corresponding input-output pairs and an initial network, [W1,W2,critvec,iteration,lambda]=marq(NetDef,W1,W2,PHI,Y,trparms) trains the network with the Levenberg-Marquardt method. The activation functions can be either linear or tanh. The network architecture is defined by the matrix NetDef, which has two rows: the first row specifies the hidden layer and the second row specifies the output layer. A minimal sketch of the Levenberg-Marquardt update follows this entry.

    Tags: Levenberg-Marquardt desired network neural

    Upload time: 2016-12-26

    Uploaded by: jcljkh
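
    The core of the method the help text refers to is the damped Gauss-Newton step dw = (J'J + lambda*I)^(-1) J'e with an adaptive damping factor. The sketch below trains a small tanh/linear two-layer network this way, using a finite-difference Jacobian to stay short; it is a simplified illustration (no weight decay, no pruning), not the toolbox's marq, and all names and data are assumptions.

      import numpy as np

      def two_layer_net(params, X, n_hidden):
          """Two-layer network: tanh hidden layer, single linear output."""
          d = X.shape[1]
          W1 = params[: n_hidden * (d + 1)].reshape(n_hidden, d + 1)   # hidden weights + bias
          W2 = params[n_hidden * (d + 1):].reshape(1, n_hidden + 1)    # output weights + bias
          H = np.tanh(X @ W1[:, :d].T + W1[:, d])
          return H @ W2[0, :n_hidden] + W2[0, n_hidden]

      def marquardt(X, y, n_hidden=5, iters=50, lam=1.0, seed=0):
          """Levenberg-Marquardt training: repeatedly solve
          (J^T J + lam*I) dw = J^T e and adapt the damping factor lam."""
          rng = np.random.default_rng(seed)
          d = X.shape[1]
          n_params = n_hidden * (d + 1) + (n_hidden + 1)
          w = rng.standard_normal(n_params) * 0.5

          def output(w):
              return two_layer_net(w, X, n_hidden)

          def jacobian(w, eps=1e-6):
              # Finite-difference Jacobian of the network output w.r.t. the weights
              # (the toolbox computes it analytically; this keeps the sketch short).
              J = np.empty((len(y), len(w)))
              f0 = output(w)
              for j in range(len(w)):
                  wp = w.copy(); wp[j] += eps
                  J[:, j] = (output(wp) - f0) / eps
              return J

          sse = float(np.sum((y - output(w)) ** 2))
          for _ in range(iters):
              e = y - output(w)                   # residuals
              J = jacobian(w)
              # Damped Gauss-Newton (Levenberg-Marquardt) step.
              dw = np.linalg.solve(J.T @ J + lam * np.eye(len(w)), J.T @ e)
              w_new = w + dw
              sse_new = float(np.sum((y - output(w_new)) ** 2))
              if sse_new < sse:       # step accepted: reduce the damping
                  w, sse, lam = w_new, sse_new, lam * 0.5
              else:                   # step rejected: increase the damping
                  lam *= 2.0
          return w, sse

      # Fit y = sin(x) on a small training set (illustrative).
      X = np.linspace(-3, 3, 40).reshape(-1, 1)
      y = np.sin(X[:, 0])
      w, sse = marquardt(X, y)
      print("final SSE:", sse)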