Code search: Learning

Found about 5,352 source-code results matching "Learning"

www.eeworm.com/read/412367/11202060

m readme.m

This is a parsed-code (MATLAB P-files) version of the simulation routines accompanying Vojislav KECMAN's book: LEARNING AND SOFT COMPUTING: Support Vector Machines
www.eeworm.com/read/412367/11202179

m read about simulational experiments.m

Vojislav KECMAN's book: LEARNING AND SOFT COMPUTING: Support Vector Machines, Neural Networks and Fuzzy Logic Models. The MIT Press, Cambridge, MA, 2000. ISBN 0
www.eeworm.com/read/411382/11247785

m hop_stor.m

function W=hop_stor(P)
% function W=hop_stor(P)
%
% performs the storage (learning phase) for a Hopfield network
%
% W - weight matrix
% P - patterns to be stored (column-wise matrix)
%
% Hugh
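The hop_stor routine above performs Hebbian storage for a Hopfield network. A minimal Python sketch of the same idea (a hypothetical reimplementation for illustration, not the listed MATLAB code) could look like:

```python
def hop_stor(P):
    """Hebbian storage (learning phase) for a Hopfield network.

    P: list of bipolar patterns (each a list of +1/-1 values);
    the MATLAB original stores patterns column-wise instead.
    Returns the weight matrix W = sum over patterns of p p^T,
    with the diagonal zeroed so no unit feeds back on itself.
    """
    n = len(P[0])
    W = [[0] * n for _ in range(n)]
    for p in P:
        for i in range(n):
            for j in range(n):
                if i != j:  # zero diagonal: no self-connections
                    W[i][j] += p[i] * p[j]
    return W

# Store one bipolar pattern; W is symmetric with zero diagonal.
W = hop_stor([[1, -1, 1]])
```

Recall then proceeds by repeatedly applying sign(W x) until the state stops changing, which is the retrieval phase that complements this storage step.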
www.eeworm.com/read/146896/12605412

plg 矩阵相乘.plg

Build Log
--------------------Configuration: 矩阵相乘 - Win32 Debug--------------------
Command Lines
Creating temporary file "C:\DOCUME~1\scyfm\LO
www.eeworm.com/read/111603/15509349

bbl manual.bbl

\begin{thebibliography}{}
\bibitem[Boser et~al., 1992]{Boser1992}
Boser, B., Guyon, I., and Vapnik, V.~N. (1992).
\newblock A training algorithm for optimal margin classifiers.
\newblock In {\em
www.eeworm.com/read/389844/8496291

asv demo_incremental.asv

%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
%
% Incremental ELM learning DEMO
%
% Author: Povilas Daniušis, paralax@hacker.lt
% http://ai.hacker.lt - Lithuanian
www.eeworm.com/read/389274/8536822

m example22a.m

%perc2a
%%===============
%%===============
% figure('name','training process illustration','numbertitle','off');
P=[-0.5 -0.5 0.3 0;-0.5 0.5 -0.5 1];
T=[1 1 0 0];
%initialization
[R,Q]=size(P);
[S,Q]=size(T)
www.eeworm.com/read/389274/8536969

m example22.m

%perc2
%%===============
%%===============
% figure('name','training process illustration','numbertitle','off');
P=[-0.5 -0.5 0.3 0;-0.5 0.5 -0.5 1];
T=[1 1 0 0];
%initialization
[R,Q]=size(P);
[S,Q]=size(T);
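example22.m above is a perceptron training demo on a four-sample, two-input problem (the columns of P are the samples, T the 0/1 targets). Assuming it uses the classic error-correction rule with a hard-limit activation, a self-contained Python sketch on the same data might be:

```python
def perceptron_train(samples, targets, epochs=20):
    """Classic perceptron rule: w <- w + (t - a) * x, b <- b + (t - a),
    with hard-limit activation a = 1 if w.x + b >= 0 else 0.
    Converges when the classes are linearly separable."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            a = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
            e = t - a  # error drives the weight update
            if e:
                w = [wi + e * xi for wi, xi in zip(w, x)]
                b += e
    return w, b

# Columns of the demo's P matrix as samples, with T as targets.
samples = [[-0.5, -0.5], [-0.5, 0.5], [0.3, -0.5], [0.0, 1.0]]
targets = [1, 1, 0, 0]
w, b = perceptron_train(samples, targets)
```

This data is linearly separable, so the rule converges in a few epochs; the five-sample variant in example24a.m adds a point that makes the classes inseparable, which is a common way such demos illustrate the perceptron's limits.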
www.eeworm.com/read/389274/8537162

m example24a.m

%perc4
%%===============
%%===============
figure('name','training process illustration','numbertitle','off');
P=[-0.5 -0.5 0.3 0 -0.8;-0.5 0.5 -0.5 1 0];
T=[1 1 0 0 0];
%initialization
[R,Q]=size(P);
[S,Q]=size(T
www.eeworm.com/read/389274/8537460

m selforganize.m

function [w,wbias,y,d,b,sse]=selforganize(x,c,t)
% RBF network implementation
% x: np-by-ni input matrix; np = number of input samples, ni = number of RBF input-layer units
% c: ni-by-m initial center matrix; m = number of centers
% t: np-by-no target output matrix; no = number of RBF output-layer units
[np,ni]=size(x);
d=learning_c(x,c); % learning
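selforganize.m above calls learning_c(x,c) to obtain sample-to-center quantities for the RBF layer. The real learning_c is not shown in the listing, so the following is only a plausible stand-in: a Euclidean distance matrix with the dimension conventions the comments describe.

```python
import math

def learning_c(x, c):
    """Euclidean distance from each input sample to each RBF center.

    x: np-by-ni list of samples (rows); c: ni-by-m list with centers
    stored as columns, matching the dimensions in selforganize.m.
    Returns an np-by-m distance matrix. Hypothetical stand-in for the
    unlisted learning_c routine.
    """
    ni, m = len(c), len(c[0])
    return [[math.sqrt(sum((row[k] - c[k][j]) ** 2 for k in range(ni)))
             for j in range(m)]
            for row in x]

# Two 2-D samples against a single center at (3, 4), given column-wise.
d = learning_c([[0.0, 0.0], [3.0, 4.0]], [[3.0], [4.0]])
```

In a full RBF network these distances would then be passed through a radial basis function, e.g. exp(-d**2 / (2 * sigma**2)), before the linear output layer is trained.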