
📄 ttr1.m

📁 From the book "Regularization tools for training large feed-forward neural networks using Automatic Differentiation"
function [w1,b1,tr,rq] = ttr1(w1,b1,f1,...
                              xc,P,T,VA,VAT,TE,TET,TP)
%TTR1 Trains a large feed-forward network containing no hidden layers
%using the Gauss-Newton method on a Tikhonov regularized problem.
%
%	[W1,B1,TR,RQ] = TTR1(W1,B1,'F1',...
%					XC,P,T,VA,VAT,TE,TET,TP)
%	  Wi  - Weight matrix of ith layer.
%	  Bi  - Bias vector of ith layer.
%	  F   - Transfer function (string) of ith layer.
%	  XC  - Center point for the weights and biases.
%	  P   - RxQ matrix of input vectors.
%	  T   - S2xQ matrix of target vectors.
%	  VA  - Matrix of validation set.
%	  VAT - Matrix of target values corresponding to VA.
%	  TE  - Matrix of test set.
%	  TET - Matrix of target values corresponding to TE.
%	  TP  - Training parameters (optional).
%
%	Training parameters are:
%	  TP(1) - #epochs in strip length (also between displays), default = 5
%	  TP(2) - Maximum number of epochs to train, default = 50
%	  TP(3) - Sum-squared error goal, default = 0.01
%	  TP(4) - Generalization loss threshold (%), default = 5
%	  TP(5) - Initial value of MU, default = 0.1
%	  TP(6) - Minimum value of MU, default = 0.001
%	  TP(7) - Controls termination criteria. Code is two digits XY.
%	          X controls the outer termination criterion: =1 absolute
%	          criterion on the error function is used, =2 relative
%	          difference criterion on the error function is used,
%	          >2 early stopping is used (default = 3).
%	          Y controls the inner criterion used when the CG method is
%	          used to compute the search direction: =1 original criterion
%	          from LSQR is used, =2 Dembo et al. criterion is used,
%	          =3 Jerry's criterion is used (default = 2).
%	  TP(8) - Controls line search:
%	          =1 parabola in R(m) is used, =2 curvilinear search is used
%	          (default = 1).
%	  TP(9) - Controls method to compute search direction:
%	          =2 unscaled CG, =3 diagonally scaled CG (default = 2).
%
%	Missing parameters and NaN's are replaced with defaults.
%
%	Returns:
%	  Wi - New weights.
%	  Bi - New biases.
%	  TR - Resulting matrix containing a row of information corresponding
%	       to each epoch (iteration). Each row contains
%	       "Fsq MU relquot E(va) norm(dx) progress/1000 alpha"
%	       (see the paper "Regularization tools for training feed-forward
%	       neural networks, Part I: Theory and basic algorithms" for a
%	       description of the row contents).
%	  RQ - Resulting quantities (see below).
%
%	Resulting quantities are:
%	  RQ(1) - Total number of iterations (epochs).
%	  RQ(2) - Total number of net evaluations (funcev).
%	  RQ(3) - Elapsed time in computing the search direction.
%	  RQ(4) - Total number of inner iterations in computing the search
%	          direction.
%	  RQ(5) - Error for the validation set (min(E(va))).
%	  RQ(6) - Error for the test set at the point where min(E(va)) occurs.
%	  RQ(7) - Number of epochs until min(E(va)) occurs.
%	  RQ(8) - Total time inside ttr1.m.
%
% Per Lindstrom, Computing Science, Umea University, Sweden
% email: perl@cs.umu.se
% $Revision: 0.0

if nargin < 10
  error('TTR1: Not enough arguments.')
end

disp('TTR1: Sorry! Not implemented')

end
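A minimal sketch of how the interface documented in the help text could be invoked, for a network with one input and one output and no hidden layers. All data here is synthetic, and the transfer-function name `'purelin'` is an assumption (the linear transfer function of the classic MATLAB Neural Network Toolbox); the routine itself only reports "Not implemented", so this illustrates the calling convention rather than a working run:

```matlab
% Synthetic data for a no-hidden-layer network (1 input, 1 output).
P  = rand(1,20);  T   = 2*P + 0.1*randn(1,20);   % training inputs/targets
VA = rand(1,5);   VAT = 2*VA;                    % validation set
TE = rand(1,5);   TET = 2*TE;                    % test set

w1 = randn(1,1);  b1 = randn(1,1);               % initial weight and bias
xc = zeros(2,1);                                 % center point for the
                                                 % Tikhonov regularization

% TP(7) = 32 picks X = 3 (early stopping) and Y = 2 (Dembo et al.
% inner criterion), the documented defaults.
TP = [5 50 0.01 5 0.1 0.001 32 1 2];

[w1,b1,tr,rq] = ttr1(w1,b1,'purelin',xc,P,T,VA,VAT,TE,TET,TP);
```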
