mlpjacob.m

From the "Recursive Bayesian Estimation Toolkit" · MATLAB (M) code · 39 lines

function g = mlpjacob(net, x)
%MLPJACOB Evaluate Jacobian of 2-layer network outputs with respect to inputs.
%
%   Description
%   G = MLPJACOB(NET, X) takes a network data structure NET and an
%   input vector X and returns a matrix G whose J, K element contains
%   the derivative of network output K with respect to input J.
%
%   See also
%   MLP, MLPGRAD, MLPBKP
%
%   Copyright (c) Christopher M Bishop, Ian T Nabney (1996, 1997)
%                 This function coded by Rudolph van der Merwe (2002)

% Forward pass: y = network outputs, z = hidden-unit (tanh) activations,
% a = output-layer pre-activations.
[y, z, a] = mlpfwd(net, x);

% deltas holds the derivative of each output with respect to its
% pre-activation; delhid backpropagates it through the second-layer
% weights and the tanh derivative (1 - z.^2).
switch net.outfn

case 'linear'

  deltas = eye(net.nout);
  delhid = deltas*net.w2';
  delhid = delhid.*repmat((1.0 - z.*z), net.nout, 1);

case {'logistic'}

  deltas = diag(y.*(1-y));
  delhid = deltas*net.w2';
  delhid = delhid.*repmat((1.0 - z.*z), net.nout, 1);

case {'softmax'}

  deltas = diag(y) - y'*y;
  delhid = deltas*net.w2';
  delhid = delhid.*repmat((1.0 - z.*z), net.nout, 1);

otherwise
  error(['Unknown output function ', net.outfn]);

end

% Finally, evaluate the first-layer gradients.
g = (delhid*net.w1')';
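For reference, the quantity being assembled is the chain-rule Jacobian of the two-layer network. With D denoting the derivative of the output activation (the identity for linear outputs, diag(y.*(1-y)) for logistic, diag(y) - y'*y for softmax), the code above computes G = net.w1 * diag(1 - z.^2) * net.w2 * D, an nin-by-nout matrix whose (j, k) entry is the derivative of output k with respect to input j.

A minimal usage sketch follows, assuming the Netlab toolbox (mlp, mlpfwd) is on the MATLAB path; the layer sizes and the finite-difference spot check are illustrative assumptions, not part of the original file:

% Build a small network: 3 inputs, 5 tanh hidden units, 2 linear outputs.
net = mlp(3, 5, 2, 'linear');
x   = randn(1, 3);            % a single input row vector
g   = mlpjacob(net, x);       % 3-by-2 Jacobian: g(j,k) = d y(k) / d x(j)

% Optional finite-difference check of the first input dimension.
h  = 1e-6;
xp = x;  xp(1) = xp(1) + h;
fd = (mlpfwd(net, xp) - mlpfwd(net, x)) / h;   % compare with g(1,:)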
