mlpjacob.m
function g = mlpjacob(net, x)
%MLPJACOB Evaluate Jacobian of network outputs with respect to inputs.
%
% Description
% G = MLPJACOB(NET, X) takes a network data structure NET and an
% input vector X and returns a matrix G whose (J, K) element contains
% the derivative of network output K with respect to input parameter J.
%
% See also
% MLP, MLPGRAD, MLPBKP
%
% Copyright (c) Christopher M Bishop, Ian T Nabney (1996, 1997)
% This function coded by Rudolph van der Merwe (2002)

% Forward propagate: y = network outputs, z = hidden unit activations
% (tanh), a = output-layer pre-activations (unused here).
[y, z, a] = mlpfwd(net, x);

% deltas(k, m) holds the derivative of output k with respect to the
% output-layer pre-activation a(m); back-propagate it through the
% second-layer weights and the tanh derivative (1 - z.^2).
switch net.outfn

  case 'linear'
    deltas = eye(net.nout);              % dy_k/da_m = delta_km
    delhid = deltas*net.w2';
    delhid = delhid.*repmat((1.0 - z.*z), net.nout, 1);

  case 'logistic'
    deltas = diag(y.*(1 - y));           % dy_k/da_m = y_k(1 - y_k) delta_km
    delhid = deltas*net.w2';
    delhid = delhid.*repmat((1.0 - z.*z), net.nout, 1);

  case 'softmax'
    deltas = diag(y) - y'*y;             % dy_k/da_m = y_k(delta_km - y_m)
    delhid = deltas*net.w2';
    delhid = delhid.*repmat((1.0 - z.*z), net.nout, 1);

  otherwise
    error(['Unknown output function ', net.outfn]);

end

% Finally, evaluate the first-layer gradients: g is nin x nout, with
% g(j, k) = d y_k / d x_j.
g = (delhid*net.w1')';
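For reference, the code above is the chain rule applied through the tanh hidden layer. A sketch of the identity it implements, where w^{(1)} and w^{(2)} denote net.w1 and net.w2, and the three cases for dy_k/da_m correspond to the three deltas branches:

\frac{\partial y_k}{\partial x_j}
  = \sum_h w^{(1)}_{jh}\,\bigl(1 - z_h^2\bigr)
    \sum_m \frac{\partial y_k}{\partial a_m}\, w^{(2)}_{hm},
\qquad
\frac{\partial y_k}{\partial a_m} =
\begin{cases}
\delta_{km}, & \text{linear} \\
y_k (1 - y_k)\,\delta_{km}, & \text{logistic} \\
y_k (\delta_{km} - y_m), & \text{softmax}
\end{cases}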
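A minimal usage sketch, assuming the Netlab toolbox (mlp, mlpfwd) is on the path. The layer sizes, input, and step size epsilon here are illustrative; the finite-difference loop is only a sanity check on the analytic Jacobian, not part of the function:

% Build a small network with random weights and evaluate the Jacobian.
nin = 3; nhidden = 5; nout = 2;
net = mlp(nin, nhidden, nout, 'linear');
x = randn(1, nin);                 % a single input vector
g = mlpjacob(net, x);              % nin x nout

% Sanity check: central finite differences, one input dimension at a time.
epsilon = 1e-6;
gfd = zeros(nin, nout);
for j = 1:nin
  xp = x;  xp(j) = xp(j) + epsilon;
  xm = x;  xm(j) = xm(j) - epsilon;
  gfd(j, :) = (mlpfwd(net, xp) - mlpfwd(net, xm)) / (2*epsilon);
end
max(abs(g(:) - gfd(:)))            % should be close to zero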