<html>
<head><title>Netlab Reference Manual rbfgrad</title></head>
<body>
<H1> rbfgrad</H1>
<h2>Purpose</h2>
Evaluate gradient of error function for RBF network.
<p>
<h2>Synopsis</h2>
<PRE>
g = rbfgrad(net, x, t)
[g, gdata, gprior] = rbfgrad(net, x, t)
</PRE>
<p>
<h2>Description</h2>
<CODE>g = rbfgrad(net, x, t)</CODE> takes a network data structure <CODE>net</CODE> together with a matrix <CODE>x</CODE> of input vectors and a matrix <CODE>t</CODE> of target vectors, and evaluates the gradient <CODE>g</CODE> of the error function with respect to the network weights (i.e. including the hidden unit parameters). The error function is the sum of squares. Each row of <CODE>x</CODE> corresponds to one input vector and each row of <CODE>t</CODE> contains the corresponding target vector. If the output function is <CODE>'neuroscale'</CODE>, then the gradient is only computed for the output layer weights and biases.
<p>
<CODE>[g, gdata, gprior] = rbfgrad(net, x, t)</CODE> also returns separately the data and prior contributions to the gradient. In the case of multiple groups in the prior, <CODE>gprior</CODE> is a matrix with a row for each group and a column for each weight parameter.
<p>
<h2>See Also</h2>
<CODE><a href="rbf.htm">rbf</a></CODE>, <CODE><a href="rbffwd.htm">rbffwd</a></CODE>, <CODE><a href="rbferr.htm">rbferr</a></CODE>, <CODE><a href="rbfpak.htm">rbfpak</a></CODE>, <CODE><a href="rbfunpak.htm">rbfunpak</a></CODE>, <CODE><a href="rbfbkp.htm">rbfbkp</a></CODE>
<hr>
<b>Pages:</b>
<a href="index.htm">Index</a>
<hr>
<p>Copyright (c) Ian T Nabney (1996-9)
</body></html>
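<h2>Example</h2>
A minimal usage sketch. The network sizes (2 inputs, 5 hidden units, 1 output) are illustrative, and the data matrices <CODE>x</CODE> and <CODE>t</CODE> are assumed to already exist with one pattern per row:
<PRE>
net = rbf(2, 5, 1, 'gaussian');      % create an RBF network (sizes are illustrative)
[g, gdata, gprior] = rbfgrad(net, x, t);  % gradient of sum-of-squares error
</PRE>
In practice the gradient is rarely called directly; it is typically consumed by a general-purpose optimiser such as <CODE>netopt</CODE>, which evaluates it internally during training.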