Code search: Gradient

Found about 2,951 source-code matches for "Gradient"

www.eeworm.com/read/292964/3936891

m mixexp_graddesc.m

%%%%%%%%%% function [theta, eta] = mixture_of_experts(q, data, num_iter, theta, eta) % MIXTURE_OF_EXPERTS Fit a piecewise linear regression model using stochastic gradient descent. % [theta, eta] =
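The mixexp_graddesc.m result above fits a piecewise linear regression by stochastic gradient descent: at each step one training sample is drawn and the parameters are nudged down the gradient of that sample's squared error. A minimal Python sketch of the same idea (a single linear "expert" rather than the full mixture, and with hypothetical names, not the actual MATLAB routine):

```python
import random

def sgd_linear_regression(data, num_iter=5000, lr=0.1, seed=0):
    """Fit y ~ theta0 + theta1*x by stochastic gradient descent.

    A deliberately simplified, single-expert stand-in for the
    mixture-of-experts fit in mixexp_graddesc.m: one (x, y) sample
    is drawn per iteration and the squared-error gradient for that
    sample is followed downhill with a fixed learning rate.
    """
    rng = random.Random(seed)
    theta0, theta1 = 0.0, 0.0
    for _ in range(num_iter):
        x, y = rng.choice(data)
        err = (theta0 + theta1 * x) - y   # residual on this sample
        theta0 -= lr * err                # d/dtheta0 of err^2/2 is err
        theta1 -= lr * err * x            # d/dtheta1 of err^2/2 is err*x
    return theta0, theta1

# Noise-free data from y = 1 + 2x, so SGD should recover roughly (1, 2).
data = [(k / 10.0, 1.0 + 2.0 * (k / 10.0)) for k in range(-10, 11)]
theta0, theta1 = sgd_linear_regression(data)
```

The real mixture-of-experts fit additionally learns gating parameters `eta` that decide which linear expert handles which region of the input; this sketch keeps only the per-sample gradient step.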
www.eeworm.com/read/292964/3937270

m maximize_params.m

function CPD = maximize_params(CPD, temp) % MAXIMIZE_PARAMS Find ML params of an MLP using Scaled Conjugate Gradient (SCG) % CPD = maximize_params(CPD, temperature) % temperature parameter is igno
www.eeworm.com/read/273525/4207829

ado arch_dr.ado

*! version 6.0.2 30mar2005 program define arch_dr version 6 args todo /* whether to calculate gradient */ bc /* Name of full beta matrix */ llvar /* Name of variable to hold LL
www.eeworm.com/read/273525/4210122

ado heck_d2.ado

*! version 2.2.3 14feb2005 program define heck_d2 version 6.0 args todo /* whether to calculate gradient */ b /* Name of beta matrix */ lnf /* Name of scalar to hold likelihoo
www.eeworm.com/read/273525/4210496

ado ml_max.ado

*! version 7.2.13 27jun2005 program define ml_max, eclass local vv : display "version " string(_caller()) ":" version 6 #delimit ; syntax [, Bounds(string) noCLEAR GRADient noHEADer HESSian
www.eeworm.com/read/434858/1867936

m mixexp_graddesc.m

%%%%%%%%%% function [theta, eta] = mixture_of_experts(q, data, num_iter, theta, eta) % MIXTURE_OF_EXPERTS Fit a piecewise linear regression model using stochastic gradient descent. % [theta, eta] =
www.eeworm.com/read/434858/1868178

m maximize_params.m

function CPD = maximize_params(CPD, temp) % MAXIMIZE_PARAMS Find ML params of an MLP using Scaled Conjugate Gradient (SCG) % CPD = maximize_params(CPD, temperature) % temperature parameter is igno
www.eeworm.com/read/431231/1908714

java gradientbar.java

package ai.decision.gui; import java.awt.*; import javax.swing.*; /** * A utility class that draws a gradient-filled bar on * a supplied graphics context. Each shade of color on the * bar
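The gradientbar.java result paints each slice of the bar in a shade interpolated between two endpoint colors. The interpolation itself can be sketched in a few lines of Python (the drawing onto an AWT Graphics context in the actual ai.decision.gui class is omitted; the function name here is hypothetical):

```python
def gradient_shades(start, end, n):
    """Linearly interpolate n RGB shades from start to end.

    Illustrates the core of a gradient-filled bar: slice i of n gets
    the color at fraction t = i/(n-1) along the start->end line in
    RGB space, rounded to integer channel values.
    """
    shades = []
    for i in range(n):
        t = i / (n - 1) if n > 1 else 0.0
        shades.append(tuple(round(s + t * (e - s)) for s, e in zip(start, end)))
    return shades

# Ten shades running from pure red to pure blue.
bar = gradient_shades((255, 0, 0), (0, 0, 255), 10)
```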
www.eeworm.com/read/396844/2406602

m netgrad.m

function g = netgrad(w, net, x, t) %NETGRAD Evaluate network error gradient for generic optimizers % % Description % % G = NETGRAD(W, NET, X, T) takes a weight vector W and a network data % structure
www.eeworm.com/read/396844/2406654

m scg.m

function [x, options, flog, pointlog, scalelog] = scg(f, x, options, gradf, varargin) %SCG Scaled conjugate gradient optimization. % % Description % [X, OPTIONS] = SCG(F, X, OPTIONS, GRADF) uses a sca
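The last two results show the Netlab pattern these optimizers share: the objective `f` and its gradient `gradf` (e.g. netgrad) are passed as separate functions to a minimizer such as scg. A Python sketch with the same call shape, but using plain fixed-step gradient descent instead of scaled conjugate gradient (which also maintains curvature estimates and a model-trust-region update; names here are hypothetical):

```python
def graddesc(f, x, gradf, lr=0.1, max_iter=500, tol=1e-8):
    """Minimize f starting from x, using its gradient function gradf.

    Mirrors the scg.m calling convention (objective plus separate
    gradient function); f itself is accepted for signature parity but
    unused, since fixed-step gradient descent needs no line search.
    """
    x = list(x)
    for _ in range(max_iter):
        g = gradf(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
        if sum(gi * gi for gi in g) < tol:   # stop once the gradient is ~0
            break
    return x

# Quadratic bowl f(x) = (x0 - 3)^2 + (x1 + 1)^2, minimum at (3, -1).
f = lambda x: (x[0] - 3) ** 2 + (x[1] + 1) ** 2
gradf = lambda x: [2 * (x[0] - 3), 2 * (x[1] + 1)]
xmin = graddesc(f, [0.0, 0.0], gradf)
```

In Netlab the same `gradf` slot would be filled by netgrad, which unpacks the weight vector into the network structure before computing the error gradient.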