# ann.mdl
}
Branch {
DstBlock "Sum"
DstPort 2
}
}
Branch {
DstBlock "X"
DstPort 1
}
}
}
Line {
SrcBlock "Sum"
SrcPort 1
DstBlock "Unit Delay"
DstPort 1
}
Line {
SrcBlock "eta"
SrcPort 1
DstBlock "Sum"
DstPort 1
}
Line {
SrcBlock "xn"
SrcPort 1
DstBlock "Mux8"
DstPort 2
}
Line {
SrcBlock "Constant"
SrcPort 1
DstBlock "Mux8"
DstPort 1
}
}
}
Block {
BlockType SubSystem
Name "DCS"
Ports [3, 2, 0, 0, 0]
Position [30, 369, 100, 451]
BackgroundColor "darkGreen"
ShowPortLabels on
MaskType "DCS NN"
MaskDescription " Self Adaptive Discrete Time Gaussian DCS Neura"
"l Network"
MaskHelp "<p>\n This Neural Network is used to adaptivel"
"y approximate\n a vector field y=f(x), where x(t) is a vector of size Ni, \n"
" and y(t) is a vector of size No.\n</p>\n<p>\n The first input is x (normal"
"ly but not necessarily \n scaled between -1 and 1).<br>\n The second input "
"is the error signal (i.e. e=y-ys).<br>\n The third input is the learning ena"
"ble:\n with LE=1 the learning is enabled, \n with LE=0 the learni"
"ng is disabled.\n</p>\n<p>\n The first output is the learned function ys(x)."
" <br>\n The second output is the states matrix reshaped columnwise.\n Note "
"that the network has in total No*(Nmax*(Nmax+Ni+5)+2) states,\n where Nmax i"
"s the maximum number of neurons per output.\n</p>\n<p>\n The first parameter"
" in the mask is a vector containing Ni and No, \n namely the dimensions (num"
"ber of elements) of x and y.\n</p>\n<p>\n The second parameter is a vector c"
"ontaining:<br>\n 1) Nmax : the maximum number of active neurons for a single"
" output. \n The total maximum number of active neurons in the whole netw"
"ork \n is this value multplied by the number of outputs, that is Nmax*No"
". <br>\n 2) Overlapping Factor (when a new neuron is activated, the sigma (w"
"idth) of \n this new neuron, is the sum of the distances from the two ne"
"arest \n neurons multiplied by the overlapping factor). <br>\n 3) Resou"
"rce Threshold, a new neuron is added only if the \n resource is greater "
"than than this threshold. <br>\n 4) Lambda, is the number of steps between i"
"nsertion,\n a new neuron is activated only if the number of steps \n "
" from the last activation is greater than lambda. <br>\n 5) Alpha, is the c"
"onnection weight decay constant\n for the neighborhood of the best match"
"ing unit.<br>\n 6) Theta is the connection deletion threshold, that is when "
"\n the weight falls below this threshold, it is set to 0, \n so the"
" neuron is deactivated.\n</p>\n<p>\n The third parameters is 5 elements vect"
"or containing : <br>\n 1) The two Kohonen update coefficients for the best "
"\n matching unit and its neighborhood <br>\n 2) the three learning rates fo"
"r all the neurons states:\n weights, sigmas, and centers.\n</p>\n<p>\n The "
"initial condition must be a vector of size \n No*(Nmax*(Nmax+Ni+5)+2). If it"
"s norm is zero, then an \n appropriate initial condition (two neurons near z"
"ero) is chosen.\n</p>\n<p>\n STATE VECTOR MEANING: <br>\n Each output h is "
"related to a contiguous vector of Nmax*(Nmax+Ni+5)+2 \n states that are orga"
"nized as follows: <br>\n ofY + 1 counter : counts sampling times since last"
" neuron activation. <br>\n ofY + 1 + [1..Nmax*Nmax] : interlayer connection "
"matrix. <br>\n ofY + 1 + Nmax*Nmax + [1..Nmax*Ni] : neurons centers <br>\n "
"ofY + 1 + Nmax*(Nmax+Ni) + [1..Nmax]: neurons weigths <br>\n ofY + 1 + Nmax*"
"(Nmax+Ni+1) + [1..Nmax] : neurons sigmas (widths) <br>\n ofY + 1 + Nmax*(Nma"
"x+Ni+2) + [1..Nmax] : neurons cumulative resource <br>\n ofY + 1 + Nmax*(Nma"
"x+Ni+3) + [1..Nmax] : neurons resource<br>\n ofY + 1 + Nmax*(Nmax+Ni+4) + [1"
"..Nmax] : number of times that each neuron has been bmu <br>\n ofY + 1 + Nma"
"x*(Nmax+Ni+5) + 1 : number of active neurons <br>\n where ofY=(h-1)*(Nmax*(N"
"max+Ni+5)+2) is the offset related to the h-th output.<br>\n It is important"
" to note that the states related to a certain output\n are independent from "
"the states related to a different output.\n</p>\n<p>\n BRIEF EXPLANATION OF "
"THE ALGORITHM: <br>\n This Neural Network is essentially an RBF Neural Netwo"
"rk with an \n additional lateral connection structure between the neural uni"
"ts \n of the hidden layer. This structure is used in an attempt to mirror \n"
" the topology of the input manifold. <br>\n As in the RAN, the learning alg"
"orithm, in order to decrease the error, \n changes weights, positions and wi"
"dths of the basis functions.\n The estimation error e(k) is accumulated loca"
"lly to each neuron and \n used to determine where (and if) to activate a new"
" neuron. <br> \n DSC stands for Dynamic Cell Structure.<br>\n<br>\n OUTPUT "
"EQUATION: <br>\n At any given time t, if x(t) is the input vector, then the "
"h-th output \n of the neural network is : <br>\n ys(h,t)=W(h,t)*g(x(t),S(h,"
"t),C(h,t)) <br>\n where W(h,t) is the output weight matrix related to the h-"
"th output, \n g is the vector of radial basis functions of the input x(t), a"
"nd\n finally S(h,t) and C(h,t) are vectors of widths and centers \n (relati"
"ve to the h-th output).<br>\n<br>\n STATE EQUATION (Learning Algorithm): <br"
">\n Being e(h,t)=y(h,t)-ys(h,t) the h-th element of the error vector, \n at"
" a time t, and x(t) the input vector at the same time, we indicate \n with b"
"mu the nearest unit (among those related to the h-th output) \n to the curre"
"nt position of the input, and with sec is the nearest \n unit after the bmu."
" The neighborhood of the bmu is defined as the set of \n all the units that "
"are connected to the bmu by the interlayer connection \n matrix CN, that is "
"all the units i such that CN(h,t,bmu,i) > 0. <br>\n Firstly, the connection "
"matrix updated, by setting to 1 the strength of \n the connection between bm"
"u and sec, ( that is CN(h,t+1,bmu,sec)=1 and \n CN(h,t+1,sec,bmu)=1 ), and b"
"y multiplying by a value alpha < 1 the \n strength of all the other connecti"
"ons ( that is for every i,j <> bmu,sec \n CN(h,t+1,i,j)=CN(h,t,i,j)*alpha )."
" Also, all the connections whose \n strength is less than a threshold theta "
"are deleted (i.e. their strength \n is set to 0). This kind of updating is a"
"lso called \"Hebbian Learning\". <br>\n The next step in the network adaptat"
"ion algorithm consist in moving the \n positions of the BMU and its neighbor"
"hood toward the current input x(t), \n following a so called \"Kohonen\" rul"
"e. Specifically, if C(h,t,i) is\n the position of the neuron i, related to t"
"he output h, at time t,\n then for each neuron i belonging to the neighborho"
"od of the bmu \n we have C(h,t+1,i)=epsilon(i)*(x(t)-C(h,t,i)). <br>\n Each"
" neuron i is associated with a value called resource, R(h,t,i) and \n at thi"
"s point in the algorithm, the resource of the bmu is updated, \n specificall"
"y, the resource of the bmu is set to the error e(h,t) \n divided by the numb"
"er of times that the unit has been selected as bmu. <br>\n If the mean value"
" of the resource of the whole network is greater than \n a certain threshold"
" RsThr, and if the last neuron activation was more than \n lambda steps ago,"
" then a new neuron is activated. <br>\n The new neuron n is placed between t"
"he position of the unit with highest \n resource w and the position of the u"
"nit with highest resource within the \n neighborhood of w, excluding w itsel"
"f, let us indicate it with v. \n In detail, C(h,t+1,n)=C(h,t,w)+b*(C(h,t,v)-"
"C(h,t,w)), where \n b=R(h,t,w)/(R(h,t,w)+R(h,t,v)). <br>\n The interlayer c"
"onnections from w to n and from n to v are set to 1, \n the original connect"
"ion between w and v is set to 0. Both resource and \n weight of the new neur"
"on are computed by interpolating the resource and \n weight of the two neuro"
"ns w and v: <br> \n R(h,t+1,n)=R(h,t,w)+b*(R(h,t,v)-R(h,t,w)) <br>\n W(h,t"
"+1,n)=W(h,t,w)+b*(W(h,t,v)-W(h,t,w)) <br>\n The width of the basis function "
"of n is set to \n S(h,t,n)=overlap*(C(h,t,v)-C(h,t,w)), where overlap is the"
" so called\n overlapping factor. <br>\n Finally, as a last step of the adap"
"tation algorithm, the vector X(t)\n containing all the neural network weight"
"s and widths,\n is updated according to the gradient rule: <br>\n X(t+T)=X("
"t)-eta*(dys/dX)*e(t) <br>\n where eta is the learning rate, dys/dX is a jaco"
"bian matrix,\n and T is the sampling time.\n</p>\n<p>\n The final mask para"
"meter is the sampling time of the block, T.\n</p>\n<p>\n This block calls th"
"e mex file obtained by compiling the s-function\n dcsg2.c, therefore, to use"
" the block, you should have the resulting \n mex file (on windows platform t"
"he file is dcsg2.dll) in the matlab path.<br>\n For further reference see so"
"me papers on DSC Networks.<br>\n</p>\n<p>\n Giampiero Campa, June 4 2003\n</"
"p>"
MaskPromptString "[Ni No]|[Nmax Overlap RsThr Lambda Alpha Theta]"
"|[Epsb Epsn etaW etaS etaC]|Initial Condition, size = (Nmax*(Nmax+Ni+5)+2)*No"
"|Sample Time"
MaskStyleString "edit,edit,edit,edit,edit"
MaskTunableValueString "on,on,on,on,on"
MaskCallbackString "||||"
MaskEnableString "on,on,on,on,on"
MaskVisibilityString "on,on,on,on,on"
MaskVariables "Dim=@1;norlat=@2;eta=@3;S=@4;T=@5;"
MaskInitialization "if prod(size(Dim))==1,Dim=[Dim 1]; end"
MaskIconFrame on
MaskIconOpaque on
MaskIconRotate "none"
MaskIconUnits "autoscale"
MaskValueString "[4 1]|[50 0.2 0.01 600 0.99"
" 0.005]|[0.03 0.003 0.1 0.001 0 ]|zeros(1*(50*(50+4+5)+2),1)|0.05"
Port {
PortNumber 1
Name "nyn"
TestPoint off
RTWStorageClass "Auto"
}
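# --- Illustrative example (not part of the model file) ----------------------
# A hypothetical MATLAB sketch that unpacks the second output (the state
# vector X) according to the "STATE VECTOR MEANING" offsets in the mask help;
# the reshape orientations are assumptions ("reshaped columnwise"), and every
# name here is illustrative rather than taken from dcsg2.c.
#
#   function s = dcs_unpack(X, h, Nmax, Ni)
#   ofY = (h-1)*(Nmax*(Nmax+Ni+5)+2);  % offset of the h-th output's states
#   s.counter = X(ofY+1);                                     % steps since last insertion
#   s.CN = reshape(X(ofY+1+(1:Nmax*Nmax)), Nmax, Nmax);       % connection matrix
#   s.C  = reshape(X(ofY+1+Nmax*Nmax+(1:Nmax*Ni)), Nmax, Ni); % centers
#   s.W  = X(ofY+1+Nmax*(Nmax+Ni)+(1:Nmax));                  % weights
#   s.S  = X(ofY+1+Nmax*(Nmax+Ni+1)+(1:Nmax));                % sigmas (widths)
#   s.Rc = X(ofY+1+Nmax*(Nmax+Ni+2)+(1:Nmax));                % cumulative resource
#   s.R  = X(ofY+1+Nmax*(Nmax+Ni+3)+(1:Nmax));                % resource
#   s.nBmu = X(ofY+1+Nmax*(Nmax+Ni+4)+(1:Nmax));              % times each unit was bmu
#   s.nActive = X(ofY+1+Nmax*(Nmax+Ni+5)+1);                  % active neuron count
#   end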
System {
Name "DCS"
Location [369, 301, 705, 474]
Open off
ModelBrowserVisibility off
ModelBrowserWidth 200
ScreenColor "automatic"
PaperOrientation "landscape"
PaperPositionMode "auto"
PaperType "usletter"
PaperUnits "inches"
ZoomFactor "100"
AutoZoom on
Block {
BlockType Inport
Name "x"
Position [30, 43, 60, 57]
Port "1"
PortWidth "-1"
SampleTime "-1"
DataType "auto"
SignalType "auto"
Interpolate on
}
Block {
BlockType Inport
Name "e"
Position [30, 78, 60, 92]
Port "2"
PortWidth "-1"
SampleTime "-1"
DataType "auto"
SignalType "auto"
Interpolate on
}
Block {
BlockType Inport
Name "LE"
Position [30, 113, 60, 127]
Port "3"
PortWidth "-1"
SampleTime "-1"
DataType "auto"
SignalType "auto"
Interpolate on
}
Block {
BlockType "S-Function"
Name "S-Function"
Ports [3, 2, 0, 0, 0]
Position [120, 61, 185, 109]
FunctionName "dcsg2"
Parameters "Dim,norlat,eta,S,T"
PortCounts "[]"
SFunctionModules "''"
MaskIconFrame on
MaskIconOpaque on
MaskIconRotate "none"
MaskIconUnits "autoscale"
}
Block {
BlockType Outport
Name "ys"
Position [215, 68, 245, 82]
NamePlacement "alternate"
Port "1"
OutputWhenDisabled "held"
InitialOutput "[]"
}
Block {
BlockType Outport
Name "X"
Position [215, 93, 245, 107]
Port "2"
OutputWhenDisabled "held"
InitialOutput "[]"
}
Line {
SrcBlock "S-Function"
SrcPort 1
DstBlock "ys"
DstPort 1
}
Line {
SrcBlock "S-Function"
SrcPort 2
DstBlock "X"
DstPort 1
}
Line {
SrcBlock "x"
SrcPort 1
Points [40, 0]
DstBlock "S-Function"
DstPort 1
}
Line {
SrcBlock "e"
SrcPort 1
DstBlock "S-Function"
DstPort 2
}
Line {
SrcBlock "LE"
SrcPort 1
Points [40, 0]
DstBlock "S-Function"
DstPort 3
}
}
}
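# --- Illustrative example (not part of the model file) ----------------------
# A hedged MATLAB sketch of one learning step as described in the mask help of
# the "DCS" block above: Hebbian connection update, Kohonen move, resource
# update, and neuron insertion. This is a reading of the help text, not the
# dcsg2.c implementation; epsb/epsn/alpha/theta/RsThr/lambda/overlap are the
# mask parameters, e_h is the h-th error component, the remaining quantities
# are the per-output states unpacked as in the dcs_unpack sketch earlier, and
# the width formula assumes the Euclidean distance between the parent units.
#
#   [~, idx] = sort(sum((C - repmat(x(:)', size(C,1), 1)).^2, 2));
#   bmu = idx(1); sec = idx(2);            % best and second-best matching units
#   CN = alpha * CN;                       % decay all connection strengths
#   CN(bmu,sec) = 1; CN(sec,bmu) = 1;      % (re)set the bmu-sec connection to 1
#   CN(CN < theta) = 0;                    % delete connections below theta
#   nbr = find(CN(bmu,:) > 0);             % neighborhood of the bmu
#   C(bmu,:) = C(bmu,:) + epsb*(x(:)' - C(bmu,:));   % Kohonen rule, bmu
#   for i = nbr
#     C(i,:) = C(i,:) + epsn*(x(:)' - C(i,:));       % Kohonen rule, neighbors
#   end
#   nBmu(bmu) = nBmu(bmu) + 1;             % selection count of the bmu
#   R(bmu) = e_h / nBmu(bmu);              % resource update of the bmu
#   if mean(R) > RsThr && counter > lambda % insertion condition from the help
#     [~, w] = max(R);                     % unit with the highest resource
#     nw = find(CN(w,:) > 0);              % neighborhood of w
#     [~, k] = max(R(nw)); v = nw(k);      % its highest-resource neighbor
#     b = R(w) / (R(w) + R(v));            % interpolation coefficient
#     n = nActive + 1;                     % slot for the new unit
#     C(n,:) = C(w,:) + b*(C(v,:) - C(w,:));    % place n between w and v
#     R(n)   = R(w) + b*(R(v) - R(w));          % interpolated resource
#     W(n)   = W(w) + b*(W(v) - W(w));          % interpolated weight
#     S(n)   = overlap * norm(C(v,:) - C(w,:)); % width via overlapping factor
#     CN(w,n) = 1; CN(n,v) = 1; CN(w,v) = 0;    % rewire w-n-v, cut w-v
#     nActive = nActive + 1; counter = 0;
#   end
#   % Weights, sigmas and centers are then refined by the gradient rule from
#   % the help, X(t+T) = X(t) - eta*(dys/dX)*e(t), with rates etaW/etaS/etaC.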
Block {
BlockType SubSystem
Name "DCS (Matlab)"
Ports [3, 2, 0, 0, 0]
Position [145, 369, 215, 451]
BackgroundColor "darkGreen"
ShowPortLabels on
MaskType "DCS NN"
MaskDescription " Self Adaptive Discrete Time Gaussian DCS Neura"
"l Network"
MaskHelp "<p>\n This Neural Network is used to adaptivel"
"y approximate\n a vector field y=f(x), where x(t) is a vector of size Ni, \n"
" and y(t) is a vector of size No.\n</p>\n<p>\n The first input is x (normal"
"ly but not necessarily \n scaled between -1 and 1).<br>\n The second input "
"is the error signal (i.e. e=y-ys).<br>\n The third input is the learning ena"
"ble:\n with LE=1 the learning is enabled, \n with LE=0 the learni"
"ng is disabled.\n</p>\n<p>\n The first output is the learned function ys(x)."
" <br>\n The second output is the states matrix reshaped columnwise.\n Note "
"that the network has in total No*(Nmax*(Nmax+Ni+5)+2) states,\n where Nmax i"
"s the maximum number of neurons per output.\n</p>\n<p>\n The first parameter"
" in the mask is a vector containing Ni and No, \n namely the dimensions (num"
"ber of elements) of x and y.\n</p>\n<p>\n The second parameter is a vector c"
"ontaining:<br>\n 1) Nmax : the maximum number of active neurons for a single"
" output. \n The total maximum number of active neurons in the whole netw"
"ork \n is this value multplied by the number of outputs, that is Nmax*No"
". <br>\n 2) Overlapping Factor (when a new neuron is activated, the sigma (w"
"idth) of \n this new neuron, is the sum of the distances from the two ne"
"arest \n neurons multiplied by the overlapping factor). <br>\n 3) Resou"
"rce Threshold, a new neuron is added only if the \n resource is greater "
"than than this threshold. <br>\n 4) Lambda, is the number of steps between i"
"nsertion,\n a new neuron is activated only if the number of steps \n "
" from the last activation is greater than lambda. <br>\n 5) Alpha, is the c"
"onnection weight decay constant\n for the neighborhood of the best match"
"ing unit.<br>\n 6) Theta is the connection deletion threshold, that is when "
"\n the weight falls below this threshold, it is set to 0, \n so the"
" neuron is deactivated.\n</p>\n<p>\n The third parameters is a 5 elements ve"
"ctor containing : <br>\n 1) The two Kohonen update coefficients for the best"
" \n matching unit and its neighborhood <br>\n 2) The three learning rates f"
"or all the neurons states:\n weights, sigmas, and centers.\n</p>\n<p>\n The"
" initial condition must be a vector of size \n No*(Nmax*(Nmax+Ni+5)+2). If i"
"ts norm is zero, then an \n appropriate initial condition (two neurons near "
"zero) is chosen.\n</p>\n<p>\n STATE VECTOR MEANING: <br>\n Each output h is"
" related to a contiguous vector of Nmax*(Nmax+Ni+5)+2 \n states that are org"
"anized as follows: <br>\n ofY + 1 counter : counts sampling times since las"
"t neuron activation. <br>\n ofY + 1 + [1..Nmax*Nmax] : interlayer connection"
" matrix. <br>\n ofY + 1 + Nmax*Nmax + [1..Nmax*Ni] : neurons centers <br>\n "
" ofY + 1 + Nmax*(Nmax+Ni) + [1..Nmax]: neurons weigths <br>\n ofY + 1 + Nmax"
"*(Nmax+Ni+1) + [1..Nmax] : neurons sigmas (widths) <br>\n ofY + 1 + Nmax*(Nm"
"ax+Ni+2) + [1..Nmax] : neurons cumulative resource <br>\n ofY + 1 + Nmax*(Nm"
"ax+Ni+3) + [1..Nmax] : neurons resource<br>\n ofY + 1 + Nmax*(Nmax+Ni+4) + ["
"1..Nmax] : number of times that each neuron has been bmu <br>\n ofY + 1 + Nm"
"ax*(Nmax+Ni+5) + 1 : number of active neurons <br>\n where ofY=(h-1)*(Nmax*("
"Nmax+Ni+5)+2) is the offset related to the h-th output.<br>\n It is importan"
"t to note that the states related to a certain output\n are independent from"
" the states related to a different output.\n</p>\n<p>\n BRIEF EXPLANATION OF"
" THE ALGORITHM: <br>\n This Neural Network is essentially an RBF Neural Netw"
"ork with an \n additional lateral connection structure between the neural un"