SrcPort 1
Points [135, 0]
Branch {
DstBlock "vrmult"
DstPort 1
}
Branch {
Points [0, 45]
DstBlock "vrmult1"
DstPort 1
}
}
Line {
SrcBlock "xn"
SrcPort 1
DstBlock "Mux8"
DstPort 2
}
Line {
SrcBlock "Constant"
SrcPort 1
DstBlock "Mux8"
DstPort 1
}
}
}
Block {
BlockType SubSystem
Name "GDCS"
Ports [3, 2, 0, 0, 0]
Position [190, 180, 260, 260]
ShowPortLabels on
MaskType "GDCS NN"
MaskDescription " Self Adaptive Discrete Time Generalized DCS Ne"
"ural Network"
MaskHelp "<p>\n This Neural Network is used to adaptivel"
"y approximate\n a vector field y=f(x), where x(t) is a vector of size Ni, \n"
" and y(t) is a vector of size No.\n</p>\n<p>\n The first input is x (normal"
"ly but not necessarily \n scaled between -1 and 1).<br>\n The second input "
"is the error signal (i.e. e=y-ys).<br>\n The third input is the learning ena"
"ble:\n with LE=1 the learning is enabled, \n with LE=0 the learni"
"ng is disabled.\n</p>\n<p>\n The first output is the learned function ys(x)."
" <br>\n The second output is the states matrix reshaped columnwise.\n Note "
"that the network has in total No*(Nmax*(Nmax+Ni+5)+2) states,\n where Nmax i"
"s the maximum number of neurons per output.\n</p>\n<p>\n The first parameter"
" in the mask is a vector containing Ni and No, \n namely the dimensions (num"
"ber of elements) of x and y.\n</p>\n<p>\n The second parameter is a vector c"
"ontaining:<br>\n 1) Nmax : the maximum number of active neurons for a single"
" output. \n The total maximum number of active neurons in the whole netw"
"ork \n is this value multplied by the number of outputs, that is Nmax*No"
". <br>\n 2) Overlapping Factor (when a new neuron is activated, the sigma (w"
"idth) of \n this new neuron, is the sum of the distances from the two ne"
"arest \n neurons multiplied by the overlapping factor). <br>\n 3) Resou"
"rce Threshold, a new neuron is added only if the \n resource is greater "
"than than this threshold. <br>\n 4) Lambda, is the number of steps between i"
"nsertion,\n a new neuron is activated only if the number of steps \n "
" from the last activation is greater than lambda. <br>\n 5) Alpha, is the c"
"onnection weight decay constant\n for the neighborhood of the best match"
"ing unit.<br>\n 6) Theta is the connection deletion threshold, that is when "
"\n the weight falls below this threshold, it is set to 0, \n so the"
" neuron is deactivated.\n</p>\n<p>\n The third parameters is 5 elements vect"
"or containing : <br>\n 1) The two Kohonen update coefficients for the best "
"\n matching unit and its neighborhood <br>\n 2) the three learning rates fo"
"r all the neurons states:\n weights, sigmas, and centers.\n</p>\n<p>\n The "
"fifth parameter contains 3 limits for each element of the weight, \n sigma "
"and center matrices. Basically, a limiting mechanism is implemented\n withi"
"n the code so that the norms of each element of the weight, sigma \n and ce"
"nter vectors are confined within the upper bound entered as parameter.\n</"
"p>\n<p>\n The fifth parameter decides wether the activation function is \n "
" piecewise conical or gaussian\n</p>\n<p>\n The initial condition must be a "
"vector of size \n No*(Nmax*(Nmax+Ni+5)+2). If its norm is zero, then an \n "
"appropriate initial condition (two neurons near zero) is chosen.\n</p>\n<p>\n"
" STATE VECTOR MEANING: <br>\n Each output h is related to a contiguous vect"
"or of Nmax*(Nmax+Ni+5)+2 \n states that are organized as follows: <br>\n of"
"Y + 1 counter : counts sampling times since last neuron activation. <br>\n "
"ofY + 1 + [1..Nmax*Nmax] : interlayer connection matrix. <br>\n ofY + 1 + Nm"
"ax*Nmax + [1..Nmax*Ni] : neurons centers <br>\n ofY + 1 + Nmax*(Nmax+Ni) + ["
"1..Nmax]: neurons weigths <br>\n ofY + 1 + Nmax*(Nmax+Ni+1) + [1..Nmax] : ne"
"urons sigmas (widths) <br>\n ofY + 1 + Nmax*(Nmax+Ni+2) + [1..Nmax] : neuron"
"s cumulative resource <br>\n ofY + 1 + Nmax*(Nmax+Ni+3) + [1..Nmax] : neuron"
"s resource<br>\n ofY + 1 + Nmax*(Nmax+Ni+4) + [1..Nmax] : number of times th"
"at each neuron has been bmu <br>\n ofY + 1 + Nmax*(Nmax+Ni+5) + 1 : number o"
"f active neurons <br>\n where ofY=(h-1)*(Nmax*(Nmax+Ni+5)+2) is the offset r"
"elated to the h-th output.<br>\n It is important to note that the states rel"
"ated to a certain output\n are independent from the states related to a diff"
"erent output.\n Also note that the states related to the sigmas are only use"
"d for gaussian activation functions.\n</p>\n<p>\n BRIEF EXPLANATION OF THE A"
"LGORITHM: <br>\n This Neural Network is essentially an RBF Neural Network wi"
"th an \n additional lateral connection structure between the neural units \n"
" of the hidden layer. This structure is used in an attempt to mirror \n the"
" topology of the input manifold. <br>\n As in the RAN, the learning algorith"
"m, in order to decrease the error, \n changes weights, positions and widths "
"of the basis functions.\n The estimation error e(k) is accumulated locally t"
"o each neuron and \n used to determine where (and if) to activate a new neur"
"on. <br> \n DSC stands for Dynamic Cell Structure.<br>\n<br>\n OUTPUT EQUAT"
"ION: <br>\nAt any given time t, if x(t) is the input vector, we indicate with"
" bmu the nearest unit (among those related to the h-th output) to the current"
" position of the input, and with sec is the nearest unit after the bmu. The n"
"eighborhood of the bmu is defined as the set of all the units that are connec"
"ted to the bmu \nby the interlayer connection matrix CN, that is all the unit"
"s i such that CN(h,t,bmu,i) > 0. \nBeing W(h,t,bmu) and W(h,t,sec) the weight"
"s associated with the bmu and the sec units, and dist(bmu,x) and dist(sec,x)"
" their distances from the current input point x(t), if the PIECEWISE CONICAL "
"activation functions are used, then the h-th output of the neural network is "
":\nys(h,t)=W(h,t,bmu) if dist(sec,x)>dist(sec,bmu) \nys(h,t)=W(h,t,bmu)+b*(W("
"h,t,sec)-W(h,t,bmu)) otherwise \nwhere b=dist(bmu,x)/(dist(bmu,x)+dist(sec,x)"
") \n<br>\n If the GAUSSIAN activation functions are used t, then the h-th ou"
"tput \n of the neural network is simply: <br>\n ys(h,t)=W(h,t)*g(x(t),S(h,t"
"),C(h,t)) <br>\n where W(h,t) is the output weight matrix related to the h-t"
"h output, \n g is the vector of radial basis functions of the input x(t), an"
"d\n finally S(h,t) and C(h,t) are vectors of widths and centers \n (relativ"
"e to the h-th output).<br>\n<br>\n STATE EQUATION (Learning Algorithm): <br>"
"\n Being e(h,t)=y(h,t)-ys(h,t) the h-th element of the error vector, \n at "
"a time t, and x(t) the input vector at the same time, we indicate \n with bm"
"u the nearest unit (among those related to the h-th output) \n to the curren"
"t position of the input, and with sec is the nearest \n unit after the bmu. "
"The neighborhood of the bmu is defined as the set of \n all the units that a"
"re connected to the bmu by the interlayer connection \n matrix CN, that is a"
"ll the units i such that CN(h,t,bmu,i) > 0. <br>\n Firstly, the connection m"
"atrix updated, by setting to 1 the strength of \n the connection between bmu"
" and sec, ( that is CN(h,t+1,bmu,sec)=1 and \n CN(h,t+1,sec,bmu)=1 ), and by"
" multiplying by a value alpha < 1 the \n strength of all the other connectio"
"ns ( that is for every i,j <> bmu,sec \n CN(h,t+1,i,j)=CN(h,t,i,j)*alpha ). "
"Also, all the connections whose \n strength is less than a threshold theta a"
"re deleted (i.e. their strength \n is set to 0). This kind of updating is al"
"so called \"Hebbian Learning\". <br>\n The next step in the network adaptati"
"on algorithm consist in moving the \n positions of the BMU and its neighborh"
"ood toward the current input x(t), \n following a so called \"Kohonen\" rule"
". Specifically, if C(h,t,i) is\n the position of the neuron i, related to th"
"e output h, at time t,\n then for each neuron i belonging to the neighborhoo"
"d of the bmu \n we have C(h,t+1,i)=epsilon(i)*(x(t)-C(h,t,i)). <br>\n Each "
"neuron i is associated with a value called resource, R(h,t,i) and \n at this"
" point in the algorithm, the resource of the bmu is updated, \n specifically"
", the resource of the bmu is set to the error e(h,t) \n divided by the numbe"
"r of times that the unit has been selected as bmu. <br>\n If the mean value "
"of the resource of the whole network is greater than \n a certain threshold "
"RsThr, and if the last neuron activation was more than \n lambda steps ago, "
"then a new neuron is activated. <br>\n The new neuron n is placed between th"
"e position of the unit with highest \n resource w and the position of the un"
"it with highest resource within the \n neighborhood of w, excluding w itself"
", let us indicate it with v. \n In detail, C(h,t+1,n)=C(h,t,w)+b*(C(h,t,v)-C"
"(h,t,w)), where \n b=R(h,t,w)/(R(h,t,w)+R(h,t,v)). <br>\n The interlayer co"
"nnections from w to n and from n to v are set to 1, \n the original connecti"
"on between w and v is set to 0. Both resource and \n weight of the new neuro"
"n are computed by interpolating the resource and \n weight of the two neuron"
"s w and v: <br> \n R(h,t+1,n)=R(h,t,w)+b*(R(h,t,v)-R(h,t,w)) <br>\n W(h,t+"
"1,n)=W(h,t,w)+b*(W(h,t,v)-W(h,t,w)) <br>\n The width of the basis function o"
"f n is set to \n S(h,t,n)=overlap*(C(h,t,v)-C(h,t,w)), where overlap is the "
"so called\n overlapping factor. <br>\n Finally, as a last step of the adapt"
"ation algorithm, the vector X(t)\n containing all the neural network weights"
" and widths,\n is updated according to the gradient rule: <br>\n X(t+T)=X(t"
")+eta*(dys/dX)*e(t) <br>\n where eta is the learning rate, dys/dX is a jacob"
"ian matrix,\n and T is the sampling time.\n</p>\n<p>\n The final mask param"
"eter is the sampling time of the block, T.\n</p>\n<p>\n This block calls the"
" mex file obtained by compiling the s-function\n dcsgl.c, therefore, to use "
"the block, you should have the resulting \n mex file (on windows platform th"
"e file is dcsgl.dll) in the matlab path.<br>\n For further reference see som"
"e papers on DSC Networks.<br>\n</p>\n<p>\n Giampiero Campa, January 2007\n</"
"p>"
MaskPromptString "[Ni No]|[Nmax Overlap RsThr Lambda Alpha Theta]"
"|[Epsb Epsn etaW etaS etaC]|[limW limS limC]|Piecewise Conical Activation Fun"
"ction (otherwise Gaussian)|Initial Condition, size = (Nmax*(Nmax+Ni+5)+2)*No|"
"Sample Time"
MaskStyleString "edit,edit,edit,edit,checkbox,edit,edit"
MaskTunableValueString "on,on,on,on,on,on,on"
MaskCallbackString "||||||"
MaskEnableString "on,on,on,on,on,on,on"
MaskVisibilityString "on,on,on,on,on,on,on"
MaskVariables "Dim=@1;norlat=@2;eta=@3;lim=@4;lga=@5;S=@6;T=@7"
";"
MaskInitialization "if prod(size(Dim))==1,Dim=[Dim 1]; end"
MaskIconFrame on
MaskIconOpaque on
MaskIconRotate "none"
MaskIconUnits "autoscale"
MaskValueString "[4 1]|[50 0.2 0.01 600 0.99"
" 0.005]|[0.03 0.003 0.1 0.001 0 ]|[1 1 1]*1e6|on|zeros(1*(50*(50+4+5)"
"+2),1)|0.05"
Port {
PortNumber 1
Name "nyn"
TestPoint off
RTWStorageClass "Auto"
}
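# Editor's note (not part of the model): the neuron-insertion step from the
# MaskHelp as a MATLAB sketch. w is the highest-resource unit, v the
# highest-resource unit in the neighborhood of w, n the new neuron; the use of
# norm() for the width is an assumption (the help writes the difference directly).
#   b = R(w)/(R(w) + R(v));
#   C(:,n) = C(:,w) + b*(C(:,v) - C(:,w)); % place n between w and v
#   R(n) = R(w) + b*(R(v) - R(w));         % interpolate resource
#   W(n) = W(w) + b*(W(v) - W(w));         % interpolate weight
#   S(n) = overlap*norm(C(:,v) - C(:,w));  % width from the overlapping factor
#   CN(w,n) = 1; CN(n,v) = 1; CN(w,v) = 0; % rewire w-n and n-v; drop w-v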
System {
Name "GDCS"
Location [385, 293, 721, 466]
Open off
ModelBrowserVisibility off
ModelBrowserWidth 200
ScreenColor "automatic"
PaperOrientation "landscape"
PaperPositionMode "auto"
PaperType "usletter"
PaperUnits "inches"
ZoomFactor "100"
AutoZoom on
Block {
BlockType Inport
Name "x"
Position [30, 43, 60, 57]
Port "1"
PortWidth "-1"
SampleTime "-1"
DataType "auto"
SignalType "auto"
Interpolate on
}
Block {
BlockType Inport
Name "e"
Position [30, 78, 60, 92]
Port "2"
PortWidth "-1"
SampleTime "-1"
DataType "auto"
SignalType "auto"
Interpolate on
}
Block {
BlockType Inport
Name "LE"
Position [30, 113, 60, 127]
Port "3"
PortWidth "-1"
SampleTime "-1"
DataType "auto"
SignalType "auto"
Interpolate on
}
Block {
BlockType "S-Function"
Name "S-Function"
Ports [3, 2, 0, 0, 0]
Position [115, 61, 180, 109]
FunctionName "dcsgl2"
Parameters "Dim,norlat,eta,lim,real(lga),S,T"
PortCounts "[]"
SFunctionModules "''"
MaskIconFrame on
MaskIconOpaque on
MaskIconRotate "none"
MaskIconUnits "autoscale"
}
Block {
BlockType Outport
Name "ys"
Position [215, 68, 245, 82]
NamePlacement "alternate"
Port "1"
OutputWhenDisabled "held"
InitialOutput "[]"
}
Block {
BlockType Outport
Name "X"
Position [215, 93, 245, 107]
Port "2"
OutputWhenDisabled "held"
InitialOutput "[]"
}
Line {
SrcBlock "S-Function"
SrcPort 1
DstBlock "ys"
DstPort 1
}
Line {
SrcBlock "S-Function"
SrcPort 2
DstBlock "X"
DstPort 1
}
Line {
SrcBlock "x"
SrcPort 1
Points [35, 0]
DstBlock "S-Function"
DstPort 1
}
Line {
SrcBlock "e"
SrcPort 1
DstBlock "S-Function"
DstPort 2
}
Line {
SrcBlock "LE"
SrcPort 1
Points [35, 0]
DstBlock "S-Function"
DstPort 3
}
}
}
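# Editor's note (not part of the model): a MATLAB sketch showing how the state
# vector on the block's second output can be unpacked for output h, following the
# STATE VECTOR MEANING section of the MaskHelp; the reshape orientations are
# assumptions.
#   ofY  = (h-1)*(Nmax*(Nmax+Ni+5)+2);                      % offset of the h-th output
#   cnt  = X(ofY+1);                                        % steps since last insertion
#   CN   = reshape(X(ofY+1+(1:Nmax*Nmax)), Nmax, Nmax);     % interlayer connections
#   C    = reshape(X(ofY+1+Nmax*Nmax+(1:Nmax*Ni)), Nmax, Ni); % neuron centers
#   W    = X(ofY+1+Nmax*(Nmax+Ni)+(1:Nmax));                % neuron weights
#   S    = X(ofY+1+Nmax*(Nmax+Ni+1)+(1:Nmax));              % neuron sigmas (widths)
#   nact = X(ofY+1+Nmax*(Nmax+Ni+5)+1);                     % number of active neurons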
Block {
BlockType SubSystem
Name "GMLP"
Ports [3, 2, 0, 0, 0]
Position [190, 44, 260, 126]
ShowPortLabels on
MaskType "GMLP NN"
MaskDescription " Self Adaptive Discrete Time Generalized MLP N"
"eural Network"
MaskHelp "<p>\n This Neural Network is used to adaptivel"
"y approximate\n a (possibly nonlinear) vector field y=f(x),\n with the in"
"put vector x being a function of time.\n</p>\n<p>\n The first input is x. <b"
"r>\n The second input is the error signal (i.e. e=y-ys). <br>\n The third i"
"nput is the learning enable: <br>\n with LE=1 the learning is enabled, "
"\n with LE=0 the learning is disabled.\n</p>\n<p>\n The first output i"
"s the learned function ys(x).<br>\n The second output is the states matrix r"
"eshaped columnwise.\n</p>\n<p>\n The first parameter in the mask is a vector"
" containing respectively \n the number of inputs Ni, the number of neurons i"
"n the hidden layer Nh,\n and the number of outputs No.<br>\n The second par"
"ameter contains the learning rates for:<br>\n 1) the weights connecting inp"
"ut and hidden layers (V)<br>\n 2) the weights connecting hidden and output "
"layers (W)<br>\n 3) the vector of parameters [Pw Uo Lo To Pv Uh Lh Th]<br>"
"\n The next parameter contains limiters for each element of the 3 items\n "
"in the above list. Basically, a limiting mechanism is implemented\n within "
"the code so that each element of V,W and the vector \n [Pw Uo Lo To Pv Uh "
"Lh Th] is always limited.<br>\n Finally, the fourth parameter is the so cal"
"led momentum.\n</p>\n<p>\n The initial condition could be a vector of size\n"
" 2*(Nh*(Ni+No)+4*(No+Nh)), or a scalar, in the latter\n case, the scalar mu"
"ltiplies the weights of an \n appropriate random initial condition vector.\n"
"</p>\n<p>\n STATE VECTOR MEANING: <br>\n The state is a column vector compo"
"sed by 2 contiguous \n parts having both Nh*(Ni+No)+4*(No+Nh) elements. <br>"
" \n The states in the first part are organized as follows: <br>\n [1..Nh*Ni"
"] : weights connecting the input to the hidden layer. <br>\n Nh*Ni+[1..No*Nh"
"] : weights connecting the hidden layer to the output layer. <br>\n Nh*Ni+No"
"*Nh+[1..No] : hidden layer to output threshold vector (Pv).<br>\n Nh*Ni+No*N"
"h+No+[1..No] : upper limits of the output layer base functions (Uo).<br>\n N"
"h*Ni+No*Nh+2*No+[1..No] : lower limits of the output layer base functions (Lo"
").<br>\n Nh*Ni+No*Nh+3*No+[1..No] : slopes of the output layer base function"
"s (To).<br>\n Nh*Ni+No*Nh+4*No+[1..Nh] : input to hidden layer threshold vec"
"tor (Pw).<br>\n Nh*Ni+No*Nh+4*No+Nh+[1..Nh] : upper limits of the hidden lay"
"er base functions (Uh).<br>\n Nh*Ni+No*Nh+4*No+2*Nh+[1..Nh] : lower limits o"
"f the hidden layer base functions (Lh).<br>\n Nh*Ni+No*Nh+4*No+3*Nh+[1..Nh] "
": slopes of the hidden layer base functions (Th).<br>\n The second half of t"
"he state vector contains the states of an error filter\n that somehow repres"
"ents the past increments of the first half of the state vector.\n See below "
"in the \"state equation\" section for a more detailed explanation of this.\n<"
"/p>\n<p>\n BRIEF EXPLANATION OF THE ALGORITHM: <br>\n This Neural Network i"
"s essentially a 2-layered sigmoidal Neural Network,\n in which the usual sig"
"moidal base function is replaced by a more flexible\n function: <br> f(s) ="
" L + (U-L)/(1+exp(-s/T)) <br>\n where s is the (scalar) input to the functio"
"n,\n L is the lower limit, U the upper limit, and T the slope.<br>\n The fu"
"nction reduces to the usual sigmoid when L=0, U=1 and T=1.<br>\n Two affine "
"transformations (i.e. having the form y=A*x+b) connect\n the input to the hi"
"dden layer and the hidden layer to the output layer. <br>\n The learning alg"
"orithm allows the parameters A,b,L,U,T for each layer to\n change. An extend"
"ed gradient rule, structured according to the well known\n backpropagation a"
"lgorithm, is used to update the parameters. <br>\n The acronym GMLP stands f"
"or Generalized Multi Layer Perceptron, \n and it is used to refer to this ki"
"nd of neural network architectures. \n<br>\n<br>\n OUTPUT EQUATION: <br>\n "
"At any given time t, if x(t) is the input vector, then the i-th \n element o"
"f the output vector AT THE HIDDEN LAYER is:\n z(i,t) = Lh(i,t) + (Uh(i,t)-Lh"
"(i,t))/(1+exp(-( V(i,t)*x(t)+Pv(i,t) )/Th(i,t)))\n where Lh(i,t) is the i-th"
" element of the lower limit vector (for the hidden layer),\n Uh(i,t) is the "
"i-th element of the upper limit vector (for the hidden layer),\n Th(i,t) is "
"the i-th element of the slope vector (for the hidden layer),\n V(i,t) is the"
" i-th row of the (input to hidden layer) weight matrix,\n and Pv(i,t) is the"
" i-th element of the (input to hidden layer) threshold vector. <br>\n The i-"
"th element of the output vector AT THE NETWORK OUTPUT is:\n ys(i,t) = Lo(i,t"
") + (Uo(i,t)-Lo(i,t))/(1+exp(-( W(i,t)*z(t)+Pw(i,t) )/To(i,t)))\n where Lo(i"
",t) is the i-th element of the lower limit vector (for the output layer),\n "
"Uo(i,t) is the i-th element of the upper limit vector (for the output layer),"
"\n To(i,t) is the i-th element of the slope vector (for the output layer),\n"
" W(i,t) is the i-th row of the (hidden layer to output) weight matrix,\n an"
"d Pw(i,t) is the i-th element of the (hidden layer to output) threshold vecto"
"r.\n<br><br>\n STATE EQUATION (Learning Algorithm): <br>\n Being e(t)=y(t)-"
"ys(t) the error vector at time t, the vector X(t)\n containing the first Nh*"
"(Ni+No)+4*(No+Nh) neural network states \n is updated according to an \"exte"
"nded\" gradient rule: <br>\n X(t+T)=X(t)+eta*(dys/dX)*e(t)+Z(t) <br>\n wher"
"e eta is the learning rate, dys/dX is the jacobian matrix,\n T is the sampli"
"ng time, and Z(t) represents an additional \n contribution from a filtered e"
"rror:<br>\n D(t+1)=alpha*D(t)-eta*(dys/dX)*e(t) <br>\n Z(t)=alpha*D(t) <br>"
"\n The filter decay rate, alpha, is called \"momentum\", if alpha=0, \n the"
" update law reduces to the classic gradient rule.<br>\n It can be seen Z(t) "
"somehow represents all the past increments of X(t).<br>\n The final part of "
"the whole state vector is D(t).\n</p>\n<p>\n The final mask parameter is the"
" sampling time of the block, T.\n</p>\n<p>\n This block is implemented in Si"
"mulink, \n to use it you should have the smxl library in your path. \n For "
"further reference see some papers on the backpropagation \n algorithm applie"
"d to multilayer sigmoidal neural networks.<br>\n</p>\n<p>\n Giampiero Campa,"
" 2007\n</p>"
MaskPromptString "[ni nh no]|[etaV etaW etaP]|[Lv Lw Lp] (Weight "
"Limiters)|Momentum (alpha)|Initial Conditions|Sample Time"
MaskStyleString "edit,edit,edit,edit,edit,edit"
MaskTunableValueString "on,on,on,on,on,on"
MaskCallbackString "|||||"
MaskEnableString "on,on,on,on,on,on"
MaskVisibilityString "on,on,on,on,on,on"
MaskVariables "dim=@1;eta=@2;L=@3;alp=@4;ini=@5;T=@6;"
MaskInitialization "ni=dim(1);nh=dim(2);no=dim(3);\nns=nh*ni+no*nh+"
"4*no+4*nh;\netaV=eta(1);\netaW=eta(2);\netaP=eta(3);\n\nif size(ini)==[1 1],"
"\n V0 = ini*reshape( rand(nh,ni)-0.5 ,nh*ni,1 );\n W0 = ini*reshape( rand"
"(no,nh)-0.5 ,no*nh,1 );\n Gm0 = ini*(rand(no,1) - 0.5);\n Uo0= ones(no,"
"1);\n Lo0=-1*ones(no,1);\n To0= ones(no,1);\n Te0 = ini*(rand(nh,1) -"
" 0.5);\n Uh0= ones(nh,1);\n Lh0=-1*ones(nh,1);\n Th0= ones(nh,1);\n"
"\n x0=[V0;W0;Gm0;Uo0;Lo0;To0;Te0;Uh0;Lh0;Th0];\n d0=x0*0;\n\nelseif size(in"
"i)==[2*ns 1],\n x0=ini(1:ns);\n d0=ini(ns+1:2*ns);\nelse\n warning(['The "
"initial condition size must be 1 by 1 or ' mat2str(2*ns) ' by 1']);\nend\n"
MaskIconFrame on
MaskIconOpaque on
MaskIconRotate "none"
MaskIconUnits "autoscale"
MaskValueString "[4 10 1]|[0.003 0.003 0.003]/10|[inf inf inf]|0"
".01|1|0.05"
Port {
PortNumber 1
Name "nyn"
TestPoint off
RTWStorageClass "Auto"
}
System {
Name "GMLP"
Location [48, 84, 998, 676]
Open off
ModelBrowserVisibility off
ModelBrowserWidth 200
ScreenColor "automatic"
PaperOrientation "landscape"
PaperPositionMode "auto"
PaperType "usletter"
PaperUnits "inches"
ZoomFactor "100"
AutoZoom on
Block {
BlockType Inport
Name "x"
Position [65, 413, 95, 427]
Port "1"
PortWidth "-1"
SampleTime "-1"
DataType "auto"
SignalType "auto"
Interpolate on
}