📄 fit_rgnn.m
function [fitness, x] = fit_rgnn(x,fcn_opts)
%[fitness, x] = fit_rgnn(x,fcn_opts)
%this calculates the fitness for a system identification problem, a
%recurrent generalized neural network. The fitness is the negative RMS
%error, or, in the case of nan, the number of finite values before nan.
%fcn_opts.U is the input matrix (N_in x N_points)
%fcn_opts.Y is the target output vector (1 x N_points)
%fcn_opts.activation is the activation vector (N_neurons x 1)

% Copyright Travis Wiens 2008
%
% This program is free software: you can redistribute it and/or modify
% it under the terms of the GNU General Public License as published by
% the Free Software Foundation, either version 3 of the License, or
% (at your option) any later version.
%
% This program is distributed in the hope that it will be useful,
% but WITHOUT ANY WARRANTY; without even the implied warranty of
% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
% GNU General Public License for more details.
%
% You should have received a copy of the GNU General Public License
% along with this program. If not, see <http://www.gnu.org/licenses/>.
%
% If you would like to request a commercial (or other) license, please
% feel free to contact travis.mlfx@nutaksas.com

use_mex=false;%set to true to use precompiled mex functions

N_in=size(fcn_opts.U,1);%number of inputs

W=params_to_W(x,N_in);%convert row vector of params to square W matrix

x0=zeros(size(W,1),1);%initial states

if use_mex
    [Y_hat]=rgnn_sim_mex(fcn_opts.U,W,fcn_opts.activation,x0);%perform NN calculation
else
    [Y_hat]=rgnn_sim(fcn_opts.U,W,fcn_opts.activation,x0);
end

k_skip=100;%skip first few data points (initial transient)

E=sqrt(mean((fcn_opts.Y(k_skip:end)-Y_hat(k_skip:end)).^2));%RMS error

fitness=-E;%negate the error so that maximizing fitness minimizes the error

if isnan(fitness)
    %if the error is NaN, penalize by the number of points after the last
    %finite output value (more finite values gives a higher fitness)
    fitness=-(numel(Y_hat)-max(find(~isnan(Y_hat))));
end
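
For context, a minimal usage sketch follows (not part of fit_rgnn.m). It assumes the companion functions params_to_W and rgnn_sim from the same package are on the MATLAB path; N_neurons, N_params, and the coding of the activation vector are placeholders that must be chosen consistently with whatever params_to_W and rgnn_sim expect, and the input/output data here are synthetic.

%usage sketch: evaluate one random candidate parameter vector
%(assumptions: params_to_W and rgnn_sim are on the path; N_neurons,
%N_params and the activation coding are placeholders)
N_in=2;                                %number of network inputs
N_points=1000;                         %number of data points
N_neurons=8;                           %placeholder: number of neurons
N_params=120;                          %placeholder: parameter-vector length for params_to_W
fcn_opts.U=randn(N_in,N_points);       %synthetic input matrix (N_in x N_points)
fcn_opts.Y=randn(1,N_points);          %synthetic target output vector (1 x N_points)
fcn_opts.activation=ones(N_neurons,1); %activation vector (N_neurons x 1); coding assumed
x=0.1*randn(1,N_params);               %random candidate parameter (row) vector
[fitness,x]=fit_rgnn(x,fcn_opts)       %fitness is the negative RMS error

In an optimization setting, an outer routine (e.g. a population-based optimizer) would call fit_rgnn repeatedly and keep the candidate x with the largest fitness.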