bncrossval.m - Speaker Verification Toolbox (MATLAB)
function Ehat = bnCrossVal(X,Y,w,k,F,bi,cvk,nr)

% Ehat = bnCrossVal(X,Y,w,k,F,bi,cvk,nr) - Cross-validation for Boolean Network inference
%
% Estimates the (possibly weighted) error of all predictor variable
% combinations (rows in X) for all target variables (rows in Y) using
% cross-validation. Currently, the predictor function inference
% (prediction/classification rule) itself is done with the bnBestFit.m
% function (other inference functions can be used as well). Note that if
% unity weights (defined in w) are used for all samples, the estimated
% C-V error equals the standard (unweighted) error estimate.
%
% INPUT:
% X,Y,w,k,F,bi - See the definition in bnBestFit.m
% cvk   - The number of folds in the cross-validation. Limitation:
%         2 <= cvk <= m, where m is the number of samples (columns of X).
% nr    - The number of times a single cross-validation procedure is
%         repeated.
%
% OUTPUT:
% Ehat  - Estimated error for all predictor variable combinations and for 
%         all target variables. Ehat has size nchoosek(n,k)-by-ni, where n 
%         is the number of predictor variables, k is the number of 
%         variables in the Boolean functions, and ni is the number of 
%         target variables.
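%
% EXAMPLE (an illustrative sketch only, not part of the original toolbox
% documentation; it assumes bnBestFit.m and randintex.m are on the MATLAB
% path and that F and bi are set up as required by bnBestFit.m):
%   X = double(rand(5,40) > 0.5);   % 5 binary predictor variables, 40 samples
%   Y = double(rand(2,40) > 0.5);   % 2 binary target variables
%   w = ones(1,40);                 % unity sample weights
%   Ehat = bnCrossVal(X,Y,w,2,F,bi,10,3); % k=2, 10-fold C-V repeated 3 times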

% Functions used: bnBestFit, randintex
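% (In this file, randintex(N,lo,hi) is assumed to return N distinct random
% integers drawn from the range [lo,hi]; see randintex.m in the toolbox for
% its actual behaviour.)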

% 03.04.2003 by Harri Lähdesmäki, modified from bnBestFit.
% Modified: May 14, 2003 by HL.
%           24/08/2005 by HL


% The number of predictor variables (n) and samples (m).
[n,m] = size(X);

% The number of target variables.
ni = size(Y,1);


%+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
% The number of samples in each split.
%+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
% Distribute the m samples as evenly as possible over the cvk folds; any
% leftover samples are added to randomly chosen folds below.
ns = floor(m/cvk)*ones(1,cvk);
ind = randintex(abs(m-sum(ns)),1,cvk);
ns(ind) = ns(ind) + 1;
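% Example with hypothetical numbers: m = 23 samples and cvk = 5 folds give
% ns = [4 4 4 4 4]; the 3 leftover samples are then added to 3 randomly
% chosen folds, e.g. ns = [5 4 5 5 4], so that sum(ns) = m.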

combnum = nchoosek(n,k);
Ehat = zeros(combnum,ni);
indAll = [1:m];


%+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
% The main loop.
%+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
% Repeat the standard cross-validation nr times (in order to get more
% reliable estimates of the error).
for r=1:nr
    
    % Indices of the samples that have not yet been used in a test fold.
    ind = [1:m];
    
    % Run through all the folds.
    for i=1:cvk
        
        % Indices of the current test data.
        indt = randintex(ns(i),1,length(ind));
        testind = ind(indt); % Current fold (test set). 
        trainind = indAll;
        trainind(testind) = []; % Current training set.
        ind(indt) = []; % Remaining samples.
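        % Because the selected positions are removed from 'ind', successive
        % folds are disjoint and together cover all m samples exactly once
        % within each repeat.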
        
        % Infer the Boolean functions.
        [Fhat,Ehatr,Et] = bnBestFit(X(:,trainind),Y(:,trainind),w(trainind),...
            k,F,bi,X(:,testind),Y(:,testind),w(testind));
        
        % Accumulate the held-out-fold error (Et, the third output of
        % bnBestFit) for every predictor-combination/target pair.
        Ehat = Ehat + Et;
        
    end % for i=1:cvk
    
    % Optionally display progress (repeat r of nr).
    %disp([num2str(r),'/',num2str(nr)]);
    
end % for r=1:nr

% Normalize the accumulated error: within each repeat every sample is used
% as a test sample exactly once, so dividing by nr*sum(w) gives the average
% weighted error rate (nr*m for unity weights).
%Ehat = Ehat/(nr*m);
Ehat = Ehat/(nr*sum(w));
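
% A possible post-processing step (illustrative only; it assumes bnBestFit
% enumerates the predictor combinations in the same order as nchoosek(1:n,k),
% which should be verified against bnBestFit.m):
%   combs = nchoosek(1:n,k);        % candidate predictor variable sets
%   [minerr,idx] = min(Ehat(:,j));  % smallest C-V error for target variable j
%   bestvars = combs(idx,:);        % its predictor variables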
