
📄 clevalb.m

📁 The pattern recognition MATLAB toolbox
💻 MATLAB
%CLEVALB Classifier evaluation (learning curve), bootstrap version
%
%   E = CLEVALB(A,CLASSF,TRAINSIZES,NREPS,FID)
%
% INPUT
%   A           Training dataset
%   CLASSF      Classifier to evaluate
%   TRAINSIZES  Vector of class sizes, used to generate subsets of A
%               (default [2,3,5,7,10,15,20,30,50,70,100])
%   NREPS       Number of repetitions (default 1)
%   FID         File ID to write progress to (default [], see PRPROGRESS)
%
% OUTPUT
%   E           Error structure (see PLOTE)
%
% DESCRIPTION
% Generates at random, for all class sizes defined in TRAINSIZES, training
% sets out of the dataset A and uses these for training the untrained
% classifier CLASSF. CLASSF may also be a cell array of untrained
% classifiers; in this case the routine will be run for all of them. The
% resulting trained classifiers are tested on all objects in A. This
% procedure is then repeated NREPS times.
%
% Training set generation is done "with replacement" and such that for each
% run the larger training sets include the smaller ones and that for all
% classifiers the same training sets are used.
%
% If CLASSF is fully deterministic, this function uses the RAND random
% generator and thereby reproduces if its seed is reset (see RAND).
% If CLASSF uses RANDN, its seed may have to be set as well.
%
% Use FID = 1 to report progress to the command window.
%
% EXAMPLES
% See PREX_CLEVAL.
%
% SEE ALSO
% MAPPINGS, DATASETS, CLEVAL, TESTC, PLOTE, PRPROGRESS

% Copyright: R.P.W. Duin, duin@ph.tn.tudelft.nl
% Faculty of Applied Sciences, Delft University of Technology
% P.O. Box 5046, 2600 GA Delft, The Netherlands

% $Id: clevalb.m,v 1.3 2007/02/15 10:11:15 davidt Exp $

function e = clevalb(a,classf,learnsizes,nreps,fid)

  prtrace(mfilename);

  if (nargin < 5)
    fid = [];
  end;
  if (nargin < 4)
    prwarning(2,'number of repetitions not specified, assuming NREPS = 1');
    nreps = 1;
  end;
  if (nargin < 3)
    prwarning(2,'vector of training set class sizes not specified, assuming [2,3,5,7,10,15,20,30,50,70,100]');
    learnsizes = [2,3,5,7,10,15,20,30,50,70,100];
  end;

  % If a single mapping is given, convert it to a 1 x 1 cell array.
  if (ismapping(classf)), classf = {classf}; end

  % Correct for old argument order.
  if (isdataset(classf)) & (ismapping(a))
    tmp = a; a = classf; classf = {tmp};
  end
  if (isdataset(classf)) & (iscell(a)) & (ismapping(a{1}))
    tmp = a; a = classf; classf = tmp;
  end
  if ~iscell(classf), classf = {classf}; end

  % Assert that all is right.
  isdataset(a); ismapping(classf{1});

  % Remove requested class sizes that are larger than the size of the
  % smallest class.
  mc = classsizes(a); [m,k,c] = getsize(a);
  toolarge = find(learnsizes >= min(mc));
  if (~isempty(toolarge))
    prwarning(2,['training set class sizes ' num2str(learnsizes(toolarge)) ...
                 ' larger than the minimal class size in A; removed them']);
    learnsizes(toolarge) = [];
  end
  learnsizes = learnsizes(:)';

  % Fill the error structure.
  nw = length(classf(:));
  datname = getname(a);

  e.n       = nreps;
  e.error   = zeros(nw,length(learnsizes));
  e.std     = zeros(nw,length(learnsizes));
  e.xvalues = learnsizes(:)';
  e.names   = [];
  e.xlabel  = 'Training set size';
  if (nreps > 1)
    e.ylabel = ['Averaged error (' num2str(nreps) ' experiments)'];
  elseif (nreps == 1)
    e.ylabel = 'Error';
  else
    error('Number of repetitions NREPS should be >= 1.');
  end;
  if (~isempty(datname))
    e.title = ['Bootstrapped learning curve on ' datname];
  end
  if (learnsizes(end)/learnsizes(1) > 20)
    e.plot = 'semilogx';      % If the range is large, use a log scale for X.
  end

  % Report progress.
  prprogress(fid,['\nclevalb: bootstrapped classifier evaluation (learning curve): \n' ...
      '    %i classifiers, %i repetitions, %i learnsizes ['],nw,nreps,length(learnsizes));
  prprogress(fid,' %i ',learnsizes);
  prprogress(fid,']\n  ');

  % Store the seed, to reset the random generator later for different
  % classifiers.
  seed = rand('state');

  % Loop over all classifiers (with index WI).
  for wi = 1:nw

    isuntrained(classf{wi});
    name = getname(classf{wi});
    prprogress(fid,'classifier: %s\n  ',name);
    e.names = char(e.names,name);

    % E1 will contain the error estimates.
    e1 = zeros(nreps,length(learnsizes));

    % Take care that all classifiers use the same training sets.
    rand('state',seed); seed2 = seed;

    % For NREPS repetitions...
    for i = 1:nreps

      % Store the randomly permuted indices of samples of class CI to use in
      % this training set in JR(CI,:).
      JR = zeros(c,max(learnsizes));
      for ci = 1:c
        JC = findnlab(a,ci);

        % Necessary for reproducible training sets: set the seed and store
        % it after generation, so that next time we will use the previous one.
        rand('state',seed2);
        R = ceil(rand(1,max(learnsizes))*length(JC));  % bootstrap: draw with replacement
        JR(ci,:) = JC(R)';
        seed2 = rand('state');
      end

      li = 0;                 % Index of training set.
      for j = learnsizes
        li = li + 1;

        % J will contain the indices for this training set.
        J = [];
        for ci = 1:c
          J = [J;JR(ci,1:j)'];
        end;

        % Train classifier CLASSF{WI} on this training set and calculate
        % the error on all objects in A.
        W = a(J,:)*classf{wi};
        e1(i,li) = testc(a,W);
        prprogress(fid,'.');
      end
      prprogress(fid,'\n  ');
    end

    % Calculate average error and standard deviation for this classifier
    % (or set the latter to zero if there's been just 1 repetition).
    e.error(wi,:) = mean(e1,1);
    if (nreps == 1)
      e.std(wi,:) = zeros(1,size(e.std,2));
    else
      e.std(wi,:) = std(e1)/sqrt(nreps);
    end
  end
  prprogress(fid,'\b\bclevalb finished\n');

  % The first element is the empty string [], remove it.
  e.names(1,:) = [];

return
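The header points to PREX_CLEVAL for examples; as a quicker reference, here is a minimal usage sketch (not part of the original file). It assumes PRTools is on the MATLAB path and uses the standard PRTools routines GENDATB, LDC, QDC and PLOTE; the class sizes and repetition count are illustrative, not prescribed.

% Minimal usage sketch, assuming PRTools is on the path.
a = gendatb([50 50]);                        % 2-class banana-shaped dataset, 50 objects per class
e = clevalb(a,{ldc,qdc},[3 5 10 20],10,1);   % bootstrap learning curves, 10 repetitions, progress to screen
plote(e);                                    % plot the resulting error structure

Because each bootstrap training set is drawn once per repetition and reused for every classifier in the cell array, the LDC and QDC curves produced by PLOTE are directly comparable.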
