📄 ada_boost.m
function D = ada_boost(train_features, train_targets, params, region)

% Classify using the AdaBoost algorithm
% Inputs:
%   train_features - Train features
%   train_targets  - Train targets
%   params:
%       1. NumberOfIterations
%       2. Weak Learner Type (Perceptron, Pocket, LS, ML, RDA)
%       3. Learner's parameters (Perceptron - Number Of Iterations,
%                                Pocket     - Number Of Iterations,
%                                LS         - None,
%                                ML         - None,
%                                Friedman   - Lambda)
%   region - Decision region vector: [-x x -y y number_of_points]
%
% Outputs:
%   D - Decision surface

disp(params);

% Parse the parameter string: number of boosting rounds, weak-learner name,
% and the weak learner's own parameter
comma_loc = findstr(params, ',');
Iter      = str2num(params(2:comma_loc(1)-1));
alg_iter  = str2num(params(comma_loc(2)+1:length(params)-1));
method    = params(comma_loc(1)+2:comma_loc(2)-2);

% Map every training sample to its cell index on the decision-region grid
indexs = round((train_features - (region([1,3])' * ones(1, size(train_features,2)))) * (region(5)-1) ./ ...
         ((region([2,4]) - region([1,3]))' * ones(1, size(train_features,2)))) ...
         + ones(size(train_features));
N   = size(train_targets);
ind = (indexs(1,:)-1)*region(5) + indexs(2,:);

beta         = [];
D_final      = zeros(region(5));
distr_vector = ones(N) / N(2);      % start with uniform sample weights
flg          = 0;

for k = 1:Iter,
    % Normalize the weights into a distribution
    p = distr_vector / sum(distr_vector);

    % Train the weak learner on the weighted data and read off its labels
    % for the training samples from the returned decision surface
    D  = feval(method, train_features, train_targets, [p, alg_iter], region);
    D1 = D(:)';
    new_targets = D1(ind);

    % Weighted training error of this weak hypothesis
    delta = xor(train_targets, new_targets);
    e     = sum(p .* delta);

    % If the error exceeds 0.5, flip the hypothesis so its error drops below 0.5
    if e > 0.5
        D  = ~D;
        D1 = D(:)';
        new_targets = D1(ind);
        delta = xor(train_targets, new_targets);
        e     = sum(p .* delta);
    end

    entropy = -sum(p.*log(p)) / log(N(2));
    %disp(['Error = ', num2str(e), ' Entropy = ', num2str(entropy)]);

    % Stop if the weak learner is no better than chance
    if (abs(e - 0.5) < 1e-8),
        break
    end

    % Re-weight the samples: correctly classified ones are multiplied by beta < 1,
    % so the next round concentrates on the mistakes
    beta(k) = e / (1 - e);
    distr_vector = distr_vector .* (beta(k) .^ ~delta);

    % Accumulate this hypothesis into the final surface, weighted by log(1/beta)
    if beta(k) ~= 0
        D_final = D_final + log(1/beta(k)) * D;
    else
        flg = 1;
        break
    end
end

% Final decision surface: weighted majority vote of the weak hypotheses
if k > 1
    D = (D_final >= 0.5 * sum(log(1./beta)));
end
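A minimal usage sketch (not part of the original file): it assumes a 2-D binary problem and that the weak-learner routine named in the parameter string (here Perceptron) is on the MATLAB path with the (features, targets, params, region) signature that feval expects. The exact layout of the parameter string is inferred from the parsing code above, not documented in the source.

% Toy 2-D data: two Gaussian blobs with labels 0 and 1 (hypothetical example data)
train_features = [randn(2,50) - 1, randn(2,50) + 1];
train_targets  = [zeros(1,50), ones(1,50)];

% Decision region: [-x x -y y number_of_points]
region = [-4 4 -4 4 100];

% 10 boosting rounds, 'Perceptron' weak learner with 5 iterations per round
% (format inferred from the string parsing at the top of ada_boost.m)
params = '[10,''Perceptron'',5]';

D = ada_boost(train_features, train_targets, params, region);
imagesc(D); axis xy;    % visualize the boosted decision surface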