📄 my_entropic_map.m
function [theta, loglik] = my_entropic_map(counts, Z)
% MY_ENTROPIC_MAP Find MAP estimate of multinomial using entropic prior
% [theta, loglik] = my_entropic_map(counts, Z)
%
% The entropic prior says P(theta) \propto exp(-H(theta)), where H(.) is the entropy.
%
% Z = 1 (default) is min entropy
% Z = 0 is max likelihood
% Z = -1 is max entropy
% Z = -inf corresponds to very high temperature (good for initialisation)
%
% loglik is the un-normalized log-posterior:
% loglik = log( prod_i theta_i^(c_i) * exp(sum_j theta_j log theta_j) )
%        = sum_i c_i log(theta_i) + sum_j theta_j log(theta_j)
% where c_i = counts(i)
%
% Based on "Structure learning in conditional probability models via an entropic prior
% and parameter extinction", M. Brand, Neural Computation 11 (1999): 1155--1182.
%
% For the Z ~= 1 case, see "Pattern discovery via entropy minimization",
% M. Brand, AI & Statistics 1999. Equation numbers refer to this paper.
%
% Written by Kevin Murphy (murphyk@cs.berkeley.edu), Nov 2001.
% This should give the same results as Brand's entropic_map.m,
% but doesn't always work right.
% This uses Lambert's W function, which is built-in
% to the Matlab symbolic math toolbox (calls Maple).

if nargin < 2, Z = 1; end

% Check for counts that are very close to zero.
% This section is based on Matt Brand's code.
ok = counts > eps.^2;
if any(ok == 0)
  q = sum(ok);
  if q > 1
    % Two or more nonzero parameters -- skip those that are zero
    [fix, loglik] = my_entropic_map(counts(ok), Z);
    theta = ok;
    %theta(ok) = stoch(fix);
    theta(ok) = fix;
  elseif q == 1
    % Only one nonzero parameter -- return spike distribution
    theta = ok;
    loglik = 0;
  else
    % Everything is zero -- return uniform distribution
    theta = repmat(1/length(counts), size(counts));
    loglik = 0;
  end
  return
end

if Z == 0
  theta = normalise(counts);
  if any(theta == 0)
    loglik = -inf;
  else
    loglik = sum((counts + theta) .* log(theta));
  end
  return
end

theta = normalise(counts);
converged = 0;
old_lambda = -inf;
thresh = 1e-2;
max_iter = 100;
niter = 1;

if Z > 0
  % We must make sure the argument to W, (-counts/Z) .* exp(1+lambda/Z),
  % is > -1/exp(1) to ensure W returns a real value.
  % Hence lambda < Z log(Z / (c_i exp(1))) - Z for all i
  lambda = min(Z * log(Z ./ (counts*exp(1))) - Z) - 1;
else
  % If Z < 0, the argument is guaranteed to be positive, so we make a good first guess.
  % Eqn 14 should hold for every theta_i, so we take the average.
  lambda = -(mean(counts ./ theta) + Z*mean(log(theta)) + Z);
end

while ~converged && (niter <= max_iter)
  % Eqn 19 uses Lambert's W function, which is built-in
  % to the Matlab symbolic math toolbox (calls Maple).
  arg = (-counts/Z) .* exp(1 + lambda/Z);
  if any(arg < -1/exp(1))
    converged = 1;
    %fprintf('\nwarning: W will return complex values\n');
  end
  if Z > 0
    theta = normalise(-(counts/Z) ./ lambertw(-1, arg));
  else
    theta = normalise(-(counts/Z) ./ lambertw(0, arg));
  end
  % Eqn 14 should hold for every theta_i, so we take the average.
  lambda = -(mean(counts ./ theta) + Z*mean(log(theta)) + Z);
  % This might take too large a step and cause the arg to W to go below -1/e!
  converged = abs(lambda - old_lambda) < thresh;
  old_lambda = lambda;
  niter = niter + 1;
end

loglik = sum((counts + theta) .* log(theta));
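The core of the function is a fixed-point iteration: eqn 19 expresses each theta_i through the Lambert W function given lambda, and eqn 14 (averaged over components) updates lambda given theta. A minimal Python sketch of the same scheme for the min-entropy case (Z = 1), with a hand-rolled Newton solver for the W_{-1} branch instead of the toolbox `lambertw`; the function names here are illustrative, not part of the original file:

```python
import math

def lambertw_m1(x, tol=1e-12):
    """Lower branch W_{-1}: solve w*exp(w) = x for w <= -1, given -1/e < x < 0."""
    # Standard initial guess for the -1 branch: log(-x) - log(-log(-x))
    L1 = math.log(-x)
    w = L1 - math.log(-L1)
    for _ in range(100):
        e = math.exp(w)
        step = (w * e - x) / (e * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def entropic_map_z1(counts, thresh=1e-2, max_iter=100):
    """MAP estimate of a multinomial under the entropic prior, Z = 1 (min entropy)."""
    n = len(counts)
    # Initial lambda keeps the argument of W_{-1} above -1/e (cf. the comment in the .m file)
    lam = min(math.log(1.0 / (c * math.e)) - 1.0 for c in counts) - 1.0
    old_lam = -math.inf
    theta = [c / sum(counts) for c in counts]
    for _ in range(max_iter):
        # Eqn 19: theta_i proportional to -c_i / W_{-1}(-c_i * exp(1 + lambda))
        raw = [-c / lambertw_m1(-c * math.exp(1.0 + lam)) for c in counts]
        s = sum(raw)
        theta = [r / s for r in raw]
        # Eqn 14, averaged over components, gives the new lambda
        lam = -(sum(c / t for c, t in zip(counts, theta)) / n
                + sum(math.log(t) for t in theta) / n + 1.0)
        if abs(lam - old_lam) < thresh:
            break
        old_lam = lam
    return theta
```

For `counts = [3, 1]` this sharpens the maximum-likelihood estimate `[0.75, 0.25]` toward a lower-entropy distribution (roughly `[0.80, 0.20]`), which is the intended effect of the Z = 1 entropic prior.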