
stolcke_entropic_demo.m

From an MIT artificial-intelligence toolbox; a valuable resource that will hopefully be useful to everyone!
% Do experiment on p54 of "Bayesian learning of probabilistic language models",
% A. Stolcke, PhD thesis, UC Berkeley, 1994

seed = 1;
rand('state', seed);
randn('state', seed);

hmm0 = mk_dhmm_stolcke1;

if 1
  ntrain = 20;
  ntest = 10;
  [data, hidden] = sample_dhmm_endstate(hmm0.startprob, hmm0.transmat, hmm0.obsmat, hmm0.endprob, ntrain + ntest);
  train_data = data(1:ntrain);
  test_data = data(ntrain+1:ntrain+ntest);
else
  ntrain = 8;
  ntest = 10;
  train_data = {[1 1], [2 2], [1 3 1], [2 3 2], [1 3 3 1], [2 3 3 2], [1 3 3 3 1], [2 3 3 3 2]};
  test_data = sample_dhmm_endstate(hmm0.startprob, hmm0.transmat, hmm0.obsmat, hmm0.endprob, ntest);
end

hmm = {};
LL = {};
seeds = 1:5;
nseeds = length(seeds);
nparseable_by_learned = [];
nparseable_by_true = [];
loglik_train = [];
loglik_test = [];
niter = [];

for seedi=1:nseeds
  seed = seeds(seedi);
  rand('state', seed);
  randn('state', seed);

  % hmm{1} is the starting point for learning
  %hmm{1} = mk_rnd_dhmm(hmm0.nstates, hmm0.nobs);
  hmm{1} = mk_rnd_dhmm(10, hmm0.nobs); % more states than necessary
  LL{1} = 0;
  h = 2;

  disp('baum welch')
  [hmm{h}, LL{h}] = learn_dhmm_simple(train_data, hmm{1});
  h = h + 1;

  disp('entropic')
  [hmm{h}, LL{h}] = learn_dhmm_entropic(train_data, hmm{1});
  h = h + 1;

  disp('trim')
  [hmm{h}, LL{h}] = learn_dhmm_entropic(train_data, hmm{1}, 'trimtrans', 1, 'trimobs', 1, 'trimstates', 0);
  h = h + 1;

  disp('trim states')
  [hmm{h}, LL{h}] = learn_dhmm_entropic(train_data, hmm{1}, 'trimtrans', 1, 'trimobs', 1, 'trimstates', 1);
  h = h + 1;

  disp('annealed entropic')
  [hmm{h}, LL{h}] = learn_dhmm_entropic(train_data, hmm{1}, 'anneal', 1);
  h = h + 1;

  disp('annealed entropic and trim')
  [hmm{h}, LL{h}] = learn_dhmm_entropic(train_data, hmm{1}, 'trimtrans', 1, 'trimobs', 1, 'anneal', 1);
  h = h + 1;

  disp('annealed entropic and trim states')
  [hmm{h}, LL{h}] = learn_dhmm_entropic(train_data, hmm{1}, 'trimtrans', 1, 'trimobs', 1, 'anneal', 1, ...
                                        'trimstates', 1);
  h = h + 1;

  H = length(hmm);

  for i=1:H
    if i==1
      loglik_train(i, seedi) = log_lik_dhmm(train_data, hmm{i}.startprob, hmm{i}.transmat, hmm{i}.obsmat);
      niter(i, seedi) = 0;
    else
      loglik_train(i, seedi) = LL{i}(end);
      niter(i, seedi) = length(LL{i});
    end
  end
  loglik_train
  niter

  for i=1:H
    loglik_test(i, seedi) = log_lik_dhmm(test_data, hmm{i}.startprob, hmm{i}.transmat, hmm{i}.obsmat);
  end
  loglik_test

  % if not parseable by learned, the model is too specific
  for i=1:H
    c = 0;
    for n=1:ntest
      c = c + parseable_dhmm(hmm{i}, test_data{n});
    end
    nparseable_by_learned(i, seedi) = c;
  end
  nparseable_by_learned

  % if not parseable by true, the model is too general
  for i=1:H
    data_from_learned = sample_dhmm_endstate(hmm{i}.startprob, hmm{i}.transmat, hmm{i}.obsmat, hmm{i}.endprob, ntest);
    c = 0;
    for n=1:ntest
      c = c + parseable_dhmm(hmm0, data_from_learned{n});
    end
    nparseable_by_true(i, seedi) = c;
  end
  nparseable_by_true
end
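The `learn_dhmm_entropic` calls in the demo fit the HMM parameters under an entropic prior, P(theta) proportional to exp(-H(theta)), which favours low-entropy (sparse) parameter vectors and so lets near-zero transitions and observations be trimmed away. Brand (1999) derives an exact fixed-point solution for the MAP multinomial via the Lambert W function; the sketch below instead uses a generic exponentiated-gradient ascent, written in Python rather than the toolbox's MATLAB so it is self-contained. The helper names `entropic_map` and `entropy` are illustrative, not part of the toolbox.

```python
# Numerical sketch (assumption: not the toolbox's actual algorithm) of the
# entropic-prior MAP estimate for a single multinomial.
# Prior:  P(theta) ~ exp(-H(theta)) = prod_i theta_i^theta_i
# Given evidence counts w, the MAP maximises over the simplex:
#   f(theta) = sum_i w_i*log(theta_i) + sum_i theta_i*log(theta_i)
import math

def entropic_map(w, iters=20000, lr=0.05):
    """Entropic-prior MAP multinomial for counts w, via mirror ascent."""
    total = sum(w)
    theta = [wi / total for wi in w]  # start at the ML estimate
    for _ in range(iters):
        # gradient of f w.r.t. theta_i: w_i/theta_i + log(theta_i) + 1
        g = [wi / ti + math.log(ti) + 1.0 for wi, ti in zip(w, theta)]
        # multiplicative (exponentiated-gradient) step, then renormalise
        theta = [ti * math.exp(lr * gi) for ti, gi in zip(theta, g)]
        z = sum(theta)
        theta = [ti / z for ti in theta]
    return theta

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

counts = [2.0, 1.0, 0.5]
ml = [c / sum(counts) for c in counts]   # maximum-likelihood estimate
mapest = entropic_map(counts)
# The entropic prior sharpens the distribution relative to ML,
# which is what makes trimming of weak parameters possible.
print('ML :', ml, 'entropy', entropy(ml))
print('MAP:', mapest, 'entropy', entropy(mapest))
```

The trimming options in the demo (`trimtrans`, `trimobs`, `trimstates`) exploit exactly this sharpening: parameters driven toward zero by the prior can be removed, shrinking the over-sized 10-state initial model.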
