⭐ 虫虫下载站

📄 mpot.m

📁 Original Gaussian process program for solving classification problems — worth a look if you are interested.
💻 MATLAB
function pot = mpot(vectheta)
% log posterior probability (up to an additive constant)
%
% pot = mpot(vectheta)
%
% vectheta  : vector of hyperparameters
%
%            Matlab code for Gaussian Processes for Classification:
%                      GPCLASS version 0.2  10 Nov 97
%       Copyright (c) David Barber and Christopher K I Williams (1997)
%
%    This program is free software; you can redistribute it and/or modify
%    it under the terms of the GNU General Public License as published by
%    the Free Software Foundation; either version 2 of the License, or
%    any later version.
%
%    This program is distributed in the hope that it will be useful,
%    but WITHOUT ANY WARRANTY; without even the implied warranty of
%    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
%    GNU General Public License for more details.
%
%    You should have received a copy of the GNU General Public License
%    along with this program; if not, write to the Free Software
%    Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.

%fprintf(1,'\n Potential\n')
%fprintf(1,'===========\n')

% Multiple class potential for the independent
% covariance structure.

% Multiple class Gaussian Process:
% Use the following format for the covariance function:
% Sigma =   class1          class2          ...  class m
%         data1...datan
%                         data1...datan
%                                              data1...datan
% etc..
% that is, the matrix is organised such that the larger scale
% is the classes, and within each submatrix
% is the covariance matrix between the datapoints.

global x_tr ntr ybin prob y x_all makepredflag m jitter
global invSxTVEC yeta B invS4 invS5 sinvSxTVEC

th = 100;                         % threshold values before exp
a_thresh = 100;
mntr = m*ntr;                     % m - # classes, ntr - # datapoints

% These are the parameters for the covariance function
Theta = threshold(vec2mitheta(vectheta,m),th);

% initialization of yeta and pye
noiseproc = getnoise_sp(full(yeta),m);
pye = minvg(yeta);

% set up prior covariance matrix B
B = zeros(mntr,mntr);
Bno_bias = zeros(mntr,mntr);
o = size(Theta,2);                % # parameters in the covariance function
tnp = size(Theta,1)*o;            % dimension of gradient vector

for ci = 1:m
  D = zeros(ntr,ntr);
  a = threshold(exp(Theta(ci,1)),a_thresh);
  bias = exp(Theta(ci,o));        % bias defined by last component

  for in_mcomp = 2:o-1
    k = in_mcomp;
    epsi = exp(Theta(ci,k));
    oo = ones(ntr,1); ot = ones(1,ntr);
    xv = x_tr(k-1,:)';
    xxv = xv.*xv;
    C = xxv*ot + oo*xxv' - 2.*xv*xv';
    D = D + epsi.*C;
  end
  [D,tflag] = threshold(D,th);
  Bno_bias(1+(ci-1)*ntr:ci*ntr,1+(ci-1)*ntr:ci*ntr) = a.*exp(-0.5.*D);
  B(1+(ci-1)*ntr:ci*ntr,1+(ci-1)*ntr:ci*ntr) = ...
      Bno_bias(1+(ci-1)*ntr:ci*ntr,1+(ci-1)*ntr:ci*ntr) + bias.*ones(ntr,ntr);
end

B = B + eye(mntr).*jitter.^2;     % stabilize the covariance matrix
B = sparse(B);

% find the mode.
pye = minvg(yeta);
toln = 0.00000001;
[yeta,pye,noiseproc,sinvSxTVEC,invSxTVEC] = ...
    getmode(sinvSxTVEC,yeta,ybin,toln,B,m,ntr);

mv = reshape(yeta,ntr,m);
ldS3 = ldetS3(ntr,m,pye,B);
lev = 0.5.*yeta'*(ybin+pye) - sum(log(sum(exp(mv)'))) - 0.5*ldS3;

if makepredflag == 1              % global variables for makepred
  invS4 = invIpSN(ntr,m,pye,B);
  invS5 = sparse(noiseproc)*invS4;
end

pot = -lev - mlnptheta(Theta);

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function ld = ldetS3(nn,m,pye,B)
% S = inv(Sig) + diag(Pi)
f = 0;
h = zeros(nn,nn);
BB = zeros(nn,nn);
for c = 1:m                       % loop over the classes
  P = getPmat(c,pye,nn);
  BB = B(1+(c-1)*nn:c*nn, 1+(c-1)*nn:c*nn); % get the covariance matrix
  A = eye(nn) + BB*P;
  [trl,AL,AU] = trlog(A);
  f = f + trl;
  if length(find(diag(AL) == 0)) == 0
    h = h + P*invAxB(AL,AU,BB)*P;
  else
%    fprintf(1,'\nzero on diag of L\n') % don't yet understand why this can happen
    h = h + P*(A\BB)*P;
  end
end
ld = f + trlog(eye(nn) - h);

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function lnptheta = mlnptheta(mtheta)
% assume that the hyperprior distribution is gaussian with fixed mean and
% variance for each hyperparameter. This means that the log prob of the
% hyperprior distn is simply a scaled sum of squares.
global mean_prior var_prior
lnptheta = -0.5*sum(sum(((mtheta - mean_prior).^2)./var_prior));
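The prior covariance that `mpot.m` builds — a squared-exponential kernel per class with exponentiated hyperparameters, a constant bias term, and jitter on the diagonal, assembled block-diagonally over classes — can be sketched in Python/NumPy. This is an illustrative reconstruction, not part of GPCLASS; the function name `build_prior_covariance` and the argument layout are assumptions based on how `x_tr` and `Theta` are used in the MATLAB code above.

```python
import numpy as np

def build_prior_covariance(x_tr, theta, jitter):
    """Sketch of the block-diagonal prior covariance B from mpot.m.

    x_tr   : (d, n) training inputs (features x datapoints, as in the MATLAB code)
    theta  : (m, o) log-hyperparameters per class; column 0 is the log signal
             variance, columns 1..o-2 are log inverse length-scales (one per
             input dimension), and the last column is the log bias
    jitter : small constant whose square is added to the diagonal for stability
    """
    d, n = x_tr.shape
    m, o = theta.shape
    B = np.zeros((m * n, m * n))
    for ci in range(m):
        a = np.exp(theta[ci, 0])       # signal variance
        bias = np.exp(theta[ci, -1])   # constant bias term
        D = np.zeros((n, n))
        for k in range(1, o - 1):
            eps = np.exp(theta[ci, k])
            xv = x_tr[k - 1, :]
            # weighted squared distances along this input dimension
            D += eps * (xv[:, None] - xv[None, :]) ** 2
        # squared-exponential block plus bias, placed on the class diagonal
        block = a * np.exp(-0.5 * D) + bias
        B[ci * n:(ci + 1) * n, ci * n:(ci + 1) * n] = block
    return B + jitter ** 2 * np.eye(m * n)
```

Note the structure matches the comment in the MATLAB source: the outer block index runs over classes, and each diagonal block is the covariance between the datapoints; cross-class blocks are zero because the classes are modelled with independent processes.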
