<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"><!--Converted with LaTeX2HTML 2002-2-1 (1.70)original version by:  Nikos Drakos, CBLU, University of Leeds* revised and updated by:  Marcus Hennecke, Ross Moore, Herb Swan* with significant contributions from:  Jens Lippmann, Marek Rouchal, Martin Wilck and others --><HTML><HEAD><TITLE>Gaussian statistics</TITLE><META NAME="description" CONTENT="Gaussian statistics"><META NAME="keywords" CONTENT="Gaussian"><META NAME="resource-type" CONTENT="document"><META NAME="distribution" CONTENT="global"><META NAME="Generator" CONTENT="LaTeX2HTML v2002-2-1"><META HTTP-EQUIV="Content-Style-Type" CONTENT="text/css"><LINK REL="STYLESHEET" HREF="../../ci.css"><LINK REL="next" HREF="node2.html"><LINK REL="previous" HREF="Gaussian.html"><LINK REL="up" HREF="Gaussian.html"></HEAD><BODY  bgcolor="#ffffff"><DIV CLASS="navigation"><table border=0 cellspacing=0 cellpadding=0 width=100% class="tut_nav"><tr valign=middle class="tut_nav"><td valign=middle align=left  class="tut_nav"><i><b>&nbsp;<A NAME="tex2html19"  HREF="Gaussian.html">Tutorial: Gaussian Statistics and Unsupervised Learning</A></b></i></td><td valign=middle align=right class="tut_nav">&nbsp;<A NAME="tex2html12"  HREF="Gaussian.html"><IMG  ALIGN="absmiddle" BORDER="0" ALT="previous" SRC="prev.gif"></A>&nbsp;&nbsp;<a href="index.html"><img ALIGN="absmiddle" BORDER="0" ALT="Contents" src="contents.gif"></a>&nbsp;<A NAME="tex2html20"  HREF="node2.html"><IMG  ALIGN="absmiddle" BORDER="0" ALT="next" SRC="next.gif"></A></td></tr></table></DIV><!--End of Navigation Panel--><!--Table of Child-Links--><br><A NAME="CHILD_LINKS"><STRONG>Subsections</STRONG></A><UL CLASS="ChildLinks"><LI><A NAME="tex2html22"  HREF="node1.html#SECTION00011000000000000000">Samples from a Gaussian density</A><LI><A NAME="tex2html23"  HREF="node1.html#SECTION00012000000000000000">Gaussian modeling: Mean and variance of a sample</A><LI><A NAME="tex2html24"  
HREF="node1.html#SECTION00013000000000000000">Likelihood of a sample with respect to a Gaussian model</A></UL><!--End of Table of Child-Links--><HR><H1><A NAME="SECTION00010000000000000000">Gaussian statistics</A></H1><P><H2><A NAME="SECTION00011000000000000000"></A>
<A NAME="samples"></A><BR>Samples from a Gaussian density</H2><P><H3><A NAME="SECTION00011100000000000000"></A><A NAME="sec:gausspdf"></A><BR>Useful formulas and definitions:</H3>
<UL><LI>The <EM>Gaussian probability density function (pdf)</EM> for the
  <SPAN CLASS="MATH"><IMG WIDTH="11" HEIGHT="14" ALIGN="BOTTOM" BORDER="0" SRC="img5.gif" ALT="$ d$"></SPAN>-dimensional random variable <!-- MATH $\ensuremath\mathbf{x}\circlearrowleft {\calN}(\ensuremath\boldsymbol{\mu},\ensuremath\boldsymbol{\Sigma})$ --><SPAN CLASS="MATH"><IMG WIDTH="83" HEIGHT="28" ALIGN="MIDDLE" BORDER="0" SRC="img6.gif" ALT="$ \ensuremath\mathbf{x}\circlearrowleft {\calN}(\ensuremath\boldsymbol{\mu},\ensuremath\boldsymbol{\Sigma})$"></SPAN> (i.e., variable <!-- MATH $\ensuremath\mathbf{x}\in \ensuremath\mathbb{R}^d$ --><SPAN CLASS="MATH"><IMG WIDTH="45" HEIGHT="31" ALIGN="MIDDLE" BORDER="0" SRC="img7.gif" ALT="$ \ensuremath\mathbf{x}\in \ensuremath\mathbb{R}^d$"></SPAN> following the
  Gaussian, or Normal, probability law) is given by:
  <P></P><DIV ALIGN="CENTER" CLASS="mathdisplay"><A NAME="eq:gauss"></A><!-- MATH \begin{equation}g_{(\ensuremath\boldsymbol{\mu},\ensuremath\boldsymbol{\Sigma})}(\ensuremath\mathbf{x}) = \frac{1}{\sqrt{2\pi}^d      \sqrt{\det\left(\ensuremath\boldsymbol{\Sigma}\right)}} \, e^{-\frac{1}{2} (\ensuremath\mathbf{x}-\ensuremath\boldsymbol{\mu})^{\mathsf T}
      \ensuremath\boldsymbol{\Sigma}^{-1} (\ensuremath\mathbf{x}-\ensuremath\boldsymbol{\mu})}
  \end{equation} --><TABLE CLASS="equation" CELLPADDING="0" WIDTH="100%" ALIGN="CENTER"><TR VALIGN="MIDDLE"><TD NOWRAP ALIGN="CENTER"><SPAN CLASS="MATH"><IMG WIDTH="291" HEIGHT="44" ALIGN="MIDDLE" BORDER="0" SRC="img8.gif" ALT="$\displaystyle g_{(\ensuremath\boldsymbol{\mu},\ensuremath\boldsymbol{\Sigma})}(......th\boldsymbol{\Sigma}^{-1} (\ensuremath\mathbf{x}-\ensuremath\boldsymbol{\mu})}$"></SPAN></TD><TD NOWRAP CLASS="eqno" WIDTH="10" ALIGN="RIGHT">(<SPAN CLASS="arabic">1</SPAN>)</TD></TR></TABLE></DIV><BR CLEAR="ALL"><P></P>where <!-- MATH $\ensuremath\boldsymbol{\mu}$ --><SPAN CLASS="MATH"><IMG WIDTH="13" HEIGHT="25" ALIGN="MIDDLE" BORDER="0" SRC="img9.gif" ALT="$ \ensuremath\boldsymbol{\mu}$"></SPAN> is the mean vector and <!-- MATH $\ensuremath\boldsymbol{\Sigma}$ --><SPAN CLASS="MATH"><IMG WIDTH="15" HEIGHT="14" ALIGN="BOTTOM" BORDER="0" SRC="img10.gif" ALT="$ \ensuremath\boldsymbol{\Sigma}$"></SPAN> is the covariance matrix.
  <!-- MATH $\ensuremath\boldsymbol{\mu}$ --><SPAN CLASS="MATH"><IMG WIDTH="13" HEIGHT="25" ALIGN="MIDDLE" BORDER="0" SRC="img9.gif" ALT="$ \ensuremath\boldsymbol{\mu}$"></SPAN> and <!-- MATH $\ensuremath\boldsymbol{\Sigma}$ --><SPAN CLASS="MATH"><IMG WIDTH="15" HEIGHT="14" ALIGN="BOTTOM" BORDER="0" SRC="img10.gif" ALT="$ \ensuremath\boldsymbol{\Sigma}$"></SPAN> are the <EM>parameters</EM> of the Gaussian
  distribution.<P></LI><LI>The mean vector <!-- MATH $\ensuremath\boldsymbol{\mu}$ --><SPAN CLASS="MATH"><IMG WIDTH="13" HEIGHT="25" ALIGN="MIDDLE" BORDER="0" SRC="img9.gif" ALT="$ \ensuremath\boldsymbol{\mu}$"></SPAN> contains the mean values of each
  dimension, <!-- MATH $\mu_i = E(x_i)$ --><SPAN CLASS="MATH"><IMG WIDTH="70" HEIGHT="28" ALIGN="MIDDLE" BORDER="0" SRC="img11.gif" ALT="$ \mu_i = E(x_i)$"></SPAN>, with <SPAN CLASS="MATH"><IMG WIDTH="33" HEIGHT="28" ALIGN="MIDDLE" BORDER="0" SRC="img12.gif" ALT="$ E(x)$"></SPAN> being the <SPAN  CLASS="textit">expected
    value</SPAN> of <SPAN CLASS="MATH"><IMG WIDTH="11" HEIGHT="13" ALIGN="BOTTOM" BORDER="0" SRC="img13.gif" ALT="$ x$"></SPAN>.<P></LI><LI>All of the variances <SPAN CLASS="MATH"><IMG WIDTH="18" HEIGHT="25" ALIGN="MIDDLE" BORDER="0" SRC="img14.gif" ALT="$ c_{ii}$"></SPAN> and covariances <SPAN CLASS="MATH"><IMG WIDTH="19" HEIGHT="25" ALIGN="MIDDLE" BORDER="0" SRC="img15.gif" ALT="$ c_{ij}$"></SPAN> are
  collected together into the covariance matrix <!-- MATH $\ensuremath\boldsymbol{\Sigma}$ --><SPAN CLASS="MATH"><IMG WIDTH="15" HEIGHT="14" ALIGN="BOTTOM" BORDER="0" SRC="img10.gif" ALT="$ \ensuremath\boldsymbol{\Sigma}$"></SPAN> of dimension
  <SPAN CLASS="MATH"><IMG WIDTH="35" HEIGHT="26" ALIGN="MIDDLE" BORDER="0" SRC="img16.gif" ALT="$ d\times d$"></SPAN> (below, the entries are indexed with <I>n</I> = <I>d</I>):<P><P></P><DIV ALIGN="CENTER" CLASS="mathdisplay"><!-- MATH \begin{equation*}\ensuremath\boldsymbol{\Sigma}=  \left[
    \begin{array}{*{4}{c}}
      c_{11} & c_{12} & \cdots & c_{1n} \\
      c_{21} & c_{22} & \cdots & c_{2n} \\
      \vdots & \vdots & \ddots & \vdots \\
      c_{n1} & c_{n2} & \cdots & c_{nn} \\
    \end{array}
  \right]
\end{equation*} --><TABLE CLASS="equation*" CELLPADDING="0" WIDTH="100%" ALIGN="CENTER"><TR VALIGN="MIDDLE"><TD NOWRAP ALIGN="CENTER"><SPAN CLASS="MATH"><IMG WIDTH="183" HEIGHT="90" ALIGN="MIDDLE" BORDER="0" SRC="img17.gif" ALT="$\displaystyle \ensuremath\boldsymbol{\Sigma}=
 \left[
 \begin{array}{*{4}{c}}
 ......ddots &amp; \vdots  
 c_{n1} &amp; c_{n2} &amp; \cdots &amp; c_{nn}  
 \end{array}
 \right]$"></SPAN></TD><TD NOWRAP CLASS="eqno" WIDTH="10" ALIGN="RIGHT">&nbsp;&nbsp;&nbsp;</TD></TR></TABLE></DIV><BR CLEAR="ALL"><P></P><P>The covariance <SPAN CLASS="MATH"><IMG WIDTH="19" HEIGHT="25" ALIGN="MIDDLE" BORDER="0" SRC="img15.gif" ALT="$ c_{ij}$"></SPAN> of two components <SPAN CLASS="MATH"><IMG WIDTH="16" HEIGHT="25" ALIGN="MIDDLE" BORDER="0" SRC="img18.gif" ALT="$ x_i$"></SPAN> and <SPAN CLASS="MATH"><IMG WIDTH="17" HEIGHT="25" ALIGN="MIDDLE" BORDER="0" SRC="img19.gif" ALT="$ x_j$"></SPAN> of <!-- MATH $\ensuremath\mathbf{x}$ --><SPAN CLASS="MATH"><IMG WIDTH="12" HEIGHT="13" ALIGN="BOTTOM" BORDER="0" SRC="img20.gif" ALT="$ \ensuremath\mathbf{x}$"></SPAN>
measures their tendency to vary together, i.e., to co-vary,
<!-- MATH \begin{displaymath}c_{ij} = E\left((x_i-\mu_i)^{\mathsf T}\,(x_j-\mu_j)\right).\end{displaymath} --><P></P><DIV ALIGN="CENTER" CLASS="mathdisplay"><IMG WIDTH="190" HEIGHT="32" ALIGN="MIDDLE" BORDER="0" SRC="img21.gif" ALT="$\displaystyle c_{ij} = E\left((x_i-\mu_i)^{\mathsf T} (x_j-\mu_j)\right).$"></DIV><P></P>If two components <SPAN CLASS="MATH"><IMG WIDTH="16" HEIGHT="25" ALIGN="MIDDLE" BORDER="0" SRC="img18.gif" ALT="$ x_i$"></SPAN> and <SPAN CLASS="MATH"><IMG WIDTH="17" HEIGHT="25" ALIGN="MIDDLE" BORDER="0" SRC="img19.gif" ALT="$ x_j$"></SPAN>, <SPAN CLASS="MATH"><IMG WIDTH="33" HEIGHT="26" ALIGN="MIDDLE" BORDER="0" SRC="img22.gif" ALT="$ i\ne j$"></SPAN>, have zero covariance
<!-- MATH $c_{ij} = 0$ --><SPAN CLASS="MATH"><IMG WIDTH="45" HEIGHT="25" ALIGN="MIDDLE" BORDER="0" SRC="img23.gif" ALT="$ c_{ij} = 0$"></SPAN>, they are <EM>orthogonal</EM> in the statistical sense, which
carries over to a geometric sense (the expectation of a product of random
variables acts as a scalar product; a zero scalar product means orthogonality).  If
all components of <!-- MATH $\ensuremath\mathbf{x}$ --><SPAN CLASS="MATH"><IMG WIDTH="12" HEIGHT="13" ALIGN="BOTTOM" BORDER="0" SRC="img20.gif" ALT="$ \ensuremath\mathbf{x}$"></SPAN> are mutually orthogonal the covariance matrix
has a diagonal form.<P></LI><LI><!-- MATH $\sqrt{\ensuremath\boldsymbol{\Sigma}}$ --><SPAN CLASS="MATH"><IMG WIDTH="27" HEIGHT="33" ALIGN="MIDDLE" BORDER="0" SRC="img24.gif" ALT="$ \sqrt{\ensuremath\boldsymbol{\Sigma}}$"></SPAN> defines the <EM>standard deviation</EM> of the random
  variable <!-- MATH $\ensuremath\mathbf{x}$ --><SPAN CLASS="MATH"><IMG WIDTH="12" HEIGHT="13" ALIGN="BOTTOM" BORDER="0" SRC="img20.gif" ALT="$ \ensuremath\mathbf{x}$"></SPAN>. Beware: this square root is meant in the <EM>matrix
    sense</EM>.<P></LI><LI>If <!-- MATH $\ensuremath\mathbf{x}\circlearrowleft {\cal N}(\mathbf{0},\mathbf{I})$ --><SPAN CLASS="MATH"><IMG WIDTH="75" HEIGHT="28" ALIGN="MIDDLE" BORDER="0" SRC="img25.gif" ALT="$ \ensuremath\mathbf{x}\circlearrowleft {\cal N}(\mathbf{0},\mathbf{I})$"></SPAN> (<!-- MATH $\ensuremath\mathbf{x}$ --><SPAN CLASS="MATH"><IMG WIDTH="12" HEIGHT="13" ALIGN="BOTTOM" BORDER="0" SRC="img20.gif" ALT="$ \ensuremath\mathbf{x}$"></SPAN>
  follows a normal law with zero mean and unit variance; <!-- MATH $\mathbf{I}$ --><SPAN CLASS="MATH"><IMG WIDTH="9" HEIGHT="14" ALIGN="BOTTOM" BORDER="0" SRC="img26.gif" ALT="$ \mathbf{I}$"></SPAN>
  denotes the identity matrix), and if <!-- MATH $\mathbf{y} = \ensuremath\boldsymbol{\mu}+\sqrt{\ensuremath\boldsymbol{\Sigma}}\,\ensuremath\mathbf{x}$ --><SPAN CLASS="MATH"><IMG WIDTH="92" HEIGHT="33" ALIGN="MIDDLE" BORDER="0" SRC="img27.gif" ALT="$ \mathbf{y} = \ensuremath\boldsymbol{\mu}+\sqrt{\ensuremath\boldsymbol{\Sigma}} \ensuremath\mathbf{x}$"></SPAN>, then <!-- MATH $\mathbf{y} \circlearrowleft {\calN}(\ensuremath\boldsymbol{\mu},\ensuremath\boldsymbol{\Sigma})$ --><SPAN CLASS="MATH"><IMG WIDTH="83" HEIGHT="28" ALIGN="MIDDLE" BORDER="0" SRC="img28.gif" ALT="$ \mathbf{y} \circlearrowleft {\calN}(\ensuremath\boldsymbol{\mu},\ensuremath\boldsymbol{\Sigma})$"></SPAN>.<P></LI></UL><P><H3><A NAME="SECTION00011200000000000000">Experiment:</A></H3>
Generate samples <SPAN CLASS="MATH"><IMG WIDTH="16" HEIGHT="14" ALIGN="BOTTOM" BORDER="0" SRC="img29.gif" ALT="$ X$"></SPAN> of <SPAN CLASS="MATH"><IMG WIDTH="16" HEIGHT="14" ALIGN="BOTTOM" BORDER="0" SRC="img30.gif" ALT="$ N$"></SPAN> points, <!-- MATH $X=\{\ensuremath\mathbf{x}_1,\ensuremath\mathbf{x}_2,\ldots,\ensuremath\mathbf{x}_N\}$ --><SPAN CLASS="MATH"><IMG WIDTH="134" HEIGHT="28" ALIGN="MIDDLE" BORDER="0" SRC="img31.gif" ALT="$ X=\{\ensuremath\mathbf{x}_1,\ensuremath\mathbf{x}_2,\ldots,\ensuremath\mathbf{x}_N\}$"></SPAN>, with <SPAN CLASS="MATH"><IMG WIDTH="70" HEIGHT="14" ALIGN="BOTTOM" BORDER="0" SRC="img32.gif" ALT="$ N=10000$"></SPAN>, coming from a 2-dimensional
Gaussian process that has mean
<!-- MATH \begin{displaymath}\ensuremath\boldsymbol{\mu}= \left[ \begin{array}{c} 730 \\1090 \end{array} \right]\end{displaymath} --><P></P><DIV ALIGN="CENTER" CLASS="mathdisplay"><IMG WIDTH="89" HEIGHT="47" ALIGN="MIDDLE" BORDER="0" SRC="img33.gif" ALT="$\displaystyle \ensuremath\boldsymbol{\mu}= \left[ \begin{array}{c} 730  1090 \end{array} \right]$"></DIV><P></P>and variance<UL><LI>8000 for both dimensions (<EM>spherical process</EM>) (sample <SPAN CLASS="MATH"><IMG WIDTH="21" HEIGHT="26" ALIGN="MIDDLE" BORDER="0" SRC="img34.gif" ALT="$ X_1$"></SPAN>):
  <!-- MATH \begin{displaymath}\ensuremath\boldsymbol{\Sigma}_1 = \left[ \begin{array}{cc}      8000 & 0 \\
      0    & 8000
    \end{array} \right]
  \end{displaymath} --><P></P><DIV ALIGN="CENTER" CLASS="mathdisplay"><IMG WIDTH="138" HEIGHT="47" ALIGN="MIDDLE" BORDER="0" SRC="img35.gif" ALT="$\displaystyle \ensuremath\boldsymbol{\Sigma}_1 = \left[ \begin{array}{cc}8000 &amp; 0 \\0 &amp; 8000\end{array} \right]$"></DIV><P></P>
</LI><LI>expressed as a <EM>diagonal</EM> covariance matrix (sample <SPAN CLASS="MATH"><IMG WIDTH="21" HEIGHT="26" ALIGN="MIDDLE" BORDER="0" SRC="img36.gif" ALT="$ X_2$"></SPAN>):
  <!-- MATH \begin{displaymath}\ensuremath\boldsymbol{\Sigma}_2 = \left[ \begin{array}{cc}      8000 & 0 \\
      0    & 18500
    \end{array} \right]
  \end{displaymath} --><P></P><DIV ALIGN="CENTER" CLASS="mathdisplay"><IMG WIDTH="145" HEIGHT="47" ALIGN="MIDDLE" BORDER="0" SRC="img37.gif" ALT="$\displaystyle \ensuremath\boldsymbol{\Sigma}_2 = \left[ \begin{array}{cc}8000 &amp; 0 \\0 &amp; 18500\end{array} \right]$"></DIV><P></P>
</LI><LI>expressed as a <EM>full</EM> covariance matrix (sample <SPAN CLASS="MATH"><IMG WIDTH="21" HEIGHT="26" ALIGN="MIDDLE" BORDER="0" SRC="img38.gif" ALT="$ X_3$"></SPAN>):
  <!-- MATH \begin{displaymath}\ensuremath\boldsymbol{\Sigma}_3 = \left[ \begin{array}{cc}      8000 & 8400 \\
      8400 & 18500
    \end{array} \right]
  \end{displaymath} --><P></P><DIV ALIGN="CENTER" CLASS="mathdisplay"><IMG WIDTH="145" HEIGHT="47" ALIGN="MIDDLE" BORDER="0" SRC="img39.gif" ALT="$\displaystyle \ensuremath\boldsymbol{\Sigma}_3 = \left[ \begin{array}{cc}8000 &amp; 8400 \\8400 &amp; 18500\end{array} \right]$"></DIV><P></P></LI></UL>
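Each of the three samples can be drawn with the transform <b>y</b> = <b>&mu;</b> + &radic;<b>&Sigma;</b>&thinsp;<b>x</b> from the list above. For readers without MATLAB, here is a minimal, stdlib-only Python sketch; as an assumption/substitution it uses a Cholesky factor L (with L&thinsp;L<sup>T</sup> = &Sigma;) in place of the symmetric matrix square root <TT>sqrtm</TT> used in the example below, which changes nothing statistically since L<b>x</b> also has covariance &Sigma;. The helper names <TT>chol2</TT> and <TT>sample</TT> are ours, not part of the tutorial.

```python
import math
import random

def chol2(sigma):
    # Cholesky factor L (lower triangular) of a 2x2 covariance: L L^T = Sigma.
    a, c, b = sigma[0][0], sigma[0][1], sigma[1][1]
    l11 = math.sqrt(a)
    l21 = c / l11
    l22 = math.sqrt(b - l21 * l21)
    return [[l11, 0.0], [l21, l22]]

def sample(mu, sigma, n, rng):
    # y = mu + L x with x ~ N(0, I): y then has mean mu and covariance Sigma.
    L = chol2(sigma)
    pts = []
    for _ in range(n):
        z0, z1 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        pts.append((mu[0] + L[0][0] * z0,
                    mu[1] + L[1][0] * z0 + L[1][1] * z1))
    return pts

rng = random.Random(0)
mu = [730.0, 1090.0]
X1 = sample(mu, [[8000.0, 0.0], [0.0, 8000.0]], 10000, rng)         # spherical
X2 = sample(mu, [[8000.0, 0.0], [0.0, 18500.0]], 10000, rng)        # diagonal
X3 = sample(mu, [[8000.0, 8400.0], [8400.0, 18500.0]], 10000, rng)  # full

mean0 = sum(p[0] for p in X3) / len(X3)
print(round(mean0, 1))  # close to 730
```

With N = 10000 the empirical mean and covariance of each sample should fall close to the parameters used to generate it, which is what the next subsection exploits.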
Use the function <TT>gausview</TT> (<TT>&#187; help gausview</TT>) to plot the
results as clouds of points in the 2-dimensional plane, and to view the
corresponding 2-dimensional probability density functions (pdfs) in 2D and
3D.<P><H3><A NAME="SECTION00011300000000000000">Example:</A></H3>
<TT>&#187; N = 10000;</TT> <BR>
<TT>&#187; mu = [730 1090]; sigma_1 = [8000 0; 0 8000];</TT> <BR>
<TT>&#187; X1 = randn(N,2) * sqrtm(sigma_1) + repmat(mu,N,1);</TT> <BR>
<TT>&#187; gausview(X1,mu,sigma_1,'Sample X1');</TT> <BR>Repeat for the two other variance matrices <!-- MATH $\ensuremath\boldsymbol{\Sigma}_2$ --><SPAN CLASS="MATH"><IMG WIDTH="21" HEIGHT="26" ALIGN="MIDDLE" BORDER="0" SRC="img40.gif" ALT="$ \ensuremath\boldsymbol{\Sigma}_2$"></SPAN> and <!-- MATH $\ensuremath\boldsymbol{\Sigma}_3$ --><SPAN CLASS="MATH"><IMG WIDTH="21" HEIGHT="26" ALIGN="MIDDLE" BORDER="0" SRC="img41.gif" ALT="$ \ensuremath\boldsymbol{\Sigma}_3$"></SPAN>.
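To connect the sample clouds back to the density of Eq. (1), the 2-D Gaussian pdf can also be evaluated directly. A stdlib-only Python sketch, with the 2&times;2 determinant and inverse written out by hand (the function name <TT>gauss_pdf_2d</TT> is ours):

```python
import math

def gauss_pdf_2d(x, mu, sigma):
    # Eq. (1) for d = 2:
    # g(x) = exp(-0.5 (x-mu)^T Sigma^{-1} (x-mu)) / (sqrt(2 pi)^d sqrt(det Sigma))
    det = sigma[0][0] * sigma[1][1] - sigma[0][1] * sigma[1][0]
    inv = [[ sigma[1][1] / det, -sigma[0][1] / det],
           [-sigma[1][0] / det,  sigma[0][0] / det]]
    d0, d1 = x[0] - mu[0], x[1] - mu[1]
    # quadratic form (x-mu)^T Sigma^{-1} (x-mu)
    q = d0 * (inv[0][0] * d0 + inv[0][1] * d1) \
      + d1 * (inv[1][0] * d0 + inv[1][1] * d1)
    return math.exp(-0.5 * q) / (2.0 * math.pi * math.sqrt(det))

mu = [730.0, 1090.0]
sigma_1 = [[8000.0, 0.0], [0.0, 8000.0]]
# The density is maximal at the mean; for Sigma_1 the peak equals 1/(2*pi*8000).
print(gauss_pdf_2d(mu, mu, sigma_1))
```

Evaluating this function on a grid around <b>&mu;</b> reproduces the 2-D/3-D pdf surfaces that <TT>gausview</TT> displays.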
Use the radio buttons to switch the plots on/off. Use the ``view''
