New estimate for the mean:

\begin{align}
\frac{\partial \mathcal{L}(\theta)}{\partial \mu_i}
  &= \sum_{j=1}^m P(x_i \mid \mathbf{y}_j, \theta_t) \left( -\frac{\partial}{\partial \mu_i} \frac{1}{2}(\mathbf{y}_j - \mu_i)^T \Sigma_i^{-1} (\mathbf{y}_j - \mu_i) \right) \\
  &= \sum_{j=1}^m P(x_i \mid \mathbf{y}_j, \theta_t) \left( -\frac{1}{2}(\Sigma_i^{-1} + \Sigma_i^{-T})(\mathbf{y}_j - \mu_i)(-1) \right) \\
  &= \sum_{j=1}^m P(x_i \mid \mathbf{y}_j, \theta_t)\, \Sigma_i^{-1}(\mathbf{y}_j - \mu_i) \\
  &= 0
\end{align}

which gives

\begin{align}
\sum_{j=1}^m P(x_i \mid \mathbf{y}_j, \theta_t)\, \Sigma_i^{-1} \mu_i &= \sum_{j=1}^m P(x_i \mid \mathbf{y}_j, \theta_t)\, \Sigma_i^{-1} \mathbf{y}_j \\
\mu_i \sum_{j=1}^m P(x_i \mid \mathbf{y}_j, \theta_t) &= \sum_{j=1}^m P(x_i \mid \mathbf{y}_j, \theta_t)\, \mathbf{y}_j \\
\mu_i &= \frac{\sum_{j=1}^m P(x_i \mid \mathbf{y}_j, \theta_t)\, \mathbf{y}_j}{\sum_{j=1}^m P(x_i \mid \mathbf{y}_j, \theta_t)}
\end{align}
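As a concrete check on this result, here is a minimal NumPy sketch of the mean update, assuming (hypothetically) that `Y` is an m-by-d array whose rows are the observations y_j and `resp[j, i]` stores the responsibility P(x_i | y_j, θ_t); the function name and array layout are illustrative, not part of the article.

```python
import numpy as np

def update_means(Y, resp):
    """Weighted-mean update: mu_i = sum_j r_ji * y_j / sum_j r_ji.

    Y    : (m, d) array of observations y_j            (assumed layout)
    resp : (m, n) array, resp[j, i] = P(x_i | y_j, theta_t)
    """
    weights = resp.sum(axis=0)               # per-component sum_j P(x_i | y_j, theta_t)
    return (resp.T @ Y) / weights[:, None]   # (n, d): one mean vector per component
```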
New estimate for covariance:

\begin{align}
\frac{\partial \mathcal{L}(\theta)}{\partial \Sigma_i}
  &= \sum_{j=1}^m P(x_i \mid \mathbf{y}_j, \theta_t) \left( -\frac{\partial}{\partial \Sigma_i} \frac{1}{2} \ln \left| \Sigma_i \right| - \frac{\partial}{\partial \Sigma_i} \frac{1}{2}(\mathbf{y}_j - \mu_i)^T \Sigma_i^{-1} (\mathbf{y}_j - \mu_i) \right) \\
  &= \sum_{j=1}^m P(x_i \mid \mathbf{y}_j, \theta_t) \left( -\frac{1}{2} \Sigma_i^{-T} + \frac{1}{2} \Sigma_i^{-T}(\mathbf{y}_j - \mu_i)(\mathbf{y}_j - \mu_i)^T \Sigma_i^{-T} \right) \\
  &= 0
\end{align}

which gives

\begin{align}
\sum_{j=1}^m P(x_i \mid \mathbf{y}_j, \theta_t)\, \Sigma_i^{-1} &= \sum_{j=1}^m P(x_i \mid \mathbf{y}_j, \theta_t)\, \Sigma_i^{-1} (\mathbf{y}_j - \mu_i)(\mathbf{y}_j - \mu_i)^T \Sigma_i^{-1} \\
\sum_{j=1}^m P(x_i \mid \mathbf{y}_j, \theta_t) &= \sum_{j=1}^m P(x_i \mid \mathbf{y}_j, \theta_t)\, \Sigma_i^{-1} (\mathbf{y}_j - \mu_i)(\mathbf{y}_j - \mu_i)^T \\
\Sigma_i &= \frac{\sum_{j=1}^m P(x_i \mid \mathbf{y}_j, \theta_t)\, (\mathbf{y}_j - \mu_i)(\mathbf{y}_j - \mu_i)^T}{\sum_{j=1}^m P(x_i \mid \mathbf{y}_j, \theta_t)}
\end{align}
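A matching sketch of the covariance update, under the same assumed array layout, with `means` holding the freshly updated μ_i from the previous step:

```python
import numpy as np

def update_covariances(Y, resp, means):
    """Weighted-covariance update:
    Sigma_i = sum_j r_ji (y_j - mu_i)(y_j - mu_i)^T / sum_j r_ji
    """
    n = resp.shape[1]
    d = Y.shape[1]
    covs = np.empty((n, d, d))
    for i in range(n):
        diff = Y - means[i]                        # (m, d) rows of y_j - mu_i
        weighted = resp[:, i, None] * diff         # scale each row by its responsibility
        covs[i] = weighted.T @ diff / resp[:, i].sum()
    return covs
```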
New estimate for class probability:

\begin{align}
\frac{\partial \mathcal{L}(\theta)}{\partial P(x_i \mid \theta)}
  &= \left( \sum_{j=1}^m P(x_i \mid \mathbf{y}_j, \theta_t)\, \frac{\partial \ln P(x_i \mid \theta)}{\partial P(x_i \mid \theta)} \right) - \lambda \left( \frac{\partial P(x_i \mid \theta)}{\partial P(x_i \mid \theta)} \right) \\
  &= \left( \sum_{j=1}^m P(x_i \mid \mathbf{y}_j, \theta_t)\, \frac{1}{P(x_i \mid \theta)} \right) - \lambda \\
  &= 0
\end{align}

which gives

\begin{align}
\sum_{j=1}^m P(x_i \mid \mathbf{y}_j, \theta_t)\, \frac{1}{P(x_i \mid \theta)} &= \lambda \\
P(x_i \mid \theta) &= \frac{1}{\lambda} \sum_{j=1}^m P(x_i \mid \mathbf{y}_j, \theta_t)
\end{align}
Inserting into the constraint:

\begin{align}
\sum_{i=1}^{n} P(x_i \mid \theta) &= \sum_{i=1}^{n} \frac{1}{\lambda} \sum_{j=1}^m P(x_i \mid \mathbf{y}_j, \theta_t) = 1 \\
\lambda &= \sum_{i=1}^{n} \sum_{j=1}^m P(x_i \mid \mathbf{y}_j, \theta_t)
\end{align}
Inserting λ into our estimate:

\begin{align}
P(x_i \mid \theta) &= \frac{1}{\lambda} \sum_{j=1}^m P(x_i \mid \mathbf{y}_j, \theta_t) \\
  &= \frac{\sum_{j=1}^m P(x_i \mid \mathbf{y}_j, \theta_t)}{\sum_{i=1}^{n} \sum_{j=1}^m P(x_i \mid \mathbf{y}_j, \theta_t)}
\end{align}
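The class-probability (mixing-weight) update thus reduces to normalizing the per-component responsibility totals. A sketch, again with the hypothetical `resp` array from above:

```python
import numpy as np

def update_class_probabilities(resp):
    """P(x_i | theta) = sum_j r_ji / sum_i sum_j r_ji."""
    per_class = resp.sum(axis=0)        # numerator: sum over observations j for each class i
    return per_class / per_class.sum()  # denominator: sum over all classes and observations
```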
These estimates now become our $\theta_{t+1}$, to be used in the next estimation step.
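To show how the three updates fit together with the expectation step, here is a rough sketch of one full EM iteration for a Gaussian mixture. It assumes the hypothetical helper functions sketched above, uses SciPy's multivariate normal density for the E-step, and is meant as an illustration of the update cycle rather than the article's reference implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_step(Y, means, covs, priors):
    """One EM iteration theta_t -> theta_{t+1} for a Gaussian mixture (sketch)."""
    m = Y.shape[0]
    n = len(priors)

    # E-step: responsibilities P(x_i | y_j, theta_t) by Bayes' rule,
    # i.e. class prior times Gaussian likelihood, normalized over components.
    joint = np.empty((m, n))
    for i in range(n):
        joint[:, i] = priors[i] * multivariate_normal.pdf(Y, mean=means[i], cov=covs[i])
    resp = joint / joint.sum(axis=1, keepdims=True)

    # M-step: the closed-form updates derived above become theta_{t+1}.
    new_means = update_means(Y, resp)
    new_covs = update_covariances(Y, resp, new_means)
    new_priors = update_class_probabilities(resp)
    return new_means, new_covs, new_priors
```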
References
- Arthur Dempster, Nan Laird, and Donald Rubin. "Maximum likelihood from incomplete data via the EM algorithm". Journal of the Royal Statistical Society, Series B, 39(1):1–38, 1977. http://links.jstor.org/sici?sici=0035-9246%281977%2939%3A1%3C1%3AMLFIDV%3E2.0.CO%3B2-Z
- Robert Hogg, Joseph McKean, and Allen Craig. Introduction to Mathematical Statistics, pp. 359–364. Upper Saddle River, NJ: Pearson Prentice Hall, 2005.
- Radford Neal and Geoffrey Hinton. "A view of the EM algorithm that justifies incremental, sparse, and other variants". In Michael I. Jordan (editor), Learning in Graphical Models, pp. 355–368. Cambridge, MA: MIT Press, 1999.
- The on-line textbook Information Theory, Inference, and Learning Algorithms (http://www.inference.phy.cam.ac.uk/mackay/itila/), by David J.C. MacKay, includes simple examples of the EM algorithm such as clustering using the soft k-means algorithm, and emphasizes the variational view of the EM algorithm.
- "A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models" (http://citeseer.ist.psu.edu/bilmes98gentle.html), by J. Bilmes, includes a simplified derivation of the EM equations for Gaussian mixtures and Gaussian mixture hidden Markov models.
- "Variational Algorithms for Approximate Bayesian Inference" (http://www.cse.buffalo.edu/faculty/mbeal/thesis/index.html), by M. J. Beal, includes comparisons of EM to variational Bayesian EM and derivations of several models, including variational Bayesian HMMs.
- "The Expectation Maximization Algorithm" (http://www.cc.gatech.edu/~dellaert/em-paper.pdf), by Frank Dellaert, gives an easier explanation of the EM algorithm in terms of lower-bound maximization.
See also
- Estimation theory
- Data clustering
- K-means algorithm
