Expectation-maximization algorithm

From Wikipedia, the free encyclopedia

In statistical computing, an expectation-maximization (EM) algorithm is an algorithm for finding maximum likelihood estimates of parameters in probabilistic models, where the model depends on unobserved latent variables. EM is frequently used for data clustering in machine learning and computer vision. EM alternates between performing an expectation (E) step, which computes an expectation of the likelihood by including the latent variables as if they were observed, and a maximization (M) step, which computes the maximum likelihood estimates of the parameters by maximizing the expected likelihood found on the E step. The parameters found on the M step are then used to begin another E step, and the process is repeated:
    i = 0, t_i = [arbitrary]
    REPEAT
    {
        compute Q(t | t_i)
        find t_{i+1} such that Q(t_{i+1} | t_i) = max_t Q(t | t_i)
        IF (t_i != t_{i+1})
            t_i = t_{i+1}
        ELSE
            FINISH
    }
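
Here Q(t | t_i) denotes the expected complete-data log-likelihood, $Q(t \mid t_i) = E[\log p(\mathbf{y}, \mathbf{x} \mid t) \mid \mathbf{y}, t_i]$, with the expectation taken over the missing data $\mathbf{x}$ under the current estimate $t_i$; the sections below make this precise.

As a concrete illustration of the loop, here is a minimal sketch in Python (our addition, not part of the original article) that runs EM on a two-component one-dimensional Gaussian mixture, the setting of the mixture example listed in the contents below. All names (em_gmm, normal_pdf, the parameters w, mu, var) are our own choices, and the convergence test compares successive parameter vectors against a tolerance, the practical analogue of the exact t_i != t_{i+1} test above.

    # Minimal EM sketch for a two-component 1-D Gaussian mixture.
    # Illustrative code only: names and initialization are assumptions,
    # not taken from the article.
    import math
    import random

    def normal_pdf(y, mu, var):
        return math.exp(-(y - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    def em_gmm(data, n_iter=200, tol=1e-8):
        # t_0 = [arbitrary]: crude initial guess for theta = (w, mu, var)
        w, mu, var = [0.5, 0.5], [min(data), max(data)], [1.0, 1.0]
        for _ in range(n_iter):
            # E step: posterior probability (responsibility) that each
            # sample came from each component, p(x | y, theta_i).
            resp = []
            for y in data:
                p = [w[k] * normal_pdf(y, mu[k], var[k]) for k in range(2)]
                s = p[0] + p[1]
                resp.append([p[0] / s, p[1] / s])
            # M step: re-estimate theta by maximizing the expected
            # complete-data log-likelihood Q(theta | theta_i); for a
            # Gaussian mixture this gives responsibility-weighted averages.
            new_w, new_mu, new_var = [], [], []
            for k in range(2):
                nk = sum(r[k] for r in resp)
                new_w.append(nk / len(data))
                new_mu.append(sum(r[k] * y for r, y in zip(resp, data)) / nk)
                new_var.append(sum(r[k] * (y - new_mu[k]) ** 2
                                   for r, y in zip(resp, data)) / nk)
            # Convergence: stop when the parameters (essentially) stop moving.
            delta = max(abs(a - b) for a, b in
                        zip(w + mu + var, new_w + new_mu + new_var))
            w, mu, var = new_w, new_mu, new_var
            if delta < tol:
                break
        return w, mu, var

    random.seed(0)
    data = ([random.gauss(-2.0, 1.0) for _ in range(300)]
            + [random.gauss(3.0, 1.0) for _ in range(700)])
    print(em_gmm(data))

On data drawn from components at -2 and 3 as above, the estimates should approach weights near 0.3 and 0.7 and means near the true values, although EM in general converges only to a local maximum of the likelihood.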

Contents

1 Specification of the EM procedure
  1.1 Estimate unobservable data
  1.2 Maximize expected log-likelihood for the complete dataset
  1.3 Properties
2 Incremental versions
3 Relation to variational Bayes methods
4 Example: Mixture Gaussian
  4.1 E-step
  4.2 M-step
5 References
6 See also
Specification of the EM procedure

Let $\mathbf{y}$ denote the incomplete data, consisting of values of observable variables, and let $\mathbf{x}$ denote the missing data. Together, $\mathbf{x}$ and $\mathbf{y}$ form the complete data. $\mathbf{x}$ can be either actual missing measurements or a hidden variable that would make the problem easier if its value were known. For instance, in mixture models, the likelihood formula would be much more convenient if the mixture components that "generated" the samples were known (see the example below).
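
To make the mixture case concrete (an illustration added here; the weights $w_k$ and component parameters $\mu_k, \sigma_k^2$ are assumed notation for a $K$-component Gaussian mixture), compare the incomplete-data likelihood, which must sum over every possible component assignment, with the complete-data likelihood obtained when each label $x_j$ is known:

$$p(\mathbf{y} \mid \theta) = \prod_{j} \sum_{k=1}^{K} w_k \, \mathcal{N}(y_j \mid \mu_k, \sigma_k^2), \qquad p(\mathbf{y}, \mathbf{x} \mid \theta) = \prod_{j} w_{x_j} \, \mathcal{N}(y_j \mid \mu_{x_j}, \sigma_{x_j}^2).$$

The sum inside the first product makes the log-likelihood awkward to maximize directly, whereas the second form splits, after taking logarithms, into independent per-component terms.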
Estimate unobservable data

Let $p$ denote the joint probability distribution (continuous case) or probability mass function (discrete case) of the complete data, with parameters given by the vector $\theta$: $p(\mathbf{y}, \mathbf{x} \mid \theta)$. This function then also gives the complete-data likelihood. Further, note that the conditional distribution of the missing data given the observed data can be expressed as

$$p(\mathbf{x} \mid \mathbf{y}, \theta) = \frac{p(\mathbf{y}, \mathbf{x} \mid \theta)}{p(\mathbf{y} \mid \theta)} = \frac{p(\mathbf{y} \mid \mathbf{x}, \theta)\, p(\mathbf{x} \mid \theta)}{\int p(\mathbf{y} \mid \mathbf{x}, \theta)\, p(\mathbf{x} \mid \theta)\, d\mathbf{x}}$$

by using Bayes' rule and the law of total probability. This formulation only requires knowledge of the observation likelihood given the unobservable data, $p(\mathbf{y} \mid \mathbf{x}, \theta)$, and the probability of the unobservable data, $p(\mathbf{x} \mid \theta)$.
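
For a discrete latent variable, the integral in the denominator becomes a sum and the formula can be evaluated directly. A short sketch (our own illustration, reusing normal_pdf and the mixture parameters w, mu, var from the sketch above):

    # Posterior p(x = k | y, theta) for a discrete mixture label x:
    # the Bayes-rule formula above with the integral replaced by a sum.
    def posterior(y, w, mu, var):
        # Numerator terms p(y | x = k, theta) * p(x = k | theta) ...
        joint = [w[k] * normal_pdf(y, mu[k], var[k]) for k in range(len(w))]
        # ... normalized by the total probability p(y | theta).
        total = sum(joint)
        return [j / total for j in joint]

    # For an observation midway between the components, the posterior
    # reflects both the distances and the prior weights w.
    print(posterior(0.5, [0.3, 0.7], [-2.0, 3.0], [1.0, 1.0]))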
