  $p(\mathbf{x}_i|\boldsymbol{\Theta})$ for that point. In the case of Gaussian models $\boldsymbol{\Theta} = (\boldsymbol{\mu}, \boldsymbol{\Sigma})$, this amounts to the evaluation of equation 1.
- Joint likelihood: for a set of independent, identically distributed (i.i.d.) samples, say $X = \{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_N\}$, the joint (or total) likelihood is the product of the likelihoods of the individual points. For instance, in the Gaussian case:

  $$p(X|\boldsymbol{\Theta}) = \prod_{i=1}^{N} p(\mathbf{x}_i|\boldsymbol{\Theta}) = \prod_{i=1}^{N} p(\mathbf{x}_i|\boldsymbol{\mu},\boldsymbol{\Sigma}) = \prod_{i=1}^{N} g_{(\boldsymbol{\mu},\boldsymbol{\Sigma})}(\mathbf{x}_i) \qquad (2)$$
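As an illustration of equation (2), here is a minimal MATLAB sketch that accumulates the product of per-point Gaussian likelihoods. It assumes X is an N-by-d data matrix and that mu (1-by-d) and sigma (d-by-d) hold the model parameters; these names are ours, not the tutorial's:

» [N, d] = size(X);
» jointLike = 1;
» for i = 1:N
    xc = X(i,:) - mu;                             % centered sample
    gi = exp(-0.5 * xc * inv(sigma) * xc') ...
         / (sqrt(2*pi)^d * sqrt(det(sigma)));     % Gaussian density, eq. (1)
    jointLike = jointLike * gi;                   % product over the samples
  end;

Note that for a large N this product underflows to zero in double precision, which is exactly what motivates the next question.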
Question:

Why might we want to compute the log-likelihood rather than the simple likelihood?
Computing the log-likelihood turns the product into a sum:

$$p(X|\boldsymbol{\Theta}) = \prod_{i=1}^{N} p(\mathbf{x}_i|\boldsymbol{\Theta}) \quad \Leftrightarrow \quad \log p(X|\boldsymbol{\Theta}) = \log \prod_{i=1}^{N} p(\mathbf{x}_i|\boldsymbol{\Theta}) = \sum_{i=1}^{N} \log p(\mathbf{x}_i|\boldsymbol{\Theta})$$
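Numerically, the sum of logs is also far better behaved than the raw product, since a product of many small densities underflows in double precision. A small sketch with hypothetical per-point likelihood values:

» p = 1e-5 * ones(1, 10000);   % 10000 hypothetical per-point likelihoods
» prod(p)                      % underflows to 0
» sum(log(p))                  % about -1.1513e+05: finite and usable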
In the Gaussian case, it also avoids the computation of the exponential:

$$p(\mathbf{x}|\boldsymbol{\Theta}) = \frac{1}{\sqrt{2\pi}^d \sqrt{\det\left(\boldsymbol{\Sigma}\right)}} \, e^{-\frac{1}{2} (\mathbf{x}-\boldsymbol{\mu})^{\mathsf T} \boldsymbol{\Sigma}^{-1} (\mathbf{x}-\boldsymbol{\mu})}$$

$$\log p(\mathbf{x}|\boldsymbol{\Theta}) = \frac{1}{2} \left[ -d \log\left(2\pi\right) - \log\left(\det\left(\boldsymbol{\Sigma}\right)\right) - (\mathbf{x}-\boldsymbol{\mu})^{\mathsf T} \boldsymbol{\Sigma}^{-1} (\mathbf{x}-\boldsymbol{\mu}) \right] \qquad (3)$$
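Equation (3) translates almost line for line into code. A minimal sketch of the log-density of a single point (the function name logGauss is our own, not part of the tutorial's toolset):

function lp = logGauss(x, mu, sigma)
    % Log-density of point x under N(mu, sigma), following eq. (3).
    d  = numel(mu);
    xc = x(:) - mu(:);
    lp = 0.5 * (-d*log(2*pi) - log(det(sigma)) - xc' * (sigma \ xc));
end

Solving sigma \ xc instead of forming inv(sigma) is the usual, numerically safer choice.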
Furthermore, since $\log(x)$ is a monotonically increasing function, the log-likelihoods preserve the order relations of the likelihoods,

$$p(x|\boldsymbol{\Theta}_1) > p(x|\boldsymbol{\Theta}_2) \quad \Leftrightarrow \quad \log p(x|\boldsymbol{\Theta}_1) > \log p(x|\boldsymbol{\Theta}_2),$$

so they can be used directly for classification.
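In practice, classification therefore reduces to picking the model with the largest log-likelihood. A one-line sketch, assuming scores logLike1, ..., logLike4 for four candidate models have been computed as in the Example further below:

» [dummy, best] = max([logLike1, logLike2, logLike3, logLike4])   % index of the winning model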
Find the right statements:

We can further simplify the computation of the log-likelihood in eq. (3) for classification by

☐ dropping the division by two: $\frac{1}{2}\left[\ldots\right]$,
☐ dropping the term $d \log\left(2\pi\right)$,
☐ dropping the term $\log\left(\det\left(\boldsymbol{\Sigma}\right)\right)$,
☐ dropping the term $(\mathbf{x}-\boldsymbol{\mu})^{\mathsf T} \boldsymbol{\Sigma}^{-1} (\mathbf{x}-\boldsymbol{\mu})$,
☐ calculating the term $\log\left(\det\left(\boldsymbol{\Sigma}\right)\right)$ in advance.

We can drop term(s) because:
☐ The term(s) are independent of $\boldsymbol{\mu}$.
☐ The terms are negligibly small.
☐ The term(s) are independent of the classes.

In summary, log-likelihoods are simpler to compute and can be used directly for classification tasks.
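To make "simpler computation" concrete, here is one possible per-class score under our own naming (which dropped terms are actually legitimate is exactly what the checkboxes above ask you to decide). The term $\log\left(\det\left(\boldsymbol{\Sigma}\right)\right)$ depends only on the model, so it can be evaluated once per class:

» logDetSigma = log(det(sigma));               % computed once, in advance
» xc = x(:) - mu(:);
» score = -logDetSigma - xc' * (sigma \ xc);   % larger score = better match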
Experiment:

Given the following 4 Gaussian models $\boldsymbol{\Theta}_i = (\boldsymbol{\mu}_i, \boldsymbol{\Sigma}_i)$:

$$\mathcal{N}_1: \; \boldsymbol{\Theta}_1 = \left( \begin{bmatrix} 730 \\ 1090 \end{bmatrix}, \begin{bmatrix} 8000 & 0 \\ 0 & 8000 \end{bmatrix} \right) \qquad \mathcal{N}_2: \; \boldsymbol{\Theta}_2 = \left( \begin{bmatrix} 730 \\ 1090 \end{bmatrix}, \begin{bmatrix} 8000 & 0 \\ 0 & 18500 \end{bmatrix} \right)$$

$$\mathcal{N}_3: \; \boldsymbol{\Theta}_3 = \left( \begin{bmatrix} 730 \\ 1090 \end{bmatrix}, \begin{bmatrix} 8000 & 8400 \\ 8400 & 18500 \end{bmatrix} \right) \qquad \mathcal{N}_4: \; \boldsymbol{\Theta}_4 = \left( \begin{bmatrix} 270 \\ 1690 \end{bmatrix}, \begin{bmatrix} 8000 & 8400 \\ 8400 & 18500 \end{bmatrix} \right)$$
compute the following log-likelihoods for the whole sample $X_3$ (10000 points):

$$\log p(X_3|\boldsymbol{\Theta}_1), \; \log p(X_3|\boldsymbol{\Theta}_2), \; \log p(X_3|\boldsymbol{\Theta}_3), \; \text{and} \; \log p(X_3|\boldsymbol{\Theta}_4).$$
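Before computing anything, the four parameter sets can be entered once, e.g. as cell arrays (the layout and the names mu_all, sigma_all are ours; the values are those given above):

» mu_all    = {[730 1090], [730 1090], [730 1090], [270 1690]};
» sigma_all = {[8000 0; 0 8000], [8000 0; 0 18500], ...
               [8000 8400; 8400 18500], [8000 8400; 8400 18500]};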
Example:

» N = size(X3,1)
» mu_1 = [730 1090]; sigma_1 = [8000 0; 0 8000];   % parameters of N1
» logLike1 = 0;
» for i = 1:N
    logLike1 = logLike1 + (X3(i,:) - mu_1) * inv(sigma_1) * (X3(i,:) - mu_1)';
  end;                                             % sum of quadratic forms
» logLike1 = -0.5 * (logLike1 + N*log(det(sigma_1)) + 2*N*log(2*pi))   % eq. (3) summed over X3, with d = 2
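The loop can also be vectorized. A sketch equivalent to the code above (repmat keeps it compatible with older MATLAB versions):

» Xc = X3 - repmat(mu_1, N, 1);        % center all samples at once
» q  = sum((Xc / sigma_1) .* Xc, 2);   % one quadratic form per row
» logLike1 = -0.5 * (sum(q) + N*log(det(sigma_1)) + 2*N*log(2*pi))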
Note: use the function gausview to compare the relative positions of the models $\mathcal{N}_1$, $\mathcal{N}_2$, $\mathcal{N}_3$ and $\mathcal{N}_4$ with respect to the data set $X_3$, e.g.:

» mu_1 = [730 1090]; sigma_1 = [8000 0; 0 8000];
» gausview(X3, mu_1, sigma_1, 'Comparison of X3 and N1');
Question:

Of $\mathcal{N}_1$, $\mathcal{N}_2$, $\mathcal{N}_3$ and $\mathcal{N}_4$, which model best "explains" the data $X_3$? Which model has the highest number of parameters (with non-zero values)? Which model would you choose as a good compromise between the number of parameters and the capacity to represent the data accurately?
