<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<!-- saved from url=(0050)http://www.yobology.info/harbin/part1/part_1_1.htm -->
<HTML><HEAD><TITLE>Part1_1</TITLE>
<META http-equiv=content-type content="text/html; charset=shift_jis">
<META content="MSHTML 6.00.2900.2180" name=GENERATOR></HEAD>
<BODY text=black vLink=black aLink=black link=black bgColor=#223344>
<TABLE
style="FONT-WEIGHT: normal; FONT-SIZE: 14pt; FONT-STYLE: normal; FONT-FAMILY: 'Times New Roman'; BACKGROUND-COLOR: rgb(204,204,204); TEXT-DECORATION: none"
width=773 bgColor=#cccccc border=0>
<TBODY>
<TR
style="BORDER-RIGHT: 4px ridge; BORDER-TOP: 4px ridge; BORDER-LEFT: 4px ridge; BORDER-BOTTOM: 4px ridge; BACKGROUND-COLOR: rgb(204,204,204)">
<TD
style="PADDING-RIGHT: 8px; PADDING-LEFT: 8px; FONT-SIZE: 16pt; PADDING-BOTTOM: 8px; LINE-HEIGHT: 150%; PADDING-TOP: 8px; FONT-FAMILY: 'Times New Roman'"
borderColor=#0099ff borderColorLight=#0099ff width=753
borderColorDark=#0099ff height=78>
<P align=right><I><A
href="http://www.yobology.info/harbin/part1/index.htm">index</A></I></P>
<P align=center>Part 1 </P>
<P align=center>Chapter 1 Basic Preliminary (Shannon's
Theory)<BR></P></TD></TR>
<TR>
<TD
style="PADDING-RIGHT: 8px; PADDING-LEFT: 8px; FONT-SIZE: 14pt; PADDING-BOTTOM: 8px; LINE-HEIGHT: 150%; PADDING-TOP: 8px; FONT-FAMILY: 'Times New Roman'; BACKGROUND-COLOR: rgb(204,204,204)"
borderColor=#0099ff borderColorLight=#0099ff width=753
borderColorDark=#0099ff height=428>
<P>§1 BIT<BR> When we observe an event that occurs with
probability <I>p</I>, we gain the information</P>
<P align=center><IMG src="Part1_1.files/part_1_1_htm_eqn3223.gif" border=0
NAMO_EQN__><!--NAMO_EQN__ 224 1I=\: -\log _{2}\: p\quad bit--></P>
<P align=center> <IMG height=347 src="Part1_1.files/img5.gif"
width=563 border=0></P>
<P><SPAN style="FONT-SIZE: 12pt"><FONT color=#660000>Note: In coin
tossing, let the probability of heads or tails be 1/2. Then we get 1 bit
from a single trial. BIT is an acronym for binary information
unit.</FONT></SPAN></P>
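<P>As a quick numerical check of the formula above, the following Python
sketch (an addition to these notes, not part of the original page) evaluates
I = -log2 p:</P>

```python
import math

def self_information(p: float) -> float:
    """Information (in bits) gained from observing an event of probability p."""
    return -math.log2(p)

# A fair coin toss (p = 1/2) yields exactly 1 bit, as in the note above.
print(self_information(0.5))   # 1.0
# A rarer event carries more information.
print(self_information(0.25))  # 2.0
```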
<P>§2 Entropy<BR> Consider a memoryless source outputting the
symbol "0" or "1", and let <I>p</I> denote the probability of symbol
"0". The average information emitted by this source is given by</P>
<P align=center><IMG src="Part1_1.files/part_1_1_htm_eqn27974.gif"
border=0 NAMO_EQN__><!--NAMO_EQN__ 208 1H(X)=-p\log _{2}p-(1-p)\log _{2}(1-p)\quad bit\slash symbol--></P>
<P>This increases monotonically from <IMG
src="Part1_1.files/part_1_1_htm_eqn8136.gif" border=0 NAMO_EQN__><!--NAMO_EQN__ 192 1p=0--> to <IMG
src="Part1_1.files/part_1_1_htm_eqn8199.gif" border=0 NAMO_EQN__><!--NAMO_EQN__ 192 1p=1\slash 2-->, and hence measures the randomness
of the symbol sequence.</P>
<P align=center><IMG height=317 alt=img15.gif
src="Part1_1.files/img15.gif" width=514 border=0></P>
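<P>The binary entropy function plotted above is easy to evaluate directly;
this Python sketch (an addition, not part of the original notes) computes
H(X) in bit/symbol:</P>

```python
import math

def binary_entropy(p: float) -> float:
    """H(X) = -p log2 p - (1-p) log2 (1-p); H(0) = H(1) = 0 by convention."""
    if p == 0.0 or p == 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Entropy rises monotonically from p = 0 to its maximum of 1 bit at p = 1/2.
for p in (0.0, 0.1, 0.3, 0.5):
    print(f"H({p}) = {binary_entropy(p):.4f} bit/symbol")
```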
<P align=center>THEOREM (source coding)<BR><I>H(X)</I> gives the lower
bound of reversible (lossless) information compression.</P>
<P align=center>&lt;Sketch of proof&gt;</P>
<UL>
<OL>
<LI align="left">
<DIV align=left>There is a compact reversible coding scheme
(Huffman coding) that satisfies<BR> <IMG
src="Part1_1.files/part_1_1_htm_eqn9591.gif" border=0 NAMO_EQN__><!--NAMO_EQN__ 192 1H(S)\leq L\leq H(S)+1,\quad where\; L\; is\; average\; of\; the\; code\; length--></DIV>
<LI align="left">
<DIV align=left>Apply the following identity for the memoryless source to
the above inequality.<BR><IMG
src="Part1_1.files/part_1_1_htm_eqn25655.gif" border=0 NAMO_EQN__><!--NAMO_EQN__ 208 1H(S^{n}\: )=nH(S),\quad where\: S^{n}\: is\: n\, th\: expansion\: of\: the\: source--></DIV>
<LI align="left">
<DIV align=left>If <I>L</I> were shorter than <IMG
src="Part1_1.files/part_1_1_htm_eqn13709.gif" border=0 NAMO_EQN__><!--NAMO_EQN__ 176 1H(S)-->, such a code could not be
reversible.</DIV></LI></OL></UL>
<P align=center>&lt;Design rule of the Huffman code&gt;</P>
<P align=center><IMG height=429 src="Part1_1.files/img60.gif" width=783
border=0></P>
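<P>The design rule illustrated above (repeatedly merge the two
least-probable nodes, prepending a branch bit to each side) can be sketched
in Python. This is an addition to the notes; the probabilities below are
illustrative, assuming p = 0.7 for the 2nd extension of the binary source:</P>

```python
import heapq
from itertools import count

def huffman_code(probs: dict) -> dict:
    """Build a binary Huffman code by merging the two least-probable nodes."""
    tiebreak = count()  # unique tag so equal probabilities never compare dicts
    heap = [(p, next(tiebreak), {sym: ""}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)  # least probable subtree
        p1, _, c1 = heapq.heappop(heap)  # second least probable subtree
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tiebreak), merged))
    return heap[0][2]

# 2nd extension of a binary source with p = 0.7 (illustrative choice).
probs = {"00": 0.49, "01": 0.21, "10": 0.21, "11": 0.09}
code = huffman_code(probs)
L = sum(probs[s] * len(code[s]) for s in probs)  # average code length L_2
print(code)
print(L)  # 1.81, i.e. 0.905 bit per source symbol
```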
<P align=center> </P>
<P align=center>&lt;Numerical examples for a binary source&gt;</P>
<P align=left>The <I>2</I>nd extension of the source outputs "00", "01", "10"
and "11" with probabilities<I> pp</I>, <I>p(1-p)</I>,
<I>(1-p)p </I>and <I>(1-p)(1-p)</I>, respectively. For this source, design a
Huffman code and estimate its average code length <IMG
src="Part1_1.files/part_1_1_htm_eqn18510.gif" border=0 NAMO_EQN__><!--NAMO_EQN__ 192 1L_{2}-->. Similarly, by further expanding the
source we obtain the following graphs. The black line shows <IMG
src="Part1_1.files/part_1_1_htm_eqn20559.gif" border=0 NAMO_EQN__><!--NAMO_EQN__ 192 1H(S^{n}\: )\slash n=H(S)-->. The orange
curve shows <IMG src="Part1_1.files/part_1_1_htm_eqn21070.gif"
border=0 NAMO_EQN__><!--NAMO_EQN__ 192 1(H(S^{n}\: )+1)\slash n=(nH(S)+1)\slash n-->.
The purple curve shows <IMG
src="Part1_1.files/part_1_1_htm_eqn26187.gif" border=0 NAMO_EQN__><!--NAMO_EQN__ 192 1L_{n}\slash n--> of the Huffman
code.</P>
<P align=center><IMG height=359 src="Part1_1.files/img61.gif" width=581
border=0></P>
<P align=center><IMG height=253 src="Part1_1.files/img63.gif" width=82
border=0> <IMG height=253 src="Part1_1.files/img64.gif" width=209
border=0></P>
<P align=center><IMG height=369 src="Part1_1.files/img62.gif" width=598
border=0></P>
<P align=center><IMG height=253 src="Part1_1.files/img65.gif" width=82
border=0> <IMG height=253 src="Part1_1.files/img66.gif" width=173
border=0></P>
<P align=left> </P>
<P>§3 Mutual information<BR>I am a messenger who informs you of the source's
output, "0" or "1". But I am careless and sometimes tell you the inverse
of the output. Let the probability of my mistake be 1-<I>r</I>. </P>
<P align=center><IMG height=301 src="Part1_1.files/img4.gif" width=582
border=0></P>
<P>Mutual information is the amount of information about the source's output
contained in my message, and is given by</P>
<P align=center><IMG src="Part1_1.files/part_1_1_htm_eqn7702.gif" border=0
NAMO_EQN__><!--NAMO_EQN__ 208 1M(X,Y)=H(Y)-H(Y\mid X)=H(X)-H(X\mid Y)--><BR><IMG
src="Part1_1.files/part_1_1_htm_eqn12584.gif" border=0 NAMO_EQN__><!--NAMO_EQN__ 208 1H(Y\mid X)\; is\; entropy\; of\; conditional\; probability\; P(my\: report\mid output)--><BR><IMG
src="Part1_1.files/part_1_1_htm_eqn7856.gif" border=0 NAMO_EQN__><!--NAMO_EQN__ 192 1H(X\mid Y)\; is\; entropy\; of\; a\; posteriory\; probability\; P(output\mid my\: report)--></P>
<P><SPAN style="FONT-SIZE: 12pt"><FONT color=#660000>Note: A small value of
<IMG src="Part1_1.files/part_1_1_htm_eqn12960.gif" border=0 NAMO_EQN__><!--NAMO_EQN__ 160 1H(X\mid Y)--> or <IMG
src="Part1_1.files/part_1_1_htm_eqn8368.gif" border=0 NAMO_EQN__><!--NAMO_EQN__ 176 1H(X\mid Y)--> implies that my report is
reliable. </FONT></SPAN></P>
<P align=center><IMG height=401 alt=img16.gif
src="Part1_1.files/img16.gif" width=535 border=0></P>
<P align=center><IMG height=511 src="Part1_1.files/img70.gif" width=590
border=0></P>
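<P>The quantity M(X,Y) = H(Y) - H(Y|X) for this messenger model can be
computed directly; here is a short Python sketch (an addition to the notes),
where <I>p</I> is P(output = "0") and <I>r</I> is the probability my report
is correct:</P>

```python
import math

def H(dist):
    """Entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def mutual_information(p: float, r: float) -> float:
    """M(X,Y) = H(Y) - H(Y|X) for a binary source P("0") = p relayed by a
    messenger who reports correctly with probability r (mistake prob. 1-r)."""
    q = p * r + (1 - p) * (1 - r)       # P(report = "0")
    return H([q, 1 - q]) - H([r, 1 - r])  # H(Y|X) = h(r) for either input

# A perfectly reliable messenger (r = 1) passes on all of H(X):
print(mutual_information(0.5, 1.0))  # 1.0
# A messenger who errs half the time conveys nothing:
print(mutual_information(0.5, 0.5))  # 0.0
```

Note that a messenger who is always wrong (r = 0) is just as informative as one who is always right, since the inversion can be undone.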
<P> </P>
<P align=center>THEOREM (channel coding)<BR>Assume that the transmission
speed of the output sequence<BR>・・・・0, 1, 1, 0, 1, 0, 1, ・・・・<BR>is <IMG
src="Part1_1.files/part_1_1_htm_eqn16731.gif" border=0 NAMO_EQN__><!--NAMO_EQN__ 208 1V\; bit\slash sec-->. There exists a
perfect error-correcting scheme at any information rate lower than</P>
<P align=center><IMG src="Part1_1.files/part_1_1_htm_eqn28487.gif"
border=0 NAMO_EQN__><!--NAMO_EQN__ 208 1M(X,Y)\cdot V\quad bit\slash sec--> .</P>
<P align=center>Proof<BR>Refer to Shannon's original paper (the general case
is difficult to prove).</P>
<P>For a completely randomized data sequence (<IMG
src="Part1_1.files/part_1_1_htm_eqn21566.gif" border=0 NAMO_EQN__><!--NAMO_EQN__ 192 1p=0.5-->), the channel capacity is
plotted as follows.</P>
<P align=center><IMG height=317 src="Part1_1.files/img7.gif" width=513
border=0></P>
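<P>For the uniform source (p = 0.5) the mutual information reduces to
1 - h(r), where h is the binary entropy and r the probability of a correct
report; this Python sketch (an addition to the notes) tabulates the curve
plotted above:</P>

```python
import math

def h(x: float) -> float:
    """Binary entropy in bits, with h(0) = h(1) = 0."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

# Channel capacity 1 - h(r) for a uniform binary source.
for r in (0.5, 0.7, 0.9, 1.0):
    print(f"r = {r}: capacity = {1 - h(r):.4f} bit/symbol")
```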
<P align=left>Refer also to the graphs of the other average information
quantities, <I>H(Y)</I>, <I>H(Y|X)</I>, <I>H(X|Y)</I> and <I>H(X,Y)</I>.</P>
<P align=center><IMG height=388 src="Part1_1.files/img71.gif" width=517
border=0></P>
<P align=center><IMG height=388 src="Part1_1.files/img72.gif" width=517
border=0></P>
<P align=center><IMG height=388 src="Part1_1.files/img73.gif" width=517
border=0></P>
<P align=center><IMG height=388 src="Part1_1.files/img74.gif" width=517
border=0></P>
<P align=center> </P></TD></TR></TBODY></TABLE>
<P> </P></BODY></HTML>