<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<!-- saved from url=(0048)http://the-art-of-ecc.com/8_Iterative/index.html -->
<HTML><HEAD><TITLE>The Art of Error Correcting Coding: Iterative decoding and capacity</TITLE>
<META http-equiv=Content-Type content="text/html; charset=iso-8859-1">
<META content="Robert Morelos-Zaragoza" name=Author>
<META
content="Error Control Coding,Error CorrectingCoding,Error Correcting Codes,FEC,Turbo Codes,Iterative Decoding,DigitalCommunications,Wireless,Satellite,Data,Coded Modulation,Golay,Hamming,BCH,Reed Solomon,Viterbi Decoder,Soft Decision Decoding,Sudan Algorithm,Unequal Error Protection,Variable Rate Coding,Adaptive Coding,Convolutional Codes,LDPC,Low-Density Parity-Check Codes,The Art of Error-Correcting Coding,Capacity-achieving codes,Coding is not dead, is more alive than ever"
name=description>
<META content="MSHTML 6.00.2900.3395" name=GENERATOR></HEAD>
<BODY bgColor=#ffffff>
<CENTER><B><FONT size=+1><BIG>Iteratively decodable
codes</BIG><BR></FONT></B></CENTER>
<P>This page presents pointers to C/C++ programs for simulating iterative
decoding algorithms, including programs to compute the
constellation-constrained capacity. The design of interleavers is an important
issue, particularly for short frame lengths; computer programs to construct
several types of interleavers are given. These and other issues are discussed in
Chapter 8 of the <A
href="http://www.amazon.com/gp/product/0470015586/ref=sr_11_1/102-3025380-7157754?ie=UTF8">book</A>.
<BR> </P>
<P><BIG><B><FONT size=+1><BIG>TURBO CODES</BIG></FONT></B> </BIG></P>
<P>In the design and implementation of a turbo code in software, there are two
main issues: <BR>1. Construction of an interleaver <BR>2. Simulation of
iterative decoding </P><B>1. Interleavers</B>
<P></P><A
href="http://the-art-of-ecc.com/8_Iterative/Interleaver/ranint.c">RANDOM
interleaver</A> <BR><A
href="http://the-art-of-ecc.com/8_Iterative/Interleaver/helint.c">HELICAL
interleaver</A> <BR><A
href="http://the-art-of-ecc.com/8_Iterative/Interleaver/primint.c">PRIMITIVE
interleaver</A> <BR><A
href="http://the-art-of-ecc.com/8_Iterative/Interleaver/cyclint.c">CYCLIC
interleaver</A> <BR><A
href="http://the-art-of-ecc.com/8_Iterative/Interleaver/diagint.c">DIAGONAL
interleaver</A> <BR><A
href="http://the-art-of-ecc.com/8_Iterative/Interleaver/blockint.c">BLOCK
interleaver</A>
<P></P>
<P>The programs output the interleaver array as one integer per line (i.e.,
entries separated by newline characters). Other types of interleavers exist, but
the classes above generally yield competitive alternatives. The programs are
well documented. <BR> </P>
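<P>As an illustration of the output format, below is a minimal sketch of how a
random interleaver of length N could be generated and printed one integer per
line, using a Fisher-Yates shuffle of the identity permutation. This is an
illustrative example only, not the ranint.c program linked above.</P>
<PRE>
/* Sketch: generate a random interleaver (permutation) of length n and
   print it one integer per line, matching the format described above.
   Illustrative example only; see ranint.c for the actual program.     */
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;time.h&gt;

int main(int argc, char *argv[])
{
    int n = (argc &gt; 1) ? atoi(argv[1]) : 1024;  /* interleaver length */
    int *pi = malloc(n * sizeof(int));
    int i, j, tmp;

    srand((unsigned) time(NULL));
    for (i = 0; i &lt; n; i++)          /* start from the identity permutation */
        pi[i] = i;
    for (i = n - 1; i &gt; 0; i--) {    /* Fisher-Yates shuffle                */
        j = rand() % (i + 1);
        tmp = pi[i]; pi[i] = pi[j]; pi[j] = tmp;
    }
    for (i = 0; i &lt; n; i++)          /* one integer per line                */
        printf("%d\n", pi[i]);
    free(pi);
    return 0;
}
</PRE>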
<P><B>2. Iterative decoding</B> </P>
<P><B>MAP decoding of a parallel concatenated (turbo) convolutional code: Rate
1/3</B> <BR><A
href="http://the-art-of-ecc.com/8_Iterative/Turbo/turbo.cpp">turbo.cpp</A> <A
href="http://the-art-of-ecc.com/8_Iterative/Turbo/random.cpp">random.cpp</A> <A
href="http://the-art-of-ecc.com/8_Iterative/Turbo/random.h">random.h</A> </P>
<P>Simulation of a binary rate-1/3 turbo code with two identical rate-1/2
component recursive convolutional encoders. The memory of the encoders can be
selected between 2, 3 and 4, corresponding to 4-, 8- and 16-state trellises,
respectively. The files <I>turbo.cpp</I> and <I>random.cpp</I> must be
compiled together, e.g., with the GNU C++ compiler in a Unix-like environment
(or the equivalent in another OS) as </P>
<P>g++ -O2 turbo.cpp random.cpp -lm </P>
<P><B>NOTE</B>: This version of the program does not use <I>tail bits</I>
to terminate the trellis. As a result, performance will be worse than that of
turbo codes with tail bits, especially for short interleaver lengths. The
original program was written by Mathys Walma in 1998; please refer to the
original <A
href="http://the-art-of-ecc.com/8_Iterative/Turbo/README_1998.txt">README</A>
file. Many changes were made to the program to make it consistent with the
style of the other programs on this site. <BR> </P>
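<P>For reference, below is a minimal sketch of a rate-1/2 recursive systematic
convolutional (RSC) encoder of the kind used as a component code, here with
memory 2 (a 4-state trellis) and the commonly used generator pair (1, 5/7) in
octal. These generator polynomials are an assumption for illustration only; the
encoders actually simulated are defined in turbo.cpp.</P>
<PRE>
/* Sketch: rate-1/2 recursive systematic convolutional (RSC) encoder,
   memory 2 (4 states), generators (1, 5/7) in octal.  This is an assumed,
   commonly used component code; the actual encoders are in turbo.cpp.    */
#include &lt;stdio.h&gt;

/* Encode one information bit u; s1 and s2 hold the two memory bits.
   Returns the parity bit; the systematic output is u itself.             */
static int rsc_encode_bit(int u, int *s1, int *s2)
{
    int a = u ^ *s1 ^ *s2;   /* feedback:    1 + D + D^2  (octal 7) */
    int p = a ^ *s2;         /* feedforward: 1 + D^2      (octal 5) */
    *s2 = *s1;               /* shift the register                  */
    *s1 = a;
    return p;
}

int main(void)
{
    int u[8] = { 1, 0, 1, 1, 0, 0, 1, 0 };   /* example input frame */
    int s1 = 0, s2 = 0, k;

    for (k = 0; k &lt; 8; k++)
        printf("u=%d  parity=%d\n", u[k], rsc_encode_bit(u[k], &amp;s1, &amp;s2));
    return 0;
}
</PRE>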
<P><B>MAP decoding of a parallel concatenated (turbo) convolutional code:
Puncturing and rate 1/2</B> <BR><A
href="http://the-art-of-ecc.com/8_Iterative/Turbo/turbo_punc.cpp">turbo_punc.cpp</A>
<A href="http://the-art-of-ecc.com/8_Iterative/Turbo/random.cpp">random.cpp</A>
<A href="http://the-art-of-ecc.com/8_Iterative/Turbo/random.h">random.h</A> </P>
<P>These programs are used to simulate a rate-1/2 <I>punctured</I> turbo
code. A puncturing rule is applied in the branch metric (gamma) computation
stage, in much the same way as in the convolutional code case. In this
version, the puncturing rule is hard-coded in the program, but it should be easy
to specify it in a file instead, just as in the case of binary <A
href="http://the-art-of-ecc.com/5_Convolutional/index.html">punctured
convolutional codes</A>. </P>
<P>All other comments made for the rate-1/3 turbo code above also apply to
the punctured rate-1/2 turbo code. <BR> </P>
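<P>As an illustration, the sketch below punctures the output of a rate-1/3
turbo encoder to rate 1/2 by always transmitting the systematic bit and
alternating between the two parity streams, which is the classical pattern;
the rule hard-coded in turbo_punc.cpp may differ. At the decoder, the punctured
positions typically contribute nothing to the branch metric (gamma), i.e., they
are treated as erasures.</P>
<PRE>
/* Sketch: puncture the output of a rate-1/3 turbo encoder to rate 1/2.
   The systematic bit is always transmitted; the two parity bits are sent
   alternately (classical pattern).  The rule hard-coded in turbo_punc.cpp
   may differ; the bit values below are arbitrary, for illustration only. */
#include &lt;stdio.h&gt;

#define N 8   /* example frame length */

int main(void)
{
    int sys[N]  = { 1, 0, 1, 1, 0, 0, 1, 0 };   /* systematic bits       */
    int par1[N] = { 0, 1, 1, 0, 0, 1, 0, 1 };   /* parity from encoder 1 */
    int par2[N] = { 1, 1, 0, 0, 1, 0, 1, 1 };   /* parity from encoder 2 */
    int k;

    for (k = 0; k &lt; N; k++) {
        /* alternate between the two parity streams */
        int p = (k % 2 == 0) ? par1[k] : par2[k];
        /* two coded bits per information bit: rate 1/2 */
        printf("%d %d\n", sys[k], p);
    }
    /* At the decoder, the deleted parity positions are treated as
       erasures (zero contribution) in the gamma computation.       */
    return 0;
}
</PRE>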
<P><BIG><B><FONT size=+1><BIG>LDPC CODES</BIG></FONT></B> </BIG></P>
<P>Below are iterative soft-decision (belief propagation) decoding and
hard-decision decoding algorithms for the important family of low-density
parity-check codes. These algorithms can be applied to any linear code with a
low-density parity-check matrix. Because the codes are linear (and the channel
and decoders are symmetric), performance does not depend on the transmitted
codeword, so the simulations always transmit the all-zero codeword. This
simplifies programming enormously.
</P>
<P>The iterative decoding algorithms need the parity-check matrix as an input.
The format of the files specifying the structure of these matrices (or the
Tanner graphs) is the same as that used by David MacKay, that is, the "alist"
(adjacency list) format. Please refer to <A
href="http://www.inference.phy.cam.ac.uk/mackay/CodesFiles.html">his web
site</A> for more information, some C source code, and to pick up some matrices
for your simulations. Below are two examples of parity-check matrix file
definition, for Gallager's (20,5) code and a finite (projective) geometry cyclic
(273,191,17) code, respectively: <BR><A
href="http://the-art-of-ecc.com/8_Iterative/BPdec/gallager.20.4.3.dat">gallager.20.4.3</A>
<BR><A
href="http://the-art-of-ecc.com/8_Iterative/BPdec/DSC.273.17.17">DSC.273.17.17</A>
</P>
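<P>As a rough guide, a file in the alist format starts with the dimensions N
and M of the parity-check matrix, followed by the maximum column and row
degrees, the individual column and row degrees, and then the (1-based)
adjacency lists, padded with zeros up to the maximum degree in the irregular
case. The sketch below reads the bit-node (column) adjacency lists under these
assumptions; MacKay's site has the authoritative definition.</P>
<PRE>
/* Sketch: read the column (bit-node) adjacency lists of a parity-check
   matrix stored in the "alist" format.  Assumed layout (see MacKay's web
   site for the authoritative definition):
       N M
       max_col_degree max_row_degree
       N column degrees, then M row degrees
       for each column: max_col_degree row indices (1-based, 0 = padding)
       for each row:    max_row_degree column indices (1-based, 0 = padding) */
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;

int main(int argc, char *argv[])
{
    FILE *fp;
    int N, M, cmax, rmax, n, j, idx;
    int *cdeg, *rdeg;

    if (argc &lt; 2 || (fp = fopen(argv[1], "r")) == NULL) {
        fprintf(stderr, "usage: %s alist_file\n", argv[0]);
        return 1;
    }
    fscanf(fp, "%d %d", &amp;N, &amp;M);          /* matrix dimensions */
    fscanf(fp, "%d %d", &amp;cmax, &amp;rmax);    /* maximum degrees   */
    cdeg = malloc(N * sizeof(int));
    rdeg = malloc(M * sizeof(int));
    for (n = 0; n &lt; N; n++) fscanf(fp, "%d", &amp;cdeg[n]);
    for (n = 0; n &lt; M; n++) fscanf(fp, "%d", &amp;rdeg[n]);

    for (n = 0; n &lt; N; n++) {             /* column adjacency lists */
        printf("bit node %d checks:", n);
        for (j = 0; j &lt; cmax; j++) {
            fscanf(fp, "%d", &amp;idx);
            if (idx &gt; 0) printf(" %d", idx - 1);   /* skip 0 padding */
        }
        printf("\n");
    }
    /* (The row adjacency lists follow and can be read the same way.) */
    fclose(fp);
    free(cdeg); free(rdeg);
    return 0;
}
</PRE>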
<P><I><B>NOTE:</B> The three numbers in the suffix of a file name above are
N.J.K, where N is the code length, J is the maximum degree of the bit nodes,
and K is the maximum degree of the check nodes.</I> </P>
<P><B>Belief-propagation decoding algorithm: Probabilistic decoding</B> <BR><A
href="http://the-art-of-ecc.com/8_Iterative/BPdec/pearl.c">pearl.c</A> </P>
<P>The algorithm <I>works directly with probabilities</I>. In terms of numerical
precision, it is the most stable BP decoder, although it is very intensive in
terms of exp() computations. </P>
<P><B>Belief-propagation decoding algorithm: Logarithm domain</B> <BR><A
href="http://the-art-of-ecc.com/8_Iterative/BPdec/log_pearl.c">log_pearl.c</A>
</P>
<P>This version of the BP algorithm is obtained from the probabilistic one by a
<I>straight log-domain translation</I>. No approximation (look-up table) is
used, which results in log(exp(a)+/-exp(b)) computations that are even more
intensive than those in pearl.c. </P>
<P><B>Belief-propagation decoding algorithm: Log-likelihood domain</B> <BR><A
href="http://the-art-of-ecc.com/8_Iterative/BPdec/llr_pearl.c">llr_pearl.c</A>
</P>
<P>This version utilizes <I>log-likelihood ratios</I>. It is much faster than
log_pearl.c, with practically the same numerical precision as pearl.c. It can
be further improved by using a look-up table to avoid exp() computations, as
mentioned in the <B><A
href="http://www.amazon.com/exec/obidos/ASIN/0471495816/qid=1016858656/sr=8-1/ref=sr_8_5_1/104-2054788-7703131">book</A></B>.
</P>
<P><B>Bit-flipping hard-decision decoding algorithm</B> <BR><A
href="http://the-art-of-ecc.com/8_Iterative/BPdec/bitf.c">bitf.c</A> </P>
<P>Gallager's iterative bit-flipping decoding of linear block codes. The user
can specify a threshold on the number of unsatisfied parity checks needed in
order to flip (i.e., complement the value of) a bit. </P>
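<P>A minimal sketch of one such iteration follows; for clarity it uses a small
dense parity-check matrix written directly in the code, whereas bitf.c takes
the parity-check matrix as an input file in the alist format described
above.</P>
<PRE>
/* Sketch: one iteration of bit-flipping decoding with a flipping
   threshold T.  For clarity, H is a small dense 0/1 array here and the
   received word is hard-wired; illustrative example only.             */
#include &lt;stdio.h&gt;

#define M 3          /* number of parity checks (example)        */
#define N 6          /* code length (example)                    */
#define T 2          /* flip a bit if it fails at least T checks */

int main(void)
{
    int H[M][N] = { { 1, 1, 0, 1, 0, 0 },
                    { 0, 1, 1, 0, 1, 0 },
                    { 1, 0, 1, 0, 0, 1 } };
    int y[N]    = { 1, 1, 1, 1, 1, 0 };  /* codeword 101110 with an error
                                            in position 1               */
    int fail[N] = { 0 };                 /* unsatisfied checks per bit   */
    int m, n, syn;

    for (m = 0; m &lt; M; m++) {            /* evaluate each parity check   */
        syn = 0;
        for (n = 0; n &lt; N; n++)
            syn ^= H[m][n] &amp; y[n];
        if (syn)                         /* check unsatisfied:           */
            for (n = 0; n &lt; N; n++)      /* charge every connected bit   */
                fail[n] += H[m][n];
    }
    for (n = 0; n &lt; N; n++)              /* flip bits over the threshold */
        if (fail[n] &gt;= T)
            y[n] ^= 1;

    for (n = 0; n &lt; N; n++)
        printf("%d", y[n]);              /* prints the corrected word    */
    printf("\n");
    return 0;
}
</PRE>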
<P><B><FONT size=+1>The Tanner graph of a binary cyclic code: </FONT></B><A
href="http://the-art-of-ecc.com/8_Iterative/tannercyclic.m">tannercyclic.m</A>
<BR></P>
<P><BIG><B><FONT size=+1><BIG>Capacity</BIG></FONT></B> </BIG></P>
<P>Both turbo codes and LDPC codes are so-called capacity-achieving codes. There
is therefore interest in knowing the theoretical limits of transmission when a
particular modulation scheme is used. This is called the
constellation-constrained capacity. Below are some programs that are useful in
computing this capacity and comparing it with the ultimate Shannon capacity of
an AWGN channel. <BR></P><B>Compute
the constellation-constrained capacity of an AWGN channel:</B> <A
href="http://the-art-of-ecc.com/8_Iterative/Capacity/capacity.c">capacity.c</A>
<BR><B>Compute the Shannon capacity (versus SNR per symbol, or Es/No): </B><A
href="http://the-art-of-ecc.com/8_Iterative/Capacity/shannon.c">shannon.c</A>
<BR><B>Compute the Shannon capacity (versus SNR per bit, or Eb/No): </B><A
href="http://the-art-of-ecc.com/8_Iterative/Capacity/shannon2.c">shannon2.c</A>
<BR> <BR>
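<P>For reference, with the common complex-baseband convention the Shannon
capacity of the AWGN channel is C = log2(1 + Es/No) bits per symbol, and the
minimum Eb/No required to operate at spectral efficiency C follows from
Es = C&middot;Eb, i.e., Eb/No = (2^C - 1)/C. The sketch below tabulates both
quantities; the linked programs may use a different normalization.</P>
<PRE>
/* Sketch: Shannon capacity of the AWGN channel (complex-baseband
   convention) as a function of Es/No, together with the minimum Eb/No
   needed to operate at that spectral efficiency.  The linked shannon.c
   and shannon2.c may use a different normalization; illustration only. */
#include &lt;stdio.h&gt;
#include &lt;math.h&gt;

int main(void)
{
    double esno_db, esno, C, ebno_db;

    printf("# Es/No(dB)   C(bits/symbol)   min Eb/No(dB) at that C\n");
    for (esno_db = -10.0; esno_db &lt;= 20.0; esno_db += 2.0) {
        esno = pow(10.0, esno_db / 10.0);   /* dB to linear        */
        C = log2(1.0 + esno);               /* C = log2(1 + Es/No) */
        /* Eb/No needed at spectral efficiency C:
           Es = C*Eb, so Eb/No = (2^C - 1)/C                       */
        ebno_db = 10.0 * log10((pow(2.0, C) - 1.0) / C);
        printf("%9.1f   %14.4f   %14.4f\n", esno_db, C, ebno_db);
    }
    return 0;
}
</PRE>
<P>Compile like the other C programs on this page, linking the math library
with -lm. As Es/No decreases, the required Eb/No approaches the well-known
limit of about -1.59 dB.</P>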
<P><B><A href="http://the-art-of-ecc.com/topics.html">BACK TO CONTENTS</A></B>
<BR></P>
<HR>
<H6 style="FONT-WEIGHT: normal"><FONT color=#000000>This page was last updated
on August 6, 2008, by Robert H. Morelos-Zaragoza.</FONT></H6></BODY></HTML>