<HTML><HEAD><TITLE> Decoding Received Blocks </TITLE></HEAD><BODY><H1> Decoding Received Blocks </H1>Transmitted codewords are decoded from the received data on the basis of the <I>likelihood</I> of the possible codewords, which is the probability of receiving the data that was actually received if the codeword in question were the one that was sent.  This software presently deals only with memoryless channels, in which the noise is independent from bit to bit.  For such a channel, the likelihood factorizes into a product of likelihoods for each bit.<P>For decoding purposes, all that matters is the relative likelihood for a bit to be 1 versus 0.  This is captured by the <I>likelihood ratio</I> in favour of a 1, which is P(data | bit is 1) / P(data | bit is 0).  For a Binary Symmetric Channel with error probability <I>p</I>, the likelihood ratio in favour of a 1 bit is as follows:<BLOCKQUOTE>  If the received data was +1: (1-<I>p</I>) / <I>p</I><BR>  If the received data was -1: <I>p</I> / (1-<I>p</I>)</BLOCKQUOTE>For an Additive White Gaussian Noise channel, with signals of +1 for a 1 bit and -1 for a 0 bit, and with noise standard deviation <I>s</I>, the likelihood ratio in favour of a 1 bit when data <I>y</I> was received is<BLOCKQUOTE>  exp ( 2<I>y</I> / <I>s</I><SUP><SMALL>2</SMALL></SUP> )</BLOCKQUOTE><P>It is usual to consider codewords to be equally likely <I>a priori</I>.  This is reasonable if the source messages are all equally likely (any source redundancy being ignored, or removed by a preliminary data compression stage), provided that the mapping from source messages to codewords is onto.  Decoding can then be done using only the parity check matrix defining the codewords, without reference to the generator matrix defining the mapping from source messages to codewords.
Note that the condition that this mapping be onto isn't true with this software in the atypical case where the code is defined by a parity check matrix with redundant rows; see the discussion of <A HREF="dep-H.html">linear dependence in parity check matrices</A>.  This minor complication is mostly ignored here, except by the exhaustive enumeration decoding methods.<P>Assuming equal <I>a priori</I> probabilities for codewords, the probability of correctly decoding an entire codeword is maximized by picking the codeword with the highest likelihood.  One might instead wish to decode each bit to the value that is most probable.  This minimizes the bit error rate, but is not in general guaranteed to produce the most probable complete codeword for each block; indeed, the decoding may not be a codeword at all.  Minimizing the bit error rate seems nevertheless to be the most sensible objective, unless block boundaries have some significance in a wider context.<P>Optimal decoding by either criterion is infeasible for general linear codes when messages are more than about 20 or 30 bits in length.  The fundamental advantage of Low Density Parity Check codes is that good (though not optimal) decodings can be obtained using a probability propagation algorithm, as described next.<A NAME="prprp"><H2>Decoding by probability propagation</H2></A><P>The probability propagation algorithm was originally devised by Robert Gallager in the early 1960s and later reinvented by David MacKay and myself.  It can be seen as an instance of the sum-product algorithm for inference on factor graphs.  See the <A HREF="refs.html">references</A> for details.  Below, I give a fairly intuitive description of the algorithm.<P>The algorithm uses only the parity check matrix for the code, whose columns correspond to codeword bits and whose rows correspond to parity checks, and the likelihood ratios for the bits derived from the data.
It aims to find the probability of each bit of the transmitted codeword being 1, though the results of the algorithm are in general only approximate.<P>To begin, information about each bit of the codeword derived from the received data for that bit alone is expressed as a <I>probability ratio</I>, the probability of the bit being 1 divided by the probability of the bit being 0.  This probability ratio is equal to the likelihood ratio (see above) for that bit, since 0 and 1 are assumed to be equally likely <I>a priori</I>.  As the algorithm progresses, these probability ratios will be modified to take account of information obtained from other bits, in conjunction with the requirement that the parity checks be satisfied.  To avoid double counting of information, for every bit, the algorithm maintains a separate probability ratio for each parity check that that bit participates in, giving the probability for that bit to be 1 versus 0 based only on information derived from <I>other</I> parity checks, along with the data received for the bit.<P>For each parity check, the algorithm maintains separate <I>likelihood ratios</I> (analogous to, but distinct from, the likelihood ratios based on received data) for every bit that participates in that parity check.  These ratios give the probability of that parity check being satisfied if the bit in question is 1 divided by the probability of the check being satisfied if the bit is 0, taking account of the probabilities of each of the <I>other</I> bits participating in this check being 1, as derived from the probability ratios for these bits with respect to this check.<P>The algorithm alternates between recalculating the likelihood ratios for each check, which are stored in the <B>lr</B> fields of the parity check matrix entries, and recalculating the probability ratios for each bit, which are stored in the <B>pr</B> fields of the entries in the sparse matrix representation of the parity check matrix.
(See the documentation on <A HREF="mod2sparse.html#rep">representation of sparse matrices</A> for details on these entries.)<P>Recalculating the likelihood ratio for a check with respect to some bit may appear time consuming, requiring that all possible combinations of values for the other bits participating in the check be considered.  Fortunately, there is a short cut.  One can calculate<BLOCKQUOTE> <I>t</I> = product of [ 1 / (1+<I>p<SUB><SMALL>i</SMALL></SUB></I>) - <I>p<SUB><SMALL>i</SMALL></SUB></I> / (1+<I>p<SUB><SMALL>i</SMALL></SUB></I>) ]<BR> &nbsp;&nbsp;= product of [ 2 / (1+<I>p<SUB><SMALL>i</SMALL></SUB></I>) - 1 ]</BLOCKQUOTE>where the product is over the probability ratios <I>p<SUB><SMALL>i</SMALL></SUB></I> for the other bits participating in this check.  Factor <I>i</I> in this product is equal to the probability of bit <I>i</I> being 0 minus the probability that it is 1.  The terms in the expansion of this product (in the first form above) correspond to possible combinations of values for the other bits, with the result that <I>t</I> will be the probability of the check being satisfied if the bit in question is 0 minus the probability if the bit in question is 1.  The likelihood ratio for this check with respect to the bit in question can then be calculated as (1-<I>t</I>)/(1+<I>t</I>).<P>For a particular check, the product above differs for the different bits with respect to which we wish to calculate a likelihood ratio only in that for each bit the factor corresponding to that bit is left out.  We can calculate all these products easily by ordering the bits arbitrarily, computing running products of the factor for the first bit, the factors for the first two bits, etc., and also running products of the factor for the last bit, the factors for the last two bits, etc.  Multiplying the running product of the factors up to <I>i</I>-1 by the running product of the factors from <I>i</I>+1 on gives the product needed for bit <I>i</I>.
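The running-product short cut can be sketched as follows.  This is an illustrative reimplementation under simplifying assumptions, not the package's code: <TT>check_update</TT> and its dense-array interface are invented here, whereas the real program operates on the sparse matrix entries.

```c
#include <stdlib.h>

/* For one parity check over n bits, recompute the likelihood ratio
   lr[i] with respect to each participating bit i, given the current
   probability ratios pr[i] for those bits.  Forward and backward
   running products of the factors 2/(1+pr[i]) - 1 let each product
   over the "other" bits be formed without an inner loop.
   (Illustrative sketch only.) */
void check_update(int n, const double *pr, double *lr)
{
  double *fwd = malloc(n * sizeof *fwd);  /* fwd[i]: product of factors for bits 0..i-1 */
  double *bwd = malloc(n * sizeof *bwd);  /* bwd[i]: product of factors for bits i+1..n-1 */
  int i;

  fwd[0] = 1;
  for (i = 1; i < n; i++)
    fwd[i] = fwd[i-1] * (2/(1+pr[i-1]) - 1);

  bwd[n-1] = 1;
  for (i = n-2; i >= 0; i--)
    bwd[i] = bwd[i+1] * (2/(1+pr[i+1]) - 1);

  for (i = 0; i < n; i++)
  {
    double t = fwd[i] * bwd[i];   /* product over all bits except bit i */
    lr[i] = (1-t) / (1+t);
  }

  free(fwd);
  free(bwd);
}
```

As a sanity check: when every pr[i] is 1 (the other bits are completely uncertain), every factor is 0, so <I>t</I> is 0 and each likelihood ratio is 1; the check then conveys no information, as one would expect.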
The second form of the factors above is used, as it requires less computation, and is still well defined even if some ratios are infinite.<P>To recalculate the probability ratio for a bit with respect to a check, all that is needed is to multiply together the likelihood ratio for this bit derived from the received data (see above) and the current values of the likelihood ratios, with respect to this bit, for all the <I>other</I> checks that this bit participates in.  To save time, these products are computed by combining forward and backward products, similarly to the method used for likelihood ratios.<P>By including likelihood ratios from all checks, a similar calculation produces the current probability ratio for the bit to be 1 versus 0 based on all information that has propagated to the bit so far.  This ratio can be thresholded at one to produce the current best guess as to whether this bit is a 1 or a 0.<P>The hope is that this algorithm will eventually converge to a state where these bit probabilities give a near-optimal decoding.  This does not always occur, but the algorithm behaves well enough to produce very good results at rates approaching (though not yet reaching) the theoretical Shannon limit.<P><A NAME="decode"><HR><B>decode</B>: Decode blocks of received data into codewords.<BLOCKQUOTE><PRE>decode [ -t ] <I>pchk-file received-file decoded-file</I> [ <I>bp-file</I> ] <I>channel method</I></PRE><BLOCKQUOTE>where <TT><I>channel</I></TT> is one of:<BLOCKQUOTE><PRE>bsc <I>error-probability</I>
awgn <I>standard-deviation</I></PRE></BLOCKQUOTE>and <TT><I>method</I></TT> is one of:<BLOCKQUOTE><PRE>enum-block <TT><I>gen-file</I></TT>
enum-bit <TT><I>gen-file</I></TT>
prprp <TT>[-]<I>max-iterations</I></TT></PRE></BLOCKQUOTE></BLOCKQUOTE></BLOCKQUOTE><P>Decodes the blocks in <TT><I>received-file</I></TT>, which are assumed to have been received through the specified channel.
The results written to <TT><I>decoded-file</I></TT> are the specified decoding method's guesses as to what bits were sent through the channel, given what was received.  The probability of each bit being a 1, as judged by the decoding method being used, is written to <TT><I>bp-file</I></TT>, if given.<P>A newline is output at the end of each block written to <TT><I>decoded-file</I></TT>.  Newlines in <TT><I>received-file</I></TT> are ignored.  A warning is displayed on standard error if the number of bits in <TT><I>received-file</I></TT> is not a multiple of the block length.<P>The type of channel that is assumed is specified after the file name arguments.  This may currently be either <TT>bsc</TT> (or <TT>BSC</TT>) for the Binary Symmetric Channel, or <TT>awgn</TT> (or <TT>AWGN</TT>) for the Additive White Gaussian Noise channel.  The channel type is followed by an argument specifying the assumed characteristics of the channel, as follows:<BLOCKQUOTE><P>BSC: The probability that a bit will be flipped by noise - i.e., the probability that the bit received is in error.<P>AWGN: The standard deviation of the Gaussian noise added to the encodings of the bits.</BLOCKQUOTE>See the description of <A HREF="channel.html">channel transmission</A> for more about these channels.<P>Following the channel specification is a specification of the decoding method to use.  The <TT>enum-block</TT> and <TT>enum-bit</TT> methods find the optimal decoding by exhaustive enumeration of codewords derived from all possible source messages.  They differ in that <TT>enum-block</TT> decodes to the most likely codeword, whereas <TT>enum-bit</TT> decodes to the bits that are individually most probable.  These methods require that a file containing a representation of a generator matrix be given, to allow enumeration of codewords.  If the parity check matrix has no redundant rows, any valid generator matrix will give the same decoding (except perhaps if there is a tie).
If redundant rows exist, the generator matrix should specify the same set of message bits as the generator matrix that was used for the actual encoding, since the redundancy will lead to some codeword bits being fixed at zero (see <A HREF="dep-H.html">linear dependence in parity check matrices</A>).<P>The <TT>prprp</TT> decoding method decodes using <A HREF="#prprp">probability propagation</A>.  The maximum number of iterations of probability propagation to do is given following <TT>prprp</TT>.  If a minus sign precedes this number, the maximum number of iterations is always done.  If no minus sign is present, the algorithm stops once the tentative decoding, based on bit-by-bit probabilities, is a valid codeword.  Note that continuing to the maximum number of iterations may change the decoding compared to stopping at the first valid codeword, and will usually result in at least slightly different bit probabilities (as written to <TT><I>bp-file</I></TT>, if specified).<P>A summary is displayed on standard error, giving the total number of blocks decoded, the number of blocks that decoded to valid codewords, and the average number of iterations of the decoding algorithm used.<P>If the <B>-t</B> option is given, block by block information is written to standard output, giving the block number (from zero), the number of iterations of the decoding algorithm, and whether the final decoded block was a valid codeword (0 or 1).
These columns are preceded by a line of headers, so that the file is suitable for reading into the S-Plus or R statistics packages, with a command such as<BLOCKQUOTE><PRE>data <- read.table(<I>file</I>, header=T)</PRE></BLOCKQUOTE><P><A NAME="extract"><HR><B>extract</B>: Extract the message bits from a block.<BLOCKQUOTE><PRE>extract <I>gen-file decoded-file extracted-file</I></PRE></BLOCKQUOTE><P>Given a file of codewords in <TT><I>decoded-file</I></TT> (usually, decoded blocks output by <A HREF="#decode"><TT>decode</TT></A>), and a generator matrix from <TT><I>gen-file</I></TT> (needed only to determine where the message bits are located in a codeword), this program writes the message bits extracted from these codewords to the file <TT><I>extracted-file</I></TT>.<P>A newline is output at the end of each block written to <TT><I>extracted-file</I></TT>.  Newlines in <TT><I>decoded-file</I></TT> are ignored.  A warning is displayed on standard error if the number of bits in <TT><I>decoded-file</I></TT> is not a multiple of the block length.<HR><A HREF="index.html">Back to index for LDPC software</A></BODY></HTML>
