<!-- decoding.html: from the 2006 code published on the personal website of David J.C. MacKay, Cambridge University -->
<HTML>
<HEAD><TITLE> Decoding Received Blocks </TITLE></HEAD>
<BODY>

<H1> Decoding Received Blocks </H1>

Transmitted codewords are decoded from the received data on the basis
of the <I>likelihood</I> of the possible codewords, which is the
probability of receiving the data that was actually received if the
codeword in question were the one that was sent.  This software
presently deals only with memoryless channels, in which the noise is
independent from bit to bit.  For such a channel, the likelihood
factorizes into a product of likelihoods for each bit.  For decoding
purposes, all that matters is the relative likelihood for a bit to be
1 versus 0.  This is captured by the <I>likelihood ratio</I> in favour
of a 1, which is P(data | bit is 1) / P(data | bit is 0).

<P>For a Binary Symmetric Channel with error probability <I>p</I>, the
likelihood ratio in favour of a 1 bit is as follows:
<BLOCKQUOTE>
  If the received data was +1: (1-<I>p</I>) / <I>p</I><BR>
  If the received data was -1: <I>p</I> / (1-<I>p</I>)
</BLOCKQUOTE>
For an Additive White Gaussian Noise channel, with signals of +1 for a
1 bit and -1 for a 0 bit, and with noise standard deviation <I>s</I>,
the likelihood ratio in favour of a 1 bit when data <I>y</I> was
received is
<BLOCKQUOTE>
  exp ( 2<I>y</I> / <I>s</I><SUP><SMALL>2</SMALL></SUP> )
</BLOCKQUOTE>
For an Additive White Logistic Noise channel with width parameter
<I>w</I>, the corresponding likelihood ratio is
<I>d</I><SUB><SMALL>1</SMALL></SUB> / <I>d</I><SUB><SMALL>0</SMALL></SUB>,
where
<I>d</I><SUB><SMALL>1</SMALL></SUB> =
<I>e</I><SUB><SMALL>1</SMALL></SUB> /
(1+<I>e</I><SUB><SMALL>1</SMALL></SUB>)<SUP><SMALL>2</SMALL></SUP> and
<I>d</I><SUB><SMALL>0</SMALL></SUB> =
<I>e</I><SUB><SMALL>0</SMALL></SUB> /
(1+<I>e</I><SUB><SMALL>0</SMALL></SUB>)<SUP><SMALL>2</SMALL></SUP>,
with <I>e</I><SUB><SMALL>1</SMALL></SUB> = exp(-(<I>y</I>-1)/<I>w</I>)
and <I>e</I><SUB><SMALL>0</SMALL></SUB> = exp(-(<I>y</I>+1)/<I>w</I>).

<P>It is usual to consider codewords to be equally likely <I>a
priori</I>.  This is reasonable if the source messages are all equally
likely (any source redundancy being ignored, or removed by a
preliminary data compression stage), provided that the mapping from
source messages to codewords is onto.  Decoding can then be done using
only the parity check matrix defining the codewords, without reference
to the generator matrix defining the mapping from source messages to
codewords.  Note that the condition that this mapping be onto isn't
true with this software in the atypical case where the code is defined
by a parity check matrix with redundant rows; see the discussion of <A
HREF="dep-H.html">linear dependence in parity check matrices</A>.
This minor complication is mostly ignored here, except by the
exhaustive enumeration decoding methods.

<P>Assuming equal <I>a priori</I> probabilities for codewords, the
probability of correctly decoding an entire codeword is maximized by
picking the codeword with the highest likelihood.  One might instead
wish to decode each bit to the value that is most probable.  This
minimizes the bit error rate, but it does not in general guarantee
that the decoding of each block will be the most probable complete
codeword; indeed, the decoding may not be a codeword at all.
Minimizing the bit error rate seems nevertheless to be the most
sensible objective, unless block boundaries have some significance in
a wider context.
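<P>In code, the per-bit likelihood ratios defined above might be
computed as follows.  This is only a sketch to make the formulas
concrete; the function and its interface are invented for this
illustration, and are not part of the software's actual interface:

<PRE>
#include &lt;math.h>

typedef enum { BSC, AWGN, AWLN } channel_type;

/* Likelihood ratio P(data | bit is 1) / P(data | bit is 0) for a
   single received value y.  The meaning of param depends on the
   channel: error probability p for the BSC, noise standard deviation
   s for AWGN, or width w for AWLN. */

double bit_likelihood_ratio (channel_type channel, double y, double param)
{
  switch (channel)
  {
    case BSC:                  /* received data is +1 or -1 */
    { double p = param;
      return y > 0 ? (1-p)/p : p/(1-p);
    }
    case AWGN:
    { double s = param;
      return exp (2*y/(s*s));
    }
    case AWLN:
    { double w = param;
      double e1 = exp (-(y-1)/w);
      double e0 = exp (-(y+1)/w);
      return (e1/((1+e1)*(1+e1))) / (e0/((1+e0)*(1+e0)));
    }
  }
  return 1;                    /* not reached */
}
</PRE>

For example, on a Binary Symmetric Channel with <I>p</I> = 0.1,
receiving +1 gives a likelihood ratio of 0.9/0.1 = 9 in favour of a
1 bit.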
<P>Optimal decoding by either criterion is infeasible for general
linear codes when messages are more than about 20 or 30 bits in
length.  The fundamental advantage of Low Density Parity Check codes
is that good (though not optimal) decodings can be obtained by methods
such as probability propagation, described next.

<A NAME="prprp"><H2>Decoding by probability propagation</H2></A>

<P>The probability propagation algorithm was originally devised by
Robert Gallager in the early 1960's and later reinvented by David
MacKay and myself.  It can be seen as an instance of the sum-product
algorithm for inference on factor graphs, and as an instance of belief
propagation in probabilistic networks.  See the <A
HREF="refs.html">references</A> for details.  Below, I give a fairly
intuitive description of the algorithm.

<P>The algorithm uses only the parity check matrix for the code
(whose columns correspond to codeword bits, and whose rows correspond
to parity checks), together with the likelihood ratios for the bits
derived from the data.  It aims to find the probability of each bit of
the transmitted codeword being 1, though the results of the algorithm
are in general only approximate.

<P>To begin, information about each bit of the codeword derived from
the received data for that bit alone is expressed as a <I>probability
ratio</I>, the probability of the bit being 1 divided by the
probability of the bit being 0.  This probability ratio is equal to
the likelihood ratio (see above) for that bit, since 0 and 1 are
assumed to be equally likely <I>a priori</I>.  As the algorithm
progresses, these probability ratios will be modified to take account
of information obtained from other bits, in conjunction with the
requirement that the parity checks be satisfied.  To avoid double
counting of information, for every bit, the algorithm maintains a
separate probability ratio for each parity check that that bit
participates in, giving the probability for that bit to be 1 versus 0
based only on information derived from <I>other</I> parity checks,
along with the data received for the bit.

<P>For each parity check, the algorithm maintains separate
<I>likelihood ratios</I> (analogous to, but distinct from, the
likelihood ratios based on received data) for every bit that
participates in that parity check.  These ratios give the probability
of that parity check being satisfied if the bit in question is 1,
divided by the probability of the check being satisfied if the bit is
0, taking account of the probabilities of each of the <I>other</I>
bits participating in this check being 1, as derived from the
probability ratios for those bits with respect to this check.

<P>The algorithm alternates between recalculating the likelihood
ratios for each check, which are stored in the <B>lr</B> fields of the
parity check matrix entries, and recalculating the probability ratios
for each bit, which are stored in the <B>pr</B> fields of the entries
in the sparse matrix representation of the parity check matrix.  (See
the documentation on <A HREF="mod2sparse.html#rep">representation of
sparse matrices</A> for details on these entries.)
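<P>One iteration of this alternation is sketched below, for
concreteness.  All names here are invented, a small dense parity check
matrix stands in for the package's sparse representation, the formula
used for the check update is the short cut justified in the text that
follows, and the products over "other" entries are recomputed naively,
whereas the software uses the running-product trick described below:

<PRE>
#define M 3   /* number of parity checks */
#define N 6   /* number of codeword bits */

/* A small parity check matrix, invented for this illustration. */

static const int H[M][N] =
{ { 1, 1, 0, 1, 0, 0 },
  { 0, 1, 1, 0, 1, 0 },
  { 1, 0, 1, 0, 0, 1 }
};

/* One iteration of probability propagation.  lratio[j] is the
   likelihood ratio for bit j derived from the received data; lr[m][j]
   and pr[m][j] play the roles of the lr and pr fields described
   above.  Before the first iteration, each pr[m][j] would be set to
   lratio[j]. */

void propagate_iteration
( const double lratio[N],
  double lr[M][N],
  double pr[M][N]
)
{
  int m, j, i;

  /* Recalculate each check's likelihood ratio with respect to each of
     its bits, from the probability ratios of the OTHER bits
     participating in that check. */

  for (m = 0; m &lt; M; m++)
  { for (j = 0; j &lt; N; j++)
    { if (H[m][j])
      { double t = 1;
        for (i = 0; i &lt; N; i++)
        { if (i == j || !H[m][i]) continue;
          t *= 2/(1+pr[m][i]) - 1;   /* P(bit i is 0) - P(bit i is 1) */
        }
        lr[m][j] = (1-t)/(1+t);
      }
    }
  }

  /* Recalculate each bit's probability ratio with respect to each of
     its checks, from the data and the OTHER checks that the bit
     participates in. */

  for (j = 0; j &lt; N; j++)
  { for (m = 0; m &lt; M; m++)
    { if (H[m][j])
      { double r = lratio[j];
        for (i = 0; i &lt; M; i++)
        { if (i == m || !H[i][j]) continue;
          r *= lr[i][j];
        }
        pr[m][j] = r;
      }
    }
  }
}
</PRE>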
<P>Recalculating the likelihood ratio for a check with respect to
some bit may appear time consuming, requiring that all possible
combinations of values for the other bits participating in the check
be considered.  Fortunately, there is a short cut.  One can calculate
<BLOCKQUOTE>
<I>t</I> = product of
[ 1 / (1+<I>p<SUB><SMALL>i</SMALL></SUB></I>)
- <I>p<SUB><SMALL>i</SMALL></SUB></I> / (1+<I>p<SUB><SMALL>i</SMALL></SUB></I>) ]
= product of [ 2 / (1+<I>p<SUB><SMALL>i</SMALL></SUB></I>) - 1 ]
</BLOCKQUOTE>
where the product is over the probability ratios
<I>p<SUB><SMALL>i</SMALL></SUB></I> for the other bits participating
in this check.  Factor <I>i</I> in this product is equal to the
probability of bit <I>i</I> being 0 minus the probability of it being
1.  The terms in the expansion of this product (in the first form
above) correspond to possible combinations of values for the other
bits, with the result that <I>t</I> will be the probability of the
check being satisfied if the bit in question is 0, minus the
probability if the bit in question is 1.  The likelihood ratio for
this check with respect to the bit in question can then be calculated
as (1-<I>t</I>)/(1+<I>t</I>).

<P>For a particular check, the product above differs for the
different bits with respect to which we wish to calculate a likelihood
ratio only in that for each bit the factor corresponding to that bit
is left out.  We can calculate all these products easily by ordering
the bits arbitrarily and computing running products of the factor for
the first bit, the factors for the first two bits, etc., and also
running products of the factor for the last bit, the factors for the
last two bits, etc.  Multiplying the running product of the factors up
to <I>i</I>-1 by the running product of the factors from <I>i</I>+1 on
gives the product needed for bit <I>i</I>.  The second form of the
factor above is used, as it requires less computation, and is still
well defined even if some ratios are infinite.

<P>To recalculate the probability ratio for a bit with respect to a
check, all that is needed is to multiply together the likelihood ratio
for this bit derived from the received data (see above) and the
current values of the likelihood ratios, with respect to this bit, of
all the <I>other</I> checks that this bit participates in.  To save
time, these products are also computed by combining forward and
backward products, in the same manner as for the likelihood ratios.
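<P>Both of these updates rely on the same forward and backward
running-product idea.  In isolation, and with invented names, it looks
like this:

<PRE>
/* Given factors f[0..n-1], set prod[i] to the product of all the
   factors EXCEPT f[i], for every i, in linear time. */

void leave_one_out_products (int n, const double *f, double *prod)
{
  double fwd, bwd;
  int i;

  fwd = 1;                  /* running product of f[0], ..., f[i-1] */
  for (i = 0; i &lt; n; i++)
  { prod[i] = fwd;
    fwd *= f[i];
  }

  bwd = 1;                  /* running product of f[i+1], ..., f[n-1] */
  for (i = n-1; i >= 0; i--)
  { prod[i] *= bwd;
    bwd *= f[i];
  }
}
</PRE>

For a check, the factors would be the quantities
2 / (1+<I>p<SUB><SMALL>i</SMALL></SUB></I>) - 1 for the bits in that
check; for a bit, they would be the likelihood ratios of the checks it
participates in, with the data's likelihood ratio multiplied in at the
end.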

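<P>Finally, when the iterations stop, decoding each bit to its
individually most probable value amounts to combining the data's
likelihood ratio with the ratios from <I>all</I> of the checks the bit
participates in.  Continuing the invented dense-matrix sketch from
above:

<PRE>
/* Decode each bit to its individually most probable value.  The
   overall probability ratio for bit j combines the received data with
   ALL checks that bit j participates in. */

void decide_bits (const double lratio[N], double lr[M][N], int decoded[N])
{
  int m, j;

  for (j = 0; j &lt; N; j++)
  { double r = lratio[j];
    for (m = 0; m &lt; M; m++)
    { if (H[m][j])
      { r *= lr[m][j];
      }
    }
    decoded[j] = r > 1;     /* 1 if a 1 bit is more probable */
  }
}
</PRE>

As noted earlier, the result of this bitwise decoding need not satisfy
all the parity checks, so one would typically test whether the decoded
block is in fact a codeword.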