Network Working Group                                       D. Sheinwald
Request for Comments: 3385                                     J. Satran
Category: Informational                                              IBM
                                                               P. Thaler
                                                               V. Cavanna
                                                                 Agilent
                                                          September 2002

       Internet Protocol Small Computer System Interface (iSCSI)
        Cyclic Redundancy Check (CRC)/Checksum Considerations
Status of this Memo
This memo provides information for the Internet community. It does
not specify an Internet standard of any kind. Distribution of this
memo is unlimited.
Copyright Notice
Copyright (C) The Internet Society (2002). All Rights Reserved.
Abstract
In this memo, we attempt to give some estimates for the probability
of undetected errors to facilitate the selection of an error
detection code for the Internet Protocol Small Computer System
Interface (iSCSI).
We will also attempt to compare Cyclic Redundancy Checks (CRCs) with
other checksum forms (e.g., Fletcher, Adler, weighted checksums), as
permitted by available data.
1. Introduction
Cyclic Redundancy Check (CRC) codes [Peterson] are shortened cyclic
codes used for error detection. A number of CRC codes have been
adopted in standards: ATM, IEC, IEEE, CCITT, IBM-SDLC, and more
[Baicheva]. The most important property expected of such codes is a
very low probability of undetected errors. The probability of
undetected errors in such codes has been, and still is, subject to
extensive studies that have included both analytical models and
simulations. Those codes have been used extensively in
communications and magnetic recording as they demonstrate good "burst
error" detection capabilities, but are also good at detecting several
independent bit errors. Hardware implementations are very simple and
well known; their simplicity has made them popular with hardware
developers for many years. However, algorithms and software for
effective implementations of CRC are now also widely available
[Williams].
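As an illustration of the table-driven technique that makes software
CRC practical, the following Python sketch computes the common
(reflected) CRC32 with a 256-entry lookup table. The constant
0xEDB88320 and the self-check against Python's zlib module are our
illustrative assumptions, not material taken from [Williams] itself.

   import zlib

   # Build a 256-entry table for the reflected CRC32 polynomial.
   POLY = 0xEDB88320
   TABLE = []
   for n in range(256):
       c = n
       for _ in range(8):
           c = (c >> 1) ^ (POLY & -(c & 1))   # conditional XOR per bit
       TABLE.append(c)

   def crc32_table(data: bytes) -> int:
       crc = 0xFFFFFFFF                       # conventional initial value
       for byte in data:
           crc = (crc >> 8) ^ TABLE[(crc ^ byte) & 0xFF]
       return crc ^ 0xFFFFFFFF                # conventional final XOR

   # The check value for "123456789" is the well-known 0xCBF43926.
   assert crc32_table(b"123456789") == zlib.crc32(b"123456789") == 0xCBF43926

Processing one byte per table lookup instead of one bit per division
step is what makes the software implementation effective.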
The probability of undetected errors depends on the polynomial
selected to generate the code, the error distribution (error model),
and the data length.
2. Error Models and Goals
We will analyze the code behavior under two conditions:
- noisy channel - burst errors with an average length of n bits
- low noise channel - independent single bit errors
Burst errors are the prevalent natural phenomenon on communication
lines and recording media. The numbers quoted for them revolve
around the BER (bit error rate). However, those numbers are
frequently nothing more than a reflection of the Burst Error Rate
multiplied by the average burst length. In field engineering tests,
three numbers are usually quoted together -- BER, error-free-seconds
and severely-error-seconds; this illustrates our point.
Even beyond communication and recording media, the effects of errors
will be bursty. For example, a memory error will typically affect
more than a single bit, and its total effect will not be very
different from a communication error; likewise, software errors that
occur while manipulating packets will have a burst effect. In
addition, serial internal interconnects will make this type of error
the most common within machines as well.
We also analyze the effects of single independent bit errors, since
these may be caused by certain defects.
For bursts, we assume an average burst error duration of bd, which
at a given transmission rate s results in an average burst of a =
bd*s bits. (E.g., an average burst duration of 3 ns at 1 Gb/s gives
an average burst of 3 bits.)
For the burst error rate, we will take 10^-10. The numbers quoted
for BER on wired communication channels are between 10^-10 and
10^-12, and we take the BER to be burst-error-rate multiplied by
average-burst-length. Nevertheless, please keep in mind that if the
channel includes wireless links, the error rates may be substantially
higher.
For independent single bit errors, we assume a 10^-11 error rate.
Because iSCSI will have to transport large amounts of data (a
petabyte is approximately 10^16 bits) without undetected errors, we
will target very low probabilities of undetected errors for all
block lengths (at 10 Gb/s, that much data can be sent in less than 2
weeks on a single link).
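(A quick back-of-the-envelope check of the figures above, as a
Python one-off; the exact byte count of a petabyte is an assumption
made for illustration:)

   bits = 8e15                     # 1 PB = 8*10^15 bits, ~10^16 as quoted
   seconds = bits / 10e9           # at 10 Gb/s
   print(seconds / 86400, "days")  # ~9.3 days, i.e., less than 2 weeks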
Alternatively, as iSCSI has to perform efficiently, we will require
that the error detection capability of a selected protection
mechanism be very good, at least up to block lengths of 8k bytes
(64kbits).
The error detection capability should keep the probability of
undetected errors at values that would be "next-to-impossible". We
recognize, however, that such attributes are hard to quantify, so we
resorted to physics. The value 10^23 is roughly Avogadro's number,
while 10^45 is the number of atoms in the known Universe (or it was
many years ago when we read about it), and those are the bounds of
uncertainty we could live with (10^-23 at worst and 10^-45 if we can
afford it). For 8-KB blocks, the per-bit equivalents would be 10^-28
to 10^-50.
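(The per-bit equivalents follow by dividing the per-block targets by
the block size in bits; a quick Python check:)

   block_bits = 8 * 1024 * 8           # 8 KB = 65536 bits
   for per_block in (1e-23, 1e-45):
       print(per_block / block_bits)   # ~1.5e-28 and ~1.5e-50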
3. Background and Literature Survey
Each codeword of a binary (n,k) CRC code C consists of n = k+r bits.
The block of r parity bits is computed from the block of k
information bits. The code has a degree r generator polynomial g(x).
The code is linear in the sense that the bitwise addition of any two
codewords yields a codeword.
For the minimal m such that g(x) divides (x^m)-1, either n=m, and
the code C comprises the set D of all multiples of g(x) modulo
(x^m)-1, or n<m, and C is obtained from D by shortening each word of
D in m-n specific positions. (This also reduces the number of words,
since all-zero words are then discarded and duplicates are not
kept.)
Error detection at the receiving end is done by computing the parity
bits from the received information block and comparing them with the
received parity bits.
An undetected error occurs when the received word c' is a codeword,
but is different from the c that is transmitted.
This is only possible when the error pattern e=c'-c is a codeword by
itself (because of the linearity of the code). The performance of a
CRC code is measured by the probability Pud of undetected channel
errors.
Let Ai denote the number of codewords of weight i, (i.e., with i 1-
bits). For a binary symmetric channel (BSC), with sporadic,
independent bit error ratio of probability 0<=epsilon<=0.5, the
probability of undetected errors for the code C is thus given by:
Pud(C,epsilon) = Sigma[for i=d to n] (Ai*(epsilon^i)*(1-epsilon)^(n-i))
where d is the distance of the code: the minimal weight difference
between two codewords in C which, by the linearity of the code, is
also the minimal weight of any codeword in the code. Pud can also be
expressed by the weight distribution of the dual code: the set of
words each of which is orthogonal (bitwise AND yields an even number
of 1-bits) to every word of C. The fact that Pud can be computed
using the dual code is extremely important; while the number of
codewords in the code is 2^k, the number of codewords in the dual
code is 2^r; k is on the order of thousands, while r is on the order
of 16, 24, or 32. If we use Bi to denote the number of codewords of
weight i in the dual code, then ([LinCostello]):
Pud (C,epsilon) = 2^-r Sigma [for i=0 to n] Bi*(1-2*epsilon)^i -
(1-epsilon)^n
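Both expressions can be checked numerically. The following Python
sketch is our illustration, not material from the references; the
toy generator g(x) = x^3 + x + 1 with n=7, k=4, r=3 is assumed
purely to keep the enumeration small. It derives the weight
distributions of the code and of its dual by brute force and
verifies that the two formulas for Pud agree:

   N, K, R = 7, 4, 3
   GEN = 0b1011                          # g(x) = x^3 + x + 1

   def crc_parity(msg):
       """Remainder of msg(x)*x^R divided by g(x) over GF(2)."""
       reg = msg << R
       for i in range(N - 1, R - 1, -1):
           if reg & (1 << i):
               reg ^= GEN << (i - R)
       return reg

   # All 2^K codewords in systematic form, and their weights A_i.
   codewords = [(m << R) | crc_parity(m) for m in range(1 << K)]
   A = [0] * (N + 1)
   for c in codewords:
       A[bin(c).count("1")] += 1

   # The dual code: all N-bit words orthogonal to every codeword.
   dual = [w for w in range(1 << N)
           if all(bin(w & c).count("1") % 2 == 0 for c in codewords)]
   B = [0] * (N + 1)
   for w in dual:
       B[bin(w).count("1")] += 1

   def pud_direct(eps):                  # first formula (terms with i >= 1)
       return sum(A[i] * eps**i * (1 - eps)**(N - i)
                  for i in range(1, N + 1))

   def pud_dual(eps):                    # dual-code formula
       return (sum(B[i] * (1 - 2*eps)**i for i in range(N + 1)) / 2**R
               - (1 - eps)**N)

   for eps in (1e-3, 1e-2, 0.5):
       assert abs(pud_direct(eps) - pud_dual(eps)) < 1e-12

At epsilon=0.5, both expressions reduce to (2^-r)-(2^-n), 15/128 for
this toy code, which is the worst-case value discussed below.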
Wolf [Wolf94o] introduced an efficient algorithm for enumerating all
the codewords of a code and finding their weight distribution.
Wolf [Wolf82] found that, counter to what was assumed, (1) there
exist codes for which Pud(C,epsilon)>Pud(C,0.5) for some epsilon
not=0.5 and (2) Pud is not always increasing for 0<=epsilon<=0.5.
The value at what was assumed to be the worst case, epsilon=0.5, is
Pud(C,0.5) = (2^-r)-(2^-n). This stems from the fact that with
epsilon=0.5, all 2^n received words are equally likely, and
2^(n-r)-1 of them are accepted as error-free codewords although they
differ from the codeword transmitted. Previously, Pud had been
assumed to equal [2^(n-r)-1]/(2^n-1), the ratio of the number of
nonzero multiples of the generator polynomial of degree less than n
(each such multiple is an undetected error pattern) to the number of
possible error polynomials. With either formula, Pud approaches 2^-r
as n approaches infinity, but Wolf's formula is more accurate.
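(A quick numeric comparison of the two expressions, using exact
rational arithmetic in Python:)

   from fractions import Fraction

   r = 16
   for n in (32, 64, 512, 4096):
       wolf = Fraction(1, 2**r) - Fraction(1, 2**n)
       old  = Fraction(2**(n - r) - 1, 2**n - 1)
       print(n, float(wolf), float(old))  # both approach 2^-16 ~ 1.5e-5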
Wolf [Wolf94j] investigated the CCITT code of r=16 parity bits. This
code is a member of the family of (shortened codes of) BCH codes of
length 2^(r-1) -1 (r=16 in the CCITT 16-bit case) generated by a
polynomial of the form g(x) =(x+1)p(x) with p(x) being a primitive
polynomial of degree r-1 (=15 in this case). These codes have a BCH
design distance of 4. That is, the minimal distance between any two
codewords in the code is at least 4 bits (this follows from the fact
that the sequence of powers of alpha, the root of p(x), which are
roots of g(x), includes three consecutive powers -- alpha^0,
alpha^1, alpha^2). Hence, all error patterns of up to 3 bits are
detectable.
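Both structural claims are easy to check by direct computation. The
Python sketch below (our illustration) verifies that the CCITT
generator x^16 + x^12 + x^5 + 1 factors as (x+1)p(x), with p(x) =
x^15 + x^14 + x^13 + x^12 + x^4 + x^3 + x^2 + x + 1 as we computed
it (the factor is our computation, not quoted from the references),
and that no error pattern of weight 3 or less goes undetected at a
sample block length:

   from itertools import combinations

   R, G = 16, 0x11021       # g(x) = x^16 + x^12 + x^5 + 1 (CRC-CCITT)
   P = 0xF01F               # p(x) = x^15+x^14+x^13+x^12+x^4+x^3+x^2+x+1

   def gf2_mul(a, b):
       """Carry-less multiplication of GF(2) polynomials."""
       res = 0
       while b:
           if b & 1:
               res ^= a
           a <<= 1
           b >>= 1
       return res

   assert gf2_mul(0b11, P) == G          # g(x) = (x+1) * p(x)

   def gf2_mod(e, g=G, r=R):
       """Remainder of e(x) modulo g(x) over GF(2)."""
       for i in range(e.bit_length() - 1, r - 1, -1):
           if e & (1 << i):
               e ^= g << (i - r)
       return e

   # No pattern of weight <= 3 is a multiple of g(x); i.e., every
   # such error is detected, consistent with design distance 4.
   n = 64                                # a sample (short) block length
   for w in (1, 2, 3):
       for pos in combinations(range(n), w):
           assert gf2_mod(sum(1 << i for i in pos)) != 0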
Wolf found that different shortened versions of a given code, of the
same codeword length, perform the same, independent of which
specific positions are omitted from the original code. He also found
that for the unshortened codes, all primitive polynomials yield
codes of the same performance. For the shortened versions, however,
the choice of the primitive polynomial does make a difference. Wolf
[Wolf94j] found a primitive polynomial which (when multiplied by
x+1) yields a generating polynomial that outperforms the CCITT one
by an order of magnitude. For 32 redundancy bits, he found an
example of two polynomials whose probabilities of an undetected
burst of length 33 differ by 4 orders of magnitude.
It so happens that for some shortened codes, the minimum distance,
or the weight distribution, is better than for others derived from
different unshortened codes.
Baicheva, et al. [Baicheva] made a comprehensive comparison of
different generating polynomials of degree 16, of the form g(x) =
(x+1)p(x) and of other forms. They computed their Pud for code
lengths up to 1024 bits and evaluated whether each polynomial is
"good" -- Pud(C,epsilon) <= Pud(C,0.5) -- and "well-behaved" --
Pud(C,epsilon) increases with epsilon in the range (0,0.5). The
paper gives a comprehensive table that lists which of the
polynomials is good and which is well-behaved for different length
ranges.
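In the same spirit, these two properties can be tested numerically
for any code whose weight distribution is known. A small Python
sketch (our illustration; the weight distribution used is that of
the (7,4) toy code from the earlier example, A3=7, A4=7, A7=1):

   A = {3: 7, 4: 7, 7: 1}    # nonzero-weight codeword counts, n = 7
   N = 7

   def pud(eps):
       return sum(a * eps**i * (1 - eps)**(N - i) for i, a in A.items())

   grid = [x / 1000 for x in range(1, 501)]
   good = all(pud(e) <= pud(0.5) for e in grid)
   well_behaved = all(pud(a) <= pud(b) for a, b in zip(grid, grid[1:]))
   print("good:", good, "well-behaved:", well_behaved)  # both True here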
For a single burst error, Wolf [Wolf94j] suggested the model of a
(b:p) burst: the errors occur only within a span of b bits, and
within that span they occur randomly, with a bit error probability
0 <= p <= 1.
For p=0.5, which used to be considered the worst case, it is well
known [Wolf94j] that the probability of an undetected single burst
error of length b <= r is 0, of length b=r+1 is 2^-(r-1), and of
length b > r+1 is 2^-r, independently of the choice of the primitive
polynomial.
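These burst results are easy to verify exhaustively for a small
code. The Python sketch below (our illustration; the degree-8
generator x^8 + x^2 + x + 1 is an assumption chosen for speed)
enumerates all bursts of exact length b -- first and last bits of
the span set, the b-2 middle bits free -- and counts those whose
error polynomial the generator divides:

   R, G = 8, 0x107                # toy g(x) = x^8 + x^2 + x + 1

   def gf2_mod(e, g=G, r=R):
       for i in range(e.bit_length() - 1, r - 1, -1):
           if e & (1 << i):
               e ^= g << (i - r)
       return e

   def undetected_fraction(b):
       hits = 0
       for mid in range(1 << (b - 2)):
           e = (1 << (b - 1)) | (mid << 1) | 1   # burst of exact length b
           if gf2_mod(e) == 0:                   # divisible => undetected
               hits += 1
       return hits / (1 << (b - 2))

   for b in (4, 8, R + 1, R + 2, R + 5):
       print(b, undetected_fraction(b))
   # 0 for b <= r; 2^-(r-1) = 1/128 for b = r+1; 2^-r = 1/256 beyond.

With p=0.5, each of the 2^(b-2) burst patterns is equally likely, so
the counted fractions are exactly the quoted probabilities.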
With Wolf's definition, where p can differ from 0.5, it was indeed
found that for a given b there are values of p, different from 0.5,
that maximize the probability of an undetected (b:p) burst error.
Wolf proved that for a given code, for all b in the range r < b < n,
the conditional probability of undetected error for the (n, n-r)
code, given that a (b:p) burst occurred, is equal to the probability
of undetected errors for the same code (the same generating
polynomial), shortened to block length b, when this shortened code is
used with a binary symmetric channel with channel (sporadic,
independent) bit error probability p.
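This equivalence can be verified by direct enumeration on a small
code. In the Python sketch below (our construction, not from the
references; the toy degree-8 generator is again an assumption), the
probability mass of undetected (b:p) bursts at any span offset
inside a longer block equals the BSC Pud of the same generator
shortened to block length b:

   R, G = 8, 0x107              # toy g(x) = x^8 + x^2 + x + 1

   def gf2_mod(e, g=G, r=R):
       for i in range(e.bit_length() - 1, r - 1, -1):
           if e & (1 << i):
               e ^= g << (i - r)
       return e

   def burst_pud(b, p, offset):
       """Probability of an undetected (b:p) burst at a given offset."""
       total = 0.0
       for pat in range(1, 1 << b):          # nonzero patterns in the span
           if gf2_mod(pat << offset) == 0:   # x^offset * pat(x) in the code
               w = bin(pat).count("1")
               total += p**w * (1 - p)**(b - w)
       return total

   def shortened_pud(b, p):
       """Pud of the generator shortened to length b, on a BSC(p)."""
       total = 0.0
       for m in range(1, 1 << (b - R)):      # nonzero messages
           c = (m << R) | gf2_mod(m << R)    # systematic codeword
           w = bin(c).count("1")
           total += p**w * (1 - p)**(b - w)
       return total

   b, p = 12, 0.01
   for offset in (0, 5, 17):
       assert abs(burst_pud(b, p, offset) - shortened_pud(b, p)) < 1e-15

The result is independent of the offset because g(x) has a nonzero
constant term, so g(x) divides x^offset * e(x) exactly when it
divides e(x).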
For the CRC32 used by IEEE 802.3, Fujiwara et al. [Fujiwara89]
computed the weights of all words of all shortened versions of the
code of 32 check bits. This code is generated by a primitive
polynomial of degree 32:

g(x) = x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8
       + x^7 + x^5 + x^4 + x^2 + x + 1

and hence its designed distance is only 3. This distance holds for
code lengths up to 2^32-1 bits.
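(Collecting the exponents above into the 32-bit constant familiar
from 802.3 implementations -- the x^32 term is implicit in a 32-bit
register -- gives the well-known value 0x04C11DB7; a quick Python
check:)

   exps = [32, 26, 23, 22, 16, 12, 11, 10, 8, 7, 5, 4, 2, 1, 0]
   poly = sum(1 << e for e in exps)
   assert poly & 0xFFFFFFFF == 0x04C11DB7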
However, the frame format of the MAC (Media Access Control) of the
data link layer in IEEE 802.3, as well as that of the data link
layer for the Ethernet (1980), forbids lengths exceeding 12,144
bits; thus, only such bounded lengths are investigated in
[Fujiwara89]. For the shortened versions, the minimum distance was
found to be 4 for lengths 4096 to 12,144; 5 for lengths 512 to 2048;
and even 15 for lengths 33 through 42. A chart of calculated Pud
values is presented in [Fujiwara89], from which we can see that for
codes of length 12,144 and a BSC of epsilon = 10^-5 to 10^-4,
Pud(12144,epsilon) = 10^-14 to 10^-13, while for epsilon = 10^-4 to
10^-3:

   Pud(512,epsilon)  = 10^-15
   Pud(1024,epsilon) = 10^-14
   Pud(2048,epsilon) = 10^-13
   Pud(4096,epsilon) = 10^-12 to 10^-11
   Pud(8192,epsilon) = 10^-10  (rather close to 2^-32)
Castagnoli, et al. [Castagnoli93] extended Fujiwara's technique for
efficiently calculating the minimum distance through the weight
distribution of the dual code, and explored a large number of CRC
codes with 24 and 32 redundancy bits. They explored several codes
built as products of several lower-degree irreducible polynomials.
In the popular class of (x+1)*deg31-irreducible-polynomial they