   distribution.  Consider the distribution of the parity function of N
   bit samples.  The probabilities that the parity will be one or zero
   will be the sum of the odd or even terms in the binomial expansion of
   (p + q)^N, where p = 0.5 + e, the probability of a one, and q = 0.5 -
   e, the probability of a zero.

   These sums can be computed easily as

        1/2 * ( (p + q)^N + (p - q)^N )

   and

        1/2 * ( (p + q)^N - (p - q)^N ).

   (Which one corresponds to the probability that the parity will be 1
   depends on whether N is odd or even.)

   Since p + q = 1 and p - q = 2e, these expressions reduce to

        1/2 * ( 1 + (2e)^N )

   and

        1/2 * ( 1 - (2e)^N ).

   Neither of these will ever be exactly 0.5 unless e is zero, but we
   can bring them arbitrarily close to 0.5.  If we want the
   probabilities to be within some delta d of 0.5, then

        0.5 + 0.5 * (2e)^N  <  0.5 + d.

Eastlake, Crocker & Schiller                                   [Page 11]

RFC 1750        Randomness Recommendations for Security    December 1994

   Solving for N yields N > log(2d)/log(2e).  (Note that 2e is less
   than 1, so its log is negative.  Division by a negative number
   reverses the sense of an inequality.)

   The following table gives the length of the string which must be
   sampled for various degrees of skew in order to come within 0.001 of
   a 50/50 distribution.
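The bound just derived can be evaluated directly.  The following Python sketch (an illustration added here, not part of the RFC; the helper name is invented) reproduces the figures tabulated below:

```python
import math

def parity_samples_needed(e, d):
    """Smallest N such that the parity of N bits, each 1 with
    probability 0.5 + e, is within d of a 50/50 distribution.

    From the derivation above we need 0.5 * (2e)^N < d, i.e.
    N > log(2d)/log(2e).  For 0 < e < 0.5, log(2e) is negative,
    which is what flips the inequality.
    """
    n = math.log(2 * d) / math.log(2 * e)
    return math.floor(n) + 1   # smallest integer strictly greater than n
```

For example, `parity_samples_needed(0.49, 0.001)` gives 308, matching the last row of the table.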
                      +---------+--------+-------+
                      | Prob(1) |    e   |    N  |
                      +---------+--------+-------+
                      |   0.5   |  0.00  |    1  |
                      |   0.6   |  0.10  |    4  |
                      |   0.7   |  0.20  |    7  |
                      |   0.8   |  0.30  |   13  |
                      |   0.9   |  0.40  |   28  |
                      |   0.95  |  0.45  |   59  |
                      |   0.99  |  0.49  |  308  |
                      +---------+--------+-------+

   The last entry shows that even if the distribution is skewed 99% in
   favor of ones, the parity of a string of 308 samples will be within
   0.001 of a 50/50 distribution.

5.2.2 Using Transition Mappings to De-Skew

   Another technique, originally due to von Neumann [VON NEUMANN], is
   to examine a bit stream as a sequence of non-overlapping pairs,
   discard any 00 or 11 pairs found, and interpret 01 as a 0 and 10 as
   a 1.  Assume the probability of a 1 is 0.5+e and the probability of
   a 0 is 0.5-e, where e is the eccentricity of the source as described
   in the previous section.  Then the probability of each pair is as
   follows:

            +------+-----------------------------------------+
            | pair |            probability                  |
            +------+-----------------------------------------+
            |  00  | (0.5 - e)^2          =  0.25 - e + e^2  |
            |  01  | (0.5 - e)*(0.5 + e)  =  0.25     - e^2  |
            |  10  | (0.5 + e)*(0.5 - e)  =  0.25     - e^2  |
            |  11  | (0.5 + e)^2          =  0.25 + e + e^2  |
            +------+-----------------------------------------+

   This technique will completely eliminate any bias, but at the
   expense of taking an indeterminate number of input bits for any
   particular desired number of output bits.
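The pairing rule above is small enough to sketch directly.  The following Python function is an illustration added here, not part of the RFC, and its name is arbitrary; it emits the first bit of each unequal non-overlapping pair:

```python
def von_neumann_deskew(bits):
    """De-skew a bit stream by von Neumann's transition mapping:
    split the stream into non-overlapping pairs, discard 00 and 11,
    and map 01 -> 0 and 10 -> 1 (i.e., keep the first bit of each
    unequal pair)."""
    out = []
    for i in range(0, len(bits) - 1, 2):   # step by 2: non-overlapping pairs
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)                  # 01 -> 0, 10 -> 1
    return out
```

For example, `von_neumann_deskew([0, 1, 1, 0, 1, 1, 0, 0])` keeps the pairs 01 and 10, drops 11 and 00, and yields `[0, 1]`.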
   The probability of any particular pair being discarded is 0.5 +
   2e^2, so the expected number of input bits to produce X output bits
   is X/(0.25 - e^2).

   This technique assumes that the bits are from a stream where each
   bit has the same probability of being a 0 or 1 as any other bit in
   the stream and that bits are not correlated, i.e., that the bits are
   independent and identically distributed.  If alternate bits were
   from two correlated sources, for example, the above analysis breaks
   down.

   The above technique also provides another illustration of how a
   simple statistical analysis can mislead if one is not always on the
   lookout for patterns that could be exploited by an adversary.  If
   the algorithm were mis-read slightly so that overlapping successive
   bit pairs were used instead of non-overlapping pairs, the
   statistical analysis given is the same; however, instead of
   providing an unbiased uncorrelated series of random 1's and 0's, it
   instead produces a totally predictable sequence of exactly
   alternating 1's and 0's.

5.2.3 Using FFT to De-Skew

   When real world data consists of strongly biased or correlated bits,
   it may still contain useful amounts of randomness.  This randomness
   can be extracted through use of the discrete Fourier transform or
   its optimized variant, the FFT.

   Using the Fourier transform of the data, strong correlations can be
   discarded.  If adequate data is processed and remaining correlations
   decay, spectral lines approaching statistical independence and
   normally distributed randomness can be produced [BRILLINGER].

5.2.4 Using Compression to De-Skew

   Reversible compression techniques also provide a crude method of
   de-skewing a skewed bit stream.
   This follows directly from the definition of reversible compression
   and the formula in Section 2 above for the amount of information in
   a sequence.  Since the compression is reversible, the same amount of
   information must be present in the shorter output as was present in
   the longer input.  By the Shannon information equation, this is only
   possible if, on average, the probabilities of the different shorter
   sequences are more uniformly distributed than were the probabilities
   of the longer sequences.  Thus the shorter sequences are de-skewed
   relative to the input.

   However, many compression techniques add a somewhat predictable
   preface to their output stream and may insert such a sequence again
   periodically in their output or otherwise introduce subtle patterns
   of their own.  They should be considered only a rough technique
   compared with those described above or in Section 6.1.2.  At a
   minimum, the beginning of the compressed sequence should be skipped
   and only later bits used for applications requiring random bits.

5.3 Existing Hardware Can Be Used For Randomness

   As described below, many computers come with hardware that can, with
   care, be used to generate truly random quantities.

5.3.1 Using Existing Sound/Video Input

   Increasingly, computers are being built with inputs that digitize
   some real world analog source, such as sound from a microphone or
   video input from a camera.  Under appropriate circumstances, such
   input can provide reasonably high quality random bits.  The "input"
   from a sound digitizer with no source plugged in, or from a camera
   with the lens cap on, is essentially thermal noise, if the system
   has enough gain to detect anything.

   For example, on a SPARCstation, one can read from the /dev/audio
   device with nothing plugged into the microphone jack.
   Such data is essentially random noise, although it should not be
   trusted without some checking in case of hardware failure.  It will,
   in any case, need to be de-skewed as described elsewhere.

   Combining this with compression to de-skew, one can, in UNIXese,
   generate a huge amount of medium quality random data by doing

        cat /dev/audio | compress - >random-bits-file

5.3.2 Using Existing Disk Drives

   Disk drives have small random fluctuations in their rotational speed
   due to chaotic air turbulence [DAVIS].  By adding low level disk
   seek time instrumentation to a system, a series of measurements can
   be obtained that include this randomness.  Such data is usually
   highly correlated, so significant processing is needed, including
   FFT (see Section 5.2.3).  Nevertheless, experimentation has shown
   that, with such processing, disk drives easily produce 100 bits a
   minute or more of excellent random data.

   Partly offsetting this need for processing is the fact that disk
   drive failure will normally be rapidly noticed.  Thus, problems with
   this method of random number generation due to hardware failure are
   very unlikely.

6. Recommended Non-Hardware Strategy

   What is the best overall strategy for meeting the requirement for
   unguessable random numbers in the absence of a reliable hardware
   source?  It is to obtain random input from a large number of
   uncorrelated sources and to mix them with a strong mixing function.
   Such a function will preserve the randomness present in any of the
   sources even if other quantities being combined are fixed or easily
   guessable.
   This may be advisable even with a good hardware source, as hardware
   can also fail, though this should be weighed against any increase in
   the chance of overall failure due to added software complexity.

6.1 Mixing Functions

   A strong mixing function is one which combines two or more inputs
   and produces an output where each output bit is a different complex
   non-linear function of all the input bits.  On average, changing any
   input bit will change about half the output bits.  But because the
   relationship is complex and non-linear, no particular output bit is
   guaranteed to change when any particular input bit is changed.

   Consider the problem of converting a stream of bits that is skewed
   towards 0 or 1 to a shorter stream which is more random, as
   discussed in Section 5.2 above.  This is simply another case where a
   strong mixing function is desired, mixing the input bits to produce
   a smaller number of output bits.  The technique given in Section
   5.2.1 of using the parity of a number of bits is simply the result
   of successively Exclusive Or'ing them, which is examined as a
   trivial mixing function immediately below.  Use of stronger mixing
   functions to extract more of the randomness in a stream of skewed
   bits is examined in Section 6.1.2.

6.1.1 A Trivial Mixing Function

   A trivial example for single bit inputs is the Exclusive Or
   function, which is equivalent to addition without carry, as shown in
   the table below.  This is a degenerate case in which the one output
   bit always changes for a change in either input bit.  But, despite
   its simplicity, it will still provide a useful illustration.
                   +-----------+-----------+----------+
                   |  input 1  |  input 2  |  output  |
                   +-----------+-----------+----------+
                   |     0     |     0     |     0    |
                   |     0     |     1     |     1    |
                   |     1     |     0     |     1    |
                   |     1     |     1     |     0    |
                   +-----------+-----------+----------+

   If inputs 1 and 2 are uncorrelated and combined in this fashion,
   then the output will be an even better (less skewed) random bit than
   the inputs.  If we assume an "eccentricity" e as defined in Section
   5.2 above, then the output eccentricity relates to the input
   eccentricities as follows:

        e_output  =  2 * e_input1 * e_input2

   Since e is never greater than 1/2, the eccentricity is always
   improved except in the case where at least one input is a totally
   skewed constant.
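This relation can be checked numerically.  The following Python sketch is an illustration added here, not part of the RFC, and its name is invented; note that when both inputs are skewed toward 1 the XOR output is actually skewed toward 0, so it is the magnitude of the deviation that equals 2 * e_input1 * e_input2:

```python
def xor_eccentricity(e1, e2):
    """Eccentricity magnitude of the XOR of two independent biased
    bits, each input being 1 with probability 0.5 + e.

    XOR outputs 1 exactly when one input is 1 and the other is 0, so
        P(out = 1) = p1*(1 - p2) + (1 - p1)*p2 = 0.5 - 2*e1*e2,
    giving |P(out = 1) - 0.5| = 2 * e1 * e2.
    """
    p1, p2 = 0.5 + e1, 0.5 + e2
    p_out = p1 * (1 - p2) + (1 - p1) * p2   # exactly one input is 1
    return abs(p_out - 0.5)
```

For example, `xor_eccentricity(0.30, 0.40)` gives 0.24, matching the corresponding entry in the table below.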
   This is illustrated in the following table, where the top and left
   side values are the two input eccentricities and the entries are the
   output eccentricity:

     +--------+--------+--------+--------+--------+--------+--------+
     |    e   |  0.00  |  0.10  |  0.20  |  0.30  |  0.40  |  0.50  |
     +--------+--------+--------+--------+--------+--------+--------+
     |  0.00  |  0.00  |  0.00  |  0.00  |  0.00  |  0.00  |  0.00  |
     |  0.10  |  0.00  |  0.02  |  0.04  |  0.06  |  0.08  |  0.10  |
     |  0.20  |  0.00  |  0.04  |  0.08  |  0.12  |  0.16  |  0.20  |
     |  0.30  |  0.00  |  0.06  |  0.12  |  0.18  |  0.24  |  0.30  |
     |  0.40  |  0.00  |  0.08  |  0.16  |  0.24  |  0.32  |  0.40  |
     |  0.50  |  0.00  |  0.10  |  0.20  |  0.30  |  0.40  |  0.50  |
     +--------+--------+--------+--------+--------+--------+--------+

   However, keep in mind that the above calculations assume that the
   inputs are not correlated.  If the inputs were, say, the parity of
