be potentially rather complex, especially if the symbols that are
scrambled are individual bits.
Permutation, when used in combination with substitution, is very useful
in further increasing the complexity of a cipher, since the two algorithms
act in different ways to baffle the interceptor. This is significant,
since the security of a substitution cipher cannot be increased by simply
applying it again with a different key. The result in such a case would
be just another substitution cipher of the same complexity.
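As a purely illustrative sketch in Python (the 4-byte block, the shuffled
substitution table, and the permutation order below are arbitrary choices,
not taken from any real cipher), one substitution pass followed by one
permutation pass might look like this:

import random

# Arbitrary illustrative tables (not taken from any real cipher).
rng = random.Random(1)
SBOX = list(range(256))
rng.shuffle(SBOX)            # substitution: maps each byte value to another byte value
PERM = [2, 0, 3, 1]          # permutation: source position of each byte in a 4-byte block

def one_round(block):
    """Substitute every byte, then rearrange the byte positions."""
    substituted = [SBOX[b] for b in block]
    return bytes(substituted[PERM[i]] for i in range(len(PERM)))

print(one_round(b"test").hex())

Repeating the substitution step alone, even with a different table, would
collapse into a single equivalent substitution; alternating substitution
with permutation does not.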
C. Noise Addition
Noise addition can either be a way of enhancing another encryption
scheme, or of hiding the very existence of a message. For example,
a grid can be set up on a piece of paper where only certain positions
are significant. The characters at those positions, read in order, spell
out the secret message. The rest of the paper is then filled with innocuous
text that incorporates those characters but says something entirely different,
like a letter to someone's mother. The same thing can be done electronically,
by defining a block of bits with only part of the bits significant. The
rest of the bits are then filled with truly random noise, like the
output of a Geiger counter. This method can be very effective, especially
if the positions of the information-bearing bits vary from block to
block in a pseudorandom manner, and if there is a relatively large
proportion of truly random noise. The obvious disadvantage to this
kind of scheme is that it makes very poor use of communications channels
and data storage facilities.
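A rough electronic sketch of this idea, assuming (for illustration only)
64-bit blocks carrying 8 information bits each, with the bit positions drawn
from a seeded pseudorandom generator and the remaining bits filled from
os.urandom as a stand-in for a true noise source:

import os
import random

BLOCK_BITS = 64     # total bits per block (arbitrary for illustration)
INFO_BITS = 8       # information-bearing bits per block

def hide_byte(value, block_seed):
    """Spread the 8 bits of one byte over a 64-bit block of random filler."""
    positions = random.Random(block_seed).sample(range(BLOCK_BITS), INFO_BITS)
    block = int.from_bytes(os.urandom(BLOCK_BITS // 8), "big")   # filler noise
    for i, pos in enumerate(positions):
        bit = (value >> i) & 1
        block = (block & ~(1 << pos)) | (bit << pos)             # overwrite that position
    return block

def recover_byte(block, block_seed):
    """The receiver derives the same positions from the shared seed."""
    positions = random.Random(block_seed).sample(range(BLOCK_BITS), INFO_BITS)
    return sum(((block >> pos) & 1) << i for i, pos in enumerate(positions))

hidden = hide_byte(ord("A"), block_seed=42)
assert recover_byte(hidden, block_seed=42) == ord("A")

Varying the block seed from block to block gives the pseudorandom movement
of the significant positions described above.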
One useful variation on the noise addition method is to multiplex
several encrypted streams of data (encrypted with another method)
together with some pseudorandom interleaving. Since the other encrypted
streams of data act like the random noise added to any one stream
of data, this is a good way to improve security when a large volume
of encrypted data must be sent through the same channel.
D. Feedback & Chaining
A block cipher like DES or MPJ, when applied directly to an input
file with a highly repetitive structure (i.e., lots of spaces between
columns), will also display some of that structure. Although it may
not be possible to determine the exact contents of what has been encrypted,
it may be rather easy to determine something about the nature of what
has been encrypted. To deny the interceptor even this information,
and to further increase the complexity of the encrypted information,
feedback and chaining may be used. Chaining refers to making the
results of the encryption of one block dependent on previous results. This
is usually done in one of two ways. One is by adding the plain text
from the last block to the cipher text of this block, modulo two. The
other is by adding the cipher text from the preceding block to the
cipher text of the current block, modulo two. These are referred
to as plain text feedback and cipher text feedback, respectively.
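A minimal sketch of the two combining rules, assuming an 8-byte block and
using a trivial keyed XOR purely as a stand-in for a real block cipher
such as DES:

BLOCK = 8   # bytes per block; arbitrary for illustration

def toy_block_cipher(block, key):
    """Stand-in for a real block cipher such as DES: here just XOR with the key."""
    return bytes(b ^ k for b, k in zip(block, key))

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def plain_text_feedback(blocks, key):
    """Add (mod 2) the previous block's plain text into each block's cipher text."""
    out, prev_plain = [], bytes(BLOCK)
    for p in blocks:
        out.append(xor(toy_block_cipher(p, key), prev_plain))
        prev_plain = p
    return out

def cipher_text_feedback(blocks, key):
    """Add (mod 2) the preceding cipher text block into the current cipher text block."""
    out, prev_cipher = [], bytes(BLOCK)
    for p in blocks:
        c = xor(toy_block_cipher(p, key), prev_cipher)
        out.append(c)
        prev_cipher = c
    return out

blocks = [b"AAAAAAAA"] * 3                   # highly repetitive plain text
key = bytes(range(BLOCK))
print([c.hex() for c in cipher_text_feedback(blocks, key)])   # identical blocks now differ

The receiver undoes the chaining by XORing out the feedback term and then
applying the block decryption, so the message is still recoverable.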
1. Plain Text Feedback
Plain text feedback has the property that any errors in transmission
get propagated clear through the rest of the message. This may be
a desirable property if it is necessary to detect any tampering with
the message, but it makes it difficult to use real (error-prone) channels. If
the error propagation property is needed to detect tampering and real
channels must also be used, then the message should be transmitted
using an error detection and correction protocol of some sort.
2. Cipher Text Feedback
Cipher text feedback also causes some error propagation, but not clear
through to the end of a message. An error in one block will cause
an error in that block in the bit positions where the error occurred
and in all of the next block, but the receiving station can recover
all subsequent error-free blocks. This method is a good compromise
between security and error recovery. It is still desirable to wrap
encrypted data in an error detection and correction protocol to avoid
having a one bit error wipe out many bits.
E. Analog Encryption
There are several methods of analog encryption. None of them are
as secure as digital methods can be. Analog encryption is generally
based on spectrum inversion, spectrum scrambling, time slice scrambling,
and (for video signals) suppression of synchronization information. These
elements are also used in combination [LOD]. These methods are commonly
used by cable TV companies (and sometimes on satellite channels) to
ensure that people only get the programming that they have paid for. Articles
on how to build devices to defeat some of these things appear periodically
in electronic hobbyist magazines.
The state of the art in current use for protection of satellite TV
signals is the General Instruments Video Cipher II. This device uses
digital (DES) encryption of the audio information and a partially
analog encryption of the video signal. This device is a deterrent
to those who receive satellite TV without paying a cable TV operator,
but it has been broken by a hacker who figured out that it was possible
to modify the microcode in the device so that a key purchased for
one machine would work on all of them with modified code. This is,
of course, illegal, but it allows one person to pay for a service
and many people to use it for free [DOH]. The problem here is not
with the encryption algorithm, but with the key management.
IV. FACTORS RELATED TO ENCRYPTION
A. Change of Language
Although it is not really encryption, the use of a different language
from what the interceptor is likely to know does increase the difficulty
of cryptanalysis. For example, I know that it would be far more difficult
for me to solve even a simple Caesar substitution cipher in Chinese
than in English, Spanish, or German. This principle was used effectively
by the United States against Japan in World War II by using the Navajo
language for spoken radio communications [KAH]. Since the Navajo
language had no written form, and was only spoken by a small group
of Native Americans, there were no Japanese who knew the language. The
Japanese never did figure that ``code'' out. Thus, the Navajo ``Code
Talkers'' made a great contribution to their country.
Part of the reason for the success of the Navajo Code Talkers was
that the Japanese had no idea what was going on. Since this success
is now well known, it is difficult to determine the potential success
of a repeat of this kind of approach. There are still some languages
that are spoken only by a very few people in the world, some of which
have no written form. These could be used to increase the effective
``key space.''
Although a change of language was a great victory for the Navajo People
and the United States of America during World War II, it is difficult
to adapt this success to computerized communication methods.
B. Digitization
The very act of digitizing an audio or video signal provides some
protection from the casual eavesdropper. Although this provides no
protection against a determined snooper, it is an appropriate level
of additional security for protecting the privacy of things like mobile
telephone conversations. Of course, once an analog signal is digitized,
there is a wide range of digital encryption methods available. Some
methods would be overkill, however, since the same communication
would probably be transmitted in the clear over wire and/or microwave
channels as well as by radio between the car and the stationary cellular
transceiver.
Although there have been some ill-conceived efforts by the mobile telephone
industry to legislate privacy for such applications by saying that
it is illegal to listen to someone else's phone calls around 800 MHz
(where cellular phones usually operate), this kind of thing is unenforceable.
Worse yet, such legislation provides a false sense of security in
a world where scanners and other receivers that operate in that range
are readily available.
C. Compression
Data compression itself tends to obscure things by taking, for example,
an ASCII text file in English and reducing it to a binary file that
is not as easy to read. There is a more important effect of data
compression, however. The removal of the natural redundancy of the
plain text data before encrypting it with some kind of encryption
algorithm makes it much more difficult to cryptanalyze the cipher
text. In the extreme case where all of the redundancy of a message
is removed, all messages of a given length would be meaningful, and
it would be impossible for the cryptanalyst to determine which one
was intended.
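A small sketch of the compress-then-encrypt ordering, using the standard
zlib module for compression and a repeating XOR key purely as a placeholder
for a real encryption step:

import zlib
from itertools import cycle

def compress_then_encrypt(text, key):
    """Remove the natural redundancy first, then encrypt the compressed bytes."""
    compressed = zlib.compress(text.encode("utf-8"))
    return bytes(b ^ k for b, k in zip(compressed, cycle(key)))

def decrypt_then_decompress(cipher, key):
    compressed = bytes(b ^ k for b, k in zip(cipher, cycle(key)))
    return zlib.decompress(compressed).decode("utf-8")

key = b"placeholder key"          # stand-in for a real key stream
text = "the quick brown fox jumps over the lazy dog " * 20
assert decrypt_then_decompress(compress_then_encrypt(text, key), key) == text

The repeating XOR key is of course weak by itself; the point here is only
the ordering of the two steps.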
D. Multiplexing
Multiplexed signals are less readable to the casual observer, but
standard multiplexing offers no real increase in cryptographic strength. It
can, however, increase the strength of multiple streams of encrypted
data when the multiplexing is done with a pseudorandom interleave
(as discussed above under noise addition). This makes the multiplexing
and demultiplexing processes more complex with respect to synchronization
and timing, and is likely to introduce more delay into the system
than more straightforward multiplexing arrangements.
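A sketch of such a pseudorandom interleave, assuming two already-encrypted
byte streams and a seed shared by both ends; Python's random module stands
in for whatever pseudorandom source the multiplexer would actually use:

import random

def interleave(streams, seed):
    """Merge several (already encrypted) byte streams in a pseudorandom order."""
    rng = random.Random(seed)
    queues = [list(s) for s in streams]
    out = []
    while any(queues):
        i = rng.choice([j for j, q in enumerate(queues) if q])
        out.append(queues[i].pop(0))
    return bytes(out)

def deinterleave(muxed, seed, lengths):
    """Recover the streams by re-deriving the same order from the shared seed."""
    rng = random.Random(seed)
    remaining = list(lengths)
    streams = [[] for _ in lengths]
    for byte in muxed:
        i = rng.choice([j for j, r in enumerate(remaining) if r])
        streams[i].append(byte)
        remaining[i] -= 1
    return [bytes(s) for s in streams]

muxed = interleave([b"stream-one", b"stream-two"], seed=7)
assert deinterleave(muxed, seed=7, lengths=[10, 10]) == [b"stream-one", b"stream-two"]

The bookkeeping needed to keep both ends synchronized is exactly the added
complexity and delay mentioned above.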
V. COMPARISON OF SELECTED ALGORITHMS
A. One-Time Key Tape
The one-time key tape, also called the one-time pad, has a tremendous
advantage. It is the only commonly used encryption method that is
provably secure. Proving the security of most other systems generally
reduces to an impossible negative proof: an attempt to prove
the lack of a method to defeat it.
The way the one-time key tape works is to add each character of the
message to a character of the key modulo the alphabet size. When
working with any binary stream of data, this is easy to implement
by exclusive-oring the input stream with the key stream. At the receiving
end, the same thing is done to decrypt the data. Since P + K + K
= P (modulo 2), the original stream is recovered.
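In the binary case the whole scheme is a single exclusive-or, as in this
short sketch (the key would in practice come from a truly random source,
be as long as the message, and never be reused):

import os

def one_time_pad(data, key):
    """Exclusive-or a data stream with a key stream of the same length.
    The same function encrypts and decrypts, since P + K + K = P (mod 2)."""
    return bytes(d ^ k for d, k in zip(data, key))

plain = b"ATTACK AT DAWN"
key = os.urandom(len(plain))        # one-time key, as long as the message
cipher = one_time_pad(plain, key)
assert one_time_pad(cipher, key) == plain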
As implied by the name, it is important that each key be used only
once. If it were used more than once, then the system would be vulnerable
to a known plain text attack, where the interceptor has some cipher
text and the corresponding plain text. The key is obtained from the
boolean identity K = (P + K) + P, where K is the key, P is the plain
text, P + K is the cipher text, and all additions are modulo the alphabet
size (usually 2). The
key thus recovered would then be used to decipher the next message
that used it.
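The reuse attack follows directly from that identity; in this sketch (the
messages are invented for illustration) one known plain text exposes the
pad, which then unlocks a second message encrypted with the same pad:

import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

pad = os.urandom(32)                               # the key, wrongly used twice
c1 = xor(b"KNOWN PLAIN TEXT MESSAGE", pad)
c2 = xor(b"SECOND SECRET MESSAGE", pad)

recovered = xor(c1, b"KNOWN PLAIN TEXT MESSAGE")   # K = (P + K) + P
print(xor(c2, recovered))                          # reveals the second message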
According to C. E. Shannon [SHA], for a cipher to be provably secure,
the number of keys must be as great as the number of potential messages. The
number of keys he refers to, of course, is the number of keys that
yield a distinctly different transformation of the message text into
cipher text.
This method of encryption is provably secure, since an exhaustive
search of the keys applied to any cipher text of a given length will
yield all possible plain text messages of the same length. Since
there are many messages of the same length that make sense, many of
which have totally contradictory or unrelated meanings, the cryptanalyst
has no way of knowing which one was intended unless he has the key.
Naturally, any method of encryption this secure has a price. The
price is the sheer volume of keying material that must be kept secure
and managed. For any communication network or data storage system
that handles large amounts of data, key management becomes a nightmare
with this system.
B. Linear Shift Register Feedback
Linear shift registers with selected feedback taps are commonly used
to generate pseudorandom sequences for such things as spread spectrum
communications. They could be (and are, in some cases) applied to
cryptography by exclusive-oring the pseudorandom stream with the plain
text stream.
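A sketch of such a key stream generator; the 16-bit register length, the
tap positions, and the initial fill below are arbitrary illustrative choices:

def lfsr_bits(state, taps, n_bits):
    """Generate n_bits from a 16-bit linear feedback shift register.
    state is the initial register contents; taps are the bit positions fed back."""
    bits = []
    for _ in range(n_bits):
        bits.append(state & 1)                      # output the low-order bit
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1            # XOR of the tapped positions
        state = (state >> 1) | (feedback << 15)     # shift and insert the feedback bit
    return bits

def stream_encrypt(data, state, taps):
    """Exclusive-or the data with the LFSR key stream, one byte at a time."""
    keybits = lfsr_bits(state, taps, 8 * len(data))
    keybytes = bytes(sum(keybits[8 * i + j] << j for j in range(8)) for i in range(len(data)))
    return bytes(d ^ k for d, k in zip(data, keybytes))

cipher = stream_encrypt(b"hello", state=0xACE1, taps=[0, 2, 3, 5])
assert stream_encrypt(cipher, state=0xACE1, taps=[0, 2, 3, 5]) == b"hello"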
The main weakness of this approach is the known plain text attack, in
which cipher text and its corresponding plain text are both available.
If the cryptanalyst obtains the plain text that
came from a certain cipher text, then he can recreate a portion of
the pseudorandom stream by exclusive-oring the two together. The
linear shift register feedback function can then be expressed as a
system of linear boolean equations that can be solved from the portion
of the cycle that was recovered. A solution of this system of equations
is possible with a number of bits at most a few times the length of
the shift register [DEN]. Since this amount of plain text
required to break the cipher is so much less than the length of the
pseudorandom sequence, it is likely that this solution can then be
applied to additional cipher text to recover it, too.
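A hedged sketch of the attack, assuming the register length is known or
guessed (more general techniques such as the Berlekamp-Massey algorithm do
not need that assumption): XOR of the known plain text with the matching
cipher text exposes a run of key stream bits, and a small system of boolean
equations then yields the recurrence (equivalently, the feedback polynomial)
that generates the rest of the stream.

def recover_recurrence(bits, n):
    """Given at least 2n consecutive key stream bits from an n-bit register,
    solve k[t] = c[1]k[t-1] + ... + c[n]k[t-n] (mod 2) for the coefficients c."""
    rows = [[bits[t - i] for i in range(1, n + 1)] + [bits[t]] for t in range(n, 2 * n)]
    for col in range(n):                       # Gauss-Jordan elimination over GF(2)
        pivot = next((r for r in range(col, n) if rows[r][col]), None)
        if pivot is None:
            continue                           # degenerate segment; more key stream needed
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(n):
            if r != col and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[col])]
    return [rows[i][-1] for i in range(n)]

# Key stream bits; in practice these come from XORing known plain text with
# cipher text, but here they are generated by the same toy 16-bit register as above.
state, bits = 0xACE1, []
for _ in range(48):
    bits.append(state & 1)
    fb = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
    state = (state >> 1) | (fb << 15)

c = recover_recurrence(bits, 16)
for t in range(32, 48):                        # the recurrence predicts bits it never saw
    assert bits[t] == sum(c[i - 1] * bits[t - i] for i in range(1, 17)) % 2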
C. Exponential Encryption
The classic algorithm of this type is the RSA algorithm, which is
named after the initials of its creators (R. L. Rivest, A. Shamir,
and L. Adleman). The security of this algorithm rests on the
fact that even with supercomputers, it is very difficult to factor
the product of two very large prime numbers. The RSA algorithm has
a unique property in that the key has two parts, one of which may
be made public. This makes this algorithm very well suited to the
use of digital signatures.
The RSA algorithm defines the relationships between the plain text
message, the cipher text, and the elements of the key as follows [JAC]:
C = P^e mod m
P = C^d mod m
m = pq
where C = Cipher text, P = Plain text, e = encryption key, d = private
decryption key, m = public modulus, and p and q are randomly chosen
large (> 150 digits each) prime numbers. The receiver chooses a private
key d and calculates a public key e. d must be relatively prime to
(p - 1) and (q - 1). The algorithm works because ed = 1 mod
the least common multiple of (p - 1) and (q - 1). The algorithm
is only secure if very large prime numbers are used. A number of
100 digits is too small; a number of this size can be factored.
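A toy numeric sketch of these relationships; the primes are absurdly small
(chosen only so the arithmetic is visible) and the specific values are
illustrative, not recommendations:

from math import gcd

p, q = 61, 53                             # toy primes; real ones are hundreds of digits
m = p * q                                 # public modulus m = pq
lcm = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # least common multiple of (p - 1) and (q - 1)

d = 2753                                  # private key, relatively prime to (p - 1) and (q - 1)
e = pow(d, -1, lcm)                       # public key, so that e*d = 1 (mod lcm); Python 3.8+

P = 65                                    # plain text block, represented as a number < m
C = pow(P, e, m)                          # C = P^e mod m
assert pow(C, d, m) == P                  # P = C^d mod m

With primes this small the modulus can be factored by hand, which illustrates
why the security depends entirely on the size of p and q.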