while running in an interrupt handler. Thus, the programmer who is
forced to implement all or part of his protocol package as an interrupt
handler must be the best sort of expert in the operating system
involved, and must be prepared for development sessions filled with
obscure bugs which crash not just the protocol package but the entire
operating system.
A final problem with processing at interrupt time is that the
system scheduler has no control over the percentage of system time used
by the protocol handler. If a large number of packets arrive from a
foreign host that is either malfunctioning or fast, all of the time
may be spent in the interrupt handler, effectively killing the system.
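The usual remedy is to let the interrupt handler do nothing but queue
the arriving packet, and to perform all real protocol processing in an
ordinary scheduled process. The following sketch, in C, shows the
shape of this structure; the wakeup primitive and the input routine
are hypothetical stand-ins for whatever the host operating system
provides, and interrupt masking around the queue is omitted for
brevity.

    #include <stddef.h>

    struct packet {
        struct packet *next;
        int            len;
        unsigned char  data[1500];
    };

    static struct packet *rx_head, *rx_tail;     /* raw-packet queue */

    extern void wakeup_protocol_process(void);   /* hypothetical */
    extern void ip_input(struct packet *);       /* full protocol work */

    /* Interrupt level: enqueue the frame and return at once. */
    void net_interrupt_handler(struct packet *pkt)
    {
        pkt->next = NULL;
        if (rx_tail)
            rx_tail->next = pkt;
        else
            rx_head = pkt;
        rx_tail = pkt;
        wakeup_protocol_process();
    }

    /* Ordinary scheduled process: a flood of packets now competes
       for the processor like any other work, under the control of
       the system scheduler. */
    void protocol_process_main(void)
    {
        for (;;) {
            while (rx_head == NULL)
                ;                 /* placeholder for a proper sleep */
            struct packet *pkt = rx_head;
            rx_head = pkt->next;
            if (rx_head == NULL)
                rx_tail = NULL;
            ip_input(pkt);
        }
    }
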
There are other problems associated with putting protocols into an
operating system kernel. The simplest problem often encountered is that
the kernel address space is simply too small to hold the piece of code
in question. This is a rather artificial sort of problem, but it is a
severe problem none the less in many machines. It is an appallingly
unpleasant experience to do an implementation with the knowledge that
for every byte of new feature put in one must find some other byte of
old feature to throw out. It is hopeless to expect an effective and
general implementation under this kind of constraint. Another problem
is that the protocol package, once it is thoroughly entwined in the
operating system, may need to be redone every time the operating system
changes. If the protocol and the operating system are not maintained by
the same group, this makes maintenance of the protocol package a
perpetual headache.
The third option for protocol implementation is to take the
protocol package and move it outside the machine entirely, on to a
separate processor dedicated to this kind of task. Such a machine is
often described as a communications processor or a front-end processor.
There are several advantages to this approach. First, the operating
system on the communications processor can be tailored for precisely
this kind of task. This makes the job of implementation much easier.
Second, one does not need to redo the task for every machine to which
the protocol is to be added. It may be possible to reuse the same
front-end machine on different host computers. Since the task need not
be done as many times, one might hope that more attention could be paid
to doing it right. Given a careful implementation in an environment
which is optimized for this kind of task, the resulting package should
turn out to be very efficient. Unfortunately, there are also problems
with this approach. There is, of course, a financial problem associated
with buying an additional computer. In many cases, this is not a
problem at all since the cost is negligible compared to what the
programmer would cost to do the job in the mainframe itself. More
fundamentally, the communications processor approach does not completely
sidestep any of the problems raised above. The reason is that the
communications processor, since it is a separate machine, must be
attached to the mainframe by some mechanism. Whatever that mechanism,
code is required in the mainframe to deal with it. It can be argued
that the program to deal with the communications processor is simpler
than the program to implement the entire protocol package. Even if that
is so, the communications processor interface package is still a
protocol in nature, with all of the same structural problems. Thus, all
of the issues raised above must still be faced. In addition to those
problems, there are some other, more subtle problems associated with an
outboard implementation of a protocol. We will return to these problems
later.
There is one way of attaching a communications processor to a
mainframe host which sidesteps all of the mainframe implementation
problems: to use some preexisting interface on the host machine as the
port by which the communications processor is attached. This
strategy is often used as a last stage of desperation when the software
on the host computer is so intractable that it cannot be changed in any
way. Unfortunately, it is almost inevitably the case that all of the
available interfaces are totally unsuitable for this purpose, so the
result is unsatisfactory at best. The most common way in which this
form of attachment occurs is when a network connection is being used to
mimic local teletypes. In this case, the front-end processor can be
attached to the mainframe by simply providing a number of wires out of
the front-end processor, each corresponding to a connection, which are
plugged into teletype ports on the mainframe computer. (Because of the
appearance of the physical configuration which results from this
arrangement, Michael Padlipsky has described this as the "milking
machine" approach to computer networking.) This strategy solves the
immediate problem of providing remote access to a host, but it is
extremely inflexible. The channels being provided to the host are
restricted by the host software to one purpose only, remote login. It
is impossible to use them for any other purpose, such as file transfer
or sending mail, so the host is integrated into the network environment
in an extremely limited and inflexible manner. If this is the best that
can be done, then it should be tolerated. Otherwise, implementors
should be strongly encouraged to take a more flexible approach.
4. Protocol Layering
The previous discussion suggested that there was a decision to be
made as to where a protocol ought to be implemented. In fact, the
decision is much more complicated than that, for the goal is not to
implement a single protocol, but to implement a whole family of protocol
layers, starting with a device driver or local network driver at the
bottom, then IP and TCP, and eventually reaching the application
specific protocol, such as Telnet, FTP and SMTP on the top. Clearly,
the bottommost of these layers is somewhere within the kernel, since the
physical device driver for the net is almost inevitably located there.
Equally clearly, the top layers of this package, which provide the user
his ability to perform the remote login function or to send mail, are
not entirely contained within the kernel. Thus, the question is not
whether the protocol family shall be inside or outside the kernel, but
how it shall be sliced in two between that part inside and that part
outside.
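To picture the choice, the sketch below (in C, with purely
illustrative names) models the family as a chain of layers, each
handing incoming data to the layer above it; the slicing question is
simply where in this chain the kernel boundary falls.

    #include <stdio.h>

    struct layer {
        const char   *name;
        struct layer *above;                     /* next layer up */
        void        (*input)(struct layer *, const char *, int);
    };

    /* Each layer would do its own processing (strip a header, update
       state) before handing the remainder to the layer above. */
    static void pass_up(struct layer *l, const char *data, int len)
    {
        printf("%s: %d bytes\n", l->name, len);
        if (l->above)
            l->above->input(l->above, data, len);
    }

    int main(void)
    {
        struct layer telnet = { "telnet", NULL,    pass_up };
        struct layer tcp    = { "tcp",    &telnet, pass_up };
        struct layer ip     = { "ip",     &tcp,    pass_up };
        struct layer driver = { "driver", &ip,     pass_up };

        /* A slice at the upper interface of TCP would put driver,
           ip, and tcp inside the kernel, and telnet in the user
           process. */
        driver.input(&driver, "incoming datagram", 17);
        return 0;
    }
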
Since protocols come nicely layered, an obvious proposal is that
one of the layer interfaces should be the point at which the inside and
outside components are sliced apart. Most systems have been implemented
in this way, and many have been made to work quite effectively. One
obvious place to slice is at the upper interface of TCP. Since TCP
provides a bidirectional byte stream, which is somewhat similar to the
I/O facility provided by most operating systems, it is possible to make
the interface to TCP almost mimic the interface to other existing
devices. Except in the matter of opening a connection, and dealing with
peculiar failures, the software using TCP need not know that it is a
network connection, rather than a local I/O stream that is providing the
communications function. This approach does put TCP inside the kernel,
which raises all the problems addressed above. It also raises the
problem that the interface to the IP layer can, if the programmer is not
careful, become excessively buried inside the kernel. It must be
remembered that things other than TCP are expected to run on top of IP.
The IP interface must be made accessible, even if TCP sits on top of it
inside the kernel.
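The stream-like quality of this interface is easy to demonstrate with
the socket facility that later appeared in Berkeley UNIX, which
realizes exactly the idea described here: opening the connection is
the one network-specific step, and thereafter the descriptor can be
handed to the ordinary read and write calls. The sketch below is only
an illustration; the address 192.0.2.1 is a placeholder.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct sockaddr_in addr;
        char buf[512];
        ssize_t n;

        /* Opening the connection is the one network-specific step. */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(23);        /* e.g. the Telnet port */
        inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);
        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0)
            return 1;

        /* From here on the code is indistinguishable from local
           file I/O. */
        write(fd, "hello\r\n", 7);
        while ((n = read(fd, buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);
        close(fd);
        return 0;
    }
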
Another obvious place to slice is above Telnet. The advantage of
slicing above Telnet is that it solves the problem of having remote
login channels emulate local teletype channels. The disadvantage of
putting Telnet into the kernel is that the amount of code which has now
been included there is getting remarkably large. In some early
implementations, the size of the network package, when one includes
protocols at the level of Telnet, rivals the size of the rest of the
supervisor. This leads to vague feelings that all is not right.
Any attempt to slice through a lower layer boundary, for example
between internet and TCP, reveals one fundamental problem. The TCP
layer, as well as the IP layer, performs a demultiplexing function on
incoming datagrams. Until the TCP header has been examined, it is not
possible to know for which user the packet is ultimately destined.
Therefore, if TCP, as a whole, is moved outside the kernel, it is
necessary to create one separate process called the TCP process, which
performs the TCP demultiplexing function, and probably all of the rest of
TCP processing as well. This means that incoming data destined for a
user process involves not just a scheduling of the user process, but
scheduling the TCP process first.
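A minimal sketch of that demultiplexing step, in C with hypothetical
structure and table names, makes the difficulty concrete: nothing in
the IP header names the destination process, so the TCP ports must be
examined before the datagram can be routed anywhere.

    #include <stdint.h>

    struct process;              /* opaque: stands for a user process */

    struct ip_hdr {
        uint8_t  ver_ihl;        /* version (4 bits) + header length */
        uint8_t  tos;
        uint16_t total_len;
        uint16_t id;
        uint16_t frag_off;
        uint8_t  ttl;
        uint8_t  protocol;       /* 6 = TCP */
        uint16_t checksum;
        uint32_t src, dst;
    };

    struct tcp_hdr {
        uint16_t src_port, dst_port;  /* only these name the user */
        uint32_t seq, ack;
        /* ... remainder of the header ... */
    };

    /* Hypothetical table keyed on the (addresses, ports) four-tuple. */
    extern struct process *lookup_connection(uint32_t, uint16_t,
                                             uint32_t, uint16_t);

    struct process *demux(const uint8_t *datagram)
    {
        const struct ip_hdr  *ip  = (const struct ip_hdr *)datagram;
        int ihl = (ip->ver_ihl & 0x0f) * 4;    /* IP header length */
        const struct tcp_hdr *tcp =
            (const struct tcp_hdr *)(datagram + ihl);

        return lookup_connection(ip->src, tcp->src_port,
                                 ip->dst, tcp->dst_port);
    }
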
This suggests an alternative structuring strategy which slices
through the protocols, not along an established layer boundary, but
along a functional boundary having to do with demultiplexing. In this
approach, certain parts of IP and certain parts of TCP are placed in the
kernel. The amount of code placed there is sufficient so that when an
incoming datagram arrives, it is possible to know for which process that
datagram is ultimately destined. The datagram is then routed directly
to the final process, where additional IP and TCP processing is
performed on it. This removes from the kernel any requirement for
timer-based actions, since they can be done by the process provided by the
user. This structure has the additional advantage of reducing the
amount of code required in the kernel, so that it is suitable for
systems where kernel space is at a premium. RFC 814, "Names,
Addresses, Ports, and Routes," discusses this rather orthogonal slicing
strategy in more detail.
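As a sketch of what the kernel-resident part might look like under
this strategy (again with hypothetical names), the kernel need hold
only a table mapping the connection four-tuple to an owning process,
filled in when a user process opens a connection and consulted once
per incoming datagram.

    #include <stdint.h>

    struct demux_entry {
        uint32_t local_addr, remote_addr;
        uint16_t local_port, remote_port;
        int      owner_pid;      /* process that gets the datagram */
    };

    #define MAX_CONN 64
    static struct demux_entry table[MAX_CONN];
    static int n_entries;

    /* Called (say, through a system call) when a user process opens
       a connection; thereafter matching datagrams go straight to it,
       with no intermediate TCP process to schedule. */
    int demux_register(uint32_t la, uint16_t lp,
                       uint32_t ra, uint16_t rp, int pid)
    {
        if (n_entries == MAX_CONN)
            return -1;
        table[n_entries].local_addr  = la;
        table[n_entries].local_port  = lp;
        table[n_entries].remote_addr = ra;
        table[n_entries].remote_port = rp;
        table[n_entries].owner_pid   = pid;
        n_entries++;
        return 0;
    }

    /* On arrival the kernel does only this lookup, then hands the
       raw datagram to the owning process, which performs the rest
       of the IP and TCP processing itself. */
    int demux_lookup(uint32_t la, uint16_t lp, uint32_t ra, uint16_t rp)
    {
        for (int i = 0; i < n_entries; i++)
            if (table[i].local_addr  == la &&
                table[i].local_port  == lp &&
                table[i].remote_addr == ra &&
                table[i].remote_port == rp)
                return table[i].owner_pid;
        return -1;               /* no connection: discard it */
    }
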
A related discussion of protocol layering and multiplexing can be
found in Cohen and Postel [1].
5. Breaking Down the Barriers
In fact, the implementor should be sensitive to the possibility of
even more peculiar slicing strategies in dividing up the various
protocol layers between the kernel and the one or more user processes.
The result of the strategy proposed above was that part of TCP should
execute in the process of the user. In other words, instead of having
one TCP process for the system, there is one TCP process per connection.