will be found if you've offloaded TCP. That is, it's reasonable
to let the user "tell" the outboard PI at Begin time if big or
small buffers are expected to be in play "net-ward" as part of the
protocol, but the outboard PI is expected to deliver bits to the
Host as they come unless throttled by the Channel Layer, or by
some to-be-invented other discipline to force the OPE to buffer.
(For present purposes, we envision letting the Channel Layer
handle it, but nifty mechanizations of encouraging the OPE to
"make like a buffer" would be at least looked at.) As a
Fabrication issue, "equity" does have to be addressed with regard to
the use of the OPE's resources (especially buffers) across H-FP
connections/channels, but that's a different
issue anyway, touched upon in the final fine point.
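The buffering discipline above can be sketched in miniature. To be clear, this is an illustrative model, not anything H-FP specifies: the Begin-time buffer hint, the credit window, and the class name are all assumptions, standing in for whatever Channel Layer flow-control discipline is actually adopted.

```python
# Illustrative sketch (not from the H-FP spec): the Host hints at
# expected buffer sizes at Begin time, but the OPE still delivers
# bits host-ward as they come unless the Channel Layer throttles it.

class Channel:
    def __init__(self, expected_buffer_hint, window):
        self.hint = expected_buffer_hint  # advisory only
        self.window = window              # Channel Layer flow-control credit
        self.queue = []                   # data held only when throttled

    def deliver(self, data):
        """OPE side: pass data host-ward as it arrives, unless throttled."""
        if self.window > 0:
            self.window -= 1
            return data                   # delivered immediately
        self.queue.append(data)           # OPE forced to "make like a buffer"
        return None

    def grant_credit(self, n):
        """Host side: the Channel Layer opens the window; drain the queue."""
        self.window += n
        drained, self.queue = self.queue[:n], self.queue[n:]
        self.window -= len(drained)
        return drained
```

The point of the sketch is that the hint is advisory only: nothing but the Channel Layer's credit window (or some to-be-invented discipline) makes the OPE buffer.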
Padlipsky [Page 13]
RFC 928 December 1984
Introduction to H-FP
Precedence
Clearly, the existence of a notion of Precedence in DOD protocols
has to get reflected in the outboard PI's implementations. Just
what, if any, role it has in the H-FP, per se, is, however, by no
means clear. That is, if the Host doesn't take Begins from the
OPE and is "full up" on the number of Server Telnet connections
it's willing to handle, what should happen if a high precedence
SYN comes in on the Telnet Well-Known Socket (in present day
terms)? Probably the OPE should arbitrarily close a low
precedence connection to make room for the new one, and signal the
Host, but even that assumes the Host will always hurry to be
prepared to do a new passive Begin. Perhaps we've stumbled across
still another argument in favor of "Symmetric Begins".... At any
rate, Precedence does need further study--although it shouldn't
deter us from making "the rest" of the protocol work while we're
waiting for inspiration on how to handle Precedence too.
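The preemption policy mused about above can be sketched as follows. This is purely illustrative: H-FP defines no such policy, and the connection records, precedence ordering, and function name are all assumptions made for the example.

```python
# Hedged sketch of one possible answer to the Precedence question: when
# a high precedence connection request arrives and the Host is "full
# up", the OPE closes the lowest precedence existing connection to make
# room, and tells the caller (so it can signal the Host) which one.

def admit(connections, new_precedence, limit):
    """Return (admitted, preempted_connection_or_None)."""
    if len(connections) < limit:
        return True, None                 # room available; no preemption
    lowest = min(connections, key=lambda c: c["precedence"])
    if new_precedence > lowest["precedence"]:
        connections.remove(lowest)        # arbitrarily close the low one...
        return True, lowest               # ...and report it, to signal the Host
    return False, None                    # no room; refuse the newcomer
```

Note that even this simple sketch leaves open the problem the text raises: it assumes the Host will hurry to be prepared for a new passive Begin once the old connection is closed out from under it.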
A Note on Host Integration
The most important thing about Hosts in any intercomputer network is
that they furnish the resources to be shared. The most significant
obstacle to sharing those resources, however, is the fact that almost
invariably they were designed under the assumption that the Host was
a fully autonomous entity. That is, few operating systems currently
deployed "expect" to be members of a heterogeneous community of
operating systems. In many cases, this built-in insularity goes so
far as to have applications programs cognizant of the particular type
of terminal from which they will be invoked.
Intercomputer networking protocols attempt to resolve the problems of
heterogeneity by virtue of presenting appropriate common intermediate
representations (or "virtualizations") of the constructs and concepts
necessary to do resource sharing. A Host-Host protocol such as TCP
"is" a virtual interprocess communication mechanism; a virtual
terminal protocol such as Telnet obviously is a mechanism for
defining and dealing with virtual terminals; FTP offers common
representations of files; and so on. It cannot be stressed strongly
enough, though, that this entire approach to intercomputer networking
is predicated on the assumption that the modules which interpret the
protocols (PIs, as we'll refer to them often) will be PROPERLY
integrated into the various participating operating systems. Even in
the presence of powerful OPEs, wherein the bulk of the work of the
various PIs is performed outboard of the Host, the inboard "hooks"
which serve to interface the outboard PIs to the native system must
not only be present, they must be "right". The argument parallels
the analysis of the flexible vs. rigid front-ending attachment
strategy issue of [1]; to borrow an example, if you attempt to
integrate FTP by "looking like" a native terminal user and the
operator forces a message to all terminals, you've got an undetected
pollution of your data stream. So the key issue in attaching Hosts to
networks is not what sort of hardware is required or what sort of
protocol is interpreted by the Host and the OPE (or comm subnet
processor, for that matter), but how the PIs (full or partial) are
made to interrelate with the pre-existing environment.
It would be well beyond the scope of this document to attempt even to
sketch (much less specify) how to integrate H-FP PIs into each type
of operating system which will be found in the DoD. An example,
though, should be of use and interest. Therefore, because it is the
implementation with which we are most intimately familiar, even
though it's been several years, we propose to sketch the Multics
operating system integration of the original ARPANET Network Control
Program (NCP)--which is functionally equivalent to an H-FP PI for
offloading ARM L II and L I--and Telnet. (A few comments will also
be made about FTP.) Note, by the way, that the sketch is for a
"full-blown" H-FP; that is, shortcuts along the lines of the
scenario-driven approach mentioned above are not dealt with here.
One of the particularly interesting features of Multics is the fact
that each process possesses an extremely large "segmented virtual
memory". That is, memory references other than to the segment at
hand (which can itself be up to 256K 36-bit words long) indirect
through a descriptor segment, which is in principle "just another
segment", by segment number and offset within the segment, so that a
single process--or "scheduling and access control entity"--can
contain rather impressive amounts of code and data. Given that the
code is "pure procedure" (or "re-entrant"), a "distributed
supervisor" approach is natural; each process, then, appears to have
in its address space a copy of each procedure segment (with
system-wide and process-specific data segments handled
appropriately). Without going too far afield, the distributed
supervisor approach allows interrupts to be processed by whichever
process happens to be running at a given time, although, of course,
interprocess communication may well be a consequence of processing a
particular interrupt.
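The addressing scheme just described can be modeled in a few lines. This is a toy, not Multics code: the descriptor-segment representation and the class are inventions for the example, with only the (segment number, offset) indirection and the 256K-word segment bound taken from the text.

```python
# Toy model of a segmented virtual memory: every reference names a
# (segment number, offset) pair and indirects through a per-process
# descriptor segment, which is in principle "just another segment".

SEGMENT_WORDS = 256 * 1024   # each segment up to 256K 36-bit words

class Process:
    def __init__(self):
        self.descriptor_segment = {}      # segment number -> segment storage

    def map_segment(self, segno, storage):
        self.descriptor_segment[segno] = storage

    def load(self, segno, offset):
        """Resolve a (segment, offset) reference through the descriptor."""
        if offset >= SEGMENT_WORDS:
            raise ValueError("offset exceeds segment bound")
        return self.descriptor_segment[segno][offset]
```

Pure-procedure code segments can then appear in every process's descriptor segment at once, which is what makes the "distributed supervisor" approach natural.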
A few other necessary background points: A distinguished process,
called the Answering Service, exists, originally to field interrupts
from terminals and in general to create processes after
authenticating them. Other shared resources such as line printers
are also managed by distinguished processes, generically known as
"Daemons". Device driver code, as is customary on many operating
systems, resides at least in part in the supervisor (or hard core
operating system). Finally (for our purposes, at least), within a
process all interfaces are by closed subroutine calls and all I/O is
done by generic function calls on symbolically named streams; also,
all system commands (and, of course, user written programs which need
to) use the streams "user_input" and "user_output" for the obvious
purposes. (At normal process creation time, both user I/O streams
are "attached" to the user's terminal, but either or both can be
attached to any other I/O system interface module instead--including
to one which reads and writes files, which is handy for consoleless
processes.)
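The stream-attachment notion is worth a small sketch, since it bears directly on how PIs can be slipped underneath existing programs. The table and module interface below are illustrations, not Multics interfaces; only the symbolic names "user_input"/"user_output" and the attach/reattach idea come from the text.

```python
# Sketch of symbolically named streams: programs do generic I/O calls
# on names like "user_output", and the name can be attached to a
# terminal module, a file module, or (say) a network module instead.

class StreamTable:
    def __init__(self):
        self.attachments = {}             # stream name -> I/O module

    def attach(self, name, module):
        """(Re)attach a symbolic stream to any module with write()."""
        self.attachments[name] = module

    def write(self, name, data):
        return self.attachments[name].write(data)
```

A program writing to "user_output" neither knows nor cares whether a terminal, a file, or a network PI is on the other end, which is precisely the property proper integration exploits.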
All that almost assuredly doesn't do justice to Multics, but equally
likely is more than most readers of this document want to know, so
let's hope it's enough to make the following integration sketch
comprehensible. (There will be some conscious omissions in the
sketch, and doubtless some unconscious ones, but if memory serves, no
known lies have been included.)
Recalling that NCP is functionally equivalent to H-FP, let's start
with it. In the first place, the device driver for the 1822 spec
hardware interface resides in the supervisor. (For most systems, the
PI for H-FP's link protocol probably would too.) In Multics,
interrupt time processing can only be performed by supervisor
segments, so in the interests of efficiency, both the IMP-Host (1822
software) Protocol PI and the multiplexing/demultiplexing aspects of
the Host-Host Protocol PI also reside in the supervisor. (An H-FP PI
would probably also have its multiplexing/demultiplexing there; that
is, that portion of the Channel Layer code which mediates access to
the OPE and/or decides what process a given message is to be sent to
might well be in the supervisor for efficiency reasons. It is not,
however, a hard and fast rule that it would be so. The system's
native interprocess communications mechanism's characteristics might
allow all the Channel Layer to reside outside of the supervisor.)
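The demultiplexing decision being discussed can be sketched as a dispatch table. The message format, the table, and the class are assumptions for illustration; H-FP's Channel Layer does not prescribe them.

```python
# Sketch of the multiplexing/demultiplexing the text places in the
# supervisor: the Channel Layer looks at the channel identifier on each
# inbound message and decides which process it is to be sent to.

class ChannelLayer:
    def __init__(self):
        self.owners = {}                  # channel id -> process id

    def bind(self, channel, pid):
        self.owners[channel] = pid

    def demux(self, message):
        """Route an inbound (channel, payload) message to its process."""
        channel, payload = message
        pid = self.owners.get(channel)
        if pid is None:
            raise KeyError("no process bound to channel %r" % channel)
        return pid, payload
```

Whether this lookup lives in the supervisor or outside it is, as the text says, an efficiency question turning on the native interprocess communication mechanism, not a hard and fast rule.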
Even with a very large virtual memory, though, there are
administrative biases against putting too much in the supervisor, so
"everything else" lives outside the supervisor. In fact, there are
two places where the rest of the Host-Host Protocol is interpreted on
Multics, although it is not necessarily the case that an H-FP PI
would follow the same partitioning even on Multics, much less on some
other operating system. However, with NCP, because there is a
distinguished "control link" over which Host-Host commands are sent
in the NCP's Host-Host protocol, the Multics IMP-Host Protocol PI
relegates such traffic to a Network Daemon process, which naturally
is a key element in the architecture. (Things would be more
efficient, though, if there weren't a separate Daemon, because other
processes then have to get involved with interprocess communication
to it; H-FP PI designers take note.) To avoid traversing the Daemon
for all traffic, though, normal reads and writes (i.e., noncontrol
link traffic) are done by the appropriate user process. By virtue of
the distributed supervisor approach, then, there is a supervisor call
interface to "the NCP" available to procedures (programs) within user
processes. (The Daemon process uses the same interface, but by virtue
of its ID has the ability to exercise certain privileged primitives
as well.)
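The routing rule in the Multics partitioning reduces to a few lines. The link numbering and process names below are illustrative stand-ins (the control link is link 0 in the old Host-Host protocol, but nothing else here is specified anywhere).

```python
# Illustrative routing rule from the sketch: traffic on the Host-Host
# "control link" is relegated to the Network Daemon process, while
# normal reads and writes go straight to the owning user process.

CONTROL_LINK = 0
NETWORK_DAEMON = "network_daemon"

def route(link, link_owners):
    """Pick the process that should handle traffic on this link."""
    if link == CONTROL_LINK:
        return NETWORK_DAEMON             # Host-Host commands go to the Daemon
    return link_owners[link]              # data traffic avoids the Daemon
```

The efficiency complaint in the text is visible here: every command exchange drags in interprocess communication with the Daemon, which is why H-FP PI designers are told to take note.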
If a native process (perhaps one meaning to do "User Telnet", but not
limited to that) wanted to use the network, it would call the open
primitive of "the NCP", do reads and writes, and so on. An
interesting point has to do with just how this interface works: The
reads are inherently asynchronous; that is, you don't know just when
the data from the net are going to be available. In Multics, there's
an "event" mechanism that's used in the NCP interface that allows the
calling process to decide whether or not it will go blocked waiting
for input when it reads the net (it might want to stay active in
order to keep outputting, but need to be prepared for input as well),
so asynchrony can be dealt with. In the version of Unix (tm) on
which an early NFE was based, however, native I/O was always
synchronous; so in order to deal with both input from the terminal
and input from the net, that system's User Telnet had to consist of
two processes (which is not a very efficient use of system resources).
Similar considerations might apply to other operating systems
integrating H-FP; native I/O and interprocess communication
disciplines have to be taken into account in designing. (Nor can one
simply posit a brand new approach for "the network", because Telnet
will prove to rely even more heavily on native mode assumptions.)
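The asynchrony problem just described has a familiar single-process shape on systems with readiness multiplexing, sketched below. This is a modern analogue, not the Multics "event" mechanism or anything from 1984 Unix; the function and its two-socket framing are inventions for the example.

```python
# Sketch of servicing both terminal input and net input from one
# process without committing to block on either alone, using readiness
# multiplexing (a stand-in for the Multics event mechanism).

import selectors
import socket

def pump_once(terminal_sock, net_sock, timeout=0.1):
    """Read from whichever of the two inputs is ready right now."""
    sel = selectors.DefaultSelector()
    sel.register(terminal_sock, selectors.EVENT_READ, "terminal")
    sel.register(net_sock, selectors.EVENT_READ, "net")
    handled = []
    for key, _ in sel.select(timeout):
        handled.append((key.data, key.fileobj.recv(4096)))
    sel.close()
    return handled
```

On the synchronous-only system described in the text, no such call existed, hence the two-process User Telnet; the moral for H-FP integrators stands either way.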
The other aspect of NCP integration which we should at least touch
on--especially because process-level protocols make no sense without
it--is how "Well-Known Sockets" (WKSs) work. In broad terms, on
Multics the Network Daemon initially "owns" all sockets. For
Well-Known Sockets, where a particular process-level protocol will be
in effect after a successful connection to a given WKS, code is added
to the Answering Service to call upon the NCP at system
initialization time to be the process "listening" on the WKSs. (This
is a consequence of the fact that the Answering Service is/was the
only Multics process which can create processes; strategies on other
systems would differ according to their native process creation
disciplines.) How to get the "right kind of process" will be