concept contributed by the NWG, for it was observed that if n
different types of Host (i.e., different operating systems) had
to be made aware of the physical characteristics of m different
types of terminal in order to exercise physical control over
them--or even if n different kinds of Host had to become aware of
the native terminals supported by m other kinds of Hosts if
physical control were to remain local--there would be an
administratively intractable "n x m problem." So the notion of
creating a "virtual terminal" arose, probably by analogy to
"virtual memory" in the sense of something that "wasn't really
there" but could be used as if it
were; that is, a common intermediate representation (CIR) of
terminal characteristics was defined in order to allow the Host
to which a terminal was physically attached to map the particular
characteristics of the terminal into a CIR, so that the Host
being logged into, knowing the CIR as part of the relevant
protocol, could map out of it into a form already acceptable to
the native operating system. And when it came time to develop a
File Transfer Protocol, the same virtualizing or CIR trick was
clearly just as useful as for a terminal oriented protocol, so
virtualizing became part of the axiom set too.
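(To anticipate in latter-day terms: the CIR trick is easy to sketch.
The Python fragment below assumes a Telnet-NVT-like convention that
CR LF is the network end-of-line; the function names and the sample
conventions are invented for illustration, not taken from any actual
protocol.  The point is that each Host maps only between its own
convention and the one common form, so n Hosts need n mappings
rather than n x m.)

    def to_cir(text: bytes, local_eol: bytes) -> bytes:
        # Map this Host's native end-of-line onto the CIR's CR LF.
        return text.replace(local_eol, b"\r\n")

    def from_cir(text: bytes, local_eol: bytes) -> bytes:
        # Map the CIR's CR LF back into the native convention.
        return text.replace(b"\r\n", local_eol)

    # An LF-convention Host talks to a CR-convention Host; neither
    # needs to know the other's convention, only the CIR.
    wire = to_cir(b"hello\nworld\n", local_eol=b"\n")
    assert from_cir(wire, local_eol=b"\r") == b"hello\rworld\r"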
The NWG, then, at least pioneered and probably invented the
notion of doing intercomputer networking/resource sharing via
hierarchical, layered protocols for interprocess communication
over logical connections of common intermediate representations/
virtualizations. Meanwhile, outside of the ARPA research
community, "the ARPANET" was perceived to be a major
technological advance. "Networking" became the "in" thing. And
along with popular success came the call for standards; in
particular, standards based on a widely-publicized "Reference
Model for Open System Interconnection" promulgated by the
International Standards Organization. Not too surprisingly, Open
System Interconnection looks a lot like resource sharing, the
ISORM posits a layered protocol hierarchy, "connections" occur
frequently, and emerging higher level protocols tend to
virtualize; after all, one expects standards to reflect the state
of the art in question. But even if the ISORM, suitably refined,
does prove to be the wave of the future, this author feels that
the ARM is by no means a whitecap, and deserves explication--both
in its role as the ISORM's "roots" and as the basis of a
still-viable alternative protocol suite.
Axiomatization
Let's begin with the axioms of the ARPANET Reference Model.
Indeed, let's begin by recalling what an axiom is, in common
usage: a principle the truth of which is deemed self-evident.
Given that definition, it's not too surprising that axioms rarely
get stated or examined in non-mathematical discourse. It turns
out, however, that the axiomatization of the ARM--as best we can
recall and reconstruct it--is not only germane to the enunciation
of the ARM, but is also a source of instructive contrasts with
our view of the axiomatization of the ISORM. (See [1] again.)
Resource Sharing
The fundamental axiom of the ARM is that intercomputer
networking protocols (as distinct from communications network
protocols) are to enable heterogeneous computer operating systems
("Hosts") to achieve resource sharing. Indeed, the session at
the 1970 SJCC in which the ARPANET entered the open literature
was entitled "Resource Sharing Computer Networks".
Of course, as self-evident truths, axioms rarely receive
much scrutiny. Just what resource sharing is isn't easy to pin
down--nor, for that matter, is just what Open System
Interconnection is. But it must have something to do with the
ability of the programs and data of the several Hosts to be used
by and with programs and data on others of the Hosts in some sort
of cooperative fashion. It must, that is, confer more
functionality upon the human user than merely the ability to log
in/on to a Host miles away ("remote access").
A striking property of this axiom is that it renders
protocol suites such as "X.25"/"X.28"/"X.29" rather
uninteresting for our purposes, for they appear to have as their
fundamental axiom the ability to achieve remote access only. (It
might even be a valid rule of thumb that any "network" which
physically interfaces to Hosts via devices that resemble milking
machines--that is, which attach as if they were just a group of
locally-known types of terminals--isn't a resource sharing
network.)
Reference [3] addresses the resource sharing vs. remote
access topic in more detail.
Interprocess Communication
The second axiom of the ARM is that resource sharing will be
achieved via an interprocess communication mechanism of some
sort. Again, the concept isn't particularly well-defined in the
"networking" literature. Here, however, there's some
justification, for the concept is fairly well known in the
Operating Systems branch of the Computer Science literature,
which was the field most of the NWG members came from.
Unfortunately, because intercomputer networking involves
communications devices of several sorts, many whose primary field
is Communications became involved with "networking" but were not
in a position to appreciate the implications of the axiom.
A process may be viewed as the active element of a Host, or
as an address space in execution, or as a "job", or as a "task",
or as a "control point"--or, actually, as any one (or more) of at
least 29 definitions from at least 28 reputable computer
scientists. What's important for present purposes isn't the
precise definition (even if there were one), but the fact that
the axiom's presence dictates the absence of at least one other
axiom at the same level of
abstraction. That is, we might have chosen to attempt to achieve
resource sharing through an explicitly interprocedure
communication oriented mechanism of some sort--wherein the
entities being enabled to communicate were subroutines, or pieces
of address spaces--but we didn't. Whether this was because
somebody realized that you could do interprocedure communication
(or achieve a "virtual address space" or "distributed operating
system" or some such formulation) on top of an interprocess
communication mechanism (IPC), or whether "it just seemed
obvious" to do IPC doesn't matter very much. What matters is
that the axiom was chosen, assumes a fair degree of familiarity
with Operating Systems, doesn't assume extremely close coupling
of Hosts, and has led to a working protocol suite which does
achieve resource sharing--and certainly does appear to be an
axiom the ISORM tacitly accepted, along with resource sharing.
Logical Connections
The next axiom has to do with whether and how to demultiplex
IPC "channels", "routes", "paths", "ports", or "sockets". That
is, if you're doing interprocess communication (IPC), you still
have to decide whether a process can communicate with more than
one other process, and, if so, how to distinguish between the bit
streams. (Indeed, even choosing streams rather than blocks is a
decision.) Although it isn't treated particularly explicitly in
the literature, it seems clear that the ARM axiom is to do IPC
over logical connections, in the following sense: Just as batch
oriented operating systems found it useful to allow processes
(usually thought of as jobs--or even "programs") to be insulated
from the details of which particular physical tape drives were
working well enough at a particular moment to spin the System
Input and Output reels, and created the view that a reference to
a "logical tape number" would always get to the right physical
drive for the defined purpose, so too the ARM's IPC mechanism
creates logical connections between processes. That is, the IPC
addressing mechanism has semantics as well as syntax.
"Socket" n on any participating Host will be defined as the
"Well-Known Socket" (W-KS) where a particular service (as
mechanized by a program which follows, or "interprets", a
particular protocol [4]) is found. (Note that the W-KS is
defined for the "side" of a connection where a given service
resides; the user side will, in order to be able to demultiplex
its network-using processes, of course assign different numbers
to its "sides" of connections to a given W-KS. Also, the serving
side takes cognizance of the using side's Host designation as
well as the proffered socket, so it too can demultiplex.)
Clearly, you want free sockets as well as Well-Known ones, and we
have them. Indeed, at each level of the ARM
hierarchy the addressing entities are divided into assigned and
unassigned sets, and the distinction has proven to be quite
useful to networking researchers in that it confers upon them the
ability to experiment with new functions without interfering with
running mechanisms.
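(A rough latter-day analogue, for readers who know Berkeley-style
sockets: TCP's well-known ports play the W-KS role, and the using
side's freely assigned local port does the demultiplexing.  The
Python sketch below is illustrative only; the service number 9999
is invented, and ARM sockets were not, of course, TCP ports.)

    import socket, threading

    WELL_KNOWN_SOCKET = 9999   # invented number, standing in for a W-KS

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", WELL_KNOWN_SOCKET))
    srv.listen()

    def serve():
        conn, peer = srv.accept()
        # The serving side demultiplexes on the using side's Host
        # designation plus its proffered socket, as described above.
        print("serving host %s, socket %d" % peer)
        conn.sendall(b"hello\r\n")
        conn.close()

    t = threading.Thread(target=serve)
    t.start()

    # The using side's system assigns a free socket for its "side"
    # of the connection, so several local processes can each talk
    # to the same Well-Known Socket without collision.
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", WELL_KNOWN_SOCKET))
    print(cli.recv(64))
    cli.close()
    t.join()
    srv.close()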
On this axiom, the ISORM differs from the ARM. ISORM
"peer-peer" connections (or "associations") appear to be used
only for demultiplexing, with the number assigned by the receive
side rather than the send side. That is, a separate protocol is
introduced to establish that a particular "transport"
connection will be used in the present "session" for some
particular service. At the risk of editorializing, logical
connections seem much cleaner than "virtual" connections (using
virtual in the sense of something that "isn't really there" but
can be used as if it were, by analogy to virtual memory, as noted
above, and in deference to the X.25 term "virtual circuit", which
appears to have dictated the receiver-assigned posture the ISORM
takes at its higher levels). Although the ISORM view "works", the
W-KS approach avoids the introduction of an extra protocol.
Layering
The next axiom is perhaps the best-known, and almost
certainly the worst-understood. As best we can reconstruct
things, the NWG was much taken with the Computer Science buzzword
of the times, "modularity". "Everybody knew" modularity was a
Good Thing. In addition, we were given a head start because the
IMPs weren't under our direct control anyway, but could possibly
change at some future date, and we didn't want to be "locked in"
to the then-current IMP-Host protocol. So it was enunciated that
protocols which were to be members of the ARM suite (ARMS, for
future reference, although at the time nobody used "ARM", much
less "ARMS") were to be layered. It was widely agreed that this
meant a given protocol's control information (i.e., the control
information exchanged by counterpart protocol interpreters, or
"peer entities" in ISORM terms) should be treated strictly as
data by a protocol "below" it, so that you could invoke a
protocol interpreter (PI) through a known interface, but if
either protocol changed there would not be any dependencies in
the other on the former details of the one, and as long as the
interface didn't change you wouldn't have to change the PI of the
protocol which hadn't changed.
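(The operational content of the axiom fits in a few lines.  The
Python fragment below is a hypothetical illustration, with invented
header formats, not a rendering of any actual ARMS protocol: each
PI prepends its own control information and treats everything
handed down from "above" strictly as data.)

    def wrap(header: bytes, payload: bytes) -> bytes:
        # Length-prefix the header so the counterpart PI can strip
        # it off without knowing anything about the payload.
        return bytes([len(header)]) + header + payload

    def unwrap(packet: bytes):
        hlen = packet[0]
        return packet[1:1 + hlen], packet[1 + hlen:]

    # Sending side: application data descends through two layers.
    app_data      = b"GET file"
    transport_pdu = wrap(b"T:socket=42", app_data)
    network_pdu   = wrap(b"N:host=7", transport_pdu)

    # Receiving side: each PI strips only its own header; changing
    # one protocol's header format cannot break the other's PI.
    _net_header, rest = unwrap(network_pdu)
    _tpt_header, data = unwrap(rest)
    assert data == app_data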
All well and good, if somewhat cryptic. The important point
for present purposes, however, isn't a seemingly-rigorous
definition of Layering, but an appreciation of what the axiom
meant in the evolution of the ARM. What it meant was that we
tried to come up
with protocols that represented reasonable "packagings" of
functionality. For reasons that are probably unknowable, but
about which some conjectures will be offered subsequently, the
ARM and the ISORM agree strongly on the presence of Layering in
their respective axiomatizations but differ strikingly as to what
packagings of functionality are considered appropriate. To
anticipate a bit, the ARM concerns itself with three layers and
only one of them is mandatorily traversed; whereas the ISORM,
again as everybody knows, has, because of emerging "sub-layers",
what must be viewed as at least seven layers, and many who have