studied it believe that all of the layers must be traversed on
each transmission/reception of data.
Perhaps the most significant point of all about Layering is
that the most frequently-voiced charge at NWG protocol committee
design meetings was, "That violates Layering!" even though nobody
had an appreciably-clearer view of what Layering meant than has
been presented here, yet the ARMS exists. We can only guess what
goes on in the design meetings for protocols to become members of
the ISORM suite (ISORMS), but it doesn't seem likely that having
more layers could possibly decrease the number of arguments....
Indeed, it's probably fair to say that the ARM view of
Layering is to treat layers as quite broad functional groupings
(Network Interface, Host-Host, and Process-Level, or
Applications), the constituents of which are to be modular.
E.g., in the Host-Host layer of the current ARMS, the Internet
Protocol, IP, packages internet addressing--among other
things--for both the Transmission Control Protocol, TCP, which
packages reliable interprocess communication, and UDP--the less
well-known User Datagram Protocol--which packages only
demultiplexable interprocess communication ... and for any other
IPC packaging which should prove desirable. The ISORM view, on
the other hand, fundamentally treats layers as rather narrow
functional groupings, attempting to force modularity by requiring
additional layers for additional functions (although the
"classes" view of the proposed ECMA-sponsored ISORM Transport
protocol tends to mimic the relations between TCP, UDP, and IP).
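The ARM relationship just described, in which IP's addressing service is shared by TCP, UDP, and any future IPC packaging within the same broad layer, can be sketched as follows. This is a hypothetical illustration, not an implementation: the protocol numbers (6 for TCP, 17 for UDP) follow the real IP assignments, but the handler functions are illustrative stand-ins.

```python
# Hypothetical sketch: IP as a shared packaging of internet
# addressing, demultiplexing by protocol number to whichever IPC
# packaging rides on it.  Protocol numbers 6 (TCP) and 17 (UDP)
# are the real assignments; the handlers are stand-ins.

def handle_tcp(payload: bytes) -> str:
    # TCP packages reliable interprocess communication.
    return "TCP: reliable interprocess communication"

def handle_udp(payload: bytes) -> str:
    # UDP packages only demultiplexable interprocess communication.
    return "UDP: demultiplexable interprocess communication"

# IP itself doesn't care what rides on it; new IPC packagings can
# register here without adding a new layer.
PROTOCOL_HANDLERS = {6: handle_tcp, 17: handle_udp}

def ip_deliver(protocol_number: int, payload: bytes) -> str:
    handler = PROTOCOL_HANDLERS.get(protocol_number)
    if handler is None:
        raise ValueError(f"no handler for protocol {protocol_number}")
    return handler(payload)
```

The design point is that TCP and UDP are siblings within one broad layer, not stacked on each other; adding a third packaging means adding a table entry, not a layer.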
It is, by the way, forcing this view of modularity by
multiplying layers rather than by trusting the designers of a
given protocol to make it usable by other protocols within its
own layer that we suspect to be a major cause of the divergence
between the ISORM and the ARM, but, as indicated, the issue
almost certainly is not susceptible of proof. (The less
structured view of modularity will be returned to in the next
major section.) At any rate, the notion that "N-entities" must
communicate with one another by means of "N-1 entities" does seem
to us to take the ISORM out of its
RFC 871 September 1982
intended sphere of description into the realm of prescription,
where we believe it should not be, if for no other reason than
that for a reference model to serve a prescriptive role levies
unrealizable requirements of precision, and of familiarity with
all styles of operating systems, on its expositors. In other
words, as it is currently presented, the ISORM hierarchy of
protocols turns out to be a rather strict hierarchy, with
required, "chain of command" implications akin to the Elizabethan
World Picture's Great Chain of Being, which some readers might recall if
they've studied Shakespeare, whereas in the ARM a cat can even
invoke a king, much less look at one.
Common Intermediate Representations
The next axiom to be considered might well not be an axiom
in a strict sense of the term, for it is susceptible of "proof"
in some sense. That is, when it came time to design the first
Process-Level (roughly equivalent to ISORM Levels 5 [5] through
7) ARMS protocol, it did seem self-evident that a "virtual
terminal" was a sound conceptual model--but it can also be
demonstrated that it is. The argument, customarily shorthanded
as "the N X M Problem", was sketched above; it goes as follows:
If you want to let users at remote terminals log in/on to Hosts
(and you do--resource sharing doesn't preclude remote access, it
subsumes it), you have a problem with Hosts' native terminal
control software or "access methods", which only "know about"
certain kinds/brands/types of terminals, but there are many more
terminals out there than any Host has internalized (even those
whose operating systems take a generic view of I/O and don't
allow applications programs to "expect" particular terminals).
You don't want to make N different types of Host/Operating
System have to become aware of M different types of terminal.
You don't want to limit access to users who are at one particular
type of terminal even if all your Hosts happen to have one in
common. Therefore, you define a common intermediate
representation (CIR) of the properties of terminals--or create a
Network Virtual Terminal (NVT), where "virtual" is used by
analogy to "virtual memory" in the sense of something that isn't
necessarily really present physically but can be used as if it
were. Each Host adds one terminal to its set of supported types,
the NVT--where adding means translating/mapping from the CIR to
something acceptable to the rest of the programs on your system
when receiving terminal-oriented traffic "from the net", and
translating/mapping to the CIR from whatever your acceptable
native representation was when sending terminal-oriented traffic
"to the net". (And the system to which the terminal is
physically attached does the same things.)
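The arithmetic behind the "N X M Problem" is worth making concrete. A minimal sketch, with the host and terminal counts chosen purely for illustration:

```python
# Hypothetical sketch of the "N X M Problem": without a common
# intermediate representation, every host/operating-system type
# must know about every terminal type; with the NVT as CIR, each
# party maps only to and from the one common form.

def pairwise_translators(n_hosts: int, m_terminals: int) -> int:
    # No CIR: each of N host types internalizes each of M terminals.
    return n_hosts * m_terminals

def cir_translators(n_hosts: int, m_terminals: int) -> int:
    # With the NVT: each host adds one "terminal" (the NVT), and
    # each terminal's host maps it to the NVT once.
    return n_hosts + m_terminals

# e.g., 50 host types and 200 terminal types:
#   pairwise_translators(50, 200) -> 10000 mappings
#   cir_translators(50, 200)      ->   250 mappings
```

The CIR turns a multiplicative problem into an additive one, which is the demonstration alluded to above.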
"Virtualizing" worked so well for the protocol in question
("Telnet", for TELetypewriter NETwork) that when it came time to
design a File Transfer Protocol (FTP), it was employed again--in
two ways, as it happens. (It also worked so well that in some
circles, "Telnet" is used as a generic term for "Virtual Terminal
Protocol", just like "Kleenex" for "disposable handkerchief".)
The second way in which FTP (another generic-specific) used
Common Intermediate Representations is well-known: you can make
your FTP protocol interpreters (PI's) use certain "virtual" file
types in ARMS FTP's and in proposed ISORMS FTP's. The first way
a CIR was used deserves more publicity, though: We decided to
have a command-oriented FTP, in the sense of making it possible
for users to cause files to be deleted from remote directories,
for example, as well as simply getting a file added to a remote
directory. (We also wanted to be able to designate some files to
be treated as input to the receiving Hosts' native "mail" system,
if it had one.) Therefore, we needed an agreed-upon
representation of the commands--not only spelling the names, but
also defining the character set, indicating the ends of lines,
and so on. In less time than it takes to write about it, we
realized we already had such a CIR: "Telnet".
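What "using Telnet as the CIR for FTP's commands" amounts to can be sketched briefly. The NVT conventions (ASCII character set, CRLF end-of-line) and the command names below match the real FTP vocabulary; the encoder itself is an illustrative assumption, not the protocol's specified implementation.

```python
# Hypothetical sketch: FTP control commands expressed in the
# Telnet NVT's agreed-upon representation -- ASCII text with
# lines ended by CRLF.  Command verbs (DELE, STOR) follow the
# real FTP vocabulary; the encoder is illustrative.

NVT_EOL = "\r\n"  # the NVT's end-of-line convention

def encode_command(verb: str, argument: str = "") -> bytes:
    # Spell the command in the agreed character set (ASCII) and
    # mark the end of line the way the NVT does.
    line = verb if not argument else f"{verb} {argument}"
    return (line + NVT_EOL).encode("ascii")
```

So the agreed-upon spelling, character set, and line-ending problems were all solved at once by borrowing the NVT's answers.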
So we "used Telnet", or at any rate the NVT aspects of that
protocol, as the "Presentation" protocol for the control aspects
of FTP--but we didn't conclude from that that Telnet was a lower
layer than FTP. Rather, we applied the principles of modularity
to make use of a mechanism for more than one purpose--and we
didn't presume to know enough about the internals of everybody
else's Host to dictate how the program(s) that conferred the FTP
functionality interfaced with the program(s) that conferred the
Telnet functionality. That is, on some operating systems it
makes sense to let FTP get at the NVT CIR by means of closed
subroutine calls, on others through native IPC, and on still
others by open subroutine calls (in the sense of replicating the
code that does the NVT mapping within the FTP PI). Such
decisions are best left to the system programmers of the several
Hosts. Although the ISORM takes a similar view in principle, in
practice many ISORM advocates take the model prescriptively
rather than descriptively and construe it to require that PI's at
a given level must communicate with each other via an "N-1
entity" even within the same Host. (Still other ISORMites
construe the model as dictating "monolithic" layers--i.e., single
protocols per level--but this view seems to be abating.)
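The "closed subroutine" option mentioned above can be sketched to show why reusing Telnet's NVT mapping does not make Telnet a lower layer than FTP: both PI's simply call the same routine, with no "N-1 entity" in between. The native convention chosen here (bare-newline line ends) is an illustrative assumption, not any particular host's.

```python
# Hypothetical sketch of the closed-subroutine option: one NVT
# mapping routine invoked directly by both the Telnet PI and the
# FTP PI on the same host -- modular reuse within a layer, not a
# sublayer relationship.

def nvt_to_native(data: bytes) -> str:
    # Map the NVT CIR (ASCII, CRLF line ends) to a host whose
    # native convention is assumed to be bare newlines.
    return data.decode("ascii").replace("\r\n", "\n")

def telnet_pi_receive(data: bytes) -> str:
    # Terminal-oriented traffic "from the net".
    return nvt_to_native(data)

def ftp_pi_receive_control(data: bytes) -> str:
    # FTP control traffic uses the very same mapping.
    return nvt_to_native(data)
```

On another operating system the same relationship might be realized through native IPC, or by replicating the mapping code inside the FTP PI; the point is that the choice is the system programmer's, not the reference model's.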
One other consideration about virtualizing bears mention:
it's a good servant but a bad master. That is, when you're
dealing with the amount of traffic that traverses a
terminal-oriented logical (or even virtual) connection, you don't
worry much about how many CPU cycles you're "wasting" on mapping
into and out of the NVT CIR; but
when you're dealing with files that can be millions of bits long,
you probably should worry--for those CPU cycles are in a fairly
real sense the resources you're making sharable. Therefore, when
it comes to (generic) FTP's, even though we've seen it in one or
two ISORM L6 proposals, having only a virtual file conceptual
model is not wise. You'd rather let one side or the other map
directly between native representations where possible, to
eliminate the overhead for going into and out of the CIR--for
long enough files, anyway, and provided one side or the other is
both willing and able to do the mapping to the intended
recipient's native representation.
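The decision rule being argued for here can be sketched as follows; the representation names and the set of directly mappable pairs are illustrative assumptions.

```python
# Hypothetical sketch of the efficiency point: for long files, go
# through the virtual-file CIR only when neither side can map
# directly between the native representations.  Representation
# names and the direct-mapping table are illustrative.

# Pairs (sender, receiver) one side is willing and able to map
# directly, sparing the double translation through the CIR.
DIRECT_MAPPINGS = {("ebcdic", "ascii")}

def transfer_strategy(sender_rep: str, receiver_rep: str) -> str:
    if sender_rep == receiver_rep:
        return "copy"           # no mapping at all
    if (sender_rep, receiver_rep) in DIRECT_MAPPINGS:
        return "direct map"     # one translation
    return "via CIR"            # two translations: into and out of
                                # the virtual-file form
```

A virtual-file-only model forces the "via CIR" branch every time, spending twice the CPU cycles that are, as the text notes, the very resources being shared.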
Efficiency
The last point leads nicely into an axiom that is rarely
acknowledged explicitly, but does belong in the ARM list of
axioms: Efficiency is a concern, in several ways. In the first
place, protocol mechanisms are meant to follow the design
principle of Parsimony, or Least Mechanism; witness the argument
immediately above about making FTP's be able to avoid the double
mapping of a Virtual File approach when they can. In the second
place, witness the argument further above about leaving
implementation decisions to implementers. In the author's
opinion, the worst mistake in the ISORM isn't defining seven (or
more) layers, but decreeing that "N-entities" must communicate
via "N-1 entities" in a fashion which supports the interpretation
that it applies intra-Host as well as inter-Host. If you picture
the ISORM as a highrise apartment building, you are constrained
to climb down the stairs and then back up to visit a neighbor
whose apartment is on your own floor. This might be good
exercise, but CPU's don't need aerobics as far as we know.
Recalling that this paper is only secondarily about ARM
"vs." ISORM, let's duly note that in the ARM there is a concern
for efficiency from the perspective of participating Hosts'
resources (e.g., CPU cycles and, it shouldn't be overlooked,
"core") expended on interpreting protocols, and pass on to the
final axiom without digressing to one or two proposed specific
ISORM mechanisms which seem to be extremely inefficient.
Equity
The least known of the ARM axioms has to do with a concern
over whether particular protocol mechanisms would entail undue
perturbation of native mechanisms if implemented in particular
Hosts. That is, however reluctantly, the ARMS designers were
willing to listen to claims that "you can't implement that in my
system" when particular tactics were proposed and, however
grudgingly, retreat from a mechanism that seemed perfectly
natural on their home systems to one which didn't seriously
discommode a colleague's home system. A tacit design principle
based on equity was employed. The classic example had to do with
"electronic mail", where a desire to avoid charging for incoming
mail led some FTP designers to think that the optionally
mandatory "login" commands of the protocol shouldn't be mandatory
after all. But the commands were needed by some operating
systems to actuate not only accounting mechanisms but
authentication mechanisms as well, and the process which
"fielded" FTP connections was too privileged (and too busy) to
contain the FTP PI as well. So (to make a complex story
cryptic), a common name and password were advertised for a "free"
account for incoming mail, and the login commands remained
mandatory (in the sense that any Host could require their
issuance before it participated in FTP).
Rather than attempt to clarify the example, let's get to its
moral: The point is that how well protocol mechanisms integrate
with particular operating systems can be extremely subtle, so in
order to be equitable to participating systems, you must either
have your designers be sophisticated implementers or subject your
designs to review by sophisticated implementers (and grant veto
power to them in some sense).
It is important to note that, in the author's view, the
ISORM not only does not reflect application of the Principle of
Equity, but it also fails to take any explicit cognizance of the
necessity of properly integrating its protocol interpreters into
existing operating systems. Probably motivated by Equity
considerations, ARMS protocols, on the other hand, represent the
result of intense implementation discussion and testing.
Articulation
Given the foregoing discussion of its axioms, and a reminder
that we find it impossible in light of the existence of dozens of
definitions of so fundamental a notion as "process" to believe in
rigorous definitions, the ARPANET Reference Model is not going to
require much space to articulate. Indeed, given further the
observation that we believe reference models are supposed to be