trunks are rising faster than the speeds of the switching elements.
This change in the balance of speeds has manifested itself in several
ways. In most current designs for local area networks, where
bandwidth is not expensive, the design decision was to trade off
effective use of the bandwidth for a simplified switching technique.
In particular, networks such as Ethernet use broadcast as the normal
distribution method, which essentially eliminates the need for a
switching element.
As we look at still higher speed networks, and in particular networks
in which the bandwidth is still the expensive component, we must
design new options for switching which will permit effective use of
bandwidth without the switch itself becoming the bottleneck.
The central thrust of new research must thus be to explore new
network architectures that are consistent with these very different
speed assumptions.
The development of computer communications has been tremendously
distorted by the characteristics of wide-area networking: normally
high cost, low speed, high error rate, large delay. The time is ripe
for a revolution in thinking, technology, and approaches, analogous
to the revolution caused by VCR technology over 8 and 16 mm. film
technology.
Fiber optics is clearly the enabling technology for high-speed
transmission, in fact, so much so that there is an expectation that
the switching elements will now hold down the data rates. Both
conventional circuit switching and packet switching have significant
problems at higher data rates. For instance, circuit switching
requires increasing delays for FTDM synchronization to handle skew.
In the case of packet switching, traditional approaches require too
much processing per packet to handle the tremendous data flow. The
problem for both switching regimes is the "intelligence" in the
switches, which in turn requires electronics technology.
Besides intelligence, another problem for wide-area networks is
storage, both because it ties us to electronics (for the foreseeable
future) and because it produces instabilities in a large-scale
system. (See, for instance, the work by Van Jacobson on self-
organizing phenomena for self-destruction in the Internet.)
Techniques are required to eliminate dependence on storage, such as
cut-through routing.
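
To make the contrast concrete, the sketch below (in C) compares
conventional store-and-forward buffering with cut-through forwarding,
in which transmission on the outgoing link begins as soon as the
routing header has arrived. The primitives receive_bytes, send_bytes,
and lookup_output_port, as well as the header and buffer sizes, are
illustrative assumptions only, not part of any particular switch
design.

   #include <stddef.h>

   #define HEADER_BYTES 16     /* assumed size of the routing header  */
   #define CHUNK_BYTES  64     /* small working buffer in the switch  */

   /* Hypothetical driver primitives, assumed for illustration. */
   extern size_t receive_bytes(int in_port, unsigned char *buf,
                               size_t want);
   extern void   send_bytes(int out_port, const unsigned char *buf,
                            size_t len);
   extern int    lookup_output_port(const unsigned char *header);

   /* Store-and-forward: the whole packet is buffered before any byte
      is transmitted, so per-packet storage and delay grow with packet
      size.  pkt must be large enough to hold the entire packet. */
   void store_and_forward(int in_port, unsigned char *pkt, size_t len)
   {
       receive_bytes(in_port, pkt, len);
       send_bytes(lookup_output_port(pkt), pkt, len);
   }

   /* Cut-through: forwarding begins once the header has arrived, so
      the switch needs only a small, fixed amount of storage. */
   void cut_through(int in_port, unsigned char buf[CHUNK_BYTES],
                    size_t pkt_len)
   {
       size_t done = receive_bytes(in_port, buf, HEADER_BYTES);
       int    out  = lookup_output_port(buf);

       send_bytes(out, buf, done);
       while (done < pkt_len) {
           size_t want = pkt_len - done;
           if (want > CHUNK_BYTES)
               want = CHUNK_BYTES;
           size_t got = receive_bytes(in_port, buf, want);
           send_bytes(out, buf, got);
           done += got;
       }
   }

The price of cut-through is that a packet may already be partly on
the outgoing link before an error is detected, so error handling must
be pushed toward the endpoints.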
Overall, high-speed WANs are the greatest agents of change, the
greatest catalyst both commercially and militarily, and the area ripe
for revolution. Judging by the attributes of current high-speed
network research prototypes, WANs of the future will be photonic,
multi-gigabit networks with enormous throughput, low delay, and low
error rate.
A zero-based budgeting approach is required to develop the new high-
speed internetwork architecture. That is, the time is ripe to
significantly rethink the Internet, building on experience with this
system. Issues of concern are manageability, understanding,
evolvability, and support for the new communication requirements,
including remote procedure call, real-time, security and fault-
tolerance.
The GN must be able to deal with two sources of high-bandwidth
requirements. There will be some end devices (computers) connected
more or less directly to the GN because of their individual
requirements for high bandwidth (e.g., supercomputers needing to
drive remote high-bandwidth graphics devices). In addition, the
aggregate traffic due to large numbers of moderate rate users
(estimates are roughly up to a million potential users needing up to
1 Mbit/s at any given time) results in a high-bandwidth requirement
in total on the GN. The statistics of such traffic are different and
there are different possible technical approaches for dealing with
them. Thus, an architectural approach for dealing with both must be
developed.
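
The arithmetic behind the second source of demand is worth making
explicit. The sketch below works out the worst-case aggregate, and a
statistically multiplexed figure under an assumed activity factor;
the 10 percent figure is purely illustrative, not an estimate from
this report.

   #include <stdio.h>

   int main(void)
   {
       double users        = 1.0e6;  /* potential users (from text)   */
       double per_user_bps = 1.0e6;  /* up to 1 Mbit/s each (text)    */
       double activity     = 0.10;   /* assumed fraction active at    */
                                     /* once; an illustrative guess   */

       double peak_bps = users * per_user_bps;   /* 1e12 = 1 Tbit/s   */
       double avg_bps  = peak_bps * activity;    /* 1e11 = 100 Gbit/s */

       printf("worst case aggregate: %.0f Gbit/s\n", peak_bps / 1.0e9);
       printf("at %.0f%% activity:   %.0f Gbit/s\n",
              activity * 100.0, avg_bps / 1.0e9);
       return 0;
   }

Either figure exceeds any single gigabit trunk, which is why the
statistics of this traffic matter to the switching and multiplexing
strategy.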
Overall, the next-generation architecture has to be, first and
foremost, a management architecture. The directions in link speeds,
processor speeds and memory solve the performance problems for many
communication situations so well that manageability becomes the
predominant concern. (In fact, fast communication makes large
systems more prone to performance, reliability, and security
problems.) In many ways, the management system of the internetwork
is the ultimate distributed system. The solution to this tough
problem may well require the best talents from the communications,
operating systems and distributed systems communities, perhaps even
drawing on database and parallelism research.
3.1.1. High-Speed Internet using High-Speed Networks
The GN will need to take advantage of a multitude of different and
heterogeneous networks, all of high speed. In addition to networks
based on the technology of the GB, there will be high-speed LANs. A
key issue in the development of the GN will be the development of a
strategy for interconnecting such networks to provide gigabit service
on an end-to-end basis. This will involve techniques for switching,
interfacing, and management (as discussed in the sections below)
coupled with an architecture that allows the GN to take full
advantage of the performance of the various high-speed networks.
3.1.2. Network Organization
The GN will need an architecture that supports the need to manage the
system as well as obtain high performance. We note that almost all
human-engineered systems are hierarchically structured from the
standpoint of control, monitoring, and information flow. A
hierarchical design may be the key to manageability in the next-
generation architecture.
One approach is to use a general three-level structure, corresponding
to interadministrational, intraadministrational, and cluster
networks. The first level interconnects communication facilities of
truly separate administrations where there is significant separation
of security, accounting, and goals. The second level interconnects
subadministrations which exist for management convenience in large
organizations. For example, a research group within a university may
function as a subadministration. The cluster level consists of
networks configured to provide maximal performance among hosts which
are in frequent communication, such as a set of diskless workstations
and their common file server. These hosts are typically, but not
necessarily, geographically collocated. For example, two remote
networks may be tightly coupled by a fiber optic link that bridges
between the two physical networks, making them function as one.
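
To make the three-level structure concrete, the declarations below
give one possible, purely illustrative encoding of it; the names and
the rule for deriving the scope of policy from two locators are
assumptions for discussion, not a proposed GN addressing format.

   enum gn_level {
       LEVEL_INTERADMIN,   /* between truly separate administrations */
       LEVEL_INTRAADMIN,   /* between subadministrations of one org  */
       LEVEL_CLUSTER       /* hosts in frequent, tight communication */
   };

   struct gn_locator {
       unsigned int administration;    /* e.g., a university          */
       unsigned int subadministration; /* e.g., a research group      */
       unsigned int cluster;           /* e.g., workstations + server */
       unsigned int host;
   };

   /* The coarsest level at which two locators differ bounds the
      scope of security, accounting, and performance policy that
      applies to traffic between them. */
   enum gn_level scope_of(const struct gn_locator *a,
                          const struct gn_locator *b)
   {
       if (a->administration != b->administration)
           return LEVEL_INTERADMIN;
       if (a->subadministration != b->subadministration)
           return LEVEL_INTRAADMIN;
       return LEVEL_CLUSTER;
   }

Such an encoding bounds policy scope; it need not constrain which
pairs of hosts actually communicate, a point taken up below.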
Research along these lines should study the interorganizational
characteristics of communications, such as those being investigated
by the IAB Task Force on Autonomous Networks. Based on current
results, we expect that such work would clearly demonstrate that
considerable communication takes place between particular
subadministrations in different administrations; communication
patterns are not strictly hierarchical. For example, there might be
intense direct communication between the experimental physics
departments of two independent universities, or between the computer
support group of one company and the operating system development
group of another. In addition, (sub)administrations may well also
require divisions into public information and private information.
3.1.3. Fault-Tolerant System
Although the GN will be developed as part of an experimental research
program, it will also serve as part of the infrastructure for
researchers who are experimenting with applications which will use
such a network. The GN must have reasonably high availability to
support these research activities. In addition, to facilitate the
transfer of this technology to future operational military and
commercial users, it will need to be designed to become highly
reliable. This can be accomplished through diversity of transmission
paths, the development of fault-tolerant switches, use of a
distributed control structure with self-correcting algorithms, and
the protection of network control traffic. The architecture of a GN
should support and allow for all of these things.
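
As a small illustration of the first of these techniques, the sketch
below picks a backup among physically diverse transmission paths when
one fails; the path table, the health flag, and the notion of a
shared conduit are illustrative assumptions, not part of any specific
GN design.

   #include <stddef.h>

   struct path {
       int id;
       int usable;               /* updated by monitoring traffic     */
       int shares_conduit_with;  /* path id, or -1 if fully diverse   */
   };

   /* Prefer a usable path that does not share a physical conduit
      with the failed one, so a single cut cannot take out both. */
   int select_backup(const struct path *paths, size_t n, int failed_id)
   {
       for (size_t i = 0; i < n; i++) {
           if (paths[i].usable &&
               paths[i].id != failed_id &&
               paths[i].shares_conduit_with != failed_id)
               return paths[i].id;
       }
       return -1;                /* no diverse usable path remains    */
   }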
3.1.4. Functional Division of Control Between Network Elements
Current protocol architectures use the layered model of functional
decomposition first developed in the early work on ARPANET protocols.
The concept of layering has been a powerful one, allowing dramatic
variation in network technologies without requiring the complete
reimplementation of applications. Layering has also had a
first-order impact on the development of international
standards for data communication---witness the ISO "Reference Model
for Open Systems Interconnection."
Unfortunately, however, the powerful concept of layering has been
paired, both in the DoD Internet work and the ISO work, with an
extremely weak concept of the interface between layers. The
interface designs are all organized around the idea of commands and
responses plus an error indicator. For example, the TCP service
interface provides the user with commands to set up or close a TCP
connection and commands to send and receive datagrams. The user may
well "know" whether they are using a file transfer service or a
character-at-a-time virtual terminal, but can't tell the TCP. The
underlying network may "know" that failures have reduced the path to
the user's destination to a single 9.6 kbit/s link, but it also can't
tell the TCP implementation.
All of the information that an analyst would consider crucial in
diagnosing system performance is carefully hidden from adjacent
layers. One "solution" often discussed (but rarely implemented) is
to condense all of this information into a few bits of "Type of
Service" or "Quality of Service" request flowing in one direction
only---from application to network. It seems likely that this
approach cannot succeed, both because it applies too much compression
to the knowledge available and because it does not provide two-way
flow.
We believe it to be likely that the next-generation network will
require a much richer interface between every pair of adjacent layers
if adequate performance is to be achieved. Research is needed into
the conceptual mechanisms, both indicators and controls, that can be
implemented at these interfaces and that, when used, will result in
better performance. If real differences in performance can be
observed, then the implementors of every layer will have a strong
incentive to make use of the mechanisms.
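
As a purely illustrative example of what such indicators and controls
might look like, the declarations below sketch a two-way interface
between a transport layer and the network below it. Every name here
is an assumption made for the sake of discussion, not an existing API
or a proposal of this report.

   /* Controls flowing down: what the layer above knows about its
      own needs. */
   struct svc_controls {
       int  bulk_transfer;    /* 1 for file-transfer-like traffic     */
       int  interactive;      /* 1 for character-at-a-time traffic    */
       long target_rate_bps;  /* rate the application hopes to keep   */
       long max_delay_ms;     /* delay beyond which data loses value  */
   };

   /* Indicators flowing up: what the layer below knows about current
      conditions on the path. */
   struct svc_indicators {
       long   current_path_bps;  /* e.g., degraded to 9600 bit/s      */
       long   current_delay_ms;
       double loss_rate;
   };

   /* Hypothetical interface calls: the layer above both states its
      needs and registers to hear about changes underneath, rather
      than sending a few type-of-service bits one way only. */
   int net_set_controls(int channel, const struct svc_controls *c);
   int net_get_indicators(int channel, struct svc_indicators *out);
   int net_on_change(int channel,
                     void (*notify)(const struct svc_indicators *));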
We can observe the first glimmers of this sort of coordination
between layers in current work. For example, in the ISO work there
are 5 classes of transport protocol which are supposed to provide a
range of possible matches between application needs and network
capabilities. Unfortunately, the class of transport protocol is
today chosen statically, by the implementor, rather than dynamically.
The DARPA Wideband net offers a choice of stream or datagram service,
but typically a given host uses all of one or all of the
other---again, a static rather than a dynamic choice. The
research that we believe is needed, therefore, is not how to provide
alternatives, but how to provide them and choose among them on a
dynamic, real-time basis.
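
Continuing the illustrative declarations above, a dynamic choice
might look like the following, in which the sending layer re-selects
between a stream-like and a datagram-like service as the indicators
change; the thresholds are arbitrary assumptions.

   enum service_class { CLASS_STREAM, CLASS_DATAGRAM };

   /* Re-evaluated whenever the indicators change, rather than fixed
      when the software is built. */
   enum service_class choose_class(const struct svc_controls   *c,
                                   const struct svc_indicators *i)
   {
       /* A bulk transfer over a fast, clean path favors a
          stream-like (circuit-like) service ... */
       if (c->bulk_transfer &&
           i->current_path_bps >= c->target_rate_bps &&
           i->loss_rate < 0.01)
           return CLASS_STREAM;

       /* ... while interactive traffic, or a degraded path, favors
          independent datagrams. */
       return CLASS_DATAGRAM;
   }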
3.1.5. Different Switch Technologies
One approach to high-performance networking is to design a technology
that is expected to work as a stand-alone demonstration, without
addressing the need for interconnection to other networks. Such an
experiment may be very valuable for rapid exploration of the design
space. However, our experience with the Internet project suggests
that a primary research goal should be the development of a network
architecture that permits the interconnection of a number of
different switching technologies.
The Internet project was successful to a large extent because it
could incorporate a number of new and preexisting network
technologies: various local area networks, store and forward
switching networks, broadcast satellite nets, packet radio networks,
and so on. In this way, it decoupled the use of the protocols from a
particular technology base. In fact, the technology base evolved
rapidly, but the Internet protocols themselves provided a stability
that led to their success.
The next-generation architecture must similarly deal with a diverse
and evolving technology base. We see "fast-packet" switching now