Network Working Group                                       C. Partridge
Request for Comments: 1152                  BBN Systems and Technologies
                                                              April 1990

                             Workshop Report

              Internet Research Steering Group Workshop on
                        Very-High-Speed Networks
Status of this Memo
This memo is a report on a workshop sponsored by the Internet
Research Steering Group. This memo is for information only. This
RFC does not specify an Internet standard. Distribution of this memo
is unlimited.
Introduction
The goal of the workshop was to gather together a small number of
leading researchers on high-speed networks in an environment
conducive to lively thinking. The hope is that by having such a
workshop the IRSG has helped to stimulate new or improved research in
the area of high-speed networks.
Attendance at the workshop was limited to fifty people, and attendees
had to apply to get in. Applications were reviewed by a program
committee, which accepted about half of them. A few key individuals
were invited directly by the program committee, without application.
The workshop was organized by Dave Clark and Craig Partridge.
This workshop report is derived from session writeups by each of the
session chairmen, which were then reviewed by the workshop
participants.
Session 1: Protocol Implementation (David D. Clark, Chair)
This session was concerned with what changes might be required in
protocols in order to achieve very high-speed operation.
The session was introduced by David Clark (MIT LCS), who claimed that
existing protocols would be sufficient to go at a gigabit per second,
if that were the only goal. In fact, proposals for high-speed
networks usually include other requirements as well, such as going
long distances, supporting many users, supporting new services such
as reserved bandwidth, and so on. Only by examining the detailed
requirements can one understand and compare various proposals for
protocols. A variety of techniques have been proposed to permit
protocols to operate at high speeds, ranging from clever
implementation to complete relayering of function. Clark asserted
that currently even the basic problem to be solved is not clear, let
alone the proper approach to the solution.
Mats Bjorkman (Uppsala University) described a project that involved
the use of an outboard protocol processor to support high-speed
operation. He asserted that his approach would permit accelerated
processing of steady-state sequences of packets. Van Jacobson (LBL)
reported results that suggest that existing protocols can operate at
high speeds without the need for outboard processors. He also argued
that resource reservation can be integrated into a connectionless
protocol such as IP without losing the essence of the connectionless
architecture. This is in contrast to a more commonly held belief
that full connection setup will be necessary in order to support
resource reservation. Jacobson said that he has an experimental IP
gateway that supports resource reservation for specific packet
sequences today.
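The report does not say how Jacobson's gateway recognizes a reserved
sequence. As a rough sketch only, with invented names, one way a
connectionless gateway can honor reservations without connection
setup is to classify each arriving datagram by the flow implied by
its header fields and consult a soft reservation table, forwarding
unmatched traffic as ordinary best-effort datagrams:

   /* Sketch, not Jacobson's code: classify a datagram by its
    * implicit flow and look the flow up in a reservation table.
    * Unreserved flows get ordinary best-effort service. */
   #include <stdint.h>

   struct flow_key {
       uint32_t src_addr;        /* IP source address      */
       uint32_t dst_addr;        /* IP destination address */
       uint8_t  protocol;        /* IP protocol number     */
   };

   struct reservation {
       struct flow_key key;
       uint32_t rate_bps;        /* reserved bandwidth     */
       int      in_use;
   };

   #define TABLE_SIZE 256
   static struct reservation table[TABLE_SIZE];

   /* Return the reserved rate for a flow, or 0 for best-effort. */
   uint32_t lookup_reservation(const struct flow_key *k)
   {
       unsigned h = (k->src_addr ^ k->dst_addr ^ k->protocol)
                    % TABLE_SIZE;
       if (table[h].in_use &&
           table[h].key.src_addr == k->src_addr &&
           table[h].key.dst_addr == k->dst_addr &&
           table[h].key.protocol == k->protocol)
           return table[h].rate_bps;
       return 0;
   }

Because no connection is established, the table can be treated as
soft state that is refreshed by the traffic itself, preserving the
connectionless character of IP.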
Dave Borman (Cray Research) described high-speed execution of TCP on
a Cray, where the overhead is most probably the system and I/O
architecture rather than the protocol. He believes that protocols
such as TCP would be suitable for high-speed operation if the windows
and sequence spaces were large enough. He reported that the current
speed of a TCP transfer between the processors of a Cray Y-MP was
over 500 Mbps. Jon Crowcroft (University College London) described
the current network projects at UCL. He offered a speculation that
congestion could be managed in very high-speed networks by returning
to the sender any packets for which transmission capacity was not
available.
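Borman's point about windows and sequence spaces can be made concrete
with a back-of-the-envelope calculation. A TCP window must cover the
bandwidth-delay product to keep a pipe full; the rate and delay
figures below are illustrative assumptions, not from his talk:

   /* Window and sequence-space requirements at high speed. */
   #include <stdio.h>

   int main(void)
   {
       double rate_bps = 1e9;     /* assumed 1 Gbps path          */
       double rtt_sec  = 0.060;   /* assumed 60 ms cross-country  */

       double bdp_bytes = rate_bps / 8.0 * rtt_sec;  /* 7.5 MB    */

       printf("window needed: %.0f bytes\n", bdp_bytes);
       printf("classic 16-bit TCP window limit: 65535 bytes\n");

       /* The 32-bit sequence space (4 GB) wraps in about 34
        * seconds at 1 Gbps, so sequence numbers must also be
        * enlarged or otherwise protected. */
       printf("sequence space wraps in %.1f sec\n",
              4294967296.0 / (rate_bps / 8.0));
       return 0;
   }

A 7.5 MB window exceeds the classic 16-bit window field by two
orders of magnitude, which is exactly the kind of enlargement Borman
argues TCP needs for high-speed operation.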
Dave Feldmeier (Bellcore) reported on the Bellcore participation in
the Aurora project, a joint experiment of Bellcore, IBM, MIT, and
UPenn, which has the goal of installing and evaluating two sorts of
switches at gigabit speeds between those four sites. Bellcore is
interested in switch and protocol design, and Feldmeier and his group
are designing and implementing a 1 Gbps transport protocol and
network interface. The protocol processor will have special support
for such things as forward error correction to deal with ATM cell
loss in VLSI; a new FEC code and chip design have been developed to
run at 1 Gbps.
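The report does not describe the Bellcore code itself. Purely as a
point of reference, the simplest cell-loss code is a single parity
cell XORed over a group of data cells, which lets the receiver
rebuild any one missing 48-byte payload; the group size below is an
assumption:

   /* Illustrative single-parity FEC for ATM cell loss; not the
    * Bellcore design. */
   #include <string.h>

   #define CELL_PAYLOAD 48   /* ATM cell payload size in bytes    */
   #define GROUP        8    /* data cells per group (assumed)    */

   /* Compute the parity cell for a group of data cells. */
   void fec_parity(const unsigned char cells[GROUP][CELL_PAYLOAD],
                   unsigned char parity[CELL_PAYLOAD])
   {
       memset(parity, 0, CELL_PAYLOAD);
       for (int i = 0; i < GROUP; i++)
           for (int j = 0; j < CELL_PAYLOAD; j++)
               parity[j] ^= cells[i][j];
   }

   /* Rebuild one lost cell by XORing the parity cell with the
    * surviving data cells; 'lost' is the missing cell's index. */
   void fec_recover(unsigned char cells[GROUP][CELL_PAYLOAD],
                    const unsigned char parity[CELL_PAYLOAD],
                    int lost)
   {
       memcpy(cells[lost], parity, CELL_PAYLOAD);
       for (int i = 0; i < GROUP; i++)
           if (i != lost)
               for (int j = 0; j < CELL_PAYLOAD; j++)
                   cells[lost][j] ^= cells[i][j];
   }

Recovering lost cells in the interface rather than retransmitting
them matters at gigabit speeds, where a retransmission costs a full
round trip's worth of data in flight.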
Because of the large number of speakers, there was no general
discussion after this session.
Session 2: High-Speed Applications (Keith Lantz, Chair)
This session focused on applications and the requirements they impose
on the underlying networks. Keith Lantz (Olivetti Research
California) opened by introducing the concept of the portable office
- a world where a user is able to take her work with her wherever she
goes. In such an office a worker can access the same services and
the same people regardless of whether she is in the same building
with those services and people, at home, or at a distant site (such
as a hotel) - or whether she is equipped with a highly portable,
multi-media workstation, which she can literally carry with her
wherever she goes. Thus, portable should be interpreted as referring
to portability of access to services rather than to portability of
hardware. Although not coordinated in advance, each of the
presentations in this session can be viewed as a perspective on the
portable office.
The bulk of Lantz's talk focused on desktop teleconferencing - the
integration of traditional audio/video teleconferencing technologies
with workstation-based network computing so as to enable
geographically distributed individuals to collaborate, in real time,
using multiple media (in particular, text, graphics, facsimile,
audio, and video) and all available computer-based tools, from their
respective locales (i.e., office, home, or hotel). Such a facility
places severe requirements on the underlying network. Specifically,
it requires support for several data streams with widely varying
bandwidths (from a few Kbps to 1 Gbps) but generally low delay, some
with minimal jitter (i.e., isochronous), and all synchronized with
each other (i.e., multi-channel or media synchronization). It
appears that high-speed network researchers are paying insufficient
attention to the last point, in particular. For example, the bulk of
the research on ATM has assumed that channels have independent
connection request and burst statistics; this is clearly not the case
in the context of desktop teleconferencing.
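To illustrate what multi-channel synchronization asks of the
protocols (the design below is an assumption, not taken from any
talk), one common approach is to stamp every unit of every stream
with a single shared clock and delay presentation by a fixed offset
large enough to absorb the worst expected jitter, so that audio,
video, and pointing gestures captured together are presented
together:

   /* Assumed design: one common clock across all streams. */
   #include <stdint.h>

   struct media_unit {
       uint32_t stream_id;    /* audio, video, workspace, ...    */
       uint64_t capture_us;   /* capture time on the common clock */
       /* payload follows */
   };

   /* Playout time: capture time plus the synchronization offset
    * shared by every stream in the presentation. */
   uint64_t playout_us(const struct media_unit *u,
                       uint64_t offset_us)
   {
       return u->capture_us + offset_us;
   }

   /* Units are presented together iff their capture times fall in
    * the same playout interval (an assumed 40 ms video frame). */
   int same_instant(const struct media_unit *a,
                    const struct media_unit *b)
   {
       return a->capture_us / 40000 == b->capture_us / 40000;
   }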
Lantz also stressed the need for adaptive protocols, to accommodate
situations where the capacity of the network is exceeded, or where it
is necessary to interoperate with low-speed networks, or where human
factors suggest that the quality of service should change (e.g.,
increasing or decreasing the resolution of a video image). Employing
adaptive protocols suggests, first, that the interface to the network
protocols must be hardware-independent and based only on quality of
service. Second, a variety of code conversion services should be
available, for example, to convert from one audio encoding scheme to
another. Promising examples of adaptive protocols in the video
domain include variable-rate constant-quality coding, layered or
embedded coding, progressive transmission, and (most recently, at
UC-Berkeley) the extension of the concepts of structured graphics to
video, such that the component elements of the video image are kept
logically separate throughout the production-to-presentation cycle.
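A hardware-independent interface based only on quality of service, as
Lantz argues for, might look something like the following; the
interface and all its names are invented for illustration:

   /* Hypothetical QoS-only network interface: the caller states
    * the service it needs, with no reference to the underlying
    * hardware, and adapts if the network grants less. */
   #include <stdint.h>

   struct qos {
       uint32_t bandwidth_bps;  /* requested throughput           */
       uint32_t max_delay_ms;   /* end-to-end delay bound         */
       uint32_t max_jitter_ms;  /* 0 requests isochronous service */
       uint32_t sync_group;     /* streams in one group are
                                   presented in synchrony         */
   };

   /* Toy grant policy: clamp the request to an assumed capacity.
    * A real network would also police delay and jitter; a caller
    * given a reduced grant adapts, e.g., by lowering video
    * resolution. */
   int qos_open(const char *destination, const struct qos *request,
                struct qos *granted)
   {
       (void)destination;                       /* unused in stub  */
       *granted = *request;
       if (granted->bandwidth_bps > 100000000)  /* assumed 100 Mbps */
           granted->bandwidth_bps = 100000000;
       return 0;                                /* stub handle      */
   }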
Charlie Catlett (National Center for Supercomputing Applications)
continued by analyzing a specific scientific application, simulation
of a thunderstorm, with respect to its network requirements. The
application was analyzed from the standpoint of identifying data flow
and the interrelationships between the computational algorithms, the
supercomputer CPU throughput, the nature and size of the data set,
and the available network services (throughput, delay, etc).
Simulation and the visualization of results typically involve
several steps:
1. Simulation
2. Tessellation (transform simulation data into three-dimensional
geometric volume descriptions, or polygons)
3. Rendering (transform polygons into raster image)
For the thunderstorm simulation, the simulation and tessellation are
currently done using a Cray supercomputer and the resulting polygons
are sent to a Silicon Graphics workstation to be rendered and
displayed. The simulation creates data at a rate of between 32 and
128 Mbps (depending on the number of Cray-2 processors working on the
simulation) and the tessellation output data rate is typically in
the range of 10 to 100 Mbps, varying with the complexity of the
visualization techniques. The SGI workstation can display 100,000
polygons/sec, which for this example translates to up to 10
frames/sec. Analysis tools such as tracer particles and two-
dimensional slices are used interactively at the workstation with
pre-calculated polygon sets.
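Catlett's display figures imply a per-frame polygon budget that is
worth working out explicitly; the arithmetic below simply restates
the numbers in the report:

   /* Polygon budget implied by the reported display rates. */
   #include <stdio.h>

   int main(void)
   {
       double polys_per_sec  = 100000.0;  /* SGI display rate  */
       double frames_per_sec = 10.0;      /* observed rate     */

       /* 100,000 / 10 = 10,000 polygons per frame. */
       printf("polygons per frame: %.0f\n",
              polys_per_sec / frames_per_sec);

       /* The projected 1,000,000 polygons/sec workstation would
        * render the same 10,000-polygon scenes at 100 frames/sec,
        * or scenes ten times as complex at the current rate. */
       printf("projected frame rate: %.0f\n", 1000000.0 / 10000.0);
       return 0;
   }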
In the next two to three years, supercomputer speeds of 10-30 GFLOPS
and workstation speeds of up to 1 GFLOPS and 1 million
polygons/second display are projected to be available. Increased
supercomputer power will yield a simulation data creation rate of up
to several Gbps for this application. The increased workstation
power will allow both tessellation and rendering to be done at the
workstation. The use of shared window systems will allow multiple
researchers on the network to collaborate on a simulation, with the
possibility of each scientist using his or her own visualization
techniques with the tessellation process running on his or her
workstation. Further developments, such as network virtual memory,
will allow the tessellation processes on the workstations to access
variables directly in supercomputer memory.
Terry Crowley (BBN Systems and Technologies) continued the theme of
collaboration, in the context of real-time video and audio, shared
multimedia workspaces, multimedia and video mail, distributed file
systems, scientific visualization, network access to video and image
information, transaction processing systems, and transferring data
and computational results between workstations and supercomputers.
In general, such applications could help groups collaborate by
directly providing communication channels (real-time video, shared
multimedia workspaces), by improving and expanding on the kinds of
information that can be shared (multimedia and video mail,
supercomputer data and results), and by reducing replication and the
complexity of sharing (distributed file systems, network access to
video and image information).
Actual usage patterns for these applications are hard to predict in
advance. For example, real-time video might be used for group
conferencing, for video phone calls, for walking down the hall, or
for providing a long-term shared viewport between remote locations in
order to help establish community ties. Two characteristics of
network traffic that we can expect are the need to provide multiple
data streams to the end user and the need to synchronize these
streams. These data streams will include real-time video, access to
stored video, shared multimedia workspaces, and access to other
multimedia data. A presentation involving multiple data streams must
be synchronized in order to maintain cross-references between them
(e.g., pointing actions within the shared multimedia workspace that
are combined with a voice request to delete this and save that).
While much traffic will be point-to-point, a significant amount of
traffic will involve conferences between multiple sites. A protocol