Mbps to 1 Gbps.
Mills' approach involves an externally switched architecture in which
the timing and routing of flows between crossbar switches are
determined by sequencing tables and counters in high-speed memory
local to each crossbar. The switch program is driven by a
reservation-TDMA protocol and distributed scheduling algorithm
running in a co-located, general-purpose processor. The end-to-end
customers are free to use any protocol or data format consistent with
the timing of the network. His primary interest in the initial
phases of the project is the study of appropriate reservation and
scheduling algorithms. He expects these algorithms to have much in
common with the PODA algorithm used in the SATNET and WIDEBAND
satellite systems and with the algorithms being considered for the
Multiple Satellite System (MSS).
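
As a rough sketch of how a sequencing table might drive such a
crossbar (the names, sizes, and table layout here are illustrative
assumptions, not Mills' actual design), a free-running slot counter
indexes a table that tells the crossbar which input-to-output
connections to make in each time slot:

    /* Illustrative sketch of slot-table-driven crossbar switching.
     * Sizes and names are assumptions, not Mills' design. */
    #include <stdio.h>

    #define NUM_SLOTS 64   /* length of one TDMA cycle */
    #define NUM_PORTS  8   /* crossbar size            */

    /* For each slot, the output port for each input port;
     * -1 marks an idle input in that slot. */
    static int seq_table[NUM_SLOTS][NUM_PORTS];

    /* Called once per slot time: advance the counter and set up
     * the connections the table dictates for this slot. */
    void crossbar_tick(unsigned *slot_counter)
    {
        int slot = (int)((*slot_counter)++ % NUM_SLOTS);
        for (int in = 0; in < NUM_PORTS; in++)
            if (seq_table[slot][in] >= 0)
                printf("slot %d: in %d -> out %d\n",
                       slot, in, seq_table[slot][in]);
    }

The reservation protocol running in the co-located processor would
fill in seq_table; the crossbar itself only reads it.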
John Robinson (JR, BBN Systems and Technologies) gave a talk called
"Beyond the Butterfly", which described work on a design for an ATM
cell switch, known as MONET. The talk described strategies for
buffering at the input and output interfaces to a switch fabric
(crossbar or butterfly). The main idea was that cells should be
introduced to the switch fabric in random sequence and at random
fabric entry ports, so that no persistent traffic pattern keeps
suffering high cell loss in the fabric; losses arise from contention
at output ports or, in the case of a butterfly, within the fabric
itself. Next, the relationship of this work to an earlier design
for a large-scale parallel processor, the Monarch, was described. In
closing, JR offered the claim that this class of switch is realizable
in current technology (barely) for operation over SONET OC-48 2.4
Gbps links.
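
As a rough illustration of the randomization step (a sketch under
assumed names, not the MONET mechanism itself), buffered cells can be
assigned to fabric entry ports through a fresh random permutation
each switching cycle, so that no arrival pattern maps persistently
onto the same contended paths:

    /* Sketch: shuffle the assignment of cells to fabric entry
     * ports each cycle (Fisher-Yates).  Illustrative only. */
    #include <stdlib.h>

    void shuffle_entry_ports(int *port, int n)
    {
        for (int i = n - 1; i > 0; i--) {
            int j = rand() % (i + 1);   /* uniform over [0, i] */
            int tmp = port[i];
            port[i] = port[j];
            port[j] = tmp;
        }
    }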
Dave Sincoskie (Bellcore) reported on two topics: recent switch
construction at Bellcore, and high-speed processing of ATM cells
carrying VC or DG information. Recent switch design has resulted in
a switch architecture named SUNSHINE, a Batcher-banyan switch which
uses recirculation and multiple output banyans to resolve contention
and increase throughput. A paper on this switch will be published at
ISS '90, and is available upon request from the author. One of the
interesting traffic results from simulations of SUNSHINE shows that
per-port output queues of up to 1,000 cells (packets) may be
necessary for bursty traffic patterns. Also, Bill Marcus (at
Bellcore) has recently produced Batcher-banyan (32x32) chips that
test at up to 170 Mbps per port.
The second point in this talk was that there is little difference
in the switch processing of Virtual Circuit (VC) and Datagram (DG)
traffic that has previously been broken into ATM cells at the
network edge. The switch needs to do a header translation operation
followed by some queueing (not necessarily FIFO). The header
translation of the VC and DG cells differs mainly in the memory
organization of the address translation tables (dense vs. sparse).
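
A hedged sketch of the dense-versus-sparse distinction (the data
structures and sizes are assumptions for illustration, not Bellcore's
design): a VC identifier is a small integer assigned at setup and can
index a table directly, while a DG address comes from a large,
sparsely used space and calls for a hash table:

    /* Sketch of the two translation-table organizations.
     * Sizes and the hash functions are illustrative assumptions. */
    #include <stddef.h>

    #define VC_TABLE_SIZE 4096   /* dense: VCI is a small index   */
    #define DG_BUCKETS    4096   /* sparse: hash the full address */

    struct route { int out_port; unsigned new_label; };

    static struct route vc_table[VC_TABLE_SIZE];  /* direct index */

    struct dg_entry {
        unsigned long addr;
        struct route r;
        struct dg_entry *next;
    };
    static struct dg_entry *dg_table[DG_BUCKETS]; /* chained hash */

    struct route *vc_lookup(unsigned vci)
    {
        return &vc_table[vci % VC_TABLE_SIZE];
    }

    struct route *dg_lookup(unsigned long addr)
    {
        struct dg_entry *e = dg_table[addr % DG_BUCKETS];
        for (; e != NULL; e = e->next)
            if (e->addr == addr)
                return &e->r;
        return NULL;                              /* no route */
    }

In both cases the per-cell work is one lookup plus a label rewrite,
which is the point of the argument: the table organization differs,
not the switching path.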
The discussion after the presentations seemed to wander off the topic
of switching, back to some of the source-routing vs. network routing
issues discussed earlier in the day.
Session 9: Open Mike Night (Craig Partridge, Chair)
As an experiment, the workshop held an open mike session during the
evening of the second day. Participants were invited to speak for up
to five minutes on any subject of their choice. Minutes of this
session are sketchy because the chair found himself preoccupied
with keeping speakers roughly within their time limits.
Charlie Catlett (NCSA) showed a film of the thunderstorm simulations
he discussed earlier.
Dave Cheriton (Stanford) made a controversial suggestion that
perhaps one could manage congestion in the network simply by using a
steep price curve, in which sending large amounts of data costs
exponentially more than sending small amounts (thus leading people
to ask for large bandwidth only when they need it, and to pay so
much for it that the network can afford to provide it).
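
For concreteness, a toy version of such a price curve (the
functional form and constants are purely illustrative assumptions,
not part of Cheriton's suggestion):

    /* Toy superlinear tariff: cost grows exponentially with the
     * requested bandwidth.  Constants are illustrative only. */
    #include <math.h>

    double price(double mbps)
    {
        const double base = 1.0;   /* cost of a minimal request */
        const double k    = 0.01;  /* steepness of the curve    */
        return base * exp(k * mbps);
    }

Under such a curve a 1000 Mbps request costs about e**10 (roughly
22,000) times the base, which is the intended deterrent.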
Guru Parulkar (Washington University, St. Louis) argued that the
recent discussion on the appropriateness of existing protocols and
the need for new protocols (protocol architecture) for gigabit
networking lacks the right focus. The emphasis should instead be on
what the right functionality for gigabit speeds is: simpler per-
packet processing, a combination of rate-based and window-based flow
control, smart retransmission strategies, and an appropriate
partitioning of work among the host CPU and operating system, an
off-board CPU, and custom hardware, among others. It is not
surprising that the existing protocols can be
modified to include this functionality. By the same token, it is not
surprising that new protocols can be designed which take advantage of
lessons of existing protocols and also include other features
necessary for gigabit speeds.
Raj Jain (DEC) suggested we look at new ways to measure protocol
performance, arguing that our current metrics are insufficiently
informative.
Dan Helman (UCSC) asked the group to consider more carefully who
exactly the users of the network will be: a few large consumers, or
many small ones?
Session 10: Miscellaneous Topics (Bob Braden, Chair)
As its title implies, this session covered a variety of different
topics relating to high-speed networking.
Jim Kurose (University of Massachusetts) described his studies of
scheduling and discard policies for real-time (constrained delay)
traffic. He showed that by enforcing local deadlines at switches
along the path, it is possible to significantly reduce overall loss
for such traffic. Since his results depend upon the traffic model
assumptions, he ended with a plea for work on traffic models, stating
that Poisson models can sometimes lead to results that are wrong by
many orders of magnitude.
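
A minimal sketch of the local-deadline policy (the even split of the
end-to-end budget across remaining hops is an assumption for
illustration, not necessarily Kurose's exact scheme): a switch
discards any cell whose locally allotted deadline has passed, since
it would miss its end-to-end deadline anyway and only waste
downstream capacity:

    /* Sketch: per-hop deadline enforcement at a switch. */
    typedef struct {
        double local_deadline;   /* latest useful departure time */
    } cell_t;

    /* Returns nonzero if the cell is already late locally and
     * should be discarded rather than forwarded. */
    int deadline_discard(const cell_t *c, double now)
    {
        return now > c->local_deadline;
    }

    /* Assumed budget-splitting rule: give each remaining hop an
     * equal share of whatever end-to-end slack is left. */
    double next_local_deadline(double e2e_deadline, double now,
                               int hops_left)
    {
        return now + (e2e_deadline - now) / hops_left;
    }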
Nachum Shacham (SRI International) discussed the importance of error
correction schemes that can recover lost cells, and as an example
presented a simple scheme based upon longitudinal parity. He also
showed a variant, diagonal parity, which allows a single missing cell
to be recreated and its position in the stream determined.
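
A minimal sketch of recovery under longitudinal parity, assuming the
position of the missing cell is already known (for example, from
sequence numbers); the diagonal-parity variant, which also locates
the loss, is not shown:

    /* Recover one lost cell from a block of n cells protected by
     * a bytewise-XOR (longitudinal) parity cell.  Assumes the
     * lost cell's position is known, e.g., from sequencing. */
    #define CELL_SIZE 48    /* ATM payload size in bytes */

    void recover_cell(unsigned char cells[][CELL_SIZE], int n,
                      const unsigned char parity[CELL_SIZE],
                      int lost)
    {
        for (int b = 0; b < CELL_SIZE; b++) {
            unsigned char x = parity[b];
            for (int i = 0; i < n; i++)
                if (i != lost)
                    x ^= cells[i][b];
            cells[lost][b] = x;   /* XOR of all survivors */
        }
    }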
Two talks concerned high-speed LANs. Biswanath Mukherjee (UC Davis)
surveyed the various proposals for fair scheduling on unidirectional
bus networks, especially those that are distance insensitive, i.e.,
that can achieve 100% channel utilization independent of the bus
length and data rate. He described in particular his own scheme,
which he calls p-i persistent.
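
In rough outline (a sketch of the general persistent-protocol class;
the scheme's actual rule for computing the probabilities is not
reproduced here), station i transmits into a passing empty slot with
probability p_i, with the p_i chosen per station so that upstream
stations do not starve downstream ones:

    /* Sketch of a p-i persistent transmission decision: on seeing
     * an empty slot, station i transmits with probability p_i. */
    #include <stdlib.h>

    int should_transmit(double p_i, int slot_empty, int have_data)
    {
        if (!slot_empty || !have_data)
            return 0;
        return ((double)rand() / RAND_MAX) < p_i;
    }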
Howard Salwen (Proteon), speaking in place of Mehdi Massehi of IBM
Zurich who was unable to attend, also discussed high-speed LAN
technologies. At 100 Mbps, a token ring has a clear advantage, but
at 1 Gbps, the speed of light kills 802.6, for example. He briefly
described Massehi's reservation-based scheme, CRMA (Cyclic-
Reservation Multiple-Access).
Finally, Yechiam Yemini (YY, Columbia University) discussed his work
on a protocol silicon compiler. In order to exploit the potential
parallelism, he is planning to use one processor per connection.
The session closed with a spirited discussion about the relative
merits of building an experimental network versus simulating it.
Proponents of simulation pointed out the high cost of building a
prototype and the limits a particular hardware realization imposes
on the solution space. Proponents of building suggested
that artificial traffic can never explore the state space of a
network as well as real traffic can, and that an experimental
prototype is important for validating simulations.
Session 11: Protocol Architectures (Vint Cerf, Chair)
Nick Maxemchuk (AT&T Bell Labs) summarized the distinctions between
circuit switching, virtual circuits, and datagrams. Circuits are
good for (nearly) constant rate sources. Circuit switching dedicates
resources for the entire period of service. You have to set up the
resource allocation before using it. In a 1.7 Gbps network, a 3000-
mile diameter consumes 10**7 bytes during the circuit set-up round-
trip time, and potentially the same for circuit teardown. Some
service requirements (file transfer, facsimile transmission) are far
smaller than the wasted 2*10**7 bytes these circuit management delays
impose. (Of course, these costs are not as dramatic if the allocated
bandwidth is less than the maximum possible.)
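
The arithmetic behind the 10**7 figure can be checked directly: a
3000-mile round trip at the propagation speed of light in fiber
(assumed here to be roughly two-thirds of c) takes close to 50 ms,
and 1.7 Gbps sustained for 50 ms is on the order of 10**7 bytes:

    /* Quick check of the bytes-in-flight figure.  The fiber
     * propagation speed is an assumed typical value. */
    #include <stdio.h>

    int main(void)
    {
        double rate_bps = 1.7e9;             /* 1.7 Gbps         */
        double meters   = 3000.0 * 1609.34;  /* 3000 miles       */
        double v        = 2.0e8;             /* ~2/3 c in fiber  */
        double rtt      = 2.0 * meters / v;  /* setup round trip */
        double bytes    = rate_bps * rtt / 8.0;
        printf("RTT = %.1f ms, bytes = %.2e\n", rtt * 1e3, bytes);
        return 0;                            /* ~1.0e7 bytes     */
    }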
Virtual circuits allow shared use of bandwidth (multiplexing) when
the primary source of traffic is idle (as in voice Time Assigned
Speech Interpolation, TASI). The user notifies the network of planned
usage.
Datagrams (DG) are appropriate when there is no prior knowledge of
use statistics or usage is far less than the capacity wasted during
circuit or virtual circuit set-up. One can adaptively route traffic
among equivalent resources.
In gigabit ATMs, the high service speed and decreased cell size
increase the relative burstiness of service requests. All of these
characteristics combine to make DG service very attractive.
Maxemchuk then described a deflection routing notion in which traffic
would be broken into units of fixed length and allowed into the
network when capacity was available and routed out by any available
channel, with preference being given to the channel on the better
path. This idea is similar to the hot potato routing of Paul Baran's
1964 packet switching design. With buffering (one buffer), Maxemchuk
achieved a theoretical 90% utilization. Large reassembly buffers
provide for better throughput.
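
A minimal sketch of the per-node deflection decision (structure
names are illustrative assumptions): prefer the output channel on
the better path, but rather than wait for it, send the unit out any
free channel:

    /* Sketch of a deflection (hot potato) output choice.
     * busy[i] != 0 means channel i is claimed this slot;
     * 'preferred' is the channel on the better path. */
    #define NUM_CHANNELS 4

    /* Returns the channel to use, deflecting to any free one if
     * needed, or -1 when all are taken (hold in the one buffer). */
    int choose_channel(const int busy[NUM_CHANNELS], int preferred)
    {
        if (!busy[preferred])
            return preferred;        /* better path is free */
        for (int i = 0; i < NUM_CHANNELS; i++)
            if (!busy[i])
                return i;            /* deflect             */
        return -1;                   /* buffer (or drop)    */
    }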
Maxemchuk did not have an answer to the question: how do you make
sure empty "slots" are available where needed? This is rather like
the problem encountered by D. Davies at the UK National Physical
Laboratory in his isarithmic network design, in which a finite number
of crates are available for data transport throughout the network.
Guru Parulkar (Washington University, St. Louis) presented a broad
view of an Internet architecture in which some portion of the system
would operate at gigabit speeds. In his model, internet, transport,
and application protocols would operate end to end. The internet
functions would be reflected in gateways and in the host/net
interface, as they are in the current Internet. However, the
internet would support a new type of service called a congram,
which aims to combine the strengths of both soft connections and
datagrams.
In this architecture, a variable grade of service would be provided.
Users could request congrams (UCON) or the system could set them up
internally (Picons) to avoid end-to-end setup latency. The various
grades of service could be requested, conceptually, by asserting
various required (desired) levels of error control, throughput,
delay, interarrival jitter, and so on. Gateways based on ATM
switches, for example, would use packet processors at entry/exit to
do internet-specific per-packet processing, which may include
fragmentation and reassembly of packets (into and out of ATM cells).
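
As a rough illustration of the per-packet work such an entry
processor would perform (the helper and layout here are assumptions
for illustration, not Parulkar's design), fragmentation into fixed
48-byte ATM payloads looks like:

    /* Sketch: fragment a packet into 48-byte ATM cell payloads,
     * padding the final cell.  emit_cell() is an assumed helper
     * that would prepend the 5-byte ATM header and queue the
     * cell toward the switch fabric. */
    #include <string.h>

    #define ATM_PAYLOAD 48

    void emit_cell(const unsigned char payload[ATM_PAYLOAD]);

    void fragment(const unsigned char *pkt, size_t len)
    {
        unsigned char cell[ATM_PAYLOAD];
        for (size_t off = 0; off < len; off += ATM_PAYLOAD) {
            size_t n = len - off;
            if (n > ATM_PAYLOAD)
                n = ATM_PAYLOAD;
            memcpy(cell, pkt + off, n);
            memset(cell + n, 0, ATM_PAYLOAD - n);   /* pad */
            emit_cell(cell);
        }
    }

The exit processor reverses this, collecting cells back into the
original packet.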
At the transport level, Parulkar argued for protocols which can
provide application-oriented flow and error control with simple
per-packet processing. He also mentioned the notion of a
generalized RPC
(GRPC) in which code, data, and execution might be variously local or
remote from the procedure initiator. GRPC can be implemented using
network level virtual storage mechanisms.
The basic premise of Raj Yavatkar's presentation (University of