providing a multicast capability is critical.
Finally, Greg Watson (HP) presented an overview of ongoing work at
the Hewlett-Packard Bristol lab. Their belief is that, while
applications for high-speed networks employing supercomputers are the
technology drivers, the economic drivers will be applications
requiring moderate bandwidth (say 10 Mbps) that are used by everyone
on the network.
They are investigating how multimedia workstations can assist
distributed research teams - small teams of people who are
geographically dispersed and who need to work closely on some area of
research. Each workstation provides multiple video channels,
together with some distributed applications running on personal
computers. The bandwidth requirements per workstation are about 40
Mbps, assuming a certain degree of compression of the video channels.
Currently the video is distributed as an analog signal over CATV
equipment. Ideally it would all be carried over a single, unified
wide-area network operating in the one-to-several Gbps range.
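As a rough, back-of-the-envelope check (not part of the talk), the
sketch below shows how a per-workstation figure of about 40 Mbps might
be reached; the channel count and per-channel rates are illustrative
assumptions, not numbers reported by HP.

def workstation_mbps(video_channels, mbps_per_channel, data_mbps):
    """Aggregate bandwidth needed by one multimedia workstation."""
    return video_channels * mbps_per_channel + data_mbps

if __name__ == "__main__":
    # E.g., four compressed video channels at ~8 Mbps each plus ~8 Mbps
    # of application data lands in the neighborhood of 40 Mbps.
    print(workstation_mbps(video_channels=4, mbps_per_channel=8,
                           data_mbps=8))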
They have constructed a gigabit network prototype and are currently
experimenting with uncompressed video carried over the same network
as normal data traffic.
Session 3: Lightwave Technology and its Implications (Ira Richer, Chair)
Bob Kennedy (MIT) opened the session with a talk on network design in
an era of excess bandwidth. Kennedy's research is focused on multi-
purpose networks in which bandwidth is not a scarce commodity,
networks with bandwidths of tens of terahertz. Kennedy points out
that a key challenge in such networks is that electronics cannot keep
up with fiber speeds. He proposes that we consider all-optical
networks (in which all signals are optical) with optoelectronic nodes
or gateways capable of recognizing and capturing only traffic
destined for them, using time, frequency, or code divisions of the
huge bandwidth. The routing algorithms in such networks would be
extremely simple, to avoid converting optical signals into slower
electronic form in order to do switching.
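As an illustration of this idea (not taken from Kennedy's talk), the
sketch below models optoelectronic nodes that see every frame on the
fiber but capture only frames on the channels assigned to them; the
frame format and channel numbering are invented for this example.

from dataclasses import dataclass

@dataclass
class Frame:
    channel: int       # wavelength / time slot / code assigned to the receiver
    payload: bytes

class OptoelectronicNode:
    """Sees all traffic on the fiber; captures only its own channels."""

    def __init__(self, name, assigned_channels):
        self.name = name
        self.assigned_channels = set(assigned_channels)
        self.received = []

    def observe(self, frame):
        # Conversion to electronics happens only for the node's own channels.
        if frame.channel in self.assigned_channels:
            self.received.append(frame)

def broadcast(nodes, frame):
    """Every node on the all-optical network sees every frame."""
    for node in nodes:
        node.observe(frame)

if __name__ == "__main__":
    a = OptoelectronicNode("A", {1})
    b = OptoelectronicNode("B", {2})
    broadcast([a, b], Frame(channel=2, payload=b"for B only"))
    print(len(a.received), len(b.received))     # 0 1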
Rich Gitlin (AT&T Bell Labs) gave a talk on issues and opportunities
in broadband telecommunications networks, with emphasis on the role
of fiber optic and photonic technology. A three-level architecture
for a broadband telecommunications network was presented. The
network is B-ISDN/ATM (150 Mbps) based and consists of: customer
premises equipment (PBXs, LANs, multimedia terminals) that accesses
the network via a router/gateway; a Network Node (a high-performance
ATM packet switch) that serves both as a LAN-to-LAN interconnect and
as a packet concentrator for traffic destined for CPE attached to
other Network Nodes; and a backbone layer that interconnects the
Network Nodes via a Digital Cross-Connect System providing
reconfigurable SONET circuits between them (the use of circuits
minimizes delay and avoids the need to implement
peak-transmission-rate packet switching). Within this framework,
the most likely places for near-term application of photonics, apart
from pure transport (i.e., 150 Mbps channels in a 2.4 Gbps SONET
system), are in the Cross-Connect (a Wavelength Division Multiplexing
based structure was described) and in next-generation LANs that
provide gigabit-per-second throughputs through the use of multiple
fibers, concurrent transmission, and new access mechanisms (such as
store and forward).
A planned interlocation Bell Labs multimedia gigabit-per-second
research network, LuckyNet, was described; it attempts to extend many
of the above concepts to achieve its principal goals: provision of a
gigabit-per-second capability to a heterogeneous user community,
stimulation of applications that require Gbps throughput (initial
applications are video conferencing and LAN interconnect), and, to
the extent possible, the use of standards so that interconnection
with other gigabit testbeds is possible.
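To make the layering concrete, the sketch below models the three
levels in a minimal way: CPE hands traffic to a Network Node, and
Network Nodes exchange traffic only over SONET circuits preconfigured
in the Cross-Connect, so no per-packet switching occurs in the
backbone. The class and method names are invented for illustration
and do not describe any AT&T implementation.

class CrossConnect:
    """Digital Cross-Connect: holds preconfigured circuits between nodes."""

    def __init__(self):
        self.circuits = set()                    # unordered pairs of node names

    def provision(self, node_a, node_b):
        self.circuits.add(frozenset((node_a, node_b)))

    def has_circuit(self, node_a, node_b):
        return frozenset((node_a, node_b)) in self.circuits

class NetworkNode:
    """ATM packet switch: concentrates CPE traffic, forwards between nodes."""

    def __init__(self, name, cross_connect):
        self.name = name
        self.cross_connect = cross_connect

    def forward(self, dest_node, cell):
        # Per-cell switching happens here; the backbone circuit is fixed.
        if not self.cross_connect.has_circuit(self.name, dest_node):
            raise RuntimeError("no SONET circuit provisioned to " + dest_node)
        return dest_node, cell

if __name__ == "__main__":
    dcs = CrossConnect()
    dcs.provision("NN-1", "NN-2")
    nn1 = NetworkNode("NN-1", dcs)
    print(nn1.forward("NN-2", b"ATM cell from a PBX or LAN on NN-1"))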
Session 4: High Speed Networks and the Phone System
(David Tennenhouse, Chair)
David Tennenhouse (MIT) reported on the ATM workshop he hosted during
the two days preceding this workshop. His report will appear as part
of the proceedings of his workshop.
Wally St. John (LANL) followed with a presentation on the Los Alamos
gigabit testbed. This testbed is based on the High Performance
Parallel Interface (HPPI) and on crossbar switch technology. LANL
has designed its own 16x16 crossbar switch and has also evaluated the
Network Systems 8x8 crossbar switch. Future plans for the network
include expansion to the CASA gigabit testbed. The remote sites (San
Diego Supercomputer Center, Caltech, and JPL) are configured
similarly to the LANL testbed. The long-haul interface is from HPPI
to/from SONET (using ATM if it is available in time).
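As a small illustration of the switching model (not code from LANL),
the sketch below implements a nonblocking N x N crossbar in which each
input port is connected to at most one output port at a time; the
interface is invented for this example.

class Crossbar:
    """Nonblocking N x N crossbar: each input maps to at most one output."""

    def __init__(self, ports=16):
        self.ports = ports
        self.output_of = {}                  # input port -> output port
        self.busy_outputs = set()

    def connect(self, inp, out):
        if out in self.busy_outputs:
            raise RuntimeError(f"output port {out} already in use")
        self.output_of[inp] = out
        self.busy_outputs.add(out)

    def release(self, inp):
        self.busy_outputs.discard(self.output_of.pop(inp, None))

    def route(self, inp, data):
        """Deliver data from an input port to its connected output port."""
        return self.output_of[inp], data

if __name__ == "__main__":
    xbar = Crossbar(ports=16)                   # same size as the LANL switch
    xbar.connect(3, 7)                          # host on port 3 talks to port 7
    print(xbar.route(3, b"HPPI burst"))         # (7, b'HPPI burst')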
Wally also discussed some of the problems related to building a
HPPI-SONET gateway:
a) Flow control. The HPPI, by itself, is only readily extensible
   to 64 km because of the READY-type flow control used in the
   physical layer. The gateway will need to incorporate larger
   buffers and independent flow control (a rough buffer-sizing
   sketch appears after this list).
b) Error-rate expectations. SONET is only specified to have a
1E-10 BER on a per hop basis. This is inadequate for long
links. Those in the know say that SONET will be much better
but the designer is faced with the poor BER in the SONET spec.
c) Frame mapping. There are several interesting issues to be
   considered in finding a good mapping from the HPPI packet to
   the SONET frame, including which SONET STS levels will be
   available in what time frame, whether concatenated service
   will be available, and the error-rate issue.
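As a rough illustration of the flow-control problem in item (a), the
sketch below computes the bandwidth-delay product - the amount of
data in flight that a gateway must be prepared to buffer once the
READY handshake is terminated locally. The 800 Mbps rate is the
nominal HPPI data rate and 5 us/km is a typical figure for
propagation in fiber; the link lengths are merely examples.

def buffer_bytes(rate_bps, link_km, us_per_km=5.0):
    """Bytes in flight over a round trip, i.e., minimum gateway buffering."""
    round_trip_s = 2 * link_km * us_per_km * 1e-6
    return rate_bps * round_trip_s / 8

if __name__ == "__main__":
    for km in (64, 500, 2000):
        mb = buffer_bytes(800e6, km) / 1e6
        print(f"{km:5d} km link: ~{mb:.2f} MB of buffering required")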
Dan Helman (UCSC) talked about work he has been doing with Darrell
Long to examine the interconnection of Internet networks via an ATM
B-ISDN network. Since network interfaces and packet processing are
the expensive parts of high-speed networks, they believe it doesn't
make sense to use the ATM backbone only for transmission; it should
be used for switching as well. Therefore gateways (either shared by
a subnet or integrated with fast hosts) are needed to encapsulate or
convert conventional protocols to ATM format. Gateways will be
responsible for caching connections to recently accessed
destinations. Since many short-lived, low-bandwidth connections are
foreseen (e.g., for mail and ftp), routing in the ATM network (to set
up connections) should not be complicated - a form of static routing
should be adequate. Connection performance can be monitored by the
gateways, and a connection can be reestablished if its performance
becomes unacceptable. All
decision making can be done by gateways and route servers at low
packet rates, rather than the high aggregate rate of the ATM network.
One complicated issue to be addressed is how to transparently
introduce an ATM backbone alongside the existing Internet.
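A minimal sketch of the connection-caching idea follows: the gateway
keeps virtual circuits to recently used destinations open and reuses
them, tearing down the least recently used circuit when the cache is
full. The class and its interface are invented for illustration.

from collections import OrderedDict

class ConnectionCache:
    """LRU cache of open ATM virtual circuits, keyed by destination."""

    def __init__(self, capacity=64):
        self.capacity = capacity
        self.circuits = OrderedDict()        # destination -> circuit id

    def circuit_for(self, destination, setup):
        """Return an open circuit to destination, setting one up on a miss."""
        if destination in self.circuits:
            self.circuits.move_to_end(destination)     # mark as recently used
            return self.circuits[destination]
        if len(self.circuits) >= self.capacity:
            # Evict the least recently used circuit; a real gateway would
            # also signal the ATM network to tear it down here.
            self.circuits.popitem(last=False)
        circuit = setup(destination)                   # stand-in for ATM setup
        self.circuits[destination] = circuit
        return circuit

if __name__ == "__main__":
    cache = ConnectionCache(capacity=2)
    counter = iter(range(100))
    setup = lambda dest: next(counter)       # each miss "opens" a new circuit
    print(cache.circuit_for("host-a", setup))   # 0: miss
    print(cache.circuit_for("host-b", setup))   # 1: miss
    print(cache.circuit_for("host-a", setup))   # 0: hit, circuit reused
    print(cache.circuit_for("host-c", setup))   # 2: miss, evicts host-b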
Session 5: Distributed Systems (David Farber, Chair)
Craig Partridge (BBN Systems and Technologies) started this session
by arguing that classic RPC does not scale well to gigabit-speed
networks. The gist of his argument was that machines are getting
faster and faster, while the round-trip delay of networks is staying
relatively constant because we cannot send faster than the speed of
light. As a result, the effective cost of doing a simple RPC,
measured in instruction cycles spent waiting at the sending machine,
will become extremely high (millions of instruction cycles spent
waiting for the reply to an RPC). Furthermore, the methods currently
used to improve RPC performance, such as futures and parallel RPC, do
not adequately solve this problem. Requests made as futures will
have to be issued much, much earlier if they are to complete by the
time their results are needed. Parallel RPC allows multiple threads,
but does not change the fact that each individual sequence of RPCs
still takes a very long time.
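The scale of the problem is easy to work out: the cycles a machine
spends idle during one RPC are roughly the round-trip delay times the
clock rate. The delay and clock rates in the sketch below are
illustrative assumptions, not figures from the talk.

def cycles_waiting(round_trip_s, clock_hz):
    """Instruction cycles the caller spends idle during one RPC."""
    return round_trip_s * clock_hz

if __name__ == "__main__":
    rtt = 30e-3                      # e.g., a cross-country round trip
    for mhz in (25, 100, 1000):      # CPUs get faster; the delay does not
        idle = cycles_waiting(rtt, mhz * 1e6)
        print(f"{mhz:5d} MHz CPU: ~{idle:,.0f} idle cycles per RPC")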
Craig went on to suggest that there are at least two possible ways
out of the problem. One approach is to try to do a lot of caching
(to waste bandwidth to keep the CPU fed). A limitation of this
approach is that at some point the cache becomes so big that you have
to keep it consistent with other systems' caches, and you suddenly
find yourself doing synchronization RPCs to avoid doing normal RPCs
(oops!). A more promising approach is to try to consolidate RPCs
being sent to the same machine into larger operations which can be
sent as a single transaction, run on the remote machine, and the
result returned. (Craig noted that he is pursuing this approach in
his doctoral dissertation at Harvard).
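The following sketch illustrates the consolidation idea in a generic
way (it is not the mechanism of Craig's dissertation): calls bound for
the same server are queued locally and shipped as one batch that runs
remotely, so a sequence of operations costs a single round trip. All
names here are invented for the example.

class BatchingStub:
    """Queues calls to one server and sends them as a single transaction."""

    def __init__(self, transport):
        self.transport = transport       # callable taking a list of (op, args)
        self.pending = []

    def call(self, op, *args):
        """Record an operation instead of sending it immediately."""
        self.pending.append((op, args))

    def flush(self):
        """Ship every queued operation in one round trip; return all results."""
        batch, self.pending = self.pending, []
        return self.transport(batch)

def fake_server(batch):
    # Stand-in for the remote machine: run each queued operation in order.
    ops = {"add": lambda a, b: a + b, "neg": lambda a: -a}
    return [ops[name](*args) for name, args in batch]

if __name__ == "__main__":
    stub = BatchingStub(fake_server)
    stub.call("add", 2, 3)
    stub.call("neg", 7)
    print(stub.flush())      # [5, -7], paid for with a single round trip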
Ken Schroder (BBN Systems and Technologies) gave a talk on the
challenges of combining gigabit networks with wide-area heterogeneous
distributed operating systems. Ken feels the key goals of wide area
distributed systems will be to support large volume data transfers
between users of conferencing and similar applications, and to
deliver information to a large number of end users sharing services
such as satellite image databases. These distributed systems will be
motivated by the natural distribution of users, of information and of
expensive special purpose computer resources.
Ken pointed to three of the key problems that must be addressed at
the system level in these environments: how to provide high
utilization; how to manage consistency and synchronization in the
presence of concurrency and non-determinism; and how to construct
scalable system and application services. Utilization is key only to
high performance applications, where current systems would be limited
by the cost of factors such as repeatedly copying messages,
converting data representations and switching between application and
operating system. Concurrency can be used to improve performance, but
it is also likely to occur in many programs inadvertently because of
distribution. Techniques are required both to exploit concurrency
when it is needed, and to limit it when non-determinism can lead to
incorrect results. Extensive research on ensuring consistency and
resolving resource conflicts has been done in the database area;
however, distributed scheduling and the need for high availability
despite partial system failures introduce special problems that
require additional research. Service scalability will be required to
support customer needs as the size of the user community grows. It
will require attention both to ensuring that components do not break
when they are subdivided across additional processors to support a
larger user population, and to ensuring that the performance
delivered to each user can be affordably maintained as new users are
added.
In a bold presentation, Dave Cheriton (Stanford) made a sweeping
argument that we are making a false dichotomy between distributed
operating systems and networks. In a gigabit world, he argued, the
major resource in the system is the network, and in a normal
operating system we would expect such a critical resource to be
managed by the operating system. Or, put another way, the
distributed operating system for a gigabit network should manage the
network.
Cheriton went on to say that if a gigabit distributed operating
system is managing the network, then it is perfectly reasonable to
make the network very dumb (but fast) and put the system intelligence
in the operating systems on the hosts that form the distributed
system.
In another talk on interprocess communication, Jonathan Smith (UPenn)
again raised the problem of network delay limiting RPC performance.
In contrast to Partridge's earlier talk, Smith argued that the
appropriate approach is anticipation or caching. He justified his
argument with a simple cost example. If a system is doing a page
fetch between two systems which have a five millisecond round-trip
network delay between them, the cost of fetching n pages is:
5 msec + (n-1) * 32 usec
Thus the cost of fetching an additional page is only 32 usec, but
underfetching and having to make another request to get a page you
missed costs 5000 usec. Based on these arguments, Smith suggested
that we re-examine work in virtual memory to see if there are
comfortable ways to support distributed virtual memory with
anticipation.
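Smith's arithmetic can be written out directly; the sketch below uses
the 5 msec round trip and the 32 usec per-page transfer time from his
example to compare overfetching with going back for a missed page.

# Smith's cost model: fetching n pages in one request costs
# 5 msec + (n - 1) * 32 usec.

ROUND_TRIP_US = 5000.0    # 5 msec round-trip network delay
PER_PAGE_US = 32.0        # incremental cost of each additional page

def fetch_cost_us(n_pages):
    """Cost, in microseconds, of fetching n_pages in a single request."""
    return ROUND_TRIP_US + (n_pages - 1) * PER_PAGE_US

if __name__ == "__main__":
    print(fetch_cost_us(1))                      # 5000.0 us: one page on demand
    print(fetch_cost_us(10))                     # 5288.0 us: ten pages at once
    print(fetch_cost_us(9) + fetch_cost_us(1))   # 10256.0 us: underfetch, go back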
In the third talk on RPC in the session, Tommy Joseph (Olivetti), for
reasons similar to those of Partridge and Smith, argued that we have
to get rid of RPC and give programmers alternative programming
paradigms. He sketched out ideas for asynchronous paradigms using