With the wide-area interconnection of local networks by the GN, gateways are expected to become a significant performance bottleneck unless significant advances are made in gateway performance.  In addition, many network management concerns suggest putting more functionality (such as access control) in the gateways, further increasing their load and the need for greater capacity.  This would then raise the issue of the trade-off between general-purpose hardware and special-purpose hardware.

On the general-purpose side, it may be feasible to use a general-purpose multiprocessor based on high-end microprocessors (perhaps as exotic as the GaAs MIPS) in conjunction with a high-speed block transfer bus, as proposed as part of the FutureBus standard (which is extendible to higher speeds than currently commercially planned), and intelligent high-speed network adaptors.  This would also allow the direct use of hardware, operating systems, and software tools developed as part of other DARPA programs, such as Strategic Computing.  It also appears to make the gateway software more portable to commercial machines as they become available in this performance range.

The specialized hardware approach is based on the assumption that general-purpose hardware, particularly the interconnection bus, cannot be fast enough to support the level of performance required.  The expected emphasis is on various interconnection network techniques.  These approaches appear to entail greater expense, less commercial availability, and more specialized software.  They need to be critically evaluated with respect to the general-purpose gateway hardware approach, especially if the latter uses multiple buses for fault tolerance as well as for capacity extension (in the absence of failure).

The same general-purpose vs. special-purpose contention is an issue with operating system software.  Conventionally, gateways run specialized run-time executives that are designed specifically for the gateway and gateway functions.  However, the growing sophistication of the gateway makes this approach less feasible.  It appears important to investigate the feasibility of using a standard operating system foundation on the gateways that is known to provide the required security and reliability properties (as well as real-time performance properties).

3.2.5.  VLSI and Optronics Implementations

It appears fairly clear that gigabit communication will use fiber optics for at least the near future.  Without major advances in optronics that would effectively allow optical computers, communication must cross the optical-electronic boundary two or more times.  There are significant cost, performance, reliability, and security benefits to minimizing the number of such crossings.  (As an example of a security benefit, optics is not prone to electronic surveillance or jamming while electronics clearly is, so replacing an optic-electronic-optic node with a pure optic node eliminates that vulnerability point.)

The benefits of improved technology in optronics are so great that its application here is purely another motivation for an already active research area (one that deserves strong continued support).  Therefore, we focus here on the issue of matching current (and near-term expected) optronics capabilities with network requirements.

The first and perhaps greatest area of opportunity is to achieve totally (or largely) photonic switches in the network switching nodes.  That is, most packets would be switched without crossing the optics-electronics boundary at all.  For this to be feasible, the switch must use very simple switching logic, require very little storage, and operate on packets of a significant size.  The source-routed packet switches of Blazenet, with loopback on blockage, illustrate the type of techniques that appear to be required to achieve this goal.

Research is required to investigate the feasibility of optronic implementation of switches.  It appears highly likely that networks will at some point in the future be totally photonically switched, with an impact on networking comparable to the effect of integrated circuits on processors and memories.

A next level of focus is to achieve optical switching in the common case in gateways.  One model is a multiprocessor with an optical interconnect.  Packets associated with established paths through the gateway are optically switched and processed through the interconnect.  Other packets are routed to the multiprocessor, crossing into the electronics domain.  Research is required to marry the networking requirements and technology with optronics technology, pushing the state of the art in both areas in the process.

Given the long-term presence of the optic-electronic boundary, improvements in technology in this area are also important.  However, it appears that there is already enormous commercial research activity in this area, particularly within the telephone companies.  This is another area in which collaborative investigation appears far better than a new independent research effort.

VLSI technology is an established technology with active research support.  The GN effort does not appear to require major new initiatives in the VLSI area, yet one should be open to significant novel opportunities not identified here.

3.2.6.  High-Speed Transfer Protocols

To achieve the desired speeds, it will be necessary to rethink the form of protocols.

   1.  The simple idea of a stateless gateway must be replaced by a more complex model in which the gateway understands the desired function of the end point and applies suitable optimizations to the flow.

   2.  If multiplexing is done in the time domain, the elements of multiplexing are probably so small that no significant processing can be performed on each individually.  They must be processed as an aggregate.  This implies that the unit of multiplexing is not the same as the unit of processing.

   3.  The interfaces between the structural layers of the communication system must change from a simple command/response style to a richer system which includes indications and controls.

   4.  An approach must be developed that couples the memory management in the host and the structure of the transmitted data, to allow efficient transfers into host memory (one possible form of such a coupling is sketched after this list).
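As an illustration of the kind of coupling item 4 asks for (this sketch is not part of the RFC; the header fields, page size, and function name are hypothetical), a sender might restrict every packet's payload to at most one host page and tag it with a page-aligned offset into the destination buffer, so the receiving interface can deposit data directly into application memory without reassembly buffers or a second copy:

    /*
     * Illustrative receive-path fragment, assuming page-sized,
     * page-aligned payloads negotiated by the transport protocol.
     */
    #include <stdint.h>
    #include <string.h>

    #define PAGE_SIZE 4096u          /* assumed host page size */

    struct blk_hdr {                 /* hypothetical per-packet header */
        uint32_t xfer_id;            /* identifies the block transfer */
        uint32_t offset;             /* byte offset, a multiple of PAGE_SIZE */
        uint32_t length;             /* payload bytes, at most PAGE_SIZE */
    };

    /* Place one packet's payload straight into the destination buffer.
     * Because offsets are page aligned and payloads never straddle a
     * page, packets may arrive in any order and still be written
     * without intermediate buffering. */
    int deposit(uint8_t *dst_buf, uint32_t dst_len,
                const struct blk_hdr *h, const uint8_t *payload)
    {
        if (h->offset % PAGE_SIZE != 0 || h->length > PAGE_SIZE)
            return -1;               /* violates the agreed layout */
        if (h->offset > dst_len || h->length > dst_len - h->offset)
            return -1;               /* outside the destination buffer */
        memcpy(dst_buf + h->offset, payload, h->length);
        return 0;
    }

The point of such an arrangement is that the unit of transmission is made to coincide with the unit of host memory management, so the host (or an intelligent interface) never has to reorder or recopy data after it arrives.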
The result of rethinking these problems will be a new style of communications and protocols, in which there is a much higher degree of shared responsibility among the components (hosts, switches, gateways).  This may have little resemblance to previous work in either the DARPA or commercial communities.

3.3.  High-Speed Host Interfaces

As networks get faster, the most significant bottleneck will turn out to be the packet processing overhead in the host.  While this does not restrict the aggregate rates we can achieve over trunks, it prevents delivery of high data rate flows to the host-based applications, which will in turn prevent the development of new applications needing high bandwidth.  The host bottleneck is thus a serious impediment to networked use of supercomputers.

To build a GN we need to create new ways for hosts and their high-bandwidth peripherals to connect to networks.  We believe that pursuing research into the most effective ways to isolate host and LAN development paths from the GN is the most productive way to proceed.  By decoupling the development paths, neither is restricted by the momentary performance or capability bottlenecks of the other.

The best context in which to view this separation is the notion of a network front end (NFE).  The NFE can take electronic input data at many data rates and transform it into gigabit light data appropriately packetized to traverse the GN.  The NFE can accept inputs from many types of gateways, hosts, host peripherals, and LANs and provide arbitration and path set-up facilities as needed.  Most importantly, the NFE can perform protocol arbitration to retain upward compatibility with the existing Internet protocols while enabling those sophisticated network input sources to execute GN-specific high-throughput protocols.  Of course, this introduces the need for research into high-speed NFEs to avoid the NFE itself becoming a bottleneck.

3.3.1.  VLSI and Optronics Implementations

In a host interface, unless the host is optical (an unlikely prospect in the near term), the opportunities for optronic support are limited.  In fact, with a serial-to-parallel conversion on reception stepping the clock rate down by a factor of 32 (assuming a 32-bit data path on the host interface), optronic speeds are not required in the immediate future.

One exception may be encryption.  Current VLSI implementations of standard encryption algorithms run in the 10 Mbit/s range.  Optronic implementation of these encryption techniques, and of encryption techniques specifically oriented to, or taking advantage of, optronic capabilities, appears to be an area of some potential (and enormous benefit if achieved).

The potential of targeted VLSI research in this area appears limited, for reasons similar to those discussed above for its application in high-speed switching.  The major benefits will arise from work that is well motivated by other research (such as high-performance parallelism) and by strong commercial interest.  Again, we need to be open to imaginative opportunities not foreseen here while keeping ourselves from being diverted into low-impact research without further insights being put forward.
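The factor-of-32 argument above is easy to quantify (the figures below are an illustrative calculation, not numbers from the RFC): a 1 Gbit/s serial stream presented to the host over a 32-bit data path needs a word clock of only about 31 MHz, well within the reach of conventional electronics.

    /* Illustrative arithmetic for the serial-to-parallel argument:
     * word clock = line rate / data-path width.  The 1 Gbit/s line
     * rate and the path widths are assumptions for the example. */
    #include <stdio.h>

    int main(void)
    {
        const double line_rate = 1e9;             /* bits per second */
        const int widths[] = { 8, 16, 32, 64 };   /* data-path widths in bits */

        for (int i = 0; i < 4; i++) {
            double word_clock = line_rate / widths[i];
            printf("%2d-bit path: %7.2f MHz word clock\n",
                   widths[i], word_clock / 1e6);
        }
        return 0;
    }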
3.3.2.  High-Performance Transport Protocols

Current transport protocols exhibit some severe problems for maximal performance, especially for using hardware support.  For example, TCP places the checksum in the packet header, forcing the packet to be formed and read fully before transmission begins.  ISO TP4 is even worse, locating the checksum in a variable portion of the header at an indeterminate offset, making hardware implementation extremely difficult.

The current Internet has thrived and grown due to the existence of TCP implementations for a wide variety of classes of host computers.  These various TCP implementations achieve robust interoperability by a "least common denominator" approach to features and options.  Some applications have arisen in the current Internet, and analogs can be envisioned for the GN environment, which need qualities of service not generally supported by the ubiquitous generic TCP, and therefore special-purpose transport protocols have been developed.  Examples include UDP (user datagram protocol), RDP (reliable datagram protocol), LDP (loader/debugger protocol), NETBLT (high-speed block transfer protocol), NVP (network voice protocol) and PVP (packet video protocol).  Efforts are also under way to develop a new generic transport protocol, VMTP (versatile message transaction protocol), which will remedy some of the deficiencies of TCP without the need to resort to special-purpose protocols for some applications.  Research is needed in this area to understand how transport-level protocols that provide adequate qualities of service and ease of implementation should be constructed for a GN.

A reasonably successful new transport protocol can be expected to last for ten years or more.  Therefore, a new protocol should not be over-optimized for current networks and must not ignore the functional deficiencies of current protocols.  These deficiencies are essential to remedy before it is feasible to deploy even current distributed systems technology for military and commercial applications.

Forward Error Correction (FEC) is a useful approach when the bandwidth/delay ratio of the physical medium is high, as can be expected in transcontinental photonic links.  A degenerate form of FEC is to simply transmit multiple copies of the data; this allows one to trade bandwidth for delay and reliability, without requiring much intelligence.  In fact, it is generally true that reliability, bandwidth, and delay are interrelated and an improvement in one generally comes at the expense of the others for a given technology.
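The degenerate form of FEC described above can be sketched in a few lines (an illustration, not part of the RFC; REPEAT, the record format, and the toy channel are assumptions): each record is sent several times with a sequence number and checksum, and the receiver delivers the first intact copy of each sequence number, spending bandwidth instead of waiting on acknowledgements and retransmission timers.

    /* Illustrative sketch of repetition as degenerate FEC. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define REPEAT   3                /* copies sent per record */
    #define MAX_SEQ  1024             /* sequence numbers tracked */
    #define DATA_LEN 64

    struct copy {
        uint32_t seq;                 /* record sequence number */
        uint32_t sum;                 /* simple additive checksum */
        uint8_t  data[DATA_LEN];
    };

    static uint32_t checksum(const uint8_t *p, size_t n)
    {
        uint32_t s = 0;
        while (n--)
            s += *p++;
        return s;
    }

    /* Sender: emit REPEAT identical copies of one record. */
    static void send_record(uint32_t seq, const uint8_t *data,
                            void (*emit)(const struct copy *))
    {
        struct copy c;
        c.seq = seq;
        c.sum = checksum(data, DATA_LEN);
        memcpy(c.data, data, DATA_LEN);
        for (int i = 0; i < REPEAT; i++)
            emit(&c);                 /* any copy may be lost or damaged */
    }

    /* Receiver: keep the first copy of each sequence number whose
     * checksum verifies; later duplicates are silently discarded. */
    static uint8_t seen[MAX_SEQ];

    static void receive_copy(const struct copy *c)
    {
        if (c->seq >= MAX_SEQ || seen[c->seq])
            return;
        if (checksum(c->data, DATA_LEN) != c->sum)
            return;                   /* damaged copy; wait for another */
        seen[c->seq] = 1;
        printf("delivered record %u\n", (unsigned)c->seq);
    }

    /* Toy lossy channel for the demonstration: drops every third copy. */
    static int sent;
    static void lossy_emit(const struct copy *c)
    {
        if (++sent % 3 == 0)
            return;                   /* simulated loss */
        receive_copy(c);
    }

    int main(void)
    {
        uint8_t payload[DATA_LEN] = "example";
        for (uint32_t seq = 0; seq < 4; seq++)
            send_record(seq, payload, lossy_emit);
        return 0;
    }

Assuming independent per-copy loss probability p, a record is lost only with probability p raised to the REPEAT power, which is precisely the bandwidth-for-reliability trade named in the text.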
