may see the network as a message-passing system, or as memory. At the same time, the network may use classic packets, wavelength division, or space division switching. A number of basic functions must be rethought to provide an architecture that is not dependent on the underlying switching model. For example, our transport protocols assume that data will be lost in units of a packet. If part of a packet is lost, we discard the whole thing. And if several packets are systematically lost in sequence, we may not recover effectively. There must be a host-level unit of error recovery that is independent of the network. This sort of abstraction must be applied to all the aspects of service specification: error recovery, flow control, addressing, and so on.

3.1.6. Network Operations, Monitoring, and Control

There is a hierarchy of progressively more effective and sophisticated techniques for network management that applies regardless of network bandwidth and application considerations:

   1. Reactive problem management
   2. Reactive resource management
   3. Proactive problem management
   4. Proactive resource management.

Today's network management strategies are primarily reactive rather than proactive: problem management is initiated in response to user complaints about service outages; resource allocation decisions are made when users complain about deterioration of quality of service. Today's network management systems are stuck at step 1 or perhaps step 2 of the hierarchy. Future network management systems will provide proactive problem management (problem diagnosis and restoral of service before users become aware that there was a problem) and proactive resource management (dynamic allocation of network bandwidth and switching resources to ensure that an acceptable level of service is continuously maintained).

The GN management system should be expected to provide proactive problem and resource management capabilities. It will have to do so while contending with three important changes in the managed network environment:

   1. More complicated devices under management
   2. More diverse types of devices
   3. More variety of application protocols.

Performance under these conditions will require that we seriously rethink how a network management system handles the expected high volumes of raw management-related data. It will become especially important for the system to provide thresholding, filtering, and alerting mechanisms that can save the human operator from drowning in data, while still permitting access to details when diagnostic or fault isolation modes are invoked. The presence of expert assistant capabilities for early fault detection, diagnosis, and problem resolution will be mandatory. These capabilities are highly desirable today, but they will be essential to contend with the complexity and diversity of devices and applications in the Gigabit Network. In addition to its role in dealing with complexity, automation provides the only hope of controlling and reducing the high costs of daily management and operation of a GN. Proactive resource management in GNs must be better understood and practiced, initially as an effort requiring human intervention and direction. Once this is achieved, it too must become automated to a high degree in the GN.
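As a concrete illustration of the thresholding, filtering, and alerting mechanisms discussed above, the fragment below is a minimal sketch in Python; the monitored variables and threshold values are purely illustrative assumptions, not part of this report. It suppresses raw management samples and passes on only those that cross a configured threshold, so that the operator sees exceptions rather than the full stream of management data.

    # Minimal sketch of a thresholding/alerting filter for raw
    # management data.  Variable names and limits are hypothetical.
    THRESHOLDS = {
        "link_utilization": 0.90,   # alert above 90% utilization
        "packet_loss_rate": 0.01,   # alert above 1% packet loss
    }

    def filter_events(samples):
        """Yield only the samples that exceed their configured limit.

        'samples' is an iterable of (variable, value) pairs taken from
        the raw management data; everything under threshold is dropped.
        """
        for variable, value in samples:
            limit = THRESHOLDS.get(variable)
            if limit is not None and value > limit:
                yield ("ALERT", variable, value, limit)

    raw = [("link_utilization", 0.42),
           ("packet_loss_rate", 0.03),
           ("link_utilization", 0.95)]

    for alert in filter_events(raw):
        print(alert)
    # ('ALERT', 'packet_loss_rate', 0.03, 0.01)
    # ('ALERT', 'link_utilization', 0.95, 0.9)

An expert-assistant layer of the kind called for above would sit behind such a filter, correlating alerts and proposing diagnoses rather than merely reporting threshold crossings.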
3.1.7. Naming and Addressing Strategies

Current networks, both voice (telephone) and data, use addressing structures which closely tie the address to the physical location on the network. That is, the address identifies a physical access point, rather than the higher-level entity (computer, process, human) attached to that access point. In future networks, this physical aspect of addressing must be removed.

Consider, for example, finding the desired party in the telephone network of today. For a person not at his listed number, finding the number of the correct telephone may require preliminary calls, in which advice is given to the person placing the call. This works well when a human is placing the call, since humans are well equipped to cope with arbitrary conversations. But if a computer is placing the call, the process of obtaining the correct address will have to be incorporated in the architecture as a core service of the network.

Since it is reasonable to expect mobile hosts, hosts that are connected to multiple networks, and replicated hosts, the issue of mapping to the physical address must be properly resolved. To permit the network to maintain the dynamic mapping to the current physical address, it is necessary that high-level entities have a name (or logical address) that identifies them independently of location. The name is maintained by the network and mapped to the current physical location as a core network service. For example, mobile hosts, hosts that are connected to multiple networks, and replicated hosts would have static names whose mapping to physical addresses (many-to-one, in some cases) would change with time.

Hosts are not the only entities whose physical location varies. Users' electronic mail addresses change. Within distributed systems, processes and files migrate from host to host. In a computing environment where robustness and survivability are important, entire applications may move about, or they may be redundant. The needed function must be considered in the context of the mobility and address resolution rates that would arise if all addresses in a global data network were of this sort. The distributed network directory discussed elsewhere in this report should be designed to provide the necessary flexibility and responsiveness.

The nature and administration of names must also be considered. Names that are arbitrary or unwieldy would be barely better than the addresses used now. The name space should be designed so that it can easily be partitioned among the agencies that will assign names. The structure of names should facilitate, rather than hinder, the mapping function. For example, it would be hard to optimize the mapping function if names were flat and unstructured.
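To make the notion of a location-independent name concrete, the following sketch, written in Python with hypothetical identifiers, shows a directory in which a static logical name is bound to a changing set of physical addresses, as would be needed for mobile, multi-homed, or replicated hosts. It illustrates the mapping function only and is not a specification of the distributed network directory discussed in this report.

    # Minimal sketch of a location-independent name service.  The
    # interface and the names used below are hypothetical.
    class NameDirectory:
        def __init__(self):
            # logical name -> set of current physical addresses
            self._bindings = {}

        def register(self, name, address):
            """Bind a physical address to a logical name, e.g., when a
            mobile or replicated host attaches to the network."""
            self._bindings.setdefault(name, set()).add(address)

        def unregister(self, name, address):
            """Drop a binding, e.g., when a host detaches or moves."""
            self._bindings.get(name, set()).discard(address)

        def resolve(self, name):
            """Return the current physical addresses for a name; a
            replicated or multi-homed host may have several."""
            return set(self._bindings.get(name, set()))

    # Example: a replicated host keeps one static name while its
    # physical attachment points change over time.
    directory = NameDirectory()
    directory.register("fileserver.example", "net1.node17")
    directory.register("fileserver.example", "net3.node02")
    directory.unregister("fileserver.example", "net1.node17")
    print(directory.resolve("fileserver.example"))   # {'net3.node02'}

Resolution returns the full set of current bindings; which address a caller should then use (nearest replica, least loaded, and so on) is a separate policy question for the directory design.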
3.2. High-Speed Switching

The term "high-speed switching" refers to changing the switching configuration at a high rate, rather than to switching high-speed links; the latter is not difficult if the switching itself is slow (consider, for example, manual switching of fiber connections). The switching regime chosen for the network determines various aspects of its performance, its charging policies, and even its effective capabilities. As an example of the latter, a circuit-switched network can hardly be expected to provide strong multicast support.

A major area of debate lies in the choice between packet switching and circuit switching. This is a key research issue for the GN, particularly since combinations of the two approaches may also prove feasible.

3.2.1. Unit of Management vs. Multiplexing

With very high data rates, either the unit of management and switching must be larger or the speed of the processor elements for management and switching must be faster. For example, at a gigabit per second a 576-byte packet takes roughly 5 microseconds to be received (4608 bits at 10^9 bit/s is about 4.6 microseconds), so a packet switch must act extremely fast to avoid becoming the dominant component of per-packet delay. Moreover, the storage time for the packet in a conventional store-and-forward implementation also becomes a significant component of the delay. Thus, for packet switching to remain attractive in this environment, it appears necessary to increase the size of packets (or switch on packet groups), to use so-called virtual cut-through, and to use high-speed routing techniques such as high-speed route caches and source routing.

Alternatively, for circuit switching to be attractive, it must provide very fast circuit setup and tear-down to support the bursty nature of most computer communication. This problem is rendered difficult (and perhaps impossible for certain traffic loads) because the delay across the country is so large relative to the data rate. That is, even with techniques such as so-called fast select, bandwidth is reserved by the circuit along the path for almost twice the propagation time before being used. With gigabit circuit switching, because it is not feasible to physically switch channels, the low-level switching is likely to be doing FTDM on micro-packets, as is currently done in telephony. Performing FTDM at gigabit data rates is a challenging research problem if the skew introduced by wide-area communication is to be handled with reasonable overhead for the spacing of these micro-packets. Given the lead and resources of the telephone companies, this area of investigation should, if pursued, be pursued cooperatively.

3.2.2. Bandwidth Reservation Algorithms

Some applications, such as real-time video, require sustained high data rate streams over a significant period of time, such as minutes if not hours. Intuitively, it is appealing for such applications to pre-allocate the bandwidth they require, both to minimize the switching load on the network and to guarantee that the required bandwidth is available. Research is required to determine the merits of bandwidth reservation, particularly in conjunction with the different switching technologies. There is some concern that bandwidth reservation may require excessive intelligence in the network, reducing the performance and reliability of the network. In addition, bandwidth reservation opens a new avenue for denial of service by an intruder or malicious user. Thus, investigations in this area need to proceed in concert with work on switching technologies and capabilities, and on security and reliability requirements.
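To make the pre-allocation idea concrete, the sketch below, written in Python and deliberately simplified, shows the most basic admission check for bandwidth reservation on a single link: a new reservation is accepted only if the sum of reserved rates stays within the link capacity. The flow identifiers and rates are hypothetical; a practical scheme would also have to reserve along an entire path, recover abandoned reservations, and address the security and denial-of-service concerns noted above.

    # Minimal sketch of per-link bandwidth reservation with a simple
    # admission check.  Capacities, flows, and rates are hypothetical.
    class Link:
        def __init__(self, capacity_bps):
            self.capacity_bps = capacity_bps
            self.reservations = {}   # flow id -> reserved rate (bit/s)

        def reserve(self, flow_id, rate_bps):
            """Accept a reservation only if it does not oversubscribe
            the link; otherwise refuse the flow."""
            reserved = sum(self.reservations.values())
            if reserved + rate_bps > self.capacity_bps:
                return False
            self.reservations[flow_id] = rate_bps
            return True

        def release(self, flow_id):
            """Tear down a reservation when the stream ends."""
            self.reservations.pop(flow_id, None)

    # Example: two 400 Mb/s video streams fit on a 1 Gb/s link;
    # a third is refused by admission control.
    link = Link(capacity_bps=1_000_000_000)
    print(link.reserve("video-1", 400_000_000))   # True
    print(link.reserve("video-2", 400_000_000))   # True
    print(link.reserve("video-3", 400_000_000))   # False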
3.2.3. Multicast Capabilities

It is now widely accepted that multicast should be provided as a user-level service, as described in RFC 1054 for IP, for example. However, further research is required to determine the best way to support this facility at the network layer and lower.

It is fairly clear that the GN will be built from point-to-point fiber links that do not provide multicast/broadcast for free. At the most conservative extreme, one could provide no support and require that each host or gateway simulate multicast by sending multiple, individually addressed packets. However, there are significant advantages, beyond the obvious performance gains, to providing multicast support at a very low level. For example, multicast routing in a flooding form provides the most fault-tolerant, lowest-delay form of delivery which, if reserved for very high priority messages, provides a good emergency facility for high-stress network applications. Multicast may also be useful as an approach to defeat traffic analysis.

Another key issue arises with the distinction between so-called open group multicast and closed group multicast. In the former, any host can multicast to the group, whereas in the latter only members of the group can multicast to it. The latter is easier to support and is adequate for conferencing, for example. However, for applications structured along client-server lines, in which file/database servers, computation servers, and the like are addressed as groups, open multicast is required. Research is needed to address both forms of multicast. In addition, security issues arise in controlling the membership of multicast groups. This issue should be addressed in concert with work on secure forms of routing in general.

3.2.4. Gateway Technologies