Network Working Group                                          A. Mankin
Request for Comments: 1254                                         MITRE
                                                         K. Ramakrishnan
                                           Digital Equipment Corporation
                                                                 Editors
                                                             August 1991

                   Gateway Congestion Control Survey

Status of this Memo

This memo provides information for the Internet community. It is a survey of some of the major directions and issues. It does not specify an Internet standard. Distribution of this memo is unlimited.

Abstract

The growth of network-intensive Internet applications has made gateway congestion control a high priority. The IETF Performance and Congestion Control Working Group surveyed and reviewed gateway congestion control and avoidance approaches. The purpose of this paper is to present our review of the congestion control approaches, as a way of encouraging new discussion and experimentation. Included in the survey are Source Quench, Random Drop, Congestion Indication (DEC Bit), and Fair Queueing. The task remains for Internet implementors to determine and agree on the most effective mechanisms for controlling gateway congestion.

1. Introduction

Internet users regularly encounter congestion, often in mild forms. However, severe congestion episodes have also been reported, and gateway congestion remains an obstacle for Internet applications such as scientific supercomputing data transfer. The need for Internet congestion control originally became apparent during several periods of 1986 and 1987, when the Internet experienced the "congestion collapse" condition predicted by Nagle [Nag84]. A large number of widely dispersed Internet sites experienced simultaneous slowdown or cessation of networking services for prolonged periods. BBN, the firm responsible for maintaining the then backbone of the Internet, the ARPANET, responded to the collapse by adding link capacity [Gar87].

Much of the Internet now uses the National Science Foundation Network (NSFNET) as a transmission backbone. Extensive monitoring and capacity planning are being done for the NSFNET backbone; still, as the demand for this capacity grows, and as resource-intensive applications such as wide-area file system management [Sp89] increasingly use the backbone, effective congestion control policies will be a critical requirement.

Only a few mechanisms currently exist in Internet hosts and gateways to avoid or control congestion. The mechanisms for handling congestion set forth in the specifications for the DoD Internet protocols are limited to:

   Window flow control in TCP [Pos81b], intended primarily for controlling the demand on the receiver's capacity, both in terms of processing and buffers.

   Source quench in ICMP, the message sent by IP to request that a sender throttle back [Pos81a].

One approach to enhancing Internet congestion control has been to overlay the simple existing mechanisms in TCP and ICMP with more powerful ones. Since 1987, the TCP congestion control policy Slow-start, a collection of several algorithms developed by Van Jacobson and Mike Karels [Jac88], has been widely adopted. Successful Internet experiences with Slow-start led the Host Requirements RFC [HREQ89] to classify the algorithms as mandatory for TCP. Slow-start modifies the user's demand when congestion reaches such a point that packets are dropped at the gateway. By the time such overflows occur, the gateway is congested. Jacobson writes that the Slow-start policy is intended to function best with a complementary gateway policy [Jac88].
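The flavor of the Slow-start window adjustment can be conveyed with a minimal sketch. The fragment below is illustrative only: it counts the congestion window in whole segments (deployed implementations count bytes), uses the conventional variable names cwnd and ssthresh, and omits the round-trip timing estimation that accompanies the window policy in [Jac88].

   # Minimal sketch of the Slow-start window policy [Jac88].
   # Windows are counted in segments here for clarity; deployed
   # TCPs count bytes and pair this with retransmit-timer changes.
   class SlowStartState:
       def __init__(self):
           self.cwnd = 1.0       # congestion window, in segments
           self.ssthresh = 64.0  # threshold between the two growth modes

   def on_ack_of_new_data(s):
       if s.cwnd < s.ssthresh:
           s.cwnd += 1.0            # exponential phase: window doubles per RTT
       else:
           s.cwnd += 1.0 / s.cwnd   # avoidance phase: about one segment per RTT

   def on_retransmit_timeout(s):
       s.ssthresh = max(s.cwnd / 2.0, 2.0)  # remember half the window at loss
       s.cwnd = 1.0                         # drop back to one segment

The effect is the demand curve the text describes: the sender's demand grows until a gateway overflow drops a packet, and only then does the sender cut back.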
1.1 Definitions

The characteristics of the Internet that we are interested in include that it is, in general, an arbitrary mesh-connected network. The internetwork protocol is connectionless. The number of users that place demands on the network is not limited by any explicit mechanism; no reservation of resources occurs, and transport-layer set-ups are not disallowed due to lack of resources.

A path from a source to a destination host may have multiple hops, through several gateways and links. Paths through the Internet may be heterogeneous (though homogeneous paths also exist and experience congestion). That is, links may be of different speeds. Also, the gateways and hosts may be of different speeds or may be providing only a part of their processing power to communication-related activity. The buffers for storing information flowing through Internet gateways are finite. The nature of the Internet Protocol is to drop packets when these buffers overflow.

Gateway congestion arises when the demand for one or more of the resources of the gateway exceeds the capacity of that resource. The resources include transmission links, processing, and space used for buffering. Operationally, uncongested gateways operate with little queueing on average, where the queue is the waiting line for a particular resource of the gateway. One commonly used quantitative definition [Kle79] is that a resource is congested when its operating point is beyond the point at which resource power is maximum, where resource power is defined as the ratio of throughput to delay (see Section 2.2). At this operating point, the average queue size is close to one, including the packet in service. Note that this is a long-term average queue size. Several definitions exist for the timescale of averaging for congestion detection and control, such as the dominant round-trip time and the queue regeneration cycle (see Section 2.1).
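The power criterion can be made concrete with the simplest queueing model. For a single M/M/1 queue with service rate mu and utilization rho (a rough stand-in for one gateway resource, used here purely for illustration), throughput is rho * mu and mean delay is 1 / (mu * (1 - rho)), so power peaks at rho = 0.5, exactly where the mean number in system, including the packet in service, is one:

   # Power = throughput / delay for an M/M/1 queue, after [Kle79].
   # A toy model of one gateway resource, not a gateway simulator.
   def power(rho, mu=1.0):
       throughput = rho * mu             # offered load actually carried
       delay = 1.0 / (mu * (1.0 - rho))  # mean time in system
       return throughput / delay         # simplifies to rho * (1 - rho) * mu**2

   def mean_queue(rho):
       return rho / (1.0 - rho)          # mean number in system, incl. in service

   best = max((r / 1000.0 for r in range(1, 1000)), key=power)
   print(best, mean_queue(best))         # prints roughly: 0.5 1.0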
The term "cooperating transport entities" will be defined as a set of TCP connections (for example) which follow an effective method of adjusting their demand on the Internet in response to congestion. The most restrictive interpretation of this term is that the transport entities follow identical algorithms for congestion control and avoidance. However, there may be some variation in these algorithms. The extent to which heterogeneous end-system congestion control and avoidance may be accommodated by gateway policies shouldPerformance and Congestion Control Working Group [Page 3]RFC 1254 Gateway Congestion Control Survey August 1991 be a subject of future research. The role played in Internet performance of non-cooperating transport entities is discussed in Section 5.1.2 Goals and Scope of This Paper The task remains for Internet implementors to determine effective mechanisms for controlling gateway congestion. There has been minimal common practice on which to base recommendations for Internet gateway congestion control. In this survey, we describe the characteristics of one experimental gateway congestion management policy, Random Drop, and several that are better-known: Source Quench, Congestion Indication, Selective Feedback Congestion Indication, and Fair Queueing, both Bit-Round and Stochastic. A motivation for documenting Random Drop is that it has as primary goals low overhead and suitability for scaling up for Internets with higher speed links. Both of these are important goals for future gateway implementations that will have fast links, fast processors, and will have to serve large numbers of interconnected hosts. The structure of this paper is as follows. First, we discuss performance goals, including timescale and fairness considerations. Second, we discuss the gateway congestion control policies. Random Drop is sketched out, with a recommendation for using it for congestion recovery and a separate section on its use as congestion avoidance. Third, since gateway congestion control in itself does not change the end-systems' demand, we briefly present the effective responses to these policies by two end-system congestion control schemes, Slow-start and End-System Congestion Indication. Among our conclusions, we address the issues of transport entities that do not cooperate with gateway congestion control. As an appendix, because of the potential interactions with gateway congestion policies, we report on a scheme to help in controlling the performance of Internet gateways to connection-oriented subnets (in particular, X.25). Resources in the current Internet are not charged to users of them. Congestion avoidance techniques cannot be expected to help when users increase beyond the capacity of the underlying facilities. There are two possible solutions for this, increase the facilities and available bandwidth, or forcibly reduce the demand. When congestion is persistent despite implemented congestion control mechanisms, administrative responses are needed. These are naturally not within the scope of this paper. Also outside the scope of this paper are routing techniques that may be used to relocate demand away from congested individual resources (e.g., path-splitting and load- balancing).Performance and Congestion Control Working Group [Page 4]RFC 1254 Gateway Congestion Control Survey August 19912. 
The structure of this paper is as follows. First, we discuss performance goals, including timescale and fairness considerations. Second, we discuss the gateway congestion control policies. Random Drop is sketched out, with a recommendation for using it for congestion recovery and a separate section on its use as congestion avoidance. Third, since gateway congestion control in itself does not change the end-systems' demand, we briefly present the effective responses to these policies by two end-system congestion control schemes, Slow-start and End-System Congestion Indication. Among our conclusions, we address the issues of transport entities that do not cooperate with gateway congestion control. As an appendix, because of the potential interactions with gateway congestion policies, we report on a scheme to help in controlling the performance of Internet gateways to connection-oriented subnets (in particular, X.25).

Resources in the current Internet are not charged to their users. Congestion avoidance techniques cannot be expected to help when demand grows beyond the capacity of the underlying facilities. There are two possible responses: increase the facilities and available bandwidth, or forcibly reduce the demand. When congestion is persistent despite implemented congestion control mechanisms, administrative responses are needed; these are naturally not within the scope of this paper. Also outside the scope of this paper are routing techniques that may be used to relocate demand away from congested individual resources (e.g., path-splitting and load-balancing).

2. Performance Goals

To be able to discuss the design and use of various mechanisms for improving internetwork performance, we need clear performance goals for the operation of gateways and sets of end-systems. Internet experience shows that congestion control should be based on adaptive principles; this requires efficient computation of metrics by algorithms for congestion control. The first issue is that of the interval over which these metrics are estimated and/or measured.

2.1 Interval for Measurement/Estimation of Performance Metrics

Network performance metrics may be distorted if they are computed over intervals that are too short or too long relative to the dynamic characteristics of the network. For instance, within a small interval, two FTP users with equal paths may appear to have sharply different demands, as an effect of brief, transient fluctuations in their respective processing. An overly long averaging interval results in distortions because of the changing number of users sharing the resource during that time. It is similarly important for congestion control mechanisms exerted at end systems to find an appropriate interval for control.

The first approach to the monitoring, or averaging, interval for congestion control is one based on round-trip times. The rationale for it is as follows: control mechanisms must adapt to changes in Internet congestion as quickly as possible. Even on an uncongested path, changed conditions will not be detected by the sender faster than a round-trip time. The effect of a sending end-system's control will also not be seen in less than a round-trip time, in the entire path as well as at the end systems. For the control mechanism to be adaptive, new information on the path is needed before making a