Network Working Group                                           S. Floyd
Request for Comments: 2914                                         ACIRI
BCP: 41                                                   September 2000
Category: Best Current Practice

                     Congestion Control Principles

Status of this Memo

   This document specifies an Internet Best Current Practices for the
   Internet Community, and requests discussion and suggestions for
   improvements.  Distribution of this memo is unlimited.

Copyright Notice

   Copyright (C) The Internet Society (2000).  All Rights Reserved.

Abstract

   The goal of this document is to explain the need for congestion
   control in the Internet, and to discuss what constitutes correct
   congestion control.  One specific goal is to illustrate the dangers
   of neglecting to apply proper congestion control.  A second goal is
   to discuss the role of the IETF in standardizing new congestion
   control protocols.

1.  Introduction

   This document draws heavily from earlier RFCs, in some cases
   reproducing entire sections of the text of earlier documents
   [RFC2309, RFC2357].  We have also borrowed heavily from earlier
   publications addressing the need for end-to-end congestion control
   [FF99].

2.  Current standards on congestion control

   IETF standards concerning end-to-end congestion control focus
   either on specific protocols (e.g., TCP [RFC2581], reliable
   multicast protocols [RFC2357]) or on the syntax and semantics of
   communications between the end nodes and routers about congestion
   information (e.g., Explicit Congestion Notification [RFC2481]) or
   desired quality-of-service (diff-serv).  The role of end-to-end
   congestion control is also discussed in an Informational RFC on
   "Recommendations on Queue Management and Congestion Avoidance in
   the Internet" [RFC2309].  RFC 2309 recommends the deployment of
   active queue management mechanisms in routers, and the continuation
   of design efforts towards mechanisms in routers to deal with flows
   that are unresponsive to congestion notification.  We freely borrow
   from RFC 2309 some of their general discussion of end-to-end
   congestion control.

   In contrast to the RFCs discussed above, this document is a more
   general discussion of the principles of congestion control.  One of
   the keys to the success of the Internet has been the congestion
   avoidance mechanisms of TCP.  While TCP is still the dominant
   transport protocol in the Internet, it is not ubiquitous, and there
   are an increasing number of applications that, for one reason or
   another, choose not to use TCP.  Such traffic includes not only
   multicast traffic, but unicast traffic such as streaming multimedia
   that does not require reliability; and traffic such as DNS or
   routing messages that consist of short transfers deemed critical to
   the operation of the network.  Much of this traffic does not use
   any form of either bandwidth reservations or end-to-end congestion
   control.  The continued use of end-to-end congestion control by
   best-effort traffic is critical for maintaining the stability of
   the Internet.  This document also discusses the general role of the
   IETF in the standardization of new congestion control protocols.

   The discussion of congestion control principles for differentiated
   services or integrated services is not addressed in this document.
   Some categories of integrated or differentiated services include a
   guarantee by the network of end-to-end bandwidth, and as such do
   not require end-to-end congestion control mechanisms.
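   As a concrete illustration of the endpoint/router congestion
   signaling standardized in [RFC2481] and cited above: RFC 2481
   places the ECN field in bits 6 and 7 of the (former) IPv4 TOS
   octet, an ECT (ECN-Capable Transport) bit set by the sender and a
   CE (Congestion Experienced) bit set by routers in place of a drop.
   The minimal Python sketch below decodes that field; the bit layout
   is per RFC 2481, but the helper itself is illustrative only and not
   code from any RFC or library.

      ECT = 0x02   # ECN-Capable Transport: bit 6 of the TOS octet
      CE  = 0x01   # Congestion Experienced: bit 7 of the TOS octet

      def decode_ecn(tos: int) -> str:
          # Interpret the two ECN bits of an IPv4 TOS octet (RFC 2481).
          ect, ce = bool(tos & ECT), bool(tos & CE)
          if ect and ce:
              return "congestion experienced"   # router marked, no drop
          if ect:
              return "ECN-capable, no congestion seen"
          if ce:
              return "CE without ECT: not used by RFC 2481"
          return "not ECN-capable"

      # Example: a marked packet from an ECN-capable sender.
      print(decode_ecn(0x03))   # -> congestion experienced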
3.  The development of end-to-end congestion control.

3.1.  Preventing congestion collapse.

   The Internet protocol architecture is based on a connectionless
   end-to-end packet service using the IP protocol.  The advantages of
   its connectionless design, flexibility and robustness, have been
   amply demonstrated.  However, these advantages are not without
   cost: careful design is required to provide good service under
   heavy load.  In fact, lack of attention to the dynamics of packet
   forwarding can result in severe service degradation or "Internet
   meltdown".  This phenomenon was first observed during the early
   growth phase of the Internet of the mid 1980s [RFC896], and is
   technically called "congestion collapse".

   The original specification of TCP [RFC793] included window-based
   flow control as a means for the receiver to govern the amount of
   data sent by the sender.  This flow control was used to prevent
   overflow of the receiver's data buffer space available for that
   connection.  [RFC793] reported that segments could be lost due
   either to errors or to network congestion, but did not include
   dynamic adjustment of the flow-control window in response to
   congestion.

   The original fix for Internet meltdown was provided by Van
   Jacobson.  Beginning in 1986, Jacobson developed the congestion
   avoidance mechanisms that are now required in TCP implementations
   [Jacobson88, RFC2581].  These mechanisms operate in the hosts to
   cause TCP connections to "back off" during congestion.  We say that
   TCP flows are "responsive" to congestion signals (i.e., dropped
   packets) from the network.  It is these TCP congestion avoidance
   algorithms that prevent the congestion collapse of today's
   Internet.

   However, that is not the end of the story.  Considerable research
   has been done on Internet dynamics since 1988, and the Internet has
   grown.  It has become clear that the TCP congestion avoidance
   mechanisms [RFC2581], while necessary and powerful, are not
   sufficient to provide good service in all circumstances.  In
   addition to the development of new congestion control mechanisms
   [RFC2357], router-based mechanisms are in development that
   complement the endpoint congestion avoidance mechanisms.

   A major issue that still needs to be addressed is the potential for
   future congestion collapse of the Internet due to flows that do not
   use responsible end-to-end congestion control.  RFC 896 [RFC896]
   suggested in 1984 that gateways should detect and `squelch'
   misbehaving hosts: "Failure to respond to an ICMP Source Quench
   message, though, should be regarded as grounds for action by a
   gateway to disconnect a host.  Detecting such failure is
   non-trivial but is a worthwhile area for further research."
   Current papers still propose that routers detect and penalize flows
   that are not employing acceptable end-to-end congestion control
   [FF99].
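   The "back off" behavior described in this section is, at its core,
   an additive-increase/multiplicative-decrease (AIMD) adjustment of
   the sender's congestion window.  The Python sketch below is an
   idealized rendering of that loop only; it omits slow-start, the
   retransmission machinery, and the other details specified in
   [RFC2581].

      class AIMDWindow:
          # Idealized AIMD congestion window, in bytes.
          def __init__(self, mss: float = 1460.0):
              self.mss = mss
              self.cwnd = mss

          def on_ack(self):
              # Additive increase: about one MSS per round-trip time,
              # approximated as MSS*MSS/cwnd per acknowledged segment.
              self.cwnd += self.mss * self.mss / self.cwnd

          def on_loss(self):
              # Multiplicative decrease: halve the window on a
              # congestion signal, never dropping below one MSS.
              self.cwnd = max(self.mss, self.cwnd / 2)

      w = AIMDWindow()
      for _ in range(100):
          w.on_ack()              # window opens while all is well
      w.on_loss()                 # one drop: the flow "backs off"
      print(round(w.cwnd))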
3.2.  Fairness

   In addition to a concern about congestion collapse, there is a
   concern about `fairness' for best-effort traffic.  Because TCP
   "backs off" during congestion, a large number of TCP connections
   can share a single, congested link in such a way that bandwidth is
   shared reasonably equitably among similarly situated flows.  The
   equitable sharing of bandwidth among flows depends on the fact that
   all flows are running compatible congestion control algorithms.
   For TCP, this means congestion control algorithms conformant with
   the current TCP specification [RFC793, RFC1122, RFC2581].

   The issue of fairness among competing flows has become increasingly
   important for several reasons.  First, using window scaling
   [RFC1323], individual TCPs can use high bandwidth even over
   high-propagation-delay paths.  Second, with the growth of the web,
   Internet users increasingly want high-bandwidth and low-delay
   communications, rather than the leisurely transfer of a long file
   in the background.  The growth of best-effort traffic that does not
   use TCP underscores this concern about fairness between competing
   best-effort traffic in times of congestion.

   The popularity of the Internet has caused a proliferation in the
   number of TCP implementations.  Some of these may fail to implement
   the TCP congestion avoidance mechanisms correctly because of poor
   implementation [RFC2525].  Others may deliberately be implemented
   with congestion avoidance algorithms that are more aggressive in
   their use of bandwidth than other TCP implementations; this would
   allow a vendor to claim to have a "faster TCP".  The logical
   consequence of such implementations would be a spiral of
   increasingly aggressive TCP implementations, or increasingly
   aggressive transport protocols, leading back to the point where
   there is effectively no congestion avoidance and the Internet is
   chronically congested.

   There is a well-known way to achieve more aggressive performance
   without even changing the transport protocol, by changing the level
   of granularity: open multiple connections to the same place, as has
   been done in the past by some Web browsers.  Thus, instead of a
   spiral of increasingly aggressive transport protocols, we would
   instead have a spiral of increasingly aggressive web browsers, or
   increasingly aggressive applications.

   This raises the issue of the appropriate granularity of a "flow",
   where we define a `flow' as the level of granularity appropriate
   for the application of both fairness and congestion control.  From
   RFC 2309: "There are a few `natural' answers: 1) a TCP or UDP
   connection (source address/port, destination address/port); 2) a
   source/destination host pair; 3) a given source host or a given
   destination host.  We would guess that the source/destination host
   pair gives the most appropriate granularity in many circumstances.
   The granularity of flows for congestion management is, at least in
   part, a policy question that needs to be addressed in the wider
   IETF community."

   Again borrowing from RFC 2309, we use the term "TCP-compatible" for
   a flow that behaves under congestion like a flow produced by a
   conformant TCP.  A TCP-compatible flow is responsive to congestion
   notification, and in steady-state uses no more bandwidth than a
   conformant TCP running under comparable conditions (drop rate, RTT,
   MTU, etc.).
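   The "no more bandwidth than a conformant TCP" test is often made
   concrete using the simplified TCP-friendly throughput bound of
   roughly 1.22 * MTU / (RTT * sqrt(p)) for a steady-state packet drop
   rate p [FF99].  A Python sketch of that check follows; the formula
   is from the literature, while the helper names are illustrative.

      import math

      def tcp_friendly_rate(mtu: float, rtt: float, p: float) -> float:
          # Upper bound (bytes/sec) on the steady-state throughput of
          # a conformant TCP: 1.22 * MTU / (RTT * sqrt(p)).
          return 1.22 * mtu / (rtt * math.sqrt(p))

      def is_tcp_compatible(rate: float, mtu: float, rtt: float,
                            p: float) -> bool:
          # A flow sending faster than the bound, under the same drop
          # rate, RTT, and MTU, is taking more than a TCP share.
          return rate <= tcp_friendly_rate(mtu, rtt, p)

      # 1500-byte packets, 100 ms RTT, 1% drop rate:
      # 1.22 * 1500 / (0.1 * 0.1) = 183000 bytes/sec.
      print(tcp_friendly_rate(1500.0, 0.1, 0.01))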
   It is convenient to divide flows into three classes: (1)
   TCP-compatible flows, (2) unresponsive flows, i.e., flows that do
   not slow down when congestion occurs, and (3) flows that are
   responsive but are not TCP-compatible.  The last two classes
   contain more aggressive flows that pose significant threats to
   Internet performance, as we discuss below.

   In addition to steady-state fairness, the fairness of the initial
   slow-start is also a concern.  One concern is the transient effect
   on other flows of a flow with an overly-aggressive slow-start
   procedure.  Slow-start performance is particularly important for
   the many flows that are short-lived, and only have a small amount
   of data to transfer.
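   The transient effect of slow-start comes from how fast the window
   grows: roughly doubling every round-trip time until ssthresh is
   reached, after which growth is linear.  The per-RTT Python sketch
   below is idealized; the per-ACK rules are specified in [RFC2581].

      def window_after(rtts: int, initial_window: int = 1,
                       ssthresh: int = 64) -> int:
          # Window size in segments after a given number of RTTs.
          cwnd = initial_window
          for _ in range(rtts):
              if cwnd < ssthresh:
                  cwnd *= 2       # slow start: exponential growth
              else:
                  cwnd += 1       # congestion avoidance: linear growth
          return cwnd

      # A short flow ramps up quickly: 1 -> 2 -> 4 -> ... segments.
      print([window_after(n) for n in range(8)])
      # -> [1, 2, 4, 8, 16, 32, 64, 65]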
3.3.  Optimizing performance regarding throughput, delay, and loss.

   In addition to the prevention of congestion collapse and concerns
   about fairness, a third reason for a flow to use end-to-end
   congestion control can be to optimize its own performance regarding
   throughput, delay, and loss.  In some circumstances, for example in
   environments of high statistical multiplexing, the delay and loss
   rate experienced by a flow are largely independent of its own
   sending rate.  However, in environments with lower levels of
   statistical multiplexing or with per-flow scheduling, the delay and
   loss rate experienced by a flow are in part a function of the
   flow's own sending rate.  Thus, a flow can use end-to-end
   congestion control to limit the delay or loss experienced by its
   own packets.  We would note, however, that in an environment like
   the current best-effort Internet, concerns regarding congestion
   collapse and fairness with competing flows limit the range of
   congestion control behaviors available to a flow.

4.  The role of the standards process

   The standardization of a transport protocol includes not only
   standardization of aspects of the protocol that could affect
   interoperability (e.g., information exchanged by the end-nodes),
   but also standardization of mechanisms deemed critical to
   performance (e.g., in TCP, reduction of the congestion window in
   response to a packet drop).  At the same time,
   implementation-specific details and other aspects of the transport
   protocol that do not affect interoperability and do not
   significantly interfere with performance do not require
   standardization.  Areas of TCP that do not require standardization
   include the details of TCP's Fast Recovery procedure after a Fast
   Retransmit [RFC2582].  The appendix uses examples from TCP to
   discuss in more detail the role of the standards process in the
   development of congestion control.

4.1.  The development of new transport protocols.

   In addition to addressing the danger of congestion collapse, the
   standardization process for new transport protocols takes care to
   avoid a congestion control `arms race' among competing protocols.
   As an example, in RFC 2357 [RFC2357] the TSV Area Directors and
   their Directorate outline criteria for the publication as RFCs of
   Internet-Drafts on reliable multicast transport protocols.  From
   [RFC2357]: "A particular concern for the IETF is the impact of
   reliable multicast traffic on other traffic in the Internet in
   times of congestion, in particular the effect of reliable multicast
   traffic on competing TCP traffic....  The challenge to the IETF is
   to encourage research and implementations of reliable multicast,
   and to enable the needs of applications for reliable multicast to
   be met as expeditiously as possible, while at the same time
   protecting the Internet from the congestion disaster or collapse
   that could result from the widespread use of applications with
   inappropriate reliable multicast mechanisms."

   The list of technical criteria that must be addressed by RFCs on
   new reliable multicast transport protocols includes the following:
   "Is there a congestion control mechanism?  How well does it
   perform?  When does it fail?  Note that congestion control
   mechanisms that operate on the network more aggressively than TCP
   will face a great burden of proof that they don't threaten network
   stability."

   It is reasonable to expect that these concerns about the effect of
   new transport protocols on competing traffic will apply not only to
   reliable multicast protocols, but to unreliable unicast, reliable
   unicast, and unreliable multicast traffic as well.
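   The criteria above ("How well does it perform?  When does it
   fail?") can be explored even with toy simulations.  The Python
   sketch below is a hypothetical harness, not an IETF evaluation
   tool: it shares a fixed-capacity link between an AIMD flow and a
   flow that ignores congestion signals, and shows the unresponsive
   flow capturing most of the bandwidth, exactly the threat to
   competing TCP traffic that [RFC2357] describes.

      CAPACITY = 100.0   # link capacity per round, arbitrary units

      def run(rounds: int = 1000, unresponsive_rate: float = 80.0):
          # Flow A is AIMD (TCP-like); flow B never slows down.
          a_rate, a_total, b_total = 1.0, 0.0, 0.0
          for _ in range(rounds):
              demand = a_rate + unresponsive_rate
              if demand > CAPACITY:
                  # Congestion: losses hit both flows, only A reacts.
                  scale = CAPACITY / demand
                  a_total += a_rate * scale
                  b_total += unresponsive_rate * scale
                  a_rate = max(1.0, a_rate / 2)
              else:
                  a_total += a_rate
                  b_total += unresponsive_rate
                  a_rate += 1.0          # additive increase
          return a_total / rounds, b_total / rounds

      a_share, b_share = run()
      print(f"AIMD flow: {a_share:.1f}  unresponsive: {b_share:.1f}")
      # The responsive flow keeps backing off while the unresponsive
      # flow holds most of the link: the fairness failure at issue.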
4.2.  Application-level issues that affect congestion control