rfc2914.txt

   The specific issue of a browser opening multiple connections to the
   same destination has been addressed by RFC 2616 [RFC2616], which
   states in Section 8.1.4 that "Clients that use persistent connections
   SHOULD limit the number of simultaneous connections that they
   maintain to a given server.  A single-user client SHOULD NOT maintain
   more than 2 connections with any server or proxy."

4.3.  New developments in the standards process

   The most obvious developments in the IETF that could affect the
   evolution of congestion control are the development of integrated and
   differentiated services [RFC2212, RFC2475] and of Explicit Congestion
   Notification (ECN) [RFC2481].  However, other less dramatic
   developments are likely to affect congestion control as well.

Floyd, ed.               Best Current Practice                  [Page 6]

RFC 2914             Congestion Control Principles        September 2000

   One such effort is that to construct Endpoint Congestion Management
   [BS00], to enable multiple concurrent flows from a sender to the same
   receiver to share congestion control state.  By allowing multiple
   connections to the same destination to act as one flow in terms of
   end-to-end congestion control, a Congestion Manager could allow
   individual connections slow-starting to take advantage of previous
   information about the congestion state of the end-to-end path.
   Further, the use of a Congestion Manager could remove the congestion
   control dangers of multiple flows being opened between the same
   source/destination pair, and could perhaps be used to allow a browser
   to open many simultaneous connections to the same destination.

5.  A description of congestion collapse

   This section discusses congestion collapse from undelivered packets
   in some detail, and shows how unresponsive flows could contribute to
   congestion collapse in the Internet.  This section draws heavily on
   material from [FF99].
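   The state-sharing idea described above can be sketched as a per-
   destination table of congestion state.  The following is only an
   illustrative sketch with hypothetical class and method names, not the
   actual Congestion Manager API of [BS00]:

```python
# Illustrative sketch of Endpoint Congestion Management: every
# connection to the same destination shares one congestion state, so
# the aggregate behaves as a single flow for congestion-control
# purposes, and a new connection inherits what the path has already
# learned instead of probing from scratch.  All names are hypothetical.

class SharedCongestionState:
    def __init__(self):
        self.cwnd = 1.0       # shared congestion window, in segments
        self.ssthresh = 64.0  # slow-start threshold, in segments

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += 1.0              # slow start: +1 per ACK
        else:
            self.cwnd += 1.0 / self.cwnd  # congestion avoidance

    def on_loss(self):
        # Multiplicative decrease, seen at once by every connection
        # sharing this path.
        self.ssthresh = max(self.cwnd / 2.0, 1.0)
        self.cwnd = self.ssthresh

class CongestionManager:
    def __init__(self):
        self._paths = {}  # destination -> SharedCongestionState

    def state_for(self, destination):
        # All connections to `destination` get the same state object.
        return self._paths.setdefault(destination, SharedCongestionState())
```

   With such sharing, a browser's many connections to one server update
   a single window, rather than each probing the path independently.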
   Informally, congestion collapse occurs when an increase in the
   network load results in a decrease in the useful work done by the
   network.  As discussed in Section 3, congestion collapse was first
   reported in the mid 1980s [RFC896], and was largely due to TCP
   connections unnecessarily retransmitting packets that were either in
   transit or had already been received at the receiver.  We call the
   congestion collapse that results from the unnecessary retransmission
   of packets classical congestion collapse.  Classical congestion
   collapse is a stable condition that can result in throughput that is
   a small fraction of normal [RFC896].  Problems with classical
   congestion collapse have generally been corrected by the timer
   improvements and congestion control mechanisms in modern
   implementations of TCP [Jacobson88].

   A second form of potential congestion collapse occurs due to
   undelivered packets.  Congestion collapse from undelivered packets
   arises when bandwidth is wasted by delivering packets through the
   network that are dropped before reaching their ultimate destination.
   This is probably the largest unresolved danger with respect to
   congestion collapse in the Internet today.  Different scenarios can
   result in different degrees of congestion collapse, in terms of the
   fraction of the congested links' bandwidth used for productive work.
   The danger of congestion collapse from undelivered packets is due
   primarily to the increasing deployment of open-loop applications not
   using end-to-end congestion control.  Even more destructive would be
   best-effort applications that *increase* their sending rate in
   response to an increased packet drop rate (e.g., automatically using
   an increased level of FEC).
   Table 1 gives the results from a scenario with congestion collapse
   from undelivered packets, where scarce bandwidth is wasted by packets
   that never reach their destination.  The simulation uses a scenario
   with three TCP flows and one UDP flow competing over a congested 1.5
   Mbps link.  The access links for all nodes are 10 Mbps, except that
   the access link to the receiver of the UDP flow is 128 Kbps, only 9%
   of the bandwidth of the shared link.  When the UDP source rate
   exceeds 128 Kbps, most of the UDP packets will be dropped at the
   output port to that final link.

      UDP Arrival     UDP       TCP      Total
         Rate       Goodput   Goodput   Goodput
      -----------------------------------------
          0.7          0.7      98.5      99.2
          1.8          1.7      97.3      99.1
          2.6          2.6      96.0      98.6
          5.3          5.2      92.7      97.9
          8.8          8.4      87.1      95.5
         10.5          8.4      84.8      93.2
         13.1          8.4      81.4      89.8
         17.5          8.4      77.3      85.7
         26.3          8.4      64.5      72.8
         52.6          8.4      38.1      46.4
         58.4          8.4      32.8      41.2
         65.7          8.4      28.5      36.8
         75.1          8.4      19.7      28.1
         87.6          8.4      11.3      19.7
        105.2          8.4       3.4      11.8
        131.5          8.4       2.4      10.7

   Table 1.  A simulation with three TCP flows and one UDP flow.

   Table 1 shows the UDP arrival rate from the sender, the UDP goodput
   (defined as the bandwidth delivered to the receiver), the TCP goodput
   (as delivered to the TCP receivers), and the aggregate goodput on the
   congested 1.5 Mbps link.  Each rate is given as a fraction of the
   bandwidth of the congested link.
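   The flat UDP goodput column in Table 1 follows from back-of-the-
   envelope arithmetic: whatever the UDP flow pushes through the
   congested link beyond its 128 Kbps access-link cap is dropped
   downstream and wasted.  A minimal sketch of that constraint (not the
   [FF99] packet-level simulation; 128/1500 is about 8.5%, close to the
   8.4% plateau the simulation reports):

```python
# Arithmetic model of the Table 1 scenario.  Rates are expressed as
# fractions of the congested link's bandwidth, as in the table.

CONGESTED_LINK_KBPS = 1500.0  # shared bottleneck link
UDP_ACCESS_KBPS = 128.0       # final hop to the UDP receiver

def udp_goodput_fraction(arrival_fraction):
    """UDP goodput is capped by the 128 Kbps downstream access link."""
    cap = UDP_ACCESS_KBPS / CONGESTED_LINK_KBPS  # ~0.085
    return min(arrival_fraction, cap)

def wasted_fraction(arrival_fraction):
    """Bandwidth the UDP flow occupies on the congested link but
    cannot deliver: those packets are dropped at the final hop."""
    used = min(arrival_fraction, 1.0)  # cannot exceed the bottleneck
    return max(0.0, used - udp_goodput_fraction(arrival_fraction))
```

   For example, at a UDP arrival rate of 52.6% of the congested link,
   roughly 44% of the link's bandwidth is consumed by packets that are
   later dropped, which is bandwidth unavailable to the TCP flows.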
   As the UDP source rate increases, the TCP goodput decreases roughly
   linearly, and the UDP goodput is nearly constant.  Thus, as the UDP
   flow increases its offered load, its only effect is to hurt the TCP
   and aggregate goodput.  On the congested link, the UDP flow
   ultimately `wastes' the bandwidth that could have been used by the
   TCP flow, and reduces the goodput in the network as a whole down to a
   small fraction of the bandwidth of the congested link.

   The simulations in Table 1 illustrate both unfairness and congestion
   collapse.  As [FF99] discusses, compatible congestion control is not
   the only way to provide fairness; per-flow scheduling at the
   congested routers is an alternative mechanism at the routers that
   guarantees fairness.  However, as discussed in [FF99], per-flow
   scheduling can not be relied upon to prevent congestion collapse.

   There are only two alternatives for eliminating the danger of
   congestion collapse from undelivered packets.  The first alternative
   for preventing congestion collapse from undelivered packets is the
   use of effective end-to-end congestion control by the end nodes.
   More specifically, the requirement would be that a flow avoid a
   pattern of significant losses at links downstream from the first
   congested link on the path.  (Here, we would consider any link a
   `congested link' if any flow is using bandwidth that would otherwise
   be used by other traffic on the link.)  Given that an end-node is
   generally unable to distinguish between a path with one congested
   link and a path with multiple congested links, the most reliable way
   for a flow to avoid a pattern of significant losses at a downstream
   congested link is for the flow to use end-to-end congestion control,
   and reduce its sending rate in the presence of loss.
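   The loss response just described -- reduce the sending rate in the
   presence of loss -- takes the Additive-Increase Multiplicative-
   Decrease (AIMD) form in TCP.  A rate-based caricature (not TCP's
   actual window machinery) shows why such a flow cannot sustain a high
   rate into a persistently lossy path:

```python
# One AIMD update step for a rate-based flow: additive increase while
# the path is clear, multiplicative decrease (halving, as in TCP) on
# each loss signal.  `rate` is in packets per RTT.

def aimd(rate, loss, increase=1.0, decrease=0.5, floor=1.0):
    if loss:
        return max(rate * decrease, floor)
    return rate + increase

# A flow facing persistent loss backs off geometrically instead of
# continuing to pump packets that would be dropped downstream:
rate = 100.0
for _ in range(5):
    rate = aimd(rate, loss=True)
# rate is now 100 * 0.5**5 = 3.125 packets/RTT
```

   In contrast, an open-loop flow holds its rate constant (or raises
   it) under the same losses, which is exactly the behavior that feeds
   congestion collapse from undelivered packets.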
   A second alternative for preventing congestion collapse from
   undelivered packets would be a guarantee by the network that packets
   accepted at a congested link in the network will be delivered all the
   way to the receiver [RFC2212, RFC2475].  We note that the choice
   between the first alternative of end-to-end congestion control and
   the second alternative of end-to-end bandwidth guarantees does not
   have to be an either/or decision; congestion collapse can be
   prevented by the use of effective end-to-end congestion control by
   some of the traffic, and the use of end-to-end bandwidth guarantees
   from the network for the rest of the traffic.

6.  Forms of end-to-end congestion control

   This document has discussed concerns about congestion collapse and
   about fairness with TCP for new forms of congestion control.  This
   does not mean, however, that concerns about congestion collapse and
   fairness with TCP necessitate that all best-effort traffic deploy
   congestion control based on TCP's Additive-Increase Multiplicative-
   Decrease (AIMD) algorithm of reducing the sending rate in half in
   response to each packet drop.  This section separately discusses the
   implications of these two concerns of congestion collapse and
   fairness with TCP.

6.1.  End-to-end congestion control for avoiding congestion collapse.

   The avoidance of congestion collapse from undelivered packets
   requires that flows avoid a scenario of a high sending rate, multiple
   congested links, and a persistent high packet drop rate at the
   downstream link.
   Because congestion collapse from undelivered packets consists of
   packets that waste valuable bandwidth only to be dropped downstream,
   this form of congestion collapse is not possible in an environment
   where each flow traverses only one congested link, or where only a
   small number of packets are dropped at links downstream of the first
   congested link.  Thus, any form of congestion control that
   successfully avoids a high sending rate in the presence of a high
   packet drop rate should be sufficient to avoid congestion collapse
   from undelivered packets.

   We would note that the addition of Explicit Congestion Notification
   (ECN) to the IP architecture would not, in and of itself, remove the
   danger of congestion collapse for best-effort traffic.  ECN allows
   routers to set a bit in packet headers as an indication of congestion
   to the end-nodes, rather than being forced to rely on packet drops to
   indicate congestion.  However, with ECN, packet-marking would replace
   packet-dropping only in times of moderate congestion.  In particular,
   when congestion is heavy, and a router's buffers overflow, the router
   has no choice but to drop arriving packets.

6.2.  End-to-end congestion control for fairness with TCP.

   The concern expressed in [RFC2357] about fairness with TCP places a
   significant though not crippling constraint on the range of viable
   end-to-end congestion control mechanisms for best-effort traffic.  An
   environment with per-flow scheduling at all congested links would
   isolate flows from each other, and eliminate the need for congestion
   control mechanisms to be TCP-compatible.  An environment with
   differentiated services, where flows marked as belonging to a certain
   diff-serv class would be scheduled in isolation from best-effort
   traffic, could allow the emergence of an entire diff-serv class of
   traffic where congestion control was not required to be TCP-
   compatible.
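   As quantitative background on what `TCP-compatible' means (this
   formula is background material, not part of this document's own
   text): a flow is commonly judged TCP-compatible if its long-term
   throughput does not exceed the simple TCP response function of
   Mathis et al., T <= 1.22 * MSS / (RTT * sqrt(p)), where p is the
   long-term packet drop rate.  Equation-based schemes use exactly this
   kind of bound as a sending-rate target:

```python
import math

def tcp_friendly_rate(mss_bytes, rtt_s, loss_rate):
    """Upper bound on a TCP-compatible flow's sending rate, in
    bytes/second, from the simple TCP response function
    T = 1.22 * MSS / (RTT * sqrt(p))."""
    return 1.22 * mss_bytes / (rtt_s * math.sqrt(loss_rate))

# Example: 1460-byte segments, 100 ms RTT, 1% loss ->
# 1.22 * 1460 / (0.1 * 0.1) = 178120 bytes/s, roughly 1.4 Mbps.
```

   The bound falls as 1/sqrt(p): quadrupling the drop rate halves the
   allowable rate, which is why a flow pacing itself to this function
   cannot sustain a high rate through a heavily lossy path.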
   Similarly, a pricing-controlled environment, or a diff-serv class
   with its own pricing paradigm, could supersede the concern about
   fairness with TCP.  However, for the current Internet environment,
   where other best-effort traffic could compete in a FIFO queue with
   TCP traffic, the absence of fairness with TCP could lead to one flow
   `starving out' another flow in a time of high congestion, as was
   illustrated in Table 1 above.

   However, the list of TCP-compatible congestion control procedures is
   not limited to AIMD with the same increase/decrease parameters as
   TCP.  Other TCP-compatible congestion control procedures include
   rate-based variants of AIMD; AIMD with different sets of
   increase/decrease parameters that give the same steady-state
   behavior; equation-based congestion control where the sender adjusts
   its sending rate in response to information about the long-term
   packet drop rate; layered multicast where receivers subscribe and
   unsubscribe from layered multicast groups; and possibly other forms
   that we have not yet begun to consider.

7.  Acknowledgements

   Much of this document draws directly on previous RFCs addressing
   end-to-end congestion control.  This attempts to be a summary of
   ideas that have been discussed for many years, and by many people.
   In particular, acknowledgement is due to the members of the End-to-
   End Research Group, the Reliable Multicast Research Group, and the
   Transport Area Directorate.  This document has also benefited from
   discussion and feedback from the Transport Area Working Group.
   Particular thanks are due to Mark Allman for feedback on an earlier
   version of this document.

8.  References

   [BS00]       Balakrishnan H. and S. Seshan, "The Congestion Manager",
                Work in Progress.

   [DMKM00]     Dawkins, S., Montenegro, G., Kojo, M.
                and V. Magret, "End-to-end Performance Implications of
                Slow Links", Work in Progress.

   [FF99]       Floyd, S. and K. Fall, "Promoting the Use of End-to-End
                Congestion Control in the Internet", IEEE/ACM
                Transactions on Networking, August 1999.  URL
                http://www.aciri.org/floyd/end2end-paper.html

   [HPF00]      Handley, M., Padhye, J. and S. Floyd, "TCP Congestion
                Window Validation", RFC 2861, June 2000.

   [Jacobson88] Jacobson, V., "Congestion Avoidance and Control", ACM
                SIGCOMM '88, August 1988.

   [RFC793]     Postel, J., "Transmission Control Protocol", STD 7, RFC
                793, September 1981.

   [RFC896]     Nagle, J., "Congestion Control in IP/TCP", RFC 896,
                January 1984.

   [RFC1122]    Braden, R., Ed., "Requirements for Internet Hosts --
                Communication Layers", STD 3, RFC 1122, October 1989.

   [RFC1323]    Jacobson, V., Braden, R. and D. Borman, "TCP Extensions
                for High Performance", RFC 1323, May 1992.

   [RFC2119]    Bradner, S., "Key words for use in RFCs to Indicate
                Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2212]    Shenker, S., Partridge, C. and R. Guerin, "Specification
                of Guaranteed Quality of Service", RFC 2212, September
                1997.

   [RFC2309]    Braden, R., Clark, D., Crowcroft, J., Davie, B.,
                Deering, S., Estrin, D., Floyd, S., Jacobson, V.,
                Minshall, G., Partridge, C., Peterson, L., Ramakrishnan,
                K.K., Shenker, S., Wroclawski, J., and L. Zhang,
                "Recommendations on Queue Management and Congestion
                Avoidance in the Internet", RFC 2309, April 1998.
