   Recommendation: If it is possible to do so, MTUs should be chosen
   that do not monopolize network interfaces for human-perceptible
   amounts of time, and implementors should not choose MTUs that will
   occupy a network interface for significantly more than 100-200
   milliseconds.
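   The arithmetic behind this recommendation can be sketched as follows
   (an illustrative Python fragment, not part of this document: it
   computes the serialization time of one packet and the largest MTU
   that stays within a given occupancy bound):

```python
def serialization_time_s(mtu_bytes: int, link_bps: float) -> float:
    """Time the interface is occupied transmitting one packet."""
    return mtu_bytes * 8 / link_bps

def max_mtu_bytes(link_bps: float, max_occupancy_s: float = 0.2) -> int:
    """Largest MTU (bytes) whose serialization time fits the bound."""
    return int(link_bps * max_occupancy_s / 8)

# On a 9600 bit/s modem link, a 1500-byte MTU occupies the interface
# for 1.25 s, well beyond the 100-200 ms guideline; about 240 bytes
# fits the 200 ms bound.
```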

2.4 Interactions with TCP Congestion Control [RFC2581]

   In many cases, TCP connections that traverse slow links have the slow
   link as an "access" link, with higher-speed links in use for most of
   the connection path.  One common configuration might be a laptop
   computer using dialup access to a terminal server (a last-hop
   router), with an HTTP server on a high-speed LAN "behind" the
   terminal server.




Dawkins, et al.          Best Current Practice                  [Page 6]

RFC 3150                   PILC - Slow Links                   July 2001


   In this case, the HTTP server may be able to place packets on its
   directly-attached high-speed LAN at a higher rate than the last-hop
   router can forward them on the low-speed link.  When the last-hop
   router falls behind, it will be unable to buffer the traffic intended
   for the low-speed link, and will become a point of congestion and
   begin to drop the excess packets.  In particular, several packets may
   be dropped in a single transmission window when initial slow start
   overshoots the last-hop router buffer.

   Although packet loss is occurring, it isn't detected at the TCP
   sender until one RTT time after the router buffer space is exhausted
   and the first packet is dropped.  This late congestion signal allows
   the congestion window to increase up to double the size it was at the
   time the first packet was dropped at the router.

   If the link MTU is large enough to take more than the delayed ACK
   timeout interval to transmit a packet, an ACK is sent for every
   segment and the congestion window is doubled in a single RTT.  If a
   smaller link MTU is in use and delayed ACKs can be utilized, the
   congestion window increases by a factor of 1.5 in one RTT.  In both
   cases the sender continues transmitting packets well beyond the
   congestion point of the last-hop router, resulting in multiple packet
   losses in a single window.
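   The two growth rates above follow from slow start increasing the
   congestion window by one segment per ACK received.  A minimal sketch
   of this bookkeeping (illustrative Python, not part of this
   document):

```python
def slow_start_growth(cwnd_segments: int, delayed_acks: bool) -> int:
    """Congestion window (in segments) after one RTT of slow start.

    Slow start grows cwnd by one segment per ACK.  With an ACK for
    every segment, cwnd doubles in one RTT; with delayed ACKs (one ACK
    per two segments), it grows by a factor of roughly 1.5.
    """
    acks = cwnd_segments if not delayed_acks else cwnd_segments // 2
    return cwnd_segments + acks

# From cwnd = 8 segments: 16 after one RTT without delayed ACKs,
# but only 12 with delayed ACKs.
```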

   The self-clocking nature of TCP's slow start and congestion avoidance
   algorithms prevents this buffer overrun from continuing.  In
   addition,
   these algorithms allow senders to "probe" for available bandwidth -
   cycling through an increasing rate of transmission until loss occurs,
   followed by a dramatic (50-percent) drop in transmission rate.  This
   happens when a host directly connected to a low-speed link offers an
   advertised window that is unrealistically large for the low-speed
   link.  During the congestion avoidance phase the peer host continues
   to probe for available bandwidth, trying to fill the advertised
   window, until packet loss occurs.

   The same problems may also exist when a sending host is directly
   connected to a slow link as most slow links have some local buffer in
   the link interface.  This link interface buffer is subject to
   overflow exactly in the same way as the last-hop router buffer.

   When a last-hop router with a small number of buffers per outbound
   link is used, the first buffer overflow occurs earlier than it would
   if the router had a larger number of buffers.  Subsequently with a
   smaller number of buffers the periodic packet losses occur more
   frequently during congestion avoidance, when the sender probes for
   available bandwidth.







   The most important responsibility of router buffers is to absorb
   bursts.  Too few buffers (for example, only three buffers per
   outbound link as described in [RFC2416]) means that routers will
   overflow their buffer pools very easily and are unlikely to absorb
   even a very small burst.  When a larger number of router buffers are
   allocated per outbound link, the buffer space does not overflow as
   quickly but the buffers are still likely to become full due to TCP's
   default behavior.  A larger number of router buffers leads to longer
   queuing delays and a longer RTT.
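   The added delay is easy to quantify: a full drop-tail queue adds one
   serialization time per queued packet to the RTT.  A sketch of this
   arithmetic (illustrative Python, not part of this document):

```python
def worst_case_queuing_delay_s(num_buffers: int, mtu_bytes: int,
                               link_bps: float) -> float:
    """Delay added by a full queue of num_buffers full-size packets."""
    return num_buffers * mtu_bytes * 8 / link_bps

# On a 28800 bit/s link with 576-byte packets, 10 full buffers add
# 1.6 s of queuing delay to the RTT.
```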

   If router queues become full before congestion is signaled or remain
   full for long periods of time, this is likely to result in "lock-
   out", where a single connection or a few connections occupy the
   router queue space, preventing other connections from using the link
   [RFC2309], especially when a tail drop queue management discipline is
   being used.

   Therefore, it is essential to have a large enough number of buffers
   in routers to be able to absorb data bursts, but keep the queues
   normally small.  In order to achieve this it has been recommended in
   [RFC2309] that an active queue management mechanism, like Random
   Early Detection (RED) [RED93], should be implemented in all Internet
   routers, including the last-hop routers in front of a slow link.  It
   should also be noted that RED requires a sufficiently large number of
   router buffers to work properly.  In addition, the appropriate
   parameters of RED on a last-hop router connected to a slow link will
   likely deviate from the defaults recommended.
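   The core of RED's drop decision can be sketched briefly.  The
   following illustrative Python fragment (not part of this document)
   shows only the linear drop-probability ramp from [RED93]; the EWMA
   queue averaging and the per-packet count adjustment are omitted:

```python
def red_drop_probability(avg_queue: float, min_th: float,
                         max_th: float, max_p: float = 0.1) -> float:
    """Base drop probability of RED for a given average queue length.

    Below min_th nothing is dropped; between the thresholds the
    probability ramps linearly up to max_p; at or above max_th every
    arriving packet is dropped.
    """
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)
```

   On a slow last-hop link the thresholds would typically be set much
   lower (in packets) than on a high-speed router, which is one way the
   appropriate parameters deviate from common defaults.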

   Active queue management mechanisms do not eliminate packet drops
   but, instead, drop packets at an earlier stage to solve the full-
   queue problem for flows that respond to packet drops as a congestion
   signal.  Hosts that are directly connected to low-speed links may
   limit the receive windows they advertise in order to lower or
   eliminate the number of packet drops in a last-hop router.  When
   doing so one should, however, take care that the advertised window is
   large enough to allow full utilization of the last-hop link capacity
   and to allow triggering fast retransmit, when a packet loss is
   encountered.  This recommendation takes two forms:

   -  Modern operating systems use relatively large default TCP receive
      buffers compared to what is required to fully utilize the link
      capacity of low-speed links.  Users should be able to choose the
      default receive window size in use - typically a system-wide
      parameter.  (This "choice" may be as simple as "dial-up access/LAN
      access" on a dialog box - this would accommodate many environments
      without requiring hand-tuning by experienced network engineers.)







   -  Application developers should not attempt to manually manage
      network bandwidth using socket buffer sizes.  Only in very rare
      circumstances will an application actually know both the bandwidth
      and delay of a path and be able to choose a suitably low (or high)
      value for the socket buffer size to obtain good network
      performance.

   This recommendation is not a general solution for any network path
   that might involve a slow link.  Instead, this recommendation is
   applicable in environments where the host "knows" it is always
   connected to other hosts via "slow links".  For hosts that may
   connect to other hosts over a variety of links (e.g., dial-up laptop
   computers with LAN-connected docking stations), buffer auto-tuning
   for the receive buffer is a more reasonable recommendation, and is
   discussed below.
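   For the case where the slow link's characteristics are known, the
   window sizing described above can be sketched as follows
   (illustrative Python, not part of this document; the five-segment
   floor for enabling fast retransmit is our assumption, not a value
   taken from this document):

```python
import math

def advertised_window_bytes(link_bps: float, rtt_s: float,
                            mss_bytes: int) -> int:
    """Receive window for a host behind a known slow link.

    Large enough to fill the link (the bandwidth-delay product), but
    also at least a handful of segments so a single loss can still
    produce the three duplicate ACKs needed for fast retransmit.
    """
    bdp_segments = math.ceil(link_bps * rtt_s / 8 / mss_bytes)
    return max(bdp_segments, 5) * mss_bytes

# 28800 bit/s link, 300 ms RTT, 256-byte MSS: the BDP is ~1080 bytes
# (about 4.2 segments), so the 5-segment floor governs: 1280 bytes.
```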

2.5 TCP Buffer Auto-tuning

   [SMM98] recognizes a tension between the desire to allocate "large"
   TCP buffers, so that network paths are fully utilized, and a desire
   to limit the amount of memory dedicated to TCP buffers, in order to
   efficiently support large numbers of connections to hosts over
   network paths that may vary by six orders of magnitude.

   The technique proposed is to dynamically allocate TCP buffers, based
   on the current congestion window, rather than attempting to
   preallocate TCP buffers without any knowledge of the network path.

   This proposal results in receive buffers that are appropriate for the
   window sizes in use, and send buffers large enough to contain two
   windows of segments, so that SACK and fast recovery can recover
   losses without forcing the connection to use lengthy retransmission
   timeouts.
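   Reduced to its essentials, the sizing rule is (illustrative Python,
   not part of [SMM98]; the exact policies there are more involved):

```python
def autotuned_buffers(cwnd_bytes: int) -> tuple:
    """Per-connection buffer sizes derived from the current window.

    The receive buffer tracks the window actually in use; the send
    buffer holds two windows of segments so that SACK and fast
    recovery can retransmit without stalling on a retransmission
    timeout.
    """
    recv_buf = cwnd_bytes
    send_buf = 2 * cwnd_bytes
    return recv_buf, send_buf
```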

   While most of the motivation for this proposal is given from a
   server's perspective, hosts that connect using multiple interfaces
   with markedly-different link speeds may also find this kind of
   technique useful.  This is true in particular with slow links, which
   are likely to dominate the end-to-end RTT.  If the host is connected
   only via a single slow link interface at a time, it is fairly easy to
   (dynamically) adjust the receive window (and thus the advertised
   window) to a value appropriate for the slow last-hop link with known
   bandwidth and delay characteristics.

   Recommendation: If a host is sometimes connected via a slow link but
   the host is also connected using other interfaces with markedly-
   different link speeds, it may use receive buffer auto-tuning to
   adjust the advertised window to an appropriate value.





2.6 Small Window Effects

   If a TCP connection stabilizes with a congestion window of only a
   few segments (as could be expected on a "slow" link), the sender
   isn't sending enough segments to generate the three duplicate
   acknowledgements needed to trigger fast retransmit and fast
   recovery.  This means that a retransmission timeout is required to
   repair the loss - dropping the TCP connection back to a congestion
   window of only one segment.

   [TCPB98] and [TCPF98] observe that (in studies of network trace
   datasets) it is relatively common for TCP retransmission timeouts to
   occur even when some duplicate acknowledgements are being sent.  The
   challenge is to use these duplicate acknowledgements to trigger fast
   retransmit/fast recovery without injecting traffic into the network
   unnecessarily - and especially not injecting traffic in ways that
   will result in instability.

   The "Limited Transmit" algorithm [RFC3042] suggests sending a new
   segment when the first and second duplicate acknowledgements are
   received, so that the receiver is more likely to be able to continue
   to generate duplicate acknowledgements until the TCP retransmit
   threshold is reached, triggering fast retransmit and fast recovery.
   When the congestion window is small, this is very useful in assisting
   fast retransmit and fast recovery to recover from a packet loss
   without using a retransmission timeout.  We note that a maximum of
   two additional new segments will be sent before the receiver sends
   either a new acknowledgement advancing the window or two additional
   duplicate acknowledgements, triggering fast retransmit/fast recovery,
   and that these new segments will be acknowledgement-clocked, not
   back-to-back.
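   The decision the algorithm makes on each duplicate ACK can be
   sketched as a predicate (illustrative Python, not part of [RFC3042];
   the real algorithm also tracks SACK information and the receiver's
   advertised window in detail):

```python
def limited_transmit_allows_send(dup_acks: int, flightsize_segs: int,
                                 cwnd_segs: int,
                                 new_data_available: bool,
                                 rwnd_allows: bool) -> bool:
    """Limited Transmit reduced to a per-duplicate-ACK decision.

    On the first and second duplicate ACKs only, send one previously
    unsent segment if the receiver's advertised window permits it and
    the resulting data in flight stays within cwnd plus two segments;
    cwnd itself is left unchanged.
    """
    return (dup_acks in (1, 2)
            and new_data_available
            and rwnd_allows
            and flightsize_segs + 1 <= cwnd_segs + 2)

# With cwnd = 4 segments and 4 in flight, the first duplicate ACK
# permits one new segment; the third does not (fast retransmit takes
# over at that point).
```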

   Recommendation: Limited Transmit should be implemented in all hosts.

3.0 Summary of Recommended Optimizations

   This section summarizes our recommendations regarding the previous
   standards-track mechanisms, for end nodes that are connected via a
   slow link.

   Header compression should be implemented.  [RFC1144] header
   compression can be enabled over robust network links.  [RFC2507]
   should be used over network connections that are expected to
   experience loss due to corruption as well as loss due to congestion.
   For extremely lossy and slow links, implementors should evaluate ROHC
   [RFC3095] as a potential solution.  [RFC1323] TCP timestamps must be
   turned off because (1) their protection against TCP sequence number
   wrapping is unjustified for slow links, and (2) they complicate TCP
   header compression.





   IP Payload Compression [RFC2393] should be implemented, although
   compression at higher layers of the protocol stack (for example
   [RFC2616]) may make this mechanism less useful.

   For HTTP/1.1 environments, [RFC2616] payload compression should be
   implemented and should be used for payloads that are not already
   compressed.

   Implementors should choose MTUs that don't monopolize network
   interfaces for more than 100-200 milliseconds, in order to limit the
   impact of a single connection on all other connections sharing the
   network interface.

   Use of active queue management is recommended on last-hop routers
   that provide Internet access to hosts behind a slow link.  In
   addition, the number of router buffers per slow link should be large
   enough to absorb concurrent data bursts from more than a single
   flow.  To absorb concurrent data bursts from two or three TCP
   senders with a typical data burst of three back-to-back segments per
   sender, at least six (6) or nine (9) buffers are needed.  Effective
   use of active queue management is likely to require an even larger
   number of buffers.

   Implementors should consider the possibility that a host will be
   directly connected to a low-speed link when choosing default TCP
   receive window sizes.

   Application developers should not attempt to manually manage network
   bandwidth using socket buffer sizes, as only in very rare
   circumstances will an application be able to choose a suitable value
   for the socket buffer size to obtain good network performance.

   Limited Transmit [RFC3042] should be implemented in all end hosts,
   as it assists in triggering fast retransmit when the congestion
   window is small.

   All of the mechanisms described above are stable standards-track RFCs
   (at Proposed Standard status, as of this writing).

   In addition, implementors may wish to consider TCP buffer auto-
   tuning, especially when the host system is likely to be used with a
   wide variety of access link speeds.  This is not a standards-track
   TCP mechanism but, as it is an operating system implementation issue,
   it does not need to be standardized.

   Of the above mechanisms, only Header Compression (for IP and TCP) may
   cease to work in the presence of end-to-end IPSEC.  However,
   [RFC3095] does allow compressing the ESP header.





4.0 Topics For Further Work

   In addition to the standards-track mechanisms discussed above, there
   are still opportunities to improve performance over low-speed links.

   "Sending fewer bits" is an obvious response to slow link speeds.  The
   now-defunct HTTP-NG proposal [HTTP-NG] replaced the text-based HTTP
   header representation with a binary representation for compactness.
   However, HTTP-NG is not moving forward and HTTP/1.1 is not being
   enhanced to include a more compact HTTP header representation.
   Instead, the Wireless Application Protocol (WAP) Forum has opted for
   the XML-based Wireless Session Protocol [WSP], which includes a
   compact header encoding mechanism.

   It would be nice to agree on a more compact header representation
   that will be used by all WWW communities, not only the wireless WAN
