   local retransmissions towards the receiving end system.

   Some PEP implementations perform local retransmissions even though
   they do not use local acknowledgments to alter TCP connection
   performance.  Basic Snoop [SNOOP] is a well-known example of such a
   PEP implementation.  Snoop caches TCP data segments it receives and
   forwards and then monitors the end-to-end acknowledgments coming from
   the receiving TCP end system for duplicate acknowledgments (DUPACKs).
   When DUPACKs are received, Snoop locally retransmits the lost TCP
   data segments from its cache, suppressing the DUPACKs flowing to the
   sending TCP end system until acknowledgments for new data are
   received.  The Snoop system also implements an option to employ local
   negative acknowledgments to trigger local TCP retransmissions.  This
   can be achieved, for example, by applying TCP selective
   acknowledgments locally on the error-prone link.  (See Section 5.3
   for details.)
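
   The local retransmission logic described above can be sketched as
   follows.  This is an illustrative model only, not the actual Snoop
   code; the class and method names are invented for this sketch.

```python
# Hypothetical sketch of Snoop-style local retransmission: the PEP
# caches data segments as it forwards them and, on a duplicate ACK from
# the receiver, retransmits from its cache while suppressing the DUPACK
# so the sender's congestion control is not triggered unnecessarily.

class SnoopCache:
    def __init__(self):
        self.cache = {}      # seq number -> cached segment payload
        self.last_ack = -1   # highest cumulative ACK seen so far
        self.dupacks = 0

    def on_data(self, seq, segment):
        """Cache a data segment as it is forwarded toward the receiver."""
        self.cache[seq] = segment
        return segment                       # forward unchanged

    def on_ack(self, ack):
        """Return (segment_to_retransmit_or_None, ack_to_forward_or_None)."""
        if ack == self.last_ack:             # duplicate ACK
            self.dupacks += 1
            lost = self.cache.get(ack)       # segment the receiver is missing
            return lost, None                # retransmit locally, suppress DUPACK
        # New data acknowledged: purge cached segments below the ACK point.
        self.cache = {s: p for s, p in self.cache.items() if s >= ack}
        self.last_ack = ack
        self.dupacks = 0
        return None, ack                     # forward the ACK to the sender
```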

3.1.4 TCP ACK Filtering and Reconstruction

   On paths with highly asymmetric bandwidth the TCP ACKs flowing in the
   low-speed direction may get congested if the asymmetry ratio is high
   enough.  The ACK filtering and reconstruction mechanism addresses
   this by filtering the ACKs on one side of the link and reconstructing
   the deleted ACKs on the other side of the link.  The mechanism and
   the issue of dealing with TCP ACK congestion with highly asymmetric
   links are discussed in detail in [RFC2760] and in [BPK97].
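
   Because TCP ACKs are cumulative, the core of the mechanism can be
   illustrated in a few lines.  This is a simplified sketch with
   invented function names, not the algorithm from [RFC2760] or [BPK97]:
   the filter side drops queued ACKs made redundant by a newer cumulative
   ACK, and the far side regenerates intermediate ACKs to preserve the
   sender's ACK clock.

```python
def filter_acks(queued_acks, new_ack):
    """Drop queued ACKs made redundant by a newer cumulative ACK;
    keep only ACKs that still acknowledge more data than new_ack."""
    return [a for a in queued_acks if a > new_ack] + [new_ack]

def reconstruct_acks(last_seen, received, step):
    """Regenerate evenly spaced intermediate ACKs (every `step` bytes)
    on the far side of the link, up to the received cumulative ACK."""
    return list(range(last_seen + step, received + 1, step))
```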

3.2 Tunneling

   A Performance Enhancing Proxy may encapsulate messages to carry the
   messages across a particular link or to force messages to traverse a
   particular path.  A PEP at the other end of the encapsulation tunnel
   removes the tunnel wrappers before final delivery to the receiving
   end system.  A tunnel might be used by a distributed split connection
   TCP implementation as the means for carrying the connection between
   the distributed PEPs.  A tunnel might also be used to support forcing
   TCP connections which use asymmetric routing to go through the end
   points of a distributed PEP implementation.
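
   The encapsulation/decapsulation step itself is simple; the sketch
   below uses a made-up tunnel header format purely for illustration
   (real tunnels would use a standard encapsulation such as IP-in-IP or
   GRE).

```python
import struct

TUNNEL_MAGIC = 0x50455030  # arbitrary marker for this toy format

def encapsulate(inner_packet: bytes, peer_id: int) -> bytes:
    """Ingress PEP: prepend a tunnel header addressed to the peer PEP."""
    header = struct.pack("!II", TUNNEL_MAGIC, peer_id)
    return header + inner_packet

def decapsulate(tunneled: bytes) -> bytes:
    """Egress PEP: strip the tunnel wrapper before final delivery."""
    magic, _peer = struct.unpack("!II", tunneled[:8])
    assert magic == TUNNEL_MAGIC, "not a tunnel packet"
    return tunneled[8:]
```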

3.3 Compression

   Many PEP implementations include support for one or more forms of
   compression.  In some PEP implementations, compression may even be
   the only mechanism used for performance improvement.  Compression
   reduces the number of bytes which need to be sent across a link.
   This is useful in general and can be very important for bandwidth



Border, et al.               Informational                     [Page 10]

RFC 3135          PILC - Performance Enhancing Proxies         June 2001


   limited links.  Benefits of using compression include improved link
   efficiency and higher effective link utilization, reduced latency and
   improved interactive response time, decreased overhead and reduced
   packet loss rate over lossy links.

   Where appropriate, link layer compression is used.  TCP and IP header
   compression are also frequently used with PEP implementations.
   [RFC1144] describes a widely deployed method for compressing TCP
   headers.  Other header compression algorithms are described in
   [RFC2507], [RFC2508] and [RFC2509].
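
   The idea behind these schemes can be shown with a toy delta encoder.
   This is a great simplification of [RFC1144], not the actual VJ
   algorithm: most TCP/IP header fields are constant for a given
   connection, so the compressor sends only the fields that changed and
   the decompressor rebuilds the full header from stored context.

```python
def compress_header(prev: dict, cur: dict) -> dict:
    """Send only the header fields that differ from the previous header."""
    return {k: v for k, v in cur.items() if prev.get(k) != v}

def decompress_header(prev: dict, delta: dict) -> dict:
    """Rebuild the full header from the stored context plus the delta."""
    hdr = dict(prev)
    hdr.update(delta)
    return hdr
```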

   Payload compression is also desirable and is increasing in importance
   with today's increased emphasis on Internet security.  Network (IP)
   layer (and above) security mechanisms convert IP payloads into random
   bit streams which defeat applicable link layer compression mechanisms
   by removing or hiding redundant "information."  Therefore,
   compression of the payload needs to be applied before security
   mechanisms are applied.  [RFC2393] defines a framework where common
   compression algorithms can be applied to arbitrary IP segment
   payloads.  However, [RFC2393] compression is not always applicable.
   Many types of IP payloads (e.g., images, audio, video and "zipped"
   files being transferred) are already compressed.  And, when security
   mechanisms such as TLS [RFC2246] are applied above the network (IP)
   layer, the data is already encrypted (and possibly also compressed),
   again removing or hiding any redundancy in the payload.  The
   resulting additional transport or network layer compression will
   compact only headers, which are small, and possibly already covered
   by separate compression algorithms of their own.
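
   The effect is easy to demonstrate: redundant plaintext compresses
   well, while data that looks like cipher output does not.  In the
   sketch below, os.urandom stands in for an encrypted payload, since
   good cipher output has the same statistical properties as random
   bits.

```python
import os
import zlib

# Redundant application data, e.g. repeated HTTP requests.
plaintext = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 100

# Stand-in for the same data after encryption: an incompressible
# random bit stream of equal length.
ciphertext_like = os.urandom(len(plaintext))

compressed_plain = zlib.compress(plaintext)      # shrinks dramatically
compressed_cipher = zlib.compress(ciphertext_like)  # does not shrink
```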

   With application layer PEPs one can employ application-specific
   compression.  Typically an application-specific (or content-specific)
   compression mechanism is much more efficient than any generic
   compression mechanism.  For example, a distributed Web PEP
   implementation may implement more efficient binary encoding of HTTP
   headers, or a PEP can employ lossy compression that reduces the image
   quality of online-images on Web pages according to end user
   instructions, thus reducing the number of bytes transferred over a
   slow link and consequently the response time perceived by the user
   [LHKR96].

3.4 Handling Periods of Link Disconnection with TCP

   Periods of link disconnection or link outages are very common with
   some wireless links.  During these periods, a TCP sender does not
   receive the expected acknowledgments.  Upon expiration of the
   retransmit timer, this causes TCP to close its congestion window with
   all of the related drawbacks.  A TCP PEP may monitor the traffic
   coming from the TCP sender towards the TCP receiver behind the
   disconnected link.  The TCP PEP retains the last ACK, so that it can
   shut down the TCP sender's window by sending the last ACK with a
   window set to zero.  Thus, the TCP sender will go into persist mode.
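
   A minimal sketch of this zero-window technique (the class and method
   names are invented for illustration): the PEP remembers the last ACK
   it forwarded and, on disconnection, reinjects it toward the sender
   with the advertised window set to zero.

```python
class LinkMonitor:
    """PEP-side state for freezing a TCP sender during a link outage."""

    def __init__(self):
        self.last_ack = None

    def on_ack(self, ack_num, window):
        """Record each ACK (number, advertised window) as it passes."""
        self.last_ack = (ack_num, window)

    def on_disconnect(self):
        """Return the ACK to inject toward the sender: the same ACK
        number with a zero window, which puts the sender into persist
        mode instead of letting its retransmit timer expire."""
        ack_num, _window = self.last_ack
        return (ack_num, 0)
```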

   To make this work in both directions with an integrated TCP PEP
   implementation, the TCP receiver behind the disconnected link must be
   aware of the current state of the connection and, in the event of a
   disconnection, it must be capable of freezing all timers.  [M-TCP]
   implements such operation.  Another possibility is that the
   disconnected link is surrounded by a distributed PEP pair.

   In split connection TCP implementations, a period of link
   disconnection can easily be hidden from the end host on the other
   side of the PEP thus precluding the TCP connection from breaking even
   if the period of link disconnection lasts a very long time; if the
   TCP PEP cannot forward data due to link disconnection, it stops
   receiving data.  Normal TCP flow control then prevents the TCP sender
   from sending more than the TCP advertised window allowed by the PEP.
   Consequently, the PEP and its counterpart behind the disconnected
   link can employ a modified TCP version which retains the state and
   all unacknowledged data segments across the period of disconnection
   and then performs local recovery as the link is reconnected.  The
   period of link disconnection may or may not be hidden from the
   application and user, depending upon what application the user is
   using the TCP connection for.

3.5 Priority-based Multiplexing

   Implementing priority-based multiplexing of data over a slow and
   expensive link may significantly improve the performance and
   usability of the link for selected applications or connections.

   A user behind a slow link would find the link more usable during
   simultaneous data transfers if urgent transfers (e.g., interactive
   connections) could achieve a shorter response time (better
   performance) than less urgent background transfers.  If the
   interactive connections transmit enough data to keep the slow link
   fully utilized, it might be necessary to fully suspend the background
   transfers for a while to ensure timely delivery for the interactive
   connections.

   In flight TCP segments of an end-to-end TCP connection (with low
   priority) cannot be delayed for a long time.  Otherwise, the TCP
   timer at the sending end would expire, resulting in suboptimal
   performance.  However, this kind of operation can be controlled in
   conjunction with a split connection TCP PEP by assigning different
   priorities for different connections (or applications).  A split
   connection PEP implementation allows the PEP in an intermediate node
   to delay the data delivery of a lower-priority TCP flow for an
   unlimited period of time by simply rescheduling the order in which it
   forwards data of different flows to the destination host behind the
   slow link.  This does not have a negative impact on the delayed TCP
   flow as normal TCP flow control takes care of suspending the flow
   between the TCP sender and the PEP, when the PEP is not forwarding
   data for the flow, and resumes it once the PEP decides to continue
   forwarding data for the flow.  This can further be assisted, if the
   protocol stacks on both sides of the slow link implement priority
   based scheduling of connections.
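
   The rescheduling step can be sketched as a simple priority queue
   (a hypothetical scheduler, not a prescribed implementation):
   interactive segments are always drained before bulk segments, and a
   suspended bulk flow is throttled for free by normal TCP flow control
   on its half-connection.

```python
import heapq
import itertools

class PriorityForwarder:
    """Chooses which flow's data the PEP forwards next over the slow link."""

    def __init__(self):
        self._heap = []
        self._order = itertools.count()  # FIFO tie-break within a priority

    def enqueue(self, priority, segment):
        """Lower number = more urgent (e.g., 0 interactive, 9 bulk)."""
        heapq.heappush(self._heap, (priority, next(self._order), segment))

    def next_segment(self):
        """Return the next segment to forward, or None if idle."""
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[2]
```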

   With such a PEP implementation, along with user-controlled
   priorities, the user can assign higher priority for selected
   interactive connection(s) and have much shorter response time for the
   selected connection(s), even if there are simultaneous low priority
   bulk data transfers which in regular end-to-end operation would
   otherwise eat the available bandwidth of the slow link almost
   completely.  These low priority bulk data transfers would then
   proceed nicely during the idle periods of interactive connections,
   allowing the user to keep the slow and expensive link (e.g., wireless
   WAN) fully utilized.

   Other priority-based mechanisms may be applied on shared wireless
   links with more than two terminals.  With shared wireless mediums
   becoming a weak link in Internet QoS architectures, many may turn to
   PEPs to provide extra priority levels across a shared wireless medium
   [SHEL00].  These PEPs are distributed on all nodes of the shared
   wireless medium.  For example, in an 802.11 WLAN this PEP is
   implemented in the access point (base station) and each mobile host.
   One PEP then uses distributed queuing techniques to coordinate
   traffic classes of all nodes.  This is also sometimes called subnet
   bandwidth management.  See [BBKT97] for an example of queuing
   techniques which can be used to achieve this.  This technique can be
   implemented either above or below the IP layer.  Priority treatment
   can typically be specified either by the user or by marking the
   (IPv4) ToS or (IPv6) Traffic Class IP header field.
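
   From an application, the ToS field can be marked with a standard
   socket option; the value 0x10 ("low delay") below is just an
   example.

```python
import socket

# Request low-delay handling for this socket's traffic by marking the
# IPv4 ToS field.  How (or whether) the network honors the mark is up
# to the PEPs and routers along the path.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0x10)
tos = s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
s.close()
```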

3.6 Protocol Booster Mechanisms

   Work in [FMSBMR98] shows a range of other possible PEP mechanisms
   called protocol boosters.  Some of these mechanisms are specific to
   UDP flows.  For example, a PEP may apply asymmetrical methods such as
   extra UDP error detection.  Since the 16 bit UDP checksum is
   optional, it is typically not computed.  However, for links with
   errors, the checksum could be beneficial.  This checksum can be added
   to outgoing UDP packets by a PEP.
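
   The checksum in question is the standard Internet checksum, sketched
   below in the style of RFC 1071.  Note that a real UDP checksum also
   covers a pseudo-header of IP addresses and the protocol number; this
   sketch shows only the core one's-complement arithmetic.

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words, as used by UDP/TCP/IP."""
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # end-around carry
    return ~total & 0xFFFF
```

   A packet carrying its own checksum verifies to zero, which is the
   property a PEP relies on when it fills in a checksum the sender left
   as zero.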
   Symmetrical mechanisms have also been developed.  A Forward Erasure
   Correction (FZC) mechanism can be used with real-time and multicast
   traffic.  The encoding PEP adds a parity packet over a block of
   packets.  Upon reception, the parity is removed and missing data is
   regenerated.  A jitter control mechanism can be implemented at the
   expense of extra latency.  A sending PEP can add a timestamp to
   outgoing packets.  The receiving PEP then delays packets in order to
   reproduce the correct interval.
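
   A toy version of the parity scheme: XOR parity over a block of
   equal-length packets lets the receiver regenerate any single missing
   packet.  (Real FZC codes are more general; this illustrates only the
   single-erasure case.)

```python
def make_parity(block):
    """Encoding PEP: XOR all packets in the block into one parity packet."""
    parity = bytearray(len(block[0]))
    for pkt in block:
        for i, byte in enumerate(pkt):
            parity[i] ^= byte
    return bytes(parity)

def recover(received, parity):
    """Decoding PEP: regenerate the single missing packet by XORing the
    parity packet with every packet that did arrive."""
    missing = bytearray(parity)
    for pkt in received:
        for i, byte in enumerate(pkt):
            missing[i] ^= byte
    return bytes(missing)
```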

4. Implications of Using PEPs

   The following sections describe some of the implications of using
   Performance Enhancing Proxies.

4.1 The End-to-end Argument

   As indicated in [RFC1958], the end-to-end argument [SRC84] is one of
   the architectural principles of the Internet.  The basic argument is
   that, as a first principle, certain required end-to-end functions can
   only be correctly performed by the end systems themselves.  Most of
   the potential negative implications associated with using PEPs are
   related to the possibility of breaking the end-to-end semantics of
   connections.  This is one of the main reasons why PEPs are not
   recommended for general use.

   As indicated in Section 2.5, not all PEP implementations break the
   end-to-end semantics of connections.  Correctly designed PEPs do not
   attempt to replace any application level end-to-end function, but
   only attempt to add performance optimizations to a subpath of the
   end-to-end path between the application endpoints.  Doing this can be
   consistent with the end-to-end argument.  However, a user or network
   administrator adding a PEP to his network configuration should be
   aware of the potential end-to-end implications related to the
   mechanisms being used by the particular PEP implementation.
