2.1.2 Application Layer PEPs
Application layer PEPs operate above the transport layer. Today,
different kinds of application layer proxies are widely used in the
Internet. Such proxies include Web caches and relay Mail Transfer
Agents (MTAs). They typically try to improve performance, service
availability, and reliability in general, in a way which is
applicable in any environment, but they do not necessarily include
any optimizations that are specific to particular link
characteristics.
Application layer PEPs, on the other hand, can be implemented to
improve application protocol as well as transport layer performance
with respect to a particular application being used with a particular
type of link. An application layer PEP may have the same
functionality as the corresponding regular proxy for the same
application (e.g., relay MTA or Web caching proxy) but extended with
link-specific optimizations of the application protocol operation.
Some application protocols employ extraneous round trips, overly
verbose headers and/or inefficient header encoding which may have a
significant impact on performance, in particular, with long delay and
slow links. This unnecessary overhead can be reduced, in general or
for a particular type of link, by using an application layer PEP in
an intermediate node. Some examples of application layer PEPs which
have been shown to improve performance on slow wireless WAN links are
described in [LHKR96] and [CTC+97].
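
To make the header overhead point concrete, the following Python
sketch shows one way a pair of cooperating application layer PEPs
might shrink a verbose, text-based protocol header before it crosses
a slow link. It is only an illustrative sketch; the framing, the
function names and the choice of zlib are assumptions made for the
example, not mechanisms taken from this document.

    import struct
    import zlib

    def encode_for_slow_link(header_text: str) -> bytes:
        # Compress the verbose header and prepend a length field so
        # the peer PEP can delimit it on the far side of the link.
        compressed = zlib.compress(header_text.encode("utf-8"), 9)
        return struct.pack("!I", len(compressed)) + compressed

    def decode_from_slow_link(blob: bytes) -> str:
        # Inverse operation performed by the peer PEP before handing
        # the message to the unmodified client or server.
        (length,) = struct.unpack("!I", blob[:4])
        return zlib.decompress(blob[4:4 + length]).decode("utf-8")
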
2.2 Distribution
A PEP implementation may be integrated, i.e., it comprises a single
PEP component implemented within a single node, or distributed, i.e.,
it comprises two or more PEP components, typically implemented in
multiple nodes. An integrated PEP implementation represents a single
point at which performance enhancement is applied. For example, a
single PEP component might be implemented to provide impedance
matching at the point where wired and wireless links meet.
A distributed PEP implementation is generally used to surround a
particular link for which performance enhancement is desired. For
example, a PEP implementation for a satellite connection may be
distributed between two PEPs located at each end of the satellite
link.
2.3 Implementation Symmetry
A PEP implementation may be symmetric or asymmetric. Symmetric PEPs
use identical behavior in both directions, i.e., the actions taken by
the PEP are independent of the interface on which a packet is
received.
Asymmetric PEPs operate differently in each direction. The direction
can be defined in terms of the link (e.g., from a central site to a
remote site) or in terms of protocol traffic (e.g., the direction of
TCP data flow, often called the TCP data channel, or the direction of
TCP ACK flow, often called the TCP ACK channel). An asymmetric PEP
implementation is generally used at a point where the characteristics
of the links on each side of the PEP differ or with asymmetric
protocol traffic. For example, an asymmetric PEP might be placed at
the intersection of wired and wireless networks or an asymmetric
application layer PEP might be used for the request-reply type of
HTTP traffic. A PEP implementation may also be both symmetric and
asymmetric at the same time with regard to different mechanisms it
employs. (PEP mechanisms are described in Section 3.)
Whether a PEP implementation is symmetric or asymmetric is
independent of whether the PEP implementation is integrated or
distributed. In other words, a distributed PEP implementation might
operate symmetrically at each end of a link (i.e., the two PEPs
function identically). On the other hand, a distributed PEP
implementation might operate asymmetrically, with a different PEP
implementation at each end of the link. Again, this is usually done
with asymmetric links. For example, for a link with an asymmetric
amount of bandwidth available in each direction, the PEP on the end
of the link forwarding traffic in the direction with a large amount
of bandwidth might focus on locally acknowledging TCP traffic in
order to use the available bandwidth. At the same time, the PEP on
the end of the link forwarding traffic in the direction with very
little bandwidth might focus on reducing the amount of TCP
acknowledgement traffic being forwarded across the link (to keep the
link from congesting).
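
As a rough sketch of the acknowledgement-reduction side of such an
asymmetric deployment (the class and method names are invented for
illustration and not defined by this document), a PEP on the narrow
return path might collapse a burst of cumulative ACKs into the most
recent one before forwarding:

    class AckReducer:
        """Keep only the newest cumulative ACK seen since the last
        transmission opportunity on the low-bandwidth direction."""

        def __init__(self):
            self.latest_ack = None

        def on_ack_from_receiver(self, ack_no):
            # Cumulative ACKs never decrease, so older ones carry no
            # additional information for the TCP sender.
            if self.latest_ack is None or ack_no > self.latest_ack:
                self.latest_ack = ack_no

        def next_ack_to_forward(self):
            # Called when the PEP gets a chance to send on the
            # constrained link; returns None if nothing new arrived.
            ack, self.latest_ack = self.latest_ack, None
            return ack
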
2.4 Split Connections
A split connection TCP implementation terminates the TCP connection
received from an end system and establishes a corresponding TCP
connection to the other end system. In a distributed PEP
implementation, this is typically done to allow the use of a third
connection between two PEPs optimized for the link. This might be a
TCP connection optimized for the link or it might be another
protocol, for example, a proprietary protocol running on top of UDP.
Also, the distributed implementation might use a separate connection
between the proxies for each TCP connection or it might multiplex the
data from multiple TCP connections across a single connection between
the PEPs.
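
A minimal sketch of this basic split connection arrangement, assuming
plain TCP is also used between the PEP and the far side (the host
names, port numbers and one-thread-per-direction structure are
illustrative assumptions only):

    import socket
    import threading

    def pipe(src, dst):
        # Copy application data between the two independently managed
        # TCP connections; each has its own flow and congestion
        # control state.
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
        dst.close()

    def split_connection_proxy(listen_port, peer_host, peer_port):
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("", listen_port))
        listener.listen(5)
        while True:
            # Terminate the TCP connection arriving from the end
            # system ...
            local_conn, _ = listener.accept()
            # ... and originate a separate connection toward the
            # other side (the peer PEP or the other end system).
            peer_conn = socket.create_connection((peer_host, peer_port))
            threading.Thread(target=pipe, args=(local_conn, peer_conn),
                             daemon=True).start()
            threading.Thread(target=pipe, args=(peer_conn, local_conn),
                             daemon=True).start()
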
In an integrated PEP split connection TCP implementation, the PEP
again terminates the connection from one end system and originates a
separate connection to the other end system. [I-TCP] documents an
example of a single PEP split connection implementation.
Many integrated PEPs use a split connection implementation in order
to address a mismatch in TCP capabilities between two end systems.
For example, the TCP window scaling option [RFC1323] can be used to
extend the maximum amount of TCP data which can be "in flight" (i.e.,
sent and awaiting acknowledgement). This is useful for filling a
link which has a high bandwidth*delay product. If one end system is
capable of using scaled TCP windows but the other is not, the end
system which is not capable can set up its connection with a PEP on
its side of the high bandwidth*delay link. The split connection PEP
then sets up a TCP connection with window scaling over the link to
the other end system.
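
As a minimal sketch of that last arrangement (the buffer size and
function name are assumptions for illustration), a PEP can typically
obtain window scaling on its own connection across the high
bandwidth*delay link simply by configuring large socket buffers
before connecting, which leads a modern TCP stack to negotiate the
window scale option [RFC1323] during connection setup:

    import socket

    # Illustrative value only: enough buffering to cover the link's
    # bandwidth*delay product, well beyond the 64 KB unscaled limit.
    LINK_BDP_BYTES = 8 * 1024 * 1024

    def connect_across_bdp_link(host, port):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # Buffers must be sized before connect() so that the window
        # scale option can be advertised during the TCP handshake.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, LINK_BDP_BYTES)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, LINK_BDP_BYTES)
        s.connect((host, port))
        return s
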
Split connection TCP implementations can effectively leverage TCP
performance enhancements that are optimal for a particular link but
cannot necessarily be employed safely over the global Internet.
Note that using split connection PEPs does not necessarily exclude
simultaneous use of IP for end-to-end connectivity. If a split
connection is managed per application or per connection and is under
the control of the end user, the user can decide whether a particular
TCP connection or application makes use of the split connection PEP
or whether it operates end-to-end. When a PEP is employed on a last
hop link, end user control is relatively easy to implement.
In effect, application layer proxies for TCP-based applications are
split connection TCP implementations with end systems using PEPs as a
service related to a particular application. Therefore, all
transport (TCP) layer enhancements that are available with split
connection TCP implementations can also be employed with application
layer PEPs in conjunction with application layer enhancements.
2.5 Transparency
Another key characteristic of a PEP is its degree of transparency.
PEPs may operate totally transparently to the end systems, transport
endpoints, and/or applications involved (in a connection), requiring
no modifications to the end systems, transport endpoints, or
applications.
On the other hand, a PEP implementation may require modifications to
both ends in order to be used. In between, a PEP implementation may
require modifications to only one of the ends involved. Either of
these kinds of PEP implementations is non-transparent, at least to
the layer requiring modification.
It is sometimes useful to think of the degree of transparency of a
PEP implementation at four levels: transparency with respect to the
end systems (network-layer transparent PEP), transparency with
respect to the transport endpoints (transport-layer transparent PEP),
transparency with respect to the applications (application-layer
transparent PEP), and transparency with respect to the users. For
example, a user who subscribes to a satellite Internet access service
may be aware that the satellite terminal is providing a performance
enhancing service even though the TCP/IP stack and the applications
in the user's PC are not aware of the PEP which implements it.
Note that the issue of transparency is not the same as the issue of
maintaining end-to-end semantics. For example, a PEP implementation
which simply uses a TCP ACK spacing mechanism maintains the end-to-
end semantics of the TCP connection while a split connection TCP PEP
implementation may not. Yet, both can be implemented transparently
to the transport endpoints at both ends. The implications of not
maintaining the end-to-end semantics, in particular the end-to-end
semantics of TCP connections, are discussed in Section 4.
3. PEP Mechanisms
An obvious key characteristic of a PEP implementation is the
mechanism(s) it uses to improve performance. Some examples of PEP
mechanisms are described in the following subsections. A PEP
implementation might implement more than one of these mechanisms.
3.1 TCP ACK Handling
Many TCP PEP implementations are based on TCP ACK manipulation. The
handling of TCP acknowledgments can differ significantly between
different TCP PEP implementations. The following subsections
describe various TCP ACK handling mechanisms. Many implementations
combine some of these mechanisms and possibly employ some additional
mechanisms as well.
3.1.1 TCP ACK Spacing
In environments where ACKs tend to bunch together, ACK spacing is
used to smooth out the flow of TCP acknowledgments traversing a link.
This improves performance by eliminating bursts of TCP data segments
that the TCP sender would send due to back-to-back arriving TCP
acknowledgments [BPK97].
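
A toy sketch of the idea (the 10 ms interval and the 'forward'
callable are purely illustrative assumptions):

    import time

    def space_acks(acks, forward, interval_s=0.01):
        # Release a bunched-up group of TCP ACKs one at a time with a
        # small gap, so the sender clocks out its data segments
        # smoothly instead of in a single burst.
        for ack in acks:
            forward(ack)            # hand the ACK to the send path
            time.sleep(interval_s)
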
3.1.2 Local TCP Acknowledgements
In some PEP implementations, TCP data segments received by the PEP
are locally acknowledged by the PEP. This is very useful over
network paths with a large bandwidth*delay product as it speeds up
TCP slow start and allows the sending TCP to quickly open up its
congestion window. Local (negative) acknowledgments are often also
employed to trigger local (and faster) error recovery on links with
significant error rates. (See Section 3.1.3.)
Local acknowledgments are automatically employed with split
connection TCP implementations. When local acknowledgments are used,
the burden falls upon the TCP PEP to recover any data which is
dropped after the PEP acknowledges it.
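
The following sketch shows the bookkeeping such a PEP has to perform;
the callables and the byte-based sequence arithmetic are simplifying
assumptions made for illustration:

    class LocalAcker:
        """Acknowledge data toward the TCP sender as soon as the PEP
        receives it, keeping a copy until the real receiver's ACK
        arrives; the PEP now owns recovery of anything lost beyond
        this point."""

        def __init__(self, send_local_ack, forward_segment):
            self.send_local_ack = send_local_ack    # toward the sender
            self.forward_segment = forward_segment  # toward the receiver
            self.buffered = {}                      # seq -> payload bytes

        def on_data_from_sender(self, seq, payload):
            self.buffered[seq] = payload
            self.send_local_ack(seq + len(payload)) # local ACK first
            self.forward_segment(seq, payload)

        def on_ack_from_receiver(self, ack_no):
            # Drop buffered copies the receiver has now acknowledged.
            for seq in [s for s, p in self.buffered.items()
                        if s + len(p) <= ack_no]:
                del self.buffered[seq]
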
3.1.3 Local TCP Retransmissions
A TCP PEP may locally retransmit data segments lost on the path
between the TCP PEP and the receiving end system, thus aiming at
faster recovery from lost data. In order to achieve this the TCP PEP
may use acknowledgments arriving from the end system that receives
the TCP data segments, along with appropriate timeouts, to determine
when to locally retransmit lost data. TCP PEPs sending local
acknowledgments to the sending end system are required to employ
local retransmissions towards the receiving end system.
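
Building on the buffering shown in the previous sketch, a local
retransmission policy might look roughly as follows; the fixed
timeout and the external periodic timer are simplifying assumptions
(a real PEP would measure the local round-trip time):

    import time

    class LocalRetransmitter:
        """Retransmit buffered segments toward the receiving end
        system when no acknowledgment arrives within a local timeout,
        so loss recovery happens over the short local path rather
        than end-to-end."""

        def __init__(self, forward_segment, timeout_s=0.5):
            self.forward_segment = forward_segment
            self.timeout_s = timeout_s    # illustrative fixed timeout
            self.pending = {}             # seq -> (payload, last_sent)

        def on_segment_forwarded(self, seq, payload):
            self.pending[seq] = (payload, time.monotonic())

        def on_ack_from_receiver(self, ack_no):
            for seq in [s for s, (p, _) in self.pending.items()
                        if s + len(p) <= ack_no]:
                del self.pending[seq]

        def on_timer_tick(self):
            now = time.monotonic()
            for seq, (payload, last_sent) in list(self.pending.items()):
                if now - last_sent > self.timeout_s:
                    self.forward_segment(seq, payload)  # local resend
                    self.pending[seq] = (payload, now)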