Cooper & Dilley Informational [Page 8]
RFC 3143 Known HTTP Proxy/Caching Problems June 2001
2. Designers of HTTP/1.1 extensions should consider using
mechanisms other than Vary to prevent false caching.
It is not clear whether the Vary mechanism is widely
implemented in caches; if not, this favors solution #1.
Workaround
A cache could treat the presence of a Vary header in a response as
an implicit "Cache-control: no-store", except for "known" status
codes, even though this is not required by RFC 2616. This would
avoid any transparency failures. "Known status codes" for basic
HTTP/1.1 caches probably include: 200, 203, 206, 300, 301, 410
(although this list should be re-evaluated in light of the problem
discussed here).
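The workaround above amounts to a simple cache-admission rule. The sketch below illustrates it, assuming a minimal Python model; the status-code list comes from the text, but the function name and header representation are illustrative.

```python
# Status codes a basic HTTP/1.1 cache "knows", per the text above.
KNOWN_STATUS_CODES = {200, 203, 206, 300, 301, 410}

def may_store(status, headers):
    """Workaround sketch: treat a Vary header as an implicit
    "Cache-control: no-store" unless the status code is known."""
    has_vary = "vary" in {name.lower() for name in headers}
    if has_vary and status not in KNOWN_STATUS_CODES:
        return False  # refuse to store rather than risk false caching
    return True
```

A cache applying this rule trades some hit ratio (responses with unknown status codes are never stored) for freedom from the transparency failures described above.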
References
See [9] for the specification of the delta encoding extension, as
well as for an example of the use of a Cache-Control extension
instead of "Vary."
Contact
Jeff Mogul <mogul@pa.dec.com>
2.1.2 Client Chaining Loses Valuable Length Meta-Data
Name
Client Chaining Loses Valuable Length Meta-Data
Classification
Performance
Description
HTTP/1.1[3] implementations are prohibited from sending
Content-Length headers with any message whose body has been
Transfer-Encoded. Because 1.0 clients cannot accept chunked
Transfer-Encodings, 1.1 implementations that must forward such a
body to a 1.0 client must do so without the benefit of length
information that was discarded earlier in the chain.
Significance
Low
Implications
Lacking either a chunked transfer encoding or Content-Length
indication creates negative performance implications for how the
proxy must forward the message body.
In the case of response bodies, the forwarding proxy may either
forward the response and close the connection to indicate its end,
or use store-and-forward semantics, buffering the entire response
in order to calculate a Content-Length. The former option defeats
the performance benefits of persistent connections in HTTP/1.1
(and their Keep-Alive cousin in HTTP/1.0) and produces responses
whose length is ambiguously delimited. The latter, store-and-forward
option may not even be feasible given the size of the resource, and
it always introduces increased latency.
Request bodies must undergo the same store-and-forward process, as
HTTP/1.0 request bodies must be delimited by Content-Length
headers. As with response bodies, this may place unacceptable
resource constraints on the proxy, and the request may not be able
to be satisfied.
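The store-and-forward step can be sketched as follows. This is a minimal illustration, not a protocol implementation: the function name is hypothetical, and a real proxy would also have to enforce a buffer-size limit before committing to this path.

```python
def dechunk_for_http10(chunks):
    """Store-and-forward sketch: buffer an entire chunked HTTP/1.1
    body so a Content-Length can be computed for an HTTP/1.0
    recipient. The up-front buffering is exactly the memory and
    latency cost described in the text."""
    body = b"".join(chunks)  # must hold the whole body in memory
    headers = {"Content-Length": str(len(body))}
    return headers, body
```

Only after the last chunk arrives can the Content-Length be written, which is why the proxy cannot begin forwarding until the entire body is buffered.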
Indications
The lack of HTTP/1.0-style persistent connections between 1.0
clients and 1.1 proxies, occurring only when accessing 1.1
servers, is a strong indication of this problem.
Solution(s)
A clarification to the HTTP specification that would allow the
Content-Length of an identity-encoded document, as known at the
origin, to be carried end to end would alleviate this issue.
Workaround
None.
Contact
Patrick McManus <mcmanus@AppliedTheory.com>
2.2 Known Architectural Problems
2.2.1 Interception proxies break client cache directives
Name
Interception proxies break client cache directives
Classification
Architecture
Description
HTTP[3] is designed so that the user agent is aware of whether it
is connected to an origin server or to a proxy. User agents that
believe they are transacting with an origin server, but are really
connected to an interception proxy, may fail to send critical
cache-control information they would otherwise have included in
their request.
Significance
High
Implications
Clients may receive data that is not synchronized with the origin
even when they request an end-to-end refresh, because the request
lacked a "Cache-control: no-cache" or "must-revalidate" header.
These headers have no impact on origin server behavior, so a
browser may omit them if it believes it is connected directly to
the origin. Other related data implications are possible as well.
For instance, data security may be compromised by the omission of
the "private" or "no-store" directives of the Cache-control header
under similar conditions.
Indications
Easily detected by placing fresh (un-expired) content on a caching
proxy while changing the authoritative copy, then requesting an
end-to-end reload of the data through a proxy in both interception
and explicit modes.
Solution(s)
Eliminate the need for interception proxies and IP spoofing, which
will return correct context awareness to the client.
Workaround
Include relevant Cache-Control directives in every request at the
cost of increased bandwidth and CPU requirements.
Contact
Patrick McManus <mcmanus@AppliedTheory.com>
2.2.2 Interception proxies prevent introduction of new HTTP methods
Name
Interception proxies prevent introduction of new HTTP methods
Classification
Architecture
Description
A proxy that receives a request with a method unknown to it is
required to generate an HTTP 501 Error as a response. HTTP
methods are designed to be extensible so there may be applications
deployed with initial support just for the user agent and origin
server. An interception proxy that hijacks requests containing
new methods, destined for servers that implement those methods,
creates a de facto firewall where none may be intended.
Significance
Medium within interception proxy environments.
Implications
Renders new compliant applications useless unless modifications
are made to proxy software. Because new methods are not required
to be globally standardized it is impossible to keep up to date in
the general case.
Solution(s)
Eliminate the need for interception proxies. A client receiving a
501 in a traditional HTTP environment may either choose to repeat
the request to the origin server directly, or perhaps be
configured to use a different proxy.
Workaround
Level 5 switches (sometimes called Level 7 or application layer
switches) can be used to keep HTTP traffic with unknown methods
out of the proxy. However, these devices have heavy buffering
responsibilities, still require TCP sequence number spoofing, and
do not interact well with persistent connections.
The HTTP/1.1 specification allows a proxy to switch over to tunnel
mode when it receives a request with a method or HTTP version it
does not understand how to handle.
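The tunnel-mode fallback amounts to a simple dispatch decision, sketched below. The method list is illustrative (any given proxy's set of supported methods will differ); only the shape of the decision comes from the text.

```python
# Methods this hypothetical proxy knows how to handle.
KNOWN_METHODS = {"GET", "HEAD", "POST", "PUT", "DELETE", "OPTIONS", "TRACE"}

def proxy_action(method):
    """Sketch of the HTTP/1.1 allowance: instead of answering 501
    to an unknown method, the proxy switches to tunnel mode and
    passes the bytes through untouched, so new methods still reach
    servers that implement them."""
    return "proxy" if method in KNOWN_METHODS else "tunnel"
```

Tunneling preserves extensibility at the cost of losing caching for the tunneled exchange, which is usually the right trade for an unknown method.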
Contact
Patrick McManus <mcmanus@AppliedTheory.com>
Henrik Nordstrom <hno@hem.passagen.se> (HTTP/1.1 clarification)
2.2.3 Interception proxies break IP address-based authentication
Name
Interception proxies break IP address-based authentication
Classification
Architecture
Description
Some web servers are not open for public access, but restrict
themselves to accept only requests from certain IP address ranges
for security reasons. Interception proxies alter the source
(client) IP addresses to that of the proxy itself, without the
knowledge of the client/user. This breaks such authentication
mechanisms and denies otherwise-authorized clients access to the
servers.
Significance
Medium
Implications
Creates end user confusion and frustration.
Indications
Users may start to see refused connections to servers after
interception proxies are deployed.
Solution(s)
Use user-based authentication instead of (IP) address-based
authentication.
Workaround
Use IP filters at the intercepting device (L4 switch) to bypass
the proxy for all requests destined for the servers concerned.
Contact
Keith K. Chau <keithc@unitechnetworks.com>
2.2.4 Caching proxy peer selection in heterogeneous networks
Name
Caching proxy peer selection in heterogeneous networks
Classification
Architecture
Description
ICP[4]-based caching proxy peer selection in networks with large
variance in latency and bandwidth between peers can lead to
non-optimal peer selection. For example, take Proxy C with two
siblings, Sib1 and Sib2, and the following (summarized) network
topology:
* Cache C's link to Sib1: 2 Mbit/sec with 300 msec latency
* Cache C's link to Sib2: 64 Kbit/sec with 10 msec latency
ICP[4] does not work well in this context. If a user submits a
request to Proxy C for page P that results in a miss, C will send
an ICP request to Sib1 and Sib2. Assume both siblings have the
requested object P. The ICP_HIT reply will always come from Sib2
before Sib1. However, retrieval of large objects will clearly be
faster from Sib1 than from Sib2.
The problem is more complex because Sib1 and Sib2 cannot have a
100% hit ratio. With a hit rate of 10%, it is more efficient to
use Sib1 for resources larger than 48K. The best choice depends on
at least the hit rate and the link characteristics, and possibly
on other parameters as well.
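The latency/bandwidth tradeoff can be illustrated with a naive transfer-time model: one round-trip latency plus serialization time. Note this simple model omits the hit rate and ICP overhead that the text says drive the 48K break-even figure, so it shows only the qualitative tradeoff, not that exact threshold.

```python
def transfer_time(size_bytes, latency_s, bandwidth_bps):
    """Naive retrieval-time model: link latency plus the time to
    serialize the object onto the wire."""
    return latency_s + (size_bytes * 8) / bandwidth_bps

# Link figures from the example topology in the text.
sib1 = lambda size: transfer_time(size, 0.300, 2_000_000)  # fat, slow link
sib2 = lambda size: transfer_time(size, 0.010, 64_000)     # thin, fast link
```

Under this model Sib2 wins for small objects (latency dominates) while Sib1 wins for large ones (bandwidth dominates), yet first-response selection always picks Sib2.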
Significance
Medium
Implications
By using the first peer to respond, peer selection algorithms do
not optimize retrieval latency for end users. Furthermore, they
cause more work for the high-latency peer, since it must respond
to such requests but will never be chosen to serve content when
the lower-latency peer has a copy.
Indications
Inherent in design of ICP v1, ICP v2, and any cache mesh protocol
that selects peers based upon first response.
This problem is not exhibited by cache digest or other protocols
which (attempt to) maintain knowledge of peer contents and only
hit peers that are believed to have a copy of the requested page.
Solution(s)
This problem is architectural, inherent in the peer selection
protocols themselves.
Workaround
Cache mesh design when using such a protocol should be done in
such a way that there is not a high latency variance among peers.
In the example presented in the above description the high latency
high bandwidth peer could be used as a parent, but should not be
used as a sibling.
Contact
Ivan Lovric <ivan.lovric@cnet.francetelecom.fr>
John Dilley <jad@akamai.com>
2.2.5 ICP Performance
Name
ICP performance
Classification
Architecture(ICP), Performance
Description
ICP[4] exhibits O(n^2) scaling properties, where n is the number
of participating peer proxies. This can lead ICP traffic to
dominate HTTP traffic within a network.
Significance
Medium
Implications
If a proxy has many ICP peers the bandwidth demand of ICP can be
excessive. System managers must carefully regulate ICP peering.
ICP also leads proxies to become homogeneous in what they serve;
if your proxy does not have a document it is unlikely your peers
will have it either. Therefore, ICP traffic requests are largely
unable to locate a local copy of an object (see [6]).
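The O(n^2) growth follows directly from counting queries: every local miss at each of the n proxies triggers a query to each of the other n-1 peers. A back-of-the-envelope sketch, with illustrative parameters:

```python
def icp_queries_per_second(n_peers, miss_rate, req_rate_each):
    """Total ICP queries/sec in a full mesh of n peers: each local
    miss at each proxy fans out to the other n-1 peers, hence the
    (n_peers * (n_peers - 1)) factor -- O(n^2) in the mesh size."""
    return n_peers * req_rate_each * miss_rate * (n_peers - 1)
```

Doubling the mesh size roughly quadruples total ICP traffic, which is why peering must be regulated carefully.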
Indications
Inherent in design of ICP v1, ICP v2.
Solution(s)
This problem is architectural - protocol redesign or replacement
is required to solve it if ICP is to continue to be used.
Workaround
Implementation workarounds exist, for example to turn off use of
ICP, to carefully regulate peering, or to use another mechanism if
available, such as cache digests. A cache digest protocol shares
a summary of cache contents using a Bloom Filter technique. This
allows a cache to estimate whether a peer has a document. Filters
are updated regularly but are not always up-to-date so cannot help
when a spike in popularity occurs. They also increase traffic but
not as much as ICP.
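The cache-digest idea can be sketched with a minimal Bloom filter. This is an illustration of the technique only: real digest protocols differ in hash choice, sizing, and wire format, and the class and parameters below are assumptions.

```python
import hashlib

class CacheDigest:
    """Minimal Bloom-filter sketch of a cache digest: a compact bit
    field summarizing which URLs a peer holds."""

    def __init__(self, bits=8192, hashes=4):
        self.bits, self.hashes = bits, hashes
        self.field = bytearray(bits // 8)

    def _positions(self, url):
        # Derive `hashes` bit positions from the URL.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{url}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.bits

    def add(self, url):
        for p in self._positions(url):
            self.field[p // 8] |= 1 << (p % 8)

    def might_have(self, url):
        # False positives are possible; false negatives are not,
        # until the peer's contents drift from the published digest.
        return all(self.field[p // 8] & (1 << (p % 8))
                   for p in self._positions(url))
```

A proxy consults a peer's digest before fetching, replacing per-request ICP queries with periodic digest exchanges; the staleness between exchanges is the weakness noted above.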
Proxy clustering protocols, which organize proxies into a mesh,
provide another alternative solution. There is ongoing research on
this topic.
Contact
John Dilley <jad@akamai.com>
2.2.6 Caching proxy meshes can break HTTP serialization of content
Name
Caching proxy meshes can break HTTP serialization of content
Classification
Architecture (HTTP protocol)
Description
A caching proxy mesh in which a request may travel different
paths, depending on the state of the mesh and its associated
caches, can break HTTP content serialization: the end user may
receive older content than was seen on an earlier request that
traversed another path through the mesh.
Significance
Medium
Implications
Can cause end user confusion. In some situations (a sibling cache
hit, or an object that has changed state from cacheable to
uncacheable) it may be close to impossible to get the caches
properly updated with the new content.
Indications
Older content is unexpectedly returned from a caching proxy mesh
after some time.
Solution(s)
Work with caching proxy vendors and researchers to find a suitable
protocol for maintaining proxy relations and object state in a
mesh.
Workaround
When designing a hierarchy/mesh, make sure that for each end-
user/URL combination there is only one single path in the mesh
during normal operation.
Contact
Henrik Nordstrom <hno@hem.passagen.se>