Network Working Group                                           K. Moore
Request for Comments: 3205                       University of Tennessee
BCP: 56                                                    February 2002
Category: Best Current Practice

                    On the use of HTTP as a Substrate
Status of this Memo
This document specifies an Internet Best Current Practices for the
Internet Community, and requests discussion and suggestions for
improvements. Distribution of this memo is unlimited.
Copyright Notice
Copyright (C) The Internet Society (2002). All Rights Reserved.
Abstract
Recently there has been widespread interest in using Hypertext
Transfer Protocol (HTTP) as a substrate for other applications-level
protocols. This document recommends technical particulars of such
use, including use of default ports, URL schemes, and HTTP security
mechanisms.
1. Introduction
Recently there has been widespread interest in using Hypertext
Transfer Protocol (HTTP) [1] as a substrate for other applications-
level protocols. Various reasons cited for this interest have
included:
o familiarity and mindshare,
o compatibility with widely deployed browsers,
o ability to reuse existing servers and client libraries,
o ease of prototyping servers using CGI scripts and similar
extension mechanisms,
o ability to use existing security mechanisms such as HTTP digest
authentication [2] and SSL or TLS [3],
o the ability of HTTP to traverse firewalls, and
o cases where a server often needs to support HTTP anyway.
Moore Best Current Practice [Page 1]
RFC 3205 HTTP Layering February 2002
The Internet community has a long tradition of protocol reuse, dating
back to the use of Telnet [4] as a substrate for FTP [5] and SMTP
[6]. However, the recent interest in layering new protocols over
HTTP has raised a number of questions about when such use is
appropriate, and about the proper way to use HTTP in contexts where it
is appropriate.
Specifically, for a given application that is layered on top of HTTP:
o Should the application use a different port than the HTTP default
of 80?
o Should the application use traditional HTTP methods (GET, POST,
etc.) or should it define new methods?
o Should the application use http: URLs or define its own prefix?
o Should the application define its own MIME-types, or use something
that already exists (like registering a new type of MIME-directory
structure)?
This memo recommends certain design decisions in answer to these
questions.
This memo is intended as advice and recommendation for protocol
designers, working groups, implementors, and the IESG, rather than as
a strict set of rules which must be adhered to in all cases.
Accordingly, the capitalized key words defined in RFC 2119, which are
intended to indicate conformance to a specification, are not used in
this memo.
2. Issues Regarding the Design Choice to use HTTP
Despite the advantages listed above, it's worth asking whether HTTP
should be used at all and, if so, whether the entire HTTP protocol is
needed.
2.1 Complexity
HTTP started out as a simple protocol, but quickly became much more
complex due to the addition of several features unanticipated by its
original design. These features include persistent connections, byte
ranges, content negotiation, and cache support. All of these are
useful for traditional web applications but may not be useful for the
layered application. The need to support (or circumvent) these
features can add additional complexity to the design and
implementation of a protocol layered on top of HTTP. Even when HTTP
can be "profiled" to minimize implementation overhead, the effort of
specifying such a profile might be more than the effort of specifying
a purpose-built protocol which is better suited to the task at hand.
Even if existing HTTP client and server code can often be re-used,
the additional complexity of layering something over HTTP vs. using a
purpose-built protocol can increase the number of interoperability
problems.
2.2 Overhead
Further, although HTTP can be used as the transport for a "remote
procedure call" paradigm, HTTP's protocol overhead, along with the
connection setup overhead of TCP, can make HTTP a poor choice. A
protocol based on UDP, or with both UDP and TCP variants, should be
considered if the payloads are very likely to be small (less than a
few hundred bytes) for the foreseeable future. This is especially
true if the protocol might be heavily used, or if it might be used
over slow or expensive links.
On the other hand, the connection setup overhead can become
negligible if the layered protocol can utilize HTTP/1.1's persistent
connections, and if the same client and server are likely to perform
several transactions during the time the HTTP connection is open.
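The fixed per-request cost is easy to see concretely. The following sketch (the host, path, and payload are hypothetical examples, not taken from any particular application) builds a minimal RPC-style POST request and compares HTTP framing bytes to payload bytes:

```python
# Sketch: per-request HTTP overhead for a small RPC-style payload.
# The host, path, and JSON payload below are hypothetical examples.
payload = b'{"op":"ping"}'  # 13-byte application payload

request = (
    b"POST /rpc HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Content-Type: application/json\r\n"
    b"Content-Length: " + str(len(payload)).encode() + b"\r\n"
    b"\r\n"
) + payload

header_bytes = len(request) - len(payload)
print(f"payload: {len(payload)} bytes, headers: {header_bytes} bytes")
# For payloads of a few dozen bytes, HTTP framing alone can be several
# times the payload size -- before counting the TCP three-way handshake
# or any response headers.
```

With a persistent HTTP/1.1 connection, the TCP setup cost is paid once and amortized over all requests on that connection, though the per-request header cost remains.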
2.3 Security
Although HTTP appears at first glance to be one of the few "mature"
Internet protocols that can provide good security, there are many
applications for which neither HTTP's digest authentication nor TLS
are sufficient by themselves.
Digest authentication requires a secret (e.g., a password) to be
shared between client and server. This further requires that each
client know the secret to be used with each server, but it does not
provide any means of securely transmitting such secrets between the
parties. Shared secrets can work fine for small groups where
everyone is physically co-located; they don't work as well for large
or dispersed communities of users. Further, if the server is
compromised a large number of secrets may be exposed, which is
especially dangerous if the same secret (or password) is used for
several applications. (Similar concerns exist with TLS-based clients
or servers - if a private key is compromised then the attacker can
impersonate the party whose key it has.)
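The shared-secret dependence is visible in the computation itself. A minimal sketch of the Digest "response" value (RFC 2617, qop=auth), using the worked example values from that RFC: the password never crosses the wire, but both sides must already hold it in order to compute HA1.

```python
# Sketch of the HTTP Digest response computation (RFC 2617, qop=auth),
# using the worked example values published in that RFC.
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

username, realm, password = "Mufasa", "testrealm@host.com", "Circle Of Life"
nonce = "dcd98b7102dd2f0e8b11d0f600bfb0c093"
cnonce, nc, qop = "0a4f113b", "00000001", "auth"
method, uri = "GET", "/dir/index.html"

ha1 = md5_hex(f"{username}:{realm}:{password}")  # the shared secret enters here
ha2 = md5_hex(f"{method}:{uri}")
response = md5_hex(f"{ha1}:{nonce}:{nc}:{cnonce}:{qop}:{ha2}")
print(response)
# -> 6629fae49393a05397450978507c4ef1 (the value given in RFC 2617's example)
```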
TLS and its predecessor SSL were originally designed to authenticate
web servers to clients, so that a user could be assured (for example)
that his credit card number was not being sent to an imposter.
However, many applications need to authenticate clients to servers,
or to provide mutual authentication of client and server. TLS does
have a capability to provide authentication in each direction, but
such authentication may or may not be suitable for a particular
application.
Web browsers which support TLS or SSL are typically shipped with the
public keys of several certificate authorities (CAs) "wired in" so
that they can verify the identity of any server whose public key was
signed by one of those CAs. For this to work well, every secure web
server's public key has to be signed by one of the CAs whose keys are
wired into popular browsers. This deployment model works when there
are a (relatively) small number of servers whose identities can be
verified, and their public keys signed, by the small number of CAs
whose keys are included in a small number of different browsers.
This scheme does not work as well to authenticate millions of
potential clients to servers. It would take a much larger number of
CAs to do the job, each of which would need to be widely trusted by
servers. Those CAs would also have a more difficult time verifying
the identities of (large numbers of) ordinary users than they do in
verifying the identities of (a smaller number of) commercial and
other enterprises that need to run secure web servers.
Also, in a situation where there were a large number of clients
authenticating with TLS, it seems unlikely that there would be a set
of CAs whose keys were trusted by every server. A client that
potentially needed to authenticate to multiple servers would
therefore need to be configured as to which key to use with which
server when attempting to establish a secure connection to the
server.
For the reasons stated above, client authentication is rarely used
with TLS. A common technique is to use TLS to authenticate the
server to the client and to establish a private channel, and for the
client to authenticate to the server using some other means - for
example, a username and password using HTTP basic or digest
authentication.
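The "other means" most often paired with a TLS channel is HTTP basic authentication. A sketch of what the client actually sends (the credentials below are hypothetical) shows why TLS is a prerequisite:

```python
# Sketch: constructing an HTTP Basic Authorization header.
# The username and password below are hypothetical.
import base64

username, password = "alice", "s3cret"
token = base64.b64encode(f"{username}:{password}".encode()).decode()
header = f"Authorization: Basic {token}"
print(header)
# Base64 is an encoding, not encryption: anyone who sees this header
# can recover the password, which is why basic authentication is only
# acceptable inside a TLS-protected connection.
```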
For any application that requires privacy, the 40-bit ciphersuites
provided by some SSL implementations (to conform to outdated US
export regulations or to regulations on the use or export of
cryptography in other countries) are unsuitable. Even 56-bit DES
encryption, which is required of conforming TLS implementations, has
been broken in a matter of days with a modest investment in
resources. So if TLS is chosen it may be necessary to discourage use
of small key lengths, or of weak ciphersuites, in order to provide
adequate privacy assurance. If TLS is used to provide privacy for
passwords sent by clients then it is especially important to support
longer keys.
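In a modern TLS stack, the practical way to rule out export-grade and other weak ciphersuites is to pin a minimum protocol version (40-bit export suites and single-DES were dropped from the TLS 1.2 specification) and, if desired, restrict the cipher list. A sketch using Python's standard library:

```python
# Sketch: excluding weak ciphersuites in a Python TLS client.
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3 / TLS 1.0 / 1.1
# Optionally restrict the OpenSSL cipher list further, e.g.:
# ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")
print(ctx.minimum_version)
```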
None of the above should be taken to mean that either digest
authentication or TLS are generally inferior to other authentication
systems, or that they are unsuitable for use in other applications
besides HTTP. Many of the limitations of TLS and digest
authentication also apply to other authentication and privacy
systems. The point here is that neither TLS nor digest
authentication is a "magic pixie dust" solution to authentication or
privacy. In every case, an application's designers must carefully
determine the application's users' requirements for authentication
and privacy before choosing an authentication or privacy mechanism.
Note also that TLS can be used with other TCP-based protocols, and
there are SASL [7] mechanisms similar to HTTP's digest
authentication. So it is not necessary to use HTTP in order to
benefit from either TLS or digest-like authentication. However, HTTP
APIs may already support TLS and/or digest.
2.4 Compatibility with Proxies, Firewalls, and NATs
One oft-cited reason for the use of HTTP is its ability to pass
through proxies, firewalls, or network address translators (NATs).
One unfortunate consequence of firewalls and NATs is that they make
it harder to deploy new Internet applications, by requiring explicit
permission (or even a software upgrade of the firewall or NAT) to
accommodate each new protocol. The existence of firewalls and NATs
creates a strong incentive for protocol designers to layer new
applications on top of existing protocols, including HTTP.
However, if a site's firewall prevents the use of unknown protocols,
this is presumably a conscious policy decision on the part of the
firewall administrator. While it is arguable that such policies are
of limited value in enhancing security, this is beside the point -
well-known port numbers are quite useful for a variety of purposes,