Network Working Group                                       B. Carpenter
Request for Comments: 2775                                           IBM
Category: Informational                                    February 2000


                          Internet Transparency

Status of this Memo

   This memo provides information for the Internet community. It does not specify an Internet standard of any kind. Distribution of this memo is unlimited.

Copyright Notice

   Copyright (C) The Internet Society (2000). All Rights Reserved.

Abstract

   This document describes the current state of the Internet from the architectural viewpoint, concentrating on issues of end-to-end connectivity and transparency. It concludes with a summary of some major architectural alternatives facing the Internet network layer. This document was used as input to the IAB workshop on the future of the network layer held in July 1999. For this reason, it does not claim to be complete and definitive, and it refrains from making recommendations.

Table of Contents

   1. Introduction
   2. Aspects of end-to-end connectivity
   2.1 The end-to-end argument
   2.2 End-to-end performance
   2.3 End-to-end address transparency
   3. Multiple causes of loss of transparency
   3.1 The Intranet model
   3.2 Dynamic address allocation
   3.2.1 SLIP and PPP
   3.2.2 DHCP
   3.3 Firewalls
   3.3.1 Basic firewalls
   3.3.2 SOCKS
   3.4 Private addresses
   3.5 Network address translators
   3.6 Application level gateways, relays, proxies, and caches
   3.7 Voluntary isolation and peer networks
   3.8 Split DNS
   3.9 Various load-sharing tricks
   4. Summary of current status and impact
   5. Possible future directions
   5.1 Successful migration to IPv6
   5.2 Complete failure of IPv6
   5.2.1 Re-allocating the IPv4 address space
   5.2.2 Exhaustion
   5.3 Partial deployment of IPv6
   6. Conclusion
   7. Security Considerations
   Acknowledgements
   References
   Author's Address
   Full Copyright Statement

1. Introduction

   "There's a freedom about the Internet: As long as we accept the rules of sending packets around, we can send packets containing anything to anywhere." [Berners-Lee]

   The Internet is experiencing growing pains which are often referred to as "the end-to-end problem". This document attempts to analyse those growing pains by reviewing the current state of the network layer, especially its progressive loss of transparency.
   For the purposes of this document, "transparency" refers to the original Internet concept of a single universal logical addressing scheme, and the mechanisms by which packets may flow from source to destination essentially unaltered.

   The causes of this loss of transparency are partly artefacts of parsimonious allocation of the limited address space available to IPv4, and partly the result of broader issues resulting from the widespread use of TCP/IP technology by businesses and consumers. For example, network address translation is an artefact, but Intranets are not. Thus the way forward must recognise the fundamental changes in the usage of TCP/IP that are driving current Internet growth.

   In one scenario, a complete migration to IPv6 potentially allows the restoration of global address transparency, but without removing firewalls and proxies from the picture. At the other extreme, a total failure of IPv6 leads to complete fragmentation of the network layer, with global connectivity depending on endless patchwork.

   This document does not discuss the routing implications of address space, nor the implications of quality of service management on router state, although both these matters interact with transparency to some extent. It also does not substantively discuss namespace issues.

2. Aspects of end-to-end connectivity

   The phrase "end to end", often abbreviated as "e2e", is widely used in architectural discussions of the Internet. For the purposes of this paper, we first present three distinct aspects of end-to-endness.

2.1 The end-to-end argument

   This is an argument first described in [Saltzer] and reviewed in [RFC 1958], from which an extended quotation follows:

   "The basic argument is that, as a first principle, certain required end-to-end functions can only be performed correctly by the end-systems themselves. A specific case is that any network, however carefully designed, will be subject to failures of transmission at some statistically determined rate. The best way to cope with this is to accept it, and give responsibility for the integrity of communication to the end systems. Another specific case is end-to-end security.

   "To quote from [Saltzer], 'The function in question can completely and correctly be implemented only with the knowledge and help of the application standing at the endpoints of the communication system. Therefore, providing that questioned function as a feature of the communication system itself is not possible. (Sometimes an incomplete version of the function provided by the communication system may be useful as a performance enhancement.)'

   "This principle has important consequences if we require applications to survive partial network failures. An end-to-end protocol design should not rely on the maintenance of state (i.e. information about the state of the end-to-end communication) inside the network. Such state should be maintained only in the endpoints, in such a way that the state can only be destroyed when the endpoint itself breaks (known as fate-sharing). An immediate consequence of this is that datagrams are better than classical virtual circuits. The network's job is to transmit datagrams as efficiently and flexibly as possible. Everything else should be done at the fringes."

   Thus this first aspect of end-to-endness limits what the network is expected to do, and clarifies what the end-system is expected to do. The end-to-end argument underlies the rest of this document.

2.2 End-to-end performance

   Another aspect, in which the behaviour of the network and that of the end-systems interact in a complex way, is performance, in a generalised sense. This is not a primary focus of the present document, but it is mentioned briefly since it is often referred to when discussing end-to-end issues.

   Much work has been done over many years to improve and optimise the performance of TCP. Interestingly, this has led to comparatively minor changes to TCP itself; [STD 7] is still valid apart from minor additions [RFC 1323, RFC 2581, RFC 2018]. However, a great deal of knowledge about good practice in TCP implementations has built up, and the queuing and discard mechanisms in routers have been fine-tuned to improve system performance in congested conditions. Unfortunately, all this experience in TCP performance does not help with transport protocols that do not exhibit TCP-like response to congestion [RFC 2309].
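   To make the phrase "TCP-like response to congestion" concrete, the following minimal Python sketch illustrates the additive increase / multiplicative decrease behaviour standardised in [RFC 2581]. It is a simplified illustration rather than an implementation of TCP (fast retransmit and fast recovery are omitted): the window grows gradually while acknowledgements arrive and collapses when loss signals congestion, which is exactly the back-off that some newer transports lack.

   # Simplified sketch of TCP-like congestion response (RFC 2581 style):
   # additive increase while acknowledgements arrive, multiplicative
   # decrease when loss is detected.  Fast retransmit/recovery omitted.

   MSS = 1460  # sender maximum segment size in bytes (a typical value)

   class AimdSender:
       def __init__(self, ssthresh=64 * 1024):
           self.cwnd = MSS           # congestion window: start with one segment
           self.ssthresh = ssthresh  # slow start threshold

       def on_ack(self):
           """Grow the window as data is acknowledged."""
           if self.cwnd < self.ssthresh:
               self.cwnd += MSS                     # slow start: roughly doubles per RTT
           else:
               self.cwnd += MSS * MSS // self.cwnd  # congestion avoidance: about 1 MSS per RTT

       def on_timeout(self):
           """Back off sharply when a retransmission timeout signals congestion."""
           self.ssthresh = max(self.cwnd // 2, 2 * MSS)
           self.cwnd = MSS           # restart from a single segment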
   Also, the requirement for specified quality of service for different applications and/or customers has led to much new development, especially the Integrated Services [RFC 1633, RFC 2210] and Differentiated Services [RFC 2475] models. At the same time new transport-related protocols have appeared [RFC 1889, RFC 2326] or are in discussion in the IETF.

   It should also be noted that since the speed of light is not set by an IETF standard, our current notions of end-to-end performance will be largely irrelevant to interplanetary networking.

   Thus, despite the fact that performance and congestion issues for TCP are now quite well understood, the arrival of QOS mechanisms and of new transport protocols raises new questions about end-to-end performance, but these are not further discussed here.

2.3 End-to-end address transparency

   When the catenet concept (a network of networks) was first described by Cerf in 1978 [IEN 48] following an earlier suggestion by Pouzin in 1974 [CATENET], a clear assumption was that a single logical address space would cover the whole catenet (or Internet as we now know it). This applied not only to the early TCP/IP Internet, but also to the Xerox PUP design, the OSI connectionless network design, XNS, and numerous other proprietary network architectures.

   This concept had two clear consequences - packets could flow essentially unaltered throughout the network, and their source and destination addresses could be used as unique labels for the end systems.

   The first of these consequences is not absolute. In practice changes can be made to packets in transit. Some of these are reversible at the destination (such as fragmentation and compression). Others may be irreversible (such as changing type of service bits or decrementing a hop limit), but do not seriously obstruct the end-to-end principle of Section 2.1. However, any change made to a packet in transit that requires per-flow state information to be kept at an intermediate point would violate the fate-sharing aspect of the end-to-end principle.

   The second consequence, using addresses as unique labels, was in a sense a side-effect of the catenet concept. However, it was a side-effect that came to be highly significant. The uniqueness and durability of addresses have been exploited in many ways, in particular by incorporating them in transport identifiers. Thus they have been built into transport checksums, cryptographic signatures, Web documents, and proprietary software licence servers.

   [RFC 2101] explores this topic in some detail. Its main conclusion is that IPv4 addresses can no longer be assumed to be either globally unique or invariant, and any protocol or applications design that assumes these properties will fail unpredictably.

   Work in the IAB and the NAT working group [NAT-ARCH] has analysed the impact of one specific cause of non-uniqueness and non-invariance, i.e., network address translators. Again the conclusion is that many applications will fail, unless they are specifically adapted to avoid the assumption of address transparency. One form of adaptation is the insertion of some form of application level gateway, and another form is for the NAT to modify payloads on the fly, but in either case the adaptation is application-specific.
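   A small concrete example of why address rewriting cannot be transparent to the transport layer: the TCP and UDP checksums are computed over a pseudo-header that includes the IPv4 source and destination addresses, so a translator that rewrites an address must also adjust the transport checksum, quite apart from any addresses carried inside application payloads. The following Python sketch is illustrative only; the helper names and addresses are invented for the example.

   # Illustrative sketch: the TCP/UDP checksum covers a pseudo-header
   # containing the IPv4 source and destination addresses, so rewriting
   # an address changes the checksum the receiver expects.

   import socket
   import struct

   def internet_checksum(data: bytes) -> int:
       """16-bit one's-complement sum used by IP, TCP and UDP."""
       if len(data) % 2:
           data += b"\x00"
       total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
       while total >> 16:
           total = (total & 0xFFFF) + (total >> 16)
       return ~total & 0xFFFF

   def pseudo_header(src: str, dst: str, proto: int, length: int) -> bytes:
       """IPv4 pseudo-header prepended to the transport segment for checksumming."""
       return (socket.inet_aton(src) + socket.inet_aton(dst)
               + struct.pack("!BBH", 0, proto, length))

   segment = b"example transport segment"
   original = internet_checksum(
       pseudo_header("10.0.0.1", "192.0.2.7", 6, len(segment)) + segment)
   translated = internet_checksum(
       pseudo_header("203.0.113.5", "192.0.2.7", 6, len(segment)) + segment)
   assert original != translated  # the translator must recompute or adjust the checksum

   Addresses carried in the payload itself are invisible to this checksum, which is why that case needs an application level gateway or application-specific payload modification, as noted above.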
   Non-transparency of addresses is part of a more general phenomenon. We have to recognise that the Internet has lost end-to-end transparency, and this requires further analysis.

3. Multiple causes of loss of transparency

   This section describes various recent inventions that have led to the loss of end-to-end transparency in the Internet.

3.1 The Intranet model

   Underlying a number of the specific developments mentioned below is the concept of an "Intranet", loosely defined as a private corporate network using TCP/IP technology, and connected to the Internet at large in a carefully controlled manner. The Intranet is presumed to be used by corporate employees for business purposes, and to interconnect hosts that carry sensitive or confidential information. It is also held to a higher standard of operational availability than the Internet at large. Its usage can be monitored and controlled, and its resources can be better planned and tuned than those of the public network. These arguments of security and resource management have ensured the dominance of the Intranet model in most corporations and campuses.

   The emergence of the Intranet model has had a profound effect on the notion of application transparency. Many corporate network managers feel it is for them alone to determine which applications can traverse the Internet/Intranet boundary. In this world view, address transparency may seem to be an unimportant consideration.

3.2 Dynamic address allocation

3.2.1 SLIP and PPP

   It is to be noted that with the advent of vast numbers of dial-up Internet users, whose addresses are allocated at dial-up time, and whose traffic may be tunneled back to their home ISP, the actual IP addresses of such users are purely transient. During their period of validity they can be relied on end-to-end, but they must be forgotten at the end of every session. In particular they can have no permanent association with the domain name of the host borrowing them.

3.2.2 DHCP

   Similarly, LAN-based users of the Internet today frequently use DHCP to acquire a new address at system restart, so here again the actual value of the address is potentially transient and must not be stored between sessions.
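   In both the dial-up and the DHCP case, the practical consequence for applications is the same: bind to a name and resolve it afresh for each session, rather than storing the numeric address. The following minimal Python sketch shows that pattern; the host name is a placeholder.

   # Illustrative sketch: when hosts obtain transient addresses (PPP
   # dial-up, DHCP leases), applications should look the name up at
   # connection time and never cache the numeric address between
   # sessions.  The host name below is a placeholder.

   import socket

   def connect_by_name(hostname: str, port: int) -> socket.socket:
       """Resolve the name afresh for this session only."""
       last_error = OSError("no addresses found for %s" % hostname)
       for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
               hostname, port, type=socket.SOCK_STREAM):
           sock = socket.socket(family, socktype, proto)
           try:
               sock.connect(sockaddr)
               return sock
           except OSError as exc:
               sock.close()
               last_error = exc
       raise last_error

   # Usage: a fresh lookup every time, so a changed address does no harm.
   # conn = connect_by_name("host.example.com", 80)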
3.3 Firewalls

3.3.1 Basic firewalls

   Intranet managers have a major concern about security: unauthorised traffic must be kept out of the Intranet at all costs. This concern led directly to the firewall concept (a system that intercepts all traffic between the Internet and the Intranet, and only lets through traffic that has been explicitly authorised).
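   Conceptually, such a basic firewall behaves like an allow list applied to every packet crossing the Internet/Intranet boundary. The Python sketch below illustrates the idea only; the address prefix, ports and rules are invented for the example, and real firewalls typically also track connection state so that replies to authorised outbound traffic are admitted.

   # Illustrative sketch of a basic packet-filtering firewall: anything
   # not explicitly authorised is dropped at the Internet/Intranet
   # boundary.  The prefix, ports and rules are invented for the example.

   import ipaddress

   INTRANET = ipaddress.ip_network("10.0.0.0/8")  # example private prefix

   # (direction, protocol, destination port) tuples the manager authorises
   ALLOWED = {
       ("outbound", "tcp", 80),   # web browsing from the Intranet
       ("outbound", "tcp", 443),
       ("inbound",  "tcp", 25),   # mail delivered to the corporate relay
   }

   def permit(src_ip: str, dst_ip: str, proto: str, dst_port: int) -> bool:
       """Let a packet through only if a rule explicitly authorises it."""
       src_inside = ipaddress.ip_address(src_ip) in INTRANET
       dst_inside = ipaddress.ip_address(dst_ip) in INTRANET
       if src_inside and dst_inside:
           return True    # purely internal traffic is not filtered here
       if src_inside:
           direction = "outbound"
       elif dst_inside:
           direction = "inbound"
       else:
           return False   # transit traffic is never carried
       return (direction, proto, dst_port) in ALLOWED

   assert permit("10.1.2.3", "198.51.100.9", "tcp", 443) is True   # authorised outbound
   assert permit("198.51.100.9", "10.1.2.3", "tcp", 139) is False  # unsolicited inbound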