      The following actions are proposed:

      A)   Time Line

           Construct a specific set of estimates for the time at which
           the various problems above will arise, and construct a
           corresponding time-line for development and deployment of a
           new addressing/routing architecture.  Use this time line as a
           basis for evaluating specific proposals for changes.  This is
           a matter for the IETF.

      B)   New Address Format

           Explore the options for a next generation address format and
           develop a plan for migration.  Specifically, construct a
           prototype gateway that does address mapping.  Understand the
           complexity of this task, to guide our thinking about
           migration options.

      C)   Routing on ADs

           Take steps to make network aggregates (ADs) the basis of
           routing.  In particular, explore the several options for a
           global table that maps network numbers to ADs.  This is a
           matter for the IETF.

      D)   Policy-Based Routing

           Continue the current work on policy-based routing.  There
           are several specific objectives.

           -    Seek ways to control the complexity of setting policy
                (this is a human interface issue, not an algorithm
                complexity issue).

           -    Understand better the issues of maintaining connection
                state in gateways.

           -    Understand better the issues of connection state setup.



Clark, Chapin, Cerf, Braden, & Hobby                            [Page 8]

RFC 1287            Future of Internet Architecture        December 1991


      E)   Research on Further Aggregation

           Explore, as a research activity, how ADs should be aggregated
           into still larger routing elements.

           -    Consider whether the architecture should define the
                "role" of an AD or an aggregate.

           -    Consider whether one universal routing method or
                distinct methods should be used inside and outside ADs
                and aggregates.

      Existing projects planned for DARTnet will help resolve several of
      these issues: state in gateways, state setup, address mapping,
      accounting and so on.  Other experiments in the R&D community also
      bear on this area.
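
      As a rough illustration of the global table contemplated in item
      C, the following sketch (with invented AD names and network
      numbers) shows how routing keyed on ADs would shrink the
      inter-domain table:

```python
# Hypothetical sketch of item C's "global table that maps network
# numbers to ADs".  All AD names and network numbers are invented.

NET_TO_AD = {
    "192.0.2.0":    "AD-campus-example",
    "198.51.100.0": "AD-regional-example",
    "203.0.113.0":  "AD-campus-example",
}

def ad_for_network(net):
    """Look up the AD containing a network number (None if unknown)."""
    return NET_TO_AD.get(net)

# Inter-AD routing is then keyed by AD, not by the (much larger) set
# of individual network numbers.
AD_ROUTES = {
    "AD-campus-example":   "gateway-1",
    "AD-regional-example": "gateway-2",
}

def next_hop(net):
    """Route on the aggregate: network number -> AD -> next hop."""
    return AD_ROUTES.get(ad_for_network(net))
```

      Two networks in the same AD share one routing-table entry; only
      the (comparatively stable) network-to-AD mapping must be global.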

3.  MULTI-PROTOCOL ARCHITECTURE

   Changing the Internet to support multiple protocol suites leads to
   four specific architectural questions:

   o    How exactly will we define "the Internet"?

   o    How would we architect an Internet with n>1 protocol suites,
        regardless of what the suites are?

   o    Should we architect for partial or filtered connectivity?

   o    How should we add explicit support for application gateways to
        the architecture?

   3.1  What is the "Internet"?

      It is very difficult to deal constructively with the issue of "the
      multi-protocol Internet" without first determining what we believe
      "the Internet" is (or should be).   We distinguish "the Internet",
      a set of communicating systems, from "the Internet community", a
      set of people and organizations.  Most people would accept a loose
      definition of the latter as "the set of people who believe
      themselves to be part of the Internet community".  However, no
      such "sociological" definition of the Internet itself is likely to
      be useful.

      Not too long ago, the Internet was defined by IP connectivity (IP
      and ICMP were - and still are - the only "required" Internet
      protocols).  If I could PING you, and you could PING me, then we
      were both on the Internet, and a satisfying working definition of
      the Internet could be constructed as a roughly transitive closure
      of IP-speaking systems.  This model of the Internet was simple,
      uniform, and - perhaps most important - testable.  The IP-
      connectivity model clearly distinguished systems that were "on the
      Internet" from those that were not.
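
      That old definition was testable precisely because it was a
      reachability computation.  The following toy sketch (hostnames
      invented) renders it directly: a system is "on the Internet"
      exactly when it lies in the transitive closure of IP-level
      adjacencies from some reference host:

```python
# Toy rendering of the IP-connectivity definition: membership in the
# transitive closure of IP-speaking systems, computed by BFS from a
# reference host.  The hosts and links below are invented.

from collections import deque

LINKS = {
    "ref-host": {"a"},
    "a": {"ref-host", "b"},
    "b": {"a"},
    "island": {"island2"},     # connected to each other, not to us
    "island2": {"island"},
}

def on_the_internet(host, root="ref-host"):
    """True iff host is reachable from the reference host."""
    seen, queue = {root}, deque([root])
    while queue:
        node = queue.popleft()
        for peer in LINKS.get(node, ()):
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return host in seen
```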

      As the Internet has grown and the technology on which it is based
      has gained widespread commercial acceptance, the sense of what it
      means for a system to be "on the Internet" has changed, to
      include:

      *    Any system that has partial IP connectivity, restricted by
           policy filters.

      *    Any system that runs the TCP/IP protocol suite, whether or
           not it is actually accessible from other parts of the
           Internet.

      *    Any system that can exchange RFC-822 mail, without the
           intervention of mail gateways or the transformation of mail
           objects.

      *    Any system with e-mail connectivity to the Internet, whether
           or not a mail gateway or mail object transformation is
           required.

      These definitions of "the Internet" are still based on the
      original concept of connectivity, just "moving up the stack".

      We propose instead a new definition of the Internet, based on a
      different unifying concept:

      *    "Old" Internet concept:  IP-based.

           The organizing principle is the IP address, i.e., a common
           network address space.

      *    "New" Internet concept:  Application-based.

           The organizing principle is the domain name system and
           directories, i.e., a common - albeit necessarily multiform -
           application name space.

      This suggests that the idea of "connected status", which has
      traditionally been tied to the IP address (via network numbers),
      should instead be coupled to the names and related identifying
      information contained in the distributed Internet directory.
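
      To make the contrast concrete, here is a toy sketch (all entries
      invented) in which "connected status" is derived from a directory
      entry keyed by name, rather than from an address:

```python
# Sketch of name-based "connected status": a system's status derives
# from its entry in a distributed directory, not from IP reachability.
# The names and entries are invented; a real system would sit behind
# DNS or an X.500-style directory.

DIRECTORY = {
    "host.example.org":  {"ip": "192.0.2.1"},   # full IP connectivity
    "relay.example.net": {"ip": None},          # mail-only member
}

def connected_status(name):
    """Classify a system by its directory entry, keyed on its name."""
    entry = DIRECTORY.get(name)
    if entry is None:
        return "not in the Internet community"
    if entry["ip"] is not None:
        return "IP-connected"
    return "application-connected"
```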

      A naming-based definition of "the Internet" implies a much larger
      Internet community, and a much more dynamic (and unpredictable)
      operational Internet.  This argues for an Internet architecture
      based on adaptability (to a broad spectrum of possible future
      developments) rather than anticipation.

   3.2  A Process-Based Model of the Multiprotocol Internet

      Rather than specify a particular "multi-protocol Internet",
      embracing a pre-determined number of specific protocol
      architectures, we propose instead a process-oriented model of the
      Internet, which accommodates different protocol architectures
      according to the traditional "things that work" principle.

      A process-oriented Internet model includes, as a basic postulate,
      the assertion that there is no *steady-state* "multi-protocol
      Internet".  The most basic forces driving the evolution of the
      Internet are pushing it not toward multi-protocol diversity, but
      toward the original state of protocol-stack uniformity (although
      it is unlikely that it will ever actually get there).  We may
      represent this tendency of the Internet to evolve towards
      homogeneity as the most "thermodynamically stable" state by
      describing four components of a new process-based Internet
      architecture:

      Part 1: The core Internet architecture

           This is the traditional TCP/IP-based architecture.  It is the
           "magnetic center" of Internet evolution, recognizing that (a)
           homogeneity is still the best way to deal with diversity in
           an internetwork, and (b) IP connectivity is still the best
           basic model of the Internet (whether or not the actual state
           of IP ubiquity can be achieved in practice in a global
           operational Internet).

      "In the beginning", the Internet architecture consisted only of
      this first part.  The success of the Internet, however, has
      carried it beyond its uniform origins;  ubiquity and uniformity
      have been sacrificed in order to greatly enrich the Internet "gene
      pool".

      Two additional parts of the new Internet architecture express the
      ways in which the scope and extent of the Internet have been
      expanded.

      Part 2: Link sharing

           Here physical resources -- transmission media, network
           interfaces, perhaps some low-level (link) protocols -- are
           shared by multiple, non-interacting protocol suites.  This
           part of the architecture recognizes the necessity and
           convenience of coexistence, but is not concerned with
           interoperability;  it has been called "ships in the night" or
           "S.I.N.".

           Coexisting protocol suites are not, of course, genuinely
           isolated in practice;  the ships passing in the night raise
           issues of management, non-interference, coordination, and
           fairness in real Internet systems.

      Part 3: Application interoperability

           Absent ubiquity of interconnection (i.e., interoperability of
           the "underlying stacks"), it is still possible to achieve
           ubiquitous application functionality by arranging for the
           essential semantics of applications to be conveyed among
           disjoint communities of Internet systems.  This can be
           accomplished by application relays, or by user agents that
           present a uniform virtual access method to different
           application services by expressing only the shared semantics.

           This part of the architecture emphasizes the ultimate role of
           the Internet as a basis for communication among applications,
           rather than as an end in itself.  To the extent that it
           enables a population of applications and their users to move
           from one underlying protocol suite to another without
           unacceptable loss of functionality, it is also a "transition
           enabler".
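
      The relay idea can be caricatured in a few lines.  In this sketch
      (both message formats are invented), only the shared semantics of
      a mail-like message survive the crossing between two disjoint
      protocol communities:

```python
# Toy application relay in the sense of Part 3: convey only the shared
# semantics (sender, recipient, body) between two suites.  Both field
# vocabularies below are invented for illustration.

def suite_a_to_common(msg_a):
    """Extract the shared semantics from suite A's format."""
    return {"from": msg_a["From"], "to": msg_a["To"], "body": msg_a["Body"]}

def common_to_suite_b(common):
    """Re-express the shared semantics in suite B's format."""
    return {"originator": common["from"],
            "recipient": common["to"],
            "content": common["body"]}

def relay(msg_a):
    """Relay a message: anything outside the shared semantics is lost."""
    return common_to_suite_b(suite_a_to_common(msg_a))
```

      The loss of any suite-specific fields in transit is exactly the
      "loss of functionality" the text ascribes to part 3.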

      Adding parts 2 and 3 to the original Internet architecture is at
      best a mixed blessing.  Although they greatly increase the scope
      of the Internet and the size of the Internet community, they also
      introduce significant problems of complexity, cost, and
      management, and they usually represent a loss of functionality
      (particularly with respect to part 3).  Parts 2 and 3 represent
      unavoidable, but essentially undesirable, departures from the
      homogeneity represented by part 1.  Some functionality is lost,
      and additional system complexity and cost is endured, in order to
      expand the scope of the Internet.  In a perfect world, however,
      the Internet would evolve and expand without these penalties.

      There is a tendency, therefore, for the Internet to evolve in
      favor of the homogeneous architecture represented by part 1, and
      away from the compromised architectures of parts 2 and 3.  Part 4
      expresses this tendency.

      Part 4: Hybridization/Integration.

           Part 4 recognizes the desirability of integrating similar
           elements from different Internet protocol architectures to
           form hybrids that reduce the variability and complexity of
           the Internet system.  It also recognizes the desirability of
           leveraging the existing Internet infrastructure to facilitate
           the absorption of "new stuff" into the Internet, applying to
           "new stuff" the established Internet practice of test,
           evaluate, adopt.

           This part expresses the tendency of the Internet, as a
           system, to attempt to return to the original "state of grace"
           represented by the uniform architecture of part 1.  It is a
           force acting on the evolution of the Internet, although the
           Internet will never actually return to a uniform state at any
           point in the future.

      According to this dynamic process model, running X.400 mail over
      RFC 1006 on a TCP/IP stack, integrated IS-IS routing, transport
      gateways, and the development of a single common successor to the
      IP and CLNP protocols are all examples of "good things".  They
      represent movement away from the non-uniformity of parts 2 and 3
      towards greater homogeneity, under the influence of the "magnetic
      field" asserted by part 1, following the hybridization dynamic of
      part 4.

4.  SECURITY ARCHITECTURE

   4.1  Philosophical Guidelines

      The principal themes for development of an Internet security
      architecture are simplicity, testability, trust, technology and
      security perimeter identification.

      *    There is more to security than protocols and cryptographic
           methods.

      *    The security architecture and policies should be simple
           enough to be readily understood.  Complexity breeds
           misunderstanding and poor implementation.

      *    The implementations should be testable to determine if the
           policies are met.

      *    We are forced to trust hardware, software and people to make
           any security architecture function.  We assume that the
           technical instruments of security policy enforcement are at
           least as powerful as modern personal computers and work
           stations; we do not require less capable components to be
           self-protecting (but might apply external remedies such as
           link level encryption devices).

      *    Finally, it is essential to identify security perimeters at
           which protection is to be effective.

   4.2  Security Perimeters

      There are four possible security perimeters: link level,
      net/subnet level, host level, and process/application level.  Each
      imposes different requirements, can admit different techniques,
      and makes different assumptions about what components of the
      system must be trusted to be effective.

      Privacy Enhanced Mail is an example of a process level security
      system; providing authentication and confidentiality for SNMP is
      another example.  Host level security typically means applying an
      external security mechanism on the communication ports of a host
      computer.  Network or subnetwork security means applying the
      external security capability at the gateway/router(s) leading from
      the subnetwork to the "outside".  Link-level security is the
      traditional point-to-point or media-level (e.g., Ethernet)
      encryption mechanism.

      There are many open questions about network/subnetwork security
      protection, not the least of which is a potential mismatch between
      host level (end/end) security methods and methods at the
      network/subnetwork level.  Moreover, network level protection does
      not deal with threats arising within the security perimeter.

      Applying protection at the process level assumes that the
      underlying scheduling and operating system mechanisms can be
      trusted not to prevent the application from applying security when
      appropriate.  As the security perimeter moves downward in the
      system architecture towards the link level, one must make many
      assumptions about the security threat to make an argument that
      enforcement at a particular perimeter is effective.  For example,
      if only link-level encryption is used, one must assume that
      attacks come only from the outside via communications lines, that
      hosts, switches and gateways are physically protected, and the
      people and software in all these components are to be trusted.

   4.3  Desired Security Services

      We need authenticatable distinguished names if we are to implement
      discretionary and non-discretionary access control at application
      and lower levels in the system.  In addition, we need enforcement
      for integrity (anti-modification, anti-spoof and anti-replay
      defenses), confidentiality, and prevention of denial-of-service.
      For some situations, we may also need to prevent repudiation of
      message transmission or to prevent covert channels.

      We have some building blocks with which to build the Internet
      security system.  Cryptographic algorithms are available (e.g.,
      Data Encryption Standard, RSA, El Gamal, and possibly other public
      key and symmetric key algorithms), as are hash functions such as
      MD2 and MD5.
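
      These building blocks are readily exercised.  For instance,
      Python's standard hashlib exposes MD5 (though not MD2), and a
      one-bit change in the input yields an unrelated digest:

```python
# Illustration of a message digest as a security building block,
# using MD5 from Python's standard hashlib module.

import hashlib

digest = hashlib.md5(b"integrity check over this message").hexdigest()

# A one-character change in the message produces an unrelated digest:
digest2 = hashlib.md5(b"Integrity check over this message").hexdigest()
assert digest != digest2

# MD5 yields a 128-bit value, i.e., 32 hex digits:
assert len(digest) == 32
```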

      We need Distinguished Names (in the OSI sense) and are very much
      in need of an infrastructure for the assignment of such
      identifiers, together with widespread directory services for
      making them known.  Certificate concepts binding distinguished
      names to public keys and binding distinguished names to
      capabilities and permissions may be applied to good advantage.
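
      A certificate in this sense is simply a binding.  The following
      sketch (fields and values invented, with no real signature scheme
      shown) binds a distinguished name to a public key and a
      capability list:

```python
# Sketch of the certificate concept: a record binding a Distinguished
# Name to a public key and to capabilities.  All values are invented
# placeholders; a real certificate would carry a verifiable signature.

from dataclasses import dataclass

@dataclass(frozen=True)
class Certificate:
    distinguished_name: str
    public_key: str          # placeholder for real key material
    capabilities: tuple = ()

cert = Certificate(
    distinguished_name="C=US, O=Example, CN=host.example.org",
    public_key="<public key bytes>",
    capabilities=("mail", "snmp-read"),
)

def may(cert, capability):
    """Access control keyed on the name-to-capability binding."""
    return capability in cert.capabilities
```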

      At the router/gateway level, we can apply address and protocol
      filters and other configuration controls to help fashion a
      security system.  The proposed OSI Security Protocol 3 (SP3) and
      Security Protocol 4 (SP4) should be given serious consideration as
      possible elements of an Internet security architecture.
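
      The address and protocol filters mentioned above amount to an
      ordered, first-match rule list.  A minimal sketch (rules and
      addresses invented):

```python
# Minimal sketch of router-level address and protocol filtering:
# an ordered rule list checked first-match, with a default deny.
# The rules and addresses below are invented for illustration.

RULES = [
    # (source prefix, protocol, action)
    ("192.0.2.",  "any", "permit"),   # trusted subnet, any protocol
    ("any",       "tcp", "permit"),   # TCP allowed from elsewhere
    ("any",       "any", "deny"),     # default deny
]

def filter_packet(src, proto):
    """Return the action of the first rule matching (src, proto)."""
    for prefix, rule_proto, action in RULES:
        if (prefix == "any" or src.startswith(prefix)) and \
           (rule_proto == "any" or rule_proto == proto):
            return action
    return "deny"
```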
