   LIS as they do with conventional subnets. (Relaxation of this
   restriction MAY only occur after future research on the interaction
   between existing layer 3 multicast routing protocols and unicast
   subnet boundaries.)

   The term 'Cluster Member' will be used in this document to refer to
   an endpoint that is currently using a MARS for multicast support.
   Thus the potential scope of a cluster may be the entire membership
   of a LIS, while the actual scope of a cluster depends on which
   endpoints are actually cluster members at any given time.

1.3  Document overview.

   This document assumes an understanding of concepts explained in
   greater detail in RFC 1112, RFC 1577, UNI 3.0/3.1, and RFC 1755 [6].

   Section 2 provides an overview of IP multicast and what RFC 1112
   required from Ethernet.

   Section 3 describes in more detail the multicast support services
   offered by UNI 3.0/3.1, and outlines the differences between VC
   meshes and multicast servers (MCSs) as mechanisms for distributing
   packets to multiple destinations.

   Section 4 provides an overview of the MARS and its relationship to
   ATM endpoints. This section also discusses the encapsulation and
   structure of MARS control messages.

   Section 5 substantially defines the entire cluster member endpoint
   behaviour, on both receive and transmit sides. This includes both
   normal operation and error recovery.

   Section 6 summarises the required behaviour of a MARS.

   Section 7 looks at how a multicast server (MCS) interacts with a
   MARS.

   Section 8 discusses how IP multicast routers may make novel use of
   promiscuous and semi-promiscuous group joins. Also discussed is a
   mechanism designed to reduce the amount of IGMP traffic issued by
   routers.

   Section 9 discusses how this document applies in the more general
   (non-IP) case.

   Section 10 summarises the key proposals, and identifies areas for
   future research that are generated by this MARS architecture.

   The appendices provide discussion on issues that arise out of the
   implementation of this document. Appendix A discusses MARS and
   endpoint algorithms for parsing MARS messages. Appendix B describes
   the particular problems introduced by the current IGMP paradigms, and
   possible interim work-arounds.  Appendix C discusses the 'cluster'
   concept in further detail, while Appendix D briefly outlines an
   algorithm for parsing TLV lists.  Appendix E summarises various timer
   values used in this document, and Appendix F provides example
   pseudo-code for a MARS entity.

1.4  Conventions.

   In this document the following coding and packet representation rules
   are used:

      All multi-octet parameters are encoded in big-endian form (i.e.
      the most significant octet comes first).

      In all multi-bit parameters bit numbering begins at 0 for the
      least significant bit when stored in memory (i.e. the n'th bit
      has a weight of 2^n).

      A bit that is 'set', 'on', or 'one' holds the value 1.

      A bit that is 'reset', 'off', 'clear', or 'zero' holds the value
      0.
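
   As a non-normative illustration of these conventions, the following
   Python fragment (function names are illustrative only) encodes a
   multi-octet parameter in big-endian form and tests a bit by its
   weight:

      import struct

      def encode_uint32_be(value):
          # Most significant octet first (big-endian, network order).
          return struct.pack('>I', value)

      def bit_is_set(value, n):
          # Bit 0 is the least significant; bit n has weight 2^n.
          return (value >> n) & 1 == 1

      assert encode_uint32_be(0x01020304) == b'\x01\x02\x03\x04'
      assert bit_is_set(0b0100, 2)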

2.  Summary of the IP multicast service model.

   Under IP version 4 (IPv4), addresses in the range between 224.0.0.0
   and 239.255.255.255 (224.0.0.0/4) are termed 'Class D' or 'multicast
   group' addresses. These abstractly represent all the IP hosts in the
   Internet (or some constrained subset of the Internet) who have
   decided to 'join' the specified group.
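
   As a non-normative sketch, the range test reduces to checking that
   the most significant four bits of the address are '1110' (the /4
   prefix):

      import ipaddress

      def is_class_d(addr):
          # 224.0.0.0/4 <=> top four bits of the address are 1110.
          return int(ipaddress.IPv4Address(addr)) >> 28 == 0b1110

      assert is_class_d('224.0.0.1') and is_class_d('239.255.255.255')
      assert not is_class_d('192.0.2.1')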

   RFC 1112 requires that a multicast-capable IP interface must support
   the transmission of IP packets to an IP multicast group address,
   whether or not the node considers itself a 'member' of that group.
   Consequently, group membership is effectively irrelevant to the
   transmit side of the link layer interfaces. When Ethernet is used as
   the link layer (the example used in RFC 1112), no address resolution
   is required to transmit packets. An algorithmic mapping from IP
   multicast address to Ethernet multicast address is performed locally
   before the packet is sent out the local interface in the same 'send
   and forget' manner as a unicast IP packet.

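   For concreteness, the RFC 1112 mapping (not restated in this
   document) places the low-order 23 bits of the group address into
   the low-order 23 bits of the Ethernet multicast block
   01:00:5E:00:00:00. A non-normative sketch:

      import ipaddress

      def ip_to_ether_multicast(group):
          # Low 23 bits of the IPv4 group address go into the
          # low 23 bits of 01:00:5E:00:00:00.
          low23 = int(ipaddress.IPv4Address(group)) & 0x7FFFFF
          return (0x01005E000000 | low23).to_bytes(6, 'big')

      assert ip_to_ether_multicast('224.0.0.1') == \
          bytes.fromhex('01005e000001')
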
   Joining and leaving an IP multicast group is more explicit on the
   receive side - the primitives JoinLocalGroup and LeaveLocalGroup
   control which groups the local link layer interface should accept
   packets for. When the IP layer wants to receive packets from a
   group, it issues JoinLocalGroup. When it no longer wants to receive
   packets, it issues LeaveLocalGroup. A key point to note is that
   changing state is a local issue; it has no effect on other hosts
   attached to the Ethernet.
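
   A minimal, non-normative model of this receive-side behaviour (the
   primitive names come from RFC 1112; the structure is purely
   illustrative):

      local_groups = set()

      def JoinLocalGroup(group):
          # Purely local: start accepting packets addressed to group.
          local_groups.add(group)

      def LeaveLocalGroup(group):
          # Purely local: stop accepting packets addressed to group.
          local_groups.discard(group)

      def accept(dest_group):
          return dest_group in local_groups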

   IGMP is defined in RFC 1112 to support IP multicast routers attached
   to a given subnet. Hosts issue IGMP Report messages when they perform
   a JoinLocalGroup, or in response to an IP multicast router sending an
   IGMP Query. By periodically transmitting queries, IP multicast
   routers are able to identify which IP multicast groups have
   non-zero membership on a given subnet.

   A specific IP multicast address, 224.0.0.1, is allocated for the
   transmission of IGMP Query messages. Host IP layers issue a
   JoinLocalGroup for 224.0.0.1 when they intend to participate in IP
   multicasting, and issue a LeaveLocalGroup for 224.0.0.1 when they've
   ceased participating in IP multicasting.

   Each host keeps a list of IP multicast groups it has been
   JoinLocalGroup'd to. When a router issues an IGMP Query on 224.0.0.1
   each host begins to send IGMP Reports for each group it is a member
   of. IGMP Reports are sent to the group address, not 224.0.0.1, "so
   that other members of the same group on the same network can overhear
   the Report" and not bother sending one of their own. IP multicast
   routers conclude that a group has no members on the subnet when IGMP
   Queries no longer elicit associated replies.
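
   The suppression rule above can be sketched (non-normatively, with
   the timer and retransmission details omitted) as:

      pending_reports = set()

      def on_igmp_query(joined_groups):
          # Schedule a Report for every group this host has joined.
          pending_reports.update(joined_groups)

      def on_overheard_report(group):
          # Another member answered for this group; stay silent.
          pending_reports.discard(group)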

3. UNI 3.0/3.1 support for intra-cluster multicasting.

   For the purposes of the MARS protocol, both UNI 3.0 and UNI 3.1
   provide equivalent support for multicasting. Differences between UNI
   3.0 and UNI 3.1 in required signalling elements are covered in RFC
   1755.

   This document will describe its operation in terms of 'generic'
   functions that should be available to clients of a UNI 3.0/3.1
   signalling entity in a given ATM endpoint. The ATM model broadly
   describes an 'AAL User' as any entity that establishes and manages
   VCs and underlying AAL services to exchange data. An IP over ATM
   interface is a form of 'AAL User' (although the default LLC/SNAP
   encapsulation mode specified in RFC 1755 really requires that an
   'LLC entity' is the AAL User, which in turn supports the IP/ATM
   interface).

   The most fundamental limitations of UNI 3.0/3.1's multicast support
   are:

      Only point to multipoint, unidirectional VCs may be established.

      Only the root (source) node of a given VC may add or remove leaf
      nodes.

   Leaf nodes are identified by their unicast ATM addresses.  UNI
   3.0/3.1 defines two ATM address formats - native E.164 and NSAP
   (although it must be stressed that the NSAP address is so called
   because it uses the NSAP format - an ATM endpoint is NOT a Network
   layer termination point).  In UNI 3.0/3.1 an 'ATM Number' is the
   primary identification of an ATM endpoint, and it may use either
   format. Under some circumstances an ATM endpoint must be identified
   by both a native E.164 address (identifying the attachment point of a
   private network to a public network), and an NSAP address ('ATM
   Subaddress') identifying the final endpoint within the private
   network. For the rest of this document the term 'ATM address' will
   be used to mean either a single 'ATM Number' or an 'ATM Number'
   combined with an 'ATM Subaddress'.
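
   An illustrative (non-normative) representation of this combined
   identifier:

      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class AtmAddress:
          # ATM Number: native E.164 or NSAP format.
          number: bytes
          # Optional NSAP format ATM Subaddress identifying the final
          # endpoint within a private network.
          subaddress: Optional[bytes] = None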

3.1 VC meshes.

   The most fundamental approach to intra-cluster multicasting is the
   multicast VC mesh.  Each source establishes its own independent point
   to multipoint VC (a single multicast tree) to the set of leaf nodes
   (destinations) that it has been told are members of the group it
   wishes to send packets to.

   Interfaces that are both senders and group members (leaf nodes) to a
   given group will originate one point to multipoint VC, and terminate
   one VC for every other active sender to the group. This criss-
   crossing of VCs across the ATM network gives rise to the name 'VC
   mesh'.
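
   The VC usage implied above can be counted directly (a non-normative
   back-of-envelope sketch, assuming every sender is also a group
   member):

      def mesh_vcs(num_senders):
          # One point to multipoint VC rooted at each active sender.
          return num_senders

      def vc_endpoints_per_member(num_senders):
          # Originates 1 VC, terminates one per *other* active sender.
          return 1 + (num_senders - 1)

      assert mesh_vcs(10) == 10 and vc_endpoints_per_member(10) == 10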

3.2 Multicast Servers.

   An alternative model has each source establish a VC to an
   intermediate node - the multicast server (MCS). The multicast server
   itself establishes and manages a point to multipoint VC out to the
   actual desired destinations.

   The MCS reassembles AAL_SDUs arriving on all the incoming VCs, and
   then queues them for transmission on its single outgoing point to
   multipoint VC. (Reassembly of incoming AAL_SDUs is required at the
   multicast server as AAL5 does not support cell level multiplexing of
   different AAL_SDUs on a single outgoing VC.)

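   The MCS data path can be sketched (non-normatively) as reassembly
   followed by serialisation onto the one outgoing VC:

      from queue import Queue

      outgoing = Queue()

      def on_aal_sdu(incoming_vc, sdu):
          # Only whole, reassembled AAL_SDUs are forwarded; AAL5 cannot
          # interleave cells of different SDUs on the outgoing VC.
          outgoing.put(sdu)

      def transmit_loop(send_on_pt_to_mpt_vc):
          while True:
              send_on_pt_to_mpt_vc(outgoing.get())
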
   The leaf nodes of the multicast server's point to multipoint VC must
   be established prior to packet transmission, and the multicast server
   requires an external mechanism to identify them. A side-effect of
   this method is that ATM interfaces that are both sources and group
   members will receive copies of their own packets back from the MCS.
   (An alternative method is for the multicast server to explicitly
   retransmit packets on individual VCs between itself and group
   members. A benefit of this second approach is that the multicast
   server can ensure that sources do not receive copies of their own
   packets.)

   The simplest MCS pays no attention to the contents of each AAL_SDU.
   It is purely an AAL/ATM level device. More complex MCS architectures
   (where a single endpoint serves multiple layer 3 groups) are
   possible, but are beyond the scope of this document. More detailed
   discussion is provided in section 7.

3.3 Tradeoffs.

   Arguments over the relative merits of VC meshes and multicast servers
   have raged for some time. Ultimately the choice depends on the
   relative trade-offs a system administrator must make between
   throughput, latency, congestion, and resource consumption. Even
   criteria such as latency can mean different things to different
   people - is it end to end packet time, or the time it takes for a
   group to settle after a membership change? The final choice depends
   on the characteristics of the applications generating the multicast
   traffic.

   If we focus on the data path we might prefer the VC mesh because
   it lacks the obvious single congestion point of an MCS.  Throughput
   is likely to be higher, and end to end latency lower, because the
   mesh lacks the intermediate AAL_SDU reassembly that must occur in
   MCSs. The underlying ATM signalling system also has greater
   opportunity to ensure optimal branching points at ATM switches along
   the multicast trees originating on each source.

   However, resource consumption will be higher. Every group member's
   ATM interface must terminate one VC per sender (consuming on-board
   memory for state information, an instance of an AAL service, and
   buffering in accordance with the vendor's particular architecture).
   By contrast, with a multicast server only two VCs (one out, one in)
   are required, independent of the number of senders. The allocation
   of VC related resources is also lower within the ATM cloud when
   using a multicast server. These points may be considered to have
   merit in environments where VCs across the UNI or within the ATM
   cloud are valuable (e.g. the ATM provider charges on a per VC
   basis), or AAL contexts are limited in the ATM interfaces of
   endpoints.

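   These resource figures admit a simple, non-normative comparison,
   per interface that both sends to and receives from a group with a
   given number of active senders:

      def vc_endpoints(num_senders, use_mcs):
          # Mesh: 1 VC out + (num_senders - 1) VCs in.
          # MCS:  1 VC out (to the MCS) + 1 VC in (from it).
          return 2 if use_mcs else num_senders

      assert vc_endpoints(50, use_mcs=False) == 50
      assert vc_endpoints(50, use_mcs=True) == 2
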
   If we focus on the signalling load then MCSs have the advantage when
   faced with dynamic sets of receivers. Every time the membership of a
   multicast group changes (a leaf node needs to be added or dropped),
   only a single point to multipoint VC needs to be modified when using
   an MCS. This generates a single signalling event across the MCS's
   UNI. However, when membership change occurs in a VC mesh, signalling
   events occur at the UNIs of every traffic source - the transient
   signalling load scales with the number of sources. This has obvious
   ramifications if you define latency as the time for a group's
   connectivity to stabilise after change (especially as the number of
   senders grows).
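
   A non-normative sketch of the signalling cost of one membership
   change:

      def signalling_events(num_senders, use_mcs):
          # Mesh: every sender's VC must add or drop the leaf node.
          # MCS:  only the MCS's single VC is modified.
          return 1 if use_mcs else num_senders

      assert signalling_events(50, use_mcs=False) == 50
      assert signalling_events(50, use_mcs=True) == 1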
