Network Working Group                                        G. Armitage
Request for Comments: 2121                                      Bellcore
Category: Informational                                       March 1997

                   Issues affecting MARS Cluster Size

Status of this Memo

   This memo provides information for the Internet community.  This
   memo does not specify an Internet standard of any kind.  Distribution
   of this memo is unlimited.

Abstract

   IP multicast over ATM currently uses the MARS model [1] to manage the
   use of ATM pt-mpt SVCs for IP multicast packet forwarding.  The scope
   of any given MARS's services is the MARS Cluster - typically the same
   as an IPv4 Logical IP Subnet (LIS).  Current IP/ATM networks are
   usually architected with unicast routing and forwarding issues
   dictating the sizes of individual LISes.  However, as IP multicast is
   deployed as a service, a LIS can only be as big as a MARS Cluster can
   be.  This document provides a qualitative look at the issues
   constraining a MARS Cluster's size, including the impact of VC limits
   in switches and NICs, the geographical distribution of cluster
   members, and the use of VC Mesh or MCS modes to support multicast
   groups.

1. Introduction

   A MARS Cluster is the set of IP/ATM interfaces that are willing to
   engage in direct, ATM level pt-mpt SVCs to perform IP multicast
   packet forwarding [1].  Each IP/ATM interface (a MARS Client) must
   keep state information regarding the ATM addresses of each leaf node
   (recipient) of each pt-mpt SVC it has open.  In addition, each MARS
   Client receives MARS_JOIN and MARS_LEAVE messages from the MARS
   whenever Clients around the Cluster need to update their pt-mpt SVCs
   for a given IP multicast group.

   The 'size' of a Cluster can mean two things - the number of MARS
   Clients using a given MARS, and the geographic distribution of MARS
   Clients.  The number of MARS Clients in a Cluster impacts on the
   amount of state information any given client may need to store while
   managing outgoing pt-mpt SVCs.  It also impacts on the average rate
   of JOIN/LEAVE traffic that is propagated by the MARS on
   ClusterControlVC, and the number of pt-mpt VCs that may need
   modification each time a MARS_JOIN or MARS_LEAVE appears on
   ClusterControlVC.

   The geographic distribution of clients affects the latency between a
   client issuing a MARS_JOIN, and it finally being added onto the
   pt-mpt VCs of the other MARS Clients transmitting to the specified
   multicast group.  (This latency is made up of both the time to
   propagate the MARS_JOIN, and the delay in the underlying ATM cloud's
   reaction to the subsequent ADD_PARTY messages.)

   When architecting an IP/ATM network it is important to understand the
   worst case scaling limits applicable to your Clusters.  This document
   provides a primarily qualitative look at the design choices that
   impose the most dramatic constraints on Cluster size.  Since the
   focus is on worst-case scenarios, most of the analysis assumes
   multicast groups that are VC Mesh based and have all cluster members
   as sources and receivers.  Engineering to the worst-case boundary
   conditions, then applying optimisations such as Multicast Servers
   (MCS), provides the Cluster with a margin of safety.
   It is hoped that more detailed quantitative analysis of Cluster
   sizing limits will be prompted by this document.

   Section 2 comments on the VC state requirements of the MARS model,
   while Sections 3 and 4 identify the group change processing load and
   latency characteristics of a cluster as a function of its size.
   Section 5 looks at how Multicast Routers (both conventional and
   combination router/switch architectures) increase the scale of a
   multicast capable IP/ATM network.  Finally, Section 6 discusses how
   the use of Multicast Servers (MCS) might impact on the worst case
   Cluster size limits.

2. VC state limitations

   Two characteristics of ATM NICs and switches will limit the number of
   members a Cluster may contain.  They are:

      The maximum number of VCs that can be originated from, or
      terminate on, a port (VCmax).

      The maximum number of leaf nodes supportable by a root node
      (LEAFmax).

   We'll assume that the MARS node has VCmax and LEAFmax values similar
   to those of Cluster members.  VCmax affects the Cluster size because
   of the following:

      The MARS terminates a pt-pt control VC from each cluster member,
      and originates a VC for ClusterControlVC and ServerControlVC.

      When a multicast group is VC Mesh based, a group member terminates
      a VC from every sender to the group, per group.

      When a multicast group is MCS based, the MCS terminates a VC from
      every sender to the group.

   LEAFmax affects the Cluster size because of the following:

      ClusterControlVC from the MARS has a leaf node per cluster member
      (MARS Client).

      The packet forwarding SVC out of each MARS Client for each IP
      multicast group being sent to has a leaf node for each group
      member when the group is VC Mesh based.

      The packet forwarding SVC out of each MCS for each IP multicast
      group being sent to has a leaf node for each group member when the
      group is MCS based.

   If we have N cluster members and M multicast groups active (using VC
   Mesh mode, and densely populated - all receivers are senders), the
   following observations may be made:

      ClusterControlVC has N leaf nodes, so

            N <= LEAFmax

      The MARS terminates a pt-pt VC from each cluster member, and
      originates ClusterControlVC and ServerControlVC, so

            (N+2) <= VCmax

      Each Cluster member sources 1 VC per group, terminates (N-1) VCs
      per group, originates a pt-pt VC to the MARS, and terminates 1 VC
      as a leaf on ClusterControlVC, so

            (M*N) + 2 <= VCmax

      The VC sourced by each Cluster member per group goes to all other
      cluster members, so

            (N-1) <= LEAFmax

   Since all the above conditions must be simultaneously true, the most
   constraining requirement is either

      (M*N) + 2 <= VCmax

   or

      N <= LEAFmax

   The limit involving VCmax is fundamentally controlled by the VC
   consumption of group members using a VC Mesh for data forwarding,
   rather than by the termination of pt-pt control VCs on the MARS.  (In
   practice it will depend heavily on the multicast group membership
   distributions within the cluster.)
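   As a rough illustration of how the four conditions interact, the
   following Python sketch (not part of the original RFC; the VCmax,
   LEAFmax, and group-count figures are invented for the example)
   computes the largest worst-case cluster size that satisfies all of
   them at once.

      # A minimal sketch, assuming the worst case above: every active
      # group is VC Mesh based and every cluster member both sends and
      # receives.  The numeric figures below are hypothetical.

      def max_cluster_size(vc_max: int, leaf_max: int,
                           active_groups: int) -> int:
          """Largest N satisfying all four constraints simultaneously."""
          bounds = [
              leaf_max,                       # ClusterControlVC: N <= LEAFmax
              leaf_max + 1,                   # per-group SVC: (N-1) <= LEAFmax
              vc_max - 2,                     # MARS port: (N+2) <= VCmax
              (vc_max - 2) // active_groups,  # per client: (M*N)+2 <= VCmax
          ]
          return min(bounds)

      # e.g. a NIC/switch port good for 1024 VCs and 512 leaf nodes,
      # with 16 dense VC Mesh groups active:
      print(max_cluster_size(vc_max=1024, leaf_max=512,
                             active_groups=16))  # -> 63

   With these particular figures the per-client VC budget, not LEAFmax,
   is the binding constraint, matching the observation above.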
   The LEAFmax limit comes from ClusterControlVC, and is independent of
   the density of group members (or the ratio of senders to receivers)
   for active multicast groups within the cluster.

   Under UNI 3.0/3.1 the most obvious limit on LEAFmax is 2^15 (the leaf
   node ID is 15 bits wide).  However, the signaling driver software for
   most ATM NICs may impose a limit much lower than this - a function of
   how much per-leaf node state information they need to store (and are
   capable of storing) for pt-mpt SVCs.

   VCmax is constrained by the ATM NIC hardware (by the available
   segmentation or reassembly instances), or by the VC capacity of the
   switch port that the NIC is attached to.  VCmax will be the smaller
   of the two.

   A MARS Client may impose its own state storage limitations, such that
   the combined memory consumption of a MARS Client and the ATM NIC's
   driver in a given host limits both LEAFmax and VCmax to values lower
   than the ATM NIC alone might have been able to support.

   It may be possible to work around LEAFmax limits by distributing the
   leaf nodes across multiple pt-mpt SVCs operating in parallel.
   However, such an approach requires further study, and doesn't solve
   the VCmax limitation associated with a node terminating too many VCs.

   A related observation is that the number of MARS Clients in a Cluster
   may be limited by the memory constraints of the MARS itself.  It is
   required to keep state on all the groups that every one of its MARS
   Clients has joined.  For a given memory limit, the maximum number of
   MARS Clients must drop if the average number of groups joined per
   Client rises.  Depending on the level of group memberships, this
   limitation may be more severe than LEAFmax.

3. Signaling load

   In any given cluster there will be an 'ambient' level of
   MARS_JOIN/LEAVE activity.  The dynamic characteristics of this
   activity will depend on the types of multicast applications running
   within the cluster.  For a constant relative distribution of
   multicast applications we can assume that, as the number of MARS
   Clients in a given cluster rises, so does the ambient level of
   MARS_JOIN/LEAVE activity.  This increases the average frequency with
   which the MARS processes and propagates MARS_JOIN/LEAVE messages.

   The existence of MARS_JOIN/LEAVE traffic also has a consequential
   impact on signaling activity at the ATM level (across the UNI and
   {P}NNI boundaries).  For groups that are VC Mesh supported, each
   MARS_JOIN or MARS_LEAVE propagated on ClusterControlVC will result in
   an ADD_PARTY or DROP_PARTY message sent across the UNIs of all MARS
   Clients that are transmitting to a given group.  As a cluster's
   membership increases, so does the average number of MARS Clients that
   trigger ATM signaling activity in response to MARS_JOIN/LEAVEs.

   The size of a cluster needs to be chosen to provide some level of
   containment of this ambient level of MARS and UNI/NNI signaling.

   Some refinements to MARS Client behaviour may also be explored to
   smooth out UNI signaling transients.  MARS Clients are currently
   required to initiate revalidation of group memberships only when the
   Client next sends a packet to an invalidated group SVC.  A Client
   could apply a similar algorithm to decide when it should issue
   ADD_PARTYs.  For example, after seeing a MARS_JOIN, wait until it
   actually has a packet to send, send the packet, then initiate the
   ADD_PARTY.  As a result, actively transmitting Clients would update
   their SVCs sooner than intermittently transmitting Clients.
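   To make the deferral concrete, here is a minimal sketch of such a
   sending Client (ours, not the RFC's; the svc and atm objects and
   their methods are hypothetical stand-ins for a Client's forwarding
   SVCs and its UNI signaling interface).

      # A minimal sketch of the deferred ADD_PARTY behaviour described
      # above.  `svc` and `atm` are hypothetical stand-ins for the
      # forwarding SVC table and the UNI signaling interface.

      class DeferredJoinSender:
          def __init__(self):
              # group -> set of ATM addresses awaiting an ADD_PARTY
              self.pending = {}

          def on_mars_join(self, group, atm_addr):
              # Don't signal immediately; just remember the new leaf.
              self.pending.setdefault(group, set()).add(atm_addr)

          def send(self, group, packet, svc, atm):
              # Forward the packet first, then fold in any deferred
              # leaf nodes, so actively transmitting Clients update
              # their SVCs sooner than intermittently transmitting ones.
              svc.transmit(group, packet)
              for addr in self.pending.pop(group, set()):
                  atm.add_party(svc.handle(group), addr)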
4. Group change latencies

   The group change latency can be defined as the time it takes for all
   the senders to a group to have correctly updated their forwarding
   SVCs after a MARS_JOIN or MARS_LEAVE is received from the MARS.  It
   is affected by both the number of Cluster members and their
   geographical distribution.  (Groups that are MCS based create the
   lowest impact when new members join or leave, since only the MCS
   needs to update its forwarding SVC.)  Under some circumstances,
   especially modelling or simulation environments, group change
   latencies within a cluster may be an important characteristic to
   control.

   As noted in the previous section, the ADD_PARTY/DROP_PARTY signaling
   load created by membership changes in VC Mesh based groups goes up as
   the number of cluster members rises (assuming the worst case scenario
   of each cluster member being a sender to the group).  As the UNI load
   rises, the ATM network itself may process the requested events more
   slowly.

   Wide geographic distribution of Cluster members also delays the
   propagation of MARS_JOIN/LEAVE and ATM UNI/NNI messages.  The further
   apart various members are, the longer it takes for them to receive
   MARS_JOIN/LEAVE traffic on ClusterControlVC, and the longer it takes
   for the ATM network to react to ADD_PARTY and DROP_PARTY requests.
   If the long distance paths pass through many ATM switches,
   propagation delays due to per-switch processing will add
   substantially to delays due to the speed of light.

   (Unfortunately, the mechanisms described in Section 3 for smoothing
   out transient ATM signaling load have the consequence of increasing
   the group change latency, since the goal is for some of the senders
   to deliberately delay updating their forwarding SVCs.  This is an
   area where the system architect needs to make a situation-specific
   trade-off.)

   It is not clear what effect the internal processing of the MARS
   itself has on group change latency, and how this might be affected by
   cluster size.  A component of the MARS processing latency will depend
   on the specific database implementation and search algorithms as much
   as on the number of group members for the group being modified at any
   instant.  Since the maximum number of group members for a given group
   equals the number of cluster members, there will be an indirect (even
   if small) relationship between worst case MARS processing latencies
   and cluster size.
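   As a back-of-the-envelope illustration of how distance and per-switch
   processing combine (the figures below are assumptions for the
   example, not measurements from the RFC), consider:

      # Rough model: fibre propagation is ~5 microseconds per km, and
      # each ATM switch on the path adds some signaling processing
      # time.  The 1 ms per-switch figure is an assumption.

      def add_party_latency_ms(path_km: float, switches: int,
                               per_switch_ms: float = 1.0) -> float:
          propagation_ms = path_km * 0.005
          return propagation_ms + switches * per_switch_ms

      # A 4000 km path through 20 switches: per-switch processing
      # already matches the speed-of-light delay.
      print(add_party_latency_ms(4000, 20))  # 20 ms + 20 ms = 40.0 ms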
5. Large IP/ATM networks using Mrouters

   Building a large scale, multicast capable IP over ATM network is a
   tradeoff between Cluster sizes and numbers of Mrouters.  For a given
   number of hosts, the number of clusters goes up as individual
   clusters shrink.  Since Mrouters are the topological intersections
   between clusters, the number of Mrouters rises as the size of
   individual clusters shrinks.  (The actual number of Mrouters depends
   largely on the logical IP topology you choose to implement, since a
   single physical Mrouter may interconnect more than two Clusters at
   once.)  It is a local deployment question what the optimal mix of
   Clusters and Mrouters will be.
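   A small illustrative calculation (our assumptions, not the RFC's)
   shows how the number of clusters to interconnect grows as clusters
   shrink; the Mrouter count itself then depends on the logical IP
   topology chosen.

      import math

      def cluster_count(total_hosts: int, cluster_size: int) -> int:
          # How many Clusters a fixed host population splits into.
          return math.ceil(total_hosts / cluster_size)

      for size in (200, 100, 50):
          print(size, "->", cluster_count(10_000, size), "clusters")
      # 200 -> 50 clusters, 100 -> 100 clusters, 50 -> 200 clusters,
      # each boundary then needing Mrouter interconnection.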
