4.2.3.2 Limited Heterogeneity Model

We define the "limited heterogeneity" model as the case where the receivers of a multicast session are limited to using either best effort service or a single alternate quality of service. The alternate QoS can be chosen either by higher level protocols or by dynamic renegotiation of QoS as described below.

In order to support limited heterogeneity, each ATM edge device participating in a session would need at most two VCs. One VC would be a point-to-multipoint best effort service VC serving all best effort service IP destinations for the RSVP session. The other VC would be a point-to-multipoint QoS VC serving all IP destinations for the session that have an established RSVP reservation.

As with full heterogeneity, a disadvantage of the limited heterogeneity scheme is that each packet needs to be duplicated at the network layer, with one copy sent on each of the two VCs. Again, the exact amount of excess traffic will depend on the network topology and group membership. A further disadvantage is that if any of the existing QoS VC end-points cannot upgrade to the new QoS, then the new reservation fails even though the resources exist for the new receiver.
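To make the two-VC arrangement concrete, the following Python fragment sketches the per-session forwarding step at an ATM edge device. It is illustrative only: the session object, the VC handles, and the send_on_vc() helper are hypothetical stand-ins for whatever interface a real ATM driver exposes.

    class LimitedHeterogeneitySession:
        """Per-RSVP-session state: at most two point-to-multipoint VCs."""

        def __init__(self, best_effort_vc, qos_vc=None):
            self.best_effort_vc = best_effort_vc  # non-reserving receivers
            self.qos_vc = qos_vc                  # receivers with reservations

        def forward(self, packet, send_on_vc):
            # Each packet is duplicated at the network layer: one copy
            # goes out on every VC that currently has receivers behind it.
            send_on_vc(self.best_effort_vc, packet)
            if self.qos_vc is not None:
                send_on_vc(self.qos_vc, packet)

The duplication cost discussed above is visible directly: every packet crosses the ATM network once per VC, however much the two receiver sets overlap in topology.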
4.2.3.3 Homogeneous and Modified Homogeneous Models

We define the "homogeneous" model as the case where all receivers of a multicast session use a single quality of service VC; best-effort receivers also use the single RSVP-triggered QoS VC. The single VC can be point-to-point or point-to-multipoint as appropriate. The QoS VC is sized to provide the maximum resources requested by all RSVP next-hops.

This model matches the way the current RSVP specification addresses heterogeneous requests. The current processing rules and traffic control interface describe a model where the largest requested reservation for a specific outgoing interface is used in resource allocation, and traffic is transmitted at the higher rate to all next-hops. This would be the simplest approach for RSVP over ATM implementations.

While this approach is simple to implement, providing better than best-effort service may actually be the opposite of what the user desires, since charges may be incurred or resources may be wrongly allocated. There are two specific problems. The first is that a receiver making a small reservation, or none at all, would share the QoS VC's resources without making (and perhaps paying for) a corresponding RSVP reservation. The second is that a receiver may not receive any data at all. This can occur when there are insufficient resources to add a receiver: the rejected receiver would not be added to the single VC, and so would not receive traffic even on a best effort basis.

Not sending data traffic to best-effort receivers because of another receiver's RSVP request is clearly unacceptable. The previously described limited heterogeneity model ensures that data is always sent to both QoS and best-effort receivers, but it does so by requiring replication of data at the sender in all cases. It is possible to extend the homogeneous model so that data is always sent to best-effort receivers while replication is avoided in the normal case. The extension adds special handling for the case where a best-effort receiver cannot be added to the QoS VC: a best effort VC can be established to any receivers that could not be added to the QoS VC. Only in this special error case would senders be required to replicate data. We define this approach as the "modified homogeneous" model.
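The modified homogeneous model thus amounts to a small piece of error handling around receiver admission. The sketch below shows the intended control flow; the add_leaf() and open_best_effort_vc() hooks and the VCFullError exception are hypothetical stand-ins for a real signalling interface, not an API defined by this document.

    class VCFullError(Exception):
        """Raised when a leaf cannot be added to a point-to-multipoint VC."""

    class ModifiedHomogeneousSession:
        def __init__(self, qos_vc, add_leaf, open_best_effort_vc):
            self.qos_vc = qos_vc
            self.fallback_vc = None          # created only in the error case
            self._add_leaf = add_leaf
            self._open_best_effort_vc = open_best_effort_vc

        def add_receiver(self, endpoint):
            try:
                # Normal case: every receiver, reserving or not, joins the
                # single QoS VC, so no replication at the sender is needed.
                self._add_leaf(self.qos_vc, endpoint)
            except VCFullError:
                # Error case: serve the receiver on a best effort VC instead.
                # Only now must the sender duplicate data onto two VCs.
                if self.fallback_vc is None:
                    self.fallback_vc = self._open_best_effort_vc()
                self._add_leaf(self.fallback_vc, endpoint)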
4.2.3.4 Aggregation

The last scheme is the multiple RSVP reservations per VC (or "aggregation") model. With this model, large VCs could be set up between IP routers and hosts in an ATM network. These VCs could be managed much as IP Integrated Services (IIS) point-to-point links (e.g. T-1, DS-3) are managed now. Traffic from multiple sources over multiple RSVP sessions might be multiplexed on the same VC. This approach has a number of advantages. First, there is typically no signalling latency, as the VCs would already exist when traffic starts flowing, so no time is spent setting up VCs. Second, the heterogeneity problem over ATM is reduced to the corresponding problem for IIS point-to-point links, which is already solved. Finally, the dynamic QoS problem for ATM is likewise reduced to an already solved problem. This approach can be used with both point-to-point and point-to-multipoint VCs. The difficulty with the aggregation approach is choosing what QoS to use for which VCs, though the choice is made easier if the VCs can be changed as needed.

4.2.4 Multicast End-Point Identification

Implementations must be able to identify the ATM end-points participating in an IP multicast group. The ATM end-points will be IP multicast receivers and/or next-hops. Both QoS and best-effort end-points must be identified. RSVP next-hop information provides the QoS end-points, but not the best-effort end-points.

Another issue is identifying end-points of multicast traffic handled by non-RSVP-capable next-hops. In this case a PATH message travels through a non-RSVP egress router on the way to the next-hop RSVP node. When the next-hop RSVP node sends a RESV message, it may arrive at the source over a different route than the one the data is using. The source will receive the RESV message but will not know which egress router needs the QoS. For unicast sessions there is no problem, since the ATM end-point will be the IP next-hop router. Unfortunately, multicast routing may not be able to uniquely identify the IP next-hop router, so it is possible that a multicast end-point cannot be identified.

In the most common case, MARS will be used to identify all end-points of a multicast group. In the router-to-router case, a multicast routing protocol may provide all next-hops for a particular multicast group. In either case, RSVP over ATM implementations must obtain a full list of end-points, both QoS and non-QoS, using the appropriate mechanisms. The full list can then be compared against the RSVP-identified end-points to determine the list of best-effort receivers (see the sketch at the end of this section).

There is no straightforward solution to uniquely identifying end-points of multicast traffic handled by non-RSVP next-hops. The preferred solution is to use multicast routing protocols that support unique end-point identification. In cases where such routing protocols are unavailable, all IP routers that will be used to support RSVP over ATM should support RSVP. To ensure proper behavior, implementations should, by default, only establish RSVP-initiated VCs to RSVP-capable end-points.
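The comparison just described is a simple set difference. The sketch below assumes the two end-point lists have already been obtained, for example group membership from a MARS query and QoS end-points from RSVP RESV state; the function and the NSAP-style names are illustrative only.

    def best_effort_endpoints(all_group_endpoints, rsvp_qos_endpoints):
        """Return the end-points that must be served on a best effort basis.

        all_group_endpoints: every ATM end-point in the multicast group,
            e.g. the membership list returned by a MARS query.
        rsvp_qos_endpoints: end-points known from RSVP next-hop state
            to have reservations established.
        """
        # Best-effort receivers are the group members for which no RSVP
        # reservation has been identified.
        return set(all_group_endpoints) - set(rsvp_qos_endpoints)

    # Example: two of four group members have RSVP reservations.
    members = {"nsap-a", "nsap-b", "nsap-c", "nsap-d"}
    reserved = {"nsap-a", "nsap-c"}
    assert best_effort_endpoints(members, reserved) == {"nsap-b", "nsap-d"}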
4.2.5 Multicast Data Distribution

Two models are planned for IP multicast data distribution over ATM. In one model, senders establish point-to-multipoint VCs to all ATM-attached destinations, and data is then sent over these VCs. This model is often called "multicast mesh" or "VC mesh" mode distribution. In the second model, senders send data over point-to-point VCs to a central point, and the central point relays the data onto point-to-multipoint VCs that have been established to all receivers of the IP multicast group. This model is often referred to as "multicast server" mode distribution. RSVP over ATM solutions must ensure that IP multicast data is distributed with appropriate QoS.

In the Classical IP context, multicast server support is provided via MARS [5]. MARS does not currently provide a way to communicate QoS requirements to a MARS multicast server. Therefore, RSVP over ATM implementations must, by default, support "mesh mode" distribution for RSVP-controlled multicast flows. When using multicast servers that do not support QoS requests, a sender must set the service-specific, not global, break bit(s).

4.2.6 Receiver Transitions

When setting up point-to-multipoint VCs for multicast RSVP sessions, there will be a period during which some receivers have been added to a QoS VC and some have not. During such transition times it is possible to start sending data on the newly established VC. The issue is when to start sending data on the new VC. If data is sent on both the new VC and the old VC, then data will be delivered with the proper QoS to some receivers and with the old QoS to all receivers, which means the QoS receivers can receive duplicate data. If data is sent only on the new QoS VC, then the receivers that have not yet been added will lose information. So the issue comes down to whether to send on both the old and new VCs, or on just one of them. In the first case duplicate information will be received; in the second, some information may not be received. This issue needs to be considered for three cases:

- When establishing the first QoS VC
- When establishing a VC to support a QoS change
- When adding a new end-point to an already established QoS VC

The first two cases are very similar. In both, it is possible to send data on the partially completed new VC, and the issue of duplicate versus lost information is the same. The third case differs: the end-point must be both added to the QoS VC and dropped from a best-effort VC, and the issue is which to do first. If the add is requested first, the end-point may receive duplicate information; if the drop is requested first, the end-point may lose information.

In order to ensure predictable behavior and delivery of data to all receivers, data should be sent on a new VC only once all parties have been added. This ensures that all data is delivered exactly once to every receiver. This approach does not quite apply to the third case: there, the add operation should be completed first and the drop operation second, which means that receivers must be prepared to receive some duplicate packets at times of QoS setup.

4.2.7 Dynamic QoS

RSVP provides dynamic quality of service (QoS) in that the resources that are requested may change at any time. There are several common reasons for a change of reservation QoS:

1. An existing receiver can request a new, larger (or smaller) QoS.

2. A sender may change its traffic specification (TSpec), which can trigger a change in the reservation requests of the receivers.

3. A new sender can start sending to a multicast group with a larger traffic specification than existing senders, triggering larger reservations.

4. A new receiver can make a reservation that is larger than existing reservations.

If the limited heterogeneity model is being used and the merge node for the larger reservation is an ATM edge device, a new, larger reservation must be set up across the ATM network. Since ATM service, as currently defined in UNI 3.x and UNI 4.0, does not allow renegotiating the QoS of a VC, dynamically changing a reservation means creating a new VC with the new QoS and tearing down the established VC. Tearing down a VC and setting up a new VC are complex ATM operations that involve a non-trivial amount of processing time and may have substantial latency. There are several options for dealing with this mismatch in service, and a specific approach will need to be part of any RSVP over ATM solution.

The default method for supporting changes in RSVP reservations is to attempt to replace the existing VC with a new, appropriately sized VC (see the sketch at the end of this section). During setup of the replacement VC, the old VC must be left in place unmodified, to minimize interruption of QoS data delivery. Once the replacement VC is established, data transmission is shifted to the new VC and the old VC is closed. If setup of the replacement VC fails, the old QoS VC should continue to be used. In that case, when the new reservation is greater than the old reservation, the reservation request should be answered with an error; when the new reservation is less than the old reservation, the request should be treated as if the modification had succeeded. While leaving the larger allocation in place is suboptimal, it maximizes delivery of service to the user. Implementations should retry replacing the too-large VC after some appropriate elapsed time.

One additional issue is that only one QoS change can be processed at a time per reservation. If the (RSVP) requested QoS changes while the first replacement VC is still being set up, the replacement VC is released and the whole VC replacement process is restarted. To limit the number of changes and avoid excessive signalling load, implementations may limit the number of changes processed in a given period. One implementation approach would be to configure each ATM edge device with a time parameter T (which can change over time) giving the minimum amount of time the edge device will wait between successive changes of the QoS of a particular VC. Thus, if the QoS of a VC is changed at time t, any further change requests arriving before time t+T would be deferred, with only the most recent request being processed once t+T is reached.
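A minimal sketch of this replace-and-rate-limit procedure follows, under stated assumptions: the open_vc(), shift_traffic(), and close_vc() hooks are hypothetical, QoS values are taken to be directly comparable with ">", and the restart logic for a request arriving mid-replacement is omitted.

    import time

    class ReservationVC:
        """One QoS VC whose reservation may be resized over time."""

        def __init__(self, qos, open_vc, shift_traffic, close_vc, T=5.0):
            self._open, self._shift, self._close = open_vc, shift_traffic, close_vc
            self.vc = open_vc(qos)             # initial QoS VC
            self.qos = qos
            self.T = T                         # min seconds between changes
            self.last_change = time.monotonic()
            self.pending_qos = None            # most recent deferred request

        def request_change(self, new_qos):
            if time.monotonic() - self.last_change < self.T:
                # Within the hold-down window: remember only the most
                # recent request. A real implementation would arm a timer
                # to apply pending_qos once t+T is reached.
                self.pending_qos = new_qos
                return "deferred"
            return self._replace(new_qos)

        def _replace(self, new_qos):
            try:
                # Make-before-break: the old VC stays up, unmodified,
                # while the replacement VC is signalled.
                new_vc = self._open(new_qos)
            except Exception:
                # Setup failed: keep using the old VC. A larger request
                # is answered with an error; a smaller one is treated as
                # if it succeeded, to be retried later.
                return "error" if new_qos > self.qos else "ok"
            self._shift(self.vc, new_vc)       # move traffic to the new VC
            self._close(self.vc)               # only then close the old VC
            self.vc, self.qos = new_vc, new_qos
            self.last_change = time.monotonic()
            return "ok"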