the direction of RSVP. Therefore, data VCs set up to support RSVP
controlled flows should only be released at the direction of RSVP.
Such VCs must not be timed out due to inactivity by either the VC
initiator or the VC receiver. This conflicts with VCs timing out as
described in RFC 1755 [11], section 3.4 on VC Teardown. RFC 1755
recommends tearing down a VC that is inactive for a certain length of
time. Twenty minutes is recommended. This timeout is typically
implemented at both the VC initiator and the VC receiver. However,
section 3.1 of the update to RFC 1755 [12] states that inactivity
timers must not be used at the VC receiver.
In RSVP over ATM implementations, the configurable inactivity timer
mentioned in [11] MUST be set to "infinite" for VCs initiated at the
request of RSVP. Setting the inactivity timer value at the VC
initiator should not be problematic since the proper value can be
Berger Standards Track [Page 5]
RFC 2380 RSVP over ATM Implementation Requirements August 1998
relayed internally at the originator. Setting the inactivity timer
at the VC receiver is more difficult, and would require some
mechanism to signal that an incoming VC was RSVP initiated. To avoid
this complexity and to conform to [12], RSVP over ATM implementations
MUST NOT use an inactivity timer to clear any received connections.
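The timer handling above can be sketched in a few lines. This is a minimal Python illustration, not a real ATM signalling implementation; the VC class and its fields are hypothetical names chosen for this example.

```python
import math

class VC:
    """Hypothetical per-VC state record (illustrative, not from RFC 1755)."""
    def __init__(self, rsvp_initiated, default_timeout_min=20):
        self.rsvp_initiated = rsvp_initiated
        # RFC 1755 recommends a 20-minute inactivity timeout, but a VC
        # created at the request of RSVP must never be timed out, so
        # its timer is effectively "infinite".
        if rsvp_initiated:
            self.inactivity_timeout = math.inf
        else:
            self.inactivity_timeout = default_timeout_min * 60

    def should_teardown(self, idle_seconds):
        # Models the VC initiator only; per the RFC 1755 update, the
        # VC receiver applies no inactivity timer at all.
        return idle_seconds >= self.inactivity_timeout
```

Because the initiator knows internally whether it opened the VC on behalf of RSVP, no extra signalling is needed to pick the right timeout.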
2.4 Dynamic QoS
As stated in [9], there is a mismatch in the service provided by RSVP
and that provided by ATM UNI3.x and 4.0. RSVP allows modifications
to QoS parameters at any time while ATM does not support any
modifications to QoS parameters post VC setup. See [9] for more
detail.
The method for supporting changes in RSVP reservations is to attempt
to replace an existing VC with a new appropriately sized VC. During
setup of the replacement VC, the old VC MUST be left in place
unmodified. The old VC is left unmodified to minimize interruption of
QoS data delivery. Once the replacement VC is established, data
transmission is shifted to the new VC, and only then is the old VC
closed.
If setup of the replacement VC fails, then the old QoS VC MUST
continue to be used. When the new reservation is greater than the
old reservation, the reservation request MUST be answered with an
error. When the new reservation is less than the old reservation, the
request MUST be treated as if the modification was successful. While
leaving the larger allocation in place is suboptimal, it maximizes
delivery of service to the user. This behavior is also required in
order to conform to RSVP error handling as defined in sections 2.5,
3.1.8 and 3.11.2 of [8]. Implementations SHOULD retry replacing a
too large VC after some appropriate elapsed time.
One additional issue is that only one QoS change can be processed at
one time per reservation. If the (RSVP) requested QoS is changed
while the first replacement VC is still being set up, then the
replacement VC SHOULD be released and the whole VC replacement
process restarted. Implementations MAY also limit the number of
changes processed in a time period per [9].
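The replacement procedure of the last three paragraphs can be sketched as follows. This is an illustrative Python sketch under stated assumptions: FakeSignalling, FakeVC, and change_qos are invented names standing in for a real ATM signalling interface.

```python
class FakeVC:
    def __init__(self, qos, established):
        self.qos = qos
        self.established = established

class FakeSignalling:
    """Stub signalling layer: any setup above `capacity` fails."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.released = []
    def setup(self, qos):
        return FakeVC(qos, established=(qos <= self.capacity))
    def release(self, vc):
        self.released.append(vc)

def change_qos(state, sig, new_qos):
    # Only one QoS change is processed at a time: a replacement VC
    # still being set up is released, and the process restarts with
    # the newest requested QoS.
    if state["pending"] is not None:
        sig.release(state["pending"])
        state["pending"] = None
    # The old VC is left in place, unmodified, during setup.
    state["pending"] = sig.setup(new_qos)
    if state["pending"].established:
        old, state["active"] = state["active"], state["pending"]
        state["pending"] = None
        if old is not None:
            sig.release(old)   # shift data first, then close the old VC
        return "ok"
    state["pending"] = None
    # Replacement failed: the old VC continues to be used.
    if state["active"] is None or new_qos > state["active"].qos:
        return "error"   # a larger reservation MUST be answered with an error
    return "ok"          # a smaller one is treated as if it succeeded
```

Keeping the oversized VC after a failed downgrade is the suboptimal-but-safe choice the text describes; a retry timer (not shown) would later attempt the smaller VC again.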
2.5 Encapsulation
There are multiple encapsulation options for data sent over RSVP
triggered QoS VCs. All RSVP over ATM implementations MUST be able to
support LLC encapsulation per RFC 1483 [10] on such QoS VCs.
Implementations MAY negotiate alternative encapsulations using the
B-LLI negotiation procedures defined in ATM Signalling, see [11] for
details. When a QoS VC is only being used to carry IP packets,
implementations SHOULD negotiate VC based multiplexing to avoid
incurring the overhead of the LLC header.
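The encapsulation choice reduces to a small decision rule, sketched here in Python; the function and parameter names are illustrative, and the B-LLI negotiation itself is abstracted to a boolean.

```python
def choose_encapsulation(ip_only_vc, peer_accepts_vcmux):
    # LLC encapsulation per RFC 1483 is the mandatory baseline that
    # every implementation must support. When the VC carries only IP
    # packets and the peer accepts VC-based multiplexing (negotiated
    # via B-LLI in ATM signalling), prefer it to avoid the per-packet
    # LLC header overhead.
    if ip_only_vc and peer_accepts_vcmux:
        return "vc-multiplexing"
    return "llc"
```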
3. Multicast RSVP Session Support
There are several aspects to running RSVP over ATM that are unique to
multicast sessions. This section addresses multicast end-point
identification, multicast data distribution, multicast receiver
transitions, and next-hops requesting different QoS values
(heterogeneity), which includes the handling of multicast best effort
receivers. Handling of best effort receivers is not strictly an RSVP
issue, but needs to be addressed by any RSVP over ATM implementation
in order to maintain expected best effort internet service.
3.1 Data VC Management for Heterogeneous Sessions
The issues relating to data VC management of heterogeneous sessions
are covered in detail in [9]. In summary, heterogeneity occurs when
receivers request different levels of QoS within a single session,
and also when some receivers do not request any QoS. Both types of
heterogeneity are shown in figure 2.
+----+
+------> | R1 |
| +----+
|
| +----+
+-----+ -----+ +--> | R2 |
| | ---------+ +----+ Receiver Request Types:
| Src | ----> QoS 1 and QoS 2
| | .........+ +----+ ....> Best-Effort
+-----+ .....+ +..> | R3 |
: +----+
/\ :
|| : +----+
|| +......> | R4 |
|| +----+
Single
IP Multicast
Group
Figure 2: Types of Multicast Receivers
[9] provides four models for dealing with heterogeneity: full
heterogeneity, limited heterogeneity, homogeneous, and modified
homogeneous models. No matter which model or combination of models
is used by an implementation, implementations MUST NOT normally send
more than one copy of a particular data packet to a particular next-
hop (ATM end-point). Some transient duplicate transmission is
acceptable, but only during VC setup and transition.
Implementations MUST also ensure that data traffic is sent to best
effort receivers. Data traffic MAY be sent to best effort receivers
via best effort or QoS VCs as is appropriate for the implemented
model. In all cases, implementations MUST NOT create VCs in such a
way that data cannot be sent to best effort receivers. This includes
the case of not being able to add a best effort receiver to a QoS VC,
but does not include the case where best effort VCs cannot be set up.
The failure to establish best effort VCs is considered to be a
general IP over ATM failure and is therefore beyond the scope of this
document.
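The two requirements above (no duplicate copies per next-hop, and reachability for best-effort receivers) amount to an invariant over the set of data VCs. A hypothetical checker, with invented names, might look like this:

```python
def valid_distribution(vcs, best_effort_receivers):
    """Illustrative invariant check: `vcs` is a list of member lists,
    one per data VC. No end-point may appear on more than one VC
    (that would duplicate data packets), and every best-effort
    receiver must be reachable on some VC."""
    seen = set()
    for members in vcs:
        for endpoint in members:
            if endpoint in seen:
                return False   # would receive duplicate copies
            seen.add(endpoint)
    # Best-effort receivers may sit on QoS or best-effort VCs,
    # but they must be on *some* VC.
    return set(best_effort_receivers) <= seen
```

Transient duplicates during VC setup and transition are permitted by the text, so a real implementation would enforce this only in steady state.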
There is an interesting interaction between dynamic QoS and
heterogeneous requests when using the limited heterogeneity,
homogeneous, or modified homogeneous models. In the case where a
RESV message is received from a new next-hop and the requested
resources are larger than any existing reservation, both dynamic QoS
and heterogeneity need to be addressed. A key issue is whether to
first add the new next-hop or to change to the new QoS. This is a
fairly straightforward special case. Since the older, smaller
reservation does not support the new next-hop, the dynamic QoS
process SHOULD be initiated first. Since the new QoS is only needed
by the new next-hop, it SHOULD be the first end-point of the new VC.
This way signaling is minimized when the setup to the new next-hop
fails.
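The ordering argued for above can be sketched as follows. This is a hedged illustration: StubSig, P2MPVC, and the function names are assumptions, standing in for ATM UNI point-to-multipoint setup and add-party signalling.

```python
class P2MPVC:
    def __init__(self, qos, established):
        self.qos = qos
        self.established = established
        self.leaves = []

class StubSig:
    """Stub point-to-multipoint signalling: setup fails above `capacity`."""
    def __init__(self, capacity):
        self.capacity = capacity
    def setup_p2mp(self, first_leaf, qos):
        vc = P2MPVC(qos, established=(qos <= self.capacity))
        if vc.established:
            vc.leaves.append(first_leaf)
        return vc
    def add_party(self, vc, leaf):
        vc.leaves.append(leaf)

def resv_from_new_next_hop(session, new_hop, qos, sig):
    # The new, larger QoS is only needed by the new next-hop, so it
    # becomes the FIRST end-point of the replacement VC: if setup to
    # it fails, no signalling has been spent moving existing leaves.
    vc = sig.setup_p2mp(first_leaf=new_hop, qos=qos)
    if not vc.established:
        return None                    # old VC continues unchanged
    for hop in session["members"]:
        sig.add_party(vc, hop)         # then move existing receivers
    session["members"].append(new_hop)
    session["vc"] = vc
    return vc
```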
3.2 Multicast End-Point Identification
Implementations must be able to identify ATM end-points participating
in an IP multicast group. The ATM end-points will be IP multicast
receivers and/or next-hops. Both QoS and best effort end-points must
be identified. RSVP next-hop information will usually provide QoS
end-points, but not best effort end-points.
There is a special case where RSVP next-hop information will not
provide the appropriate end-points. This occurs when a next-hop is
not RSVP capable and RSVP is being automatically tunneled. In this
case a PATH message travels through a non-RSVP egress router on the
way to the next-hop RSVP node. When the next-hop RSVP node sends a
RESV message it may arrive at the source via a different route than
used by the PATH message. The source will get the RESV message, but
will not know which ATM end-point should be associated with the
reservation. For unicast sessions, there is no problem since the ATM
end-point will be the IP next-hop router. There is a problem with
multicast, since multicast routing may not be able to uniquely
identify the IP next-hop router. It is therefore possible for a
multicast end-point to not be properly identified.
In certain cases it is also possible to identify the list of all best
effort end-points. Some multicast over ATM control mechanisms, such
as MARS in mesh mode, can be used to identify all end-points of a
multicast group. Also, some multicast routing protocols can provide
all next-hops for a particular multicast group. In both cases, RSVP
over ATM implementations can obtain a full list of end-points, both
QoS and non-QoS, using the appropriate mechanisms. The full list can
then be compared against the RSVP identified end-points to determine
the list of best effort receivers.
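The comparison described above is a simple set difference. A minimal sketch, assuming the full membership list is available from MARS in mesh mode or from the multicast routing protocol:

```python
def best_effort_receivers(all_endpoints, rsvp_qos_endpoints):
    # Full multicast group membership minus the RSVP-identified QoS
    # end-points yields the best-effort receivers.
    return set(all_endpoints) - set(rsvp_qos_endpoints)
```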
While there are cases where QoS and best effort end-points can be
identified, there is no straightforward solution to uniquely
identifying end-points of multicast traffic handled by non-RSVP
next-hops. The preferred solution is to use multicast control
mechanisms and routing protocols that support unique end-point
identification. In cases where such mechanisms and routing protocols
are unavailable, all IP routers that will be used to support RSVP
over ATM should support RSVP. To ensure proper behavior, baseline
RSVP over ATM implementations MUST only establish RSVP-initiated VCs
to RSVP capable end-points. It is permissible to allow a user to
override this behavior.
3.3 Multicast Data Distribution
Two basic models exist for IP multicast data distribution over ATM.
In one model, senders establish point-to-multipoint VCs to all ATM
attached destinations, and data is then sent over these VCs. This
model is often called "multicast mesh" or "VC mesh" mode
distribution. In the second model, senders send data over point-to-
point VCs to a central point and the central point relays the data
onto point-to-multipoint VCs that have been established to all
receivers of the IP multicast group. This model is often referred to
as "multicast server" mode distribution. Figure 3 shows data flow for
both modes of IP multicast data distribution.
_________
/ \
/ Multicast \
\ Server /
\_________/
^ | |
| | +--------+
+-----+ | | |
| | -------+ | | Data Flow:
| Src | ...+......|....+ V ----> Server
| | : | : +----+ ....> Mesh
+-----+ : | +...>| R1 |
: | +----+
: V
: +----+
+..> | R2 |