Fig.9 shows an example of thread withdrawing. In Fig.9(a), the red
thread on incoming link i2 is withdrawn. Then Hmax decreases from 3
to 1, and node B creates a new green thread and extends it
downstream, as shown in Fig.9(b).
      (bl,1)       (re,4)            (bl,1)       (gr,2)+
     ----[i1]--->B---[o1]--->       ----[i1]--->B----[o1]--->
                 ^
                 |
     ----[i2]----+
        (re,3)-

           Fig.9(a)                        Fig.9(b)

                    Fig.9 Thread withdrawing (1)
Fig.10 shows another example of thread withdrawing. In Fig.10(a),
the red thread on incoming link i3 is withdrawn. In this case, Hmax
decreases from unknown to 1; however, no thread is extended, as shown
in Fig.10(b), since the outgoing link carries a colored thread whose
hop count is unknown.
      (bl,1)       (re,U)            (bl,1)       (re,U)
     ----[i2]--->B----[o1]--->      ----[i2]--->B----[o1]--->
                 ^
                 |
     ----[i3]----+
        (re,U)-

           Fig.10(a)                       Fig.10(b)

                    Fig.10 Thread withdrawing (2)
Fig.11 shows another example of thread withdrawing. In Fig.11(a),
the transparent thread on incoming link i3 is withdrawn. In this
case, a transparent thread is extended (Fig.11(b)), since Hmax
decreases and the outgoing link is transparent.
      (tr,1)       (tr,U)            (tr,1)       (tr,2)+
     ----[i2]--->B----[o1]--->      ----[i2]--->B----[o1]--->
                 ^
                 |
     ----[i3]----+
        (tr,U)-

           Fig.11(a)                       Fig.11(b)

                    Fig.11 Thread withdrawing (3)
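
The three examples above follow a single decision rule: when a thread
is withdrawn, the node recomputes Hmax over its remaining incoming
links and extends a new thread only if the hop count on the outgoing
link can actually be improved. The Python sketch below is purely
illustrative and is not part of this specification; the Node and Link
classes, the None encoding of "unknown", and the extend_thread()
helper are assumptions made only for the sketch.

   # Informal sketch only: Node, Link, UNKNOWN and extend_thread() are
   # assumptions of this illustration, not objects defined by the protocol.

   UNKNOWN = None          # the "unknown" hop count is modeled as None

   class Link:
       def __init__(self, color, hop_count):
           self.color = color              # e.g. "red", "blue", "transparent"
           self.hop_count = hop_count      # integer, or UNKNOWN

   class Node:
       def __init__(self, incoming, outgoing, own_color):
           self.incoming = incoming        # dict: link id -> Link
           self.outgoing = outgoing        # the single downstream Link
           self.own_color = own_color      # color used for threads this node creates

       def hmax(self):
           # Hmax is "unknown" as soon as one incoming hop count is unknown.
           counts = [l.hop_count for l in self.incoming.values()]
           if any(c is UNKNOWN for c in counts):
               return UNKNOWN
           return max(counts, default=0)

       def on_thread_withdrawn(self, link_id):
           del self.incoming[link_id]      # the withdrawn thread disappears
           hmax = self.hmax()
           if hmax is UNKNOWN:
               return                      # Hmax still unknown: nothing to improve
           if self.outgoing.color == "transparent":
               # Fig.11: transparent outgoing link; a shorter transparent
               # thread is extended when Hmax has decreased.
               if self.outgoing.hop_count is UNKNOWN or hmax + 1 < self.outgoing.hop_count:
                   self.extend_thread("transparent", hmax + 1)
           elif self.outgoing.hop_count is not UNKNOWN:
               # Fig.9: colored outgoing link with known hop count; create
               # and extend a new colored thread of hop count Hmax+1.
               if hmax + 1 < self.outgoing.hop_count:
                   self.extend_thread(self.own_color, hmax + 1)
           # Fig.10: colored outgoing link with unknown hop count -> no action.

       def extend_thread(self, color, hop_count):
           print("extend (%s,%s) downstream" % (color, hop_count))

   # Fig.9(a): incoming (bl,1) and (re,3), outgoing (re,4); withdrawing the
   # red thread yields "extend (green,2) downstream", as in Fig.9(b).
   b = Node({"i1": Link("blue", 1), "i2": Link("red", 3)},
            Link("red", 4), own_color="green")
   b.on_thread_withdrawn("i2")

With the same code, the Fig.10 inputs produce no extension and the
Fig.11 inputs produce a transparent thread of hop count 2, matching
the figures above.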
4. Thread algorithm
Ordered downstream-on-demand allocation is assumed here; however, the
algorithm can also be adapted to ordered downstream allocation, as
shown in section 5.
In the algorithm, a next hop change event will be separated into two
events: a next hop loss event on the old next hop and a next hop
acquisition event on the new next hop, in this order.
The following notations are defined:
Hmax: the largest incoming link hop count
Ni: the number of unstalled incoming links
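
For readability, the sketch below (Python; the table layout and the
None encoding of "unknown" are assumptions of this illustration) shows
how the two quantities can be computed from a per-LSP table of
incoming links.

   # Hypothetical per-LSP table of incoming links: each entry records the
   # stored thread hop count (None models "unknown") and a stalled flag.
   incoming_links = {
       "i1": {"hop_count": 1,    "stalled": False},
       "i2": {"hop_count": None, "stalled": True},    # stalled, unknown hop count
   }

   def hmax(links):
       # Hmax: the largest incoming link hop count; as in the examples of
       # section 3, "unknown" counts as larger than any known value.
       counts = [entry["hop_count"] for entry in links.values()]
       return None if any(c is None for c in counts) else max(counts, default=0)

   def ni(links):
       # Ni: the number of unstalled incoming links.
       return sum(1 for entry in links.values() if not entry["stalled"])

   print(hmax(incoming_links), ni(incoming_links))     # prints: None 1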
The thread algorithm is described as follows.
When a node acquires a new next hop, it creates a colored thread and
extends it downstream.
When a node loses a next hop to which it has extended a thread, it
may withdraw that thread. As described in section 3, this may or may
not cause the next hop to take some action. Among the actions the
next hop may take are withdrawing the thread from its own next hop,
or extending a new thread to its own next hop.
A received colored thread is either stalled, merged, rewound, or
extended. A thread with TTL zero is never extended.
When a received thread is stalled at a node, if Ni=0 and the node is
not an eligible leaf node, the node initiates thread withdrawing. Otherwise,
if Ni>0 and the received thread hop count is not unknown, a colored
thread of hop count unknown is created and extended. If the received
thread hop count is unknown, no thread is extended and no further
action is taken.
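
An informal sketch of this case analysis follows (Python; the handler
and helper names are assumptions of this illustration, not defined by
this document).

   # Hypothetical handler invoked when a received thread stalls here.
   # UNKNOWN hop counts are modeled as None; withdraw_thread() and
   # extend_thread() are assumed primitives.
   UNKNOWN = None

   def on_thread_stalled(node, received_hop_count):
       if node.ni == 0 and not node.is_eligible_leaf:
           # No unstalled incoming link and not an eligible leaf:
           # the stalled path is of no use, so start withdrawing it.
           node.withdraw_thread()
       elif node.ni > 0 and received_hop_count is not UNKNOWN:
           # A known hop count stalled here: create and extend a colored
           # thread of unknown hop count (it will absorb later threads).
           node.extend_thread(color=node.own_color, hop_count=UNKNOWN)
       else:
           # Received hop count already unknown: extend nothing further.
           pass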
When a thread being extended is rewound, if the thread hop count is
greater than Hmax+1, a transparent thread of hop count (Hmax+1) is
extended downstream.
When a node that has a transparent outgoing link receives a
transparent thread, if Hmax decreases, the node extends the thread
downstream without changing its color.
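
The last two rules can likewise be sketched as follows (Python;
illustrative only, with assumed names such as update_incoming() and
extend_thread()).

   # Hypothetical handlers for thread rewinding and for transparent
   # threads received on a node whose outgoing link is transparent.
   UNKNOWN = None

   def on_thread_rewound(node, extended_hop_count):
       # The thread this node extended has rewound; if its hop count
       # overshoots Hmax+1, tighten it with a transparent thread.
       if node.hmax() is not UNKNOWN and extended_hop_count > node.hmax() + 1:
           node.extend_thread(color="transparent", hop_count=node.hmax() + 1)

   def on_transparent_thread_received(node, link_id, hop_count):
       # Only applies when the outgoing link is already transparent.
       old_hmax = node.hmax()
       node.update_incoming(link_id, color="transparent", hop_count=hop_count)
       new_hmax = node.hmax()
       if new_hmax is not UNKNOWN and (old_hmax is UNKNOWN or new_hmax < old_hmax):
           # Hmax decreased: extend downstream without changing color.
           node.extend_thread(color="transparent", hop_count=new_hmax + 1)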
5. Applicability of the algorithm
The thread algorithm described in section 4 can be applied to various
LSP management policies.
5.1. LSP Loop prevention/detection
The same thread algorithm is applicable to both LSP loop prevention
and detection.
In loop prevention mode, a node transmits a label mapping (including
a thread object) for a particular LSP only when it rewinds the thread
for that LSP. No mapping message is sent until the thread rewinds.
On the other hand, if a node operates in loop detection mode, it
returns a label mapping message without a thread object immediately
after receiving a colored thread. A node which receives a label
mapping message that does not have a thread object will not rewind
the thread.
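
An informal sketch of the two modes follows (Python;
send_label_mapping() and the mode strings are assumptions of this
illustration, not LDP message formats).

   # Hypothetical per-LSP behavior in the two modes.

   def on_colored_thread_received(node, thread, mode):
       if mode == "prevention":
           # Loop prevention: hold the mapping; it is transmitted (with a
           # thread object) only when this thread later rewinds.
           node.pending_thread = thread
       elif mode == "detection":
           # Loop detection: reply immediately, without a thread object.
           # A node receiving such a mapping will not rewind the thread.
           node.send_label_mapping(thread_object=None)

   def on_thread_rewound(node):
       # Loop prevention mode: the rewind releases the label mapping.
       if getattr(node, "pending_thread", None) is not None:
           node.send_label_mapping(thread_object=node.pending_thread)
           node.pending_thread = None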
5.2. Using old path while looping on new path
When a route changes, one might want to continue to use the old path
if the new route is looping. This is achieved simply by holding the
label assigned to the downstream link on the old path until the
thread being extended on the new route gets rewound. This is an
implementation choice.
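
One way to express this implementation choice is sketched below
(Python; the state and callback names are assumptions of this
illustration).

   # Hypothetical per-FEC state: the old downstream label stays installed
   # until the thread extended on the new route has rewound.

   class FecState:
       def __init__(self, old_label):
           self.active_label = old_label   # still used for forwarding
           self.pending_label = None       # label for the new, unproven path

       def on_new_route(self, start_thread_extension):
           start_thread_extension()        # loop prevention runs on the new path

       def on_new_path_rewound(self, new_label, release_label):
           # Only now is traffic moved and the old label released.
           self.pending_label = new_label
           release_label(self.active_label)
           self.active_label = new_label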
5.3. How to deal with ordered downstream allocation
The thread mechanism can also be adapted to ordered downstream
allocation mode (or egress-initiated ordered control) by regarding
the receipt of a new label mapping message [4] from the next hop as a
next hop acquisition event.
Note that a node which doesn't yet have an incoming link behaves as a
leaf. In the case where the tree is being initially built up (e.g.,
the egress node has just come up), each node in turn will behave as a
leaf for a short period of time.
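
In other words, only the source of the next hop acquisition event
changes; a minimal sketch (Python, with assumed handler names) is
shown below.

   # Hypothetical glue for ordered downstream (egress-initiated) mode:
   # a newly received label mapping from the next hop is treated exactly
   # like a next hop acquisition event of the section 4 algorithm.

   def on_label_mapping_received(node, next_hop, is_new_mapping):
       if is_new_mapping:
           node.on_next_hop_acquired(next_hop)     # same thread algorithm

   def behaves_as_leaf(node):
       # A node with no incoming links yet behaves as a leaf; during the
       # initial build-up of the tree each node passes through this state.
       return len(node.incoming_links) == 0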
5.4. How to realize load splitting
A leaf node can easily perform load splitting by setting up two
different LSPs for the same FEC. The downstream links for the two
LSPs are simply assigned different colors. The thread algorithm now
prevents a loop in either path, but also allows the two paths to have
a common downstream node.
If some intermediate node wants to do load splitting, the following
modification is made. Assume that there are multiple next hops for
the same FEC. If there are n next hops for a particular FEC, the set
of incoming links for that FEC's LSP can be partitioned into n
subsets, where each subset can be mapped to a distinct outgoing link.
This provides n LSPs for the FEC. Each such LSP uses a distinct
color for its outgoing link. The thread algorithm now prevents a
loop in any of the paths, but also allows two or more of the paths to
have a common downstream node.
In this case, an interesting situation may happen. Let's say that in
Fig.12, node B has two incoming links, i1 and i2, and two outgoing
links, o1 and o2, such that i1 is mapped to o1, while i2 is mapped to
o2.
If a blue thread received on i1 and extended on o1 is again received
at node B on i2, the blue thread is not regarded as forming a loop,
since i1 and i2 are regarded as belonging to different subsets.
Instead, the blue thread received on i2 is extended on o2. If the
thread extended on o2 is rewound, a single loop-free LSP which
traverses node B twice is established.
      +------------------...--------------------+
      .        (bl,3)          (bl,4)           |
      .     ----[i1]---+     +--[o1]---> .... --+
      .                 \   /
      .                  v /
      |                   B
      |
      +-----------[i2]--->B----[o2]--->
          (bl,10)+            (bl,11)

              Fig.12 Load splitting at intermediate node
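
The partitioning and the per-subset loop check described above can be
sketched as follows (Python; the round-robin partition policy and all
names are assumptions of this illustration, since the text only
requires one subset per next hop and a distinct color per outgoing
link).

   # Hypothetical load-splitting state at an intermediate node.

   def build_partition(incoming_links, outgoing_links):
       # Assumed round-robin policy: partition the incoming links into one
       # subset per outgoing link (i.e. per next hop).
       partition = {out: [] for out in outgoing_links}
       for index, link in enumerate(sorted(incoming_links)):
           partition[outgoing_links[index % len(outgoing_links)]].append(link)
       return partition

   def forms_thread_loop(colors_extended, outgoing_link, thread_color):
       # The loop check is kept per outgoing link (per subset), which is
       # why the blue thread of Fig.12 may enter node B a second time on
       # i2 without being treated as a loop: it was extended on o1, not o2.
       return thread_color in colors_extended.get(outgoing_link, set())

   print(build_partition(["i1", "i2"], ["o1", "o2"]))
   # prints: {'o1': ['i1'], 'o2': ['i2']}
   print(forms_thread_loop({"o1": {"blue"}}, "o2", "blue"))
   # prints: False  (the thread is extended on o2 instead)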
There is another type of load splitting, in which packets arriving on
a single incoming link can be label switched to any one of multiple
outgoing links. This case does not seem to be a good load-splitting
scheme, since the packet order in the same FEC is not preserved.
Thus, this document does not focus on this case.
Whether or not that is a good type of load splitting, the fact remains
that ATM-LSRs cannot split load in this way, because ATM switches do
not have the capability to make forwarding decisions on a per-packet
basis.
6. Why this works
6.1. Why a thread with unknown hop count is extended
In the algorithm, a thread of unknown hop count is extended when a
thread loop is detected. This reduces the number of loop prevention
messages by merging threads (of known hop count) that are flowing
inside or outside the loop. See Appendix A.12.
6.2. Why a rewound thread cannot contain a loop
6.2.1. Case1: LSP with known link hop counts
How can we be sure that an established path does not contain a loop
when the outgoing link hop count is NOT "unknown"?
Consider a sequence of LSRs <R1, ..., Rn>, such that there is a loop
traversing the LSRs in the sequence. (I.e., packets from R1 go to
R2, then to R3, etc., then to Rn, and then from Rn to R1.)
Suppose that the thread hop count of the link between R1 and R2 is k.
Then, by the above procedures, the hop count of the link between Rn
and R1 must be k+n-1. But the algorithm also ensures that if a node
has an incoming link hop count of j, its outgoing link hop count must
be at least j+1. Hence, if we assume that the LSP established as a
result of thread rewinding contains a loop, the hop count of the link
between R1 and R2 must be at least k+n. From this we may derive the
absurd conclusion that n <= 0, and we may therefore conclude that
there is no such sequence of LSRs.
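
Restated as a chain of inequalities (LaTeX notation, added here purely
as a reading aid; h(X,Y) denotes the thread hop count of the link from
X to Y):

   \begin{align*}
     h(R_1,R_2) &= k,\\
     h(R_i,R_{i+1}) &\ge h(R_{i-1},R_i) + 1
        && \text{(outgoing $\ge$ incoming $+\,1$)},\\
     \Rightarrow\; h(R_n,R_1) &\ge k + n - 1,\\
     \Rightarrow\; h(R_1,R_2) &\ge h(R_n,R_1) + 1 \ge k + n,\\
     \Rightarrow\; k &\ge k + n \;\Rightarrow\; n \le 0,
   \end{align*}

which contradicts the fact that the loop contains at least one LSR.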
6.2.2. Case2: LSP with unknown link hop counts
An established path likewise does not contain a loop when the outgoing
link hop count is "unknown". This is because a colored thread of
unknown hop count is never rewound unless it reaches the egress.
6.3. Why L3 loop is detected
Regardless of whether the thread hop count is known or unknown, if
there is a loop, then some node in the loop will be the last node to
receive a thread over a new incoming link. This thread will always
arrive back at that node, without its color having changed. Hence
the loop will always be detected by at least one of the nodes in the
loop.
6.4. Why L3 loop is not mis-detected
Since no node ever extends the same colored thread downstream twice,
a thread loop is not detected unless there actually is an L3 routing
loop.
6.5. How a stalled thread automatically recovers from loop
Once a thread is stalled in a loop, the thread (or the path setup
request) effectively remains in the loop, so that a path
reconfiguration (i.e., thread withdrawing on the old path and thread
extending on the new path) can be issued from any node that may
receive a route change event so as to break the loop.
6.6. Why different colored threads do not chase each other
In the algorithm, multiple thread color and/or hop count updates may
happen if several leaf nodes start extending threads at the same
time. How can we prevent multiple threads from circulating indefinitely?
First, when a node finds that a thread forms a loop, it creates a new
thread of hop count "unknown". All the looping threads of a known
hop count which later arrive at the node would be merged into this
thread. Such a thread behaves like a thread absorber.
Second, the "thread extending with changing color" prevents two
threads from chasing each other.
Suppose that a received thread were always extended without changing
color. Then we would encounter the following situation.
           G        Y
           |        |
           v        v
           R1------>R2
           ^        |
           |        v
           R4<------R3

               Fig.13 Example of thread chasing
In Fig.13, (1) node G acquires R1 as a next hop, and starts to extend
a green thread of hop count 1, (2) node Y acquires R2 as a next hop,
and starts to extend a yellow thread of hop count 1, and (3) both
node G and node Y withdraw their threads before these threads go
round.
In this case, the yellow and green threads would go round and get
back to R2 and R1, respectively. When the threads get back to R2 and
R1, however, the incoming links that store the yellow and green