\begin{itemize}
  \item \verb=pim/pim_vif.hh=
  \item \verb=libxorp/vif.hh=
  \item \verb=libproto/proto_unit.hh=
\end{itemize}

PimVif contains state such as PIM Hello related information, and
protocol-related statistics for this virtual interface. Also, all
the PIM-specific methods for parsing or constructing PIM control
messages when a PIM packet is received or sent are implemented as
methods of PimVif. The parsing or construction of each message type is
implemented in a separate file with a name prefix of \verb=pim_proto=.
For example, \verb=pim_proto_cand_rp_adv.cc= implements the sending and
receiving of PIM Candidate-RP-Advertisement messages. The handling of
other message types is implemented in similarly named files.

By default, each PimVif is disabled; therefore, on startup it must be
enabled explicitly.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{PimScopeZoneTable Description}

PimScopeZoneTable is a table that contains information about scoped
zones. There is one such table per PimNode. This table is used to check
whether various control messages are allowed to be sent or accepted on a
specific network interface~\footnote{Note that in the current implementation
(March 2007) the PimScopeZoneTable is used only for PIM
Bootstrap messages. In the future, the scope zone information may be
used for other control messages as well.}.

By default, PimScopeZoneTable is empty; \ie there are no scoping zone
restrictions.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{PimMrt Description}

PimMrt is the PIM-specific multicast routing table. It is the central
and most important component: its state is modified by the PIM control
messages, and its output is the multicast forwarding state
information that is installed in the multicast forwarding engine.

The multicast routing table is composed of four tables. Each table
contains PimMre entries (described in file \verb=pim/pim_mre.hh=):

\begin{itemize}
  \item (*,*,RP) multicast routing table. This table contains all
  (*,*,RP) multicast routing entries~\footnote{A (*,*,RP) entry is an
  entry that matches all multicast groups that use one specific
  RP.}. For simplicity of implementation, this table contains an
  (*,*,RP) entry for each RP in the RpTable, even if no (*,*,RP) Join
  messages for that RP were received. The iterator for this table
  returns the entries ordered by their RP address: the numerically
  smallest addresses first. Note that each PimMre entry in this table
  has the source address set to the RP address, and the group address
  set to zero (\ie \verb=IPvX::ZERO()=).

  \item (*,G) multicast routing table. This table contains all (*,G)
  multicast routing entries. Each entry in that table contains a pointer
  to the corresponding (*,*,RP) entry for that group, or NULL if the
  group has no RP yet. The iterator for this table returns the entries
  ordered by their group address: the numerically smallest addresses
  first. Note that each PimMre entry in this table has the source
  address set to zero (\ie \verb=IPvX::ZERO()=).

  \item (S,G) multicast routing table. This table contains all (S,G)
  multicast routing entries. Each entry in that table contains a pointer
  to the corresponding (*,G) entry for that group, or NULL if there is
  no (*,G) entry. It also contains a pointer to the corresponding
  (S,G,rpt) entry if such an entry exists (see below). There are two iterators
  for this table: an iterator for the entries ordered by the numerically
  smallest source address first, and an iterator for the entries ordered
  by the numerically smallest group address first.
  \item (S,G,rpt) multicast routing table. This table contains all
  (S,G,rpt) multicast routing entries. Each entry in that table contains
  a pointer to the corresponding (*,G) entry for that group, or NULL if
  there is no (*,G) entry. It also contains a pointer to the
  corresponding (S,G) entry if such an entry exists. There are two
  iterators for this table: an iterator for the entries ordered by the
  numerically smallest source address first, and an iterator for the
  entries ordered by the numerically smallest group address first.
\end{itemize}

For simplicity of implementation, currently (March 2007) PimMrt
contains one more table: PimMrtMfc, a PIM-specific table with Multicast
Forwarding Cache (PimMfc) entries (in the future, this table may be
moved out of PimMrt to PimNode). This table contains all
entries that have been installed in the multicast forwarding table in
the multicast forwarding engine. Currently (March 2007), those
entries are source-group-specific, and are installed ``on-demand'' (\ie
only if there is an active source for some multicast group). In the future,
group-specific entries may be supported as well (assuming that the
multicast forwarding engine supports (*,G) multicast forwarding
entries).

In addition to the above tables, PimMrt contains a mechanism for
tracking dependencies among the PimMre and PimMfc entries, as well as
the PimMre and PimMfc dependencies on external state such as the RP set
or the MRIB information. For example, if the MRIB for a specific network
prefix changes, then all PimMre and PimMfc entries that depend on that
network prefix must be updated accordingly. A single change may trigger
a number of operations that must be performed on a number of entries;
therefore, we need to carefully track the state dependencies. Below is a
summary of some of the events that may trigger actions to process
entries in PimMrt:

\begin{itemize}
  \item RP-Set change: \eg if there is any change to the RP-Set that
  affects the group-to-RP mapping.

  \item MRIB change: any change in the underlying unicast routing that
  affects the Reverse-Path Forwarding information toward an RP or a
  source.

  \item Next-Hop PIM neighbor change: any change to the set of PIM
  neighbors that may affect the Next-Hop PIM Router toward a destination.

  \item Reception of a PIM Join/Prune message.

  \item Reception of a PIM Assert message.

  \item Addition or deletion of a local multicast member.

  \item Change in the Designated Router on an interface.

  \item Change in the IP address or IP subnet on an interface.

  \item Start or stop of a virtual interface.

  \item Addition or deletion of a PimMre entry.
\end{itemize}

A complete list of all input events that may trigger actions is in file
\verb=pim/pim_mre_track_state.hh= (see the
\verb=input_state_t INPUT_STATE_*= events).

In some cases, keeping track of the entries that need to be processed
for a given input event is relatively simple. For example, if the MRIB
for a network prefix changes, processing all (S,G) PimMre entries that
might be affected can be done by using the source-first iterator for the
(S,G) multicast routing table, and then iterating over all (S,G) PimMre
entries whose source address matches that network prefix. However, in
other cases we cannot use those table iterators. For example, if an RP is
deleted, we need to process all corresponding (*,G) entries that match
to that RP, and to reassign them to a new RP. In that case, to keep track
of the dependencies between the RP and the (*,G) entries, each RP entry
in the RpTable contains a list of PimMre entries that match to that
RP.
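
As an illustration of this kind of reverse-dependency bookkeeping, the
following is a minimal, hypothetical C++ sketch (the class and member
names are illustrative only and are not the actual XORP classes): each
RP entry keeps a list of the (*,G) entries that currently map to it, so
that deleting the RP needs to visit only those entries rather than scan
the whole (*,G) table.

\begin{verbatim}
// Illustrative sketch only; these are not the actual XORP classes.
#include <list>

class WcGroupEntry;              // stands in for a (*,G) PimMre entry

class RpEntry {
public:
    // Called when a (*,G) entry starts using this RP for its group.
    void add_dependent(WcGroupEntry* entry) {
        _dependents.push_back(entry);
    }

    // Called when a (*,G) entry is removed or switches to another RP.
    void remove_dependent(WcGroupEntry* entry) {
        _dependents.remove(entry);
    }

    // On RP deletion, only these entries need to be reassigned to a
    // new RP; no scan of the whole (*,G) table is required.
    const std::list<WcGroupEntry*>& dependents() const {
        return _dependents;
    }

private:
    std::list<WcGroupEntry*> _dependents;  // (*,G) entries mapped to this RP
};
\end{verbatim}

The cost of maintaining such a list is one add or remove operation per
(*,G) entry, paid when the entry is created, deleted, or changes its RP.
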
Similarly, each PimNbr entry (an entry that contains information
about a PIM neighbor) contains a list of all PimMre entries that use
that PIM neighbor as the Next-Hop Router toward the RP or the source.

The dependency tracking mechanism needs to solve another problem: for
each input event, find all the operations, and their ordering, that need
to be performed on some of the PimMre and PimMfc entries. The solution
chosen to solve this problem is to enumerate all possible input events
and output operations, and to compute a table in advance. A lookup in this
table for a given input event returns a list of the ordered output
operations that need to be performed for that event.

If there were just a few input events and output operations, it might be
possible to create such a table manually. However, there are tens of input
and output events; therefore, it is not feasible to create such a table
manually. The solution is to compute this table automatically on startup,
based on a set of rules about the various state dependencies as defined
in the PIM-SM spec. Those state dependencies are derived from the macros
in the PIM-SM protocol specification. For example, the specification
document contains macros like:

\begin{verbatim}
pim_include(S,G) =
    { all interfaces I such that:
      ( (I_am_DR( I ) AND lost_assert(S,G,I) == FALSE )
        OR AssertWinner(S,G,I) == me )
       AND  local_receiver_include(S,G,I) }
\end{verbatim}

Then, the corresponding state dependency rule in the implementation is:

\begin{verbatim}
void
PimMreTrackState::track_state_pim_include_sg(list<PimMreAction> action_list)
{
    track_state_i_am_dr(action_list);
    track_state_lost_assert_sg(action_list);
    track_state_assert_winner_sg(action_list);
    track_state_local_receiver_include_sg(action_list);
}
\end{verbatim}

In other words, if, for example, the value of \verb=lost_assert(S,G,I)= changes,
then the value of \verb=pim_include(S,G)= must be recomputed.
However, we may have some state dependency rules for
\verb=lost_assert(S,G,I)= itself; hence, if we combine all state
dependency rules, we can represent the dependencies with a collection
of uni-directional graphs. Then, to create the list of actions for each
input event, we need to consider all paths from the graph node for
that input event to all reachable output actions.

The creation of the uni-directional graphs and the extraction of the lists of
actions for each input event are performed once on startup. The resulting
lists are saved internally inside PimMrt, and used during the processing of
input events.

Finally, the last major problem that the dependency tracking mechanism
needs to solve is how to process a large number of entries triggered by
a single event without stopping the processing of other components in the
router (\eg receiving PIM control packets, or responding to a command
sent by the CLI). This problem requires attention
because the implementation is single-threaded; therefore, if processing a
single event takes too long, the rest of the pending events may
be processed too late (\eg if the periodic sending of PIM Hello messages
is delayed for too long, the PIM neighbors may time out this
router). The solution to this problem is to voluntarily suspend
the processing if it is taking too long, save the necessary state so that
the processing can continue some time later, and finally return control to
the control loop which handles all events.
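
To make this idea concrete, the following is a minimal, hypothetical C++
sketch (the names are illustrative only and are not the actual XORP code):
a long-running task processes one entry at a time, checks the elapsed
time against a threshold, and if the time slice has expired it remembers
where it stopped and reports that more work remains.

\begin{verbatim}
// Illustrative sketch of time-sliced processing; not the actual XORP code.
#include <chrono>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for the set of (*,G) entries that must be
// reassigned after an RP change.
struct GroupEntry {
    void recompute_rp() { /* reassign this (*,G) entry to its new RP */ }
};

class RecomputeRpTask {
public:
    explicit RecomputeRpTask(std::vector<GroupEntry>& entries)
        : _entries(entries), _next(0) {}

    // Process entries until either all are done or the time slice expires.
    // Returns true if more work remains and the task must be rescheduled.
    bool run_timeslice() {
        using clock = std::chrono::steady_clock;
        const auto deadline = clock::now() + std::chrono::milliseconds(100);

        while (_next < _entries.size()) {
            _entries[_next].recompute_rp();  // process one (*,G) entry
            ++_next;                         // saved state: where to resume
            if (clock::now() >= deadline)
                return true;                 // yield back to the control loop
        }
        return false;                        // the task is completed
    }

private:
    std::vector<GroupEntry>& _entries;
    std::size_t _next;
};
\end{verbatim}

The caller (\ie the control loop mentioned above) keeps invoking
\verb=run_timeslice()= until it returns false, interleaving the calls
with the handling of the other pending events.
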
Typically, the processing of some event may take too
long if there is a large number of PimMre or PimMfc entries that
need to be processed (for example, thousands of (*,G) entries if the RP
for those entries changes). In that case, we use ``time-slices'' to
compute how long the processing has taken so far.
In the above example, we check the processing time after we process each
(*,G) entry: if the elapsed time is above a threshold (\eg 100ms),
we save the appropriate state to continue the
processing later (\eg in the above example we save the address of the
next multicast group to process).

All dependency tracking processing and time-slicing uses PimMreTask
entries to keep the appropriate state. There is a single list of
PimMreTask entries per PimNode, and the list is FIFO: new tasks are
added to the end of the list, and the task at the front of the list is
processed until it is completed (\eg within one or several time-slices).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{PimBsr Description}

PimBsr is the PIM Bootstrap mechanism unit. It implements the Bootstrap
mechanism as described in \cite{PIM-SM-BOOTSTRAP}. There is one PimBsr
