% multicast.tex
\chapter{Multicast Routing}
\label{chap:multicast}

This chapter describes the usage and the internals of the multicast
routing implementation in \ns.
We first describe \href{the user interface to enable multicast routing}{Section}{sec:mcast-api}
and specify the multicast routing protocol to be used, together with the
various methods and configuration parameters specific to the
protocols currently supported in \ns.
We then describe in detail \href{the internals and the architecture of the
multicast routing implementation in \ns}{Section}{sec:mcast-internals}.

The procedures and functions described in this chapter can be found in
various files in the directories \nsf{tcl/mcast} and \nsf{tcl/ctr-mcast};
additional support routines are in \nsf{mcast\_ctrl.\{cc,h\}},
\nsf{tcl/lib/ns-lib.tcl}, and \nsf{tcl/lib/ns-node.tcl}.

\section{Multicast API}
\label{sec:mcast-api}

Multicast forwarding requires enhancements to the nodes and links in
the topology.  Therefore, the user must specify multicast requirements
to the Simulator class before creating the topology.  This is done in
one of two ways:
\begin{program}
        set ns [new Simulator -multicast on]
    {\rm or}
        set ns [new Simulator]
        $ns multicast
\end{program} %$
When multicast extensions are thus enabled, nodes will be created with
additional classifiers and replicators for multicast forwarding, and
links will contain elements to assign incoming interface labels to all
packets entering a node.

A multicast routing strategy is the mechanism by which the multicast
distribution tree is computed in the simulation.  \ns\ supports three
multicast route computation strategies: centralised, dense mode (DM),
and shared tree mode (ST).

The method \proc[]{mrtproto} in the class Simulator specifies either
the route computation strategy, for centralised multicast routing, or
the specific detailed multicast routing protocol that should be used.
%%For detailed multicast routing, \proc[]{mrtproto} will accept, as
%%additional arguments, a list of nodes that will run an instance of
%%that routing protocol.
%%Polly Huang Wed Oct 13 09:58:40 EDT 199: the above statement
%%is no longer supported.
The following are examples of valid invocations of multicast routing
in \ns:
\begin{program}
        set cmc [$ns mrtproto CtrMcast]   \; specify centralized multicast for all nodes;
                                          \; cmc is the handle for the multicast protocol object;
        $ns mrtproto DM                   \; specify dense mode multicast for all nodes;
        $ns mrtproto ST                   \; specify shared tree mode to run on all nodes;
\end{program} %$
Notice in the above examples that CtrMcast returns a handle that can
be used for additional configuration of centralised multicast routing.
The other routing protocols return a null string.  All the nodes in
the topology will run instances of the same protocol.

Multiple multicast routing protocols can be run at a node, but in this
case the user must specify which protocol owns which incoming
interface.  For this finer control \proc[]{mrtproto-iifs} is used, as
sketched below.
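The following sketch is illustrative only; the argument order
(protocol, node, incoming interface list) is an assumption that should
be checked against \proc[]{mrtproto-iifs} in \nsf{tcl/lib/ns-lib.tcl}:
\begin{program}
        $ns mrtproto-iifs DM $node0 "0 1"  \; assumed usage: DM owns iifs 0 and 1 at node0;
        $ns mrtproto-iifs ST $node0 "2"    \; assumed usage: ST owns iif 2 at the same node;
\end{program} %$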
New/unused multicast addresses are allocated using the procedure
\proc[]{allocaddr}.
%%The default configuration in \ns\ only provides for 128 node
%%topologies when multicast routing is enabled.  The procedure
%%\proc[]{expandaddr} expands the address space to support $2^{30} - 1$
%%node topologies.
\proc[]{allocaddr} %%and \proc[]{expandaddr}
is a class procedure in the class Node.

Agents use the instance procedures \proc[]{join-group} and
\proc[]{leave-group} in the class Node to join and leave multicast
groups.  These procedures take two mandatory arguments: the first
identifies the corresponding agent, and the second specifies the
group address.

An example of a relatively simple multicast configuration is:
\begin{program}
        set ns [new Simulator {\bfseries{}-multicast on}]  \; enable multicast routing;
        set group [{\bfseries{}Node allocaddr}]            \; allocate a multicast address;
        set node0 [$ns node]                 \; create multicast capable nodes;
        set node1 [$ns node]
        $ns duplex-link $node0 $node1 1.5Mb 10ms DropTail

        set mproto DM                        \; configure multicast protocol;
        set mrthandle [{\bfseries{}$ns mrtproto $mproto}]  \; all nodes will contain multicast protocol agents;

        set udp [new Agent/UDP]              \; create a source agent at node 0;
        $ns attach-agent $node0 $udp
        set src [new Application/Traffic/CBR]
        $src attach-agent $udp
        {\bfseries{}$udp set dst_addr_ $group}
        {\bfseries{}$udp set dst_port_ 0}

        set rcvr [new Agent/LossMonitor]     \; create a receiver agent at node 1;
        $ns attach-agent $node1 $rcvr
        $ns at 0.3 "{\bfseries{}$node1 join-group $rcvr $group}"  \; join the group at simulation time 0.3 (sec);
\end{program} %$
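A receiver can likewise depart the group later in the simulation via
\proc[]{leave-group}, which takes the same two arguments; a minimal
sketch (the departure time of 0.9 seconds is purely illustrative):
\begin{program}
        $ns at 0.9 "$node1 leave-group $rcvr $group"  \; leave the group at simulation time 0.9 (sec);
\end{program} %$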
\subsection{Multicast Behavior Monitor Configuration}

\ns\ supports a multicast monitor module that can trace user-defined
packet activity.  The module periodically counts the number of packets
in transit and prints the results to specified files.
\proc[]{attach} enables a monitor module to print output to a file.
\proc[]{trace-topo} inserts monitor modules into all links.
\proc[]{filter} allows accounting on a specified packet header, a
field in that header, and a value for that field.  Calling
\proc[]{filter} repeatedly results in an AND effect on the filtering
condition.  \proc[]{print-trace} notifies the monitor module to begin
dumping data.  \code{ptype()} is a global array that takes a packet
type name (as seen in \proc[]{trace-all} output) and maps it into the
corresponding value.  A simple configuration to filter cbr packets on
a particular group is:
\begin{program}
        set mcastmonitor [$ns McastMonitor]
        set chan [open cbr.tr w]                        \; open trace file;
        $mcastmonitor attach $chan                      \; attach trace file to the McastMonitor object;
        $mcastmonitor set period_ 0.02                  \; default 0.03 (sec);
        $mcastmonitor trace-topo                        \; trace entire topology;
        $mcastmonitor filter Common ptype_ $ptype(cbr)  \; filter on ptype_ in the Common header;
        $mcastmonitor filter IP dst_ $group             \; AND filter on dst_ address in the IP header;
        $mcastmonitor print-trace                       \; begin dumping periodic traces to specified files;
\end{program} %$

The following sample output illustrates the output file format
(time, count):
{\small
\begin{verbatim}
0.16 0
0.17999999999999999 0
0.19999999999999998 0
0.21999999999999997 6
0.23999999999999996 11
0.25999999999999995 12
0.27999999999999997 12
\end{verbatim}}

\subsection{Protocol-Specific Configuration}

In this section, we briefly illustrate the protocol-specific
configuration mechanisms for all the protocols implemented in \ns.

\paragraph{Centralized Multicast}
The centralized multicast is a sparse mode implementation of
multicast, similar to PIM-SM \cite{Deer94a:Architecture}.  A
Rendezvous Point (RP) rooted shared tree is built for a multicast
group.  The actual sending of prune, join, and other control messages
to set up state at the nodes is not simulated.  A centralized
computation agent is used to compute the forwarding trees and set up
multicast forwarding state, \tup{S, G}, at the relevant nodes as new
receivers join a group.  Data packets from the senders to a group are
unicast to the RP, even if there are no receivers for the group.

The method of enabling centralised multicast routing in a simulation is:
\begin{program}
        set mproto CtrMcast                  \; set multicast protocol;
        set mrthandle [$ns mrtproto $mproto]
\end{program} %$
The command procedure \proc[]{mrtproto} returns a handle to the
multicast protocol object.  This handle can be used to control the RP
and the boot-strap-router (BSR), to switch tree-types for a particular
group from shared trees to source specific trees, and to recompute
multicast routes.
\begin{program}
        $mrthandle set_c_rp $node0 $node1       \; set the RPs;
        $mrthandle set_c_bsr $node0:0 $node1:1  \; set the BSRs, specified as a list of node:priority pairs;
        $mrthandle get_c_rp $node0 $group       \; get the current RP for the group;
        $mrthandle get_c_bsr $node0             \; get the current BSR;
        $mrthandle switch-treetype $group       \; switch to a source specific or shared tree;
        $mrthandle compute-mroutes              \; recompute routes; usually invoked automatically as needed;
\end{program} %$
Note that whenever network dynamics occur or unicast routing changes,
\proc[]{compute-mroutes} can be invoked to recompute the multicast
routes.  The instantaneous re-computation feature of centralised
algorithms may result in causality violations during the transient
periods.
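As an illustrative sketch only, a link failure could be scheduled
together with an explicit route re-computation.  This assumes the
\proc[]{rtmodel-at} network dynamics interface; in most scripts the
re-computation is triggered automatically and need not be scheduled by
hand:
\begin{program}
        $ns rtmodel-at 1.0 down $node0 $node1      \; take the link down at 1.0 (sec);
        $ns at 1.0 "$mrthandle compute-mroutes"    \; explicitly recompute the multicast routes;
\end{program} %$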
\paragraph{Dense Mode}
The Dense Mode protocol (\code{DM.tcl}) is an implementation of a
dense--mode--like protocol.  Depending on the value of the DM class
variable \code{CacheMissMode}, it can run in one of two modes.  If
\code{CacheMissMode} is set to \code{pimdm} (default), PIM-DM-like
forwarding rules will be used.  Alternatively, \code{CacheMissMode}
can be set to \code{dvmrp} (loosely based on DVMRP \cite{rfc1075}).
The main difference between these two modes is that DVMRP maintains
parent--child relationships among nodes to reduce the number of links
over which data packets are broadcast.  The implementation works on
point-to-point links as well as LANs, and adapts to network dynamics
(links going up and down).

Any node that receives data for a particular group for which it has no
downstream receivers sends a prune upstream.  A prune message causes
the upstream node to initiate prune state at that node.  The prune
state prevents that node from sending data for that group downstream
to the node that sent the original prune message while the state is
active.  The time duration for which a prune state is active is
configured through the DM class variable \code{PruneTimeout}.  A
typical DM configuration is shown below:
\begin{program}
        DM set PruneTimeout 0.3           \; default 0.5 (sec);
        DM set CacheMissMode dvmrp        \; default pimdm;
        $ns mrtproto DM
\end{program} %$

\paragraph{Shared Tree Mode}
Simplified sparse mode (\code{ST.tcl}) is a version of a shared--tree
multicast protocol.  The class variable array \code{RP\_}, indexed by
group, determines which node is the RP for a particular group.  For
example:
\begin{program}
        ST set RP_($group) $node0
        $ns mrtproto ST
\end{program} %$
At the time the multicast simulation is started, the protocol will
create and install encapsulator objects at nodes that have multicast
senders and decapsulator objects at RPs, and connect them.  To join a
group, a node sends a graft message towards the RP of the group.  To
leave a group, it sends a prune message.  The protocol currently does
not support dynamic changes or LANs.

\paragraph{Bi-directional Shared Tree Mode}
\code{BST.tcl} is an experimental version of a bi--directional
shared--tree protocol.  As in shared tree mode, RPs must be configured
manually by using the class array \code{RP\_}.  The protocol currently
does not support dynamic changes or LANs.
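A minimal configuration sketch, mirroring the ST example above; this
assumes \code{BST} is selected through \proc[]{mrtproto} in the same
way as the other protocols:
\begin{program}
        BST set RP_($group) $node0  \; configure the RP for the group, as for ST;
        $ns mrtproto BST
\end{program} %$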
\section{Internals of Multicast Routing}
\label{sec:mcast-internals}

We describe the internals in three parts: first, the classes that
implement and support multicast routing; second, the specific protocol
implementation details; and finally, a list of the variables that are
used in the implementations.

\subsection{The classes}

The main classes in the implementation are the
\clsref{mrtObject}{../ns-2/tcl/mcast/McastProto.tcl} and the
\clsref{McastProtocol}{../ns-2/tcl/mcast/McastProto.tcl}.  There are
also extensions to the base classes: Simulator, Node, Classifier,
\etc.  We describe these classes and extensions in this subsection.
The specific protocol implementations also use adjunct data structures
for specific tasks, such as the timer mechanisms used by detailed
dense mode and the encapsulation/decapsulation agents for centralised
multicast; we defer the description of these objects to the section on
the particular protocol itself.

\paragraph{mrtObject class}
There is one mrtObject (aka Arbiter) object per multicast capable
node.  This object supports the ability for a node to run multiple
multicast routing protocols by maintaining an array of multicast
protocols indexed by the incoming interface.  Thus, if there are
several multicast protocols at a node, each interface is owned by just
one protocol.  The node uses the arbiter to perform protocol actions,
either on a specific protocol instance active at that node, or on all
protocol instances at that node.
\begin{alist}
{\let\[=[\let\]=]\proc[instance, \[iiflist\]]{addproto}} &
        adds the handle for a protocol instance to its array of
        protocols.  The second, optional, argument is the incoming
        interface list controlled by the protocol.  If this argument
        is an empty list or not specified, the protocol is assumed to
        run on all interfaces (just one protocol). \\
\proc[protocol]{getType} &
        returns the handle to the protocol instance active at that
        node that matches the specified type (first and only
        argument).  This function is often used to locate a protocol's
        peer at another node.  An empty string is returned if a
        protocol of the given type could not be found. \\
\proc[op, args]{all-mprotos} &