   5. Delete the PSB and return.

   f. ResvTearPro: This function is invoked by the Arr state when a
   ResvTear message is received.

   1. Determine the Outgoing Interface.
   2. Process the flow descriptor list of the arrived ResvTear message.
   3. Check for the RSB whose Session Object and Filter Spec Object
      match those of the ResvTear message; if there is no such RSB,
      return.
   4. If such an RSB is found and the Resv Refresh Needed Flag is on,
      send a ResvTear message to all the Previous Hops that are in the
      Refresh PHOP List.
   5. Finally, delete the RSB.

   g. ResvConfPro: This function is invoked by the Arr state when a
   ResvConf message is received.  The Resv Confirm is forwarded to the
   IP address that was in the Resv Confirm Object of the received
   ResvConf message.

   h. UpdateTrafficControl: This function is called by PathMsgPro and
   ResvMsgPro; its input is an RSB.  (A simplified sketch of steps 2
   and 4 appears after Section 4.2.8.)

   1. The RSB list is searched for the RSBs whose Session Object and
      Filter Spec Object match those of the input RSB.
   2. The effective kernel TC_Flowspec is computed over all these
      RSBs.
   3. If the Filter Spec Object of the RSB does not match any Filter
      Spec Object in the TC Filter Spec List, add the Filter Spec
      Object to the TC Filter Spec List.
   4. If the FlowSpec Object of the input RSB is greater than the
      TC_Flowspec, turn on the Is_Biggest flag.
   5. Search for the matching Traffic Control State Block (TCSB) whose
      Session Object, Outgoing Interface, and Filter Spec Object match
      those of the input RSB.
   6. If such a TCSB is not found, create a new TCSB.
   7. If a matching TCSB is found, modify the reservations.
   8. If the Is_Biggest flag is on, turn on the Resv Refresh Needed
      Flag; otherwise send a ResvConf message to the IP address in the
      ResvConfirm Object of the input RSB.

4.2.4 pathmsg: The functions of this state are performed through the
   function call PathMsgPro described above.

4.2.5 resvmsg: The functions of this state are performed through the
   function call ResvMsgPro described above.

4.2.6 ptearmsg: The functions of this state are performed through the
   function call PathTearPro described above.

4.2.7 rtearmsg: The functions of this state are performed through the
   function call ResvTearPro described above.

4.2.8 rconfmsg: The functions of this state are performed through the
   function call ResvConfPro described above.
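   As an illustration only, and not as part of the model description,
   the flowspec merge of steps 2 and 4 of UpdateTrafficControl might
   be sketched in C as follows.  The single-number flowspec, the rsb
   fields, and the reading of the matching RSBs as the other RSBs for
   the same Session Object and Filter Spec Object are simplifying
   assumptions made for this example.

      #include <stddef.h>

      /* Assumed one-dimensional flowspec; a real FlowSpec Object
       * carries a token-bucket description.                        */
      struct flowspec {
          double bandwidth;
      };

      /* Assumed minimal Reservation State Block (RSB).             */
      struct rsb {
          struct flowspec  spec;
          struct rsb      *next;     /* list of matching RSBs       */
      };

      /* Steps 2 and 4: compute the effective kernel TC_Flowspec
       * over the matching RSBs and set *is_biggest when the input
       * RSB's FlowSpec exceeds it.                                  */
      void compute_tc_flowspec(const struct rsb *matches,
                               const struct rsb *input,
                               struct flowspec *tc_flowspec,
                               int *is_biggest)
      {
          const struct rsb *r;

          tc_flowspec->bandwidth = 0.0;
          for (r = matches; r != NULL; r = r->next)
              if (r->spec.bandwidth > tc_flowspec->bandwidth)
                  tc_flowspec->bandwidth = r->spec.bandwidth;

          *is_biggest = (input->spec.bandwidth > tc_flowspec->bandwidth);
      }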
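   For illustration only, the dispatch performed by the Arr state
   through the states of Sections 4.2.4 to 4.2.8 can be pictured as a
   switch on the received message type.  The message-type constants
   and the rsvp_msg structure below are assumptions; only the names of
   the processing functions come from the text above.

      enum rsvp_msg_type {
          MSG_PATH, MSG_RESV, MSG_PATHTEAR, MSG_RESVTEAR, MSG_RESVCONF
      };

      struct rsvp_msg {
          enum rsvp_msg_type type;
          /* Session Object, flow descriptor list, ... (omitted)    */
      };

      /* Stubs standing in for the routines of Section 4.2.3.       */
      static void PathMsgPro (const struct rsvp_msg *m) { (void)m; }
      static void ResvMsgPro (const struct rsvp_msg *m) { (void)m; }
      static void PathTearPro(const struct rsvp_msg *m) { (void)m; }
      static void ResvTearPro(const struct rsvp_msg *m) { (void)m; }
      static void ResvConfPro(const struct rsvp_msg *m) { (void)m; }

      /* Arr state: call the processing routine that matches the
       * message type, then take the default transition to idle.    */
      void arr_state(const struct rsvp_msg *m)
      {
          switch (m->type) {
          case MSG_PATH:     PathMsgPro(m);  break;  /* 4.2.4 pathmsg  */
          case MSG_RESV:     ResvMsgPro(m);  break;  /* 4.2.5 resvmsg  */
          case MSG_PATHTEAR: PathTearPro(m); break;  /* 4.2.6 ptearmsg */
          case MSG_RESVTEAR: ResvTearPro(m); break;  /* 4.2.7 rtearmsg */
          case MSG_RESVCONF: ResvConfPro(m); break;  /* 4.2.8 rconfmsg */
          }
      }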
4.3  RSVP on Hosts

   Figure 9 shows the process of RSVP on hosts.

4.3.1  Init

   Initializes all the variables.  Default transition to the idle
   state.

                     [Figure 9: RSVP process on hosts]

4.3.2  idle

   This state transits to the Arr state on packet arrival.

4.3.3  Arr

   This state calls the appropriate functions depending on the type of
   message received.  Default transition to the idle state.

   a. MakeSessionCall: This function is called from the Arr state
   whenever a Session call is received from the local application.  (A
   sketch of this lookup appears after Section 4.3.7.)

   1. Search for the Session Information.
   2. If it is found, return the corresponding Session Id.
   3. If the session information is not found, assign a new Session Id
      to the session.
   4. Make an UpCall to the local application with this Session Id.

   b. MakeSenderCall: This function is called from the Arr state
   whenever a Sender call is received from the local application.

   1. Get the information corresponding to the Session Id and create a
      Path message corresponding to this session.
   2. A copy of the packet is buffered and used by the host to send
      the PATH message periodically.
   3. This packet is sent to the IP layer.

   c. MakeReserveCall: This function is called from the Arr state
   whenever a Reserve call is received from the local application.
   This function will create and send a Resv message.  The packet is
   also buffered for later use.

   d. MakeReleaseCall: This function is called from the Arr state
   whenever a Release call is received from the local application.
   This function generates a PathTear message if the local application
   is a sender, or a ResvTear message if the local application is a
   receiver.

4.3.4  Session

   This state's function is performed by the MakeSessionCall function.

4.3.5  Sender

   This state's function is performed by the MakeSenderCall function.

4.3.6  Reserve

   This state's function is performed by the MakeReserveCall function.

4.3.7  Release

   This state's function is performed by the MakeReleaseCall function.
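   As an illustration only, the Session Id handling of MakeSessionCall
   (steps 1-4 in Section 4.3.3) might look as follows in C.  The
   fixed-size session table, the destination address/port key, and the
   upcall() stub are assumptions made for this example.

      #define MAX_SESSIONS 64

      struct session {
          int            in_use;
          unsigned int   dest_addr;   /* assumed session key:        */
          unsigned short dest_port;   /* destination address + port  */
      };

      static struct session session_tab[MAX_SESSIONS];

      /* Stub for the UpCall that delivers the Session Id to the
       * local application (step 4).                                 */
      static void upcall(int session_id)
      {
          (void)session_id;
      }

      int make_session_call(unsigned int dest_addr,
                            unsigned short dest_port)
      {
          int i, free_slot = -1;

          /* Steps 1-2: search for existing session information.    */
          for (i = 0; i < MAX_SESSIONS; i++) {
              if (session_tab[i].in_use) {
                  if (session_tab[i].dest_addr == dest_addr &&
                      session_tab[i].dest_port == dest_port) {
                      upcall(i);                /* existing Id       */
                      return i;
                  }
              } else if (free_slot < 0) {
                  free_slot = i;
              }
          }

          /* Step 3: not found; assign a new Session Id.             */
          if (free_slot < 0)
              return -1;                        /* table full        */
          session_tab[free_slot].in_use    = 1;
          session_tab[free_slot].dest_addr = dest_addr;
          session_tab[free_slot].dest_port = dest_port;

          /* Step 4: UpCall with the newly assigned Session Id.      */
          upcall(free_slot);
          return free_slot;
      }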
5. Multicast Routing Model Interface

   Because this set of models was intended particularly to enable
   evaluation by simulation of various multicast routing protocols, we
   give particular attention in this section to the steps necessary to
   interface a routing protocol model to the other models.  We have
   available implementations of DVMRP and OSPF, which we will describe
   below.  Instructions for invoking these models are contained in a
   separate User's Guide for the models.

5.1  Creation of multicast routing processor node

   Interfacing a multicast routing protocol using the OPNET Simulation
   package requires the creation of a new routing processor node in
   the node editor and linking it via packet streams.  Packet streams
   are unidirectional links used to interconnect processor nodes,
   queue nodes, transmitters and receiver nodes.  A duplex connection
   between two nodes is represented by using two unidirectional links
   to connect the two nodes to and from each other.

   A multicast routing processor node is created in the node editor,
   and links are created to and from the processors (duplex
   connections) that interact with this module: the IGMP processor
   node and the IP processor node.  Within the node editor, a new
   processor node can be created by selecting the button for processor
   creation (plain gray node on the node editor control panel) and by
   clicking on the desired location in the node editor to place the
   node.  Upon creation of the processor node, the name of the
   processor can be specified by right clicking the mouse button and
   entering the name value in the attribute box presented.  Links to
   and from this node are generated by selecting the packet stream
   button (represented by two gray nodes connected with a solid green
   arrow on the node editor control panel), left clicking the mouse
   button to specify the source of the link and right clicking the
   mouse button to mark the destination of the link.

5.2  Interfacing processor nodes

   The multicast routing processor node is linked to the IP processor
   node and the IGMP processor node, each with a duplex connection.  A
   duplex connection between two nodes is represented by two uni-
   directional links interconnecting them, providing a bidirectional
   flow of information or interrupts, as shown in Figure 6.  The IP
   processor node (in the subnet router) interfaces with the multicast
   routing processor node, the unicast routing processor node, the
   Resource Reservation processor node (RSVP), the ARP processor node
   (only on subnet routers and hosts), the IGMP processor node, and
   finally the MAC processor node (only on subnet routers and hosts),
   each with a duplex connection, with exceptions for the ARP and MAC
   nodes.

5.2.1  Interfacing ARP and MAC processor nodes

   The service of the ARP node is required only in the direction from
   the IP layer to the MAC layer (requiring only a unidirectional link
   from the IP processor node to the ARP processor node).  The MAC
   processor node on the subnet router receives multicast packets
   destined for all multicast groups in the subnet, in contrast to the
   MAC node on subnet hosts, which only receives multicast packets
   destined specifically for its multicast group.  The MAC node
   connects to the IP processor node with a single uni-directional
   link from it to the IP node.

5.2.2  Interfacing IGMP, IP, and multicast routing processor nodes

   The IGMP processor node interacts with the multicast routing
   processor node, the unicast routing processor node, and the IP
   processor node.  Because the IGMP node is linked to the IP node, it
   is able to update the group membership table (in this model, the
   group membership table is represented by the local interface
   (interface 0) of the multicast routing table data structure) within
   the IP node.  This update triggers a signal from the IGMP node to
   the multicast routing processor node, causing it to reassess the
   multicast routing table within the IP node.  If the change in the
   group membership table warrants a modification of the multicast
   routing table, the multicast routing processor node interacts with
   the IP node to modify the current multicast routing table according
   to the new group membership information updated by IGMP.

5.2.2.1  Modification of group membership table

   The change in group membership occurs with the decision at a host
   to leave or join a particular multicast group.  The IGMP process on
   the gateway periodically sends out queries to the IGMP processes on
   hosts within the subnet in an attempt to determine which hosts are
   currently receiving packets from particular groups.  Not receiving
   a response for a pending IGMP host query specific to a group
   indicates to the gateway IGMP that no host belonging to that group
   exists in the subnet.  This occurs when the last remaining member
   of a multicast group in the subnet leaves.  In this case the IGMP
   processor node updates the group membership table and triggers a
   modification of the multicast routing table by alerting the
   multicast routing processor node.  A prune message specific to the
   group is initiated and propagated upward, establishing a prune
   state for the interface leading to the present subnet, effectively
   removing this subnet from the group-specific multicast spanning
   tree and potentially leading to additional pruning of spanning tree
   edges as the prune message travels higher up the tree.  Joining a
   multicast group is also managed by the IGMP process, which updates
   the group membership table, leading to a possible modification of
   the multicast routing table.

5.2.2.2  Dependency on unicast routing protocol

   The multicast routing protocol depends on a unicast routing
   protocol (RIP or OSPF) to handle multicast routing.  The next hop
   interface to the source of the received packet, or the upstream
   interface, is determined using the unicast routing protocol to
   trace the reverse path back to the source of the packet.  If the
   received packet arrived on this upstream interface, the packet can
   be propagated downstream through its downstream interfaces
   (excluding the interface on which the packet was received).
   Otherwise, the packet is deemed to be a duplicate and dropped,
   halting its propagation downstream.  This repeated reverse path
   checking and broadcasting eventually generates the spanning tree
   for multicast routing of packets.  To determine the reverse path
   forwarding interface of a received multicast packet propagated up
   from the IP layer, the multicast routing processor node retrieves a
   copy of the unicast routing table from the IP processor node and
   uses it to recalculate the multicast routing table in the IP
   processor node.
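   As an illustration only, the reverse-path check described above
   might be sketched in C as follows.  The interface numbering, the
   downstream_iface table, and the stubs for the unicast routing table
   lookup and for packet transmission are assumptions made for this
   example.

      #define MAX_IFACES 8

      /* Assumed per-group state: nonzero marks an interface that is
       * on the group-specific spanning tree (not pruned).           */
      static int downstream_iface[MAX_IFACES] = { 1, 1, 1, 0, 0, 0, 0, 0 };

      /* Stub for the unicast routing table lookup: returns the
       * interface on the reverse path toward src, i.e. the expected
       * upstream interface.                                         */
      static int unicast_next_hop_iface(unsigned int src)
      {
          (void)src;
          return 0;
      }

      /* Stub for handing the packet to an outgoing interface.       */
      static void forward(const void *pkt, int iface)
      {
          (void)pkt;
          (void)iface;
      }

      /* Accept the packet only if it arrived on the upstream
       * interface; otherwise drop it as a duplicate.  Accepted
       * packets are copied to every downstream interface except the
       * one they arrived on.                                         */
      void rpf_forward(const void *pkt, unsigned int src, int in_iface)
      {
          int i;

          if (in_iface != unicast_next_hop_iface(src))
              return;

          for (i = 0; i < MAX_IFACES; i++)
              if (downstream_iface[i] && i != in_iface)
                  forward(pkt, i);
      }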
5.3  Interrupt Generation

   Using the OPNET tools, interrupts to the multicast routing
   processor node are generated in several ways.  One is the arrival
   of a multicast packet along a packet stream at the multicast
   routing processor node: the packet is received by the MAC node and
   propagated up to the IP node, where the IP header is discarded and
   a determination is made as to which upper layer protocol should
   receive the packet.  A second type of interrupt generation occurs
   by remote
