Collection Service

$Id: collection.draft,v 1.1.2.2 2006/02/02 03:01:59 rfonseca76 Exp $

Rodrigo Fonseca, UCB
Omprakash Gnawali, USC
Kyle Jamieson, MIT

This document describes initial thoughts on the collection protocol to be
produced by the net2-wg. The main improvements over the de facto standard
collection from TinyOS 1.x are as follows:

 1. decoupling of neighbor management and link estimation from the
    route-establishing element
 2. more general modularization, in the direction of allowing very different
    multihop protocols to be implemented along the same lines. This structure
    can evolve to a more general network layer architecture.
 3. decoupling of the tree identifier from the node address

Service:

The service provided by the collection network service is best-effort, multihop
delivery of packets to the root of a specified tree. The interfaces provided
are for sending, receiving, intercepting, and snooping packets. Packets in
transit can be intercepted for in-network processing, and traffic can be
snooped by forwarding nodes.

[Rodrigo: This needs discussion, as we are parameterizing the send interface
per tree id, and then want to further demultiplex by N_ID. These interfaces are
parameterized by a network-layer id (in contrast to an AM id). AM ids are used
for multiplexing at the link layer, and NID is used for demultiplexing among
different users of the network layer.]

Best-effort means that absolute reliability should be obtained by higher-layer
mechanisms, such as end-to-end retransmissions or forward error correcting
coding. However, it does not preclude network-level and link-level
retransmissions.

There can be multiple trees in a network, and there can be multiple roots in a
tree. A network with a single root is a special case of the former. A tree with
multiple roots provides the semantics of anycast: the message will be delivered
to one of the roots. The specific tree is identified by a tree identifier
(tree_id). The tree_id is explicitly decoupled from the node id of the root of
the tree, and is considered a network-level name.

This decoupling has several advantages:

 1. it allows transparent substitution of one root by another, for example in
    case of failures
 2. it allows any node to become the root of a tree
 3. it allows trees with multiple roots

The specific tree(s) an application sends messages to can be configured at
compile time. A shim module can be interposed between the network layer and
the application to provide an address-free sending interface to a specified
tree_id.

Service decomposition:

There are two main functionalities in the collection network protocol,
corresponding to the data and control planes. The data plane is responsible for
forwarding messages, and the control plane is responsible for establishing the
routes in the network. In other words, the control plane tells *where* to send
messages, while the data plane decides *how* and *when* to send them.
Correspondingly, there are two main modules in the implementation: a forwarding
engine and a routing engine. The basic interface between the two is a lookup
call that obtains a set of next hops to forward a message to.

The forwarding engine is responsible for the forwarding discipline: queueing,
scheduling, and network-level retransmission. The routing engine is responsible
for building and maintaining the information necessary to get the next hops for
a given message; for example, it should maintain the tree structure.
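As an illustration of this split (not part of the draft), the following is a
minimal sketch of how a forwarding engine might perform the next-hop lookup
through the BasicRouting interface defined in the Interfaces section below. The
bound MAX_NEXT_HOPS, the helper forwardNext(), and the in/out meaning of the n
argument are assumptions, not specified by this document.

  // Sketch only: assumes this lives inside the ForwardingEngine module, which
  // uses interface BasicRouting[uint8_t tree_id] (see Interfaces below).
  enum { MAX_NEXT_HOPS = 4 };       // assumed bound on candidate next hops

  void forwardNext(message_t *msg, uint8_t tree_id) {
    neighbor_t nextHops[MAX_NEXT_HOPS];
    uint8_t n = MAX_NEXT_HOPS;      // assumed in/out: capacity in, count out

    // Ask the routing engine for candidate next hops toward the root of tree_id.
    if (call BasicRouting.getNextHops[tree_id](nextHops, &n) == SUCCESS && n > 0) {
      // Transmit to nextHops[0]; the remaining entries can serve as alternates
      // for network-level retransmission (see the data-plane protocol below).
    }
  }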
The routing engine should use the services of a link estimator to obtain link
quality estimates for different neighbors.

Control-plane protocol:

The tree formation protocol is based on a variation of the distance vector
protocol.

Control packet format (field widths in bits):

  [sizeof(neighbor_t) in bits]:source  4:count  4:total |
  [ 8:tree_id | 8:root_id | 8:root_seqno | 8:hopcount | 16:cost ]+

  source is the neighbor id
  total  is how many control packets make up this message (if there are more
         roots than fit into a single packet)
  count  is which one of the total packets this one is

Then there is a sequence of cost-to-root entries, each specifying the tree_id,
the root_id in that tree, the hopcount of the sending node, the cost to that
root through the sender, and the sequence number of the root message.

A tree formation message is simply one initiated by the root, in which the
hopcount and the cost are 0 and the sequence number is incremented with each
broadcast. The root is the only node which increments the cost-to-root entry
sequence numbers.

The tree_id allows multiple trees; the hopcount allows tree formation and the
establishment of hopcounts; the cost allows parent selection. Cost MUST be a
cumulative (additive or multiplicative) measure that represents the cost or
quality of getting a message from the sending node to the root. The root_id and
root_seqno are needed to prevent count-to-infinity problems.

Each node actively maintains a parent towards the root of a tree. This is the
neighbor with the lowest composed cost (my cost to the neighbor + the
neighbor's cost). This parent is not necessarily used for routing: it is used
to establish the hopcount, and thus the tree structure. A packet can be routed
to any node that decreases the cost to the root. Should a parent die, it will
be readily replaced by another neighbor with a lower cost. This can be done
proactively by looking at the neighbor table, or by waiting for the next
distance vector update from a neighbor.

Data-plane protocol:

There are two header fields in the data-plane packets that allow successful
routing:

  typedef struct {
    uint8_t  tree_id;
    uint16_t min_cost;
  } nc_header;  // network collection header

The forwarding engine can elect to perform network-level retransmissions to
alternate next hops should the transmission to a given next hop fail. This
allows recovery in routing on time scales shorter than those required for route
convergence, and leverages routing engines that can provide multiple next hops
towards a given destination. This technique can reduce the buffer requirements
at forwarding nodes, and spread the load in the face of congestion. However, it
is still an open question whether this is the best response: it might also
spread congestion.

Link Estimation:

Link estimation can be done in a variety of ways, and we do not impose one
here. It is decoupled from the establishment of routes. There is a narrow
interface between the link estimator and the routing engine; see interface
LinkEstimator below. The one requirement is that the quality returned be
standardized. Two candidates are the probability of successful delivery of a
packet, and the ETX, or expected number of transmissions needed to get a
message across to the other node. Some protocols might also be interested in
just the reverse or forward quality.
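For illustration only (this calculation is not specified by the draft), the
usual ETX definition derives the metric from the forward and reverse delivery
probabilities of a link. The sketch below assumes the result is reported as a
uint8_t in tenths of a transmission, to match the uint8_t quality values
returned by the interfaces below; the helper name and scaling are assumptions.

  #include <stdint.h>

  // Sketch: ETX from forward/reverse delivery ratios (df, dr in [0, 1]).
  uint8_t etx_from_delivery_ratios(float df, float dr) {
    float etx;
    if (df <= 0.0f || dr <= 0.0f)
      return 255;                  // unusable link: report worst quality
    etx = 1.0f / (df * dr);        // expected transmissions, counting the ACK path
    etx *= 10.0f;                  // report in tenths of a transmission
    return (etx > 255.0f) ? 255 : (uint8_t)etx;
  }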
It is the job of the link estimator to convert other metrics to the
standardized one, and possibly to have its own control traffic to exchange
reverse link qualities.

There are currently three ways of doing this: using LQI, using RSSI, or using
an averaged history of packet losses on a link. In the case of the former, it
is possible that the link estimator be interposed in the network stack and
insert a header with the source address in each outgoing packet.

Some Issues:

Any node can become the root of a tree, but nodes can keep state about only a
limited number of trees. An issue arises when more trees are active than the
nodes have space for, and there has to be a decision on which trees to keep
information about.

One solution is to have it be first-come, first-served. Another is to have a
deterministic priority between tree ids, such that a message for a new tree
will displace a lower-priority one. Eventually a new tree with a
higher-priority id will reach the root of a lower-priority one, and that root
will go silent.

Maybe the simpler solution for now is to have a first-come, first-served
policy, such that a new root will fail to propagate its messages on a network
that already has more than one tree.

Interfaces:

(The Send interface is to be used, parameterized by tree_id.)

  interface BasicRouting {
    // To be parameterized by tree_id
    command result_t getNextHops(neighbor_t* nextHops, uint8_t* n);
  }

  interface CostBasedRouting {
    // To be parameterized by tree_id
    command uint16_t getMyCost();
    command uint16_t getNeighborCost(neighbor_t neighbor);
  }

  interface REControl {
    command result_t initializeRH(message_t *msg, uint8_t tree_id);
    command uint8_t  getHeaderSize();
    command result_t startRoot(uint8_t tree_id);
    command result_t stopRoot(uint8_t tree_id);
  }

  interface LinkEstimator {
    command uint8_t getLinkQuality(neighbor_t neighbor);
    command uint8_t getReverseQuality(neighbor_t neighbor);
    command uint8_t getForwardQuality(neighbor_t neighbor);
  }

  interface NeighborTable {
    event void evicted(neighbor_t neighbor);
  }

(Initially, typedef uint16_t neighbor_t.)

Components:

  LinkEstimator {
    provides {
      interface LinkEstimator;
      interface NeighborTable;
    }
  }

  MTreeRoutingEngine {
    provides {
      interface REControl;
      interface BasicRouting[uint8_t tree_id];
      interface CostBasedRouting[uint8_t tree_id];
    }
    uses {
      interface LinkEstimator;
      interface NeighborTable;
      interface SendMsg[CONTROL_AM_ID];
      interface ReceiveMsg[CONTROL_AM_ID];
    }
  }

  ForwardingEngine {
    provides {
      interface Send[uint8_t tree_id];
      interface Receive[uint8_t tree_id];
      interface Intercept;
      interface Snoop;
      interface Packet;
    }
    uses {
      interface REControl;
      interface BasicRouting[uint8_t tree_id];
      interface CostBasedRouting[uint8_t tree_id];
      interface SendMsg[COLLECTION_AM_ID];
      interface ReceiveMsg[COLLECTION_AM_ID];
    }
  }

Observation: while some of these interfaces are similar to the ones specified
for multihop routing in TinyOS 1.x, several changes were required. The TinyOS
1.x framework assumed address-free protocols and only one next hop per message.
There was also an assumed coupling between parent selection and neighbor/link
quality estimation, which resulted in the exposure of some parameters not
needed by the forwarding part. RouteSelect roughly parallels BasicRouting,
CostBasedRouting, and REControl, while calls such as getQuality should not be
exposed to the ForwardingEngine.
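For concreteness (this wiring is not part of the draft), a possible nesC
configuration tying the three components above together might look as follows.
The configuration name CollectionC is hypothetical, and the SendMsg/ReceiveMsg
wiring to the AM stack is omitted.

  configuration CollectionC {
    provides {
      interface Send[uint8_t tree_id];
      interface Receive[uint8_t tree_id];
    }
  }
  implementation {
    components ForwardingEngine, MTreeRoutingEngine, LinkEstimator;

    // Export the forwarding engine's data-plane interfaces.
    Send    = ForwardingEngine.Send;
    Receive = ForwardingEngine.Receive;

    // The forwarding engine asks the routing engine for next hops and costs.
    ForwardingEngine.BasicRouting     -> MTreeRoutingEngine.BasicRouting;
    ForwardingEngine.CostBasedRouting -> MTreeRoutingEngine.CostBasedRouting;
    ForwardingEngine.REControl        -> MTreeRoutingEngine.REControl;

    // The routing engine relies on the link estimator for neighbor qualities.
    MTreeRoutingEngine.LinkEstimator  -> LinkEstimator.LinkEstimator;
    MTreeRoutingEngine.NeighborTable  -> LinkEstimator.NeighborTable;

    // SendMsg/ReceiveMsg wiring to the AM stack is omitted in this sketch.
  }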
