NETRPC
======

What you find here makes RTAI a distributed system, for both kernel and user space applications. The use of "net" in front of "rpc", for the name of this directory's main module, is due to the existence of rpc functions internal to all RTAI schedulers, the concept of intertask Remote Procedure Call (RPC) messaging having been strongly supported within RTAI since its inception. Clearly nothing new: synchronous intertask message passing is an old concept and the basis of microkernels, distributed or not. So this implementation is nothing but the RTAI specific way of doing it.

The basic specifications for making it easy are:

- use just the name of any already available function, substituting "rt_..."
  with "RT_...",
- add two more initial arguments, i.e. the "node" of, and the "port" on, the
  remote machine that will execute the "rt_..." functions that became
  "RT_...".

e.g.:           rt_mbx_send(mbx, msg, msglen);
becomes:        RT_mbx_send(node, port, mbx, msg, msglen);

Using 0 for "node" forces a local execution of the corresponding "rt_..." function. In this way you revert to a standard local application automatically. "Port" is not used with a zero "node" but should be kept to a meaningful value anyhow, to allow remote calls when "node" is set to point to a valid remote machine again. So using the "RT_..." naming with a null "node" variable keeps a local application ready to become distributed, once you reassign a real "node". The only added cost is a longer argument list and the execution of a C "if" statement. The "node" and "port" arguments will receive further attention shortly, i.e. when we explain how to request "port"s on "node"s.

Naturally you can also run a distributed application wholly in local mode by just setting all your "node"s to the local host (dotted decimal notation: 127.0.0.1); less efficient than using a null "node" but useful for testing a networked execution on a single machine.

The only change in the formal arguments, with respect to the usual local "rt_..." usage, is related to any argument associated with a time value. It must always be expressed in nanoseconds, instead of in the RTAI standard internal timer units count. In this way you are not required to know the frequency of any timer running on a remote machine and can be sure that correct timing is guaranteed anyhow, irrespective of the node on which the call will run.

These are, more or less, the only things to know to make any RTAI application distributed.
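
As a minimal illustration of the rule above, the user space (LXRT) sketch below turns a local mailbox send into its distributed counterpart. The headers, the mailbox, the node value and the timed variant at the end (derived from the renaming rule, not quoted from the sources) are assumptions for the example, and the "port" is supposed to have been requested as described further below.

        #include <rtai_lxrt.h>
        #include <rtai_netrpc.h>
        #include <rtai_mbx.h>

        void send_sample(unsigned long node, int port, MBX *mbx)
        {
                char msg[] = "hello";

                /* node == 0: executed locally, exactly as rt_mbx_send() */
                RT_mbx_send(0, port, mbx, msg, sizeof(msg));

                /* node != 0: the same call is executed on the remote machine */
                RT_mbx_send(node, port, mbx, msg, sizeof(msg));

                /* time arguments are always nanoseconds, e.g. a 1 ms timeout,
                   whatever the timer setup of the remote node */
                RT_mbx_send_timed(node, port, mbx, msg, sizeof(msg), 1000000LL);
        }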

There is however a possible usage variation, obtained by using "-port" instead of "port" in any call. In fact a "port" is a positive number, and making it negative forces an immediate return, without waiting for any result, i.e. an asynchronous rpc. It is your responsibility to ensure that an asynchronous rpc is reasonable, e.g. a "sem_signal" can be used in such a way, while a "sem_wait" will likely cause missing a possible synchronization. Notice that the remote support stub will answer anyhow, but the returned values will be discarded. So it should never be used if you want to know the result of your remote call. Nonetheless, as will be explained later on, a mechanism is provided to allow recovering the results of any asynchronous call left behind.

In any case it is important to know that a further asynchronous call will not be sent if the previous one has not been answered yet. "Net_rpc" assumes no buffering at the remote node, i.e. one can be sure the remote port is ready to accept calls only if its stub has answered any previous service request. So, if the previous call has not been answered yet, launching an asynchronous call will cause an immediate return without any action being taken. In relation to this it is important to note that the overall maximum message length allowed, both in sending and in receiving, is MAX_MSG_SIZE, found in netrpc.h, actually 1500 bytes, i.e. the number of octets defined by the Linux ETH_DATA_LEN.

The user can check whether the port has answered an asynchronous RPC by calling:

        void *rt_waiting_return(unsigned long node, int port);

a non null return value implying that "port" at "node" is waiting for an rpc return. Remote calls that make sense being used asynchronously can be used in interrupt handlers also; see the directory "resumefromintr" for an example.

Before doing anything remotely, a task working on remote services has to ask for a "port" at its remote peer. Such a port is obtained by a call of the type:

        myport = rt_request_soft_port(node);

for a soft real time service, or:

        myport = rt_request_hard_port(node);

for a hard real time service, and released by:

        rt_release(node, myport);

when the "port" is needed no more. A task cannot ask for more than one "port"; multiple "port" requests from a task will always return the same "port". The assigned "port" will be an integer >= MAX_STUBS, defined in netrpc.h; it is related internally to a socket and port, but its value has no reference whatsoever to either of them.

Nonetheless a task can create and access more "port"s by using:

        anotherport = rt_request_soft_port(node, id);
        anotherport = rt_request_hard_port(node, id);

"id" being a unique unsigned long identifier the task must have agreed upon with any other local application. Releasing a "port" defined with a specific "id" makes no difference with respect to the simpler request above, i.e.:

        rt_release(node, anotherport);

must be used anyhow.

Multiple ports per task can be used by an application to implement some sort of buffered requests, whose detailed implementation is left to the user's taste. So even if NETRPC does not provide fully asynchronous buffered APIs, it makes available the tools for a relatively easy implementation of such an rpc policy. It is nonetheless important to remark that the actual implementation of "netrpc" without buffered requests is a design choice. Buffering would be trivial to implement, by using RTAI mailboxes in place of semaphores to synchronize remote stubs, but real time requires synchronization, to let one know whether everything is marching appropriately. Too many buffered things are doomed to be related to missing deadlines.
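
The sketch below puts the pieces above together: it requests a "port", issues a synchronous and an asynchronous remote semaphore signal, polls for the pending answer and releases the "port". The call forms are those quoted in this README; the node value, the remote semaphore and the failure check on the port request are illustrative assumptions, and the usual LXRT task setup is omitted.

        #include <rtai_lxrt.h>
        #include <rtai_netrpc.h>
        #include <rtai_sem.h>

        int signal_remote(unsigned long node, SEM *sem)
        {
                int port;

                /* ask the remote peer for a port (a non positive value is
                   assumed here to mean failure) */
                if ((port = rt_request_soft_port(node)) <= 0) {
                        return -1;
                }

                /* synchronous remote call: waits for the remote stub's answer */
                RT_sem_signal(node, port, sem);

                /* asynchronous remote call: the negative port makes it return
                   immediately, the stub's answer will be discarded */
                RT_sem_signal(node, -port, sem);

                /* a further async call would not be sent while this one is
                   still unanswered, so poll until the stub has answered */
                while (rt_waiting_return(node, port)) {
                        rt_sleep(nano2count(100000));   /* back off 100 us */
                }

                rt_release(node, port);
                return 0;
        }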

In requesting a "port" there is also the possibility of providing a mailbox to recover the results of asynchronous calls. So you can use either:

        myport = rt_request_soft_port(node, mbx);
        myport = rt_request_hard_port(node, mbx);

or:

        myport = rt_request_soft_port(node, id, mbx);
        myport = rt_request_hard_port(node, id, mbx);

"mbx" being a pointer to a mailbox. When a new rpc is made and the result of a previous asynchronous call is pending, that result will be sent to such a mailbox. A typical use of this possibility consists in providing a server task that reads the mailbox and uses the returned values in a manner agreed upon with the original RPCs sender.

A more direct way to ensure a pending return is sent to the mailbox is to use:

        int rt_sync_net_rpc(unsigned long node, int port);

which forces a synchronization, thus sending any pending return to a mailbox, if one was made available at the port request; it always returns 1. Such a function allows recovering a pending return immediately. It will likely be used in combination with "rt_waiting_return". A helper function is provided to obtain any result queued in the mailbox:

        int rt_get_net_rpc_ret(
                MBX *mbx,
                unsigned long long *retval,
                void *msg1,
                int *msglen1,
                void *msg2,
                int *msglen2,
                RTIME timeout,
                int type
        );

mbx:              The mailbox.

retval:           The value returned by the async call, if any. A double long
                  can contain any value returned by RTAI functions; it is up
                  to you to use it properly.

msg1 and msg2:    Buffers for possibly returned messages.

msglen1, msglen2: The lengths of msg1 and msg2; the helper function returns
                  the actual lengths of the messages, truncating them to
                  msglen1 and msglen2 if their buffers are not large enough
                  to contain the whole returned messages.

timeout:          Any timeout value to be used, if needed by the mailbox
                  receive function selected by "type".

type:             The mailbox receive function to be used, i.e.:
                  NET_MBX_RECEIVE for rt_mbx_receive,
                  NET_MBX_RECEIVE_WP for rt_mbx_receive_wp,
                  NET_MBX_RECEIVE_IF for rt_mbx_receive_if,
                  NET_MBX_RECEIVE_UNTIL for rt_mbx_receive_until and
                  NET_MBX_RECEIVE_TIMED for rt_mbx_receive_timed.

This function is just a helper; a user can see it as a suggestion for his/her own optimised implementation, e.g. getting just the returned value or a single message, because s/he knows those are the only returned values. See the test "uasync" for a specific example.

It is stressed again that even such an asynchronous form of "netrpc" does not queue messages; as said, it allows just one effective async call, but it can help in increasing application parallelism without losing determinism.
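
As a complement to the "uasync" test, the sketch below shows one way of recovering an asynchronous return through a mailbox: the "port" is requested together with a mailbox, an async call is fired, any pending return is flushed with rt_sync_net_rpc and then fetched with the helper. The call forms follow this README; the node value, the remote semaphore and the buffer sizes (which must not exceed MAX_MSG_SIZE) are illustrative assumptions.

        #include <rtai_lxrt.h>
        #include <rtai_netrpc.h>
        #include <rtai_mbx.h>
        #include <rtai_sem.h>

        void async_with_return(unsigned long node, SEM *sem, MBX *mbx)
        {
                unsigned long long retval;
                char msg1[256], msg2[256];
                int len1 = sizeof(msg1), len2 = sizeof(msg2);
                int port;

                /* the mailbox passed here will collect pending async returns */
                port = rt_request_soft_port(node, mbx);

                /* asynchronous call: negative port, no wait for the answer */
                RT_sem_signal(node, -port, sem);

                /* push any pending return into the mailbox right away ... */
                rt_sync_net_rpc(node, port);

                /* ... and fetch it, blocking on the mailbox (timeout unused
                   with NET_MBX_RECEIVE/rt_mbx_receive) */
                rt_get_net_rpc_ret(mbx, &retval, msg1, &len1, msg2, &len2,
                                   0, NET_MBX_RECEIVE);

                rt_release(node, port);
        }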

"Port"s requests cause a task rescheduling, to wait for the answer, and have