📄 pod.c

📁 Source code of rtai-3.1-test3 (Real-Time Application Interface)
💻 C
📖 Page 1 of 5
	   (obviously distinct from the running thread) safely. */
	thread = link2thread(thread->rlink.plink.next,rlink);
	nkpod->schedhook(thread,XNREADY);
	}
}

/*!
 * \fn int xnpod_init_thread(xnthread_t *thread,
                             const char *name,
                             int prio,
                             xnflags_t flags,
                             unsigned stacksize,
                             void *adcookie,
                             unsigned magic);
 * \brief Initialize a new thread.
 *
 * Initializes a new thread attached to the active pod. The thread is
 * left in an innocuous mode until it is actually started by
 * xnpod_start_thread().
 *
 * @param thread The address of a thread descriptor Xenomai will use
 * to store the thread-specific data.  This descriptor must always be
 * valid while the thread is active, therefore it must be allocated in
 * permanent memory.
 *
 * @param name An ASCII string standing for the symbolic name of the
 * thread. This name is copied to a safe place into the thread
 * descriptor. This name might be used in various situations by the
 * nanokernel for issuing human-readable diagnostic messages, so it is
 * usually a good idea to provide a sensible value here. The MVM layer
 * even uses this name intensively to identify threads in the
 * debugging GUI it provides. However, passing NULL is always legal
 * and means "anonymous".
 *
 * @param prio The base priority of the new thread. This value must
 * range from [minpri .. maxpri] (inclusive) as specified when calling
 * the xnpod_init() service.
 *
 * @param flags A set of creation flags affecting the operation. The
 * only defined flag available to the upper interfaces is XNFPU
 * (enable FPU), which tells the nanokernel that the new thread will
 * use the floating-point unit. In such a case, the nanokernel will
 * handle the FPU context save/restore ops upon thread switches at the
 * expense of a few additional cycles per context switch. By default,
 * a thread is not expected to use the FPU. This flag is simply
 * ignored when Xenomai runs on behalf of a userspace-based real-time
 * control layer since the FPU management is always active if
 * present.
 *
 * @param stacksize The size of the stack (in bytes) for the new
 * thread. If zero is passed, the nanokernel will use a reasonable
 * pre-defined size depending on the underlying real-time control
 * layer.
 *
 * @param adcookie An architecture-dependent cookie. The caller should
 * pass the XNARCH_THREAD_COOKIE value defined for all real-time
 * control layers in their respective interface file. This
 * system-defined cookie must not be confused with the user-defined
 * thread cookie passed to the xnpod_start_thread() service.
 *
 * @param magic A magic cookie each skin can define to unambiguously
 * identify threads created in their realm. This value is copied as-is
 * to the "magic" field of the thread struct. 0 is a conventional
 * value for "no magic".
 *
 * @return XN_OK is returned on success. Otherwise, one of the
 * following error codes indicates the cause of the failure:
 *
 *         - XNERR_PARAM is returned if @a flags has invalid bits set.
 *
 *         - XNERR_NOMEM is returned if @a thread is NULL or not
 *         enough memory is available from the system heap to create
 *         the new thread's stack.
 *
 * Side-effect: This routine does not call the rescheduling procedure.
 *
 * Context: This routine must be called on behalf of a thread context.
 */

int xnpod_init_thread (xnthread_t *thread,
		       const char *name,
		       int prio,
		       xnflags_t flags,
		       unsigned stacksize,
		       void *adcookie,
		       unsigned magic)
{
    spl_t s;
    int err;

    if (!thread)
	/* Allow the caller to bypass parametrical checks... */
	return XNERR_NOMEM;

    if (flags & ~(XNFPU|XNSHADOW|XNISVC))
	return XNERR_PARAM;

    if (stacksize == 0)
	stacksize = XNARCH_THREAD_STACKSZ;

    err = xnthread_init(thread,name,prio,flags,stacksize,adcookie,magic);

    if (err)
	return err;

    splhigh(s);
    appendq(&nkpod->threadq,&thread->glink);
    xnpod_suspend_thread(thread,XNDORMANT,XN_INFINITE,NULL,NULL);
    splexit(s);

    return XN_OK;
}
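/*
 * A minimal usage sketch (illustrative only, not part of the original
 * pod.c): how a skin might create a dormant thread with the service
 * above.  The descriptor lives in static storage so it stays valid for
 * the whole lifetime of the thread, as required; "demo" and the
 * priority value 10 are arbitrary example choices and must respect the
 * [minpri .. maxpri] range given to xnpod_init().
 */
static xnthread_t demo_thread;	/* hypothetical descriptor, permanent storage */

static int demo_create_thread (void)
{
    /* No FPU usage, default stack size, no skin magic.  The thread is
       left dormant until xnpod_start_thread() is called on it. */
    return xnpod_init_thread(&demo_thread,
			     "demo",			/* symbolic name */
			     10,			/* base priority (example) */
			     0,				/* no creation flags */
			     0,				/* default stack size */
			     XNARCH_THREAD_COOKIE,	/* arch-dependent cookie */
			     0);			/* no magic */
}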
/*!
 * \fn void xnpod_start_thread(xnthread_t *thread,
                               xnflags_t mode,
                               int imask,
                               void (*entry)(void *cookie),
                               void *cookie);
 * \brief Initial start of a newly created thread.
 *
 * Starts a (newly) created thread, scheduling it for the first
 * time. This call releases the target thread from the XNDORMANT
 * state. This service also sets the initial mode and interrupt mask
 * for the new thread.
 *
 * @param thread The descriptor address of the affected thread which
 * must have been previously initialized by the xnpod_init_thread()
 * service.
 *
 * @param mode The initial thread mode. The following flags can be
 * part of this bitmask, each of them affecting the nanokernel
 * behaviour regarding the started thread:
 *
 * - XNLOCK causes the thread to lock the scheduler when it starts.
 * The target thread will have to call the xnpod_unlock_sched()
 * service to unlock the scheduler.
 *
 * - XNRRB causes the thread to be marked as undergoing the
 * round-robin scheduling policy at startup.  The contents of the
 * thread.rrperiod field determine the time quantum (in ticks)
 * allowed for its next slice.
 *
 * - XNASDI disables the asynchronous signal handling for this thread.
 * See xnpod_schedule() for more on this.
 *
 * - XNSUSP makes the thread start in a suspended state. In such a
 * case, the thread will have to be explicitly resumed using the
 * xnpod_resume_thread() service for its execution to actually begin.
 *
 * @param imask The interrupt mask that should be asserted when the
 * thread starts. The processor interrupt state will be set to the
 * given value when the thread starts running. The interpretation of
 * this value might be different across real-time layers, but a
 * non-zero value should always mark an interrupt masking in effect
 * (e.g. cli()). Conversely, a zero value should always mark a fully
 * preemptible state regarding interrupts (i.e. sti()).
 *
 * @param entry The address of the thread's body routine. In other
 * words, it is the thread entry point.
 *
 * @param cookie A user-defined opaque cookie the nanokernel will pass
 * to the emerging thread as the sole argument of its entry point.
 *
 * The START hooks are called on behalf of the calling context (if
 * any).
 *
 * Side-effect: This routine calls the rescheduling procedure.
 *
 * Context: This routine must be called on behalf of a thread context.
 */

void xnpod_start_thread (xnthread_t *thread,
			 xnflags_t mode,
			 int imask,
			 void (*entry)(void *cookie),
			 void *cookie)
{
    spl_t s;

    if (!testbits(thread->status,XNDORMANT))
	return;

    splhigh(s);

    if (testbits(thread->status,XNSTARTED))
	{
	splexit(s);
	return;
	}

    /* Setup the TCB and initial stack frame */

    xnarch_init_tcb(xnthread_archtcb(thread),
		    thread->adcookie);

    xnarch_init_thread(xnthread_archtcb(thread),
		       entry,
		       cookie,
		       imask,
		       thread,
		       thread->name);

    setbits(thread->status,(mode & (XNTHREAD_MODE_BITS|XNSUSP))|XNSTARTED);
    thread->imask = imask;
    thread->imode = (mode & XNTHREAD_MODE_BITS);
    thread->entry = entry;
    thread->cookie = cookie;
    thread->stime = xnarch_get_cpu_time();

    if (testbits(thread->status,XNRRB))
	thread->rrcredit = thread->rrperiod;

    xnpod_resume_thread(thread,XNDORMANT);

    splexit(s);

    if (!(mode & XNSUSP) && nkpod->schedhook)
	nkpod->schedhook(thread,XNREADY);

    if (countq(&nkpod->tstartq) > 0 &&
	!testbits(thread->status,XNTHREAD_SYSTEM_BITS))
	xnpod_fire_callouts(&nkpod->tstartq,thread);

    xnpod_schedule(NULL);
}
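/*
 * A minimal usage sketch (illustrative only, not part of the original
 * pod.c): releasing the dormant thread initialized in the previous
 * sketch.  demo_body and demo_thread are hypothetical names; the
 * cookie is handed verbatim to the entry point, and imask == 0 starts
 * the thread with interrupts fully enabled.
 */
static void demo_body (void *cookie)
{
    /* cookie is the last argument passed to xnpod_start_thread(). */
    for (;;)
	{
	/* The thread's periodic or event-driven work would go here. */
	}
}

static void demo_launch_thread (void)
{
    xnpod_start_thread(&demo_thread,
		       0,		/* no special mode bits */
		       0,		/* interrupts unmasked at startup */
		       &demo_body,
		       NULL);		/* no cookie */
}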
/*!
 * \fn void xnpod_restart_thread(xnthread_t *thread, xnmutex_t *imutex);
 * \brief Restart a thread.
 *
 * Restarts a previously started thread.  The thread is first
 * terminated then respawned using the same information that prevailed
 * when it was first started, including the mode bits and interrupt
 * mask initially passed to the xnpod_start_thread() service. As a
 * consequence of this call, the thread entry point is rerun.
 *
 * @param thread The descriptor address of the affected thread which
 * must have been previously started by the xnpod_start_thread()
 * service.
 *
 * @param imutex The address of an interface mutex currently held by
 * the caller which will be subject to a lock-breaking preemption if
 * the current thread restarts itself.  Passing NULL when no
 * lock-breaking preemption is required is valid. See xnpod_schedule()
 * for more on lock-breaking preemption points.
 *
 * Self-restarting a thread is allowed. However, restarting the root
 * thread is not.
 *
 * Side-effect: This routine calls the rescheduling procedure.
 *
 * Context: This routine must be called on behalf of a thread context.
 */

void xnpod_restart_thread (xnthread_t *thread, xnmutex_t *imutex)
{
    atomic_counter_t imutexval;
    int simutex = 0;
    spl_t s;

    if (!testbits(thread->status,XNSTARTED))
	return; /* Not started yet or not restartable. */

    if (testbits(thread->status,XNROOT|XNSHADOW))
	xnpod_fatal("attempt to restart a user-space thread");

    splhigh(s);

    /* Break the thread out of any wait it is currently in. */
    xnpod_unblock_thread(thread);

    /* Release all ownerships held by the thread on synch. objects */
    xnsynch_release_all_ownerships(thread);

    /* If the task has been explicitly suspended, resume it. */
    if (testbits(thread->status,XNSUSP))
	xnpod_resume_thread(thread,XNSUSP);

    /* Reset modebits. */
    clrbits(thread->status,XNTHREAD_MODE_BITS);
    setbits(thread->status,thread->imode);

    /* Reset task priority to the initial one. */
    thread->cprio = thread->iprio;
    thread->bprio = thread->iprio;

    /* Clear pending signals. */
    thread->signals = 0;

    if (thread == xnpod_current_sched()->runthread)
	{
	/* Clear all sched locks held by the restarted thread. */
	if (testbits(thread->status,XNLOCK))
	    {
	    clrbits(thread->status,XNLOCK);
	    xnarch_atomic_set(&nkpod->schedlck,0);
	    }

	setbits(thread->status,XNRESTART);
	}

    if (imutex)
	{
	simutex = xnmutex_clear_lock(imutex,&imutexval);

	if (simutex < 0)
	    xnpod_schedule_runnable(xnpod_current_thread(),XNPOD_SCHEDLIFO);
	}

    /* Reset the initial stack frame. */
    xnarch_init_thread(xnthread_archtcb(thread),
		       thread->entry,
		       thread->cookie,
		       thread->imask,
		       thread,
		       thread->name);

    /* Running this code tells us that xnpod_restart_thread() was not
       self-directed, so we must reschedule now since our priority may
       be lower than the restarted thread's priority, and re-acquire
       the interface mutex as needed. */

    xnpod_schedule(NULL);

    if (simutex)
	xnmutex_set_lock(imutex,&imutexval);

    splexit(s);
}

/*!
 * \fn xnflags_t xnpod_set_thread_mode(xnthread_t *thread,
                                       xnflags_t clrmask,
                                       xnflags_t setmask);
 * \brief Change a thread's control mode.
 *
 * Change the control mode of a given thread. The control mode affects
 * the behaviour of the nanokernel regarding the specified thread.
 *
 * @param thread The descriptor address of the affected thread.
 *
 * @param clrmask Clears the corresponding bits from the control field
 * before setmask is applied. The scheduler lock held by the current
 * thread can be forcibly released by passing the XNLOCK bit in this
 * mask. In this case, the lock nesting count is also reset to zero.
 *
 * @param setmask The new thread mode. The following flags can be part
 * of this bitmask, each of them affecting the nanokernel behaviour
 * regarding the thread:
 *
 * - XNLOCK causes the thread to lock the scheduler.  The target
 * thread will have to call the xnpod_unlock_sched() service to unlock
 * the scheduler or clear the XNLOCK bit forcibly using this service.
 *
 * - XNRRB causes the thread to be marked as undergoing the
 * round-robin scheduling policy.  The contents of the thread.rrperiod
 * field determine the time quantum (in ticks) allowed for its
 * next slice. If the thread is already undergoing the round-robin
 * scheduling policy at the time this service is called, the time
 * quantum remains unchanged.
 *
 * - XNASDI disables the asynchronous signal handling for this thread.
 * See xnpod_schedule() for more on this.
 *
 * Side-effect: This routine does not call the rescheduling procedure.
 *
 * Context: This routine can be called on behalf of a thread or IST
 * context.
 */

xnflags_t xnpod_set_thread_mode (xnthread_t *thread,
				 xnflags_t clrmask,
				 xnflags_t setmask)
{
    xnflags_t oldmode;
    spl_t s;

    splhigh(s);

    oldmode = (thread->status & XNTHREAD_MODE_BITS);
    clrbits(thread->status,clrmask & XNTHREAD_MODE_BITS);
    setbits(thread->status,setmask & XNTHREAD_MODE_BITS);

    if (!(oldmode & XNLOCK))
	{
	if (testbits(thread->status,XNLOCK))
	    /* Actually grab the scheduler lock. */
	    xnpod_lock_sched();
	}
    else if (!testbits(thread->status,XNLOCK))
	xnarch_atomic_set(&nkpod->schedlck,0);

    if (!(oldmode & XNRRB) && testbits(thread->status,XNRRB))
	thread->rrcredit = thread->rrperiod;

    splexit(s);

    return oldmode;
}
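/*
 * A minimal usage sketch (illustrative only, not part of the original
 * pod.c): a supervisory thread rerunning a misbehaving worker from its
 * original entry point.  demo_thread is the hypothetical descriptor
 * from the earlier sketches; passing a NULL interface mutex is valid
 * since no lock-breaking preemption is needed here.
 */
static void demo_recover_worker (void)
{
    /* The worker must have been started once already; restarting the
       root thread or a user-space shadow is not allowed. */
    xnpod_restart_thread(&demo_thread,NULL);
}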
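/*
 * A minimal usage sketch (illustrative only, not part of the original
 * pod.c): switching a thread to the round-robin policy at run time.
 * Filling thread->rrperiod directly is an assumption drawn from the
 * documentation above; the quantum of 10 ticks is an arbitrary example.
 */
static void demo_enable_round_robin (xnthread_t *thread)
{
    xnflags_t oldmode;

    thread->rrperiod = 10;	/* example time quantum, in ticks (assumed to be set by the skin) */

    /* Clear no bit, set XNRRB; the previous mode bits are returned. */
    oldmode = xnpod_set_thread_mode(thread,0,XNRRB);

    /* A later xnpod_set_thread_mode(thread,XNRRB,0) call would switch
       the thread back to plain priority-based scheduling. */
    (void)oldmode;
}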
/*!
 * \fn void xnpod_delete_thread(xnthread_t *thread,
                                xnmutex_t *imutex)
 * \brief Delete a thread.
 *
 * Terminates a thread and releases all the nanokernel resources it
 * currently holds. A thread exists in the system from the time
 * xnpod_init_thread() has been called to create it, so this service
 * must be called in order to destroy it afterwards.
 *
 * @param thread The descriptor address of the terminated thread.
 *
 * @param imutex The address of an interface mutex currently held by
 * the caller which will be subject to a lock-breaking preemption if
 * the current thread is deleted. This parameter only makes sense for
 * self-deleting threads. Passing NULL when no lock-breaking
 * preemption is required is valid. See xnpod_schedule() for more on
 * lock-breaking preemption points.
 *
 * The DELETE hooks are called on behalf of the calling context (if
 * any). The information stored in the thread control block remains
 * valid until all hooks have been called.
 *
 * Self-terminating a thread is allowed. In such a case, this service
 * does not return to the caller.
 *
 * Side-effect: This routine calls the rescheduling procedure.
