
📄 pod.c

📁 Source code of rtai-3.1-test3 (Real-Time Application Interface)
💻 C
📖 Page 1 of 5
 * point it becomes eligible anew for scheduling.
 *
 * @param thread The descriptor address of the unblocked thread.
 *
 * This call neither releases the thread from the XNSUSP, XNRELAX,
 * XNFROZEN nor the XNDORMANT suspensive conditions.
 *
 * When the thread resumes execution, the XNBREAK bit is set in the
 * unblocked thread's status mask. Unblocking a non-blocked thread is
 * perfectly harmless.
 *
 * Side-effect: This service does not call the rescheduling procedure
 * but may affect the ready queue.
 *
 * Context: This routine can be called on behalf of a thread or IST
 * context.
 */

void xnpod_unblock_thread (xnthread_t *thread)

{
    /* Attempt to abort an ongoing "counted delay" wait. If this
       state is due to an alarm that has been armed to limit the
       sleeping thread's waiting time while it pends for a resource,
       the corresponding XNPEND state will be cleared by
       xnpod_resume_thread() in the same move. Otherwise, this call
       may abort an ongoing infinite wait for a resource (if
       any). */

    if (testbits(thread->status,XNDELAY))
        xnpod_resume_thread(thread,XNDELAY);
    else if (testbits(thread->status,XNPEND))
        xnpod_resume_thread(thread,XNPEND);

    setbits(thread->status,XNBREAK);
}

/*!
 * \fn void xnpod_renice_thread(xnthread_t *thread, int prio);
 * \brief Change the base priority of a thread.
 *
 * Changes the base priority of a thread. If the XNDREORD flag has
 * not been passed to xnpod_init() and the reniced thread is currently
 * blocked waiting in priority-pending mode (XNSYNCH_PRIO) for a
 * synchronization object to be signaled, the nanokernel will attempt
 * to reorder the object's pend queue so that it reflects the new
 * sleeper's priority.
 *
 * @param thread The descriptor address of the affected thread.
 *
 * @param prio The new thread priority.
 *
 * It is absolutely required to use this service to change a thread's
 * priority, in order to have all the needed housekeeping chores
 * correctly performed. I.e. do *not* change the thread.cprio field by
 * hand, unless the thread is known to be in an innocuous state
 * (e.g. dormant).
 *
 * Side-effects:
 *
 * - This service does not call the rescheduling procedure but may
 * affect the ready queue.
 *
 * - Assigning the same priority to a running or ready thread moves it
 * to the end of the ready queue, and thus might cause a manual
 * round-robin effect.
 *
 * - If the reniced thread is a user-space shadow, the request is
 * propagated to the mated Linux task.
 *
 * Context: This routine can be called on behalf of a thread or IST
 * context.
 */

void xnpod_renice_thread (xnthread_t *thread, int prio)

{
    xnpod_renice_thread_inner(thread,prio,1);
}
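/*
 * Illustrative usage sketch (not part of pod.c): neither
 * xnpod_unblock_thread() nor xnpod_renice_thread() calls the
 * rescheduling procedure by itself, so a typical caller applies the
 * state changes and then invokes xnpod_schedule() explicitly. The
 * helper name and priority value below are hypothetical.
 */
#if 0
static void unblock_and_demote (xnthread_t *worker)
{
    xnpod_unblock_thread(worker);   /* Sets XNBREAK; harmless if not blocked. */
    xnpod_renice_thread(worker,10); /* New base priority (hypothetical value). */
    xnpod_schedule(NULL);           /* Apply the changes; no lock-breaking needed. */
}
#endif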
void xnpod_renice_thread_inner (xnthread_t *thread, int prio, int propagate)

{
    int oldprio;
    spl_t s;

    splhigh(s);

    oldprio = thread->cprio;

    /* Change the thread priority, taking into account an ongoing PIP
       boost. */

    thread->bprio = prio;

    /* Since we don't want to mess with the priority inheritance
       scheme, we must take care of never lowering the target thread's
       priority level if it undergoes a PIP boost. */

    if (!testbits(thread->status,XNBOOST) ||
        xnpod_priocompare(prio,oldprio) > 0)
        {
        thread->cprio = prio;

        if (prio != oldprio &&
            thread->wchan != NULL &&
            !testbits(nkpod->status,XNDREORD))
            /* Renice the pending order of the thread inside its wait
               queue, unless this behaviour has been explicitly
               disabled at the pod's level (XNDREORD), or the requested
               priority has not changed, thus preventing spurious
               round-robin effects. */
            xnsynch_renice_sleeper(thread);

        if (!testbits(thread->status,XNTHREAD_BLOCK_BITS|XNLOCK))
            /* Call xnpod_resume_thread() in order to have the XNREADY
               bit set, *except* if the thread holds the scheduler
               lock, which prevents its preemption. */
            xnpod_resume_thread(thread,0);
        }

    splexit(s);

#ifdef __KERNEL__
    if (propagate && testbits(thread->status,XNSHADOW))
        xnshadow_renice(thread);
#endif /* __KERNEL__ */
}

/*!
 * \fn void xnpod_rotate_readyq(int prio);
 * \brief Rotate a priority level in the ready queue.
 *
 * The thread at the head of the ready queue of the given priority
 * level is moved to the end of this queue. Therefore, the execution
 * of threads having the same priority is switched. Round-robin
 * scheduling policies may be implemented by periodically issuing this
 * call. It should be noted that the nanokernel already provides a
 * built-in round-robin mode though (see xnpod_activate_rr()).
 *
 * @param prio The priority level to rotate. If XNPOD_RUNPRI is given,
 * the running thread's priority is used to rotate the queue.
 *
 * The priority level which is considered is always the base priority
 * of a thread, not the possibly PIP-boosted current priority
 * value. Specifying a priority level with no thread on it is harmless,
 * and will simply have no effect.
 *
 * Side-effect: This service does not call the rescheduling procedure
 * but affects the ready queue.
 *
 * Context: This routine can be called on behalf of a thread or IST
 * context.
 */

void xnpod_rotate_readyq (int prio)

{
    xnpholder_t *pholder;
    xnsched_t *sched;
    spl_t s;

    sched = xnpod_current_sched();

    if (countpq(&sched->readyq) == 0)
        return; /* Nobody is ready. */

    splhigh(s);

    /* There is _always_ a regular thread, ultimately the root
       one. Use the base priority, not the priority boost. */

    if (prio == XNPOD_RUNPRI ||
        prio == xnthread_base_priority(sched->usrthread))
        xnpod_resume_thread(sched->usrthread,0);
    else
        {
        pholder = findpqh(&sched->readyq,prio);

        if (pholder)
            /* This call performs the actual rotation. */
            xnpod_resume_thread(link2thread(pholder,rlink),0);
        }

    splexit(s);
}
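/*
 * Illustrative usage sketch (not part of pod.c): a manual round-robin
 * policy could be built by rotating the running priority level from a
 * periodic handler, then calling the rescheduling procedure, since
 * xnpod_rotate_readyq() does not reschedule by itself. The handler
 * name below is hypothetical.
 */
#if 0
static void manual_rr_tick (void)
{
    xnpod_rotate_readyq(XNPOD_RUNPRI); /* Rotate the running thread's level. */
    xnpod_schedule(NULL);              /* Switch to the new queue head, if any. */
}
#endif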
/*!
 * \fn void xnpod_activate_rr(xnticks_t quantum);
 * \brief Globally activate the round-robin scheduling.
 *
 * This service activates the round-robin scheduling for all threads
 * which have the XNRRB flag set in their status mask (see
 * xnpod_set_thread_mode()). Each of them will run for the given time
 * quantum, then be preempted and moved to the end of its priority
 * group in the ready queue. This process is repeated until the
 * round-robin scheduling is disabled for those threads.
 *
 * @param quantum The time credit which will be given to each
 * rr-enabled thread (in ticks).
 *
 * Side-effect: This routine does not call the rescheduling procedure.
 *
 * Context: This routine can be called on behalf of a thread or IST
 * context.
 */

void xnpod_activate_rr (xnticks_t quantum)

{
    xnholder_t *holder;
    spl_t s;

    splhigh(s);

    holder = getheadq(&nkpod->threadq);

    while (holder)
        {
        xnthread_t *thread = link2thread(holder,glink);

        if (testbits(thread->status,XNRRB))
            {
            thread->rrperiod = quantum;
            thread->rrcredit = quantum;
            }

        holder = nextq(&nkpod->threadq,holder);
        }

    splexit(s);
}

/*!
 * \fn void xnpod_deactivate_rr(void);
 * \brief Globally deactivate the round-robin scheduling.
 *
 * This service deactivates the round-robin scheduling for all threads
 * which have the XNRRB flag set in their status mask (see
 * xnpod_set_thread_mode()).
 *
 * Side-effect: This routine does not call the rescheduling procedure.
 *
 * Context: This routine can be called on behalf of a thread or IST
 * context.
 */

void xnpod_deactivate_rr (void)

{
    xnholder_t *holder;
    spl_t s;

    splhigh(s);

    holder = getheadq(&nkpod->threadq);

    while (holder)
        {
        xnthread_t *thread = link2thread(holder,glink);

        if (testbits(thread->status,XNRRB))
            thread->rrcredit = XN_INFINITE;

        holder = nextq(&nkpod->threadq,holder);
        }

    splexit(s);
}
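/*
 * Illustrative usage sketch (not part of pod.c): the built-in
 * round-robin mode affects every thread carrying the XNRRB flag (see
 * xnpod_set_thread_mode()). The helper name below is hypothetical and
 * the quantum is a hypothetical tick count.
 */
#if 0
static void toggle_time_slicing (int on)
{
    if (on)
        xnpod_activate_rr(10);  /* 10-tick quantum for each XNRRB thread. */
    else
        xnpod_deactivate_rr();  /* Credits become XN_INFINITE: no more slicing. */
}
#endif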
/*!
 * \fn void xnpod_dispatch_signals(void)
 * \brief Deliver pending asynchronous signals to the running thread -
 * INTERNAL.
 *
 * This internal routine checks for the presence of asynchronous
 * signals directed to the running thread, and attempts to start the
 * asynchronous service routine (ASR) if any.
 */

static void xnpod_dispatch_signals (void)

{
    xnthread_t *thread = xnpod_current_thread();
    xnflags_t oldmode;
    xnsigmask_t sigs;
    int asrimask;
    xnasr_t asr;
    spl_t s;

    /* Are signals pending and is the ASR enabled for this thread? */

    if (thread->signals == 0 ||
        testbits(thread->status,XNASDI) ||
        thread->asr == XNTHREAD_INVALID_ASR)
        return;

    /* Start the asynchronous service routine */
    oldmode = testbits(thread->status,XNTHREAD_MODE_BITS);
    sigs = thread->signals;
    asrimask = thread->asrimask;
    asr = thread->asr;

    /* Clear the pending signals mask since an ASR can be reentrant */
    thread->signals = 0;

    /* Reset ASR mode bits */
    clrbits(thread->status,XNTHREAD_MODE_BITS);
    setbits(thread->status,thread->asrmode);
    thread->asrlevel++;

    /* Setup the ASR interrupt mask, then fire it. */
    splhigh(s);
    xnarch_setimask(asrimask);
    asr(sigs);
    splexit(s);

    /* Reset the thread mode bits */
    thread->asrlevel--;
    clrbits(thread->status,XNTHREAD_MODE_BITS);
    setbits(thread->status,oldmode);
}

/*!
 * \fn void xnpod_welcome_thread(xnthread_t *thread);
 * \brief Thread prologue - INTERNAL.
 *
 * This internal routine is called on behalf of a (re)starting
 * thread's prologue before the user entry point is invoked. This call
 * is reserved for internal housekeeping chores and cannot be inlined.
 */

void xnpod_welcome_thread (xnthread_t *thread)

{
    if (thread->signals)
        xnpod_dispatch_signals();

    if (testbits(thread->status,XNLOCK))
        /* Actually grab the scheduler lock. */
        xnpod_lock_sched();

    if (testbits(thread->status,XNFPU))
        xnarch_init_fpu(xnthread_archtcb(thread));

    clrbits(thread->status,XNRESTART);
}

#ifdef CONFIG_RTAI_FPU_SUPPORT

/* xnpod_switch_fpu() -- Switches to the current thread's FPU
   context, saving the previous one as needed. */

void xnpod_switch_fpu (void)

{
    xnsched_t *sched = xnpod_current_sched();
    xnthread_t *runthread = sched->runthread;

    if (testbits(runthread->status,XNFPU) && sched->fpuholder != runthread)
        {
        if (sched->fpuholder == NULL ||
            xnarch_fpu_ptr(xnthread_archtcb(sched->fpuholder)) !=
            xnarch_fpu_ptr(xnthread_archtcb(runthread)))
            {
            if (sched->fpuholder)
                xnarch_save_fpu(xnthread_archtcb(sched->fpuholder));

            xnarch_restore_fpu(xnthread_archtcb(runthread));
            }

        sched->fpuholder = runthread;
        }
}

#endif /* CONFIG_RTAI_FPU_SUPPORT */

/*!
 * \fn void xnpod_schedule(xnmutex_t *imutex);
 * \brief Rescheduling procedure entry point.
 *
 * This is the central rescheduling routine which should be called to
 * validate and apply changes which have previously been made to the
 * nanokernel scheduling state, such as suspending, resuming or
 * changing the priority of threads. This call first determines if a
 * thread switch should take place, and performs it as
 * needed. xnpod_schedule() actually switches threads if:
 *
 * - the running thread has been blocked or deleted.
 * - or, the running thread now has a lower priority than the first
 *   ready-to-run thread.
 * - or, the running thread no longer leads the ready threads
 *   (round-robin).
 * - or, a real-time thread became ready to run, ending the
 *   scheduler idle state (i.e. the root thread was
 *   running so far).
 *
 * @param imutex The address of an interface mutex currently held by
 * the caller which will be subject to a lock-breaking preemption
 * before the current thread is actually switched out. The
 * corresponding kernel mutex will be automatically reacquired by the
 * nanokernel when the thread is eventually switched in again, before
 * xnpod_schedule() returns to its caller. Passing NULL when no
 * lock-breaking preemption is required is valid. See below.
 *
 * The nanokernel implements a lazy rescheduling scheme, so that most
 * of the services affecting the threads' state MUST be followed by a
 * call to the rescheduling procedure for the new scheduling state to
 * be applied. In other words, multiple changes to the scheduler state
 * can be done in a row, waking threads up, blocking others, without
 * being immediately translated into the corresponding context
 * switches, as would be necessary if, for instance, a thread with a
 * higher priority than the current one became runnable. When all
 * changes have been applied, the rescheduling procedure is then
 * called to consider those changes, and possibly replace the current
 * thread by another one.
 *
 * As a notable exception to the previous principle however, every
 * action which ends up suspending or deleting the current thread
 * begets an immediate call to the rescheduling procedure on behalf of
 * the service causing the state transition. For instance,
 * self-suspension, self-destruction, or sleeping on a synchronization
 * object automatically leads to a call to the rescheduling procedure,
 * therefore the caller does not need to explicitly issue
 * xnpod_schedule() after such operations.
 *
 * Lock-breaking preemption is a means by which a thread that holds a
 * nanokernel mutex (i.e. xnmutex_t) can rely on xnpod_schedule() to
