/* tasklib.c */
STATUS taskDestroy
    (
    int		tid,		/* task ID of task to delete */
    BOOL	dealloc,	/* deallocate associated memory */
    int		timeout,	/* time willing to wait */
    BOOL	forceDestroy	/* force deletion if protected */
    )
    {
    FAST int	  ix;		/* delete hook index */
    FAST WIND_TCB *pTcb;	/* convenient pointer to WIND_TCB */
    FAST int	  lock;		/* to lock interrupts */
    int		  status;	/* windDelete return status */

    if (INT_RESTRICT () != OK)			/* no ISR use */
	return (ERROR);

    if (tid == 0)
	pTcb = taskIdCurrent;			/* suicide */
    else
	pTcb = (WIND_TCB *) tid;		/* convenient pointer */

#ifdef WV_INSTRUMENTATION
    /* windview - level 1 event logging */
    EVT_OBJ_2 (TASK, pTcb, taskClassId, EVENT_TASKDESTROY, pTcb,
	       pTcb->safeCnt);
#endif

    if ((pTcb == taskIdCurrent) && (_func_excJobAdd != NULL))
	{
	/* If the exception task is available, delete the task from its
	 * context.  While suicides are supported without an exception task,
	 * it seems safer to utilize another context for deletion.
	 */

	while (pTcb->safeCnt > 0)		/* make task unsafe */
	    TASK_UNSAFE ();

	_func_excJobAdd (taskDestroy, (int)pTcb, dealloc, NO_WAIT, FALSE);

	FOREVER
	    taskSuspend (0);			/* wait to die */
	}

again:
    lock = intLock ();				/* LOCK INTERRUPTS */

    if (TASK_ID_VERIFY (pTcb) != OK)		/* valid task ID? */
	{
	intUnlock (lock);			/* UNLOCK INTERRUPTS */
	return (ERROR);
	}

    /*
     * Mask all signals of pTcb (may be suicide).
     * This is the same as calling sigfillset(&pTcb->pSignalInfo->sigt_blocked)
     * without the call to sigLib.
     */

    if (pTcb->pSignalInfo != NULL)
	pTcb->pSignalInfo->sigt_blocked = 0xffffffff;

    while ((pTcb->safeCnt > 0) ||
	   ((pTcb->status == WIND_READY) && (pTcb->lockCnt > 0)))
	{
	kernelState = TRUE;			/* KERNEL ENTER */
	intUnlock (lock);			/* UNLOCK INTERRUPTS */

	if ((forceDestroy) || (pTcb == taskIdCurrent))
	    {
	    pTcb->safeCnt = 0;			/* unprotect */
	    pTcb->lockCnt = 0;			/* unlock */

	    if (Q_FIRST (&pTcb->safetyQHead) != NULL)	/* flush safe queue */
		{
#ifdef WV_INSTRUMENTATION
		/* windview - level 2 event logging */
		EVT_TASK_1 (EVENT_OBJ_TASK, pTcb);
#endif
		windPendQFlush (&pTcb->safetyQHead);
		}

	    windExit ();			/* KERNEL EXIT */
	    }
	else					/* wait to destroy */
	    {
#ifdef WV_INSTRUMENTATION
	    /* windview - level 2 event logging */
	    EVT_TASK_1 (EVENT_OBJ_TASK, pTcb);	/* log event */
#endif
	    if (windPendQPut (&pTcb->safetyQHead, timeout) != OK)
		{
		windExit ();			/* KERNEL EXIT */
		return (ERROR);
		}

	    switch (windExit ())
		{
		case RESTART :
		    /* Always go back and reverify; this is because we have
		     * been running in a signal handler for who knows how long.
		     */
		    timeout = SIG_TIMEOUT_RECALC (timeout);
		    goto again;

		case ERROR :			/* timed out */
		    return (ERROR);

		default :			/* we were flushed */
		    break;
		}

	    /* All deleters of safe tasks block here.  When the safeCnt goes
	     * back to zero (or we taskUnlock) the deleters will be unblocked
	     * and the highest priority task among them will be elected to
	     * complete the deletion.  All unelected deleters will ultimately
	     * find the ID invalid, and return ERROR when they proceed from
	     * here.  The complete algorithm is summarized below.
	     */
	    }

	lock = intLock ();			/* LOCK INTERRUPTS */

	if (TASK_ID_VERIFY (pTcb) != OK)
	    {
	    intUnlock (lock);			/* UNLOCK INTERRUPTS */
	    errno = S_objLib_OBJ_DELETED;
	    return (ERROR);
	    }
	}

    /* We can now assert that one and only one task has been elected to
     * perform the actual deletion.  The elected task may even be the task
     * to be deleted, in the case of suicide.  To guarantee all other tasks
     * flushed from the safe queue receive an ERROR notification, the
     * elected task reprotects the victim from deletion.
     *
     * A task flushed from the safe queue checks if the task ID is invalid,
     * which would mean the deletion is completed.  If, on the other hand,
     * the task ID is valid, one of two possibilities exists.  One outcome is
     * that the flushed task performs the test condition in the while statement
     * above and finds the safe count equal to zero.  In this case the
     * flushed task is the elected deleter.
     *
     * The second case is that the safe count is non-zero.  The only way the
     * safe count can be non-zero after being flushed from the delete queue
     * is if the elected deleter blocked before completing the deletion, or
     * the victim managed to legitimately taskSafe() itself in one way or
     * another.  A deleter can block because, before performing the deletion
     * and hence task ID invalidation, the deleter must call the delete
     * hooks, which possibly deallocate memory, which involves taking a
     * semaphore.  So observe that nothing prevents the deleter from being
     * preempted by some other task which might also try a deletion on the
     * same victim.  We need not account for this case in any special way,
     * because that task will atomically find the ID valid but the safe count
     * non-zero, and thus block on the safe queue.  It is therefore
     * impossible for two deletions to occur on the same task being killed
     * by one or more deleters.
     *
     * We must also protect the deleter from being deleted, by utilizing
     * taskSafe().  When a safe task is deleting itself, the safe count is
     * set equal to zero, and other deleters are flushed from the safe
     * queue.  From this point on the algorithm remains the same.
     *
     * The only special problem a suicide presents is deallocating the
     * memory associated with the task.  When we free the memory, we must
     * prevent any preemption from occurring, which would otherwise open up
     * an opportunity for the memory to be allocated out from under us and
     * corrupted.  We lock preemption before the objFree() call.  The task
     * may block waiting for the partition, but once access is gained no
     * further preemption will occur.  An alternative to locking preemption
     * is to lock the partition by taking the partition's semaphore.  If the
     * partition utilizes mutex semaphores, which permit recursive access,
     * this alternative seems attractive.  However, the memory manager will
     * utilize binary semaphores when scaled down.  With a fixed-duration
     * memPartFree() algorithm, a taskLock() does not seem excessive compared
     * to a more intimate coupling with memPartLib.
     *
     * One final complication exists before task invalidation and kernel
     * queue removal can be completed.  If we enter the kernel and
     * invalidate the task ID, there is a brief opportunity for an ISR to
     * add work to the kernel work queue referencing the soon to be defunct
     * task ID.  To prevent this we lock interrupts before invalidating the
     * task ID, and then enter the kernel.  Conclusion of the algorithm
     * consists of removing the task from the kernel queues, flushing the
     * unelected deleters to receive ERROR notification, exiting the kernel,
     * and finally restoring the deleter to its original state with
     * taskUnsafe() and taskUnlock().
     */

    TASK_SAFE ();				/* protect deleter */
    pTcb->safeCnt ++;				/* reprotect victim */

    if (pTcb != taskIdCurrent)			/* if not a suicide */
	{
	kernelState = TRUE;			/* KERNEL ENTER */
	intUnlock (lock);			/* UNLOCK INTERRUPTS */
	windSuspend (pTcb);			/* suspend victim */
	windExit ();				/* KERNEL EXIT */
	}
    else
	intUnlock (lock);			/* UNLOCK INTERRUPTS */

    /* run the delete hooks in the context of the deleting task */

    for (ix = 0; ix < VX_MAX_TASK_DELETE_RTNS; ++ix)
	if (taskDeleteTable[ix] != NULL)
	    (*taskDeleteTable[ix]) (pTcb);

    TASK_LOCK ();				/* LOCK PREEMPTION */

    if ((dealloc) && (pTcb->options & VX_DEALLOC_STACK))
	{
	if (pTcb == (WIND_TCB *) rootTaskId)
	    memAddToPool (pRootMemStart, rootMemNBytes); /* add root into pool */
	else
#if (_STACK_DIR == _STACK_GROWS_DOWN)
	    /*
	     * A portion of the very top of the stack is clobbered with a
	     * FREE_BLOCK in the objFree() associated with taskDestroy().
	     * There is no adverse consequence of this, and it is thus not
	     * accounted for.
	     */
	    objFree (taskClassId, pTcb->pStackEnd);
#else	/* _STACK_GROWS_UP */
	    /*
	     * To protect a portion of the WIND_TCB that is clobbered with a
	     * FREE_BLOCK in this objFree(), we previously goosed up the base
	     * of the TCB by 16 bytes.
	     */
	    objFree (taskClassId, (char *) pTcb - 16);
#endif
	}

    lock = intLock ();				/* LOCK INTERRUPTS */

    objCoreTerminate (&pTcb->objCore);		/* INVALIDATE TASK */

    kernelState = TRUE;				/* KERNEL ENTER */
    intUnlock (lock);				/* UNLOCK INTERRUPTS */

    status = windDelete (pTcb);			/* delete task */

    /*
     * If the task being deleted is the last FP task from fppSwapHook, then
     * reset pTaskLastFpTcb.
     */
    if (pTcb == pTaskLastFpTcb)
	pTaskLastFpTcb = NULL;

    /*
     * If the task being deleted is the last DSP task from dspSwapHook, then
     * reset pTaskLastDspTcb.
     */
    if (pTcb == pTaskLastDspTcb)
	pTaskLastDspTcb = NULL;

#ifdef _WRS_ALTIVEC_SUPPORT
    /*
     * If the task being deleted is the last AltiVec task from
     * altivecSwapHook, then reset pTaskLastAltivecTcb.
     */
    if (pTcb == pTaskLastAltivecTcb)
	pTaskLastAltivecTcb = NULL;
#endif	/* _WRS_ALTIVEC_SUPPORT */

    /*
     * Now, if the task has used shared memory objects, the following
     * can happen:
     *
     * 1) windDelete has returned OK, indicating that the task
     *    was not pending on a shared semaphore, or was pending on a
     *    shared semaphore but its shared TCB has been removed from the
     *    shared semaphore pendQ.  In that case we simply give the
     *    shared TCB back to the shared TCB partition.
     *    If an error occurs while giving back the shared TCB, a warning
     *    message is sent to the user saying the shared TCB is lost.
     *
     * 2) windDelete has returned ALREADY_REMOVED, indicating that the task
     *    was pending on a shared semaphore but its shared TCB has already
     *    been removed from the shared semaphore pendQ by another CPU.
     *    Its shared TCB is now in this CPU's event list but has not yet
     *    shown up.  In that case we don't free the shared TCB now, since
     *    that would corrupt the CPU's event list; the shared TCB will be
     *    freed by smObjEventProcess when it shows up.
     *
     * 3) This is the worst case: windDelete has returned ERROR,
     *    indicating that the task was pending on a shared semaphore
     *    and qFifoRemove failed when trying to get the lock on
     *    the shared semaphore structure.
     *    In that case the shared semaphore pendQ is in an inconsistent
     *    state, because it still contains a shared TCB of a task which
     *    no longer exists.  We send a message to the user saying
     *    that access to a shared structure has failed.
     */

    /* no failure notification until we have a better solution */

    if (pTcb->pSmObjTcb != NULL)		/* sm tcb to free? */
	{
	if (status == OK)
	    {
	    /* free sm tcb */

	    (*smObjTcbFreeRtn) (pTcb->pSmObjTcb);
	    }
	}

    if (Q_FIRST (&pTcb->safetyQHead) != NULL)	/* flush any deleters */
	{
#ifdef WV_INSTRUMENTATION
	/* windview - level 2 event logging */
	EVT_TASK_1 (EVENT_OBJ_TASK, pTcb);	/* log event */
#endif
	windPendQFlush (&pTcb->safetyQHead);
	}

    windExit ();				/* KERNEL EXIT */

    /* we won't get here if we committed suicide */

    taskUnlock ();				/* UNLOCK PREEMPTION */
    taskUnsafe ();				/* TASK UNSAFE */

    return (OK);
    }

/*******************************************************************************
*
* taskSuspend - suspend a task
*
* This routine suspends a specified task.  A task ID of zero results in
* the suspension of the calling task.  Suspension is additive; thus tasks
* can be delayed and suspended, or pended and suspended.  Suspended, delayed
* tasks whose delays expire remain suspended.  Likewise, suspended,
* pended tasks that unblock remain suspended only.
*
* Care should be taken with asynchronous use of this facility.  The specified
* task is suspended regardless of its current state.  The task could, for
* instance, have mutual exclusion to some system resource, such as the network
* or the system memory partition.  If suspended during such a time, the
* facilities engaged are unavailable, and the situation often ends in
* deadlock.
*
* This routine is the basis of the debugging and exception handling packages.
* However, as a synchronization mechanism, this facility should be rejected
* in favor of the more general semaphore facility.
*/