
📄 internalcontextbase.cpp

📁 Prototypes of C library functions, take them if useful
💻 CPP
📖 Page 1 of 5

#if defined(_DEBUG)
        if (pNextContext != NULL && pNextContext->m_pAssociatedChore != NULL)
            pNextContext->SetDebugBits(CTX_DEBUGBIT_SWITCHTOWITHASSOCIATEDCHORE);
#endif // _DEBUG
        
        IExecutionContext *pDestination = (IExecutionContext *)pNextContext;
        if (pDestination == NULL)
        {
            pDestination = pVirtualProcessor->GetDefaultDestination();
            CORE_ASSERT(pDestination != NULL);
        }

        pThreadProxy->SwitchTo(pDestination, switchState);
        //
        // The m_blockedState is cleared in Affinitize() when someone tries to re-execute this context.
        //

        if (reason != GoingIdle)
            ExitHyperCriticalRegion();
    }
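
    //
    // A minimal sketch (not runtime code) of the m_blockedState handshake mentioned above:
    // the switching-out side publishes CONTEXT_BLOCKED as its last touch of the context, and
    // the side that re-executes the context (Affinitize) clears the flag before running it.
    // The Sketch* names are hypothetical, and 0 is assumed here to mean 'not blocked'.
    //
    static void SketchMarkBlocked(volatile long *pBlockedState)
    {
        // Last store made by the side switching the context out; after this, the context
        // may be picked up, re-affinitized, and re-executed on another virtual processor.
        InterlockedExchange(pBlockedState, CONTEXT_BLOCKED);
    }

    static void SketchClearBlocked(volatile long *pBlockedState)
    {
        // Performed by the re-executing side (conceptually, inside Affinitize()).
        InterlockedExchange(pBlockedState, 0);
    }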

    /// <summary>
    ///     Switches out the internal context. Useful when the virtual processor is to be retired.
    ///     It is also used when un-nesting a scheduler and the context is returning to its original scheduler.
    /// </summary>
    /// <param name="reason">
    ///     Specifies the reason the context is switching out.
    /// </param>
    /// <returns>
    ///     True if the context is canceled. This can happen only when reason == GoingIdle.
    /// </returns>
    bool InternalContextBase::SwitchOut(ReasonForSwitch reason)
    {
        // If this context is about to be added to the idle pool, it could get picked up for reuse and reinitialized in
        // a call to GetInternalContext(). It will *NOT* get re-affinitized or reinitialized until we set the blocked flag
        // below. Save away the thread proxy before setting the blocked flag.

        IThreadProxy * pThreadProxy = m_pThreadProxy;
        bool isCanceled = false;

        if (m_pVirtualProcessor != NULL) 
        {
            // If this internal context was in the last stage of looking for work when it was asked
            // to switch out (retire), we need to make sure that no one grabs it to do work.
            // That's why we make sure it has exclusive access to the virtual processor.
            ReclaimVirtualProcessor();

            // The context is switching out due to the underlying virtual processor being marked for retirement.
            CORE_ASSERT(!m_pVirtualProcessor->IsAvailable());

            // The vproc can safely be removed from lists within the scheduling node, etc. The finalization sweep that suspends
            // virtual processors and waits for them to check in cannot be executing at this time since the counts of idle
            // and active vprocs are guaranteed to be unequal.
            CORE_ASSERT(!m_pScheduler->InFinalizationSweep() && !m_pScheduler->HasCompletedShutdown());

            // Virtual processor retirement needs to be in a hypercritical region. Since the vproc is being
            // retired it is safe to assume that we are not responsible for scheduling other work on this vproc.

#if defined(_DEBUG)
            SetShutdownValidations();
#endif // _DEBUG

            // Make a copy of the safepoint marker so that we can trigger a commit later.
            SafePointMarker safePointMarker = m_pVirtualProcessor->m_safePointMarker;

            EnterHyperCriticalRegion();
            m_pVirtualProcessor->Retire();
            m_pVirtualProcessor = NULL;

            if (reason != GoingIdle)
            {
                ASSERT(reason == Blocking || reason == Yielding);
                // For the cases where we are switching out while blocking, we need to exit the hypercritical region,
                // as the context could be unblocked later and run user code (chore). If we're going Idle,
                // the hypercritical region will be exited when this context is reinitialized, after being picked up
                // off of the free pool.
#if defined(_DEBUG)
                ClearShutdownValidations();
#endif // _DEBUG
                ExitHyperCriticalRegion();

                //
                // For blocking and yielding contexts, the context should be marked as blocked *before* calling
                // VirtualProcessorActive(false), so that finalization will roll back if the idle and active vproc counts become
                // equal (gate count is 0). Essentially, the moment this virtual processor decrements the gate count in
                // VirtualProcessorActive(false), it is not part of the scheduler anymore, and unless the blocked flag is set
                // here, the scheduler may finalize without resuming this context when it is ready to run.
                //
                // In addition, this should be done *after* all accesses to m_pVirtualProcessor. If this is a 'Block' operation,
                // an unblock could put this context on a runnables collection, and it could be taken off and re-affinitized,
                // changing the value of m_pVirtualProcessor out from under us. Moreover, setting m_pVirtualProcessor to NULL here
                // ensures that we will quickly catch future bugs where it is accessed after this point and before the context
                // waits on the block event.
                //
                CORE_ASSERT(!IsBlocked());
                InterlockedExchange(&m_blockedState, CONTEXT_BLOCKED);
            }
            else
            {
                // For the GoingIdle case, the sequence of events strictly needs to be as follows:
                //        1.  add 'this' to the idle pool
                //        2.  invoke VirtualProcessorActive(false) - making the virtual processor inactive
                //        3.  all other accesses to 'this'
                //        4.  set the blocked flag.
                // After the blocked flag is set while the context is on the idle pool, it is unsafe to touch *this*. The context
                // could be repurposed, or even deleted if the scheduler shuts down. It is important to note that the context is
                // inside a hypercritical region here. Therefore, in the UMS scheduler, all blocking operations are hidden from us.

                //
                // If the context is going idle, it should *not* be marked as blocked until after it is put on the idle queue.
                // During finalization, the scheduler ignores all contexts marked as blocked that are also on the idle queue for
                // the purpose of determining if any work is remaining to be done. If this context is marked as blocked before
                // it is on the idle queue, and the scheduler is simultaneously sweeping for finalize, it may incorrectly assume
                // that this is a blocked context that will become runnable in the future. This could hang finalization.
                //

                // Since we're going idle on a switch out operation, once we pass the VirtualProcessorActive(false) call below, we're no
                // longer considered part of the scheduler, and we need to worry about the scheduler shutting down simultaneously.
                // If we're blocking or yielding, there's no problem, because a scheduler cannot shut down while there is a blocked
                // context. However, since we're going idle, a different thread (either a virtual processor or an external thread)
                // could initiate a sweep for finalize (if the conditions are met). It is unsafe for us to add this context to
                // the idle pool WHILE a sweep is concurrently going on. The sweep code goes through and checks to see if any contexts
                // not on the free list (not marked idle) have their blocked flag set. If we're racing with the sweep, we could add this
                // context to the free list and set its blocked flag between the time the sweep checks the idle and blocked state.
                // It is possible to hang finalization in this case, since the sweeping thread will believe it has found a 'blocked context'
                // and roll back finalization. Therefore we MUST add this context to the idle pool BEFORE making the VirtualProcessorActive
                // call.

                // Return to the idle pool. This first puts the context into the idle pool of the scheduler instance.
                // If the idle pool is full, the scheduler will return the context to the resource manager.
                TraceContextEvent(CONCRT_EVENT_IDLE, TRACE_LEVEL_INFORMATION, m_pScheduler->Id(), m_id);
                TRACE(TRACE_SCHEDULER, L"ThreadInternalContext::SwitchOut(idle)");
                m_pGroup->ReleaseInternalContext(this);
            }

            //
            // If the reason is "blocking", the context could now appear on the runnables list. As a result we shall not make
            // any synchronous UMS blocking calls, such as attempting to acquire the heap lock, etc., from this point on. If we do and
            // are blocked on a lock that is held by a UT, the remaining vproc might not be able to run the UT as it could be
            // spinning on this context.
            //

            //
            // In the event that this virtual processor hadn't yet observed safe points, we need to make sure that its removal commits
            // all data observations that are okay with other virtual processors. Since safe point invocations could take arbitrary
            // locks and block, we trigger safe points on all the virtual processors (we have removed ourselves from that list).
            //
            m_pScheduler->TriggerCommitSafePoints(&safePointMarker);

            // Reducing the active vproc count could potentially lead to finalization if we're in a shutdown semantic.
            // If that happens to be the case (it can only happen if the context switch reason is GoingIdle), we will exit the
            // dispatch loop and return the thread proxy to the RM - the virtual processor has been retired which means the
            // underlying virtual processor root has been destroyed. We have already removed the virtual processor from the
            // lists in the scheduler, so the underlying thread proxy will not get 'woken up' via a subsequent call to Activate
            // on the underlying vproc root.
            m_pScheduler->VirtualProcessorActive(false);
            CORE_ASSERT(!m_fCanceled || (m_pScheduler->HasCompletedShutdown() && (reason == GoingIdle)));

            // Make a local copy of m_fCanceled before we set m_blockedState. On scheduler shutdown,
            // m_fCanceled is set to true. In this case, we need to do cleanup. The field needs
            // to be cached since another vproc could pick this context up and set the m_fCanceled flag
            // before we do the check again to invoke cleanup.
            isCanceled = m_fCanceled;

            if (reason == GoingIdle)
            {
                // After VirtualProcessorActive(false) and all accesses to 'this' it is safe to set m_blockedState while going idle.
                CORE_ASSERT(!IsBlocked());
                InterlockedExchange(&m_blockedState, CONTEXT_BLOCKED);
            }
        }
        else
        {
            // This is a nested context returning to its parent scheduler.
            CORE_ASSERT(reason == Nesting);
            CORE_ASSERT(IsBlocked());
        }

        switch (reason)
        {
        // We've already added the context to the free list for GoingIdle
        case Yielding:
        case Nesting:

            // Add this to the runnables collection in the schedule group.
            TRACE(TRACE_SCHEDULER, L"ThreadInternalContext::SwitchOut(nest/yield)");
            m_pGroup->AddRunnableContext(this);
            break;

        case Blocking:
            TRACE(TRACE_SCHEDULER, L"ThreadInternalContext::SwitchOut(block)");
            break;
        }

        if (reason != GoingIdle)
        {
            // There's no need to invoke SwitchOut on the thread proxy if we're going idle; we can simply return and the
            // context will exit its dispatch loop.
            pThreadProxy->SwitchOut();
            //
            // m_blockedState will be reset when we affinitize the context to re-execute it.
            //
        }

        if (isCanceled)
        {
            // We could be canceled only if we are going idle.
            CORE_ASSERT(reason == GoingIdle);
        }

        return isCanceled;
    }
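
    //
    // A minimal sketch (not runtime code) of the going-idle ordering rule documented above,
    // reduced to two flags. The finalization sweep rolls back only when it finds a context
    // that is blocked but NOT on the idle pool, so a going-idle context must be published to
    // the idle pool first and marked blocked last. All Sketch* names are hypothetical.
    //
    struct SketchIdleContext
    {
        volatile long fOnIdlePool;      // nonzero once the context is on the idle pool
        volatile long fBlocked;         // analogue of m_blockedState == CONTEXT_BLOCKED
    };

    static void SketchGoIdle(SketchIdleContext *pContext)
    {
        InterlockedExchange(&pContext->fOnIdlePool, 1);     // 1. publish to the idle pool
        // 2. VirtualProcessorActive(false) - the vproc leaves the scheduler's counts here
        // 3. all remaining accesses to the context
        InterlockedExchange(&pContext->fBlocked, 1);        // 4. last touch of the context
    }

    static bool SketchSweepSeesBlockedWork(SketchIdleContext *pContext)
    {
        // The sweep treats only "blocked and not idle" as outstanding work; with the
        // ordering in SketchGoIdle it can never observe that state for an idle context.
        return pContext->fBlocked != 0 && pContext->fOnIdlePool == 0;
    }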

    /// <summary>
    ///     Called when a context is nesting a scheduler. If nesting takes place on what is an internal context in
    ///     the 'parent' scheduler, the context must return the virtual processor to the parent scheduler.
    /// </summary>
    void InternalContextBase::LeaveScheduler()
    {
        EnterCriticalRegion();

        // Find a context to take over the underlying virtual processor and switch to it. When a context switches to a 
        // different context with the reason 'Nesting', the SwitchTo API will affinitize the context we found to 
        // the virtual processor 'this' context is running on, and return - allowing the underlying thread proxy to
        // join a nested scheduler as an external context.

        InternalContextBase *pContext = NULL;
        WorkItem work;
        if (m_pVirtualProcessor->SearchForWork(&work, m_pGroup))
        {
            ExitCriticalRegion();
            pContext = work.Bind();
            EnterCriticalRegion();
        }
        else
        {
            ExitCriticalRegion();
            pContext = m_pGroup->GetInternalContext();
            EnterCriticalRegion();
        }

        ASSERT(this != pContext);

        SwitchTo(pContext, Nesting);

        ASSERT(SchedulerBase::FastCurrentContext() == this);
        ASSERT(m_pVirtualProcessor == NULL);
        ASSERT(m_pGroup != NULL);
        ASSERT(IsBlocked());

        ExitCriticalRegion();
    }
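
    //
    // Note on the shape of LeaveScheduler above: work.Bind() and GetInternalContext() may both
    // allocate (and therefore block), which is presumably why the critical region is exited
    // around them and re-entered only for the SwitchTo itself. This is a reading of the code
    // above, not a documented contract.
    //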

    /// <summary>
    ///     Called when an internal context detaches from a nested scheduler. The context must find a virtual processor
    ///     on a previous context before it may run.
