internalcontextbase.cpp
#endif // _DEBUG
                pContext->m_pVirtualProcessor->m_localRunnableContexts.Push(this);
                // IMPORTANT NOTE: 'this' could be recycled and reused by this point, unless the cross group runnables flag is set. (If the
                // flag IS set, we are guaranteed that the context's group will not be set to NULL/destroyed, and that the context will not
                // be recycled until we set the flag to false below).
                // We can, however, access m_pScheduler for a recycled context, since it retains the same value until the context is destroyed,
                // and contexts are only destroyed during scheduler shutdown.
                CMTRACE(MTRACE_EVT_AVAILABLEVPROCS, this, pContext->m_pVirtualProcessor, m_pScheduler->m_virtualProcessorAvailableCount);

                if (m_pScheduler->m_virtualProcessorAvailableCount > 0)
                {
#if defined(_DEBUG)
                    pContext->SetDebugBits(CTX_DEBUGBIT_STARTUPIDLEVPROCONADD);
#endif // _DEBUG
                    m_pScheduler->StartupIdleVirtualProcessor(pGroup, pBias);
                }

                if (pContext->GetScheduleGroup() != pGroup)
                {
                    // Reset the flag, if it was set, since we're done with touching scheduler/context data.
                    // This flag is not fenced. This means the reader could end up spinning a little longer until the data is
                    // propagated by the cache coherency mechanism.
                    CrossGroupRunnable(FALSE);
                    // NOTE: It is not safe to touch 'this' after this point, if this was a cross group runnable.
                }

                pContext->ExitCriticalRegion();
                return;
            }
            pContext->ExitCriticalRegion();
        }

#if defined(_DEBUG)
        SetDebugBits(CTX_DEBUGBIT_ADDEDTORUNNABLES);
#endif // _DEBUG

        m_pGroup->AddRunnableContext(this, pBias);
    }
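
    // A standalone sketch of the cross-group-runnable handshake described above: the writer
    // clears the flag without a fence, and a reader (SpinUntilValueEquals, used by
    // SwapScheduleGroup below) spins until it observes the new value. The std::atomic
    // formulation and the names are illustrative, not the runtime's own code.
#if 0
    #include <atomic>

    // Relaxed store: no fence is issued, so a spinning reader may loop a little longer
    // until cache coherency propagates the value, exactly as the comment above notes.
    static void SetFlagUnfenced(std::atomic<long>& flag, long value)
    {
        flag.store(value, std::memory_order_relaxed);
    }

    // Spin until the flag holds 'expected'; a production version would add backoff.
    static void SpinUntilValueEqualsSketch(const std::atomic<long>& flag, long expected)
    {
        while (flag.load(std::memory_order_relaxed) != expected)
            ;
    }
#endif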

    /// <summary>
    ///     Spins until the 'this' context is in a firmly blocked state. 
    /// </summary>
    /// <remarks>
    ///     This implements a sort of barrier. At certain points during execution, it is essential to wait until a context
    ///     has set the flag indicating it is blocked, in order to preserve correct behavior.
    ///     One example is if there is a race between block and unblock for the same context, i.e. if a context is trying to
    ///     block at the same time a different context is trying to unblock it.
    /// </remarks>
    void InternalContextBase::SpinUntilBlocked()
    {
        ASSERT(SchedulerBase::FastCurrentContext() != this);

        if (!IsBlocked())
        {
            _SpinWaitBackoffNone spinWait(_Sleep0);

            do
            {
                spinWait._SpinOnce();

            } while (!IsBlocked());
        }
        ASSERT(IsBlocked());
    }
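
    // A minimal standalone version of the barrier pattern above, built only on the standard
    // library; the name SpinUntilSet is illustrative, not the runtime's. The runtime's
    // _SpinWaitBackoffNone(_Sleep0) wraps the same loop with a Sleep(0)-style yield.
#if 0
    #include <atomic>
    #include <thread>

    static void SpinUntilSet(const std::atomic<bool>& flag)
    {
        // Acquire pairs with the release (or interlocked) store made by the blocking
        // thread, so writes made before the flag flip are visible once the loop exits.
        while (!flag.load(std::memory_order_acquire))
            std::this_thread::yield();  // give up the time slice instead of burning it
    }
#endif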

    /// <summary>
    ///     Swaps the existing schedule group with the one supplied. This function should be called when the context already
    ///     has a schedule group. It decrements the existing group reference count, and references the new one if the caller
    ///     indicates so.
    /// </summary>
    /// <param name="pNewGroup">
    ///     The new group to assign to the context. This may be NULL.
    /// </param>
    /// <param name="referenceNewGroup">
    ///     Whether the context should reference the new group. In some cases, there may be an existing reference
    ///     transferred to the context, in which case this parameter is false.
    /// </param>
    void InternalContextBase::SwapScheduleGroup(ScheduleGroupBase* pNewGroup, bool referenceNewGroup)
    {
        if (m_pGroup == NULL)
        {
            ASSERT(pNewGroup == NULL);
            return;
        }

        // We expect that a context modifies its non-null schedule group only when it is running.
        ASSERT(SchedulerBase::FastCurrentContext() == this);
        ASSERT((pNewGroup != NULL) || (!referenceNewGroup));

        // Before releasing the reference count on the schedule group, which could end up destroying the schedule group if the ref
        // count falls to zero, check if the m_fCrossGroupRunnable flag is set. If it is, it means a different thread that previously added
        // this context to a runnables collection is relying on the group being alive. Also, since the current call is executing within
        // some context's dispatch loop, and every running dispatch loop has a reference on the scheduler, we are guaranteed that scheduler
        // finalization will not proceed while this flag is set on any context inside a scheduler.
        SpinUntilValueEquals(&m_fCrossGroupRunnable, FALSE);

        m_pGroup->InternalRelease();
        if (referenceNewGroup)
        {
            pNewGroup->InternalReference();
        }
        m_pGroup = pNewGroup;
    }
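
    // A sketch of the two calling patterns implied by 'referenceNewGroup'. The call sites
    // are hypothetical (the function must run on the context itself, per the assert above);
    // what matters is which side owns the reference that ends up stored in m_pGroup.
#if 0
    // The caller keeps its own reference to pNewGroup, so the context takes one of its own.
    SwapScheduleGroup(pNewGroup, true);

    // The caller hands over a reference it already holds; the context must NOT take
    // another, or the group's reference count would leak by one.
    SwapScheduleGroup(pNewGroup, false);
#endif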

    /// <summary>
    ///     Switches from one internal context to another.
    /// </summary>
    /// <param name="pNextContext">
    ///     The context to switch to.  If this is NULL on the UMS scheduler, we will switch back to the primary.
    /// </param>
    /// <param name="reason">
    ///     Specifies the reason the switch is occurring.
    /// </param>
    void InternalContextBase::SwitchTo(InternalContextBase* pNextContext, ReasonForSwitch reason)
    {
        CMTRACE(MTRACE_EVT_SWITCHTO, this, m_pVirtualProcessor, pNextContext);

        SwitchingProxyState switchState = ::Concurrency::Blocking;

        // **************************************************
        //
        // There is a dangerous zone between the call to Affinitize and the end of pThreadProxy->SwitchTo.  If we trigger a UMS block for
        // any reason, we can corrupt the virtual processor state as we reschedule someone else, come back, and don't properly have pNextContext
        // affinitized.
        //
        // If we call any BLOCKING APIs (including utilization of our own locks), there are potential issues as something else might
        // be rescheduled on this virtual processor from the scheduling context.
        //
        // **************************************************

        //
        // Various state manipulations which may take locks or make arbitrary blocking calls happen here.  This must be done outside the inclusive
        // region of [Affinitize, pThreadProxy->SwitchTo].  Otherwise, our state can become corrupted if a page fault or blocking operation triggers
        // UMS activation in that region.
        //
        switch (reason)
        {
        case GoingIdle:
            CORE_ASSERT(m_pAssociatedChore == NULL);
            VCMTRACE(MTRACE_EVT_SWITCHTO_IDLE, this, m_pVirtualProcessor, pNextContext);

            //
            // The scheduler has an idle pool of contexts, however, before putting a context on this pool, we must
            // disassociate it from its thread proxy - so that if it is picked up off the free list by a different
            // caller, that caller will associate a new thread proxy with it. The reason for this disassociation is
            // that we want to pool thread proxies in the RM, and not in the scheduler.
            //
            // The state of the context cannot be cleared until the context reaches the blocked state. It's possible we
            // block/page fault somewhere lower and require the information until m_blockedState is set to blocked.
            //
            TraceContextEvent(CONCRT_EVENT_IDLE, TRACE_LEVEL_INFORMATION, m_pScheduler->Id(), m_id);

            if (pNextContext != NULL)
            {
                TRACE(TRACE_SCHEDULER, L"InternalContextBase::SwitchTo(dispatch:pNextContext->(ctx=%d,grp=%d))", pNextContext->Id(), pNextContext->ScheduleGroupId());
            }

            m_pGroup->ReleaseInternalContext(this);

            // **************************************************
            // Read this extraordinarily carefully:
            // 
            // This context is on the free list.  Meaning someone can grab and switch to it.  Unfortunately, this means
            // we might page fault or block here.  That operation would instantly set m_blockedState, which would release 
            // the guy spinning and suddenly we have two virtual processors in-fighting over the same context. 
            //
            // Because we are inside a critical region, no page faults are observable to the scheduler code.  This does
            // mean that you cannot call *ANY BLOCKING* API between this marker and the EnterHyperCriticalRegion below.
            // If you do, you will see random behavior or the primary will assert at you.
            // **************************************************

            switchState = ::Concurrency::Idle;
            break;

        case Yielding:
            //
            // Add this to the runnables collection in the schedule group.
            //
            VCMTRACE(MTRACE_EVT_SWITCHTO_YIELDING, this, m_pVirtualProcessor, pNextContext);

            if (pNextContext != NULL)
            {
                TRACE(TRACE_SCHEDULER, L"InternalContextBase::SwitchTo(yield:pNextContext->(ctx=%d,grp=%d))", pNextContext->Id(), pNextContext->ScheduleGroupId());
            }

            CORE_ASSERT(switchState == ::Concurrency::Blocking);
            m_pGroup->AddRunnableContext(this);
            break;

        case Blocking:
            VCMTRACE(MTRACE_EVT_SWITCHTO_BLOCKING, this, m_pVirtualProcessor, pNextContext);

            if (pNextContext != NULL)
            {
                TRACE(TRACE_SCHEDULER, L"InternalContextBase::SwitchTo(block:pNextContext->(ctx=%d,grp=%d))", pNextContext->Id(), pNextContext->ScheduleGroupId());
            }

            CORE_ASSERT(switchState == ::Concurrency::Blocking);
            break;

        case Nesting:
            VCMTRACE(MTRACE_EVT_SWITCHTO_NESTING, this, m_pVirtualProcessor, pNextContext);

            if (pNextContext != NULL)
            {
                TRACE(TRACE_SCHEDULER, L"InternalContextBase::SwitchTo(nest:pNextContext->(ctx=%d,grp=%d))", pNextContext->Id(), pNextContext->ScheduleGroupId());
            }

            switchState = ::Concurrency::Nesting;
            break;
        }

        EnterHyperCriticalRegion();

        //
        // No one can reuse the context until we set the blocked flag.  It can come off the idle list, but the thread pulling it off the idle list will
        // immediately spin until blocked inside the acquisition.  It is entirely possible, however, that the moment we flip the blocked flag, the spinner
        // gets released and the proxy fields, etc...  are overwritten.  We still own the thread proxy from the RM's perspective and the RM will sort out
        // races in its own way.  We must, however, cache the thread proxy before we set the blocked flag and not rely on *ANY* fields maintained by the *this*
        // pointer after the flag is set.
        //
        VirtualProcessor *pVirtualProcessor = m_pVirtualProcessor;
        m_pVirtualProcessor = NULL;

        CORE_ASSERT(!IsBlocked());

#if defined(_DEBUG)
        ClearDebugBits(CTX_DEBUGBIT_AFFINITIZED);

        if (reason != GoingIdle)
            SetDebugBits(CTX_DEBUGBIT_COOPERATIVEBLOCKED);
#endif // _DEBUG

        CORE_ASSERT(m_pThreadProxy != NULL);
        IThreadProxy *pThreadProxy = m_pThreadProxy;

        //
        // The blocked flag needs to be set on the context to prevent the block-unblock race as described in
        // VirtualProcessor::Affinitize. In addition, it is used during finalization to determine whether
        // work exists in the scheduler.
        //
        InterlockedExchange(&m_blockedState, CONTEXT_BLOCKED);

        // **************************************************
        // At this point, it is unsafe to touch the *this* pointer.  You cannot touch it, debug with it, or rely on it.  It may be reused if 
        // reason == GoingIdle and represent another thread.
        // **************************************************
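
        // Condensed sketch of the publish protocol used above (local names illustrative):
        // snapshot every field still needed, publish the blocked flag last, then touch only
        // the snapshots. After the interlocked store, another thread may legally recycle
        // this context.
#if 0
        VirtualProcessor *pSnappedVProc = m_pVirtualProcessor;  // 1. snapshot state...
        IThreadProxy *pSnappedProxy = m_pThreadProxy;           //    ...needed after publish
        InterlockedExchange(&m_blockedState, CONTEXT_BLOCKED);  // 2. publish (full barrier)
        pSnappedVProc->Affinitize(pNextContext);                // 3. use only the snapshots
#endif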

        // The 'next' context must be affinitized to a copy of the 'this' context's vproc that was snapped, BEFORE
        // the blocked flag was set. Not doing this could result in vproc orphanage. See VirtualProcessor::Affinitize
        // for details. We cache the vproc pointer in a local variable before setting m_blockedState. Thus re-affinitizing 
        // the 'this' context would not affect the vproc that the 'next' context is going to get affinitized to.
        // With UMS, if the pNextContext is NULL, the vproc affinitizes the scheduling Context.
        pVirtualProcessor->Affinitize(pNextContext);

        CORE_ASSERT(pNextContext == NULL || pNextContext->m_pThreadProxy != NULL); 
