internalcontextbase.cpp

Collection: C library function prototypes, take what is useful (CPP)
Page 1 of 5
// ==++==
//
// Copyright (c) Microsoft Corporation.  All rights reserved.
//
// ==--==
// =+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+
//
// InternalContextBase.cpp
//
// Source file containing the implementation for an internal execution context.
//
// =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
#include "concrtinternal.h"

namespace Concurrency
{
namespace details
{

#if defined(_DEBUG)
    void SetContextDebugBits(InternalContextBase *pContext, DWORD bits)
    {
        if (pContext != NULL)
            pContext->SetDebugBits(bits);
    }
#endif 

    /// <summary>
    ///     Constructs the base class object for an internal context.
    /// </summary>
    InternalContextBase::InternalContextBase(SchedulerBase *pScheduler) :
        ContextBase(pScheduler, false),
#if defined(_DEBUG)
        m_fEverRecycled(false),
        m_pAssignedThreadProxy(NULL),
        m_pLastAssignedThreadProxy(NULL),
        m_pVirtualProcessor(NULL),
        m_ctxDebugBits(0),
        m_lastDispatchedTid(0),
        m_lastAcquiredTid(0),
        m_lastAffinitizedTid(0),
        m_workStartTimeStamp(0),
        m_lastRunPrepareTimeStamp(0),
        m_prepareCount(0),
#else
        m_pVirtualProcessor(NULL),
#endif
        m_pAssociatedChore(NULL),
        m_pThreadProxy(NULL),
        m_searchCount(0),
        m_fCanceled(false),
        m_fIsVisibleVirtualProcessor(false),
        m_fHasDequeuedTask(false),
        m_pOversubscribedVProc(NULL),
        m_fCrossGroupRunnable(FALSE),
        m_fIdle(true),
        m_fWorkSkipped(false)
    {
        // Initialize base class members.
        m_pGroup = NULL;
    }

    /// <summary>
    ///     Causes the internal context to block yielding the virtual processor to a different internal context.
    /// </summary>
    void InternalContextBase::Block()
    {
        EnterCriticalRegion();
        ASSERT(this == SchedulerBase::FastCurrentContext());
        ASSERT(m_pVirtualProcessor != NULL);

        TraceContextEvent(CONCRT_EVENT_BLOCK, TRACE_LEVEL_INFORMATION, m_pScheduler->Id(), m_id);

        if (m_pVirtualProcessor->IsMarkedForRetirement())
        {
            // The virtual processor has been marked for retirement. The context needs to switch out rather 
            // than switching to a different context.

            // The context switching fence needs to be modified in two steps to maintain parity 
            // with the regular block/unblock sequence. Else, we could get into a situation where
            // it has an invalid value.
            if ((InterlockedIncrement(&m_contextSwitchingFence) == 1) && (InterlockedCompareExchange(&m_contextSwitchingFence, 2, 1) == 1))
            {
                TRACE(TRACE_SCHEDULER, L"InternalContextBase::Block->switching out");
                SwitchOut(Blocking);
            }
            else
            {
                // Even if the unblock is skipped, we should not continue running this context since the 
                // virtual processor needs to be retired. It should be put on the runnables list and 
                // the context should block (which is the same series of steps as when yielding).
                TRACE(TRACE_SCHEDULER, L"InternalContextBase::Block->Unblock was skipped, switching out");
                SwitchOut(Yielding);
            }
        }
        else
        {
            // Execute a different context on the underlying virtual processor.

            if (InterlockedIncrement(&m_contextSwitchingFence) == 1)
            {
                InternalContextBase *pContext = NULL;
                WorkItem work;
                if (m_pVirtualProcessor->SearchForWork(&work, m_pGroup))
                {
                    if (!work.IsContext())
                    {
                        //
                        // Bind the work item to a context outside of a critical region -- this prevents the huge cost of allocation
                        // (or worse -- thread creation) within a critical region.
                        //
                        ExitCriticalRegion();
                        CORE_ASSERT(GetCriticalRegionType() == OutsideCriticalRegion);
                        pContext = work.Bind();
                        EnterCriticalRegion();
                    }
                    else
                    {
                        //
                        // Avoid the enter/exit cost if we found a context to switch to.
                        //
                        pContext = work.GetContext();
                    }
                }

#if defined(_DEBUG)
                CORE_ASSERT(this != pContext);
                if (pContext != NULL)
                {
                    CMTRACE(MTRACE_EVT_SFW_FOUND, this, m_pVirtualProcessor, pContext);
                    CMTRACE(MTRACE_EVT_SFW_FOUNDBY, pContext, m_pVirtualProcessor, this);
                }
#endif

                // Only switch to the other context if unblock has not been called since we last touched the
                // context switching fence. If there was an unblock since, the comparison below will fail.
                if (InterlockedCompareExchange(&m_contextSwitchingFence, 2, 1) == 1)
                {
                    //
                    // *NOTE* After this point, we dare not block.  A racing ::Unblock call can put *US* on the runnables list and the scheduler
                    // will get awfully confused if a UMS activation happens between now and the time we SwitchTo the context below.  Note that 
                    // page faults and suspensions are masked by the effect of being in a critical region.  It just means that we cannot call
                    // *ANY* blocking API (including creating a new thread).
                    // 
                    if (pContext == NULL)
                    {
                        //
                        // A runnable was not found - we'd like to schedule an internal context that will look for realized/unrealized
                        // chores (run a SFW loop or deactivate).  The unfortunate reality is that we cannot necessarily just schedule.
                        //
                        pContext = m_pGroup->GetInternalContext();
                    }

                    SwitchTo(pContext, Blocking);
                }
                else
                {
                    // A matching unblock was detected. Skip the block. If a runnable context was found, it needs to be 
                    // put back into the runnables collection.

                    // NOTE -- don't look at pContext after pContext->AddToRunnables; it might be gone
                    if (pContext != NULL)
                    {
                        TRACE(TRACE_SCHEDULER, L"InternalContextBase::Block->innerskipblock(ctx=%d,grp=%d)", pContext->GetId(), pContext->GetScheduleGroupId());
                        VCMTRACE(MTRACE_EVT_BLOCKUNBLOCKRACE, pContext, m_pVirtualProcessor, this);

#if defined(_DEBUG)
                        //
                        // For a recycled context, this allows other assertions elsewhere in the codebase to be valid and continue to catch
                        // issues around recycling.
                        //
                        pContext->ClearDebugBits(CTX_DEBUGBIT_RELEASED);
#endif // _DEBUG

                        pContext->AddToRunnables();
                    }
                }
            }
            else
            {
                // Skip the block
                TRACE(TRACE_SCHEDULER, L"InternalContextBase::Block->outerskipblock(ctx=%d,grp=%d)", GetId(), GetScheduleGroupId());
            }
        }
        ExitCriticalRegion();
    }
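The ExitCriticalRegion/EnterCriticalRegion bracket around work.Bind() above illustrates a general pattern: perform allocation-heavy work with the critical region released, then re-acquire it to use the result. A minimal sketch of that pattern, with a std::mutex standing in for the critical region (Work, bind_work, and find_and_bind are hypothetical names, not ConcRT APIs):

```cpp
#include <cassert>
#include <memory>
#include <mutex>

std::mutex g_region;   // stands in for the scheduler's critical region

struct Work { int payload; };

// Expensive binding step: may allocate or even create a thread,
// so it must not run while the region is held.
std::unique_ptr<Work> bind_work(int payload)
{
    auto w = std::make_unique<Work>();
    w->payload = payload;
    return w;
}

std::unique_ptr<Work> find_and_bind(int payload)
{
    std::unique_lock<std::mutex> region(g_region);
    // Release the region around the expensive bind, mirroring the
    // ExitCriticalRegion()/EnterCriticalRegion() bracket in Block().
    region.unlock();
    auto bound = bind_work(payload);
    region.lock();
    return bound;   // region is held again while the result is used
}
```

The point of the bracket is cost, not correctness of the bind itself: anything that can block or allocate is kept out of the region so the scheduler is never suspended while holding it.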

    /// <summary>
    ///     Unblocks the internal context putting it on the runnables collection in its schedule group.
    /// </summary>
    void InternalContextBase::Unblock()
    {
        if (this != SchedulerBase::FastCurrentContext())
        {
            LONG newValue = 0;

            newValue = InterlockedDecrement(&m_contextSwitchingFence);

            TraceContextEvent(CONCRT_EVENT_UNBLOCK, TRACE_LEVEL_INFORMATION, m_pScheduler->Id(), m_id);

            if (newValue == 1)
            {
                // Weak assign is ok.  Any other 'LOCK' interaction with m_contextSwitchingFence will
                // flush the correct value through.
                m_contextSwitchingFence = 0;

                // Wait until this context is blocked.
                //
                // SpinUntilBlocked is essential here. Consider the case where the context being unblocked is currently executing the Block
                // API on virtual processor VP1. It is at a point very close to SwitchTo (after the second interlocked operation), which implies
                // a different context is about to be affinitized to VP1 to take its place, before this context is switched out.
                
                // If Unblock puts the 'this' context on a runnables list, it could be pulled off by a different context running on VP2 and get
                // affinitized to VP2. Then SwitchTo in Block is called, the new context is affinitized to VP2 instead of VP1, and VP1 is orphaned.
                
                // The wait until blocked ensures that the affinitize step in Block takes place before this context is put onto runnables, so that
                // the blocking context sets the correct affinity for the new context.

                SpinUntilBlocked();

                TRACE(TRACE_SCHEDULER, L"InternalContextBase::Unblock->runnables(ctx=%d,grp=%d)", GetId(), GetScheduleGroupId());
                AddToRunnables();
            }
            else
            {
                if ((newValue < -1) || (newValue > 0))
                {
                    // Should not be able to get m_contextSwitchingFence above 0.
                    ASSERT(newValue < -1); 

                    // Too many unblocks without intervening blocks. Block/unblock calls need to balance.
                    TRACE(TRACE_SCHEDULER, L"InternalContextBase::Unblock->unbalanced(ctx=%d,grp=%d)", GetId(), GetScheduleGroupId());

                    throw context_unblock_unbalanced();
                }
                else
                {
                    TRACE(TRACE_SCHEDULER, L"InternalContextBase::Unblock->skipunblock(ctx=%d)", GetId());
                }
            }
        }
        else
        {
            // A context is not allowed to unblock itself.
            TRACE(TRACE_SCHEDULER, L"InternalContextBase::Unblock->selfunblock(ctx=%d,grp=%d)", GetId(), GetScheduleGroupId());
            throw context_self_unblock();
        }
    }
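The interlocked dance between Block and Unblock can be modeled in isolation. The sketch below uses a hypothetical Fence type built on std::atomic; it is not the ConcRT implementation (and omits the unbalanced-unblock exception), but it reproduces the fence transitions described above:

```cpp
#include <atomic>
#include <cassert>

// Hypothetical model of m_contextSwitchingFence; names are illustrative.
struct Fence
{
    std::atomic<long> value{0};

    // Mirrors Block(): InterlockedIncrement, then a CAS from 1 to 2.
    // Returns true if the context should really switch out.
    bool try_block()
    {
        if (value.fetch_add(1) + 1 != 1)
            return false;                      // an unblock already ran: skip the block
        long expected = 1;
        return value.compare_exchange_strong(expected, 2);  // commit the block
    }

    // Mirrors Unblock(): InterlockedDecrement observing 2 -> 1.
    // Returns true if the context should be moved to runnables.
    bool unblock()
    {
        if (value.fetch_sub(1) - 1 != 1)
            return false;                      // block not committed yet: skip the unblock
        value.store(0);                        // weak reset, as in Unblock() above
        return true;
    }
};
```

In the normal order, try_block() commits (fence 0 to 1 to 2) and a later unblock() observes 2 to 1 and resets to 0. If unblock() races ahead, the fence dips to -1, the subsequent try_block() brings it back to 0, and both sides skip, leaving the fence balanced.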

    /// <summary>
    ///     Yields the virtual processor to a different runnable internal context if one is found.
    /// </summary>
    void InternalContextBase::Yield()
    {
        bool bSwitchToThread = false;

        EnterCriticalRegion();
