// umsfreevirtualprocessorroot.cpp (page 1 of 4)
// ==++==
//
// Copyright (c) Microsoft Corporation.  All rights reserved.
//
// ==--==
// =+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+
//
// UMSFreeVirtualProcessorRoot.cpp
//
// Part of the ConcRT Resource Manager -- this source file contains the internal implementation for the UMS free virtual
// processor root (represents a virtual processor as handed to a scheduler).
//
// =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

#include "concrtinternal.h"

namespace Concurrency
{
namespace details
{

    //
    // Defines the number of times we retry ExecuteUmsThread if the thread was suspended before we go on to some other thread.
    //
    const int NUMBER_OF_EXECUTE_SPINS = 10;
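
    //
    // [Illustrative sketch -- not part of the original source] NUMBER_OF_EXECUTE_SPINS bounds a retry loop of
    // roughly the shape below.  ExecuteUmsThread does not return on success; if it does return, the switch
    // failed (typically because the target UMS thread is suspended), and after the bounded number of retries
    // the primary moves on to some other thread.  The helper name is hypothetical and the block is disabled;
    // the real retry logic lives in the SwitchTo/execute path later in this file.
    //
#if 0
    static void SpinExecuteUmsThread(PUMS_CONTEXT pUmsContext)
    {
        for (int i = 0; i < NUMBER_OF_EXECUTE_SPINS; ++i)
        {
            ExecuteUmsThread(pUmsContext);  // returns only on failure
            YieldProcessor();               // brief pause before the next attempt
        }
    }
#endif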

#if defined(_DEBUG)
    //
    // **************************************************
    // READ THIS VERY CAREFULLY:
    // **************************************************
    //
    // If you single step around in the UMS code, it's quite likely that the debugger-triggered suspensions and context fetches will cause
    // the ExecuteUmsThread calls that SwitchTo utilizes to fail.  This will effectively trigger *DRAMATICALLY* different behavior when single
    // stepping the UMS code than not single stepping it.  If you set this flag (available for special kinds of debugging only), we will force
    // all ExecuteUmsThread calls to loop until they succeed.  This means that the vproc will spin wait until the thread is no longer suspended.
    // Note that if you set this flag, you *CANNOT* selectively suspend and resume threads in the debugger.  Doing so may cause the entire 
    // scheduler to freeze.
    //
    BOOL g_InfiniteSpinOnExecuteFailure = FALSE;
#endif

    /// <summary>
    ///     Constructs a new free virtual processor root.
    /// </summary>
    /// <param name="pSchedulerProxy">
    ///     The scheduler proxy this root is created for. A scheduler proxy holds RM data associated with an instance of
    ///     a scheduler.
    /// </param>
    /// <param name="pNode">
    ///     The processor node that this root belongs to. The processor node is one among the nodes allocated to the
    ///     scheduler proxy.
    /// </param>
    /// <param name="coreIndex">
    ///     The index into the array of cores for the processor node specified.
    /// </param>
    UMSFreeVirtualProcessorRoot::UMSFreeVirtualProcessorRoot(UMSSchedulerProxy *pSchedulerProxy, SchedulerNode* pNode, unsigned int coreIndex)
        : VirtualProcessorRoot(pSchedulerProxy, pNode, coreIndex),
          m_hPrimary(NULL), m_pSchedulingContext(NULL), m_pExecutingProxy(NULL), m_hBlock(NULL), m_fDelete(false), m_fStarted(false), m_fActivated(false),
          m_fWokenByScheduler(true)
    {
        m_id = ResourceManager::GetThreadProxyId();
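
        //
        // Both events below are created via CreateEventW(NULL, FALSE, FALSE, NULL): unnamed, auto-reset, and
        // initially non-signaled, so each SetEvent releases at most one waiter.
        //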

        m_hCriticalNotificationEvent = CreateEventW(NULL, FALSE, FALSE, NULL);
        if (m_hCriticalNotificationEvent == NULL)
            throw scheduler_resource_allocation_error(HRESULT_FROM_WIN32(GetLastError()));

        m_hBlock = CreateEventW(NULL, FALSE, FALSE, NULL);
        if (m_hBlock == NULL)
            throw scheduler_resource_allocation_error(HRESULT_FROM_WIN32(GetLastError()));

        CreatePrimary();
    }

    /// <summary>
    ///     Destroys a free virtual processor root.
    /// </summary>
    UMSFreeVirtualProcessorRoot::~UMSFreeVirtualProcessorRoot()
    {
        CloseHandle(m_hCriticalNotificationEvent);
        CloseHandle(m_hBlock);
        CloseHandle(m_hPrimary);
    }

    /// <summary>
    ///     Deletes the virtual processor.
    /// </summary>
    void UMSFreeVirtualProcessorRoot::DeleteThis()
    {
        //
        // We must be extraordinarily careful here!  The scheduler might have called for the removal of the virtual processor from one of two threads:
        // an arbitrary thread (no worries), the virtual processor thread itself (many worries).  Because the primary *IS* the virtual processor root,
        // we cannot simply delete the virtual processor out from underneath the running thread.  What if it page faults on the way out!  We must defer this
        // to the primary after the thread has exited the dispatch loop.  Hence, the deletion happens in a virtual function that can detect this!
        //
        CORE_ASSERT(!OnPrimary());

        UMSThreadProxy *pProxy = UMSThreadProxy::GetCurrent();

        //
        // From now until the end of time, the proxy is in a hyper-critical region.  Let the running thread EXIT. This will be reset once the proxy
        // is on the free list.
        //
        if (pProxy != NULL)
            pProxy->EnterHyperCriticalRegion();

        m_fDelete = true;

        if (pProxy != NULL && pProxy->m_pRoot == this)
        {
            //
            // We are running atop *THIS* virtual processor.  The deletion must be deferred back to the primary thread *AFTER* getting off this one.
            // The switch back to the primary after getting OFF this thread will exit the primary dispatch loop and perform deletion of the virtual processor root.
            //

#if defined(_DEBUG)
            pProxy->SetShutdownValidations();
#endif // _DEBUG
        }
        else
        {
            //
            // We were running atop a *DIFFERENT* virtual processor (or an external context).  It's okay to let go of the critical region.
            //
            if (pProxy != NULL)
                pProxy->ExitHyperCriticalRegion();

            if (m_hPrimary != NULL)
            {
                //
                // We're not on the primary. It must be blocked on m_hBlock.  Wake it up and let it go.  The exit of the loop inside the primary will
                // delete this.
                //
                if (!m_fStarted)
                {
                    StartupPrimary();
                }
                else
                {
                    SetEvent(m_hBlock);
                }
            }
        }
    }

    /// <summary>
    ///     Creates the primary thread.
    /// </summary>
    void UMSFreeVirtualProcessorRoot::CreatePrimary()
    {
        CORE_ASSERT(m_hPrimary == NULL);
        InitialThreadParam param(this);

        m_hPrimary = LoadLibraryAndCreateThread(NULL,
                                  0,
                                  PrimaryMain,
                                  &param,
                                  0,
                                  &m_primaryId);

        //
        // Keep a reference on the scheduler proxy.  The primary needs it as long as it is running!  The reference must be taken after we are guaranteed
        // that the thread will run, but before it actually does.
        //
        SchedulerProxy()->Reference();

        //
        // Make sure that the primary is appropriately affinitized before we actually run anything atop it.  The primary should **NEVER** need
        // to be reaffinitized and any UT which runs atop it will magically pick up this affinity through UMS.  The only caveat to this is that
        // the affinity will only apply to KT(p) and any UT running atop it.  Once a UT makes a transition into the kernel, a directed switch happens
        // to KT(u) which has a separate affinity from KT(p).  If this only happens when the code is in the kernel, it might not be a problem; however --
        // if this kernel call does **NOT** block, UMS allows the UT to ride out on KT(u) instead of switching back to KT(p) as an optimization.  Now,
        // user code will be running atop KT(u) with a differing affinity than the primary.
        //
        // How often this happens is subject to performance analysis to determine whether it is worth it to reaffinitize KT(u) on user mode switching.  This
        // should only be done if **ABSOLUTELY NECESSARY** as it will force a call into the kernel for a user mode context switch (which somewhat defeats
        // the purpose).
        //
        SchedulerProxy()->GetNodeAffinity(GetNodeId()).ApplyTo(m_hPrimary);
        SetThreadPriority(m_hPrimary, SchedulerProxy()->ContextPriority());

        //
        // Wait for the thread to start. This ensures that the thread is at its PrimaryMain. When we start the primary due to an activation, it 
        // needs to be able to handle blocked UTs. Therefore the primary shall not take any locks shared with the UT during StartupPrimary.
        //
        WaitForSingleObject(param.m_hEvent, INFINITE);
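        //
        // Note: param is a stack local passed to the new thread by address; this wait presumably also ensures
        // the primary is done reading it before CreatePrimary returns and param goes out of scope.
        //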
    }
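
    //
    // [Illustrative sketch -- not part of the original source] On Win32, pinning a thread to a node's cores,
    // which is what ApplyTo above effectively does, typically comes down to a group-affinity call along these
    // lines (the helper and its parameters are hypothetical):
    //
#if 0
    static void AffinitizeToNode(HANDLE hThread, WORD processorGroup, KAFFINITY coreMask)
    {
        GROUP_AFFINITY affinity = {};
        affinity.Group = processorGroup;
        affinity.Mask = coreMask;
        SetThreadGroupAffinity(hThread, &affinity, NULL);   // pin hThread to the node's cores
    }
#endif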

    /// <summary>
    ///     Causes the scheduler to start running a thread proxy on the specified virtual processor root which will execute
    ///     the Dispatch method of the context supplied by pContext. Alternatively, it can be used to re-activate a
    ///     virtual processor root that was de-activated by a previous call to Deactivate.
    /// </summary>
    /// <param name="pContext">
    ///     The context which will be dispatched on a (potentially) new thread running atop this virtual processor root.
    /// </param>
    void UMSFreeVirtualProcessorRoot::Activate(Concurrency::IExecutionContext *pContext)
    {
        //
        // We must allow the scheduling context to be here in addition to what we think is here if it is allowed to activate/deactivate.
        // It's entirely possible that the executing proxy is INVALID (already freed) if we get here from that path.
        //

        if (m_fActivated)
        {
            //
            // m_pExecutingProxy could be NULL. When a vproot is initially activated, it attempts to create an
            // internal context for SFW. However, the creation needs to happen outside the primary. Thus it is
            // possible that the vproot fails to get an internal context and deactivates. Note that its 
            // m_pExecutingProxy is NULL since we haven't run any context on it.
            //

            //
            // All calls to Activate after the first one can potentially race with the paired deactivate. This is allowed by the API, and we use the fence below
            // to reduce kernel transitions in case of this race.
            //
            LONG newVal = InterlockedIncrement(&m_activationFence);
            if (newVal == 2)
            {
                // We received two activations in a row. According to the contract with the client, this is allowed, but we should expect a deactivation
                // soon after. Simply return instead of signalling the event. The deactivation will reduce the count back to 1. In addition, we're not responsible
                // for changing the idle state on the core.
            }
            else
            {
                ASSERT(newVal == 1);
                SetEvent(m_hBlock);

                //
                // In the event of an activation/completion race, the scheduler must swallow this set by performing a deactivate.  The scheduler can tell
                // via the return code from Deactivate.
                //
            }
        }
        else
        {
            CORE_ASSERT(m_pExecutingProxy == NULL);

            //
            // The first activation *MUST* be the scheduling context.  It is uniquely bound to the virtual processor on which activate was called.
            //

            m_pSchedulingContext = static_cast<IExecutionContext *> (pContext);
            pContext->SetProxy(this);

            //
            // This is the first time a virtual processor root is activated. Mark it as non-idle for dynamic RM. In future, the core will
            // be marked as idle and non-idle in Deactivate. Also remember that the root is activated. A brand new root is considered idle
            // by dynamic RM until it is activated, but if it is removed from a scheduler before ever being activated, we need to revert the
            // idle state on the core.
            //
            m_fActivated = true;

            //
            // The activation fence need not be set with an interlocked operation, since this variable cannot be accessed
            // concurrently at this point. The only other place the variable is accessed is in Deactivate, and since this is
            // the first activation, a concurrent deactivation is impossible.
            //
            m_activationFence = 1;

            //
            // An activated root increases the subscription level on the underlying core. Future changes to the subscription
            // level are made in Deactivate (before and after blocking).
            //
            GetSchedulerProxy()->IncrementCoreSubscription(GetExecutionResource());

            //
            // Only the primary has responsibility for affinitizing and actually executing the thread proxy.
            //
            StartupPrimary();
        }

        return;
    }
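
    //
    // [Illustrative note -- not part of the original source] How the activation fence pairs Activate with
    // Deactivate, as described in the comments above and in Deactivate below:
    //
    //     fence == 1 : the root is activated (or holds one unconsumed activation)
    //     fence == 0 : the root is deactivated, or about to block in Deactivate
    //     fence == 2 : an Activate arrived before the Deactivate it pairs with; Activate returns without
    //                  signalling m_hBlock, and the subsequent Deactivate decrements back to 1 and skips
    //                  the kernel transition.
    //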

    /// <summary>
    ///     Causes the thread proxy running atop this virtual processor root to temporarily stop dispatching pContext.
    /// </summary>
    /// <param name="pContext">
    ///     The context which should temporarily stop being dispatched by the thread proxy running atop this virtual processor root.
    /// </param>
    bool UMSFreeVirtualProcessorRoot::Deactivate(Concurrency::IExecutionContext *pContext)
    {
        bool fPrimary = OnPrimary();

        if (pContext == NULL || (fPrimary && pContext != m_pSchedulingContext))
        {
            throw std::invalid_argument("pContext");
        }

        if (m_pExecutingProxy == NULL && !fPrimary)
        {
            throw invalid_operation();
        }

        //
        // As with Activate, the scheduling context may activate and deactivate which requires it to utilize its own IContext and not
        // the previously executing one.  Handle this case.
        //
        // Note that if pProxy is NULL at the end of this, we cannot touch m_pExecutingProxy other than for comparisons.  No fields may be
        // touched.  It may already be gone and freed.
        //
        UMSFreeThreadProxy *pProxy = NULL;
        IThreadProxy *pProxyIf = pContext->GetProxy();
        if (pProxyIf != this)
            pProxy = static_cast<UMSFreeThreadProxy *> (pContext->GetProxy());

        if (!fPrimary)
        {
            //
            // Deactivate has to come from the running thread (or the primary)
            //
            if (pProxy != NULL && (m_pExecutingProxy != pProxy || UMSThreadProxy::GetCurrent() != static_cast<UMSThreadProxy *>(pProxy)))
            {
                throw invalid_operation();
            }

            //
            // We had better be in a critical region on the **SCHEDULER SIDE** prior to calling this or all sorts of fun will ensue.
            //
            CORE_ASSERT(pProxy == NULL || pProxy->GetCriticalRegionType() != OutsideCriticalRegion);
        }

        //
        // The activation fence is used to pair scheduler activates with corresponding deactivates. After the first activation, it is possible
        // that the next activation may arrive before the deactivation that it was meant for. In this case we skip the kernel transitions, and
        // avoid having to change the core subscription. Now, with UMS, it's possible that an 'activation' arrives from the RM. We can tell
        // that this is the case if the return value from the ->Deactivate and ->InternalDeactivate APIs is false. We count this as an RM 
        // awakening, and don't decrement the fence on a subsequent deactivate.
        //

        LONG newVal = 0;

        if (m_fWokenByScheduler)
        {
            newVal = InterlockedDecrement(&m_activationFence);
        }
        else
        {
            //
            // We were woken up by the RM. newVal is left at 0, which will force us to deactivate. The activation fence could change
            // from 0 to 1 if a corresponding activation arrives from the scheduler. The order of the assert below is important.
