numa_memory_policy.txt
What is Linux Memory Policy?

In the Linux kernel, "memory policy" determines from which node the kernel
will allocate memory in a NUMA system or in an emulated NUMA system.  Linux
has supported platforms with Non-Uniform Memory Access architectures since
2.4.?.  The current memory policy support was added to Linux 2.6 around May
2004.  This document attempts to describe the concepts and APIs of the 2.6
memory policy support.

Memory policies should not be confused with cpusets
(Documentation/cpusets.txt), which is an administrative mechanism for
restricting the nodes from which memory may be allocated by a set of
processes.  Memory policies are a programming interface that a NUMA-aware
application can take advantage of.  When both cpusets and policies are
applied to a task, the restrictions of the cpuset take priority.  See
"MEMORY POLICIES AND CPUSETS" below for more details.

MEMORY POLICY CONCEPTS

Scope of Memory Policies

The Linux kernel supports _scopes_ of memory policy, described here from
most general to most specific:

    System Default Policy:  this policy is "hard coded" into the kernel.
    It is the policy that governs all page allocations that aren't
    controlled by one of the more specific policy scopes discussed below.
    When the system is "up and running", the system default policy will
    use "local allocation" described below.  However, during boot up, the
    system default policy will be set to interleave allocations across
    all nodes with "sufficient" memory, so as not to overload the initial
    boot node with boot-time allocations.

    Task/Process Policy:  this is an optional, per-task policy.  When
    defined for a specific task, this policy controls all page
    allocations made by or on behalf of the task that aren't controlled
    by a more specific scope.  If a task does not define a task policy,
    then all page allocations that would have been controlled by the task
    policy "fall back" to the System Default Policy.

        The task policy applies to the entire address space of a task.
        Thus, it is inheritable, and indeed is inherited, across both
        fork() [clone() w/o the CLONE_VM flag] and exec*().  This allows
        a parent task to establish the task policy for a child task
        exec()'d from an executable image that has no awareness of memory
        policy.  See the MEMORY POLICY APIS section, below, for an
        overview of the system call that a task may use to set/change its
        task/process policy; a hedged sketch of that call follows this
        subsection.

        In a multi-threaded task, task policies apply only to the thread
        [Linux kernel task] that installs the policy and any threads
        subsequently created by that thread.  Any sibling threads
        existing at the time a new task policy is installed retain their
        current policy.

        A task policy applies only to pages allocated after the policy is
        installed.  Any pages already faulted in by the task when the
        task changes its task policy remain where they were allocated
        based on the policy at the time they were allocated.
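        The following is a minimal sketch of installing a task policy; it
        is not part of the original text.  It assumes the set_mempolicy()
        wrapper declared in libnuma's <numaif.h> (link with -lnuma), and
        it assumes the machine has a node 0 to bind to.

            /* Sketch: bind all future allocations by this thread, and by
             * threads it subsequently creates, to node 0.  Assumed:
             * <numaif.h> from libnuma (numactl); build with -lnuma. */
            #include <numaif.h>         /* set_mempolicy(), MPOL_* */
            #include <stdio.h>
            #include <stdlib.h>

            int main(void)
            {
                    unsigned long nodemask = 1UL << 0; /* node 0: an assumption */

                    /* maxnode is the number of bits in nodemask; a full
                     * word covers any node this mask can express. */
                    if (set_mempolicy(MPOL_BIND, &nodemask,
                                      8 * sizeof(nodemask)) != 0) {
                            perror("set_mempolicy");
                            exit(1);
                    }

                    /* Only pages faulted in from here on obey the new
                     * policy; previously faulted pages stay put. */
                    char *p = malloc(16 * 4096);
                    if (!p)
                            exit(1);
                    for (size_t i = 0; i < 16 * 4096; i += 4096)
                            p[i] = 1;   /* touch to fault each page in */

                    return 0;
            }

        Per the rules above, only this thread and threads it subsequently
        creates are affected; sibling threads keep their current policy.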
    VMA Policy:  A "VMA" or "Virtual Memory Area" refers to a range of a
    task's virtual address space.  A task may define a specific policy
    for a range of its virtual address space.  See the MEMORY POLICY APIS
    section, below, for an overview of the mbind() system call used to
    set a VMA policy.

        A VMA policy will govern the allocation of pages that back this
        region of the address space.  Any regions of the task's address
        space that don't have an explicit VMA policy will fall back to
        the task policy, which may itself fall back to the System Default
        Policy.

        VMA policies have a few complicating details:

            VMA policy applies ONLY to anonymous pages.  These include
            pages allocated for anonymous segments, such as the task
            stack and heap, and any regions of the address space
            mmap()ed with the MAP_ANONYMOUS flag.  If a VMA policy is
            applied to a file mapping, it will be ignored if the mapping
            used the MAP_SHARED flag.  If the file mapping used the
            MAP_PRIVATE flag, the VMA policy will only be applied when
            an anonymous page is allocated on an attempt to write to the
            mapping--i.e., at Copy-On-Write.

            VMA policies are shared between all tasks that share a
            virtual address space--a.k.a. threads--independent of when
            the policy is installed; and they are inherited across
            fork().  However, because VMA policies refer to a specific
            region of a task's address space, and because the address
            space is discarded and recreated on exec*(), VMA policies
            are NOT inheritable across exec().  Thus, only NUMA-aware
            applications may use VMA policies.

            A task may install a new VMA policy on a sub-range of a
            previously mmap()ed region.  When this happens, Linux splits
            the existing virtual memory area into 2 or 3 VMAs, each with
            its own policy.

            By default, a VMA policy applies only to pages allocated
            after the policy is installed.  Any pages already faulted
            into the VMA range remain where they were allocated based on
            the policy at the time they were allocated.  However, since
            2.6.16, Linux supports page migration via the mbind() system
            call, so that page contents can be moved to match a newly
            installed policy.  A hedged mbind() sketch follows this
            subsection.
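        As a hedged sketch rather than anything from the original text,
        the fragment below installs a VMA policy with mbind() on a
        sub-range of an anonymous mapping, which also exercises the VMA
        split described above.  It assumes libnuma's <numaif.h> (-lnuma)
        and a machine with a node 1; MPOL_MF_MOVE requests the
        since-2.6.16 page migration behavior.

            /* Sketch: give a sub-range of an anonymous mapping its own
             * VMA policy, migrating already-faulted pages to match.
             * Assumed: <numaif.h> from libnuma; build with -lnuma. */
            #include <numaif.h>         /* mbind(), MPOL_*, MPOL_MF_* */
            #include <sys/mman.h>
            #include <unistd.h>
            #include <stdio.h>
            #include <stdlib.h>

            int main(void)
            {
                    long page = sysconf(_SC_PAGESIZE);
                    unsigned long nodemask = 1UL << 1; /* node 1: an assumption */

                    char *p = mmap(NULL, 64 * page, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                    if (p == MAP_FAILED) {
                            perror("mmap");
                            exit(1);
                    }

                    /* Policy on the middle of the region: the kernel
                     * splits the existing VMA into three, each carrying
                     * its own policy. */
                    if (mbind(p + 16 * page, 16 * page, MPOL_BIND,
                              &nodemask, 8 * sizeof(nodemask),
                              MPOL_MF_MOVE) != 0)
                            perror("mbind");

                    return 0;
            }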
    Shared Policy:  Conceptually, shared policies apply to "memory
    objects" mapped shared into one or more tasks' distinct address
    spaces.  An application installs a shared policy the same way as a
    VMA policy--using the mbind() system call specifying a range of
    virtual addresses that map the shared object.  However, unlike VMA
    policies, which can be considered to be an attribute of a range of a
    task's address space, shared policies apply directly to the shared
    object.  Thus, all tasks that attach to the object share the policy,
    and all pages allocated for the shared object, by any task, will obey
    the shared policy.  A hedged shared-policy sketch appears at the end
    of this document.

        As of 2.6.22, only shared memory segments, created by shmget() or
        mmap(MAP_ANONYMOUS|MAP_SHARED), support shared policy.  When
        shared policy support was added to Linux, the associated data
        structures were added to hugetlbfs shmem segments.  At the time,
        hugetlbfs did not support allocation at fault time--a.k.a. lazy
        allocation--so hugetlbfs shmem segments were never "hooked up" to
        the shared policy support.  Although hugetlbfs segments now
        support lazy allocation, their support for shared policy has not
        been completed.

        As mentioned above [re: VMA policies], allocations of page cache
        pages for regular files mmap()ed with MAP_SHARED ignore any VMA
        policy installed on the virtual address range backed by the
        shared file mapping.  Rather, shared page cache pages, including
        pages backing private mappings that have not yet been written by
        the task, follow task policy, if any, else System Default Policy.

        The shared policy infrastructure supports different policies on
        subset ranges of the shared object.  However, Linux still splits
        the VMA of the task that installs the policy for each range of
        distinct policy.  Thus, different tasks that attach to a shared
        memory segment can have different VMA configurations mapping that
        one shared object.  This can be seen by examining the
        /proc/<pid>/numa_maps of tasks sharing a shared memory region,
        when one task has installed shared policy on one or more ranges
        of the region.

Components of Memory Policies

    A Linux memory policy is a tuple consisting of a "mode" and an
    optional set of nodes.  The mode determines the behavior of the
    policy, while the optional set of nodes can be viewed as the
    arguments to the behavior; a hedged sketch following the mode
    discussion below shows an application reading this tuple back.

    Internally, memory policies are implemented by a reference counted
    structure, struct mempolicy.  Details of this structure will be
    discussed in context, below, as required to explain the behavior.

    Note:  in some functions AND in the struct mempolicy itself, the mode
    is called "policy".  However, to avoid confusion with the policy
    tuple, this document will continue to use the term "mode".

    Linux memory policy supports the following 4 behavioral modes:

        Default Mode--MPOL_DEFAULT:  The behavior specified by this mode
        is context or scope dependent.  As mentioned in the Policy Scope
        section above, during normal system operation, the System Default
        Policy is hard coded to contain the Default mode.

        In this context, default mode means "local" allocation--that is,
        attempt to allocate the page from the node associated with the
        cpu where the fault occurs.  If the "local" node has no memory,
        or the node's memory is exhausted [no free pages available],
        local allocation will "fallback to"--attempt to allocate pages
        from--"nearby" nodes, in order of increasing "distance".

            Implementation detail -- subject to change:  "Fallback" uses
            a per node list of sibling nodes--called zonelists--built at
            boot time, to effect the fallback.
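    The (mode, optional node set) tuple described under Components of
    Memory Policies, above, can be read back by an application.  The
    sketch below is illustrative only, assuming the get_mempolicy()
    wrapper in libnuma's <numaif.h> (-lnuma); with a NULL address and no
    flags, the call reports the calling thread's task policy.

        /* Sketch: retrieve the calling thread's policy tuple.
         * Assumed: <numaif.h> from libnuma; build with -lnuma. */
        #include <numaif.h>         /* get_mempolicy(), MPOL_* */
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
                int mode;
                unsigned long nodemask = 0;

                if (get_mempolicy(&mode, &nodemask, 8 * sizeof(nodemask),
                                  NULL, 0) != 0) {
                        perror("get_mempolicy");
                        exit(1);
                }

                /* For MPOL_DEFAULT the node set comes back empty: the
                 * mode alone describes the default policy. */
                printf("mode=%d nodemask=0x%lx\n", mode, nodemask);
                return 0;
        }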
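    Returning to the Shared Policy scope described above, this final
    sketch installs a shared policy on one of the two object types noted
    as supporting it as of 2.6.22, an mmap(MAP_ANONYMOUS|MAP_SHARED)
    segment.  Assumptions as before: libnuma's <numaif.h>, -lnuma, and a
    machine with a node 0.

        /* Sketch: a shared policy attaches to the shared object itself,
         * so pages faulted in by the child below obey it even though
         * only the parent called mbind().  Assumed: <numaif.h> from
         * libnuma; build with -lnuma. */
        #include <numaif.h>
        #include <sys/mman.h>
        #include <sys/wait.h>
        #include <unistd.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
                long page = sysconf(_SC_PAGESIZE);
                size_t len = 16 * page;
                unsigned long nodemask = 1UL << 0; /* node 0: an assumption */

                char *seg = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                 MAP_SHARED | MAP_ANONYMOUS, -1, 0);
                if (seg == MAP_FAILED) {
                        perror("mmap");
                        exit(1);
                }

                /* Unlike a plain VMA policy, this policy follows the
                 * shared object, not just this task's mapping of it. */
                if (mbind(seg, len, MPOL_BIND, &nodemask,
                          8 * sizeof(nodemask), 0) != 0)
                        perror("mbind");

                if (fork() == 0) {
                        /* Child faults pages in; they obey the shared
                         * policy installed by the parent above. */
                        for (size_t i = 0; i < len; i += page)
                                seg[i] = 1;
                        _exit(0);
                }
                wait(NULL);
                return 0;
        }

    Because the policy applies directly to the shared object, all tasks
    that attach to the object share it, as the text above describes.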