📄 glossary.xml
<?xml version="1.0" encoding="iso-8859-1"?><!--This is an ATutor Glossary terms document--><!--Created from the ATutor Content Package Generator - http://www.atutor.ca--><!DOCTYPE glossary [ <!ELEMENT item (term, definition)> <!ELEMENT term (#PCDATA)> <!ELEMENT definition (#PCDATA)>]><glossary> <item> <term>Amdahl's Law</term> <definition>A simple relation expressing the maximum speedup factor as a function of the number of processors and the fraction of the execution time that can be parallelized. Its most important implication is that the maximum speedup is limited by the portion of the code that cannot be parallelized. Thus, for example, if a third of the execution time must be carried out serially, then the maximum overall speedup that can be achieved is a factor of three, even in the limit of an infinite number of processors (so that the parallelized portion takes zero time). For further details, see <...></definition> </item> <item> <term>Completion criteria</term> <definition>The property that must be met in order to classify an MPI communication as successfully completed.</definition> </item> <item> <term>Critical section</term> <definition></definition> </item> <item> <term>Dirichlet problem</term> <definition>A boundary value problem in which the boundary values are specified.</definition> </item> <item> <term>Laplace's equation</term> <definition>A partial differential equation in two or more spacial dimensions in which the sum of the second partial derivatives of a function with respect to the spacial dimensions is equal to zero. Laplace\'s equation has applications to gravitation, heat transfer, electromagnetic potential, and hydrodynamics.</definition> </item> <item> <term>MPI</term> <definition>A standard library for message passing. The MPI 1.2 specification includes two-sided point-to-point and collective communication primitives. The MPI 2.0 specification is a superset of MPI 1.2, adding support for parallel I/O, one-sided communication, and dynamic process allocation.</definition> </item> <item> <term>MPI Communicator</term> <definition>Collection of processes in an MPI program. Within a communicator, each process is given a unique integer identifier called its rank. Therefore an MPI process is identified by giving its communicator and rank within the communicator.</definition> </item> <item> <term>Message Passing Interface</term> <definition>A standard library for message passing. The MPI 1.2 specification includes two-sided point-to-point and collective communication primitives. The MPI 2.0 specification is a superset of MPI 1.2, adding support for parallel I/O, one-sided communication, and dynamic process allocation.</definition> </item> <item> <term>Neumann problem</term> <definition>A boundary value problem in which the normal gradients of the boundary values are specified.</definition> </item> <item> <term>OpenMP</term> <definition>An industry standard set of compiler directives for parallelizing applications on shared memory systems.</definition> </item> <item> <term>Private variable</term> <definition></definition> </item> <item> <term>atomic update</term> <definition>Insures that a thread that is in the process of accessing, modifying, and restoring a value at a shared memory location will not be interfered with by another thread.</definition> </item> <item> <term>broadcast</term> <definition></definition> </item> <item> <term>ccNUMA</term> <definition>"Cache-Coherent non-Uniform Memory Access". 
A type of system in which the memory is shared but access to different portions of it occurs at different speeds for a given processor. Typically such systems have memory that is physically distributed but "logically" shared, for example the SGI Origin architecture, and references to a memory location attached to a remote processor will be slower than a local reference because of the need to proceed through the interconnection network. On such systems it is often necessary to control the location of data to achieve maximum efficiency for shared-memory parallel applications.</definition> </item> <item> <term>cluster of SMPs</term> <definition>A cluster of systems in which each node is a symmetric multiprocessor. Clusters of SMPs have the property that processes on the same node can communicate using shared memory, while processes on different nodes must usemessage passing over a network.</definition> </item> <item> <term>collective communication</term> <definition></definition> </item> <item> <term>compiler directives</term> <definition>Special comments in code that are used as instructions or hints to an optimizing compiler to modify how the compiler generates machine code. One common use of compiler directives is to mark regions of code that can be parallelized.</definition> </item> <item> <term>distributed memory</term> <definition>A system composed of nodes in which some number of processors share a private memory, and in which nodes are tied together using some form of network. Distributed shared memory systems are typically either non-uniform memory access (NUMA)or clusters of SMPs.</definition> </item> <item> <term>distributed shared memory</term> <definition>A system composed of nodes in which some number of processors share a private memory, and in which nodes are tied together using some form of network. Distributed shared memory systems are typically either non-uniform memory access (NUMA) or clusters of SMPs.</definition> </item> <item> <term>domain decomposition</term> <definition>A technique in which a large problem domain is partitioned into smaller segments, each of which can be treated in parallel. This is a common technique used in parallelized applications using message passing.</definition> </item> <item> <term>hybrid parallel programming</term> <definition>A parallel programming method in which both message passing and shared memory techniques are used.</definition> </item> <item> <term>load balancing</term> <definition>The process of insuring that the computational load is balanced between processors, so that all are doing useful work. If a significant load imbalance is present, the parallel speedup will be limited by the performance of the slowest process.</definition> </item> <item> <term>master thread</term> <definition>The thread with rank 0.</definition> </item> <item> <term>message passing</term> <definition>An interprocess communication method in which data is sent from one process to another in discrete messages.</definition> </item> <item> <term>multilevel parallel programming</term> <definition>A parallel programming method in which both message passing and shared memory techniques are used.</definition> </item> <item> <term>non-uniform memory access (NUMA)</term> <definition>A distributed shared memory scheme in which all memory locations equally accessible, but some locations in memory have higher access latency than others. 
NUMA systems usually have a single system image, as opposed to clusters of SMPs which are composed of multiple distinct systems.</definition> </item> <item> <term>parallel efficiency</term> <definition>The speedup factor for N processors divided by N. It measures the speedup achieved per processor.</definition> </item> <item> <term>parallel region</term> <definition>A block of code that is to be executed in parallel. A team of threads is created at the beginning of a parallel region; all threads other than the master go out of existence at the end of a parallel region.</definition> </item> <item> <term>potential equation</term> <definition>A partial differential equation in two or more spacial dimensions in which the sum of the second partial derivatives of a function with respect to the spacial dimensions is equal to zero. Laplace\'s equation has applications to gravitation, heat transfer, electromagnetic potential, and hydrodynamics.</definition> </item> <item> <term>processes</term> <definition></definition> </item> <item> <term>processors</term> <definition></definition> </item> <item> <term>rank</term> <definition>A thread identifier. Threads are numbered from zero (the master thread) to n-1, where n is the number of threads. A thread can determine its rank by calling the routine omp_get_thread_num.</definition> </item> <item> <term>reduction operation</term> <definition>An operation that involves combining ("reducing") results produced by each thread, for example computing global sums or maxima/minima.</definition> </item> <item> <term>shared memory</term> <definition>A system in which several processors share a single unified memory. Interprocess communications on shared memory systems is typically handled by sharing segments of memory between processes.</definition> </item> <item> <term>shared memory system</term> <definition></definition> </item> <item> <term>shared variable</term> <definition>A variable that is shared among the threads in a team; that is, every thread can read from and write to the same memory location.</definition> </item> <item> <term>speedup factor</term> <definition>The ratio of total execution time on a single processor to that on several processors.</definition> </item> <item> <term>structured block</term> <definition>Any block of code that has only one entry point and one exit point, i.e. no branches into or out of it. The STOP statement in Fortran and exit() are allowed.</definition> </item> <item> <term>team</term> <definition>A group of threads created when a parallel region is entered. Members of the team normally work on separate pieces of one larger task, allowing the task to be completed in less time.</definition> </item></glossary>
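
The Amdahl's Law entry can be made concrete with the standard formula it describes. A short worked form, in LaTeX, using the glossary's own example of a one-third serial fraction:

\[
  S(N) = \frac{1}{(1 - p) + p/N}
\]
where $N$ is the number of processors and $p$ is the fraction of the execution time that can be parallelized. With a one-third serial fraction, $p = 2/3$, so
\[
  \lim_{N \to \infty} S(N) = \frac{1}{1 - p} = 3,
\]
matching the factor-of-three limit quoted in the definition. The parallel efficiency entry is then $E(N) = S(N)/N$.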
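
Several of the OpenMP entries (parallel region, team, rank, master thread, shared variable, atomic update) fit in one minimal C sketch. This is illustrative only, assuming an OpenMP-aware compiler such as gcc with -fopenmp:

#include <omp.h>
#include <stdio.h>

int main(void)
{
    int total = 0;                         /* shared variable */

    #pragma omp parallel shared(total)     /* parallel region: a team of
                                              threads is created here */
    {
        int rank = omp_get_thread_num();   /* private: one copy per thread */

        if (rank == 0)                     /* the master thread has rank 0 */
            printf("team size: %d\n", omp_get_num_threads());

        #pragma omp atomic                 /* atomic update of shared data */
        total += rank;
    }                                      /* team disbands here; only the
                                              master thread continues */
    printf("sum of ranks: %d\n", total);
    return 0;
}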
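
The reduction operation entry corresponds to OpenMP's reduction clause; a minimal sketch of a global sum, again assuming an OpenMP-aware compiler:

#include <stdio.h>

int main(void)
{
    double sum = 0.0;

    /* each thread accumulates a partial sum; the reduction clause
       combines ("reduces") the per-thread results into one global sum */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 1; i <= 1000; i++)
        sum += 1.0 / (double)i;            /* partial harmonic sum */

    printf("H(1000) ~= %f\n", sum);
    return 0;
}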
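
The MPI Communicator, rank, broadcast, and collective communication entries come together in an equally small MPI sketch; the value broadcast here is arbitrary. Compile with an MPI wrapper such as mpicc and launch with mpirun:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* unique id within communicator */
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0)
        value = 42;                        /* data known only to rank 0 */

    /* collective communication: one-to-all broadcast from rank 0 */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("process %d of %d received %d\n", rank, size, value);
    MPI_Finalize();
    return 0;
}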
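
Finally, the domain decomposition, Laplace's equation, and Dirichlet problem entries describe the classic parallel Jacobi solver. Below is a serial sketch of the interior update only; the strip decomposition and ghost-row exchange are summarized in comments rather than implemented, and the grid size is illustrative:

#include <stdio.h>

#define NX 8
#define NY 8

/* One Jacobi sweep: each interior value becomes the average of its four
 * neighbours. In a message-passing code, the grid is decomposed into
 * strips, each process applies this sweep to its own strip, and boundary
 * ("ghost") rows are exchanged with neighbouring processes between sweeps. */
static void jacobi_sweep(double u[NX][NY], double unew[NX][NY])
{
    for (int i = 1; i < NX - 1; i++)
        for (int j = 1; j < NY - 1; j++)
            unew[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] +
                                 u[i][j-1] + u[i][j+1]);
}

int main(void)
{
    double u[NX][NY] = {{0}}, unew[NX][NY] = {{0}};

    for (int j = 0; j < NY; j++)
        u[0][j] = 1.0;                   /* Dirichlet boundary value on one edge */

    jacobi_sweep(u, unew);
    printf("u_new[1][1] = %f\n", unew[1][1]);  /* 0.25 after one sweep */
    return 0;
}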