<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"><head> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /> <style type="text/css"> body { font-family: Verdana, Arial, Helvetica, sans-serif;} a.at-term { font-style: italic; } </style> <title>Message Passing Interface (MPI)</title> <meta name="Generator" content="ATutor"> <meta name="Keywords" content=""></head><body> <p>Chapter 3 introduced the different architectures for parallel systems and discussed how they can be classified by two general categories:</p>
<ul>
<li>Distributed memory systems</li>
<li>Shared memory systems</li>
</ul>
<p>As discussed in Chapter 3, OpenMP is one method for programming on shared memory systems. For distributed memory systems, the Message Passing Interface (MPI) is currently the most widely used method. One reason for its wide acceptance is that the MPI Forum, which established the MPI standard, is a diverse group of representatives from industry, academia, and government. The MPI standard was originally developed from 1992 to 1994 to address programming on distributed memory systems. More than 60 people from 40 different organizations worked on the standard. The result of their efforts, the MPI standard, defines a collection of routines, and their required behavior, that can be used to send data from one processor to another in a distributed memory system. Vendors (Cray, SGI, Sun, etc.) <em>implement</em> the MPI standard by creating a library of routines that match the behavior outlined in the standard. The user can then insert calls to these MPI functions to enable one processor to communicate with another.</p>
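<p>As a concrete illustration (this sketch is ours, not taken from the standard itself), the short C program below uses two of the routines defined by the standard, MPI_Send and MPI_Recv, to pass a single integer from process 0 to process 1. It must be run with at least two processes:</p>
<pre>
#include &lt;stdio.h&gt;
#include &lt;mpi.h&gt;

int main(int argc, char *argv[])
{
    int rank, value;
    MPI_Status status;

    MPI_Init(&amp;argc, &amp;argv);               /* start up MPI        */
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;rank); /* which process am I? */

    if (rank == 0) {
        value = 42;
        /* send one int to process 1, with message tag 0 */
        MPI_Send(&amp;value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* receive one int from process 0 */
        MPI_Recv(&amp;value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &amp;status);
        printf("Process 1 received the value %d\n", value);
    }

    MPI_Finalize();                       /* shut down MPI       */
    return 0;
}
</pre>
<p>Because every vendor's library provides these same calls with the same behavior, a program like this is portable across conforming MPI implementations.</p>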
<h3>MPI Communicator</h3>
<p>MPI v1.2, which is the version discussed in this document, is structured to work with a fixed set of processes. Note that we refer to <em>processes</em> and not physical <em>processors</em>.</p>
<p>An MPI process can be thought of as a thread of execution. Multiple MPI processes can run on a single physical processor; the system is responsible for mapping the MPI processes to the physical processors.</p>
<p>An <em><a href="../glossary.html#MPI+Communicator" target="body" class="at-term">MPI Communicator</a></em> refers to a collection of processes. The default communicator, MPI_COMM_WORLD, which is created when you initialize MPI, is the collection of all of your processes.</p>
<dl>
<dt>Processes</dt>
<dd>Logical threads of execution within MPI.</dd>
<dt>Processors</dt>
<dd>The physical processors allocated to an MPI program.
The number of processors is not necessarily the same as the number of processes.</dd>
</dl>
<p>At runtime, the user specifies how many processes should be allocated to the run, and MPI starts that many copies of the program. The MPI implementation on the particular system then maps these MPI processes onto physical processors.
</p>
<img SRC="mpirun_anim.gif" height=330 width=618>
<p>Each process is an exact copy of the program, with the exception that each has a unique identification number. Logic within the program is used to vary the execution paths and loop ranges of the individual processes. MPI refers to such a collection of processes, each with its unique identifier, as a communicator.</p>
<p>When you run your MPI program, the MPI library will create a default communicator called MPI_COMM_WORLD that is the collection of all of the processes. The image below is a graphical representation of a communicator
that contains six processes. Note that the unique identifier for each process starts at 0 rather than 1. Therefore, in a communicator of size <var>N</var>, the identifiers of the processes will range from 0 to
(<var>N</var>-1).</p>
<img SRC="communicator1.gif" height=330 width=515></h3>
</body></html>