<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"><head> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /> <style type="text/css"> body { font-family: Verdana, Arial, Helvetica, sans-serif;} a.at-term { font-style: italic; } </style> <title>Adding MPI to Fortran and C Programs</title> <meta name="Generator" content="ATutor"> <meta name="Keywords" content=""></head><body> <p>The MPI Standard, which defines a set of functions and their behavior, is implemented on computer systems as a library of routines and functions that is linked to an existing C or Fortran program. Standard C and
Fortran statements control program flow, and MPI routines are called for initialization and synchronization.</p>
<p>The first step of writing an MPI program is to understand how a standard C or Fortran program is modified to enable calls to the MPI library.</p>
<h3>Header Files</h3>
<p>MPI maintains a large set of constants and internal variables called <em>handles</em>,
which are used as arguments in various MPI routines. These are initialized and defined in the MPI header file. There is a header file for both C and Fortran:</p>
<p class="codelang">
C:</p>
<pre><code>#include &lt;mpi.h&gt;</code></pre>
<p class="codelang">
Fortran:</p>
<pre><code>include 'mpif.h'</code></pre>
<p>The appropriate header must be included in any routine that uses an MPI constant or calls an MPI routine.</p>
<h3>MPI Function Format</h3>
<p>In OpenMP we saw that much of the implementation was by means of directives: comments placed in the code that an OpenMP-enabled compiler reads and replaces with the necessary parallel code. MPI, in contrast, is implemented as a library of functions that we call from within our program. In C, MPI routines are called as functions, and in Fortran they are called as subroutines:</p>
<p class="codelang">
C:</p>
<pre><code>error = MPI_Xxxxx(parameter,...);
MPI_Xxxxx(parameter,...);</code></pre>
<p class="codelang">
Fortran:</p>
<pre><code>CALL MPI_XXXX(parameter, ... , IERROR)</code></pre>
<p>As we examine the syntax of the calls, we see that Fortran always has one extra argument: the error code that the MPI routine returns. Since the routines are called as functions in C, the error code is simply the function's return value. In Fortran, however, the error variable must be added to the end of the argument list, because subroutines cannot return a value.</p>
<h3>Initializing MPI</h3>
<p>When an MPI program is started, a fixed, finite number of copies of the executable are started on multiple physical processors. Each copy runs under the distributed-memory programming model, in which each process has its own independent memory space. To allow the processes to communicate data
with MPI function calls, you must initialize MPI.</p>
<p>MPI_Init performs the initialization tasks, and it must be the first MPI routine called. It can be called only once.</p>
<p class="codelang">
C:</p>
<pre><code>int MPI_Init(int *argc, char ***argv)</code></pre>
<p class="codelang">
Fortran:</p>
<pre><code>INTEGER IERROR
CALL MPI_INIT(IERROR)</code></pre>
<h3>Communicator Size</h3>
<p>As we saw when we introduced the concept of an MPI communicator, when MPI is initialized a default communicator, MPI_COMM_WORLD, is created that contains all of the processes requested on the command line. The size of MPI_COMM_WORLD is fixed throughout the run of the
program.</p>
<p>An important value that a program needs is the number of processes contained within a communicator. The MPI_COMM_SIZE function takes a communicator as an argument and returns the number of processes it contains.</p>
<p class="codelang">
C:</p>
<pre><code>int MPI_Comm_size(MPI_Comm comm, int *size)</code></pre>
<p class="codelang">
Fortran:</p>
<pre><code>INTEGER COMM, SIZE, IERROR
CALL MPI_COMM_SIZE(COMM, SIZE, IERROR)</code></pre>
<ul>
<li>
<em>COMM</em>: which communicator the function
will operate on</li>
<li>
<em>SIZE</em>: number of processes in this communicator</li>
</ul>
<h3>Process Rank</h3>
<p>Since each MPI process is running an exact copy of the program, the key difference between them is a unique integer assigned to each process which defines its identity within the communicator. Combined with the size of the communicator, each process knows who it is and how large the group it belongs to is. These two pieces of information are critical when designing the flow-control logic of MPI programs.</p>
<p>The process ID numbers start with zero and go to (<var>N</var>-1), where <var>N</var> is the number of processes in the named communicator. For the default
communicator MPI_COMM_WORLD, this is the total number of processes requested. The IDs are also used to identify the source and destination of messages.</p>
<p class="codelang">
C:</p>
<pre><code>int MPI_Comm_rank(MPI_Comm comm, int *rank)</code></pre>
<p class="codelang">
Fortran:</p>
<pre><code>INTEGER COMM, RANK, IERROR
CALL MPI_COMM_RANK(COMM, RANK, IERROR)</code></pre>
<ul>
<li>
<em>COMM</em>: which communicator the function
will operate on</li>
<li>
<em>RANK</em>: unique integer which defines the
identity of the process in this communicator</li>
</ul></body></html>