<!DOCTYPE HTML PUBLIC "-//W3O//DTD W3 HTML 2.0//EN">
<HTML>
<!-- Converted with LaTeX2HTML 95.1 (Fri Jan 20 1995) by Nikos Drakos (nikos@cbl.leeds.ac.uk), CBLU, University of Leeds -->
<HEAD>
<TITLE>8.2 MPI Basics</TITLE>
</HEAD>
<BODY>
<meta name="description" content="8.2 MPI Basics">
<meta name="keywords" content="book">
<meta name="resource-type" content="document">
<meta name="distribution" content="global">
<P>
 <BR> <HR><a href="msgs0.htm#2"><img ALIGN=MIDDLE src="asm_color_tiny.gif" alt="[DBPP]"></a>    <A NAME=tex2html3117 HREF="node95.html"><IMG ALIGN=MIDDLE ALT="previous" SRC="previous_motif.gif"></A> <A NAME=tex2html3125 HREF="node97.html"><IMG ALIGN=MIDDLE ALT="next" SRC="next_motif.gif"></A> <A NAME=tex2html3123 HREF="node94.html"><IMG ALIGN=MIDDLE ALT="up" SRC="up_motif.gif"></A> <A NAME=tex2html3127 HREF="node1.html"><IMG ALIGN=MIDDLE ALT="contents" SRC="contents_motif.gif"></A> <A NAME=tex2html3128 HREF="node133.html"><IMG ALIGN=MIDDLE ALT="index" SRC="index_motif.gif"></A> <a href="msgs0.htm#3"><img ALIGN=MIDDLE src="search_motif.gif" alt="[Search]"></a>   <BR>
<B> Next:</B> <A NAME=tex2html3126 HREF="node97.html">8.3 Global Operations</A>
<B>Up:</B> <A NAME=tex2html3124 HREF="node94.html">8 Message Passing Interface</A>
<B> Previous:</B> <A NAME=tex2html3118 HREF="node95.html">8.1 The MPI Programming Model</A>
<BR><HR><P>
<H1><A NAME=SECTION03520000000000000000>8.2 MPI Basics</A></H1>
<P>
<A NAME=secmpibasics>&#160;</A>
<P>
Although MPI is a complex and multifaceted system, we can solve a wide
range of problems using just six of its functions!  We introduce MPI
<A NAME=12208>&#160;</A>
by describing these six functions, which initiate and
terminate a computation, identify processes, and send and
receive messages:
<P>
<PRE><TT>
MPI_INIT       : Initiate an MPI computation.
MPI_FINALIZE   : Terminate a computation.
MPI_COMM_SIZE  : Determine number of processes.
MPI_COMM_RANK  : Determine my process identifier.
MPI_SEND       : Send a message.
MPI_RECV       : Receive a message.
</TT></PRE>
<P>
Function parameters are detailed in Figure <A HREF="node96.html#figmpibasics">8.1</A>.  In
this and subsequent figures, the labels <tt> IN</tt>, <tt> OUT</tt>, and <tt>
INOUT</tt> indicate whether the function uses but does not modify the
parameter (<tt> IN</tt>), does not use but may update the parameter (<tt>
OUT</tt>), or both uses and updates the parameter (<tt> INOUT</tt>).
<P>
<P><A NAME=13842>&#160;</A><IMG BORDER=0 ALIGN=BOTTOM ALT="" SRC="img1005.gif">
<BR><STRONG>Figure 8.1:</STRONG>  Basic MPI.  These six functions suffice to write a
wide range of parallel programs.  The arguments are characterized as
having mode <tt> IN</tt> or <tt> OUT</tt> and as having type integer, choice,
handle, or status.  These terms are explained in the text.
<P>
<A NAME=figmpibasics>&#160;</A><BR>
<P>
<P>
All but the first two calls take a communicator handle as
an argument.  A communicator identifies the process group and context
with respect to which the operation is to be performed.  As explained
later in this chapter, communicators provide a mechanism for
identifying process subsets during development of modular programs and
for ensuring that messages intended for different purposes are not
confused.  For now, it suffices to provide the default value <tt>
MPI_COMM_WORLD</tt>, which identifies <em> all
 </em> processes involved
in a computation.  Other arguments have type integer, datatype handle,
or status.  These datatypes are explained in the following.
<P>
<A NAME=12297>&#160;</A>
The functions <tt> MPI_INIT</tt> and <tt> MPI_FINALIZE</tt> are used to
<A NAME=12300>&#160;</A>
initiate and shut down an MPI computation, respectively.  <tt>
MPI_INIT</tt> must be called before any other MPI function and must be
called exactly once per process.  No further MPI functions can be
called after <tt> MPI_FINALIZE</tt>.
<P>
<A NAME=12303>&#160;</A>
The functions <tt> MPI_COMM_SIZE</tt> and <tt> MPI_COMM_RANK</tt>
<A NAME=12306>&#160;</A>
determine the number of processes in the current computation and the
integer identifier assigned to the current process, respectively.
(The processes in a process group are identified with unique,
contiguous integers numbered from 0.)  For example, consider the
following program.  This is not written in any particular language: we
shall see in the next section how to call MPI routines from Fortran and C.
<P>

<PRE><TT>
program main
begin
   MPI_INIT()                               Initiate computation
   MPI_COMM_SIZE(MPI_COMM_WORLD, count)     Find # of processes
   MPI_COMM_RANK(MPI_COMM_WORLD, myid)      Find my id
   print("I am", myid, "of", count)         Print message
   MPI_FINALIZE()                           Shut down
end
</TT></PRE>
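<P>
As a preview of the C binding presented in Section 8.2.1, the pseudocode above can be sketched in C roughly as follows.  This is an illustrative sketch, not part of the original program text; it must be compiled with an MPI compiler wrapper and launched with an MPI runtime.

```c
/* C sketch of the pseudocode program above (C binding, Section 8.2.1). */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int count, myid;

    MPI_Init(&argc, &argv);                    /* Initiate computation */
    MPI_Comm_size(MPI_COMM_WORLD, &count);     /* Find # of processes  */
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);      /* Find my id           */
    printf("I am %d of %d\n", myid, count);    /* Print message        */
    MPI_Finalize();                            /* Shut down            */
    return 0;
}
```

Note that in the C binding the results are returned through pointer arguments rather than by assignment, as the next section discusses.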

<P>
The MPI standard does not specify how a parallel computation is
<A NAME=12331>&#160;</A>
started.  However, a typical mechanism could be a command line
argument indicating the number of processes that are to be created:
for example, <tt> myprog -n 4</tt>, where <tt> myprog</tt> is the name of
the executable.  Additional arguments might be used to specify
processor names in a networked environment or executable names in an
MPMD computation.
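<P>
For instance, with the commonly provided <tt>mpiexec</tt> launcher (launcher names and flags are implementation specific, not mandated by the MPI standard), such an invocation might look like:

```shell
# Launch four processes of the executable myprog.
# The -n flag is common to most implementations but not standardized.
mpiexec -n 4 ./myprog

# Some implementations accept a host list for networked execution
# (flag spelling varies by implementation):
mpiexec -n 4 -host nodeA,nodeB ./myprog
```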
<P>
If the above program is executed by four processes, we will obtain
something like the following output.  The order in which the output
appears is not defined; however, we assume here that the output from
individual print statements is not interleaved.
<P>

<PRE>   I am 1 of 4
   I am 3 of 4
   I am 0 of 4
   I am 2 of 4
</PRE>

<P>
Finally, we consider the functions <tt> MPI_SEND</tt> and <tt>
MPI_RECV</tt>, which are used to send and receive messages,
<A NAME=12337>&#160;</A>
respectively.  A call to <tt> MPI_SEND</tt> has the general form
<A NAME=12339>&#160;</A>
<tt> MPI_SEND(buf, count, datatype, dest, tag, comm)</tt>
<P>
and specifies that a message containing <tt> count</tt> elements of the
specified <tt> datatype</tt> starting at address <tt> buf</tt> is to be sent
to the process with identifier <tt> dest</tt>.  As will be explained in
greater detail subsequently, this message is associated with an 
envelope comprising the specified <tt> tag</tt>, the source
process's identifier, and the specified communicator (<tt> comm</tt>).
<P>
A
call to <tt> MPI_RECV</tt> has the general form
<P>
<tt> MPI_RECV(buf, count, datatype, source, tag, comm, status)</tt>
<P>
and attempts to receive a message that has an envelope corresponding to
the specified <tt> tag</tt>, <tt> source</tt>, and <tt> comm</tt>, blocking until
such a message is available.  When the message arrives, elements of
the specified <tt> datatype</tt> are placed into the buffer at address
<tt> buf</tt>.  The caller must ensure that this buffer is large enough to
hold at least <tt> count</tt> elements.  The <tt> status</tt> variable can be used
subsequently to inquire about the size, tag, and source of the
received message (Section <A HREF="node98.html#secmpinquire">8.4</A>).
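<P>
To illustrate how the two calls match up, the following C fragment (a hypothetical example using the C binding of Section 8.2.1; the tag value and array size are arbitrary choices) sends ten integers from process 0 to process 1:

```c
/* Illustrative matched MPI_Send/MPI_Recv pair (C binding).
   Assumes the program runs with at least two processes. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int myid, i, data[10];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    if (myid == 0) {
        for (i = 0; i < 10; i++) data[i] = i;
        /* Envelope: tag 99, source 0 (implicit), communicator MPI_COMM_WORLD */
        MPI_Send(data, 10, MPI_INT, 1, 99, MPI_COMM_WORLD);
    } else if (myid == 1) {
        /* Blocks until a message with matching envelope arrives;
           the buffer holds at least 10 integers, as required. */
        MPI_Recv(data, 10, MPI_INT, 0, 99, MPI_COMM_WORLD, &status);
        printf("received element 9 = %d\n", data[9]);
    }
    MPI_Finalize();
    return 0;
}
```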
<P>
<P><A NAME=progmpi1>&#160;</A><IMG BORDER=0 ALIGN=BOTTOM ALT="" SRC="img1006.gif"><P>
<P>
Program <A HREF="node96.html#progmpi1">8.1</A> illustrates the use of the six basic calls.
This is an implementation of the bridge construction algorithm
developed in Example <A HREF="node9.html#exbridge">1.1</A>.  The program is designed to be
<A NAME=12398>&#160;</A>
executed by two processes.  The first process calls a procedure <tt>
<A NAME=12399>&#160;</A>
foundry</tt> and the second calls <tt> bridge</tt>, effectively creating two
different tasks.  The first process makes a series of <tt> MPI_SEND</tt>
calls to communicate 100 integer messages to the second process,
terminating the sequence by sending a negative number.  The second
process receives these messages using <tt> MPI_RECV</tt>.
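<P>
The structure just described can be sketched in C as follows.  The message contents and procedure bodies here are assumptions based on the description above, not the book's actual Program 8.1:

```c
/* C sketch of the bridge construction program: process 0 (foundry)
   sends 100 integer messages to process 1 (bridge), then a negative
   sentinel value to terminate the stream. */
#include <stdio.h>
#include <mpi.h>

static void foundry(void)            /* producer: runs on process 0 */
{
    int i, msg;
    for (i = 0; i < 100; i++) {
        msg = i;
        MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    }
    msg = -1;                        /* negative number ends the sequence */
    MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
}

static void bridge(void)             /* consumer: runs on process 1 */
{
    int msg;
    MPI_Status status;
    for (;;) {
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        if (msg < 0) break;          /* sentinel received: stop */
        printf("using girder %d\n", msg);
    }
}

int main(int argc, char *argv[])
{
    int myid;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    if (myid == 0) foundry(); else bridge();
    MPI_Finalize();
    return 0;
}
```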
<P>
<H2><A NAME=SECTION03521000000000000000>8.2.1 Language Bindings</A></H2>
<P>
Much of the discussion in this chapter will be language independent;
that is, the functions described can be used in C, Fortran, or any
other language for which an MPI library has been defined.  Only when
we present example programs will a particular language be used.  In
that case, programs will be presented using the syntax of either the
Fortran or C language binding.  Different language bindings
have slightly different syntaxes that reflect a language's peculiarities.
Sources of syntactic difference include the function names themselves,
the mechanism used for return codes, the representation of the
<A NAME=12404>&#160;</A>
