handles used to access specialized MPI data structures such as
<A NAME=12405>&#160;</A>
communicators, and the implementation of the <tt> status</tt> datatype
<A NAME=13813>&#160;</A>
returned by <tt> MPI_RECV</tt>.  The use of handles hides the internal
representation of MPI data structures.
<P>
<H4><A NAME=SECTION03521010000000000000> C Language Binding.</A></H4>
<P>
In the C language binding, function names are as in the MPI
<A NAME=12410>&#160;</A>
definition but with only the <tt> MPI</tt> prefix and the first letter of
the function name in upper case.  Status values are returned as integer
return codes.  The return code for successful completion is <tt>
MPI_SUCCESS</tt>; a set of error codes is also defined.  Compile-time
constants are all in upper case and are defined in the file <tt>
mpi.h</tt>, which must be included in any program that makes MPI calls.
Handles are represented by special defined types, defined in <tt>
mpi.h</tt>.  These will be introduced as needed in the following discussion.
Function parameters with type <tt> IN</tt> are passed by value, while
parameters with type <tt> OUT</tt> and <tt> INOUT</tt> are passed by reference
(that is, as pointers).  A <tt> status</tt> variable has type <tt>
MPI_Status</tt> and is a structure with fields <tt> status.MPI_SOURCE</tt>
and <tt> status.MPI_TAG</tt> containing source and tag information.
Finally, an MPI datatype is defined for each C datatype: <tt>
MPI_CHAR</tt>, <tt> MPI_INT</tt>, <tt> MPI_LONG</tt>, <tt>
MPI_UNSIGNED_CHAR</tt>, <tt> MPI_UNSIGNED</tt>, <tt> MPI_UNSIGNED_LONG</tt>,
<tt> MPI_FLOAT</tt>, <tt> MPI_DOUBLE</tt>, <tt> MPI_LONG_DOUBLE</tt>, etc.
<P>
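The conventions just described can be seen together in a short sketch. This is an illustrative fragment, not a program from the text: it assumes an MPI installation providing <tt> mpi.h</tt>, and the buffer size and message length are arbitrary.

```c
/* Sketch of the C binding conventions described above: mpi.h is
 * included, the integer return code is compared with MPI_SUCCESS,
 * and source/tag information is read from the MPI_Status structure.
 * The buffer size (100) is illustrative only. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int buf[100], rc;
    MPI_Status status;      /* struct with MPI_SOURCE and MPI_TAG fields */

    MPI_Init(&argc, &argv);

    /* IN parameters (count, datatype, source, tag, communicator) are
     * passed by value; OUT parameters (buf, status) by reference. */
    rc = MPI_Recv(buf, 100, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                  MPI_COMM_WORLD, &status);
    if (rc != MPI_SUCCESS)
        fprintf(stderr, "receive failed\n");
    else
        printf("message from %d with tag %d\n",
               status.MPI_SOURCE, status.MPI_TAG);

    MPI_Finalize();
    return 0;
}
```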
<H4><A NAME=SECTION03521020000000000000> Fortran Language Binding.</A></H4>
<P>
In the Fortran language binding, function names are in upper case.
Function return codes are represented by an additional integer
<A NAME=12432>&#160;</A>
argument.  The return code for successful completion is <tt>
MPI_SUCCESS</tt>; a set of error codes is also defined.  Compile-time
constants are all in upper case and are defined in the file <tt>
mpif.h</tt>, which must be included in any program that makes MPI
calls.  All handles have type <tt> INTEGER</tt>.  A <tt> status</tt> variable
is an array of integers of size <tt> MPI_STATUS_SIZE</tt>, with the
constants <tt> MPI_SOURCE</tt> and <tt> MPI_TAG</tt> indexing the source and
tag fields, respectively.  Finally, an MPI datatype is defined for each Fortran
datatype: <tt> MPI_INTEGER</tt>, <tt> MPI_REAL</tt>, <tt>
MPI_DOUBLE_PRECISION</tt>, <tt> MPI_COMPLEX</tt>, <tt> MPI_LOGICAL</tt>, <tt>
MPI_CHARACTER</tt>, etc.
<P>
<BR><HR>
<b> Example <IMG BORDER=0 ALIGN=BOTTOM ALT="" SRC="img1008.gif" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/img1008.gif">.<IMG BORDER=0 ALIGN=BOTTOM ALT="" SRC="img1007.gif" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/img1007.gif">    Pairwise Interactions</b>:<A NAME=expio>&#160;</A>
<P>
The pairwise interactions algorithm of
Section <A HREF="node10.html#exinteractions" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/node10.html#exinteractions">1.4.2</A> illustrates the two language bindings.
<A NAME=12449>&#160;</A>
Recall that in this algorithm, <em> T</em>
 tasks (<em> T</em>
 an odd number)
<A NAME=12452>&#160;</A>
are connected in a ring.  Each task is responsible for computing
interactions involving <em> N</em>
 data.  Data are circulated around the
ring in <em> T-1</em>
 phases, with interactions computed at each phase.
Programs <A HREF="node96.html#progmp1" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/node96.html#progmp1">8.2</A> and <A HREF="node96.html#progmp2" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/node96.html#progmp2">8.3</A> are C and Fortran versions
of an MPI implementation, respectively.
<P>
The number of processes created is specified when the program is
invoked.  Each process is responsible for 100 objects, and each object
is represented by three floating-point values, so the various work
arrays have size 300.  As each process executes the same program, the
first few lines are used to determine the total number of processes
involved in the computation (<tt> np</tt>), the process's identifier (<tt>
myid</tt>), and the identity of the process's neighbors in the ring (<tt>
lnbr</tt>, <tt> rnbr</tt>).  The computation then proceeds as described in
Section <A HREF="node10.html#exinteractions" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/node10.html#exinteractions">1.4.2</A> but with messages sent to numbered
processes rather than on channels.
<P>
<BR><HR>
<P>
<P><A NAME=progmp1>&#160;</A><IMG BORDER=0 ALIGN=BOTTOM ALT="" SRC="img1009.gif" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/img1009.gif"><P>
<P>
<A NAME=12490>&#160;</A>
<P>
<P><A NAME=progmp2>&#160;</A><IMG BORDER=0 ALIGN=BOTTOM ALT="" SRC="img1010.gif" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/img1010.gif"><P>
<P>

<H2><A NAME=SECTION03522000000000000000>8.2.2 Determinism</A></H2>
<P>
<A NAME=sectags>&#160;</A>
<P>
Before proceeding to more sophisticated aspects of MPI, we consider
the important topic of determinism.  Message-passing programming
<A NAME=12523>&#160;</A>
models are by default nondeterministic: the arrival order of messages
<A NAME=12524>&#160;</A>
sent from two processes, A and B, to a third process, C, is not
defined.  (However, MPI <em> does
 </em> guarantee that two messages
sent from one process, A, to another process, B, will arrive in the
order sent.)  It is the programmer's responsibility to ensure that a
computation is deterministic when (as is usually the case) this is
required.
<P>
<A NAME=12526>&#160;</A>
In the task/channel programming model, determinism is guaranteed by
defining separate channels for different communications and by
ensuring that each channel has a single writer and a single reader.
Hence, a process C can distinguish messages received from A or B as
they arrive on separate channels.  MPI does not support channels
directly, but it does provide similar mechanisms.  In particular, it
allows a receive operation to specify a source, tag, and/or context.
(Recall that these data constitute a message's envelope.)  We consider
the first two of these mechanisms in this section.
<P>
The <em> source
 </em> specifier in the <tt> MPI_RECV</tt> function allows
the programmer to specify that a message is to be received either from
a single named process (specified by its integer process identifier)
or from any process (specified by the special value <tt>
MPI_ANY_SOURCE</tt>).  The latter option allows a process to receive
data from any source; this is sometimes useful. However, the former
is preferable because it eliminates errors due to messages arriving
in time-dependent order.
<P>
Message <em> tags
 </em> provide a further mechanism for distinguishing
between different messages.  A sending process must associate an
integer tag with a message. This is achieved via the tag field in the
<A NAME=12531>&#160;</A>
<tt> MPI_SEND</tt> call.  (This tag has always been set to 0 in the
examples presented so far.)  A receiving process can then specify that
it wishes to receive messages either with a specified tag or with any
tag (<tt> MPI_ANY_TAG</tt>).  Again, the former option is preferable
because it reduces the possibility of error.
<P>
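The two receive variants just described differ only in the source and tag arguments passed to <tt> MPI_RECV</tt>. The fragment below is an illustrative sketch (it assumes an MPI installation; the buffer size of 300 and tag value 0 follow the examples presented so far):

```c
/* Contrast between a wildcard receive and a fully specified receive.
 * Assumes lnbr holds the integer identifier of the left neighbor. */
#include <mpi.h>

void receive_examples(int lnbr)
{
    int buf[300];
    MPI_Status status;

    /* Nondeterministic: matches a message from any process with
     * any tag; which message is received may depend on timing. */
    MPI_Recv(buf, 300, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
             MPI_COMM_WORLD, &status);

    /* Deterministic: matches only a tag-0 message from process lnbr. */
    MPI_Recv(buf, 300, MPI_INT, lnbr, 0, MPI_COMM_WORLD, &status);
}
```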
<BR><HR>
<b> Example <IMG BORDER=0 ALIGN=BOTTOM ALT="" SRC="img1013.gif" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/img1013.gif">.<IMG BORDER=0 ALIGN=BOTTOM ALT="" SRC="img1011.gif" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/img1011.gif">    Nondeterministic Program</b>:<A NAME=exndx>&#160;</A>
<P>
To illustrate the importance of source specifiers and tags, we examine
a program that fails to use them and that, consequently, suffers from
<A NAME=12536>&#160;</A>
nondeterminism.  Program <A HREF="node96.html#progmpnondet" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/node96.html#progmpnondet">8.4</A> is part of an MPI
<A NAME=12538>&#160;</A>
implementation of the symmetric pairwise interaction algorithm of
Section <A HREF="node10.html#exinteractions" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/node10.html#exinteractions">1.4.2</A>.  Recall that in this algorithm,
messages are communicated only halfway around the ring (in
<em> T/2-1</em>
 steps, if the number of tasks <em> T</em>
 is odd), with
interactions accumulated both in processes and in messages.  As in
Example <A HREF="node96.html#expio" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/node96.html#expio">8.1</A>, we assume 100 objects, so the arrays to be
communicated in this phase have size 100&#215;3&#215;2=600.  In a final step,
each message (with size 100&#215;3=300) is returned to its originating
process.  Hence, each process sends and receives <em> T/2-1</em>
 <em>
data
 </em> messages and one <em> result
 </em> message.
<P>
<P><A NAME=progmpnondet>&#160;</A><IMG BORDER=0 ALIGN=BOTTOM ALT="" SRC="img1012.gif" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/img1012.gif"><P>
<P>
<A NAME=12573>&#160;</A>
<P>
Program <A HREF="node96.html#progmpnondet" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/node96.html#progmpnondet">8.4</A> specifies neither sources nor tags in its
<tt> MPI_RECV</tt> calls.  Consequently, a result message arriving before
the final data message may be received as if it were a data message,
thereby resulting in an incorrect computation.  Determinism can be
achieved by specifying either a source processor or a tag in the
receive calls.  It is good practice to use <em> both
 </em> mechanisms.
In effect, each ``channel'' in the original design is then represented
by a unique (source, destination, tag) triple.
<P>
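A sketch of this fix follows. It is not Program 8.4 itself: the tag names <tt> DATA_TAG</tt> and <tt> RESULT_TAG</tt> are invented for illustration, and the message sizes (600 and 300) follow the discussion above.

```c
/* Hedged sketch of the determinism fix for the symmetric pairwise
 * interaction exchange.  Distinct tags (DATA_TAG and RESULT_TAG are
 * hypothetical names) plus a named source ensure a late result
 * message cannot be mistaken for a data message. */
#include <mpi.h>

#define DATA_TAG   1
#define RESULT_TAG 2

void exchange(float *work, float *result, int lnbr, int rnbr, int steps)
{
    MPI_Status status;

    for (int i = 0; i < steps; i++) {
        MPI_Send(work, 600, MPI_FLOAT, rnbr, DATA_TAG, MPI_COMM_WORLD);
        /* Only a data message from the left neighbor matches here;
         * the (source, destination, tag) triple acts as a channel. */
        MPI_Recv(work, 600, MPI_FLOAT, lnbr, DATA_TAG,
                 MPI_COMM_WORLD, &status);
    }
    /* The returning result message is distinguished by its own tag. */
    MPI_Recv(result, 300, MPI_FLOAT, MPI_ANY_SOURCE, RESULT_TAG,
             MPI_COMM_WORLD, &status);
}
```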
<BR><HR>
<P>

<BR> <HR><a href="msgs0.htm#2" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/tppmsgs/msgs0.htm#2"><img ALIGN=MIDDLE src="asm_color_tiny.gif" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/asm_color_tiny.gif" alt="[DBPP]"></a>    <A NAME=tex2html3117 HREF="node95.html" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/node95.html"><IMG ALIGN=MIDDLE ALT="previous" SRC="previous_motif.gif" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/previous_motif.gif"></A> <A NAME=tex2html3125 HREF="node97.html" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/node97.html"><IMG ALIGN=MIDDLE ALT="next" SRC="next_motif.gif" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/next_motif.gif"></A> <A NAME=tex2html3123 HREF="node94.html" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/node94.html"><IMG ALIGN=MIDDLE ALT="up" SRC="up_motif.gif" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/up_motif.gif"></A> <A NAME=tex2html3127 HREF="node1.html" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/node1.html"><IMG ALIGN=MIDDLE ALT="contents" SRC="contents_motif.gif" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/contents_motif.gif"></A> <A NAME=tex2html3128 HREF="node133.html" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/node133.html"><IMG ALIGN=MIDDLE ALT="index" SRC="index_motif.gif" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/index_motif.gif"></A> <a href="msgs0.htm#3" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/tppmsgs/msgs0.htm#3"><img ALIGN=MIDDLE src="search_motif.gif" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/search_motif.gif" alt="[Search]"></a>   <BR>
<B> Next:</B> <A NAME=tex2html3126 HREF="node97.html" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/node97.html">8.3 Global Operations</A>
<B>Up:</B> <A NAME=tex2html3124 HREF="node94.html" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/node94.html">8 Message Passing Interface</A>
<B> Previous:</B> <A NAME=tex2html3118 HREF="node95.html" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/node95.html">8.1 The MPI Programming Model</A>
<BR><HR><P>
<P><ADDRESS>
<I>&#169; Copyright 1995 by <A href="msgs0.htm#6" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/tppmsgs/msgs0.htm#6">Ian Foster</a></I>
</ADDRESS>
</BODY>
