<!DOCTYPE HTML PUBLIC "-//W3O//DTD W3 HTML 2.0//EN">
<HTML>
<!-- Converted with LaTeX2HTML 95.1 (Fri Jan 20 1995) by Nikos Drakos (nikos@cbl.leeds.ac.uk), CBLU, University of Leeds -->
<HEAD>
<TITLE>8.5 Modularity</TITLE>
</HEAD>
<BODY>
<meta name="description" value="8.5 Modularity">
<meta name="keywords" value="book">
<meta name="resource-type" value="document">
<meta name="distribution" value="global">
<P>
<BR> <HR><a href="msgs0.htm#2"><img ALIGN=MIDDLE src="asm_color_tiny.gif" alt="[DBPP]"></a> <A NAME=tex2html3153 HREF="node98.html"><IMG ALIGN=MIDDLE ALT="previous" SRC="previous_motif.gif"></A> <A NAME=tex2html3161 HREF="node100.html"><IMG ALIGN=MIDDLE ALT="next" SRC="next_motif.gif"></A> <A NAME=tex2html3159 HREF="node94.html"><IMG ALIGN=MIDDLE ALT="up" SRC="up_motif.gif"></A> <A NAME=tex2html3163 HREF="node1.html"><IMG ALIGN=MIDDLE ALT="contents" SRC="contents_motif.gif"></A> <A NAME=tex2html3164 HREF="node133.html"><IMG ALIGN=MIDDLE ALT="index" SRC="index_motif.gif"></A> <a href="msgs0.htm#3"><img ALIGN=MIDDLE src="search_motif.gif" alt="[Search]"></a> <BR>
<B> Next:</B> <A NAME=tex2html3162 HREF="node100.html">8.6 Other MPI Features</A>
<B>Up:</B> <A NAME=tex2html3160 HREF="node94.html">8 Message Passing Interface</A>
<B> Previous:</B> <A NAME=tex2html3154 HREF="node98.html">8.4 Asynchronous Communication</A>
<BR><HR><P>
<H1><A NAME=SECTION03550000000000000000>8.5 Modularity</A></H1>
<P>
<A NAME=secmpmod> </A>
<P>
In Chapter <A HREF="node39.html#chapmod">4</A>, we distinguished three general
forms of composition that can be used for the modular construction of
<A NAME=12981> </A>
parallel programs: sequential, parallel, and concurrent. Recall that
<A NAME=12982> </A>
in sequential composition, two program components execute in sequence
on the same set of processors. In parallel composition, two program
components execute concurrently on disjoint sets of processors. In
concurrent composition, two program components execute on potentially
nondisjoint sets of processors.
<P>
<A NAME=12983> </A>
MPI supports modular programming via its communicator
mechanism, which provides the information hiding needed when
building modular programs, by allowing the specification of program
components that encapsulate internal communication operations and
provide a local name space for processes. In this section, we show
how communicators can be used to implement various forms of sequential
and parallel composition. MPI's MPMD programming model means that the
full generality of concurrent composition is not available.
<P>
An MPI communication operation always specifies a communicator. This
identifies the process group that is engaged in the communication
operation and the context in which the communication occurs. As we
shall see, process groups allow a subset of processes to communicate
among themselves using local process identifiers and to perform
collective communication operations without involving other processes.
The context forms part of the envelope associated with a message. A
receive operation can receive a message only if the message was sent
in the same context. Hence, if two routines use different contexts
for their internal communication, there can be no danger of their
communications being confused.
<P>
In preceding sections, all communication operations have used the
default communicator <tt> MPI_COMM_WORLD</tt>, which incorporates all
processes involved in an MPI computation and defines a default
<A NAME=12985> </A>
context. We now describe four functions that allow communicators to
be used in more flexible ways. These functions, and their roles in
modular design, are as follows.
<P>
<OL><LI>
<tt> MPI_COMM_DUP</tt>. A program may create a new communicator
<A NAME=12988> </A>
comprising the same process group but a new context to ensure that
communications performed for different purposes are not confused.
<A NAME=12989> </A>
This mechanism supports sequential composition.
<A NAME=12990> </A>
<P>
<LI>
<tt> MPI_COMM_SPLIT</tt>. A program may create a new communicator
<A NAME=12992> </A>
comprising just a subset of a given group of processes. These
processes can then communicate among themselves without fear of
conflict with other concurrent computations. This mechanism supports
<A NAME=12993> </A>
parallel composition.
<A NAME=12994> </A>
<P>
<LI>
<tt> MPI_INTERCOMM_CREATE</tt>. A program may construct an
<A NAME=12996> </A><em>intercommunicator</em>, which links processes in
two groups. This
mechanism supports parallel composition.
<P>
<LI>
<tt> MPI_COMM_FREE</tt>. This function can be used to release a
communicator created using the preceding three functions.
<A NAME=12998> </A>
<P>
</OL>
<P>
The four functions are summarized in Figure <A HREF="node99.html#figmpicommun">8.7</A>;
their arguments and the ways they are called are described
next.
<P>
<P><A NAME=13824> </A><IMG BORDER=0 ALIGN=BOTTOM ALT="" SRC="img1027.gif">
<BR><STRONG>Figure 8.7:</STRONG> MPI communicator functions.
<A NAME=figmpicommun> </A><BR>
<P>
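For reference, the Fortran bindings of these four functions are sketched
below. This is a summary based on the MPI standard rather than a
reproduction of the figure; in each case the final <tt> ierror</tt>
argument returns the error code.
<P>
<PRE><TT>
MPI_COMM_DUP(comm, newcomm, ierror)
MPI_COMM_SPLIT(comm, color, key, newcomm, ierror)
MPI_INTERCOMM_CREATE(local_comm, local_leader, peer_comm,
                     remote_leader, tag, newintercomm, ierror)
MPI_COMM_FREE(comm, ierror)
</TT></PRE>
<P>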
<A NAME=13042> </A>
<P>
<H2><A NAME=SECTION03551000000000000000>8.5.1 Creating Communicators</A></H2>
<P>
<A NAME=secmpco1> </A>
<P>
<P><A NAME=14071> </A><IMG BORDER=0 ALIGN=BOTTOM ALT="" SRC="img1028.gif">
<BR><STRONG>Figure 8.8:</STRONG> <em> Errors can occur in a sequential composition of two
parallel program components (e.g., an application program and a
parallel library) if the two components use the same message tags.
The figure on the left shows how this can occur. Each of the four
vertical lines represents a single thread of control (process) in an
SPMD program. All call an SPMD library, represented by the
boxes. One process finishes sooner than the others, and a message
that this process generates during subsequent computation (the dashed
arrow) is intercepted by the library. The figure on the right shows
how this problem is avoided
by using contexts: the library communicates using a distinct tag space,
which cannot be penetrated by other
messages.</em><A NAME=figmpconflict> </A><BR>
<P>
<P>
As discussed in Section <A HREF="node96.html#sectags">8.2.2</A>, message tags provide a
mechanism for distinguishing between messages used for different
purposes. However, they do not provide a sufficient basis for modular
design. For example, consider an application that calls a library
routine implementing (for example) an array transpose operation. It
is important to ensure that the message tags used in the library are
distinct from those used in the rest of the application
(Figure <A HREF="node99.html#figmpconflict">8.8</A>). Yet the user of a library routine may
not know the tags the library uses; indeed, tag values may be
computed on the fly.
<P>
<A NAME=13051> </A>
Communicators provide a solution to this problem. A call of the form
<A NAME=13052> </A>
<tt> MPI_COMM_DUP(comm, newcomm)</tt>
<P>
<A NAME=13056> </A>
creates a new communicator <tt> newcomm</tt> comprising the same processes
as <tt> comm</tt> but with a new context. This new communicator can be
passed as an argument to the library routine, as in the following
code, which calls <tt> transpose</tt> to transpose an array <tt> A</tt>.
<P>
<PRE><TT>
      integer comm, newcomm, ierr              ! Handles are integers
      ...
      call MPI_COMM_DUP(comm, newcomm, ierr)   ! Create new context
      call transpose(newcomm, A)               ! Pass to library
      call MPI_COMM_FREE(newcomm, ierr)        ! Free new context
</TT></PRE>
<P>
The transpose routine itself will be defined to use the communicator
<tt> newcomm</tt> in all communication operations, thereby ensuring that
communications performed within this routine cannot be confused with
communications performed outside.
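<P>
For illustration, the following is a minimal sketch of what such a
library routine might look like. The cyclic-shift communication pattern
and all names other than the MPI calls themselves are hypothetical; a
real transpose would exchange blocks with many partners. The essential
point is that every communication operation inside the routine names the
communicator that was passed in.
<P>
<PRE><TT>
      subroutine transpose(comm, A)
      include 'mpif.h'
      integer comm, rank, size, next, prev, ierr
      integer status(MPI_STATUS_SIZE)
      real A(*)
      ! Ranks are local to comm, not to MPI_COMM_WORLD
      call MPI_COMM_RANK(comm, rank, ierr)
      call MPI_COMM_SIZE(comm, size, ierr)
      next = mod(rank+1, size)
      prev = mod(rank+size-1, size)
      ! Because comm carries its own context, tag 0 here cannot
      ! match a tag 0 message sent outside this routine
      call MPI_SENDRECV_REPLACE(A, 1, MPI_REAL, next, 0,
     $                          prev, 0, comm, status, ierr)
      end
</TT></PRE>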
<P>
<H2><A NAME=SECTION03552000000000000000>8.5.2 Partitioning Processes</A></H2>
<P>
<A NAME=secmpco2> </A>
<P>
<P><A NAME=14086> </A><IMG BORDER=0 ALIGN=BOTTOM ALT="" SRC="img1029.gif">
<BR><STRONG>Figure 8.9:</STRONG> <em> Different views of parallel composition. On the left
is the task-parallel view, in which new tasks are created
dynamically to execute two different program components. Four tasks
are created: two perform one computation (dark shading) and two
another (light shading). On the right is the MPMD view. Here,
a fixed set of processes (represented by vertical arrows) changes
character, for example, by calling different
subroutines.</em><A NAME=figmpview> </A><BR>
<P>
<P>
Recall that we use the term <em> parallel composition
</em> to denote the concurrent execution of two or more program
components on disjoint sets of processors.
<A NAME=13076> </A>
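<P>
As an illustrative sketch of how <tt> MPI_COMM_SPLIT</tt> supports this
form of composition (the even/odd partition and the variable names here
are our own, chosen only for illustration), the following code divides
the processes of an existing communicator into two disjoint groups:
<P>
<PRE><TT>
      integer comm, newcomm, myid, color, ierr
      ...
      call MPI_COMM_RANK(comm, myid, ierr)   ! Rank in parent group
      color = mod(myid, 2)                   ! 0 or 1: two disjoint groups
      ! Processes supplying the same color are placed in the same new
      ! communicator; the key argument (myid) orders ranks within it
      call MPI_COMM_SPLIT(comm, color, myid, newcomm, ierr)
      ! ... each group runs its own component, communicating via
      !     newcomm without interfering with the other group ...
      call MPI_COMM_FREE(newcomm, ierr)
</TT></PRE>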
</BODY></HTML>