the parallel execution of two or more program components on disjoint
<A NAME=13077>&#160;</A>
sets of processors (Section <A HREF="node41.html#secmodpar">4.2</A>).  One approach to the
implementation of parallel composition is to create tasks dynamically
and to place newly created tasks on different processors.  This <em>
task-parallel
 </em> approach is taken in CC++
  and Fortran M, for
example.  In MPMD programs, parallel composition is implemented
differently.  As illustrated in Figure <A HREF="node99.html#figmpview">8.9</A>, available
processes are partitioned into disjoint sets, with each set executing
the appropriate program.  This partitioning is achieved by using the
function <tt> MPI_COMM_SPLIT</tt>.  A call of the form
<P>
<tt> MPI_COMM_SPLIT(comm, color, key, newcomm)</tt>
<P>
creates one or more new communicators.  This function is a collective
communication operation, meaning that it must be executed by each
process in the process group associated with <tt> comm</tt>.  A new
communicator is created for each unique value of <tt> color</tt> other
than the defined constant <tt> MPI_UNDEFINED</tt>.  Each new communicator
comprises those processes that specified its value of <tt> color</tt> in
the <tt> MPI_COMM_SPLIT</tt> call.  These processes are assigned
identifiers within the new communicator starting from zero, with order
determined by the value of <tt> key</tt> or, in the event of ties, by the identifier in the old
communicator.  Thus, a call of the form
<tt> MPI_COMM_SPLIT(comm, 0, 0, newcomm)</tt>
<P>
in which all processes specify the same color and key, is equivalent
to a call
<tt> MPI_COMM_DUP(comm, newcomm)</tt>.
<P>
That is, both calls create a new communicator containing all the
processes in the old communicator <tt> comm</tt>.  In contrast, the
following code creates three new communicators if <tt> comm</tt> contains
at least three processes.
<P>

<PRE><TT>
MPI_Comm comm, newcomm;
int myid, color;

MPI_Comm_rank(comm, &amp;myid);
color = myid%3;
MPI_Comm_split(comm, color, myid, &amp;newcomm);
</TT></PRE>

<P>
For example, if <tt> comm</tt> contains eight processes, then processes 0,
3, and 6 form a new communicator of size three, as do processes 1, 4,
and 7, while processes 2 and 5 form a new communicator of size two
(Figure <A HREF="node99.html#figmpcomm">8.10</A>).
<P>
<P><A NAME=14105>&#160;</A><IMG BORDER=0 ALIGN=BOTTOM ALT="" SRC="img1030.gif">
<BR><STRONG>Figure:</STRONG> <em> Using <tt> MPI_COMM_SPLIT</tt> to form new communicators.
The first communicator is a group of eight processes. Setting color to
<tt> myid%3</tt> and calling <tt> MPI_COMM_SPLIT(comm, color, myid,
newcomm)</tt> split this into three disjoint process
groups.</em><A NAME=figmpcomm>&#160;</A><BR>
<P>
<P>
As a final example, the following code fragment creates a new
communicator (<tt> newcomm</tt>) containing at most eight processes.
Processes with identifiers greater than or equal to eight in communicator <tt>
comm</tt> call <tt> MPI_COMM_SPLIT</tt> with <tt> color=MPI_UNDEFINED</tt> and
hence are not part of the new communicator.
<P>

<PRE><TT>
MPI_Comm comm, newcomm;
int myid, color;

MPI_Comm_rank(comm, &amp;myid);
if (myid &lt; 8)              /* Select first 8 processes */
   color = 1;
else                       /* Others are not in group  */
   color = MPI_UNDEFINED;
MPI_Comm_split(comm, color, myid, &amp;newcomm);
</TT></PRE>

<P>
<H2><A NAME=SECTION03553000000000000000>8.5.3 Communicating between Groups</A></H2>
<P>
<A NAME=secmpco3>&#160;</A>
<P>
A communicator returned by <tt> MPI_COMM_SPLIT</tt> can be used to
communicate within a group of processes.  Hence, it is called an <em>
intracommunicator</em>.  (The default communicator, <tt>
MPI_COMM_WORLD</tt>, is an intracommunicator.)  It is also possible to
<A NAME=13133>&#160;</A>
create an <em> intercommunicator
 </em> that can be used to communicate
<A NAME=13135>&#160;</A>
between process groups.  An intercommunicator that connects two groups
<em> A</em>
 and <em> B</em>
containing <em> m</em> and <em> n</em> processes,
respectively, allows processes in group <em> A</em>
 to communicate with
processes 0..(<em> n</em>-1) in group <em> B</em>
 by using MPI send and receive calls
(collective operations are not supported). Similarly, processes in
group <em> B</em>
 can communicate with processes 0..(<em> m</em>-1) in group
<em> A</em>.
<P>
An intercommunicator is created by a collective call executed in the
two groups that are to be connected.  In making this call, the
processes in the two groups must each supply a local intracommunicator
that identifies the processes involved in their group.  They must also
agree on the identifier of a ``leader'' process in each group and a
parent communicator that contains all the processes in both groups, via
which the connection can be established.  The default communicator
<tt> MPI_COMM_WORLD</tt> can always be used for this purpose.  The
collective call has the general form
<PRE>     MPI_INTERCOMM_CREATE(comm, local_leader, peercomm,
                          remote_leader, tag, intercomm)
</PRE>
where <tt> comm</tt> is an intracommunicator in the local group and <tt>
local_leader</tt> is the identifier of the nominated leader process
within this group.  (It does not matter which process is chosen as the
leader; however, all participants in the collective operation must
nominate the same process.)  The parent communicator is specified by
<tt> peercomm</tt>, while <tt> remote_leader</tt> is the identifier of the
other group's leader process <em> within the parent communicator</em>.
The two other arguments are (1) a ``safe'' tag that the two groups'
leader processes can use to communicate within the parent
communicator's context without confusion with other communications and
(2) the new intercommunicator <tt> intercomm</tt>.
<P>
Program <A HREF="node99.html#progmpic">8.7</A> illustrates these ideas.  It first
uses <tt> MPI_COMM_SPLIT</tt> to split available processes into two
disjoint groups.  Even-numbered processes are in one group;
odd-numbered processes are in a second.  Calls to <tt>
MPI_COMM_RANK</tt> are used to determine the values of the variables
<tt> myid</tt> and <tt>
<A NAME=13153>&#160;</A>
newid</tt>, which represent each process's identifier in the original
communicator and the appropriate new communicator, respectively.  In
this example, <tt> newid=myid/2</tt>.  Then, the <tt>
MPI_INTERCOMM_CREATE</tt> call defines an intercommunicator that links
the two groups (Figure <A HREF="node99.html#figmpcomm3">8.11</A>).  Process 0 within each
group is selected as its leader; these two processes correspond to
processes 0 and 1 within the original group, respectively.  Once the
intercommunicator is created, each process in the first group sends a
message to the corresponding process in the second group.  Finally,
the new communicators created by the program are deleted.
<P>
<P><A NAME=14129>&#160;</A><IMG BORDER=0 ALIGN=BOTTOM ALT="" SRC="img1035.gif">
<BR><STRONG>Figure:</STRONG> <em> Establishing an intercommunicator between two process
groups.  At the top is an original group of eight processes; this is
<tt> MPI_COMM_WORLD</tt>.  An <tt> MPI_COMM_SPLIT</tt> call creates two
process groups, each containing four processes. Then, an <tt>
MPI_INTERCOMM_CREATE</tt> call creates an intercommunicator between the
two groups.</em><A NAME=figmpcomm3>&#160;</A><BR>
<P>
<P>
<P><A NAME=progmpic>&#160;</A><IMG BORDER=0 ALIGN=BOTTOM ALT="" SRC="img1036.gif"><P>
<P>
<A NAME=13199>&#160;</A>
<P>

<BR> <HR>
<B> Next:</B> <A NAME=tex2html3162 HREF="node100.html">8.6 Other MPI Features</A>
<B>Up:</B> <A NAME=tex2html3160 HREF="node94.html">8 Message Passing Interface</A>
<B> Previous:</B> <A NAME=tex2html3154 HREF="node98.html">8.4 Asynchronous Communication</A>
<BR><HR><P>
<P><ADDRESS>
<I>&#169; Copyright 1995 by <A href="msgs0.htm#6">Ian Foster</a></I>
</ADDRESS>
</BODY>
