<!DOCTYPE HTML PUBLIC "-//W3O//DTD W3 HTML 2.0//EN">
<HTML>
<!-- Converted with LaTeX2HTML 95.1 (Fri Jan 20 1995) by Nikos Drakos (nikos@cbl.leeds.ac.uk), CBLU, University of Leeds -->
<HEAD>
<TITLE>4.2 Modularity and Parallel Computing</TITLE>
</HEAD>
<BODY>
<meta name="description" content="4.2 Modularity and Parallel Computing">
<meta name="keywords" content="book">
<meta name="resource-type" content="document">
<meta name="distribution" content="global">
<P>
 <BR> <HR><a href="msgs0.htm#2"><img ALIGN=MIDDLE src="asm_color_tiny.gif" alt="[DBPP]"></a>    <A NAME=tex2html2350 HREF="node40.html"><IMG ALIGN=MIDDLE ALT="previous" SRC="previous_motif.gif"></A> <A NAME=tex2html2358 HREF="node42.html"><IMG ALIGN=MIDDLE ALT="next" SRC="next_motif.gif"></A> <A NAME=tex2html2356 HREF="node39.html"><IMG ALIGN=MIDDLE ALT="up" SRC="up_motif.gif"></A> <A NAME=tex2html2360 HREF="node1.html"><IMG ALIGN=MIDDLE ALT="contents" SRC="contents_motif.gif"></A> <A NAME=tex2html2361 HREF="node133.html"><IMG ALIGN=MIDDLE ALT="index" SRC="index_motif.gif"></A> <a href="msgs0.htm#3"><img ALIGN=MIDDLE src="search_motif.gif" alt="[Search]"></a>   <BR>
<B> Next:</B> <A NAME=tex2html2359 HREF="node42.html">4.3 Performance Analysis</A>
<B>Up:</B> <A NAME=tex2html2357 HREF="node39.html">4 Putting Components Together</A>
<B> Previous:</B> <A NAME=tex2html2351 HREF="node40.html">4.1 Modular Design Review</A>
<BR><HR><P>
<H1><A NAME=SECTION02520000000000000000>4.2 Modularity and Parallel Computing</A></H1>
<P>
<A NAME=secmodpar>&#160;</A>
<P>
<A NAME=5491>&#160;</A>
The design principles reviewed in the preceding section apply directly
to parallel programming.  However, parallelism also introduces
additional concerns.  A sequential module encapsulates the code that
implements the functions provided by the module's interface and the
data structures accessed by those functions.  In parallel programming,
we need to consider not only code and data but also the tasks created
by a module, the way in which data structures are partitioned and
mapped to processors, and internal communication structures.  Probably
the most fundamental issue is that of data distribution.
<P>
<P><A NAME=6199>&#160;</A><IMG BORDER=0 ALIGN=BOTTOM ALT="" SRC="img729.gif">
<BR><STRONG>Figure 4.1:</STRONG> <em> Three forms of parallel program composition.  In each
case, the program is shown executing on four processors, with each arrow
representing a separate thread of control and shading denoting two
different program components.  In sequential composition, different
program components execute in sequence on all processors.  In parallel
composition, different program components execute concurrently on
different processors.  In concurrent composition, different program
components execute concurrently on the same
processors.</em><A NAME=figmodcomp>&#160;</A><BR>
<P>
<P>
Another difference between sequential and parallel programming is that
in the former, modules can be put together (composed) in just one way:
sequentially.  Execution of a program leads to a sequence of calls to
<A NAME=5496>&#160;</A>
functions defined in different modules.  This is called <em>
sequential composition</em>; it can also be used in parallel
<A NAME=5498>&#160;</A>
programming and is indeed fundamental to the SPMD programming model
<A NAME=5499>&#160;</A>
used in many parallel programs.  However, we often need to compose
program components in other ways (Figure <A HREF="node41.html#figmodcomp">4.1</A>).  In
<A NAME=5501>&#160;</A>
<em> parallel composition</em>, different modules execute concurrently on
disjoint sets of processors.  This strategy can enhance modularity and
improve scalability and locality.  In 
<A NAME=5503>&#160;</A>
<em> concurrent composition</em>, different modules execute concurrently
<A NAME=5505>&#160;</A>
on the same processors, with execution of a particular module enabled
by the availability of data.  Concurrent composition can both reduce
design complexity and allow overlapping of computation and
communication.
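<P>
The three forms can be contrasted with a small, hypothetical sketch (the component names "A" and "B" and the four-processor machine are illustrative assumptions, not taken from the text), recording which component each simulated processor executes at each step:

```python
# Hypothetical sketch (illustrative names): which component each of four
# simulated "processors" executes under each composition form.

PROCS = [0, 1, 2, 3]

def run(component, procs):
    """Record that every processor in `procs` executes `component`."""
    return {p: component for p in procs}

# Sequential composition: all processors execute A, then all execute B.
def sequential(procs):
    return [run("A", procs), run("B", procs)]

# Parallel composition: A and B run at the same time on disjoint subsets.
def parallel(procs):
    half = len(procs) // 2
    return [{**run("A", procs[:half]), **run("B", procs[half:])}]

# Concurrent composition: A and B are both active on every processor,
# with execution interleaved (shown here as one combined step).
def concurrent(procs):
    return [{p: ("A", "B") for p in procs}]

print(sequential(PROCS))  # two steps, each using all four processors
print(parallel(PROCS))    # one step: A on procs 0-1, B on procs 2-3
print(concurrent(PROCS))  # one step: both components on every processor
```

The sketch captures only the mapping of components to processors, not communication or timing.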
<P>
We distinguish between sequential, parallel, and concurrent
composition both because they are different ways of thinking about
programs and because not all parallel programming tools support all
three compositional forms.  Data-parallel languages (such as HPF) tend
to support only sequential composition.  Message-passing libraries
(such as MPI) typically support both sequential and parallel
composition but not concurrent composition.  Other languages and
libraries (such as CC++
  and Fortran M) support all three forms of
composition.
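<P>
The key property of concurrent composition, that a module runs when its input data become available, can be sketched on a single processor with two threads sharing a queue (a minimal illustration; the producer/consumer components and their data are made up):

```python
# Hedged sketch: two program components on the same "processor", with
# execution of the second enabled by the availability of data in a queue.
import queue
import threading

inbox = queue.Queue()
results = []

def producer():
    for i in range(3):
        inbox.put(i)        # component 1: produces data items
    inbox.put(None)         # sentinel: no more data will arrive

def consumer():
    while True:
        item = inbox.get()  # component 2 runs only when data is available
        if item is None:
            break
        results.append(item * 2)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)              # [0, 2, 4]
```

Because the consumer blocks on the queue rather than on a fixed schedule, computation in one component can overlap waiting in the other.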
<P>

<H2><A NAME=SECTION02521000000000000000>4.2.1 Data Distribution</A></H2>
<P>
<A NAME=secmoddd>&#160;</A>
<P>
In Chapters <A HREF="node14.html#chap2">2</A> and <A HREF="node26.html#chapperf">3</A>, we showed that the
distribution of a program's data structures among tasks and processors
(that is, the way in which data structures are partitioned and mapped)
is an important aspect of parallel algorithm design.  We also showed how
to design data distributions that maximize performance and/or minimize
software engineering costs.
<P>
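As a concrete illustration (a sketch under assumed conventions, not code from the book), two standard distributions, <em>block</em> and <em>cyclic</em>, map the indices of a length-N array to P processors as follows:

```python
# Illustrative sketch: owner of array index i under two common distributions.

def block_owner(i, n, p):
    """Owner of index i when n elements are split into contiguous blocks
    over p processors (block size = ceiling(n / p))."""
    block = (n + p - 1) // p
    return i // block

def cyclic_owner(i, p):
    """Owner of index i under a cyclic (round-robin) distribution."""
    return i % p

N, P = 8, 4
print([block_owner(i, N, P) for i in range(N)])  # [0, 0, 1, 1, 2, 2, 3, 3]
print([cyclic_owner(i, P) for i in range(N)])    # [0, 1, 2, 3, 0, 1, 2, 3]
```

A block distribution favors locality between neighboring indices, while a cyclic distribution favors load balance when work per index varies; either can be the right choice depending on the algorithm.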
<A NAME=5510>&#160;</A>
Data distribution can become a more complex issue in programs
