<!DOCTYPE HTML PUBLIC "-//W3O//DTD W3 HTML 2.0//EN">
<HTML>
<!-- Converted with LaTeX2HTML 95.1 (Fri Jan 20 1995) by Nikos Drakos (nikos@cbl.leeds.ac.uk), CBLU, University of Leeds -->
<HEAD>
<TITLE>2.2 Partitioning</TITLE>
</HEAD>
<BODY>
<meta name="description" content="2.2 Partitioning">
<meta name="keywords" content="book">
<meta name="resource-type" content="document">
<meta name="distribution" content="global">
<P>
 <BR> <HR><a href="msgs0.htm#2"><img ALIGN=MIDDLE src="asm_color_tiny.gif" alt="[DBPP]"></a>    <A NAME=tex2html2035 HREF="node15.html"><IMG ALIGN=MIDDLE ALT="previous" SRC="previous_motif.gif"></A> <A NAME=tex2html2043 HREF="node17.html"><IMG ALIGN=MIDDLE ALT="next" SRC="next_motif.gif"></A> <A NAME=tex2html2041 HREF="node14.html"><IMG ALIGN=MIDDLE ALT="up" SRC="up_motif.gif"></A> <A NAME=tex2html2045 HREF="node1.html"><IMG ALIGN=MIDDLE ALT="contents" SRC="contents_motif.gif"></A> <A NAME=tex2html2046 HREF="node133.html"><IMG ALIGN=MIDDLE ALT="index" SRC="index_motif.gif"></A> <a href="msgs0.htm#3"><img ALIGN=MIDDLE src="search_motif.gif" alt="[Search]"></a>   <BR>
<B> Next:</B> <A NAME=tex2html2044 HREF="node17.html">2.3 Communication</A>
<B>Up:</B> <A NAME=tex2html2042 HREF="node14.html">2 Designing Parallel Algorithms</A>
<B> Previous:</B> <A NAME=tex2html2036 HREF="node15.html">2.1 Methodical Design</A>
<BR><HR><P>
<H1><A NAME=SECTION02320000000000000000>2.2 Partitioning</A></H1>
<P>
<A NAME=secpartition>&#160;</A>
<P>
<A NAME=1109>&#160;</A>
The partitioning stage of a design is intended to expose opportunities
for parallel execution.  Hence, the focus is on defining a large
number of small tasks in order to yield what is termed a <em>
<A NAME=1110>&#160;</A>
fine-grained
 </em> decomposition of a problem.  Just as fine sand is
more easily poured than a pile of bricks, a fine-grained decomposition
provides the greatest flexibility in terms of potential parallel
algorithms.  In later design stages, evaluation of communication
requirements, the target architecture, or software engineering issues
may lead us to forego opportunities for parallel execution identified
at this stage.  We then revisit the original partition and agglomerate
tasks to increase
<A NAME=1111>&#160;</A>
their size, or granularity.  However, in this first stage we
wish to avoid prejudging alternative partitioning strategies.
<P>
A good partition divides into small pieces both the <em>
computation
 </em> associated with a problem and the <em> data
 </em> on
which this computation operates.  When designing a partition, programmers
most commonly first focus on the data associated with a problem, then
determine an appropriate partition for the data, and finally 
work out how to associate computation with data.  This partitioning
<A NAME=1114>&#160;</A>
technique is termed <em> domain decomposition</em>.  The alternative
<A NAME=1116>&#160;</A>
approach---first decomposing the computation to be performed and
<A NAME=1117>&#160;</A>
then dealing with the data---is termed <em> functional decomposition</em>.
These are complementary techniques which may be applied to different
components of a single problem or even applied to the same problem to
obtain alternative parallel algorithms.
<P>
In this first stage of a design, we seek to avoid replicating
computation and data; that is, we seek to define tasks that partition
both computation and data into disjoint sets.  Like granularity, this
is an aspect of the design that we may revisit later.  It
can be worthwhile replicating either computation or data if doing so
allows us to reduce communication requirements.
<P>
<H2><A NAME=SECTION02321000000000000000>2.2.1 Domain Decomposition</A></H2>
<P>
<A NAME=1120>&#160;</A>
In the domain decomposition approach to problem partitioning, we seek
<A NAME=2306>&#160;</A>
first to decompose the data associated with a problem.  If possible,
we divide these data into small pieces of approximately equal size.
Next, we partition the computation that is to be performed, typically
by associating each operation with the data on which it operates.
This partitioning yields a number of tasks, each comprising some data
and a set of operations on that data.  An operation may require data
from several tasks.  In this case, communication is required to move
data between tasks.  This requirement is addressed in the next phase
of the design process.
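<P>
As a concrete (and purely illustrative) sketch of this procedure, the following Python fragment divides a one-dimensional data array into disjoint pieces of near-equal size and associates the same operation with each piece; each resulting task owns some data and the operations on that data. The names <em>decompose</em> and <em>make_tasks</em> are our own, not part of any particular parallel library.
<P>

```python
# Illustrative sketch (names are hypothetical, not from any library):
# domain decomposition of a 1-D data array into disjoint, near-equal
# pieces, each paired with the operation that acts on it.

def decompose(data, num_tasks):
    """Split data into num_tasks contiguous pieces of near-equal size."""
    n = len(data)
    base, extra = divmod(n, num_tasks)
    pieces, start = [], 0
    for i in range(num_tasks):
        size = base + (1 if i < extra else 0)  # first `extra` pieces get one more
        pieces.append(data[start:start + size])
        start += size
    return pieces

def make_tasks(data, num_tasks, operation):
    """Each task = (disjoint piece of the data, operation on that piece)."""
    return [(piece, operation) for piece in decompose(data, num_tasks)]

tasks = make_tasks(list(range(10)), 3, sum)
results = [op(piece) for piece, op in tasks]  # each task could run in parallel
```

<P>
Note that the pieces are disjoint and cover the data exactly once, reflecting the goal stated above of partitioning data into disjoint sets at this stage of the design.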
<P>
The data that are decomposed may be the input to the program, the
output computed by the program, or intermediate values maintained by
the program.  Different partitions may be possible, based on different
data structures.  Good rules of thumb are to focus first on the
largest data structure or on the data structure that is accessed most
frequently.  Different phases of the computation may operate on
different data structures or demand different decompositions for the
same data structures.  In this case, we treat each phase separately
and then determine how the decompositions and parallel algorithms
developed for each phase fit together.  The issues that arise in this
situation are discussed in Chapter <A HREF="node39.html#chapmod">4</A>.
<P>
Figure <A HREF="node16.html#figfd">2.2</A> illustrates domain decomposition in a simple
problem involving a three-dimensional grid.  (This grid could
problem involving a three-dimensional grid.  (This grid could
represent the state of the atmosphere in a weather model, or a
three-dimensional space in an image-processing problem.)  Computation
is performed repeatedly on each grid point.  Decompositions in the
<em> x</em>
, <em> y</em>
, and/or <em> z</em>
 dimensions are possible.  In the
early stages of a design, we favor the most aggressive decomposition
possible, which in this case defines one task for each grid point.
Each task maintains as its state the various values associated with
its grid point and is responsible for the computation required to
update that state.
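<P>
The effect of choosing one, two, or three decomposition dimensions can be sketched as follows; this hypothetical Python fragment (the function <em>tasks_3d</em> is our own invention) simply enumerates the tasks each choice defines for a cubic grid, showing that the three-dimensional decomposition yields one task per grid point and hence the most parallelism.
<P>

```python
# Illustrative sketch: counting the tasks produced by 1-D, 2-D, and 3-D
# decompositions of an N x N x N grid (cf. Figure 2.2). The finest-grained
# choice, 3-D, defines one task per grid point.
from itertools import product

def tasks_3d(nx, ny, nz, dims):
    """Enumerate task coordinates when decomposing along the first `dims` axes.

    dims = 1: partition along x only (each task owns a y-z plane);
    dims = 2: partition along x and y (each task owns a column in z);
    dims = 3: partition along all three axes (one task per grid point).
    """
    extents = [nx, ny, nz][:dims]
    return list(product(*(range(e) for e in extents)))

N = 4
print(len(tasks_3d(N, N, N, 1)))  # N tasks
print(len(tasks_3d(N, N, N, 2)))  # N**2 tasks
print(len(tasks_3d(N, N, N, 3)))  # N**3 tasks: maximum concurrency
```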
<P>
<P><A NAME=2374>&#160;</A><IMG BORDER=0 ALIGN=BOTTOM ALT="" SRC="img161.gif">
<BR><STRONG>Figure 2.2:</STRONG> <em> Domain decompositions for a problem involving a
three-dimensional grid.  One-, two-, and three-dimensional
decompositions are possible; in each case, data associated with a
single task are shaded.  A three-dimensional decomposition offers the
greatest flexibility and is adopted in the early stages of a
design.</em><A NAME=figfd>&#160;</A><BR>
<P><H2><A NAME=SECTION02322000000000000000>2.2.2 Functional Decomposition</A></H2>
<P>
<A NAME=1132>&#160;</A>
Functional decomposition represents a different and complementary way
of thinking about problems.  In this approach, the initial focus is on
the computation that is to be performed rather than on the data
manipulated by the computation.  If we are successful in dividing this
computation into disjoint tasks, we proceed to examine the data
requirements of these tasks.  These data requirements may be disjoint,
in which case the partition is complete.  Alternatively, they may
overlap significantly, in which case considerable communication will
be required to avoid replication of data.
