processor, some more efficient mechanism can be used.
<P>
<em>Mapping Independence.</em> Because tasks interact using the same
mechanism (channels) regardless of task location, the result computed
by a program does not depend on where tasks execute.  Hence,
algorithms can be designed and implemented without concern for the
number of processors on which they will execute; in fact, algorithms
are frequently designed that create many more tasks than processors.
This is a straightforward way of achieving <em>scalability</em>: as the number
of processors increases, the number of tasks per processor is reduced
but the algorithm itself need not be modified.  The creation of more
tasks than processors can also serve to mask communication delays, by
providing other computation that can be performed while communication
is performed to access remote data.
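<P>
As a concrete (if anachronistic) illustration, the following Go sketch
of ours expresses the idea; goroutines stand in for tasks and the
<tt>results</tt> channel for a channel.  A thousand tasks are created
regardless of the machine, and varying the processor count with
<tt>runtime.GOMAXPROCS</tt> changes the schedule but not the result,
and not one line of the algorithm.
<PRE>
// overdecompose.go: many more tasks than processors.
package main

import (
	"fmt"
	"runtime"
)

func main() {
	const tasks = 1000 // far more tasks than any machine has processors

	for _, procs := range []int{1, 2, 8} {
		runtime.GOMAXPROCS(procs) // vary the machine; the algorithm is unchanged

		results := make(chan int, tasks)
		for t := 0; t < tasks; t++ {
			go func(t int) { results <- t * t }(t) // one fine-grained task each
		}

		sum := 0
		for t := 0; t < tasks; t++ {
			sum += <-results
		}
		fmt.Printf("%d processor(s): sum of squares = %d\n", procs, sum)
	}
}
</PRE>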
<P>
<em>Modularity.</em> In modular program design, various components of a
program are developed separately, as independent modules, and then
combined to obtain a complete program.  Interactions between
modules are restricted to well-defined interfaces.  Hence, module
implementations can be changed without modifying other components, and
the properties of a program can be determined from the specifications
for its modules and the code that plugs these modules together.  When
successfully applied, modular design reduces program complexity and
facilitates code reuse.
<P>
<P><IMG BORDER=0 ALIGN=BOTTOM ALT="Figure 1.10" SRC="img107.gif">
<BR><STRONG>Figure 1.10:</STRONG> <em>The task as building block.  (a) The <tt>foundry</tt> and
<tt>bridge</tt> tasks are building blocks with complementary interfaces.
(b) Hence, the two tasks can be plugged together to form a complete
program.  (c) Tasks are interchangeable: another task with a
compatible interface can be substituted to obtain a different
program.</em><A NAME=figblock>&#160;</A><BR>
<P>
<P>
The task is a natural building block for modular design.  As
illustrated in Figure <A HREF="node9.html#figblock">1.10</A>, a task encapsulates both data
and the code that operates on those data; the ports on which it sends
and receives messages constitute its interface.  Hence, the advantages
of modular design summarized in the previous paragraph are directly
accessible in the task/channel model.
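<P>
The following Go sketch (ours, not the book's) renders Figure 1.10 in
code: <tt>foundry</tt> and <tt>bridge</tt> are tasks whose channel
ports form their interfaces, and <tt>main</tt> is simply the code that
plugs the modules together.  Any other consumer task with a compatible
incoming port could be substituted without touching <tt>foundry</tt>.
<PRE>
// buildingblock.go: tasks as modules; channel ports are the interface.
package main

import "fmt"

// foundry produces girders on its single outgoing port.
func foundry(out chan<- string) {
	for i := 1; i <= 3; i++ {
		out <- fmt.Sprintf("girder %d", i)
	}
	close(out) // no more girders
}

// bridge consumes girders on its single incoming port.
func bridge(in <-chan string) {
	for g := range in {
		fmt.Println("assembling", g)
	}
}

func main() {
	ch := make(chan string) // the channel that plugs the two modules together
	go foundry(ch)
	bridge(ch) // a task with a compatible interface could be substituted here
}
</PRE>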
<P>
Strong similarities exist between the task/channel model and the
popular object-oriented programming paradigm.  Tasks, like objects,
encapsulate data and the code that operates on those data.
Distinguishing features of the task/channel model are its concurrency,
its use of channels rather than method calls to specify interactions,
and its lack of support for inheritance.
<P>
<em>Determinism.</em> An algorithm or program is deterministic if
execution with a particular input always yields the same output.  It
is nondeterministic if multiple executions with the same input can
give different outputs.  Although nondeterminism is sometimes useful
and must be supported, a parallel programming model that makes it easy
to write deterministic programs is highly desirable.
Deterministic programs tend to be easier to understand.  Also, when
checking for correctness, only one execution sequence of a parallel
program needs to be considered, rather than all possible executions.
<P>
The "arms-length" interactions supported by the task/channel model
make determinism relatively easy to guarantee.  As we shall see in
Part II when we consider programming tools, it suffices to verify
that each channel has a single sender and a single receiver
and that a task receiving on a channel blocks until a receive
operation is complete.  These conditions can be relaxed when
nondeterministic interactions are required.
<P>
In the bridge construction example, determinism means that the same
bridge will be constructed regardless of the rates at which the
foundry builds girders and the assembly crew puts girders together.
If the assembly crew runs ahead of the foundry, it will block, waiting
for girders to arrive.  Hence, it simply suspends its operations until
more girders are available, rather than attempting to continue
construction with half-completed girders.  Similarly, if the foundry
produces girders faster than the assembly crew can use them, these
girders simply accumulate until they are needed.  Determinism would be
guaranteed even if several bridges were constructed simultaneously: As
long as girders destined for different bridges travel on
distinct channels, they cannot be confused.
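<P>
A small Go sketch of ours makes the guarantee visible: the girder
channel has a single sender and a single receiver, receives block, and
a buffer lets surplus girders accumulate.  However the
<tt>time.Sleep</tt> delay below is varied, the girders are placed in
the same order on every run.
<PRE>
// determinism.go: the same bridge is built whatever the relative rates.
package main

import (
	"fmt"
	"time"
)

func main() {
	girders := make(chan int, 100) // surplus girders simply accumulate here

	go func() { // foundry: produces girders at its own rate
		for i := 1; i <= 5; i++ {
			girders <- i
			time.Sleep(10 * time.Millisecond) // vary freely; output is unchanged
		}
		close(girders)
	}()

	// Assembly crew: a receive blocks until a girder is available,
	// so the crew waits rather than building with missing girders.
	for g := range girders {
		fmt.Println("placing girder", g)
	}
}
</PRE>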
<P>
<P>

<H2><A NAME=SECTION02232000000000000000>1.3.2 Other Programming Models</A></H2>
<P>
<A NAME=sec1other>&#160;</A>
<P>
In subsequent chapters, the task/channel model will often be used to
describe algorithms.  However, this model is certainly not the only
approach that can be taken to representing parallel computation.  Many
other models have been proposed, differing in their flexibility, task
interaction mechanisms, task granularities, and support for locality,
scalability, and modularity.  Here, we review several alternatives.
<P>
<em>Message passing.</em> Message passing is probably the most
widely used parallel
programming model today.  Message-passing programs, like task/channel
programs, create multiple tasks, with each task encapsulating local
data.  Each task is identified by a unique name, and tasks interact by
sending and receiving messages to and from named tasks.  In this
respect, message passing is really just a minor variation on the
task/channel model, differing only in the mechanism used for data
transfer.  For example, rather than sending a message on "channel
<tt>ch</tt>," we may send a message to "task <tt>17</tt>."  We study the
message-passing model in more detail in Chapter <A HREF="node94.html#chapmpi">8</A>, where
we discuss the <em>Message Passing Interface</em>.  In that chapter, we
explain that the definition of channels is a useful discipline even
when designing message-passing programs, because it forces us to
conceptualize the communication structure of a parallel program.
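<P>
To show how small the variation is, here is an illustrative Go sketch
(ours; a real message-passing program would use a library such as MPI).
The helper functions <tt>send</tt> and <tt>recv</tt>, which we invent
here, address a mailbox by task number, so a message goes to "task 0"
rather than onto a particular named channel.
<PRE>
// namedtasks.go: messages addressed to named tasks, not to channels.
package main

import "fmt"

const numTasks = 4

// One mailbox per task: send(dest, msg) stands in for "send on channel ch".
var mailbox [numTasks]chan string

func send(dest int, msg string) { mailbox[dest] <- msg }
func recv(me int) string        { return <-mailbox[me] }

func main() {
	for i := range mailbox {
		mailbox[i] = make(chan string, numTasks)
	}
	done := make(chan bool)
	for id := 0; id < numTasks; id++ {
		go func(me int) { // every task knows its own name
			if me == 0 {
				for i := 1; i < numTasks; i++ {
					fmt.Println("task 0 got:", recv(0))
				}
			} else {
				send(0, fmt.Sprintf("hello from task %d", me))
			}
			done <- true
		}(id)
	}
	for i := 0; i < numTasks; i++ {
		<-done
	}
}
</PRE>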
<P>
The message-passing model does not preclude the dynamic creation of
tasks, the execution of multiple tasks per processor, or the execution
of different programs by different tasks.  However, in practice most
message-passing systems create a fixed number of identical tasks at
program startup and do not allow tasks to be created or destroyed
during program execution.  These systems are said to implement a <em>
single program multiple data</em> (SPMD) programming model because each
task executes the same program but operates on different data.  As
explained in subsequent chapters, the SPMD model is sufficient for a
wide range of parallel programming problems but can hinder the
development of some parallel algorithms.
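<P>
A minimal Go sketch of the SPMD pattern (ours, not a message-passing
system): a fixed number of identical tasks is created at startup, each
executes the same code, and only the <tt>rank</tt> argument selects
which portion of the data a task operates on.
<PRE>
// spmd.go: a fixed set of identical tasks, created once at startup,
// each executing the same code on its own portion of the data.
package main

import (
	"fmt"
	"sync"
)

func main() {
	data := []int{3, 1, 4, 1, 5, 9, 2, 6}
	const p = 4 // number of tasks is fixed for the whole run
	partial := make([]int, p)

	var wg sync.WaitGroup
	n := len(data) / p
	for rank := 0; rank < p; rank++ {
		wg.Add(1)
		go func(rank int) { // the same "program" for every task ...
			defer wg.Done()
			for _, v := range data[rank*n : (rank+1)*n] { // ... different data
				partial[rank] += v
			}
		}(rank)
	}
	wg.Wait()

	sum := 0
	for _, s := range partial {
		sum += s
	}
	fmt.Println("sum =", sum) // always 31
}
</PRE>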
<P>
<em>Data Parallelism.</em> Another commonly used parallel
programming model, data parallelism, calls for exploitation of the
concurrency that derives from the application of the same operation to
multiple elements of
a data structure, for example, "add 2 to all elements of this
array," or "increase the salary of all employees with 5 years
service."  A data-parallel program consists of a sequence of such
operations.  As each operation on each data element can be thought of
as an independent task, the natural granularity of a data-parallel
computation is small, and the concept of "locality" does not arise
naturally.  Hence, data-parallel compilers often require the
programmer to provide information about how data are to be distributed
over processors, in other words, how data are to be partitioned into
tasks.  The compiler can then translate the data-parallel program into
an SPMD formulation, thereby generating communication code
automatically.  We discuss the data-parallel model in more detail in
Chapter <A HREF="node82.html#chaphpf">7</A> under the topic of <em>High Performance
Fortran</em>.  In that chapter, we show that the algorithm design and
analysis techniques developed for the task/channel model apply
directly to data-parallel programs; in particular, they provide the
concepts required to understand the locality and scalability of
data-parallel programs.
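<P>
The array example from the text, sketched in Go under an assumed block
distribution: once the programmer specifies that the array is split
into three equal blocks, the "add 2 to all elements" operation becomes
three coarser tasks, which is essentially the SPMD translation a
data-parallel compiler would generate.
<PRE>
// dataparallel.go: "add 2 to all elements of this array", written the
// way a compiler might translate it once the programmer has specified
// a block distribution of the array over three tasks.
package main

import (
	"fmt"
	"sync"
)

func main() {
	a := []int{10, 20, 30, 40, 50, 60}
	const p = 3 // the distribution: three equal blocks, one per task

	var wg sync.WaitGroup
	n := len(a) / p
	for t := 0; t < p; t++ {
		wg.Add(1)
		go func(lo int) {
			defer wg.Done()
			for i := lo; i < lo+n; i++ {
				a[i] += 2 // the same operation applied to every element
			}
		}(t * n)
	}
	wg.Wait()
	fmt.Println(a) // [12 22 32 42 52 62]
}
</PRE>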
<P>
<em>Shared Memory.</em> In the shared-memory programming model,
tasks share a common address space, which they read and write
asynchronously.  Various mechanisms such as locks and
semaphores may be used to control access to the shared memory.  An
advantage of this model from the programmer's point of view is that
the notion of data "ownership" is lacking, and hence there is no
need to specify explicitly the communication of data from producers to
consumers.  This model can simplify program development.  However,
understanding and managing locality becomes more difficult, an
important consideration (as noted earlier) on most shared-memory
architectures.  It can also be more difficult to write deterministic
programs.
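<P>
A minimal Go sketch of the shared-memory model (ours): several tasks
update one shared variable directly, with no producer-to-consumer
communication, and a <tt>sync.Mutex</tt> makes the updates safe.
Removing the lock illustrates the last point above: the update becomes
a data race, and the printed value can then differ from run to run.
<PRE>
// sharedmem.go: tasks read and write one common address space; a lock,
// rather than a channel, serializes access to the shared variable.
package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		balance int        // shared data: no task "owns" it
		mu      sync.Mutex // lock controlling access to balance
		wg      sync.WaitGroup
	)

	for t := 0; t < 4; t++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < 1000; i++ {
				mu.Lock() // without the lock, concurrent updates could be
				balance++ // lost and the result would vary from run to run
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	fmt.Println("balance =", balance) // always 4000 with the lock held
}
</PRE>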
<P>

<BR><HR><P>
<P><ADDRESS>
<I>&#169; Copyright 1995 by Ian Foster</I>
</ADDRESS>
</BODY>
