<!DOCTYPE HTML PUBLIC "-//W3O//DTD W3 HTML 2.0//EN">
<HTML>
<!-- Converted with LaTeX2HTML 95.1 (Fri Jan 20 1995) by Nikos Drakos (nikos@cbl.leeds.ac.uk), CBLU, University of Leeds -->
<HEAD>
<TITLE>1.3 A Parallel Programming Model</TITLE>
</HEAD>
<BODY>
<meta name="description" value="1.3 A Parallel Programming Model">
<meta name="keywords" value="book">
<meta name="resource-type" value="document">
<meta name="distribution" value="global">
<P>
 <BR> <HR><a href="msgs0.htm#2" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/tppmsgs/msgs0.htm#2"><img ALIGN=MIDDLE src="asm_color_tiny.gif" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/asm_color_tiny.gif" alt="[DBPP]"></a>    <A NAME=tex2html1942 HREF="node8.html" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/node8.html"><IMG ALIGN=MIDDLE ALT="previous" SRC="previous_motif.gif" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/previous_motif.gif"></A> <A NAME=tex2html1950 HREF="node10.html" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/node10.html"><IMG ALIGN=MIDDLE ALT="next" SRC="next_motif.gif" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/next_motif.gif"></A> <A NAME=tex2html1948 HREF="node6.html" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/node6.html"><IMG ALIGN=MIDDLE ALT="up" SRC="up_motif.gif" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/up_motif.gif"></A> <A NAME=tex2html1952 HREF="node1.html" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/node1.html"><IMG ALIGN=MIDDLE ALT="contents" SRC="contents_motif.gif" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/contents_motif.gif"></A> <A NAME=tex2html1953 HREF="node133.html" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/node133.html"><IMG ALIGN=MIDDLE ALT="index" SRC="index_motif.gif" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/index_motif.gif"></A> <a href="msgs0.htm#3" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/tppmsgs/msgs0.htm#3"><img ALIGN=MIDDLE src="search_motif.gif" tppabs="http://www.dit.hcmut.edu.vn/books/system/par_anl/search_motif.gif" alt="[Search]"></a>   <BR>
<B> Next:</B> <A NAME=tex2html1951 HREF="node10.html">1.4 Parallel Algorithm Examples</A>
<B>Up:</B> <A NAME=tex2html1949 HREF="node6.html">1 Parallel Computers and Computation</A>
<B> Previous:</B> <A NAME=tex2html1943 HREF="node8.html">1.2 A Parallel Machine Model</A>
<BR><HR><P>
<H1><A NAME=SECTION02230000000000000000>1.3 A Parallel Programming Model</A></H1>
<P>
<A NAME=secmodel>&#160;</A>
<P>
The von Neumann machine model assumes a processor able to execute
sequences of instructions.  An instruction can specify, in addition to
various arithmetic operations, the address of a datum to be read or
written in memory and/or the address of the next instruction to be
executed.  While it is possible to program a computer in terms of this
basic model by writing machine language, this method is for most
purposes prohibitively complex, because we must keep track of millions
of memory locations and organize the execution of thousands of machine
<A NAME=422>&#160;</A>
instructions.  Hence, modular design techniques are applied, whereby
complex programs are constructed from simple components, and
components are structured in terms of higher-level abstractions such
as data structures, iterative loops, and procedures.  Abstractions
such as procedures make the exploitation of modularity easier by
allowing objects to be manipulated without concern for their internal
structure.  So do high-level languages such as Fortran, Pascal, C, and
Ada, which allow designs expressed in terms of these abstractions to
be translated automatically into executable code.
<P>
<A NAME=423>&#160;</A>
Parallel programming introduces additional sources of complexity: if
we were to program at the lowest level, not only would the number of
instructions executed increase, but we would also need to manage
explicitly the execution of thousands of processors and coordinate
millions of interprocessor interactions.  Hence, abstraction and
modularity are at least as important as in sequential programming.  In
fact, we shall emphasize <em> modularity
 </em> as a fourth fundamental
<A NAME=425>&#160;</A>
requirement for parallel software, in addition to concurrency,
scalability, and locality.
<P>
<H2><A NAME=SECTION02231000000000000000>1.3.1 Tasks and Channels</A></H2>
<P>
<A NAME=secc1det>&#160;</A>
<P>
<P><A NAME=893>&#160;</A><IMG BORDER=0 ALIGN=BOTTOM ALT="" SRC="img102.gif">
<BR><STRONG>Figure 1.7:</STRONG> <em> A simple parallel programming model.  The figure shows
both the instantaneous state of a computation and a detailed picture
of a single task.  A computation consists of a set of tasks
(represented by circles) connected by channels (arrows).  A task
encapsulates a program and local memory and defines a set of ports
that define its interface to its environment.  A channel is a message
queue into which a sender can place messages and from which a receiver
can remove messages, ``blocking'' if messages are not
available.</em><A NAME=figcm>&#160;</A><BR>
<P>
<P>
<A NAME=432>&#160;</A>
We consider next the question of which abstractions are appropriate
<A NAME=433>&#160;</A>
and useful in a parallel programming model.  Clearly,
mechanisms are needed that allow explicit discussion about concurrency and
locality and that facilitate development of scalable and modular
programs.  Also needed are abstractions that are simple to work with and
that match the architectural model, the multicomputer.  While numerous
possible abstractions could be considered for this purpose, two fit
<A NAME=434>&#160;</A>
these requirements particularly well: the <em> task
 </em> and <em>
channel</em>.  These are illustrated in Figure <A HREF="node9.html#figcm">1.7</A> and can be
summarized as follows:
<P>
<P><A NAME=908>&#160;</A><IMG BORDER=0 ALIGN=BOTTOM ALT="" SRC="img103.gif">
<BR><STRONG>Figure 1.8:</STRONG> <em> The four basic task actions. In addition to reading and
writing local memory, a task can send a message, receive a message,
create new tasks (suspending until they terminate), and
terminate.</em><A NAME=figactions>&#160;</A><BR>
<P>
<P>
<OL><LI>
A parallel computation consists of one or more tasks.  Tasks
execute concurrently.  The number of tasks can vary during program
execution.
<P>
<LI>
A task encapsulates a sequential program and local memory.  (In
effect, it is a virtual von Neumann machine.)  In addition, a set of
<em> inports
 </em> and <em> outports
 </em> define its interface to its
<A NAME=445>&#160;</A>
environment.
<P>
<LI>
A task can perform four basic actions in addition to reading and
writing its local memory (Figure <A HREF="node9.html#figactions">1.8</A>): send messages on
its outports, receive messages on its inports, create new tasks, and
terminate.
<P>
<LI>
A send operation is asynchronous: it completes immediately.  A receive
operation is synchronous: it causes execution of the task to block
until a message is available.
<P>
<LI>
<A NAME=447>&#160;</A>
Outport/inport pairs can be connected by message queues called <em>
channels</em>.  Channels can be created and deleted, and references to
channels (ports) can be included in messages, so connectivity can vary
dynamically.
<P>
<LI>
<A NAME=449>&#160;</A>
Tasks can be mapped to physical processors in various ways; the
mapping employed does not affect the semantics of a program.  In
particular, multiple tasks can be mapped to a single processor.  (We
can also imagine a single task being mapped to multiple processors,
but that possibility is not considered here.)
<P>
</OL>
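<P>
The following sketch is an editorial illustration, not part of the original
text.  It models tasks as Python threads and a channel as an unbounded
<tt>queue.Queue</tt>: <tt>put</tt> returns immediately (the asynchronous
send), while <tt>get</tt> blocks until a message is available (the
synchronous receive).  All names are invented for illustration.
<P>
<PRE>
import queue
import threading

def producer(outport):
    # Send is asynchronous: put() on an unbounded queue returns immediately.
    for i in range(5):
        outport.put(i)
    outport.put(None)          # sentinel marking the end of the stream

def consumer(inport):
    # Receive is synchronous: get() blocks until a message is available.
    while True:
        msg = inport.get()
        if msg is None:
            break
        print("received", msg)

channel = queue.Queue()        # the channel: a message queue linking two ports
tasks = [threading.Thread(target=producer, args=(channel,)),
         threading.Thread(target=consumer, args=(channel,))]
for t in tasks: t.start()      # the two tasks execute concurrently
for t in tasks: t.join()       # suspend until both tasks terminate
</PRE>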
<P>
<A NAME=451>&#160;</A>
The task abstraction provides a mechanism for talking about locality:
<A NAME=452>&#160;</A>
data contained in a task's local memory are ``close''; other data are
``remote.''  The channel abstraction provides a mechanism for
indicating that computation in one task requires data in another task
in order to proceed.  (This is termed a <em>data dependency</em>.)
The
<A NAME=454>&#160;</A>
following simple example illustrates some
<A NAME=455>&#160;</A>
of these features.
<P>
<BR><HR>
<b> Example <IMG BORDER=0 ALIGN=BOTTOM ALT="" SRC="img106.gif">.<IMG BORDER=0 ALIGN=BOTTOM ALT="" SRC="img104.gif">    Bridge Construction</b>:<A NAME=exbridge>&#160;</A>
<P>
<A NAME=458>&#160;</A>
Consider the following real-world problem.  A bridge is to be
assembled from girders being constructed at a foundry.
These two activities are organized by providing trucks to transport girders
from the foundry to the bridge site.  This situation is illustrated in
Figure <A HREF="node9.html#figbridge">1.9</A>(a), with the foundry and bridge represented as
tasks and the stream of trucks as a channel.  Notice that this
approach allows assembly of the bridge and construction of girders to
proceed in parallel without any explicit coordination: the foundry
crew puts girders on trucks as they are produced, and the assembly crew
adds girders to the bridge as and when they arrive.
<P>
<P><A NAME=938>&#160;</A><IMG BORDER=0 ALIGN=BOTTOM ALT="" SRC="img105.gif">
<BR><STRONG>Figure 1.9:</STRONG> <em> Two solutions to the bridge construction problem.  Both
represent the foundry and the bridge assembly site as separate tasks,
<tt> foundry</tt> and <tt> bridge</tt>.  The first uses a single channel on
which girders generated by <tt> foundry</tt> are transported as fast as
they are generated.  If <tt> foundry</tt> generates girders faster than
they are consumed by <tt> bridge</tt>, then girders accumulate at the
construction site.  The second solution uses a second channel to pass
flow control messages from <tt> bridge</tt> to <tt> foundry</tt> so as to
avoid overflow.</em><A NAME=figbridge>&#160;</A><BR>
<P>
<P>
A disadvantage of this scheme is that the foundry may produce girders
much faster than the assembly crew can use them.  To prevent the
bridge site from overflowing with girders, the assembly crew instead
can explicitly request more girders when stocks run low.  This refined
approach is illustrated in Figure <A HREF="node9.html#figbridge">1.9</A>(b), with the stream
of requests represented as a second channel.  The second channel can
also be used to shut down the flow of girders when the bridge is
complete.
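<P>
As a hedged illustration of this second solution (an editorial sketch in the
same Python notation as above, not code from the book), a second queue
carries flow control requests from <tt>bridge</tt> to <tt>foundry</tt>, so
that girders are produced only on demand:
<P>
<PRE>
import queue
import threading

def foundry(girders_out, requests_in, total=10):
    sent = 0
    while sent < total:
        n = requests_in.get()              # wait for a request from bridge
        for _ in range(min(n, total - sent)):
            girders_out.put("girder")
            sent += 1
    girders_out.put(None)                  # shut down the flow when done

def bridge(girders_in, requests_out, batch=2):
    requests_out.put(batch)                # request an initial batch
    used = 0
    while True:
        g = girders_in.get()
        if g is None:
            break
        used += 1
        if used % batch == 0:              # stocks run low: request more
            requests_out.put(batch)
    print("assembled", used, "girders")

girders, requests = queue.Queue(), queue.Queue()
tasks = [threading.Thread(target=foundry, args=(girders, requests)),
         threading.Thread(target=bridge, args=(girders, requests))]
for t in tasks: t.start()
for t in tasks: t.join()
</PRE>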
<P>
<BR><HR>
<P>
We now examine some other properties of this task/channel programming
model: performance, mapping independence, modularity, and determinism.
<A NAME=471>&#160;</A>
<P>
<em> Performance.</em> Sequential programming abstractions such as
procedures and data structures are effective because they can be
mapped simply and efficiently to the von Neumann computer.  The task
and channel have a similarly direct mapping to the multicomputer.  A
task represents a piece of code that can be executed sequentially, on
a single processor.  If two tasks that share a channel are mapped to
different processors, the channel connection is implemented as
interprocessor communication; if they are mapped to the same
processor, more efficient mechanisms can be used.
<A NAME=474>&#160;</A>
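<P>
As an editorial sketch of this point (the module names are illustrative
assumptions, not the book's notation): Python's <tt>queue.Queue</tt> and
<tt>multiprocessing.Queue</tt> share the same <tt>put</tt>/<tt>get</tt>
interface, so the same task code can be mapped either to a thread within one
process or to a separate operating-system process, with no change to its
semantics:
<P>
<PRE>
import multiprocessing
import queue
import threading

def doubler(inport, outport):
    # The task sees only its ports; the mapping is invisible to it.
    outport.put(2 * inport.get())

def run(task_factory, channel_factory):
    a, b = channel_factory(), channel_factory()
    task = task_factory(target=doubler, args=(a, b))
    task.start()
    a.put(21)              # send a message to the task
    result = b.get()       # receive its reply
    task.join()
    return result

if __name__ == "__main__":
    # The same task code under two different mappings:
    print(run(threading.Thread, queue.Queue))                   # same process
    print(run(multiprocessing.Process, multiprocessing.Queue))  # separate processes
</PRE>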
