<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
	<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
	<style type="text/css">
	body { font-family: Verdana, Arial, Helvetica, sans-serif; }
	a.at-term { font-style: italic; }
	</style>
	<title>Parallel Programming Approaches</title>
	<meta name="Generator" content="ATutor" />
	<meta name="Keywords" content="" />
</head>

<body>

<p> As parallel computer architectures have changed, the programming approaches used on those computers have changed as well. The most widely used methods of
parallel programming are <em>explicit threading</em>, <em>message passing</em>, and <em>compiler directives</em>. </p>

<h3> <a name="threads"> Explicit Threading </a> </h3>

<p> Explicit threading is an approach in which an application creates several parallel "threads" of control within the same process space in memory. These 
threads communicate with each other through shared regions of memory, whose consistency is controlled using <em>locks</em>, <em>semaphores</em>, or <em>mutexes</em>. Explicit threading typically requires that synchronization and locking issues be handled by the programmer. The decomposition of the application into parallel tasks is likewise left to the programmer. Because of the need for shared memory regions, explicit threading is generally not well supported on distributed memory systems. </p>

<p> Explicit threading is very common in data server applications but to date has not been used much in scientific programming. The most commonly used interface 
to threads is the POSIX Threads or <em>pthreads</em> library, although several others are also available. </p>
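<p> For illustration, the following is a minimal pthreads sketch (assuming a POSIX system and a C compiler; the names are illustrative, not taken from any particular application). Two threads update a counter in shared memory, with a mutex guarding the update: </p>

<pre>
/* Minimal pthreads sketch: two threads increment a shared counter.
 * The mutex serializes access to the shared memory location.
 * Compile with: cc -pthread threads.c */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                        /* shared state */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);              /* programmer-managed locking */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);                     /* explicit synchronization */
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);         /* prints 200000 */
    return 0;
}
</pre>

<p> Without the mutex, the two threads would race on <code>counter</code> and the final value would be unpredictable, which illustrates why locking is the programmer's responsibility in this model. </p>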

<h3> <a name="msgpassing"> Message Passing </a> </h3>

<p> In <em><a href="../glossary.html#message+passing" target="body" class="at-term">message passing</a></em>, an application consists of several <em><a href="../glossary.html#Processes" target="body" class="at-term">processes</a></em> that may or may not be the same. These processes communicate by passing data to one another in a pairwise (send/receive), one-sided (get/put), or collective (broadcast/scatter/gather) fashion. As with explicit threading, programmers are required to do their own synchronization. However, locking is typically not required because nothing is shared. Decomposing an application into parallel tasks is also the programmer's responsibility in message passing. A commonly used approach is domain decomposition, in which each task is assigned a subset of the computational domain and communicates the solution values on the edges of its subdomain to the tasks responsible for neighboring subdomains. Message passing is typically considered best suited to distributed memory systems; however, it is also easily implemented on shared memory systems. </p>

<p> Message passing in scientific programming has been more or less standardized on the <em><a href="../glossary.html#Message+Passing+Interface" target="body" class="at-term">Message Passing Interface</a></em> or <em><a href="../glossary.html#MPI" target="body" class="at-term">MPI</a></em> library, although other interfaces such as Parallel Virtual Machine (<em>PVM</em>) and Cray's <em>SHMEM</em> also exist. In the commercial arena, interfaces such as <em>CORBA</em> and Microsoft's <em>DCOM</em> are used. </p>
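<p> As a sketch of the domain decomposition pattern described above (assuming an MPI implementation is installed; the array and subdomain size are illustrative), each process below owns one slice of a one-dimensional domain and exchanges its edge values with its neighbors: </p>

<pre>
/* Minimal MPI sketch of 1-D domain decomposition: each rank holds a
 * subdomain of N cells plus two halo cells, and exchanges edge values
 * with its neighbors.  Compile with mpicc; run with mpirun -np 4. */
#include <mpi.h>

#define N 8                          /* illustrative local subdomain size */

int main(int argc, char **argv)
{
    int rank, size;
    double u[N + 2];                 /* u[0] and u[N+1] are halo cells */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (int i = 1; i <= N; i++)
        u[i] = (double)rank;         /* fill the locally owned cells */

    /* MPI_PROC_NULL turns the exchanges at the domain ends into no-ops */
    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    /* pairwise exchange: send right edge right, receive left halo */
    MPI_Sendrecv(&u[N], 1, MPI_DOUBLE, right, 0,
                 &u[0], 1, MPI_DOUBLE, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    /* send left edge left, receive right halo */
    MPI_Sendrecv(&u[1],     1, MPI_DOUBLE, left,  1,
                 &u[N + 1], 1, MPI_DOUBLE, right, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}
</pre>

<p> Note that no locks appear anywhere: each process works only on its own memory, and data moves solely through the paired send/receive calls. </p>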

<h3> <a name="directives"> Compiler Directives </a> </h3>

<p> In the <em><a href="../glossary.html#compiler+directives" target="body" class="at-term">compiler directives</a></em> approach, special comments are added to an otherwise serial program to indicate which regions of the program may be parallelized. This requires a special compiler that understands the set of directives used and generates the appropriate code, which may use message passing or (more often) explicit threading to implement communication. Locking and synchronization are generally handled by the compiler unless overridden by directives. The decomposition of an application into parallel tasks is done primarily by the compiler, with help from the programmer in the form of directives. Because compiler directive approaches abstract the communication layer, they are, in principle, equally applicable to shared and distributed memory systems; in practice, however, they are used much more widely on shared memory systems. The scalability of applications parallelized with a directive-based approach is often much more limited than that of message passing applications, because the programmer has less control over how the code is parallelized. </p>

<p> Historically, a number of vendor-specific compiler directive sets have been available. In recent years, however, these have largely converged on the <em><a href="../glossary.html#OpenMP" target="body" class="at-term">OpenMP</a></em> directive set, which supports C, C++, and Fortran and is aimed at shared memory systems. Another widely used directive set is High Performance Fortran (<em>HPF</em>), which is aimed at both shared and distributed memory systems. </p>
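<p> A minimal OpenMP sketch of the directive style follows (assuming a compiler with OpenMP support, e.g. built with an option such as <code>-fopenmp</code>; the loop body is a placeholder). The directive is a structured comment: a compiler without OpenMP support simply ignores it and runs the loop serially: </p>

<pre>
/* Minimal OpenMP sketch: the directive tells the compiler to split the
 * loop iterations among threads; the reduction clause makes the compiler
 * handle synchronization on the shared sum. */
#include <stdio.h>

#define N 1000000

static double a[N];

int main(void)
{
    double sum = 0.0;

    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        a[i] = 0.5 * (double)i;
        sum += a[i];             /* no explicit locking by the programmer */
    }

    printf("sum = %f\n", sum);
    return 0;
}
</pre>

<p> The decomposition of the loop across threads, and the combination of the per-thread partial sums, are both generated by the compiler; the programmer only marks the loop as parallelizable. </p>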

<h3> <a name="multilevel"> Multilevel Parallel Programming </a> </h3>

<p> <em><a href="../glossary.html#multilevel+parallel+programming" target="body" class="at-term">Multilevel parallel programming</a></em> combines message passing with either compiler directives or explicit threading. This approach stems partly from the recognition of distributed shared memory systems as a major direction in the future of parallel computing architectures, and partly from an understanding of the structure of most scientific applications that use message passing. Such applications are typically made up of large, computationally expensive loops punctuated by calls to the message passing library. In many cases, these loop structures can be further parallelized using compiler directives. </p>
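<p> The resulting structure might look like the following sketch (assuming both an MPI library and an OpenMP-capable compiler; the computation is a placeholder): message passing distributes work across processes, while a directive parallelizes the expensive local loop across the threads within each process: </p>

<pre>
/* Multilevel sketch: MPI provides the outer (inter-process) level of
 * parallelism, OpenMP the inner (intra-process, threaded) level.
 * Compile with mpicc plus the compiler's OpenMP option. */
#include <mpi.h>
#include <stdio.h>

#define N 1000000                    /* illustrative per-process work */

int main(int argc, char **argv)
{
    int rank, provided;
    double local_sum = 0.0, global_sum = 0.0;

    /* ask MPI for thread support suitable for OpenMP regions */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* inner level: the expensive loop is threaded via a directive */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < N; i++)
        local_sum += (double)i;

    /* outer level: combine per-process results with message passing */
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %f\n", global_sum);

    MPI_Finalize();
    return 0;
}
</pre>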

<p> As will be seen in the next few chapters, this approach has considerable potential for performance gains and does not substantially increase the difficulty of programming over exclusive use of message passing or compiler directives. While not a solution to all parallel programming problems, the multilevel approach can be of great benefit to some classes of scientific applications. </p>

</body>
</html>
