<P><A NAME=ModelsMessage> <BR><BR> </A>
<TABLE BORDER=1 CELLPADDING=5 CELLSPACING=0 WIDTH=100%>
<TR><TD BGCOLOR=#98ABCE><SPAN class=heading1>Parallel Programming Models</SPAN></TD></TR>
</TABLE>

<H2>Message Passing Model</H2>

<UL>
<P>
<LI>The message passing model demonstrates the following characteristics:
    <IMG SRC=images/msg_pass_model.gif ALIGN=right WIDTH=397 HEIGHT=142
     BORDER=0 HSPACE=10 VSPACE=10 ALT='Message Passing Model'>
    <UL>
    <P>
    <LI>A set of tasks that use their own local memory during computation.
        Multiple tasks can reside on the same physical machine as well as
        across an arbitrary number of machines.
    <P>
    <LI>Tasks exchange data through communications by sending and
        receiving messages.
    <P>
    <LI>Data transfer usually requires cooperative operations to be performed
        by each process. For example, a send operation must have a matching
        receive operation (see the short example at the end of this section).
    </UL>
</UL>

<P>
<IMG SRC=../images/arrowBullet.gif ALIGN=top HSPACE=3><SPAN CLASS=heading3>Implementations:</SPAN>
<UL>
    <P>
    <LI>From a programming perspective, message passing implementations commonly
        comprise a library of subroutines that are embedded in source code.
        The programmer is responsible for determining all parallelism.
    <P>
    <LI>Historically, a variety of message passing libraries have been
        available since the 1980s. These implementations differed substantially
        from each other, making it difficult for programmers to develop
        portable applications.
    <P>
    <LI>In 1992, the MPI Forum was formed with the primary goal of establishing
        a standard interface for message passing implementations.
    <P>
    <LI>Part 1 of the <B>Message Passing Interface (MPI)</B> was released in
        1994. Part 2 (MPI-2) was released in 1996.
        Both MPI specifications are available on the web at
        <A HREF=http://www.mcs.anl.gov/Projects/mpi/standard.html
        TARGET=mpistandard>www.mcs.anl.gov/Projects/mpi/standard.html</A>.
    <P>
    <LI>MPI is now the "de facto" industry standard for message passing,
        replacing virtually all other message passing implementations used
        for production work. Most, if not all, of the popular parallel
        computing platforms offer at least one implementation of MPI. A few
        offer a full implementation of MPI-2.
    <P>
    <LI>For shared memory architectures, MPI implementations usually don't
        use a network for task communications. Instead, they use shared
        memory (memory copies) for performance reasons.
    </UL>
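<P>
The cooperative send/receive pairing described above can be illustrated with a
short C sketch that uses the MPI library. This example is not part of the
original text; it is a minimal sketch in which task 0 sends a single integer to
task 1, which posts the matching receive.

<PRE>
#include &lt;mpi.h&gt;
#include &lt;stdio.h&gt;

/* Minimal MPI sketch: task 0 sends one integer to task 1, which posts the
   matching receive. Run with at least two tasks, e.g. "mpirun -np 2 ./a.out". */
int main(int argc, char *argv[])
{
    int rank, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* the send ... */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* ... and its matching receive */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("task 1 received %d from task 0\n", value);
    }

    MPI_Finalize();
    return 0;
}
</PRE>

<P>
Each side names its partner by rank and uses the same message tag (0 here); the
receive blocks until a matching send arrives, which is what makes the operation
cooperative.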
<!--========================================================================-->

<P><A NAME=ModelsData> <BR><BR> </A>
<TABLE BORDER=1 CELLPADDING=5 CELLSPACING=0 WIDTH=100%>
<TR><TD BGCOLOR=#98ABCE><SPAN class=heading1>Parallel Programming Models</SPAN></TD></TR>
</TABLE>

<H2>Data Parallel Model</H2>

<UL>
<P>
<LI>The data parallel model demonstrates the following characteristics:
    <IMG SRC=images/data_parallel_model.gif ALIGN=right WIDTH=409 HEIGHT=362
     BORDER=0 HSPACE=10 VSPACE=10 ALT='Data Parallel Model'>
    <UL>
    <P>
    <LI>Most of the parallel work focuses on performing operations on a
        data set. The data set is typically organized into a common
        structure, such as an array or cube.
    <P>
    <LI>A set of tasks works collectively on the same data structure;
        however, each task works on a different partition of that structure.
    <P>
    <LI>Tasks perform the same operation on their partition of work, for
        example, "add 4 to every array element" (see the sketch at the end
        of this section).
    </UL>
<P>
<LI>On shared memory architectures, all tasks may have access to the data
    structure through global memory. On distributed memory architectures,
    the data structure is split up and resides as "chunks" in the local
    memory of each task.
</UL>
<BR CLEAR=all>

<P>
<IMG SRC=../images/arrowBullet.gif ALIGN=top HSPACE=3><SPAN CLASS=heading3>Implementations:</SPAN>
<UL>
    <P>
    <LI>Programming with the data parallel model is usually accomplished by
        writing a program with data parallel constructs. The constructs can
        be calls to a data parallel subroutine library or compiler directives
        recognized by a data parallel compiler.
    <P>
    <LI><B>Fortran 90 and 95 (F90, F95):</B> ISO/ANSI standard extensions to
        Fortran 77.
        <UL>
        <LI>Contains everything that is in Fortran 77
        <LI>New source code format; additions to the character set
        <LI>Additions to program structure and commands
        <LI>Variable additions - methods and arguments
        <LI>Pointers and dynamic memory allocation added
        <LI>Array processing (arrays treated as objects) added
        <LI>Recursive and new intrinsic functions added
        <LI>Many other new features
        </UL>
        <P>
        Implementations are available for most common parallel platforms.
    <P>
    <LI><B>High Performance Fortran (HPF):</B> Extensions to Fortran 90 to
        support data parallel programming.
        <UL>
        <LI>Contains everything in Fortran 90
        <LI>Directives to tell the compiler how to distribute data added
        <LI>Assertions that can improve optimization of generated code added
        <LI>Data parallel constructs added (now part of Fortran 95)
        </UL>
        <P>
        Implementations are available for most common parallel platforms.
    <P>
    <LI><B>Compiler Directives:</B> Allow the programmer to specify the
        distribution and alignment of data. Fortran implementations are
        available for most common parallel platforms.
    <P>
    <LI>Distributed memory implementations of this model usually have the
        compiler convert the program into standard code with calls to a
        message passing library (usually MPI) to distribute the data to all
        the processes. All message passing is done invisibly to the
        programmer.
    </UL>
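<P>
The Fortran-oriented implementations above aside, the partitioning idea can be
sketched in C: the array is divided into equal chunks and every task performs
the same operation, "add 4 to every array element", on its own chunk. This is
an illustrative sketch only; the use of OpenMP threads as the "tasks", the
array size, and the chunk arithmetic are assumptions, not part of the original
text.

<PRE>
#include &lt;omp.h&gt;
#include &lt;stdio.h&gt;

#define N 1000000

/* Data parallel sketch: each task (an OpenMP thread here) owns one
   contiguous partition of the array and applies the same operation to it.
   Compile with OpenMP enabled, e.g. "cc -fopenmp chunks.c".              */
int main(void)
{
    static double a[N];

    #pragma omp parallel
    {
        int ntasks = omp_get_num_threads();
        int id     = omp_get_thread_num();
        int chunk  = (N + ntasks - 1) / ntasks;   /* partition size        */
        int lo     = id * chunk;
        int hi     = lo + chunk;
        if (hi > N) hi = N;

        for (int i = lo; i < hi; i++)             /* same operation,       */
            a[i] += 4.0;                          /* different partition   */
    }

    printf("a[0] = %.1f  a[N-1] = %.1f\n", a[0], a[N - 1]);
    return 0;
}
</PRE>

<P>
On a distributed memory machine the same pattern would compute the bounds from
a task rank instead, with each "chunk" residing in that task's local memory.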
<!--========================================================================-->

<P><A NAME=ModelsOther> <BR><BR> </A>
<TABLE BORDER=1 CELLPADDING=5 CELLSPACING=0 WIDTH=100%>
<TR><TD BGCOLOR=#98ABCE><SPAN class=heading1>Parallel Programming Models</SPAN></TD></TR>
</TABLE>

<H2>Other Models</H2>

<UL>
<P>
<LI>Other parallel programming models besides those previously mentioned
    certainly exist, and they will continue to evolve along with the
    ever-changing world of computer hardware and software. Only three of
    the more common ones are mentioned here.
</UL>

<P>
<IMG SRC=../images/arrowBullet.gif ALIGN=top HSPACE=3><SPAN CLASS=heading3>Hybrid:</SPAN>
    <UL>
    <P>
    <LI>In this model, any two or more parallel programming models
        are combined.
    <P>
    <LI>Currently, a common example of a hybrid model is the combination
        of the message passing model (MPI) with either the threads model
        (POSIX threads) or the shared memory model (OpenMP). This hybrid
        model lends itself well to the increasingly common hardware
        environment of networked SMP machines (see the sketch at the end
        of this section).
    <P>
    <LI>Another common example of a hybrid model is combining data
        parallel with message passing. As mentioned in the data parallel
        model section previously, data parallel implementations (F90, HPF)
        on distributed memory architectures actually use message passing
        to transmit data between tasks, transparently to the programmer.
    </UL>

<P>
<IMG SRC=../images/arrowBullet.gif ALIGN=top HSPACE=3><SPAN CLASS=heading3>Single Program Multiple Data (SPMD):</SPAN>
    <UL>
    <P>
    <LI>SPMD is actually a "high level" programming model that can be
        built upon any combination of the previously mentioned parallel
        programming models.
        <IMG SRC=images/spmd_model.gif ALIGN=right WIDTH=395 HEIGHT=110
         BORDER=0 HSPACE=10 VSPACE=10 ALT='SPMD Model'>
    <P>
    <LI>A single program is executed by all tasks simultaneously.
    <P>
    <LI>At any moment in time, tasks can be executing the same or different
        instructions within the same program.
    <P>
    <LI>SPMD programs usually have the necessary logic programmed into them
        to allow different tasks to branch or conditionally execute only
        those parts of the program they are designed to execute. That is,
        tasks do not necessarily have to execute the entire program -
        perhaps only a portion of it.
    <P>
    <LI>All tasks may use different data.
    </UL>

<P>
<IMG SRC=../images/arrowBullet.gif ALIGN=top HSPACE=3><SPAN CLASS=heading3>Multiple Program Multiple Data (MPMD):</SPAN>
    <UL>
    <P>
    <LI>Like SPMD, MPMD is actually a "high level" programming model that
        can be built upon any combination of the previously mentioned
        parallel programming models.
        <IMG SRC=images/mpmd_model.gif ALIGN=right WIDTH=395 HEIGHT=110
         BORDER=0 HSPACE=10 VSPACE=10 ALT='MPMD Model'>
    <P>
    <LI>MPMD applications typically have multiple executable object files
        (programs). While the application is being run in parallel, each
        task can be executing the same or a different program than the
        other tasks.
    <P>
    <LI>All tasks may use different data.
    </UL>
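<P>
The hybrid and SPMD ideas above can be combined in one short C sketch (not
from the original text): every task runs the same program, branches on its
MPI rank, uses OpenMP threads for the shared memory work inside each task,
and uses MPI messages between tasks. The work loop and problem size are
placeholder assumptions.

<PRE>
#include &lt;mpi.h&gt;
#include &lt;stdio.h&gt;

#define N 1000000

/* Hybrid, SPMD-style sketch: one executable started on every task
   ("mpirun -np 4 ./a.out"), MPI between tasks, OpenMP threads within a task.
   Compile e.g. with "mpicc -fopenmp hybrid.c".                             */
int main(int argc, char *argv[])
{
    int rank, ntasks;
    double local = 0.0, total = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    /* SPMD: every task executes this same code, but only on its own slice. */
    int chunk = N / ntasks;
    int lo = rank * chunk;
    int hi = (rank == ntasks - 1) ? N : lo + chunk;

    /* Shared memory parallelism inside the task (OpenMP threads). */
    #pragma omp parallel for reduction(+:local)
    for (int i = lo; i < hi; i++)
        local += (double)i;                /* placeholder for real work */

    /* Message passing between tasks (MPI). */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)                         /* only one task executes this branch */
        printf("total = %.0f\n", total);

    MPI_Finalize();
    return 0;
}
</PRE>

<P>
An MPMD version of the same application would instead build two or more
different executables (for example a coordinator program and a worker program)
and launch them together as one parallel job.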
<!--========================================================================-->

<P><A NAME=Designing> <BR><BR> </A><A NAME=DesignAutomatic> </A>
<TABLE BORDER=1 CELLPADDING=5 CELLSPACING=0 WIDTH=100%>
<TR><TD BGCOLOR=#98ABCE><SPAN class=heading1>Designing Parallel Programs</SPAN></TD></TR>
</TABLE>

<H2>Automatic vs. Manual Parallelization</H2>

<UL>
<P>
<LI>Designing and developing parallel programs has characteristically been a
    very manual process. The programmer is typically responsible for both
    identifying and actually implementing parallelism.
<P>
<LI>Very often, manually developing parallel codes is a time-consuming,
    complex, error-prone and <I><B>iterative</B></I> process.
<P>
<LI>For a number of years now, various tools have been available to assist
    the programmer with converting serial programs into parallel programs.
    The most common type of tool used to automatically parallelize a serial
    program is a parallelizing compiler or pre-processor.
<P>
<LI>A parallelizing compiler generally works in two different ways:
    <UL>
    <P>
    <LI>Fully Automatic
        <UL TYPE=circle>
        <LI>The compiler analyzes the source code and identifies
            opportunities for parallelism.
        <LI>The analysis includes identifying inhibitors to parallelism and
            possibly a cost weighting on whether or not the parallelism
            would actually improve performance.
        <LI>Loops (do, for) are the most frequent target for automatic
            parallelization.
        </UL>
    <P>
    <LI>Programmer Directed
        <UL TYPE=circle>
        <LI>Using "compiler directives" or possibly compiler flags, the
            programmer explicitly tells the compiler how to parallelize
            the code.
        <LI>May be used in conjunction with some degree of automatic
            parallelization.
        </UL>
    </UL>
<P>
<LI>If you are beginning with an existing serial code and have time or
    budget constraints, then automatic parallelization may be the answer.
    However, there are several important caveats that apply to automatic
    parallelization:
    <UL>
    <LI>Wrong results may be produced
    <LI>Performance may actually degrade
    <LI>Much less flexible than manual parallelization
    <LI>Limited to a subset (mostly loops) of code
    <LI>May actually not parallelize code if the analysis suggests there
        are inhibitors or the code is too complex
    <LI>Most automatic parallelization tools are for Fortran
    </UL>
<P>
<LI>The remainder of this section applies to the manual method of
    developing parallel codes.
</UL>

<!--========================================================================-->

<P><A NAME=DesignUnderstand> <BR><BR> </A>
<TABLE BORDER=1 CELLPADDING=5 CELLSPACING=0 WIDTH=100%>
<TR><TD BGCOLOR=#98ABCE><SPAN class=heading1>Designing Parallel Programs</SPAN></TD></TR>
</TABLE>

<H2>Understand the Problem and the Program</H2>

<UL>
<P>
<LI>Undoubtedly, the first step in developing parallel software is to
    understand the problem that you wish to solve in parallel. If you are
    starting with a serial program, this necessitates understanding the
    existing code as well.
<P>
<LI>Before spending time in an attempt to develop a parallel solution for a
    problem, determine whether or not the problem is one that can actually
    be parallelized.
    <UL>
    <P>
    <LI>Example of a Parallelizable Problem:
    <P>
    <TABLE BORDER=1 CELLPADDING=5 CELLSPACING=0 WIDTH=75%>
    <TR><TD>
    Calculate the potential energy for each of several thousand independent
    conformations of a molecule. When done, find the minimum energy
    conformation.
    </TD></TR></TABLE>
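<P>
The programmer-directed approach and the parallelizable problem above fit
together naturally: because each conformation's energy is independent of the
others, a single compiler directive on the loop is enough to tell the compiler
to parallelize it. The sketch below uses an OpenMP directive in C; the
<TT>potential_energy()</TT> function is a hypothetical placeholder, not part
of the original text.

<PRE>
#include &lt;float.h&gt;
#include &lt;stdio.h&gt;

#define NCONF 5000                     /* several thousand conformations */

/* Hypothetical placeholder: a real energy evaluation is problem-specific. */
static double potential_energy(int conformation)
{
    return (double)((conformation * 7919) % 1000);
}

int main(void)
{
    static double energy[NCONF];

    /* Programmer-directed parallelization: the directive tells the compiler
       the iterations are independent and may run in parallel.             */
    #pragma omp parallel for
    for (int i = 0; i < NCONF; i++)
        energy[i] = potential_energy(i);

    /* When done, find the minimum energy conformation (serial scan). */
    double min_e = DBL_MAX;
    int    min_i = -1;
    for (int i = 0; i < NCONF; i++) {
        if (energy[i] < min_e) {
            min_e = energy[i];
            min_i = i;
        }
    }

    printf("minimum energy %.1f at conformation %d\n", min_e, min_i);
    return 0;
}
</PRE>

<P>
Compiled without OpenMP support, the directive is simply ignored and the loop
runs serially, which is one reason directives are a low-risk way to start from
an existing serial code.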
