<P><IMG SRC=images/distributed_mem.gif WIDTH=484 HEIGHT=196 BORDER=0 HSPACE=10 ALT='Distributed memory architecture'>
<P><LI>Processors have their own local memory. Memory addresses in one processor do not map to another processor, so there is no concept of a global address space across all processors.
<P><LI>Because each processor has its own local memory, it operates independently. Changes it makes to its local memory have no effect on the memory of other processors. Hence, the concept of cache coherency does not apply.
<P><LI>When a processor needs access to data in another processor, it is usually the task of the programmer to explicitly define how and when data is communicated. Synchronization between tasks is likewise the programmer's responsibility. (A minimal message-passing sketch appears at the end of this section.)
<P><LI>The network "fabric" used for data transfer varies widely, though it can be as simple as Ethernet.
</UL>
<P><IMG SRC=../images/arrowBullet.gif ALIGN=top HSPACE=3><SPAN CLASS=heading3>Advantages:</SPAN>
<UL>
<LI>Memory is scalable with the number of processors. Increase the number of processors and the size of memory increases proportionately.
<LI>Each processor can rapidly access its own memory without interference and without the overhead incurred in trying to maintain cache coherency.
<LI>Cost effectiveness: can use commodity, off-the-shelf processors and networking.
</UL>
<P><IMG SRC=../images/arrowBullet.gif ALIGN=top HSPACE=3><SPAN CLASS=heading3>Disadvantages:</SPAN>
<UL>
<LI>The programmer is responsible for many of the details associated with data communication between processors.
<LI>It may be difficult to map existing data structures, based on global memory, to this memory organization.
<LI>Non-uniform memory access (NUMA) times.
</UL></UL>
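<P>For example, a minimal sketch of such explicit, programmer-managed communication might look like the following. This is a hypothetical illustration using MPI (one message-passing implementation discussed later); the value and tag are illustrative only.
<PRE>
/* Hypothetical sketch: explicit data movement between two processes.
 * Rank 0 owns the data and must send it; rank 1 cannot address rank 0's
 * local memory directly, so it must explicitly receive. */
#include "mpi.h"
#include "stdio.h"

int main(int argc, char *argv[])
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;                 /* data exists only in rank 0's local memory */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);   /* data now copied into rank 1's memory */
    }

    MPI_Finalize();
    return 0;
}
</PRE>
<P>Run with at least two tasks (for example, <TT>mpirun -np 2 ./a.out</TT>). The explicit receive is required precisely because there is no global address space: rank 1 has no other way to see rank 0's data.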
<!--========================================================================-->
<P><A NAME=HybridMemory> <BR><BR> </A>
<TABLE BORDER=1 CELLPADDING=5 CELLSPACING=0 WIDTH=100%><TR><TD BGCOLOR=#98ABCE><SPAN class=heading1>Parallel Computer Memory Architectures</SPAN></TD></TR></TABLE>
<H2>Hybrid Distributed-Shared Memory</H2>
<UL>
<P><LI>Summarizing a few of the key characteristics of shared and distributed memory machines:
<P><TABLE BORDER=1 CELLPADDING=5 CELLSPACING=0>
<TR><TH COLSPAN=4>Comparison of Shared and Distributed Memory Architectures</TH></TR>
<TR><TD><B>Architecture</B></TD> <TD>CC-UMA</TD> <TD>CC-NUMA</TD> <TD>Distributed</TD></TR>
<TR><TD><B>Examples</B></TD><TD>SMPs <BR>Sun Vexx <BR>DEC/Compaq <BR>SGI Challenge <BR>IBM POWER3</TD><TD>SGI Origin <BR>Sequent <BR>HP Exemplar <BR>DEC/Compaq <BR>IBM POWER4 (MCM)</TD><TD>Cray T3E <BR>Maspar <BR>IBM SP2</TD></TR>
<TR><TD><B>Communications</B></TD><TD>MPI <BR>Threads <BR>OpenMP <BR>shmem</TD><TD>MPI <BR>Threads <BR>OpenMP <BR>shmem</TD><TD>MPI</TD></TR>
<TR><TD><B>Scalability</B></TD><TD>to 10s of processors</TD><TD>to 100s of processors</TD><TD>to 1000s of processors</TD></TR>
<TR><TD><B>Drawbacks</B></TD><TD>Memory-CPU bandwidth</TD><TD>Memory-CPU bandwidth <BR>Non-uniform access times</TD><TD>System administration <BR>Programs are harder to develop and maintain</TD></TR>
<TR><TD><B>Software Availability</B></TD><TD>many 1000s of ISVs</TD><TD>many 1000s of ISVs</TD><TD>100s of ISVs</TD></TR>
</TABLE>
<P><LI>The largest and fastest computers in the world today employ both shared and distributed memory architectures.
<P><IMG SRC=images/hybrid_mem.gif WIDTH=484 HEIGHT=196 BORDER=0 HSPACE=10 ALT='Hybrid memory architecture'>
<P><LI>The shared memory component is usually a cache coherent SMP machine. Processors on a given SMP can address that machine's memory as global.
<P><LI>The distributed memory component is the networking of multiple SMPs. SMPs know only about their own memory - not the memory on another SMP. Therefore, network communications are required to move data from one SMP to another. (A short hybrid programming sketch appears at the end of this section.)
<P><LI>Current trends seem to indicate that this type of memory architecture will continue to prevail and increase at the high end of computing for the foreseeable future.
<P><LI>Advantages and Disadvantages: whatever is common to both shared and distributed memory architectures.
</UL>
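<P>Programs for such machines commonly combine both approaches: message passing between SMP nodes, and threads sharing memory within each node. The following is a rough, hypothetical sketch only (assuming MPI and OpenMP are both available and the code is built with something like <TT>mpicc -fopenmp</TT>; the array and problem size are illustrative).
<PRE>
/* Hypothetical hybrid sketch: MPI moves data between SMP nodes,
 * OpenMP threads share memory within each node. */
#include "mpi.h"
#include "stdio.h"

#define N 1000000                       /* illustrative problem size */

int main(int argc, char *argv[])
{
    static double a[N];
    double local_sum = 0.0, global_sum = 0.0;
    int rank, i;

    MPI_Init(&argc, &argv);             /* typically one MPI task per SMP node */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Threads on this node share the array a[] in the node's own memory. */
    #pragma omp parallel for reduction(+:local_sum)
    for (i = 0; i < N; i++) {
        a[i] = (double)(i + rank);
        local_sum += a[i];
    }

    /* Memory on other nodes is not addressable; combine results over the network. */
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %f\n", global_sum);

    MPI_Finalize();
    return 0;
}
</PRE>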
<!--========================================================================-->
<P><A NAME=Models> <BR><BR> </A><A NAME=ModelsOverview> </A>
<TABLE BORDER=1 CELLPADDING=5 CELLSPACING=0 WIDTH=100%><TR><TD BGCOLOR=#98ABCE><SPAN class=heading1>Parallel Programming Models</SPAN></TD></TR></TABLE>
<H2>Overview</H2>
<UL>
<P><LI>There are several parallel programming models in common use:
    <UL>
    <LI>Shared Memory
    <LI>Threads
    <LI>Message Passing
    <LI>Data Parallel
    <LI>Hybrid
    </UL>
<P><LI>Parallel programming models exist as an abstraction above hardware and memory architectures.
<P><LI>Although it might not seem apparent, these models are NOT specific to a particular type of machine or memory architecture. In fact, any of these models can (theoretically) be implemented on any underlying hardware. Two examples:
    <OL>
    <P><LI>Shared memory model on a distributed memory machine: the Kendall Square Research (KSR) ALLCACHE approach.
    <P>Machine memory was physically distributed, but appeared to the user as a single shared memory (global address space). Generically, this approach is referred to as "virtual shared memory". Note: although KSR is no longer in business, there is no reason to suggest that a similar implementation will not be made available by another vendor in the future.
    <P><LI>Message passing model on a shared memory machine: MPI on the SGI Origin.
    <P>The SGI Origin employed the CC-NUMA type of shared memory architecture, where every task has direct access to global memory. However, the ability to send and receive messages with MPI, as is commonly done over a network of distributed memory machines, is not only implemented but is very commonly used.
    </OL>
<P><LI>Which model to use is often a combination of what is available and personal choice. There is no "best" model, although there certainly are better implementations of some models than others.
<P><LI>The following sections describe each of the models mentioned above, and also discuss some of their actual implementations.
</UL>
<!--========================================================================-->
<P><A NAME=ModelsShared> <BR><BR> </A>
<TABLE BORDER=1 CELLPADDING=5 CELLSPACING=0 WIDTH=100%><TR><TD BGCOLOR=#98ABCE><SPAN class=heading1>Parallel Programming Models</SPAN></TD></TR></TABLE>
<H2>Shared Memory Model</H2>
<UL>
<P><LI>In the shared-memory programming model, tasks share a common address space, which they read and write asynchronously.
<P><LI>Various mechanisms such as locks / semaphores may be used to control access to the shared memory (see the sketch at the end of this section).
<P><LI>An advantage of this model from the programmer's point of view is that the notion of data "ownership" is lacking, so there is no need to specify explicitly the communication of data between tasks. Program development can often be simplified.
<P><LI>An important disadvantage in terms of performance is that it becomes more difficult to understand and manage data locality.
</UL>
<P><IMG SRC=../images/arrowBullet.gif ALIGN=top HSPACE=3><SPAN CLASS=heading3>Implementations:</SPAN>
<UL>
    <P><LI>On shared memory platforms, the native compilers translate user program variables into actual memory addresses, which are global.
    <P><LI>No common distributed memory platform implementations currently exist. However, as mentioned previously in the Overview section, the KSR ALLCACHE approach provided a shared memory view of data even though the physical memory of the machine was distributed.
</UL>
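<P>As a rough illustration of the lock idea mentioned above, the following hypothetical sketch uses a POSIX mutex (POSIX threads are one concrete implementation, described in the next section) to control access to a shared counter. The names and iteration counts are illustrative only.
<PRE>
/* Hypothetical sketch: two threads share one address space and use a
 * lock (mutex) so that their updates to the shared counter do not collide. */
#include "pthread.h"
#include "stdio.h"

long counter = 0;                                 /* shared data, visible to all threads */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; /* lock controlling access to counter  */

void *work(void *arg)
{
    int i;
    (void)arg;
    for (i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);                /* only one thread may update at a time */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, work, NULL);
    pthread_create(&t2, NULL, work, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("counter = %ld\n", counter);           /* 200000, because updates were serialized */
    return 0;
}
</PRE>
<P>Note that no data is "sent" anywhere: both threads simply read and write the same address, and the lock is what prevents them from doing so at the same time.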
<!--========================================================================-->
<P><A NAME=ModelsThreads> <BR><BR> </A>
<TABLE BORDER=1 CELLPADDING=5 CELLSPACING=0 WIDTH=100%><TR><TD BGCOLOR=#98ABCE><SPAN class=heading1>Parallel Programming Models</SPAN></TD></TR></TABLE>
<H2>Threads Model</H2>
<UL>
<P><LI>In the threads model of parallel programming, a single process can have multiple, concurrent execution paths.
<P><LI>Perhaps the simplest analogy that can be used to describe threads is the concept of a single program that includes a number of subroutines:
<IMG SRC=images/threads_model.gif ALIGN=right WIDTH=348 HEIGHT=238 BORDER=0 HSPACE=10 VSPACE=10 ALT='Threads Model'>
    <UL>
    <P><LI>The main program <TT><B>a.out</B></TT> is scheduled to run by the native operating system. <TT>a.out</TT> loads and acquires all of the necessary system and user resources to run.
    <P><LI><TT>a.out</TT> performs some serial work, and then creates a number of tasks (threads) that can be scheduled and run by the operating system concurrently.
    <P><LI>Each thread has local data, but also shares the entire resources of <TT>a.out</TT>. This saves the overhead associated with replicating a program's resources for each thread. Each thread also benefits from a global memory view because it shares the memory space of <TT>a.out</TT>.
    <P><LI>A thread's work may best be described as a subroutine within the main program. Any thread can execute any subroutine at the same time as other threads.
    <P><LI>Threads communicate with each other through global memory (updating address locations). This requires synchronization constructs to ensure that no two threads update the same global address at the same time.
    <P><LI>Threads can come and go, but <TT>a.out</TT> remains present to provide the necessary shared resources until the application has completed.
    </UL>
<P><LI>Threads are commonly associated with shared memory architectures and operating systems.
</UL>
<P><IMG SRC=../images/arrowBullet.gif ALIGN=top HSPACE=3><SPAN CLASS=heading3>Implementations:</SPAN>
<UL>
    <P><LI>From a programming perspective, threads implementations commonly comprise:
        <UL TYPE=circle>
        <LI>A library of subroutines that are called from within parallel source code
        <LI>A set of compiler directives embedded in either serial or parallel source code
        </UL>
    In both cases, the programmer is responsible for determining all parallelism.
    <P><LI>Threaded implementations are not new in computing. Historically, hardware vendors have implemented their own proprietary versions of threads. These implementations differed substantially from each other, making it difficult for programmers to develop portable threaded applications.
    <P><LI>Unrelated standardization efforts have resulted in two very different implementations of threads: <B><I>POSIX Threads</I></B> and <B><I>OpenMP</I></B>.
    <P><LI><B>POSIX Threads</B>
        <UL>
        <LI>Library based; requires parallel coding
        <LI>Specified by the IEEE POSIX 1003.1c standard (1995)
        <LI>C language only
        <LI>Commonly referred to as Pthreads
        <LI>Most hardware vendors now offer Pthreads in addition to their proprietary threads implementations
        <LI>Very explicit parallelism; requires significant programmer attention to detail (see the sketch below)
        </UL>
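    <P>For example, a minimal, hypothetical Pthreads sketch of the <TT>a.out</TT> analogy above (the subroutine name and thread count are illustrative only; typically built with something like <TT>cc -pthread</TT>):
<PRE>
/* Hypothetical Pthreads sketch: the main program creates several threads,
 * each running the subroutine do_work(), and then waits for all of them. */
#include "pthread.h"
#include "stdio.h"

#define NUM_THREADS 4

void *do_work(void *arg)
{
    long id = (long)arg;                 /* each thread has its own local id */
    printf("thread %ld doing its share of the work\n", id);
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];
    long t;

    /* Explicit parallelism: the programmer creates and manages every thread. */
    for (t = 0; t < NUM_THREADS; t++)
        pthread_create(&threads[t], NULL, do_work, (void *)t);

    /* "a.out" remains present until all threads have completed. */
    for (t = 0; t < NUM_THREADS; t++)
        pthread_join(threads[t], NULL);

    return 0;
}
</PRE>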
    <P><LI><B>OpenMP</B>
        <UL>
        <LI>Compiler directive based; can use serial code
        <LI>Jointly defined and endorsed by a group of major computer hardware and software vendors. The OpenMP Fortran API was released October 28, 1997. The C/C++ API was released in late 1998.
        <LI>Portable / multi-platform, including Unix and Windows NT platforms
        <LI>Available in C/C++ and Fortran implementations
        <LI>Can be very easy and simple to use - provides for "incremental parallelism" (see the sketch at the end of this section)
        </UL>
    <P><LI>Microsoft has its own implementation for threads, which is not related to the UNIX POSIX standard or OpenMP.
</UL>
<!--========================================================================-->
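<P>To contrast with the explicit Pthreads code above, here is a rough, hypothetical OpenMP sketch of the directive-based, incremental style (the loop and array are illustrative only; typically compiled with an OpenMP flag such as <TT>-fopenmp</TT>):
<PRE>
/* Hypothetical OpenMP sketch: the loop below is ordinary serial C code;
 * adding one compiler directive asks the compiler to run it with a team
 * of threads ("incremental parallelism"). */
#include "stdio.h"

#define N 1000

int main(void)
{
    double a[N], b[N], c[N];
    int i;

    for (i = 0; i < N; i++) {            /* serial initialization */
        a[i] = i * 1.0;
        b[i] = i * 2.0;
    }

    #pragma omp parallel for             /* the only change needed to parallelize the loop */
    for (i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[%d] = %f\n", N - 1, c[N - 1]);
    return 0;
}
</PRE>
<P>Without the OpenMP flag the directive is simply ignored and the program still runs serially, which is what makes this style easy to adopt incrementally.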
