differ from other resources. It covers available compilers, libraries, and tools. It describes how to set up and run parallel batch jobs. The tutorial demonstrates how to run with Digital MPI (DMPI) and MPICH. The lecture is followed by a lab exercise. There will be a description of common errors and ways to troubleshoot MPI jobs. A description of performance analysis tools and further documentation will also be presented.
<P><I>Level/Prerequisites: </I>Ideal for new users of the Compaq clusters. A basic understanding of message passing tools is assumed.
<P>NOTE: This tutorial is no longer being maintained as of 12/04 since the remaining LC Compaq clusters are scheduled for decommission in the near future.</UL>
<!-------------------------------------------------------------------------->
<A NAME=lcrm> </A><A NAME=dpcs> </A><A HREF=../lcrm><B>Livermore Computing Resource Management System (LCRM)</B></A> (EC3515)
<UL>The Livermore Computing Resource Management System (LCRM) is a product of the LLNL Livermore Computing Center. Its primary purpose is to allocate computer resources, according to resource delivery goals, for UNIX-based production computer systems. It is the batch system that most LC users use to submit, monitor, and interact with their production computing jobs.
<P>This tutorial begins with a brief overview of LCRM and its two primary functional components, the Resource Allocation and Control System and the Production Workload Scheduler. Each of these components is then further explored, with a practical focus on describing the commands and utilities that are provided for the user's interaction with LCRM. Building job command scripts, running parallel jobs, and job scheduling policies are also included. The lecture is followed by a lab exercise.
<P><I>Level/Prerequisites: </I> Beginner.
The material covered by the following tutorials would also be helpful.<BR><A HREF=#lc_resources>EC3501: Introduction to Livermore Computing Resources</A><BR><A HREF=#ibm_sp>EC3503: IBM SP Systems Overview</A><BR><A HREF=#linux_clusters>EC3516: IA32 Linux Clusters Overview</A></UL>
<A NAME=mpi> </A><A HREF=../mpi><B>Message Passing Interface (MPI)</B></A> (EC3505)
<UL>The Message Passing Interface Standard (MPI) is a message passing library standard based on the consensus of the MPI Forum, which has over 40 participating organizations, including vendors, researchers, software library developers, and users. The goal of the Message Passing Interface is to establish a portable, efficient, and flexible standard for message passing that will be widely used for writing message passing programs. As such, MPI is the first standardized, vendor-independent message passing library. The advantages of developing message passing software using MPI closely match the design goals of portability, efficiency, and flexibility.
<P>This tutorial will provide a means for those interested in exploring these advantages to become familiar with MPI and to learn the basics of developing MPI programs. The primary topics presented are those most useful for beginning MPI programmers. The tutorial begins with an introduction, background, and basic information for getting started with MPI. This is followed by a detailed look at the MPI routines that are most useful for new MPI programmers, including the MPI Environment Management, Point-to-Point Communication, and Collective Communication routines. Numerous examples in both C and Fortran are provided, as well as a lab exercise.
<P><I>Level/Prerequisites: </I> Ideal for those who are new to parallel programming with MPI.
A basic understanding of parallel programming in C or Fortran is assumed.</UL>
<A NAME=pthreads> </A><A HREF=../pthreads><B>POSIX Threads Programming</B></A> (EC3506)
<UL>In shared memory multiprocessor architectures, such as SMPs, threads can be used to implement parallelism. Historically, hardware vendors have implemented their own proprietary versions of threads, making portability a concern for software developers. For UNIX systems, a standardized C language threads programming interface has been specified by the IEEE POSIX 1003.1c standard. Implementations that adhere to this standard are referred to as POSIX threads, or Pthreads.
<P>The tutorial begins with an introduction to concepts, motivations, and design considerations for using Pthreads. Each of the three major classes of routines in the Pthreads API is then covered: Thread Management, Mutex Variables, and Condition Variables. Example codes are used throughout to demonstrate how to use most of the Pthreads routines needed by a new Pthreads programmer. The tutorial concludes with a discussion and examples of how to develop hybrid MPI/Pthreads programs in an IBM SMP environment. A lab exercise, with numerous example codes (C language), is also included.
<P><I>Level/Prerequisites: </I> Ideal for those who are new to parallel programming with threads. A basic understanding of parallel programming in C is assumed. For those who are unfamiliar with parallel programming in general, the material covered in <A HREF=#parallel_comp>EC3500: Introduction to Parallel Computing</A> would be helpful.</UL>
<A NAME=openMP> </A><A HREF=../openMP><B>OpenMP</B></A> (EC3507)
<UL>OpenMP is an Application Program Interface (API), jointly defined by a group of major computer hardware and software vendors. OpenMP provides a portable, scalable model for developers of shared memory parallel applications. The API supports C/C++ and Fortran on multiple architectures, including UNIX and Windows NT.
This tutorial covers most of the major features of OpenMP, including its various constructs and directives for specifying parallel regions, work sharing, synchronization, and the data environment. Runtime library functions and environment variables are also covered. This tutorial includes both C and Fortran example codes and a lab exercise.
<P><I>Level/Prerequisites: </I> Geared to those who are new to parallel programming with OpenMP. A basic understanding of parallel programming in C or Fortran is assumed. For those who are unfamiliar with parallel programming in general, the material covered in <A HREF=#parallel_comp>EC3500: Introduction to Parallel Computing</A> would be helpful.</UL>
<A NAME=totalview> </A><A HREF=../totalview><B>TotalView Debugger</B></A> (EC3508)
<UL>The TotalView debugger is part of a suite of software development tools from Etnus, Inc., used to debug, analyze, and tune program performance. TotalView provides source-level debugging for serial, parallel, multiprocess, and multithreaded codes, and can be used in a variety of UNIX environments, including those with distributed, clustered, stand-alone, and SMP machines. TotalView is easy to use because of its completely graphical, X Windows GUI. TotalView has been selected as the Department of Energy's ASCI debugger.
<P>This tutorial has three parts, each of which includes a lab exercise. Part 1 begins with an overview of TotalView and then provides detailed instructions on how to set up and use its basic functions. Part 2 continues by introducing a number of new functions and also provides a more in-depth look at some of the basic functions. Part 3 covers parallel debugging, including threads, MPI, OpenMP, and hybrid programs. Part 3 concludes with a discussion of debugging in batch mode.
<P><I>Level/Prerequisites: </I> Intended for those who are new to TotalView.
A basic understanding of parallel programming in C or Fortran is assumed. The material covered in the following tutorials would also be beneficial for those who are unfamiliar with parallel programming in MPI, OpenMP, and/or POSIX threads:<BR><A HREF=#mpi>EC3505: MPI</A><BR><A HREF=#pthreads>EC3506: POSIX Threads</A><BR><A HREF=#openMP>EC3507: OpenMP</A></UL>
<A NAME=perf_analysis> </A><B>Performance Analysis Tools and Topics for LC's IBM ASCI Systems</B> (EC3509)
<BR>Part 1: <A HREF=../mpi_performance>MPI Performance Topics</A>
<BR>Part 2: <A HREF=../performance_tools>Performance Analysis Tools for the IBM SP Environment</A>
<UL>Part 1 assumes previous knowledge and use of parallel computing with MPI. It begins with a brief review of basic MPI terminology, message passing routines, and factors that affect an MPI program's performance. A more in-depth examination of a number of specific factors that affect performance follows. The topics covered include message buffering, message passing protocols, synchronization issues, message size factors, collective communications, derived datatypes, communication contention, and more. Comparisons between various options are made, and suggestions for how to improve program performance are offered. The topics covered apply to MPI programs in general, though the example performance results cited are derived from program executions on the IBM SP platform.
<P><I>Level/Prerequisites: </I> Intermediate to experienced MPI programmers.</UL>
<UL>Part 2: An essential prerequisite for optimizing an application is to first understand its execution characteristics. A number of tools are available to help the application developer accomplish this, ranging from simple shell utilities, timers, and profilers to sophisticated graphical tools. This tutorial investigates, in varying depths, a number of tools that can be used to analyze an application's performance toward the goals of optimization and troubleshooting.
A lab exercise featuring a subset of these tools is provided.
<P><I>Level/Prerequisites: </I> A basic understanding of parallel programming in C or Fortran is assumed. </UL>
<!-------------------------------------------------------------------------->
<SCRIPT LANGUAGE="JavaScript">PrintFooter("UCRL-MI-133316")</SCRIPT>
</BODY></HTML>
