http://www.cs.wisc.edu/~paradyn/papers.html
to be gathered to support multiple views of performance data and describe how we can mine mapping information from the compiler and run-time environment. We also describe how we use this information to produce performance data at the higher levels, and how we present this data in terms of both the code and parallel data structures.<P>We have developed an implementation of these mapping techniques for the data parallel CM Fortran language running on the TMC CM-5. We have augmented the Paradyn Parallel Performance Tools with these mapping and high-level language facilities and used them to study several real data parallel Fortran (CM Fortran) applications. Our mapping and high-level language techniques allowed us to quickly understand these applications and modify them to obtain significant performance improvements.
<A HREF="ftp://grilled.cs.wisc.edu/technical_papers/mapping.ps.Z"><H3>Mechanisms for Mapping High-Level Parallel Performance Data</H3></A>R. Bruce Irvin and Barton P. Miller.<br>ICPP Workshop on Challenges for Parallel Processing (Chicago, August 1996).<P><em>Note: this paper contains several color postscript pages. It should print acceptably on b/w printers.</em><P>A primary problem in the performance measurement of high-level parallel programming languages is to map low-level events to high-level programming constructs. We discuss several aspects of this problem and present three methods with which performance tools can map performance data and provide accurate performance information to programmers. In particular, we discuss static mapping, dynamic mapping, and a new technique that uses a data structure called the set of active sentences. Because each of these methods requires cooperation between compilers and performance tools, we describe the nature and amount of cooperation required. The three mapping methods are orthogonal; we describe how they should be combined in a complete tool.
Although we concentrate on mapping upward through layers of abstraction, our techniques are independent of mapping direction.
<A HREF="ftp://grilled.cs.wisc.edu/technical_papers/nv.ps.Z"><H3>A Performance Tool for High-Level Parallel Programming Languages</H3></A>R. Bruce Irvin and Barton P. Miller.<br>IFIP WG10.3 Working Conference on Programming Environments for Massively Parallel Distributed Systems (Ascona, Switzerland, April 1994).<P>Users of high-level parallel programming languages require accurate performance information that is relevant to their source code. Furthermore, when their programs cause performance problems at the lowest levels of their hardware and software systems, programmers need to be able to peel back layers of abstraction to examine low-level problems while maintaining references to the high-level source code that ultimately caused the problem. In this paper, we present NV, a model for the explanation of performance information for programs built on multiple levels of abstraction. In NV, a level of abstraction includes a collection of nouns (code and data objects), verbs (activities), and performance information measured for the nouns and verbs. Performance information is mapped from level to level to maintain the relationships between low-level activities and high-level code, even when such relationships are implicit.<P>We have used the NV model to build ParaMap, a performance tool for the CM Fortran language that has, in practice, guided us to substantial improvements in real CM Fortran applications. We describe the design and implementation of our tool and show how its simple tabular and graphical performance displays helped us to find performance problems in two applications.
In each case, we found that performance information at all levels was most useful when related to parallel CM Fortran arrays, and that we could subsequently reduce each application's execution time by more than half.
<A HREF="ftp://grilled.cs.wisc.edu/technical_papers/paradyn_and_devise.ps.Z"><H3>Integrated Visualization of Parallel Program Performance Data</H3></A>Karen L. Karavanic, Jussi Myllymaki, Miron Livny, and Barton P. Miller.<br>To appear in "Environments and Tools for Parallel Scientific Computing," SIAM Press, J. Dongarra and B. Tourancheau, eds., 1996.<P>Performance tuning a parallel application involves integrating performance data from many components of the system, including the message passing library, performance monitoring tool, resource manager, operating system, and the application itself. The current practice of visualizing these data streams using a separate, customized tool for each source is inconvenient from a usability perspective, and there is no easy way to visualize the data in an integrated fashion. We demonstrate a solution to this problem using Devise, a generic visualization tool designed to allow an arbitrary number of different but related data streams to be integrated and explored visually in a flexible manner. We display data emanating from a variety of sources side by side in three case studies. First, we interface the Paradyn Parallel Performance Tool and Devise, using two simple data export modules and Paradyn's simple visualization interface. We show several Devise/Paradyn visualizations which are useful for performance tuning parallel codes, and which incorporate data from Unix utilities and application output. Next, we describe the visualization of trace data from a parallel application running in a Condor cluster of workstations.
Finally, we demonstrate the utility of Devise visualizations in a study of Condor cluster activity.<P>
<A HREF="ftp://grilled.cs.wisc.edu/technical_papers/paradynPVM.ps.Z"><H3>The Paradyn Parallel Performance Tools and PVM</H3></A>Barton P. Miller, Jeffrey K. Hollingsworth, and Mark D. Callaghan.<br>"Environments and Tools for Parallel Scientific Computing," SIAM Press, J. Dongarra and B. Tourancheau, eds., 1994.<P>Paradyn is a performance tool for large-scale parallel applications. By using dynamic instrumentation and automating the search for bottlenecks, it can measure long-running applications on production-sized data sets. Paradyn has recently been ported to measure native PVM applications.<P>Programmers run their unmodified PVM application programs with Paradyn. Paradyn automatically inserts and modifies instrumentation during the execution of the application, systematically searching for the causes of performance problems. In most cases, Paradyn can isolate the major causes of performance problems, and the part of the program that is responsible for the problem.<P>Paradyn currently runs on the Thinking Machines CM-5, Sun workstations, and PVM (currently only on Suns). It can measure heterogeneous programs across any of these platforms.<P>This paper presents an overview of Paradyn, describes the new facility in PVM that supports Paradyn, and reports experience with PVM applications.<P>
<A HREF="ftp://grilled.cs.wisc.edu/technical_papers/array_distrib.ps.Z"><H3>Optimizing Array Distributions in Data-Parallel Programs</H3></A>Krishna Kunchithapadam and Barton P. Miller.<br>
Languages and Compilers for Parallel Computing, August 1994.<P>Data parallel programs are sensitive to the distribution of data across processor nodes. We formulate the reduction of inter-node communication as an optimization on a colored graph. We present a technique that records the run-time inter-node communication caused by the movement of array data between nodes during execution and builds the colored graph, and provide a simple algorithm that optimizes the coloring of this graph to describe new data distributions that would result in less inter-node communication. From the distribution information, we write compiler pragmas to be used in the application program.<P>Using these techniques, we traced the execution of a real data-parallel application (written in CM Fortran) and collected the array access information. We computed new distributions that should provide an overall reduction in program execution time. However, compiler optimizations and poor interfaces between the compiler and runtime systems counteracted any potential benefit from the new data layouts. In this context, we provide a set of recommendations for compiler writers that we think are needed both to write efficient programs and to build the next generation of tools for parallel systems.<P>The techniques that we have developed form the basis for future work in monitoring array access patterns and generating on-the-fly redistributions of arrays.
<A HREF="ftp://grilled.cs.wisc.edu/technical_papers/rbi_thesis.ps.Z"><H3>Performance Measurement Tools for High-Level Parallel Programming Languages</H3></A>R. Bruce Irvin.<br>Ph.D. Thesis, October 1995.<P><em>Note: this paper contains several color postscript pages.</em><P>Users of high-level parallel programming languages require accurate performance information that is relevant to their source code.
Furthermore, when their programs experience performance problems at the lowest levels of their hardware and software systems, programmers need to be able to peel back layers of abstraction to examine low-level problems while maintaining references to the high-level source code that ultimately caused the problem. This dissertation addresses the problems associated with providing useful performance data to users of high-level parallel programming languages. In particular, it describes techniques for providing source-level performance data to programmers, for mapping performance data among multiple layers of abstraction, and for providing data-oriented views of performance.<P>We present NV, a model for the explanation of performance information for high-level parallel language programs. In NV, a level of abstraction includes a collection of nouns (code and data objects), verbs (activities), and performance information measured for the nouns and verbs. Performance information is mapped from level to level to maintain relationships between low-level activities and high-level code, even when such relationships are implicit.<P>The NV model has helped us to implement support for performance measurement of high-level parallel language applications in two performance measurement tools (ParaMap and Paradyn). We describe the design and implementation of these tools and show how they provide performance information for CM Fortran programmers.<P>Finally, we present results of measurement studies in which we have used ParaMap and Paradyn to improve the performance of a variety of real CM Fortran applications running on CM-5 parallel computers. In each case, we found that overall performance trends could be observed at the source code level and that both data views and code views of performance were useful. We found that some performance problems could not be explained at the source code level. In these cases, we used the performance tools to examine lower levels of abstraction to find performance problems.
We found that low-level information was most useful when related to source-level code structures and (especially) data structures. Finally, we made relatively small changes to the applications' source code to achieve substantial performance improvements.<p>
<A HREF="ftp://grilled.cs.wisc.edu/technical_papers/steering.ps.Z"><H3>Integrating a Debugger and a Performance Tool for Steering</H3></A>Krishna Kunchithapadam and Barton P. Miller.<P>Steering is a performance optimization idiom applicable to many problem domains. It allows control and performance tuning to take place during program execution. Steering emphasizes the optimization and control of the performance of a program using mechanisms that are external to the program. Performance measurement tools and symbolic debuggers already independently provide some of the mechanisms needed to implement a steering tool. In this paper we describe a configuration that integrates a performance tool, Paradyn, and a debugger to build a steering environment.<P>The steering configuration allows fast prototyping of steering policies, and provides support for both interactive and automated steering.<P>Last modified: Thu Sep 26 13:07:15 CDT 1996
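<P>The array-distribution abstract above describes recording inter-node traffic as a colored graph and improving the coloring to suggest new distributions. As an illustration only, the following is a minimal sketch of that idea in Python, not the paper's actual algorithm: nodes stand for arrays, a color stands for a candidate distribution (e.g. block or cyclic), edge weights stand for observed traffic between arrays that communicate, and a greedy pass recolors each node to the cheapest color. All names (`greedy_recolor`, the example arrays, the palette) are invented for the sketch.

```python
# Hypothetical sketch of coloring-based distribution tuning: a node's cost
# under a color is the total edge weight to neighbors holding a different
# color; greedily recolor until no single-node change helps.
from collections import defaultdict

def greedy_recolor(edges, colors, palette):
    """edges: {(a, b): weight}; colors: {node: color}; palette: candidate colors."""
    adj = defaultdict(list)
    for (a, b), w in edges.items():
        adj[a].append((b, w))
        adj[b].append((a, w))
    changed = True
    while changed:
        changed = False
        for node in list(colors):
            # cost of a color = traffic to neighbors holding a different color
            def cost(c):
                return sum(w for nbr, w in adj[node] if colors[nbr] != c)
            best = min(palette, key=cost)
            if cost(best) < cost(colors[node]):
                colors[node] = best
                changed = True
    return colors
```

A greedy pass like this only finds a local optimum; the output coloring would then be translated into distribution pragmas for the source program, as the abstract describes.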
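<P>The steering abstract describes a measure-decide-adjust loop in which a policy, driven by performance data, changes a running program through external (debugger-style) mechanisms. As a rough illustration of that idiom only, here is a toy Python loop; the `sample_metric` and `set_variable` hooks are stand-ins for a measurement tool and a debugger interface, and none of this reflects Paradyn's actual API.

```python
# Hypothetical sketch of the steering idiom: sample a metric, let a policy
# decide, and apply any change through an external control hook.
def steer(sample_metric, set_variable, policy, steps):
    """Run `steps` iterations of measure -> decide -> adjust; return the log."""
    history = []
    for _ in range(steps):
        value = sample_metric()        # e.g. observed message-wait time
        action = policy(value)         # returns (variable, new_value) or None
        if action is not None:
            set_variable(*action)      # external control, as in steering
        history.append((value, action))
    return history
```

Because the policy is an ordinary function, swapping in a new one amounts to the "fast prototyping of steering policies" the abstract mentions, and an interactive front end could supply actions in place of the automated policy.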