
http://www.cs.utexas.edu/users/rvdg/intercom/index.html

MIME-Version: 1.0
Server: CERN/3.0
Date: Tuesday, 07-Jan-97 15:24:19 GMT
Content-Type: text/html
Content-Length: 4854
Last-Modified: Wednesday, 28-Aug-96 18:19:38 GMT

<HEAD><TITLE>Interprocessor Collective Communications Library (iCC)</TITLE></HEAD>
<H1>Interprocessor Collective Communications Library (iCC)</H1>
<P><b>D. Payne, Intel SSD <BR>
L. Shuler, Sandia National Laboratories <BR>
<a href="http://www.cs.utexas.edu/users/rvdg/index.html">R. van de Geijn</a>, University of Texas at Austin <BR>
<a href="http://www.scp.caltech.edu/~jwatts/">J. Watts</a>, California Institute of Technology</b><P>
<h2>Current version: Release R2.1.0, March 1, 1995</h2>
<h3>Please sign our <a href="http://www.cs.utexas.edu/users/rvdg/intercom/icc.html">guestbook</a></h3>
<h3>What's new</h3>
<ul>
<li> MPI-like group interface
<li> Version of iCC for OSF R1.3
<li> Version of iCC for SUNMOS R1.6
<li> New reference manual, which includes the group interface
<li> New summary (not yet finished)
<li> <a href="http://www.cs.utexas.edu/users/rvdg/intercom/group_example.f">Fortran example for using groups</a>
<li> <a href="http://www.cs.utexas.edu/users/rvdg/abstracts/icc_vs_other.html">New paper comparing iCC to NX, MPI and BLACS</a>
<li> <a href="http://www.cs.utexas.edu/users/rvdg/tutorial.html">Tutorial on Collective Communication (PowerPoint presentation)</a>
<li> <a href="http://www.cs.utexas.edu/users/rvdg/intercom/bugs.html">The first and only (so far) valid bug report since Spring 1994</a>
<li> Patch R2.1.0: fixes the above bug.
</ul>
<P><H1>Introduction</H1>
<P>This page describes the second release of the Interprocessor Collective Communications (InterCom) Library, iCC release R2.1.0.  This library is the result of an ongoing collaboration between David Payne (Intel SSD), Lance Shuler (Sandia National Laboratories), Robert van de Geijn (University of Texas at Austin), and Jerrell Watts (California Institute of Technology), funded by the Intel Research Council and Intel SSD.  Previous contributors to this effort include Mike Barnett (Univ. of Idaho), Satya Gupta (Intel SSD), Rik Littlefield (PNL), and Prasenjit Mitra (now with Oracle).
<p>The library implements a comprehensive approach to collective communication.  The results are best summarized by the following performance tables.
<h2>Comparison of the various libraries</h2>
The following tables give the ratios of times required for completion on a 16x32 mesh Paragon using OSF R1.3.
<PRE><TT><b>               Broadcast

     bytes   NX/iCC   BLACS/iCC   MPI/iCC
   -----------------------------------------
        16    1.4         1.0        1.6
      1024    1.5         1.0        2.5
     65536    5.5         2.9        2.8
   1048576   11.3         6.1        7.5
</b></TT></PRE>
<p>
<PRE><TT><b>               Sum-to-All

     bytes    NX/iCC  BLACS/iCC    MPI/iCC
   -----------------------------------------
        16     1.0        1.2        2.1
      1024     1.0        1.0        2.0
     65536    21.1        4.1        6.9
   1048576    34.6        5.9       11.8
</b></TT></PRE>
<p>Attaining the improvement in performance is as easy as linking in a library that automatically translates NX collective communication calls to iCC calls.  Furthermore, the iCC library provides additional functionality, such as scatter and gather operations and more general "gopf" combine operations.
<p>As had been planned, an MPI-like group interface to iCC is now available.  The interface lets the user create and free groups and communicators, and it gives user-defined groups complete access to the high-performance routines in the iCC library.
<p>We would like to note that this library is not intended to compete with MPI.
It was started as a research project into the techniques required to develop high-performance implementations of the MPI collective communication calls.  We are making this library available as a service to the user community, in the hope that these techniques are eventually incorporated into efficient MPI implementations.
<p><h2><a href="http://www.cs.utexas.edu/users/rvdg/intercom/using.html">Using the library</a></h2>
<h2>Manuals</h2>
<ul>
<li><a href="file://ftp.cs.utexas.edu/pub/rvdg/intercom/R2.0.0/iCC.reference.ps">Reference manual</a>
<li><a href="file://ftp.cs.utexas.edu/pub/rvdg/intercom/R2.0.0/iCC.summary.ps">Summary</a>
</ul>
<h2>How to get iCC</h2>
iCC binaries and manuals are available from <a href="http://www.netlib.org">netlib</a> (directory intercom) and via anonymous <a href="file://ftp.cs.utexas.edu/pub/rvdg/intercom/R2.1.0">ftp (net.cs.utexas.edu, directory pub/rvdg/intercom/R2.1.0)</a>.
<h2><a href="http://www.cs.utexas.edu/users/rvdg/intercom/pubs.html">Related Publications</a></h2>
<h2><a href="http://www.cs.utexas.edu/users/rvdg/tutorial.html">Related Tutorials</a></h2>
<h2><a href="http://www.cs.utexas.edu/users/rvdg/intercom/bugs.html">Bug Reports</a></h2>
