http://www.cs.rochester.edu/u/kthanasi/cashmere.html
Date: Wednesday, 20-Nov-96 20:04:38 GMT
Server: NCSA/1.3
MIME-version: 1.0
Content-type: text/html
Last-modified: Thursday, 07-Nov-96 19:41:15 GMT
Content-length: 7148

<TITLE>CASHMERe Home Page</TITLE>
<CENTER>
<IMG ALIGN=TOP SRC="http://www.cs.rochester.edu/u/scott/cashmere/cashmere.gif">
<P>
<H2> Coherence Algorithms for SHared MEmory aRchitectures </H2>
</CENTER>
<HR>
<H2>The CASHMERe Project</H2>
<UL>
<LI> <A HREF="#OVERVIEW"> Overview</A>
<LI> <A HREF="#PEOPLE"> People</A>
<LI> <A HREF="#PAPERS"> Papers </A>
</UL>
<HR>
<H2> <A NAME="OVERVIEW"> Overview </A> </H2>
CASHMERe stands for "Coherence Algorithms for SHared MEmory aRchitectures" and is an
ongoing effort to provide efficient, scalable shared memory with minimal hardware
support. It is well accepted today that commercial workstations offer the best
price/performance ratio and that shared memory provides the most desirable
programming paradigm for parallel computing. Unfortunately, shared memory emulations
on networks of workstations provide acceptable performance for only a limited class
of applications. CASHMERe attempts to bridge the performance gap between shared
memory emulations on networks of workstations and tightly-coupled cache-coherent
multiprocessors, while using minimal hardware support.
<P>
In the context of CASHMERe we have discovered that NCC-NUMA (Non-Cache-Coherent,
Non-Uniform Memory Access) machines can greatly improve the performance of
<A HREF="http://www.cs.rochester.edu/u/scott/cashmere/DSM_NCC.gif">DSM systems</A>,
and approach that of
<A HREF="http://www.cs.rochester.edu/u/scott/cashmere/CC_NCC.gif">fully
hardware-coherent multiprocessors</A>. The basic property of NCC-NUMA systems is the
ability to access remote memory directly; such a capability is offered by a variety
of network interfaces, including DEC's Memory Channel, HP's Hamlyn, and the
Princeton <A HREF="http://www.cs.princeton.edu/shrimp/">Shrimp</A>.
Given current technology, the additional hardware cost of NCC-NUMA systems over pure
message-passing systems is minimal. Based on this fact and our performance results,
we believe that NCC-NUMA machines lie near the knee of the
<A HREF="http://www.cs.rochester.edu/u/scott/cashmere/Cost_Perf.gif">price-performance curve</A>.
<P>
The Department of <A HREF="http://www.cs.rochester.edu">Computer Science</A> at the
<A HREF="http://www.rochester.edu">University of Rochester</A> is building a
32-processor
<A HREF="http://www.cs.rochester.edu/u/scott/cashmere/HW.gif">Cashmere prototype</A>.
A significant part of the funding comes in the form of an equipment grant from
<A HREF="http://www.dec.com">Digital Equipment Corporation</A>. The prototype
consists of eight 4-processor
<A HREF="ftp://ftp.digital.com/pub/Digital/info/infosheet/EC-F4452-10.txt">DEC 2100</A>
4/233 multiprocessors on a
<A HREF="http://www.digital.com:80/info/hpc/interconnects.html#MC_Overview">Memory Channel</A>
network. The Memory Channel plugs into any PCI bus. It provides a memory-mapped
network interface with which processors can read and write remote locations without
kernel intervention or inter-processor interrupts. End-to-end bandwidth is currently
about 40 MB/sec; remote write latency is about 3.5 us. The next hardware generation
is expected to increase bandwidth by approximately one order of magnitude and cut
latency in half. Cashmere augments the functionality of the Memory Channel by
providing cache coherence in software.
<H2> <A HREF="http://www.cs.rochester.edu/u/kthanasi/SSMM_96/talk.html">Implementation of Cashmere</A></H2>
Slides from the
<A HREF="ftp://hopscotch.dolphinics.com/pub/asplos/ssm-workshop.html">Workshop on Scalable Shared Memory Multiprocessors</A>,
Boston, MA, October 1996.
<H2> <A NAME="PEOPLE"> CASHMERe People </A> </H2>
The people behind CASHMERe are
<A HREF="http://www.cs.rochester.edu/u/scott/">Michael L. Scott</A>,
<A HREF="http://www.cs.rochester.edu/u/wei/">Wei Li</A>,
<A HREF="http://www.cs.rochester.edu/u/sandhya/">Sandhya Dwarkadas</A>,
<A HREF="http://www.cs.rochester.edu/u/kthanasi/">Leonidas Kontothanassis</A>,
<A HREF="http://www.cs.rochester.edu/u/gchunt">Galen Hunt</A>,
<A HREF="http://www.cs.rochester.edu/u/michael">Maged Michael</A>,
<A HREF="http://www.cs.rochester.edu/u/stets">Robert Stets</A>,
<A HREF="http://www.cs.rochester.edu/u/nikolaos/">Nikolaos Hardavellas</A>,
<A HREF="http://www.cs.rochester.edu/u/si/">Sotirios Ioannidis</A>,
<A HREF="http://www.cs.rochester.edu/u/miera/">Wagner Meira</A>,
<A HREF="http://www.cs.rochester.edu/u/poulos/">Alexandros Poulos</A>,
<A HREF="http://www.cs.rochester.edu/u/cierniak/">Michal Cierniak</A>,
<A HREF="http://www.cs.rochester.edu/u/srini/">Srinivasan Parthasarathy</A>,
and
<A HREF="http://www.cs.rochester.edu/u/zaki/">Mohammed Zaki</A>.
<H2> <A NAME="PAPERS"> CASHMERe papers </A> </H2>
<UL>
<LI> G. C. Hunt and M. L. Scott.
<A HREF="ftp://ftp.cs.rochester.edu/pub/papers/systems/96.tr626.Using_peer_support_to_reduce_fault-tolerant_overhead.ps.gz">``Using Peer Support to Reduce Fault-Tolerant Overhead in Distributed Shared Memories''</A>.
TR 626, Computer Science Department, University of Rochester, June 1996.
<LI> L. I. Kontothanassis and M. L. Scott.
<A HREF="ftp://ftp.cs.rochester.edu/pub/u/kthanasi/95.CAN.Efficient_Shared_Memory_Minimal_HW_Support.ps.gz">``Efficient Shared Memory with Minimal Hardware Support''</A>.
In Computer Architecture News, September 1995.
<LI> L. I. Kontothanassis and M. L. Scott.
<A HREF="ftp://ftp.cs.rochester.edu/pub/papers/systems/95.tr578.Distributed_shared_memory_for_new_generation_networks.ps.gz">``Using Memory-Mapped Network Interfaces to Improve the Performance of Distributed Shared Memory''</A>.
In Proc., 2nd HPCA, San Jose, CA, February 1996.
<LI> L. I. Kontothanassis, M. L. Scott, and R. Bianchini.
<A HREF="http://www.cs.rochester.edu/u/kthanasi/SC95/sc95.html">``Lazy Release Consistency for Hardware-Coherent Multiprocessors''</A>.
In Proc., SUPERCOMPUTING '95, San Diego, CA, December 1995.
<LI> L. I. Kontothanassis and M. L. Scott.
<A HREF="ftp://ftp.cs.rochester.edu/pub/u/kthanasi/95.JPDC.SW-Coherence.ps.gz">``Software Cache Coherence for Current and Future Architectures''</A>.
In Special JPDC Issue on Scalable Shared Memory, November 1995, V29, N2, pp. 179-195.
<LI> L. I. Kontothanassis and M. L. Scott.
<A HREF="ftp://ftp.cs.rochester.edu/pub/u/kthanasi/94.HPCA.SW_coherence_Large_Scale_Multi.ps.Z">``Software Cache Coherence for Large Scale Multiprocessors''</A>.
In Proc., 1st HPCA, Raleigh, NC, January 1995.
<LI> M. Marchetti, L. I. Kontothanassis, R. Bianchini, and M. L. Scott.
<A HREF="ftp://ftp.cs.rochester.edu/pub/papers/systems/94.tr535.Using_simple_page_placement_policies.ps.Z">``Using Simple Page Placement Policies to Reduce the Cost of Cache Fills in Coherent Shared-Memory Systems''</A>.
In Proc., IPPS '95, Santa Barbara, CA, April 1995.
<LI> M. Cierniak and Wei Li.
<A HREF="ftp://ftp.cs.rochester.edu/pub/papers/systems/tr542.Unifying_data_and_control_transformations.ps.Z">``Unifying Data and Control Transformations for Distributed Shared-Memory Machines''</A>.
In Proc., SIGPLAN '95 PLDI, La Jolla, CA, June 1995. Also available as TR 542.
</UL>
For comments and/or requests send mail to
<A HREF="mailto:kthanasi@crl.dec.com">kthanasi@crl.dec.com</A> or
<A HREF="mailto:scott@cs.rochester.edu">scott@cs.rochester.edu</A>.
<HR>
<A HREF="http://www.cs.rochester.edu/urcs.html"><IMG ALIGN=MIDDLE BORDER=NONE SRC="http://www.cs.rochester.edu/images/urcslogo.gif"> URCS Home Page</A>
<P>
<HR>