<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
	<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
	<style type="text/css">
	body { font-family: Verdana, Arial, Helvetica, sans-serif; }
	a.at-term { font-style: italic; }
	</style>
	<title>Hybrid MPI/OpenMP Laplace Solver Performance Characteristics</title>
	<meta name="Generator" content="ATutor" />
	<meta name="Keywords" content="" />
</head>
<body>

<p>The following graphs show the performance of the hybrid MPI/OpenMP Laplace solver on several systems, including an SGI Origin 2000, an IBM SP with 8-processor nodes, and a cluster of quad-Xeon systems connected with Myrinet. As with the OpenMP and MPI performance data, the results are presented in terms of speedup and parallel efficiency.</p>
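The speedup and parallel efficiency plotted below are derived from raw timings in the usual way; a minimal sketch, with hypothetical helper names:

```c
/* Speedup S(p) = T(1) / T(p): how many times faster the parallel run
 * is than the serial run. */
double speedup(double t_serial, double t_parallel)
{
    return t_serial / t_parallel;
}

/* Parallel efficiency E(p) = S(p) / p: the fraction of ideal (linear)
 * speedup achieved on p processors; 1.0 means perfect scaling. */
double parallel_efficiency(double t_serial, double t_parallel, int nprocs)
{
    return speedup(t_serial, t_parallel) / (double)nprocs;
}
```

For example, a run that takes 100 s serially and 25 s on 8 processors has a speedup of 4 but an efficiency of only 0.5.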

<h3> Speedup </h3>

<img src="mlp-speedup.JPG" alt="mlp speedup chart" width="611" height="402" />

<h3> Parallel Efficiency </h3>

<img src="mlp-pareff.JPG" alt="mlp parallel efficiency chart" width="713" height="402" />
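The multilevel code measured above combines MPI between nodes with OpenMP threads within a node. The following is a minimal sketch of the per-rank computation, assuming a row-block decomposition and a hypothetical name (<code>jacobi_sweep</code>); the MPI level (halo exchange of boundary rows via MPI_Sendrecv, and an MPI_Allreduce on the residual) is indicated in comments rather than shown:

```c
#include <omp.h>

/* One Jacobi sweep over the interior of this rank's row block, stored in
 * row-major order as nrows x ncols with one halo row at top and bottom.
 * In the full hybrid code, each MPI rank would first exchange its halo
 * rows with its neighbors (e.g. via MPI_Sendrecv) so that rows 0 and
 * nrows-1 hold up-to-date boundary data. Returns the maximum pointwise
 * change, which the hybrid code would then combine across ranks with
 * MPI_Allreduce(..., MPI_MAX, ...). */
double jacobi_sweep(int nrows, int ncols, const double *old, double *nxt)
{
    double maxdiff = 0.0;
    /* OpenMP splits the outer loop across the threads of one node; each
     * thread streams through whole rows of both arrays, which is why
     * performance is bounded by the node's memory bandwidth. */
#pragma omp parallel for reduction(max : maxdiff)
    for (int i = 1; i < nrows - 1; i++) {
        for (int j = 1; j < ncols - 1; j++) {
            double v = 0.25 * (old[(i - 1) * ncols + j] +
                               old[(i + 1) * ncols + j] +
                               old[i * ncols + (j - 1)] +
                               old[i * ncols + (j + 1)]);
            nxt[i * ncols + j] = v;
            double d = v - old[i * ncols + j];
            if (d < 0.0) d = -d;
            if (d > maxdiff) maxdiff = d;
        }
    }
    return maxdiff;
}
```

The `reduction(max : maxdiff)` clause requires OpenMP 3.1 or later; compiled without OpenMP support, the pragma is ignored and the sweep simply runs serially with the same result.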

<p>Even more than the previous versions of this code, the multilevel version depends heavily on memory bandwidth to perform well. This is especially apparent on the quad-Xeon cluster, where increasing the total number of processes and threads beyond two actually makes the code run slower. In contrast, the SP runs particularly well in this mode, because its intra-node (memory) bandwidth is both greater than its inter-node (message-passing) bandwidth and sufficient to keep all the processors on a node busy.</p>

</body>
</html>
