<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
   "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
	<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
	<style type="text/css">
	body { font-family: Verdana, Arial, Helvetica, sans-serif; }
	a.at-term { font-style: italic; }
	</style>
	<title>Broadcast</title>
	<meta name="Generator" content="ATutor">
	<meta name="Keywords" content="">
</head>
<body>

<p>Often in parallel algorithms it is desirable to distribute a piece of data to all processes participating in the computation. Rather than coding this by hand, MPI provides the <em><a href="../glossary.html#Broadcast" target="body" class="at-term">broadcast</a></em> function, a one-to-all communication operation.</p>
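
<p>To make the contrast concrete, the sketch below (not part of the original
course material; the helper names <code>distribute_by_hand</code> and
<code>distribute_with_bcast</code> are illustrative assumptions) shows what
"coding this by hand" with point-to-point messages would look like next to
the single collective call:</p>

<pre><code>#include &lt;mpi.h&gt;

/* hand-coded distribution: the root sends the value to each process in turn */
void distribute_by_hand(double *value, int root, MPI_Comm comm) {
  int rank, nprocs, i;
  MPI_Comm_rank(comm, &amp;rank);
  MPI_Comm_size(comm, &amp;nprocs);
  if (rank == root) {
    for (i = 0; i &lt; nprocs; i++)
      if (i != root)
        MPI_Send(value, 1, MPI_DOUBLE, i, 0, comm);
  } else {
    MPI_Recv(value, 1, MPI_DOUBLE, root, 0, comm, MPI_STATUS_IGNORE);
  }
}

/* the equivalent one-to-all communication: a single collective call, which
   lets the MPI library use a more efficient (e.g. tree-based) pattern */
void distribute_with_bcast(double *value, int root, MPI_Comm comm) {
  MPI_Bcast(value, 1, MPI_DOUBLE, root, comm);
}</code></pre>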

<h3>One-to-all communication</h3>

<blockquote>The same data is sent from the root process to all other processes in the communicator.</blockquote>

<p>One characteristic of the broadcast function is that the MPI standard does not guarantee that the function will synchronize the processes.  There are two possible behaviors, depending upon the implementation:</p>

<ul>
<li>
The processes wait until all of them have received the data being
broadcast, at which point they are all released to continue executing.
This behavior synchronizes the processes.</li>

<li>
Each process continues executing as soon as it has received the data
being broadcast.  This behavior does not necessarily synchronize the
processes; if synchronization is required, it must be requested
explicitly (see the sketch after this list).</li>
</ul>
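
<p>When an algorithm needs every process to have passed the broadcast before
any of them continues, the portable approach is to follow the broadcast with
an explicit barrier rather than rely on implementation-specific behavior.  A
minimal sketch (the variable <code>param</code> and root rank 5 follow the
sample program below; this snippet is not part of the original course code):</p>

<pre><code>MPI_Bcast(&amp;param, 1, MPI_DOUBLE, 5, MPI_COMM_WORLD);
MPI_Barrier(MPI_COMM_WORLD);   /* no process passes this point
                                  until every process has reached it */</code></pre>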

<p class="codelang">C Syntax:</p>

<pre><code>int MPI_Bcast(void *buffer, int count, MPI_Datatype datatype,
              int root, MPI_Comm comm)</code></pre>
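
<p>The role of each argument, shown on an illustrative call (the array name
<code>table</code> and root rank 0 are assumptions, not part of the course
material) that broadcasts 100 integers from rank 0 to every process:</p>

<pre><code>int table[100];

/* send buffer on the root, receive buffer everywhere else */
MPI_Bcast(table,          /* buffer  : address of the data            */
          100,            /* count   : number of elements in buffer   */
          MPI_INT,        /* datatype: type of each element           */
          0,              /* root    : rank that supplies the data    */
          MPI_COMM_WORLD  /* comm    : communicator to broadcast over */);</code></pre>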

<p class="codelang">Fortran Syntax:</p>
<pre><code>MPI_BCAST(BUFFER, COUNT, DATATYPE, ROOT, COMM, IERROR)
&lt;type&gt; BUFFER(*)
INTEGER COUNT, DATATYPE, ROOT, COMM, IERROR</code></pre>


<p><em>NOTE:</em> All processes in the communicator must call the broadcast
function, and all must specify the same root process and communicator.  On
the root, the buffer holds the data to be sent; on every other process it is
overwritten with the received data.</p>

<h3>Sample Program - C</h3>

<pre><code>#include &lt;mpi.h&gt;
#include &lt;stdio.h&gt;

int main (int argc, char *argv[]) {
  int rank;
  double param;

  MPI_Init(&amp;argc, &amp;argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &amp;rank);

  /* only the root (rank 5) sets the value, so run with at least 6 processes */
  if (rank == 5) param = 23.0;

  /* every process calls MPI_Bcast with the same root and communicator;
     after the call, param on every process holds the root's value */
  MPI_Bcast(&amp;param, 1, MPI_DOUBLE, 5, MPI_COMM_WORLD);

  printf("P:%d after broadcast parameter is %f\n", rank, param);

  MPI_Finalize();
  return 0;
}</code></pre>

<p class="codelang">Program Output:</p>

<pre><code>P:0 after broadcast parameter is 23.000000
P:6 after broadcast parameter is 23.000000
P:5 after broadcast parameter is 23.000000
P:2 after broadcast parameter is 23.000000
P:3 after broadcast parameter is 23.000000
P:7 after broadcast parameter is 23.000000
P:1 after broadcast parameter is 23.000000
P:4 after broadcast parameter is 23.000000</code></pre>


<h3>Sample Program - Fortran</h3>
<pre><code>PROGRAM broadcast
INCLUDE 'mpif.h'
INTEGER err, rank, size
REAL param

CALL MPI_INIT(err)
CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, err)
CALL MPI_COMM_SIZE(MPI_COMM_WORLD, size, err)

! only the root (rank 5) sets the value, so run with at least 6 processes
IF (rank .EQ. 5) param = 23.0

! every process calls MPI_BCAST with the same root and communicator
CALL MPI_BCAST(param, 1, MPI_REAL, 5, MPI_COMM_WORLD, err)
PRINT *, "P:", rank, " after broadcast param is ", param
CALL MPI_FINALIZE(err)
END</code></pre>

<p class="codelang">Program Output:</p>
<pre><code>P:1 after broadcast parameter is 23.
P:3 after broadcast parameter is 23.
P:4 after broadcast parameter is 23.
P:0 after broadcast parameter is 23.
P:5 after broadcast parameter is 23.
P:6 after broadcast parameter is 23.
P:7 after broadcast parameter is 23.
P:2 after broadcast parameter is 23.</code></pre></body></html>
