<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
  <style type="text/css">
    body { font-family: Verdana, Arial, Helvetica, sans-serif; }
    a.at-term { font-style: italic; }
  </style>
  <title>Problems Specific to MPI</title>
  <meta name="Generator" content="ATutor">
  <meta name="Keywords" content="">
</head>
<body>

<h3>Common syntax errors</h3>

<p>The two most common syntax errors are:</p>

<ol>
  <li>Leaving off the extra error argument in Fortran calls</li>
  <li>Not including the MPI include file</li>
</ol>

<p>The most common error encountered, especially if you program in both C and Fortran, is leaving off the extra error argument in Fortran MPI subroutine calls. This generally results in a runtime error message that does not specifically identify the missing subroutine argument as the culprit.</p>
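<p>A minimal sketch (the program name and layout are illustrative, not part of
the original course text) of the correct Fortran syntax for both points: the
MPI include file is pulled in, and every MPI call carries the extra error
argument that the corresponding C function does not have.</p>

<pre><code>      PROGRAM errarg
!
! Sketch: the two most common syntax errors, done correctly.
!
      INCLUDE 'mpif.h'
! Omitting the INCLUDE above is the second error in the list.
      INTEGER err, rank

      CALL MPI_INIT(err)
! Correct: the trailing error argument "err" is present.  The C-style call
!     CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank)
! usually still compiles, because mpif.h supplies no explicit interfaces,
! but fails at run time with a message that rarely points at the missing
! argument.
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, err)
      print *, "Hello from rank ", rank

      CALL MPI_FINALIZE(err)
      END</code></pre>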

<h3>Mismatch between variable datatype and MPI datatype of message data</h3>

<p>MPI takes the memory address of the variable given in the calling statement as the starting location of the message buffer. It then computes an ending offset from the count and the MPI datatype specified. If the MPI datatype does not match the actual datatype of the variable, two problems may be encountered:</p>

<ol>
  <li>Garbage results</li>
  <li>Runtime errors</li>
</ol>
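<p>For example (a sketch, not part of the original course text; the program
name and buffer sizes are illustrative), the program below declares a DOUBLE
PRECISION buffer but describes it to MPI as MPI_REAL. The extent MPI computes
from the count and the 4-byte MPI_REAL size covers only the first half of the
array, so elements 6 through 10 of the receiving buffer are never written and
print as garbage. A mismatch in the other direction, an MPI datatype larger
than the variable, can run past the end of the buffer and cause runtime
errors.</p>

<pre><code>      PROGRAM mismatch
!
! Sketch: the variable datatype does not match the MPI datatype.
! Run with two processes.
!
      INCLUDE 'mpif.h'
      INTEGER err, rank
      INTEGER status(MPI_STATUS_SIZE)
      DOUBLE PRECISION buf(10)

      CALL MPI_INIT(err)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, err)
      if (rank.eq.1) then
          buf = 3.0d0
! Wrong: buf is DOUBLE PRECISION (8 bytes/element), so the MPI datatype
! should be MPI_DOUBLE_PRECISION rather than MPI_REAL (4 bytes/element).
! Only the first 40 bytes of the 80-byte array are sent.
          call MPI_SEND(buf, 10, MPI_REAL, 0, 55, MPI_COMM_WORLD, err)
      else if (rank.eq.0) then
          call MPI_RECV(buf, 10, MPI_REAL, 1, 55, &amp;
                        MPI_COMM_WORLD, status, err)
! buf(6) through buf(10) were never written and contain garbage.
          print *, "P:", rank, " buf(10)=", buf(10)
      end if
      CALL MPI_FINALIZE(err)
      END</code></pre>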

<h3>Deadlocking due to order of message send/receives</h3>

<p>The following program was used as an example in the MPI Overview chapter. It runs successfully with 2 processes but deadlocks when run with 3. The program logic that leads to the deadlock is:</p>

<ul>
  <li>If rank=1 then send a message to rank=0</li>
  <li>If rank is other than 1, post a receive with the source process wildcarded</li>
</ul>

<p>When run with two processes, the desired behavior is seen. The rank 1 process sends a message to rank 0, and the rank 0 process posts a receive. Since only one message was sent, the rank 0 process picks this message up and both processes terminate normally.</p>

<p>When run with 3 processes, again the rank 1 process sends a message to rank 0.  This time, however, both the rank 0 and rank 2 processes post a receive. Since the message sent from rank 1 specified rank 0 as its destination, the rank 0 receive will complete and both the rank 0 and rank 1 processes will have their completion criteria satisfied. No message was sent to rank 2, however, and the receive that it posted will never meet its completion criteria.</p>

<p>The rank 2 process will wait indefinitely in the MPI_RECV routine and the program will deadlock. </p>


<p class="codelang">C:</p>
<pre><code>#include &lt;mpi.h&gt;
#include &lt;stdio.h&gt;
/* Run with two processes */

int main(int argc, char *argv[]) {
  int rank, i, count;
  float data[100], value[200];
  MPI_Status status;

  MPI_Init(&amp;argc, &amp;argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &amp;rank);
  if (rank == 1) {
    /* Rank 1 fills a buffer and sends one message to rank 0. */
    for (i = 0; i &lt; 100; ++i) data[i] = i;
    MPI_Send(data, 100, MPI_FLOAT, 0, 55, MPI_COMM_WORLD);
  } else {
    /* Every other rank posts a wildcard receive; with more than two
       processes the extra receives are never matched and the program
       deadlocks here. */
    MPI_Recv(value, 200, MPI_FLOAT, MPI_ANY_SOURCE, 55, MPI_COMM_WORLD, &amp;status);
    printf("P:%d Got data from processor %d \n", rank, status.MPI_SOURCE);
    MPI_Get_count(&amp;status, MPI_FLOAT, &amp;count);
    printf("P:%d Got %d elements \n", rank, count);
    printf("P:%d value[5]=%f\n", rank, value[5]);
  }

  MPI_Finalize();
  return 0;
}</code></pre>
    


<p class="codelang">Fortran:</p>
<pre><code>
      PROGRAM p2p
!
! Run with two processes
!
      INCLUDE 'mpif.h'
      INTEGER err, rank, size
      real data(100)
      real value(200)
      integer status(MPI_STATUS_SIZE)
      integer count
      CALL MPI_INIT(err)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD,rank,err)
      CALL MPI_COMM_SIZE(MPI_COMM_WORLD,size,err)
      if (rank.eq.1) then
! Rank 1 fills the buffer and sends one message to rank 0.
          data = 3.0
          call MPI_SEND(data, 100, MPI_REAL, 0, 55, MPI_COMM_WORLD, err)
      else
! Every other rank posts a wildcard receive; with more than two
! processes the extra receives are never matched and the program
! deadlocks here.
          call MPI_RECV(value, 200, MPI_REAL, MPI_ANY_SOURCE, 55, &amp;
                        MPI_COMM_WORLD, status, err)
          print *, "P:", rank, " got data from processor ", status(MPI_SOURCE)
          call MPI_GET_COUNT(status, MPI_REAL, count, err)
          print *, "P:", rank, " got ", count, " elements"
          print *, "P:", rank, " value(5)=", value(5)
      end if

      CALL MPI_FINALIZE(err)
      END</code></pre>
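<p>One way to make the example safe for any number of processes (a sketch
based on the Fortran listing above, not part of the original course text; the
program name is illustrative) is to guarantee that every posted receive has a
matching send, for instance by letting only rank 0 post the wildcard
receive:</p>

<pre><code>      PROGRAM p2pfix
!
! Sketch: the same exchange, but safe for any number of processes.
!
      INCLUDE 'mpif.h'
      INTEGER err, rank
      real data(100)
      real value(200)
      integer status(MPI_STATUS_SIZE)

      CALL MPI_INIT(err)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, err)
      if (rank.eq.1) then
! Rank 1 still sends a single message to rank 0.
          data = 3.0
          call MPI_SEND(data, 100, MPI_REAL, 0, 55, MPI_COMM_WORLD, err)
      else if (rank.eq.0) then
! Only rank 0 posts the wildcard receive, so it is always matched.
          call MPI_RECV(value, 200, MPI_REAL, MPI_ANY_SOURCE, 55, &amp;
                        MPI_COMM_WORLD, status, err)
          print *, "P:", rank, " got data from processor ", status(MPI_SOURCE)
      end if
! Ranks 2, 3, ... do no point-to-point communication and proceed
! directly to MPI_FINALIZE.
      CALL MPI_FINALIZE(err)
      END</code></pre>
</body></html>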
