<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"><head> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /> <style type="text/css"> body { font-family: Verdana, Arial, Helvetica, sans-serif;} a.at-term { font-style: italic; } </style> <title>Sending a Message with the Point-to-Point Protocol</title> <meta name="Generator" content="ATutor"> <meta name="Keywords" content=""></head><body> <p>There are many methods to send a message from one process to another. For the most part, the different variations have different completion criteria and semantics to allow for improved performance in certain situations. Please see the full MPI course for information on the different modes of MPI send as well as for blocking and non-blocking communications.</p>
<p>The standard MPI send, which is a blocking function call, is optimized on each system for the best performance and has the following properties:</p>
<h3>Standard Send</h3>
<ul>
<li>
Completion criteria: completes when the message has been sent, i.e. when the send buffer can safely be reused</li>
<li>
May or may not imply that the message has arrived at its destination</li>
<li>
Has either local or non-local completion semantics, depending on the implementation</li>
</ul>
<p>The standard send, as defined by the MPI standard, is optimized for best performance on each platform. As such, its behavior can vary from system to system. The main difference between implementations is the completion criterion:</p>
<ul>
<li>
Local completion semantics: the call to MPI_SEND copies the data to be communicated into an internal memory buffer and then returns. Completion is independent of any remote process.</li>
<li>
Non-local completion semantics: the call to MPI_SEND waits until all of the data to be communicated has been sent out over the network, so completion can depend on the remote process.</li>
</ul>
<p>Local completion semantics should provide good performance in most situations, because the communication can proceed in the background while the program continues executing. One exception is a system with limited memory bandwidth: there it may be better to wait while the data is sent out over the network than to pay the performance penalty of the memory copies needed to move the data into and out of internal buffers.</p>
<h3>Syntax:</h3>
<p class="codelang">
C:</p>
<pre><code>int MPI_Send(void *buf, int count, MPI_Datatype datatype,
int dest, int tag, MPI_Comm comm)</code></pre>
<p class="codelang">
Fortran:</p>
<pre><code>CALL MPI_SEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR)
&lt;type&gt; BUF(*)
INTEGER COUNT, DATATYPE, DEST, TAG
INTEGER COMM, IERROR</code></pre>
<h3>Example:</h3>
<p>Send an array of 10 real values from the process with rank 0 to the process with rank 1. Intended to be run with exactly two processes.</p>
<p class="codelang">
Fortran:</p>
<pre><code>      PROGRAM SEND_EXAMPLE
      INCLUDE 'mpif.h'
      INTEGER ELEMENTS
      PARAMETER (ELEMENTS=10)
      REAL PRESSURE(ELEMENTS)
      INTEGER MYID, DESTINATION, IERROR, I
      INTEGER STATUS(MPI_STATUS_SIZE)
      CALL MPI_INIT(IERROR)
C     Determine this process's rank
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, MYID, IERROR)
      DESTINATION = 1
      DO I = 1, ELEMENTS
         PRESSURE(I) = I * 2.334
      ENDDO
      IF (MYID.EQ.0) THEN
         CALL MPI_SEND(PRESSURE, ELEMENTS, MPI_REAL, DESTINATION,
     &                 0, MPI_COMM_WORLD, IERROR)
      ELSE IF (MYID.EQ.1) THEN
C        Post the matching receive so the transfer can complete
         CALL MPI_RECV(PRESSURE, ELEMENTS, MPI_REAL, 0,
     &                 0, MPI_COMM_WORLD, STATUS, IERROR)
      ENDIF
      CALL MPI_FINALIZE(IERROR)
      END</code></pre></body></html>