
mpi_alltoallv.3

MPI stands for the Message Passing Interface. Written by the MPI Forum (a large committee comprising a cross-section of industry and research representatives), MPI is a standardized API typically used for parallel and/or distributed computing.
.\"Copyright 2006, Sun Microsystems, Inc..\" Copyright (c) 1996 Thinking Machines Corporation.TH MPI_Alltoallv 3OpenMPI "September 2006" "Open MPI 1.2" " ".SH NAME\fBMPI_Alltoallv\fP \- All processes send different amount of data to, and receive different amount of data from, all processes.SH SYNTAX.ft R.SH C Syntax.nf#include <mpi.h>int MPI_Alltoallv(void *\fIsendbuf\fP, int *\fIsendcounts\fP,	int *\fIsdispls\fP, MPI_Datatype \fIsendtype\fP,	void *\fIrecvbuf\fP, int\fI *recvcounts\fP,	int *\fIrdispls\fP, MPI_Datatype \fIrecvtype\fP, MPI_Comm \fIcomm\fP).SH Fortran Syntax.nfINCLUDE 'mpif.h'MPI_ALLTOALLV(\fISENDBUF, SENDCOUNTS, SDISPLS, SENDTYPE,	RECVBUF, RECVCOUNTS, RDISPLS, RECVTYPE, COMM, IERROR\fP)	<type>	\fISENDBUF(*), RECVBUF(*)\fP	INTEGER	\fISENDCOUNTS(*), SDISPLS(*), SENDTYPE\fP	INTEGER	\fIRECVCOUNTS(*), RDISPLS(*), RECVTYPE\fP	INTEGER	\fICOMM, IERROR\fP.SH C++ Syntax.nf#include <mpi.h>void MPI::Comm::Alltoallv(const void* \fIsendbuf\fP,	const int \fIsendcounts\fP[], const int \fIdispls\fP[],	const MPI::Datatype& \fIsendtype\fP, void* \fIrecvbuf\fP,	const int \fIrecvcounts\fP[], const int \fIrdispls\fP[],	const MPI::Datatype& \fIrecvtype\fP).SH INPUT PARAMETERS.ft R.TP 1.2isendbufStarting address of send buffer..TP 1.2isendcountsInteger array, where entry i specifies the number of elements to sendto rank i..TP 1.2isdisplsInteger array, where entry i specifies the displacement (offset from\fIsendbuf\fP, in units of \fIsendtype\fP) from which to send data torank i..TP 1.2isendtypeDatatype of send buffer elements..TP 1.2irecvcountsInteger array, where entry j specifies the number of elements toreceive from rank j..TP 1.2irdisplsInteger array, where entry j specifies the displacement (offset from\fIrecvbuf\fP, in units of \fIrecvtype\fP) to which data from rank jshould be written..TP 1.2irecvtypeDatatype of receive buffer elements..TP 1.2icommCommunicator over which data is to be exchanged..SH OUTPUT PARAMETERS.ft R.TP 1.2irecvbufAddress of receive buffer..ft R.TP 1.2iIERRORFortran only: Error status..SH DESCRIPTION.ft RMPI_Alltoallv is a generalized collective operation in which allprocesses send data to and receive data from all other processes. Itadds flexibility to MPI_Alltoall by allowing the user to specify datato send and receive vector-style (via a displacement and elementcount). The operation of this routine can be thought of as follows,where each process performs 2n (n being the number of processes incommunicator \fIcomm\fP) independent point-to-point communications(including communication with itself)..sp.nf	MPI_Comm_size(\fIcomm\fP, &n);	for (i = 0, i < n; i++)	    MPI_Send(\fIsendbuf\fP + \fIsdispls\fP[i] * extent(\fIsendtype\fP),	        \fIsendcounts\fP[i], \fIsendtype\fP, i, ..., \fIcomm\fP);	for (i = 0, i < n; i++)	    MPI_Recv(\fIrecvbuf\fP + \fIrdispls\fP[i] * extent(\fIrecvtype\fP),	        \fIrecvcounts\fP[i], \fIrecvtype\fP, i, ..., \fIcomm\fP);.fi.spProcess j sends the k-th block of its local \fIsendbuf\fP to processk, which places the data in the j-th block of its local\fIrecvbuf\fP. .spWhen a pair of processes exchanges data, each may pass differentelement count and datatype arguments so long as the sender specifiesthe same amount of data to send (in bytes) as the receiver expectsto receive..spNote that process i may send a different amount of data to process jthan it receives from process j. 
Also, a process may send entirelydifferent amounts of data to different processes in the communicator..spWHEN COMMUNICATOR IS AN INTER-COMMUNICATOR.spWhen the communicator is an inter-communicator, the gather operation occurs in two phases.  The data is gathered from all the members of the first group and received by all the members of the second group.  Then the data is gathered from all the members of the second group and received by all the members of the first.  The operation exhibits a symmetric, full-duplex behavior.  .spThe first group defines the root process.  The root process uses MPI_ROOT as the value of \fIroot\fR.  All other processes in the first group use MPI_PROC_NULL as the value of \fIroot\fR.  All processes in the second group use the rank of the root process in the first group as the value of \fIroot\fR..spWhen the communicator is an intra-communicator, these groups are the same, and the operation occurs in a single phase..sp  .SH NOTES.ft RThe MPI_IN_PLACE option is not available for any form of all-to-allcommunication..spThe specification of counts and displacements should not causeany location to be written more than once..spAll arguments on all processes are significant. The \fIcomm\fP argument,in particular, must describe the same communicator on all processes..spThe offsets of \fIsdispls\fP and \fIrdispls\fP are measured in unitsof \fIsendtype\fP and \fIrecvtype\fP, respectively. Compare this toMPI_Alltoallw, where these offsets are measured in bytes..SH ERRORS.ft RAlmost all MPI routines return an error value; C routines asthe value of the function and Fortran routines in the last argument. C++functions do not return errors. If the default error handler is set toMPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanismwill be used to throw an MPI:Exception object..spBefore the error value is returned, the current MPI error handler iscalled. By default, this error handler aborts the MPI job, except forI/O function errors. The error handler may be changed withMPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURNmay be used to cause error values to be returned. Note that MPI does notguarantee that an MPI program can continue past an error. .SH SEE ALSO.ft R.nfMPI_AlltoallMPI_Alltoallw' @(#)MPI_Alltoallv.3 1.25 06/03/09
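As a complement to the pseudocode in the DESCRIPTION section above, here is a minimal usage sketch in C (it is not part of the original man page source). It assumes MPI_COMM_WORLD as the communicator and has every rank send exactly one int to every rank, so all counts are 1 and the displacements simply enumerate the blocks; real applications would compute per-destination counts and displacements from their own data decomposition.

/* Minimal MPI_Alltoallv sketch: each of n ranks sends one int to every
 * rank (including itself).  The counts, displacements, and the value
 * encoding are illustrative assumptions, not taken from the man page. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, n;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &n);

    int *sendbuf    = malloc(n * sizeof(int));
    int *recvbuf    = malloc(n * sizeof(int));
    int *sendcounts = malloc(n * sizeof(int));
    int *recvcounts = malloc(n * sizeof(int));
    int *sdispls    = malloc(n * sizeof(int));
    int *rdispls    = malloc(n * sizeof(int));

    for (int i = 0; i < n; i++) {
        sendbuf[i]    = rank * 100 + i;  /* block destined for rank i      */
        sendcounts[i] = 1;               /* one element to each rank       */
        recvcounts[i] = 1;               /* one element from each rank     */
        sdispls[i]    = i;               /* offsets in units of MPI_INT    */
        rdispls[i]    = i;
    }

    MPI_Alltoallv(sendbuf, sendcounts, sdispls, MPI_INT,
                  recvbuf, recvcounts, rdispls, MPI_INT, MPI_COMM_WORLD);

    /* recvbuf[j] now holds the value that rank j addressed to this rank. */
    for (int j = 0; j < n; j++)
        printf("rank %d received %d from rank %d\n", rank, recvbuf[j], j);

    free(sendbuf); free(recvbuf); free(sendcounts);
    free(recvcounts); free(sdispls); free(rdispls);
    MPI_Finalize();
    return 0;
}

With Open MPI this can be compiled with mpicc and launched with mpirun (for example, mpirun -np 4 ./a.out); each rank should print one line per peer showing the value that peer addressed to it.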
