
mpi_alltoallw.3

.\" Copyright 2006, Sun Microsystems, Inc.
.\" Copyright (c) 1996 Thinking Machines Corporation
.TH MPI_Alltoallw 3 "September 2006" "Open MPI 1.2" " "
.SH NAME
\fBMPI_Alltoallw\fP \- All processes send data of different types to, and receive data of different types from, all processes.
.SH SYNTAX
.ft R
.SH C Syntax
.nf
#include <mpi.h>
int MPI_Alltoallw(void *\fIsendbuf\fP, int *\fIsendcounts\fP,
	int *\fIsdispls\fP, MPI_Datatype *\fIsendtypes\fP,
	void *\fIrecvbuf\fP, int *\fIrecvcounts\fP,
	int *\fIrdispls\fP, MPI_Datatype *\fIrecvtypes\fP, MPI_Comm \fIcomm\fP)
.SH Fortran Syntax
.nf
INCLUDE 'mpif.h'
MPI_ALLTOALLW(\fISENDBUF, SENDCOUNTS, SDISPLS, SENDTYPES,
	RECVBUF, RECVCOUNTS, RDISPLS, RECVTYPES, COMM, IERROR\fP)
	<type>	\fISENDBUF(*), RECVBUF(*)\fP
	INTEGER	\fISENDCOUNTS(*), SDISPLS(*), SENDTYPES(*)\fP
	INTEGER	\fIRECVCOUNTS(*), RDISPLS(*), RECVTYPES(*)\fP
	INTEGER	\fICOMM, IERROR\fP
.SH C++ Syntax
.nf
#include <mpi.h>
void MPI::Comm::Alltoallw(const void* \fIsendbuf\fP,
	const int \fIsendcounts\fP[], const int \fIsdispls\fP[],
	const MPI::Datatype \fIsendtypes\fP[], void* \fIrecvbuf\fP,
	const int \fIrecvcounts\fP[], const int \fIrdispls\fP[],
	const MPI::Datatype \fIrecvtypes\fP[])
.SH INPUT PARAMETERS
.ft R
.TP 1.2i
sendbuf
Starting address of send buffer.
.TP 1.2i
sendcounts
Integer array, where entry i specifies the number of elements to send to rank i.
.TP 1.2i
sdispls
Integer array, where entry i specifies the displacement (in bytes, offset from \fIsendbuf\fP) from which to send data to rank i.
.TP 1.2i
sendtypes
Datatype array, where entry i specifies the datatype to use when sending data to rank i.
.TP 1.2i
recvcounts
Integer array, where entry j specifies the number of elements to receive from rank j.
.TP 1.2i
rdispls
Integer array, where entry j specifies the displacement (in bytes, offset from \fIrecvbuf\fP) to which data from rank j should be written.
.TP 1.2i
recvtypes
Datatype array, where entry j specifies the datatype to use when receiving data from rank j.
.TP 1.2i
comm
Communicator over which data is to be exchanged.
.SH OUTPUT PARAMETERS
.ft R
.TP 1.2i
recvbuf
Address of receive buffer.
.ft R
.TP 1.2i
IERROR
Fortran only: Error status.
.SH DESCRIPTION
.ft R
MPI_Alltoallw is a generalized collective operation in which all processes send data to and receive data from all other processes. It adds flexibility to MPI_Alltoallv by allowing the user to specify the datatype of individual data blocks (in addition to displacement and element count). Its operation can be thought of in the following way, where each process performs 2n (n being the number of processes in communicator \fIcomm\fP) independent point-to-point communications (including communication with itself).
.sp
.nf
	MPI_Comm_size(\fIcomm\fP, &n);
	for (i = 0; i < n; i++)
	    MPI_Send(\fIsendbuf\fP + \fIsdispls\fP[i], \fIsendcounts\fP[i],
	        \fIsendtypes\fP[i], i, ..., \fIcomm\fP);
	for (i = 0; i < n; i++)
	    MPI_Recv(\fIrecvbuf\fP + \fIrdispls\fP[i], \fIrecvcounts\fP[i],
	        \fIrecvtypes\fP[i], i, ..., \fIcomm\fP);
.fi
.sp
Process j sends the k-th block of its local \fIsendbuf\fP to process k, which places the data in the j-th block of its local \fIrecvbuf\fP.
.sp
When a pair of processes exchanges data, each may pass different element count and datatype arguments so long as the sender specifies the same amount of data to send (in bytes) as the receiver expects to receive.
.sp
Note that process i may send a different amount of data to process j than it receives from process j. Also, a process may send entirely different amounts and types of data to different processes in the communicator.
.SH WHEN COMMUNICATOR IS AN INTER-COMMUNICATOR
.ft R
When the communicator is an inter-communicator, the exchange of data occurs in two phases. Data is sent from all the members of the first group and received by all the members of the second group. Then data is sent from all the members of the second group and received by all the members of the first. The operation exhibits a symmetric, full-duplex behavior.
.sp
When the communicator is an intra-communicator, the two groups are the same, and the exchange occurs in a single phase. MPI_Alltoallw takes no root argument: all processes both send and receive, so the MPI_ROOT/MPI_PROC_NULL convention used by rooted collectives does not apply.
.SH NOTES
.ft R
The MPI_IN_PLACE option is not available for any form of all-to-all communication.
.sp
The specification of counts, types, and displacements should not cause any location to be written more than once.
.sp
All arguments on all processes are significant. The \fIcomm\fP argument, in particular, must describe the same communicator on all processes.
.sp
The offsets of \fIsdispls\fP and \fIrdispls\fP are measured in bytes. Compare this to MPI_Alltoallv, where these offsets are measured in units of \fIsendtype\fP and \fIrecvtype\fP, respectively.
.SH ERRORS
.ft R
Almost all MPI routines return an error value: C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.
.sp
Before the error value is returned, the current MPI error handler is called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.
.SH SEE ALSO
.ft R
.nf
MPI_Alltoall
MPI_Alltoallv
