
mpi_reduce.3

            ( i              if u < v
        k = ( min(i, j)      if u = v
            ( j              if u > v

Both operations are associative and commutative. Note that if MPI_MAXLOC is
applied to reduce a sequence of pairs (u(0), 0), (u(1), 1), ..., (u(n-1), n-1),
then the value returned is (u, r), where u = max(i) u(i) and r is the index of
the first global maximum in the sequence. Thus, if each process supplies a
value and its rank within the group, then a reduce operation with op =
MPI_MAXLOC will return the maximum value and the rank of the first process
with that value. Similarly, MPI_MINLOC can be used to return a minimum and its
index. More generally, MPI_MINLOC computes a lexicographic minimum, where
elements are ordered according to the first component of each pair, and ties
are resolved according to the second component.

The reduce operation is defined to operate on arguments that consist of a
pair: value and index. For both Fortran and C, types are provided to describe
the pair. The potentially mixed-type nature of such arguments is a problem in
Fortran. The problem is circumvented, for Fortran, by having the MPI-provided
type consist of a pair of the same type as value, and coercing the index to
this type also. In C, the MPI-provided pair type has distinct types and the
index is an int.

In order to use MPI_MINLOC and MPI_MAXLOC in a reduce operation, one must
provide a datatype argument that represents a pair (value and index). MPI
provides nine such predefined datatypes. The operations MPI_MAXLOC and
MPI_MINLOC can be used with each of the following datatypes:

    Fortran:
    Name                     Description
    MPI_2REAL                pair of REALs
    MPI_2DOUBLE_PRECISION    pair of DOUBLE PRECISION variables
    MPI_2INTEGER             pair of INTEGERs

    C:
    Name                     Description
    MPI_FLOAT_INT            float and int
    MPI_DOUBLE_INT           double and int
    MPI_LONG_INT             long and int
    MPI_2INT                 pair of ints
    MPI_SHORT_INT            short and int
    MPI_LONG_DOUBLE_INT      long double and int

The data type MPI_2REAL is equivalent to:

    MPI_TYPE_CONTIGUOUS(2, MPI_REAL, MPI_2REAL)

Similar statements apply for MPI_2INTEGER, MPI_2DOUBLE_PRECISION, and MPI_2INT.

The datatype MPI_FLOAT_INT is as if defined by the following sequence of
instructions:

    type[0] = MPI_FLOAT
    type[1] = MPI_INT
    disp[0] = 0
    disp[1] = sizeof(float)
    block[0] = 1
    block[1] = 1
    MPI_TYPE_STRUCT(2, block, disp, type, MPI_FLOAT_INT)

Similar statements apply for MPI_LONG_INT and MPI_DOUBLE_INT.
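MPI_TYPE_STRUCT above is the MPI-1 name. The sketch below (not part of the
original man page) shows how a pair type with the same layout could be
assembled with the current MPI_Type_create_struct routine. It is only an
illustration of the layout just described: MPI_MAXLOC and MPI_MINLOC expect
the predefined pair datatypes listed above, so user code would normally use
MPI_FLOAT_INT directly.

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        /* Illustrative only: build a user-defined type with the same
         * layout as MPI_FLOAT_INT (a float followed by an int). */
        int          blocks[2] = { 1, 1 };
        MPI_Aint     disps[2]  = { 0, (MPI_Aint) sizeof(float) };
        MPI_Datatype types[2]  = { MPI_FLOAT, MPI_INT };
        MPI_Datatype pairtype;

        MPI_Init(&argc, &argv);

        MPI_Type_create_struct(2, blocks, disps, types, &pairtype);
        MPI_Type_commit(&pairtype);

        /* pairtype now describes the layout discussed above; reductions
         * with MPI_MAXLOC/MPI_MINLOC still take the predefined
         * MPI_FLOAT_INT. */

        MPI_Type_free(&pairtype);
        MPI_Finalize();
        return 0;
    }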
Example 3: Each process has an array of 30 doubles, in C. For each of the 30
locations, compute the value and rank of the process containing the largest
value.

    ...
    /* each process has an array of 30 double: ain[30] */
    double ain[30], aout[30];
    int    ind[30];
    struct {
        double val;
        int    rank;
    } in[30], out[30];
    int i, myrank, root;   /* root and comm are assumed to be set up
                              in the elided code above */

    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    for (i = 0; i < 30; ++i) {
        in[i].val  = ain[i];
        in[i].rank = myrank;
    }
    MPI_Reduce(in, out, 30, MPI_DOUBLE_INT, MPI_MAXLOC, root, comm);
    /* At this point, the answer resides on process root */
    if (myrank == root) {
        /* read ranks out */
        for (i = 0; i < 30; ++i) {
            aout[i] = out[i].val;
            ind[i]  = out[i].rank;
        }
    }

Example 4: Same example, in Fortran.

    ...
    ! each process has an array of 30 double: ain(30)
    DOUBLE PRECISION ain(30), aout(30)
    INTEGER ind(30)
    DOUBLE PRECISION in(2,30), out(2,30)
    INTEGER i, myrank, root, ierr   ! root and comm are set up in the
                                    ! elided code above

    CALL MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
    DO i = 1, 30
        in(1,i) = ain(i)
        in(2,i) = myrank    ! myrank is coerced to a double
    END DO
    CALL MPI_REDUCE(in, out, 30, MPI_2DOUBLE_PRECISION, MPI_MAXLOC, root, comm, ierr)
    ! At this point, the answer resides on process root
    IF (myrank .EQ. root) THEN
        ! read ranks out
        DO i = 1, 30
            aout(i) = out(1,i)
            ind(i)  = out(2,i)  ! rank is coerced back to an integer
        END DO
    END IF

Example 5: Each process has a nonempty array of values. Find the minimum
global value, the rank of the process that holds it, and its index on this
process.

    #define LEN 1000

    float val[LEN];        /* local array of values */
    int count;             /* local number of values */
    int i, myrank, minrank, minindex;
    float minval;          /* root and comm are assumed to be set up
                              elsewhere */
    struct {
        float value;
        int   index;
    } in, out;

    /* local minloc */
    in.value = val[0];
    in.index = 0;
    for (i = 1; i < count; i++)
        if (in.value > val[i]) {
            in.value = val[i];
            in.index = i;
        }

    /* global minloc */
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    in.index = myrank*LEN + in.index;
    MPI_Reduce(&in, &out, 1, MPI_FLOAT_INT, MPI_MINLOC, root, comm);
    /* At this point, the answer resides on process root */
    if (myrank == root) {
        /* read answer out */
        minval   = out.value;
        minrank  = out.index / LEN;
        minindex = out.index % LEN;
    }

All MPI objects (e.g., MPI_Datatype, MPI_Comm) are of type INTEGER in Fortran.

NOTES ON COLLECTIVE OPERATIONS
The reduction functions (MPI_Op) do not return an error value. As a result,
if the functions detect an error, all they can do is either call MPI_Abort or
silently skip the problem. Thus, if you change the error handler from
MPI_ERRORS_ARE_FATAL to something else, for example MPI_ERRORS_RETURN, then
no error may be indicated. The reason for this is the performance problem of
ensuring that all collective routines return the same error value.

ERRORS
Almost all MPI routines return an error value; C routines as the value of the
function and Fortran routines in the last argument. C++ functions do not
return errors. If the default error handler is set to
MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will
be used to throw an MPI::Exception object.

Before the error value is returned, the current MPI error handler is called.
By default, this error handler aborts the MPI job, except for I/O function
errors. The error handler may be changed with MPI_Comm_set_errhandler; the
predefined error handler MPI_ERRORS_RETURN may be used to cause error values
to be returned. Note that MPI does not guarantee that an MPI program can
continue past an error.
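The following C sketch (not part of the original examples) shows one way the
behaviour described above can be used: it installs the predefined
MPI_ERRORS_RETURN handler on MPI_COMM_WORLD and checks the value returned by
MPI_Reduce. As noted above, an error detected inside the reduction function
itself may still not be reported this way.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, rc, sendval, maxval;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Return error codes to the caller instead of aborting the job. */
        MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

        sendval = rank;
        rc = MPI_Reduce(&sendval, &maxval, 1, MPI_INT, MPI_MAX, 0,
                        MPI_COMM_WORLD);
        if (rc != MPI_SUCCESS) {
            char msg[MPI_MAX_ERROR_STRING];
            int  len;
            MPI_Error_string(rc, msg, &len);
            fprintf(stderr, "MPI_Reduce failed: %s\n", msg);
            MPI_Abort(MPI_COMM_WORLD, rc);
        }

        if (rank == 0)
            printf("largest rank = %d\n", maxval);

        MPI_Finalize();
        return 0;
    }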
SEE ALSO
MPI_Allreduce
MPI_Reduce_scatter
MPI_Scan
MPI_Op_create
MPI_Op_free

@(#)MPI_Reduce.3 1.22 06/03/09
