<CENTER><P>Figure 4.3: The root process gathers 100 ints from each process in the group; each set is placed a fixed stride apart</P></CENTER>
<CENTER><P><A NAME="fig4.4"></A><IMG SRC="mpi44.gif" tppabs="http://arch.cs.pku.edu.cn/parallelprogramming/mpispec/mpi44.gif" HEIGHT=201 WIDTH=468></P></CENTER>
<CENTER><P>Figure 4.4: The root gathers column 0 of a 100*150 C array; each set is placed a fixed stride apart</P></CENTER>
<P>Example 4.6: Same as Example 4.5 at the receiving end, but each process sends the 100 ints in column 0 of a 100*150 int array (in C). See <A HREF="#fig4.4">Figure 4.4</A>.</P>
<PRE> MPI_Comm comm;
int gsize, sendarray[100][150];
int root, *rbuf, stride;
MPI_Datatype stype;
int *displs, i, *rcounts;
......
MPI_Comm_size(comm, &gsize);
rbuf = (int *)malloc(gsize*stride*sizeof(int));
displs = (int *)malloc(gsize*sizeof(int));
rcounts = (int *)malloc(gsize*sizeof(int));
for (i=0; i<gsize; ++i) {
displs[i] = i*stride;
rcounts[i] = 100;
}
/* Create a datatype for column 0 of the array */
MPI_Type_vector(100, 1, 150, MPI_INT, &stype);
MPI_Type_commit(&stype);
MPI_Gatherv(sendarray, 1, stype, rbuf, rcounts, displs, MPI_INT,
root, comm);
</PRE>
<P>Example 4.7: Process i sends the (100-i) ints in column i of a 100*150 int array to the root process (in C). The receiving end again sets a stride, as in the two previous examples. See <A HREF="#fig4.5">Figure 4.5</A>.</P>
<PRE> MPI_Comm comm;
int gsize, sendarray[100][150], *sptr;
int root, *rbuf, stride, myrank;
MPI_Datatype stype;
int *displs, i, *rcounts;
......
MPI_Comm_size(comm, &gsize);
MPI_Comm_rank(comm, &myrank);
rbuf = (int *)malloc(gsize*stride*sizeof(int));
displs = (int *)malloc(gsize*sizeof(int));
rcounts = (int *)malloc(gsize*sizeof(int));
for (i=0; i<gsize; ++i) {
displs[i] = i*stride;
rcounts[i] = 100-i; /* this differs from the previous example */
}
/* Create a datatype for the column we are sending */
MPI_Type_vector(100-myrank, 1, 150, MPI_INT, &stype);
MPI_Type_commit(&stype);
/* sptr is the address of the start of column "myrank" */
sptr = &sendarray[0][myrank];
MPI_Gatherv(sptr, 1, stype, rbuf, rcounts, displs, MPI_INT,
root, comm);</PRE>
<P>Note: a different amount of data is received from each process.</P>
<CENTER><P><A NAME="fig4.5"></A><IMG SRC="mpi45.gif" tppabs="http://arch.cs.pku.edu.cn/parallelprogramming/mpispec/mpi45.gif" HEIGHT=206 WIDTH=469></P></CENTER>
<CENTER><P>Figure 4.5: The root gathers the 100-i ints in column i of a 100*150 C array; each set is placed a fixed stride apart</P></CENTER>
<P>Example 4.8: Same as Example 4.7, but done differently at the sending end. We create a datatype that carries the required stride on the send side, so that a column of the C array can be read out directly. This is similar to Example 3.32 in Section <A HREF="mpi312.htm#3.12.7" tppabs="http://arch.cs.pku.edu.cn/parallelprogramming/mpispec/mpi312.htm#3.12.7">3.12.7</A>.</P>
<PRE> MPI_Comm comm;
int gsize,sendarray[100][150],*sptr;
int root,*rbuf,stride,myrank,disp[2],blocklen[2];
MPI_Datatype stype,type[2];
int *displs,i,*rcounts;
......
MPI_Comm_size(comm, &gsize);
MPI_Comm_rank(comm, &myrank);
rbuf = (int *)malloc(gsize*stride*sizeof(int));
displs = (int *)malloc(gsize*sizeof(int));
rcounts = (int *)malloc(gsize*sizeof(int));
for (i=0; i<gsize; ++i) {
displs[i] = i*stride;
rcounts[i] = 100-i;
}
/* Create a datatype for one int, with an extent of one full row */
disp[0] = 0; disp[1] = 150*sizeof(int);
type[0] = MPI_INT; type[1] = MPI_UB;
blocklen[0] = 1; blocklen[1] = 1;
MPI_Type_struct(2, blocklen, disp, type, &stype);
MPI_Type_commit(&stype);
sptr = &sendarray[0][myrank];
MPI_Gatherv(sptr, 100-myrank, stype, rbuf, rcounts, displs,
MPI_INT, root, comm);</PRE>
<P>Example 4.9: Same as Example 4.7 at the sending end, but at the receiving end the stride between successive blocks varies. See <A HREF="#fig4.6">Figure 4.6</A>.</P>
<PRE> MPI_Comm comm;
int gsize,sendarray[100][150],*sptr;
int root,*rbuf,*stride,myrank,bufsize;
MPI_Datatype stype;
int *displs,i,*rcounts,offset;
......
MPI_Comm_size(comm, &gsize);
MPI_Comm_rank(comm, &myrank);
stride = (int *)malloc(gsize*sizeof(int));
......
/* stride[i] is set here for i = 0 to gsize-1 (initialization elided) */
/* First, set up the displs and rcounts vectors */
displs = (int *)malloc(gsize*sizeof(int));
rcounts = (int *)malloc(gsize*sizeof(int));
offset = 0;
for (i=0; i<gsize; ++i) {
displs[i] = offset;
offset += stride[i];
rcounts[i] = 100-i;
}
/* The required size of rbuf is easily obtained */
bufsize = displs[gsize-1]+rcounts[gsize-1];
rbuf = (int *)malloc(bufsize*sizeof(int));
/* Create a datatype for the column we are sending */
MPI_Type_vector(100-myrank, 1, 150, MPI_INT, &stype);
MPI_Type_commit(&stype);
sptr = &sendarray[0][myrank];
MPI_Gatherv(sptr, 1, stype, rbuf, rcounts, displs, MPI_INT,
root, comm);</PRE>
<CENTER><P><A NAME="fig4.6"></A><IMG SRC="mpi46.gif" tppabs="http://arch.cs.pku.edu.cn/parallelprogramming/mpispec/mpi46.gif" HEIGHT=208 WIDTH=475></P></CENTER>
<CENTER><P>Figure 4.6: The root gathers the 100-i ints in column i of a 100*150 C array; set i is placed stride[i] ints apart from the previous one</P></CENTER>
<P>Example 4.10: Process i sends num ints from column i of a 100*150 int array (in C). The difficulty here is that num varies from process to process, and the root does not know its exact values, so they must be gathered first; the data is then placed contiguously at the receiving end.</P>
<PRE> MPI_Comm comm;
int gsize,sendarray[100][150],*sptr;
int root,*rbuf,stride,myrank,disp[2],blocklen[2];
MPI_Datatype stype,type[2];
int *displs,i,*rcounts,num;
......
MPI_Comm_size(comm, &gsize);
MPI_Comm_rank(comm, &myrank);
/* First, have the root gather the num values */
rcounts = (int *)malloc(gsize*sizeof(int));
MPI_Gather(&num, 1, MPI_INT, rcounts, 1, MPI_INT, root, comm);
/* The root now has the correct rcounts; using these we set displs[]
so that the data is placed contiguously at the receive end */
displs = (int *)malloc(gsize*sizeof(int));
displs[0] = 0;
for (i=1; i<gsize; ++i) {
displs[i] = displs[i-1]+rcounts[i-1];
}
/* Create the receive buffer; displs[gsize-1]+rcounts[gsize-1] ints suffice */
rbuf = (int *)malloc((displs[gsize-1]+rcounts[gsize-1])*sizeof(int));
/* Create a datatype for one int, with an extent of one full row */
disp[0] = 0; disp[1] = 150*sizeof(int);
type[0] = MPI_INT; type[1] = MPI_UB;
blocklen[0] = 1; blocklen[1] = 1;
MPI_Type_struct(2, blocklen, disp, type, &stype);
MPI_Type_commit(&stype);
sptr = &sendarray[0][myrank];
MPI_Gatherv(sptr, num, stype, rbuf, rcounts, displs, MPI_INT,
root, comm);</PRE>
<P>
<HR WIDTH="100%"></P>
<TABLE WIDTH="100%" >
<TR>
<TD align=left>Copyright: NPACT </TD>
<TD align=right><A HREF="mpi44.htm" tppabs="http://arch.cs.pku.edu.cn/parallelprogramming/mpispec/mpi44.htm"><IMG SRC="backward.gif" tppabs="http://arch.cs.pku.edu.cn/image/backward.gif" ALT="BACKWARD" HEIGHT=32 WIDTH=32></A><A HREF="mpi46.htm" tppabs="http://arch.cs.pku.edu.cn/parallelprogramming/mpispec/mpi46.htm"><IMG SRC="forward.gif" tppabs="http://arch.cs.pku.edu.cn/image/forward.gif" ALT="FORWARD" HEIGHT=32 WIDTH=32></A>
</TD>
</TR>
</TABLE>
</BODY>
</HTML>