
ch3u_port.c

Fortran parallel computing package
C
Page 1 of 3
/* -*- Mode: C; c-basic-offset:4 ; -*- */
/*
 *  (C) 2001 by Argonne National Laboratory.
 *      See COPYRIGHT in top-level directory.
 */

#include "mpidi_ch3_impl.h"

/*
 * This file replaces ch3u_comm_connect.c and ch3u_comm_accept.c.  These
 * routines need to be used together, particularly since they must exchange
 * information.  In addition, many of the steps that they take are identical,
 * such as building the new process group information after a connection.
 * By having these routines in the same file, it is easier for them
 * to share internal routines and it is easier to ensure that communication
 * between the two root processes (the connector and acceptor) is
 * consistent.
 */

/* FIXME: If dynamic processes are not supported, this file will contain
   no code and some compilers may warn about an "empty translation unit" */
#ifndef MPIDI_CH3_HAS_NO_DYNAMIC_PROCESS

/* FIXME: pg_translation is used for ? */
typedef struct pg_translation {
    int pg_index;    /* index of a process group (index in pg_node) */
    int pg_rank;     /* rank in that process group */
} pg_translation;

typedef struct pg_node {
    int  index;            /* Internal index of process group
                              (see pg_translation) */
    char *pg_id;
    char *str;             /* String describing connection info for pg */
    int   lenStr;          /* Length of this string */
    struct pg_node *next;
} pg_node;

/* These functions help implement the connect/accept algorithm */
static int ExtractLocalPGInfo( MPID_Comm *, pg_translation [],
                               pg_node **, int * );
static int ReceivePGAndDistribute( MPID_Comm *, MPID_Comm *, int, int *,
                                   int, MPIDI_PG_t *[] );
static int SendPGtoPeerAndFree( MPID_Comm *, int *, pg_node * );
static int FreeNewVC( MPIDI_VC_t *new_vc );
static int SetupNewIntercomm( MPID_Comm *comm_ptr, int remote_comm_size,
                              pg_translation remote_translation[],
                              MPIDI_PG_t **remote_pg,
                              MPID_Comm *intercomm );
static int MPIDI_CH3I_Initialize_tmp_comm(MPID_Comm **comm_pptr,
                                          MPIDI_VC_t *vc_ptr, int is_low_group);

/* ------------------------------------------------------------------------- */
/*
 * Structure of this file and the connect/accept algorithm:
 *
 * Here are the steps involved in implementing MPI_Comm_connect and
 * MPI_Comm_accept.  These same steps are used within MPI_Comm_spawn
 * and MPI_Comm_spawn_multiple.
 *
 * First, the connecting process establishes a connection (not a virtual
 * connection!) to the designated accepting process.
 * This makes use of the usual (channel-specific) connection code.
 * Once this connection is established, the connecting process sends a packet
 * (type MPIDI_CH3I_PKT_SC_CONN_ACCEPT) to the accepting process.
 * This packet contains a "port_tag_name", which is a value that
 * is used to separate different MPI port names (values from MPI_Open_port)
 * on the same process (this is a way to multiplex many MPI port names on
 * a single communication connection port).
 *
 * At this point, the accepting process creates a virtual connection (VC)
 * for this connection, initializes it, and sends a packet back with the type
 * MPIDI_CH3I_PKT_SC_OPEN_RESP.  In addition, the connection is saved in
 * an accept queue with the port_tag_name.
 *
 * On the accepting side, the process waits until the progress engine
 * inserts the connect request into the accept queue (this is done with the
 * routine MPIDI_CH3I_Acceptq_dequeue).  This routine returns the matched
 * virtual connection (VC).
 *
 * Once both sides have established their VC, they both invoke
 * MPIDI_CH3I_Initialize_tmp_comm to create a temporary intercommunicator.
 * A temporary intercommunicator is constructed so that we can use
 * MPI routines to send the other information that we need to complete
 * the connect/accept operation (described below).
 *
 * The above is implemented with the routines
 *   MPIDI_Create_inter_root_communicator_connect
 *   MPIDI_Create_inter_root_communicator_accept
 *   MPIDI_CH3I_Initialize_tmp_comm
 *
 * At this point, the two "root" processes of the communicators that are
 * connecting can use MPI communication.  They must then exchange the
 * following information:
 *
 *    The size of the "remote" communicator
 *    Description of all process groups; that is, all of the MPI_COMM_WORLDs
 *    that they know
 *    The shared context id that will be used
 *
 */
/* ------------------------------------------------------------------------- */
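/* Usage sketch (illustrative, not part of the original file): the standard
   MPI calls that drive the connect/accept machinery described above.  How
   the port name travels from server to client (printed here) is an
   assumption of the example; any out-of-band channel works. */
#include <stdio.h>
#include <mpi.h>

static void server_side_example(void)
{
    char port_name[MPI_MAX_PORT_NAME];
    MPI_Comm intercomm;

    MPI_Open_port(MPI_INFO_NULL, port_name);  /* obtain a system port name */
    printf("port: %s\n", port_name);          /* advertise it out of band  */
    /* Blocks until a client connects; on this side the implementation ends
       up in the accept-queue / progress-engine loop shown below. */
    MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_SELF, &intercomm);
    MPI_Close_port(port_name);
    MPI_Comm_disconnect(&intercomm);
}

static void client_side_example(const char *port_name)
{
    MPI_Comm intercomm;
    /* Reaches MPIDI_Comm_connect below through the MPI_Comm_connect binding */
    MPI_Comm_connect(port_name, MPI_INFO_NULL, 0, MPI_COMM_SELF, &intercomm);
    MPI_Comm_disconnect(&intercomm);
}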
/*
 * These next two routines are used to create a virtual connection
 * (VC) and a temporary intercommunicator that can be used to
 * communicate between the two "root" processes for the
 * connect and accept.
 */
#undef FUNCNAME
#define FUNCNAME MPIDI_Create_inter_root_communicator_connect
#undef FCNAME
#define FCNAME MPIDI_QUOTE(FUNCNAME)
static int MPIDI_Create_inter_root_communicator_connect(const char *port_name,
                                                        MPID_Comm **comm_pptr,
                                                        MPIDI_VC_t **vc_pptr)
{
    int mpi_errno = MPI_SUCCESS;
    MPID_Comm *tmp_comm;
    MPIDI_VC_t *connect_vc = NULL;
    MPIDI_STATE_DECL(MPID_STATE_MPIDI_CREATE_INTER_ROOT_COMMUNICATOR_CONNECT);

    MPIDI_FUNC_ENTER(MPID_STATE_MPIDI_CREATE_INTER_ROOT_COMMUNICATOR_CONNECT);

    /* Connect to the root on the other side. Create a
       temporary intercommunicator between the two roots so that
       we can use MPI functions to communicate data between them. */

    mpi_errno = MPIU_CALL(MPIDI_CH3,Connect_to_root(port_name, &connect_vc));
    if (mpi_errno != MPI_SUCCESS) {
        MPIU_ERR_POP(mpi_errno);
    }

    mpi_errno = MPIDI_CH3I_Initialize_tmp_comm(&tmp_comm, connect_vc, 1);
    if (mpi_errno != MPI_SUCCESS) {
        MPIU_ERR_POP(mpi_errno);
    }

    *comm_pptr = tmp_comm;
    *vc_pptr   = connect_vc;

 fn_exit:
    MPIDI_FUNC_EXIT(MPID_STATE_MPIDI_CREATE_INTER_ROOT_COMMUNICATOR_CONNECT);
    return mpi_errno;
 fn_fail:
    goto fn_exit;
}

/* Creates a communicator for the purpose of communicating with one other
   process (the root of the other group).  It also returns the virtual
   connection */
#undef FUNCNAME
#define FUNCNAME MPIDI_Create_inter_root_communicator_accept
#undef FCNAME
#define FCNAME MPIDI_QUOTE(FUNCNAME)
static int MPIDI_Create_inter_root_communicator_accept(const char *port_name,
                                                MPID_Comm **comm_pptr,
                                                MPIDI_VC_t **vc_pptr)
{
    int mpi_errno = MPI_SUCCESS;
    MPID_Comm *tmp_comm;
    MPIDI_VC_t *new_vc = NULL;
    MPID_Progress_state progress_state;
    int port_name_tag;
    MPIDI_STATE_DECL(MPID_STATE_MPIDI_CREATE_INTER_ROOT_COMMUNICATOR_ACCEPT);

    MPIDI_FUNC_ENTER(MPID_STATE_MPIDI_CREATE_INTER_ROOT_COMMUNICATOR_ACCEPT);

    /* extract the tag from the port_name */
    mpi_errno = MPIDI_GetTagFromPort( port_name, &port_name_tag);
    if (mpi_errno != MPIU_STR_SUCCESS) {
        MPIU_ERR_POP(mpi_errno);
    }

    /* FIXME: Describe the algorithm used here, and what routine
       is used on the other side of this connection */
    /* dequeue the accept queue to see if a connection with the
       root on the connect side has been formed in the progress
       engine (the connection is returned in the form of a vc).  If
       not, poke the progress engine. */
    MPID_Progress_start(&progress_state);
    for(;;)
    {
        MPIDI_CH3I_Acceptq_dequeue(&new_vc, port_name_tag);
        if (new_vc != NULL)
        {
            break;
        }

        mpi_errno = MPID_Progress_wait(&progress_state);
        /* --BEGIN ERROR HANDLING-- */
        if (mpi_errno)
        {
            MPID_Progress_end(&progress_state);
            MPIU_ERR_POP(mpi_errno);
        }
        /* --END ERROR HANDLING-- */
    }
    MPID_Progress_end(&progress_state);

    mpi_errno = MPIDI_CH3I_Initialize_tmp_comm(&tmp_comm, new_vc, 0);
    if (mpi_errno != MPI_SUCCESS) {
        MPIU_ERR_POP(mpi_errno);
    }

    *comm_pptr = tmp_comm;
    *vc_pptr   = new_vc;

fn_exit:
    MPIDI_FUNC_EXIT(MPID_STATE_MPIDI_CREATE_INTER_ROOT_COMMUNICATOR_ACCEPT);
    return mpi_errno;
fn_fail:
    goto fn_exit;
}
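/* Data-structure sketch (illustrative, NOT the MPICH implementation): the
   accept queue consulted above by MPIDI_CH3I_Acceptq_dequeue.  The names
   acceptq_entry, acceptq_head, and acceptq_dequeue_sketch are hypothetical;
   only the behavior (match on port_name_tag, return the VC, else NULL so
   the caller drives the progress engine) is taken from the code above. */
#include <stdlib.h>

struct acceptq_entry {
    int                   port_name_tag;  /* tag parsed from the port name */
    MPIDI_VC_t           *vc;             /* VC formed by the progress engine */
    struct acceptq_entry *next;
};

static struct acceptq_entry *acceptq_head = NULL;

static MPIDI_VC_t *acceptq_dequeue_sketch(int port_name_tag)
{
    struct acceptq_entry **pp = &acceptq_head;
    while (*pp != NULL) {
        if ((*pp)->port_name_tag == port_name_tag) {
            struct acceptq_entry *e  = *pp;
            MPIDI_VC_t           *vc = e->vc;
            *pp = e->next;   /* unlink the matched entry */
            free(e);         /* assumes entries were malloc'ed on enqueue */
            return vc;
        }
        pp = &(*pp)->next;
    }
    return NULL;             /* nothing yet; caller waits on progress */
}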
/* This is a utility routine used to initialize temporary communicators
   used in connect/accept operations, and is only used in the above two
   routines */
#undef FUNCNAME
#define FUNCNAME  MPIDI_CH3I_Initialize_tmp_comm
#undef FCNAME
#define FCNAME MPIDI_QUOTE(FUNCNAME)
static int MPIDI_CH3I_Initialize_tmp_comm(MPID_Comm **comm_pptr,
                                          MPIDI_VC_t *vc_ptr, int is_low_group)
{
    int mpi_errno = MPI_SUCCESS;
    MPID_Comm *tmp_comm, *commself_ptr;
    MPIDI_STATE_DECL(MPID_STATE_MPIDI_CH3I_INITIALIZE_TMP_COMM);

    MPIDI_FUNC_ENTER(MPID_STATE_MPIDI_CH3I_INITIALIZE_TMP_COMM);

    MPID_Comm_get_ptr( MPI_COMM_SELF, commself_ptr );

    /* WDG-old code allocated a context id that was then discarded */
    mpi_errno = MPIR_Comm_create(&tmp_comm);
    if (mpi_errno != MPI_SUCCESS) {
        MPIU_ERR_POP(mpi_errno);
    }
    /* fill in all the fields of tmp_comm. */

    /* FIXME: Should we allocate a new context id each time?  If
       so, how do we make sure that each process in this tmp_comm
       has the same context id?
       We can make sure by sending the context id using non-MPI
       communication (e.g., with the initial connection packet)
       before switching to MPI communication.
    */
    tmp_comm->context_id     = 4095;
    tmp_comm->recvcontext_id = tmp_comm->context_id;
    /* FIXME - we probably need a unique context_id. */
    tmp_comm->remote_size = 1;

    /* Fill in new intercomm */
    tmp_comm->local_size   = 1;
    tmp_comm->rank         = 0;
    tmp_comm->comm_kind    = MPID_INTERCOMM;
    tmp_comm->local_comm   = NULL;
    tmp_comm->is_low_group = is_low_group;

    /* No pg structure needed since vc has already been set up
       (connection has been established). */

    /* Point local vcr, vcrt at those of commself_ptr */
    /* FIXME: Explain why */
    tmp_comm->local_vcrt = commself_ptr->vcrt;
    MPID_VCRT_Add_ref(commself_ptr->vcrt);
    tmp_comm->local_vcr  = commself_ptr->vcr;

    /* No pg needed since connection has already been formed.
       FIXME - ensure that the comm_release code does not try to
       free an unallocated pg */

    /* Set up VC reference table */
    mpi_errno = MPID_VCRT_Create(tmp_comm->remote_size, &tmp_comm->vcrt);
    if (mpi_errno != MPI_SUCCESS) {
        MPIU_ERR_SETANDJUMP(mpi_errno, MPI_ERR_OTHER, "**init_vcrt");
    }
    mpi_errno = MPID_VCRT_Get_ptr(tmp_comm->vcrt, &tmp_comm->vcr);
    if (mpi_errno != MPI_SUCCESS) {
        MPIU_ERR_SETANDJUMP(mpi_errno, MPI_ERR_OTHER, "**init_getptr");
    }

    /* FIXME: Why do we do a dup here? */
    MPID_VCR_Dup(vc_ptr, tmp_comm->vcr);

    *comm_pptr = tmp_comm;

    /* FIXME: Who sets?  Why? Where is this defined? Document.
       Why is this not done as part of the VC initialization? */
    /* channels/sshm/include/mpidi_ch3_pre.h defines this */
#ifdef MPIDI_CH3_HAS_CONN_ACCEPT_HOOK
    /* If the VC creates non-duplex connections then the acceptor will
     * need to connect back to form the other half of the connection. */
    /* FIXME: A hook should not be such a specific function; instead,
       it should invoke a function pointer defined in the channel
       interface structure */
    mpi_errno = MPIDI_CH3_Complete_unidirectional_connection( vc_ptr );
#endif

fn_exit:
    MPIDI_FUNC_EXIT(MPID_STATE_MPIDI_CH3I_INITIALIZE_TMP_COMM);
    return mpi_errno;
fn_fail:
    goto fn_exit;
}
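/* Usage note (illustrative): after this routine, each root holds a
   one-process intercommunicator (local_size == remote_size == 1,
   rank == 0), so the peer root is always addressed as rank 0.  The
   root-to-root exchanges in MPIDI_Comm_connect below therefore take
   the form

       mpi_errno = MPIC_Sendrecv(sbuf, n, MPI_INT, 0, sendtag,
                                 rbuf, n, MPI_INT, 0, recvtag,
                                 tmp_comm->handle, MPI_STATUS_IGNORE);

   where sbuf, rbuf, and n are placeholders for this example. */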
/* ------------------------------------------------------------------------- */

/*
   MPIDI_Comm_connect()

   Algorithm: First create a connection (vc) between this root and the
   root on the accept side. Using this vc, create a temporary
   intercomm between the two roots. Use MPI functions to communicate
   the other information needed to create the real intercommunicator
   between the processes on the two sides. Then free the
   intercommunicator between the roots. Most of the complexity is
   because there can be multiple process groups on each side.
*/
#undef FUNCNAME
#define FUNCNAME MPIDI_Comm_connect
#undef FCNAME
#define FCNAME MPIDI_QUOTE(FUNCNAME)
int MPIDI_Comm_connect(const char *port_name, MPID_Info *info, int root,
                       MPID_Comm *comm_ptr, MPID_Comm **newcomm)
{
    int mpi_errno = MPI_SUCCESS;
    int j, i, rank, recv_ints[3], send_ints[3], context_id;
    int remote_comm_size = 0;
    MPID_Comm *tmp_comm = NULL, *intercomm;
    MPIDI_VC_t *new_vc = NULL;
    int sendtag = 100, recvtag = 100, n_remote_pgs;
    int n_local_pgs = 1, local_comm_size;
    pg_translation *local_translation = NULL, *remote_translation = NULL;
    pg_node *pg_list = NULL;
    MPIDI_PG_t **remote_pg = NULL;
    MPIU_CHKLMEM_DECL(3);
    MPIDI_STATE_DECL(MPID_STATE_MPIDI_COMM_CONNECT);

    MPIDI_FUNC_ENTER(MPID_STATE_MPIDI_COMM_CONNECT);

    /* Create the new intercommunicator here. We need to send the
       context id to the other side. */
    mpi_errno = MPIR_Comm_create(newcomm);
    if (mpi_errno) {
        MPIU_ERR_POP(mpi_errno);
    }
    (*newcomm)->recvcontext_id = MPIR_Get_contextid( comm_ptr );
    if ((*newcomm)->recvcontext_id == 0) {
        MPIU_ERR_SETANDJUMP(mpi_errno, MPI_ERR_OTHER, "**toomanycomm" );
    }
    /* (*newcomm)->context_id = (*newcomm)->recvcontext_id; */

    rank            = comm_ptr->rank;
    local_comm_size = comm_ptr->local_size;

    if (rank == root)
    {
        /* Establish a communicator to communicate with the root on the
           other side. */
        mpi_errno = MPIDI_Create_inter_root_communicator_connect(
            port_name, &tmp_comm, &new_vc);
        if (mpi_errno != MPI_SUCCESS) {
            MPIU_ERR_POP(mpi_errno);
        }

        /* Make an array to translate local ranks to process group index
           and rank */
        MPIU_CHKLMEM_MALLOC(local_translation, pg_translation*,
                            local_comm_size*sizeof(pg_translation),
                            mpi_errno, "local_translation");

        /* Make a list of the local communicator's process groups and encode
           them in strings to be sent to the other side.
           The encoded string for each process group contains the process
           group id, size and all its KVS values */
        mpi_errno = ExtractLocalPGInfo( comm_ptr, local_translation,
                                        &pg_list, &n_local_pgs );
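        /* Worked example (illustrative, hypothetical data): if the local
           communicator has three processes, with ranks 0 and 2 drawn from
           one MPI_COMM_WORLD (process group 0) and rank 1 from another
           (process group 1), ExtractLocalPGInfo would yield n_local_pgs == 2
           and a translation array of the form

               local_translation[0] = { 0, 0 };   rank 0 -> pg 0, pg_rank 0
               local_translation[1] = { 1, 0 };   rank 1 -> pg 1, pg_rank 0
               local_translation[2] = { 0, 1 };   rank 2 -> pg 0, pg_rank 1

           i.e., one (pg_index, pg_rank) pair per local rank, which is why
           the translation exchange below moves local_comm_size * 2 ints.
           The pg numbering shown is an assumption of the example. */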
        /* Send the remote root: n_local_pgs, local_comm_size,
           Recv from the remote root: n_remote_pgs, remote_comm_size,
           context_id for newcomm */
        send_ints[0] = n_local_pgs;
        send_ints[1] = local_comm_size;
        send_ints[2] = (*newcomm)->recvcontext_id;

        MPIU_DBG_MSG_FMT(CH3_CONNECT,VERBOSE,(MPIU_DBG_FDEST,
                  "sending 3 ints, %d, %d, and the context id, and receiving 3 ints",
                  send_ints[0], send_ints[1]));
        mpi_errno = MPIC_Sendrecv(send_ints, 3, MPI_INT, 0,
                                  sendtag++, recv_ints, 3, MPI_INT,
                                  0, recvtag++, tmp_comm->handle,
                                  MPI_STATUS_IGNORE);
        if (mpi_errno != MPI_SUCCESS) {
            MPIU_ERR_POP(mpi_errno);
        }
    }

    /* broadcast the received info to local processes */
    MPIU_DBG_MSG(CH3_CONNECT,VERBOSE,"broadcasting the received 3 ints");
    mpi_errno = MPIR_Bcast(recv_ints, 3, MPI_INT, root, comm_ptr);
    if (mpi_errno) {
        MPIU_ERR_POP(mpi_errno);
    }

    n_remote_pgs     = recv_ints[0];
    remote_comm_size = recv_ints[1];
    context_id       = recv_ints[2];

    MPIU_CHKLMEM_MALLOC(remote_pg, MPIDI_PG_t**,
                        n_remote_pgs * sizeof(MPIDI_PG_t*),
                        mpi_errno, "remote_pg");
    MPIU_CHKLMEM_MALLOC(remote_translation, pg_translation*,
                        remote_comm_size * sizeof(pg_translation),
                        mpi_errno, "remote_translation");
    MPIU_DBG_MSG(CH3_CONNECT,VERBOSE,"allocated remote process groups");

    /* Exchange the process groups and their corresponding KVSes */
    if (rank == root)
    {
        mpi_errno = SendPGtoPeerAndFree( tmp_comm, &sendtag, pg_list );
        mpi_errno = ReceivePGAndDistribute( tmp_comm, comm_ptr, root, &recvtag,
                                            n_remote_pgs, remote_pg );
        /* Receive the translations from remote process rank to process group
           index */
        MPIU_DBG_MSG_FMT(CH3_CONNECT,VERBOSE,(MPIU_DBG_FDEST,
               "sending %d ints, receiving %d ints",
               local_comm_size * 2, remote_comm_size * 2));
        mpi_errno = MPIC_Sendrecv(local_translation, local_comm_size * 2,
                                  MPI_INT, 0, sendtag++,
                                  remote_translation, remote_comm_size * 2,
                                  MPI_INT, 0, recvtag++, tmp_comm->handle,
