mpi.qbk
text could come out completely garbled, because one process can start writing "I am a process" before another process has finished writing "of 7.".

[section:point_to_point Point-to-Point communication]

As a message passing library, MPI's primary purpose is to route messages from one process to another, i.e., point-to-point. MPI contains routines that can send messages, receive messages, and query whether messages are available. Each message has a source process, a target process, a tag, and a payload containing arbitrary data. The source and target processes are the ranks of the sender and receiver of the message, respectively. Tags are integers that allow the receiver to distinguish between different messages coming from the same sender.

The following program uses two MPI processes to write "Hello, world!" to the screen (`hello_world.cpp`):

  #include <boost/mpi.hpp>
  #include <iostream>
  #include <boost/serialization/string.hpp>
  namespace mpi = boost::mpi;

  int main(int argc, char* argv[])
  {
    mpi::environment env(argc, argv);
    mpi::communicator world;

    if (world.rank() == 0) {
      world.send(1, 0, std::string("Hello"));
      std::string msg;
      world.recv(1, 1, msg);
      std::cout << msg << "!" << std::endl;
    } else {
      std::string msg;
      world.recv(0, 0, msg);
      std::cout << msg << ", ";
      std::cout.flush();
      world.send(0, 1, std::string("world"));
    }

    return 0;
  }

The first processor (rank 0) passes the message "Hello" to the second processor (rank 1) using tag 0. The second processor prints the string it receives, along with a comma, then passes the message "world" back to processor 0 with a different tag. The first processor then writes this message with the "!" and exits.
All sends are accomplished with the [memberref boost::mpi::communicator::send communicator::send] method and all receives use a corresponding [memberref boost::mpi::communicator::recv communicator::recv] call.

[section:nonblocking Non-blocking communication]

The default MPI communication operations--`send` and `recv`--may have to wait until the entire transmission is completed before they can return. Sometimes this *blocking* behavior has a negative impact on performance, because the sender could be performing useful computation while it is waiting for the transmission to occur. More important, however, are the cases where several communication operations must occur simultaneously, e.g., a process will both send and receive at the same time.

Let's revisit our "Hello, world!" program from the previous section. The core of this program transmits two messages:

  if (world.rank() == 0) {
    world.send(1, 0, std::string("Hello"));
    std::string msg;
    world.recv(1, 1, msg);
    std::cout << msg << "!" << std::endl;
  } else {
    std::string msg;
    world.recv(0, 0, msg);
    std::cout << msg << ", ";
    std::cout.flush();
    world.send(0, 1, std::string("world"));
  }

The first process passes a message to the second process, then prepares to receive a message. The second process does the send and receive in the opposite order. However, this sequence of events is just that--a *sequence*--meaning that there is essentially no parallelism.
We can use non-blocking communication to ensure that the two messages are transmitted simultaneously (`hello_world_nonblocking.cpp`):

  #include <boost/mpi.hpp>
  #include <iostream>
  #include <boost/serialization/string.hpp>
  namespace mpi = boost::mpi;

  int main(int argc, char* argv[])
  {
    mpi::environment env(argc, argv);
    mpi::communicator world;

    if (world.rank() == 0) {
      mpi::request reqs[2];
      std::string msg, out_msg = "Hello";
      reqs[0] = world.isend(1, 0, out_msg);
      reqs[1] = world.irecv(1, 1, msg);
      mpi::wait_all(reqs, reqs + 2);
      std::cout << msg << "!" << std::endl;
    } else {
      mpi::request reqs[2];
      std::string msg, out_msg = "world";
      reqs[0] = world.isend(0, 1, out_msg);
      reqs[1] = world.irecv(0, 0, msg);
      mpi::wait_all(reqs, reqs + 2);
      std::cout << msg << ", ";
    }

    return 0;
  }

We have replaced calls to the [memberref boost::mpi::communicator::send communicator::send] and [memberref boost::mpi::communicator::recv communicator::recv] members with similar calls to their non-blocking counterparts, [memberref boost::mpi::communicator::isend communicator::isend] and [memberref boost::mpi::communicator::irecv communicator::irecv]. The prefix *i* indicates that the operations return immediately with a [classref boost::mpi::request mpi::request] object, which allows one to query the status of a communication request (see the [memberref boost::mpi::request::test test] method) or wait until it has completed (see the [memberref boost::mpi::request::wait wait] method). Multiple requests can be completed at the same time with the [funcref boost::mpi::wait_all wait_all] operation.

If you run this program multiple times, you may see some strange results: namely, some runs will produce:

  Hello, world!

while others will produce:

  world!
  Hello,

or even some garbled version of the letters in "Hello" and "world".
This indicates that there is some parallelism in the program, because after both messages are (simultaneously) transmitted, both processes will concurrently execute their print statements. For both performance and correctness, non-blocking communication operations are critical to many parallel applications using MPI.

[endsect]

[section:user_data_types User-defined data types]

The inclusion of `boost/serialization/string.hpp` in the previous examples is very important: it makes values of type `std::string` serializable, so that they can be transmitted using Boost.MPI. In general, built-in C++ types (`int`s, `float`s, characters, etc.) can be transmitted over MPI directly, while user-defined and library-defined types will need to first be serialized (packed) into a format that is amenable to transmission. Boost.MPI relies on the _Serialization_ library to serialize and deserialize data types.

For types defined by the standard library (such as `std::string` or `std::vector`) and some types in Boost (such as `boost::variant`), the _Serialization_ library already contains all of the required serialization code. In these cases, you need only include the appropriate header from the `boost/serialization` directory.

For types that do not already have a serialization header, you will first need to implement serialization code before the types can be transmitted using Boost.MPI. Consider a simple class `gps_position` that contains members `degrees`, `minutes`, and `seconds`.
This class is made serializable by making it a friend of `boost::serialization::access` and introducing the templated `serialize()` function, as follows:

  class gps_position
  {
  private:
      friend class boost::serialization::access;

      template<class Archive>
      void serialize(Archive & ar, const unsigned int version)
      {
          ar & degrees;
          ar & minutes;
          ar & seconds;
      }

      int degrees;
      int minutes;
      float seconds;
  public:
      gps_position() {}
      gps_position(int d, int m, float s) :
          degrees(d), minutes(m), seconds(s)
      {}
  };

Complete information about making types serializable is beyond the scope of this tutorial. For more information, please see the _Serialization_ library tutorial from which the above example was extracted. One important side benefit of making types serializable for Boost.MPI is that they become serializable for any other usage, such as storing the objects to disk and manipulating them in XML.

Some serializable types, like `gps_position` above, have a fixed amount of data stored at fixed field positions. When this is the case, Boost.MPI can optimize their serialization and transmission to avoid extraneous copy operations. To enable this optimization, users should specialize the type trait [classref boost::mpi::is_mpi_datatype `is_mpi_datatype`], e.g.:

  namespace boost { namespace mpi {
    template <>
    struct is_mpi_datatype<gps_position> : mpl::true_ { };
  } }

For non-template types we have defined a macro to simplify declaring a type as an MPI datatype:

  BOOST_IS_MPI_DATATYPE(gps_position)

For composite traits, the specialization of [classref boost::mpi::is_mpi_datatype `is_mpi_datatype`] may depend on `is_mpi_datatype` itself.
For instance, a `boost::array` object is fixed only when the type of the parameter it stores is fixed:

  namespace boost { namespace mpi {
    template <typename T, std::size_t N>
    struct is_mpi_datatype<array<T, N> >
      : public is_mpi_datatype<T> { };
  } }

The redundant copy elimination optimization can only be applied when the shape of the data type is completely fixed. Variable-length types (e.g., strings, linked lists) and types that store pointers cannot use the optimization, but Boost.MPI will be unable to detect this error at compile time. Attempting to perform this optimization when it is not correct will likely result in segmentation faults and other strange program behavior.

Boost.MPI can transmit any user-defined data type from one process to another. Built-in types can be transmitted without any extra effort; library-defined types require the inclusion of a serialization header; and user-defined types will require the addition of serialization code. Fixed data types can be optimized for transmission using the [classref boost::mpi::is_mpi_datatype `is_mpi_datatype`] type trait.

[endsect]
[endsect]

[section:collectives Collective operations]

[link mpi.point_to_point Point-to-point operations] are the core message passing primitives in Boost.MPI. However, many message-passing applications also require higher-level communication algorithms that combine or summarize the data stored on many different processes. These algorithms support many common tasks such as "broadcast this value to all processes", "compute the sum of the values on all processors", or "find the global minimum."

[section:broadcast Broadcast]

The [funcref boost::mpi::broadcast `broadcast`] algorithm is by far the simplest collective operation. It broadcasts a value from a single process to all other processes within a [classref boost::mpi::communicator communicator]. For instance, the following program broadcasts "Hello, World!" from process 0 to every other process.
(`hello_world_broadcast.cpp`)

  #include <boost/mpi.hpp>
  #include <iostream>
  #include <boost/serialization/string.hpp>
  namespace mpi = boost::mpi;

  int main(int argc, char* argv[])
  {
    mpi::environment env(argc, argv);
    mpi::communicator world;

    std::string value;
    if (world.rank() == 0) {
      value = "Hello, World!";
    }

    broadcast(world, value, 0);

    std::cout << "Process #" << world.rank() << " says " << value
              << std::endl;
    return 0;
  }

Running this program with seven processes will produce a result such as:

[pre
Process #0 says Hello, World!
Process #2 says Hello, World!
Process #1 says Hello, World!
Process #4 says Hello, World!
Process #3 says Hello, World!
Process #5 says Hello, World!
Process #6 says Hello, World!
]

[endsect]

[section:gather Gather]

The [funcref boost::mpi::gather `gather`] collective gathers the values produced by every process in a communicator into a vector of values on the "root" process (specified by an argument to `gather`). The /i/th element in the vector will correspond to the value gathered from the /i/th process. For instance, in the following program each process computes its own random number. All of these random numbers are gathered at process 0 (the "root" in this case), which prints out the values that correspond to each processor. (`random_gather.cpp`)

  #include <boost/mpi.hpp>
  #include <iostream>
  #include <cstdlib>
  namespace mpi = boost::mpi;

  int main(int argc, char* argv[])
  {
    mpi::environment env(argc, argv);
    mpi::communicator world;
