.\"
.\" Man page for ORTE's orterun command.
.\"
.\" .TH name section center-footer left-footer center-header
.TH MPIRUN 1 "March 2006" "Open MPI" "OPEN MPI COMMANDS"
.\" **************************
.\"    Name Section
.\" **************************
.SH NAME
.
orterun, mpirun, mpiexec \- Execute serial and parallel jobs in Open MPI.
.
.B Note:
\fImpirun\fP, \fImpiexec\fP, and \fIorterun\fP are all exact synonyms for
each other.  Using any of the names will result in exactly identical
behavior.
.
.\" **************************
.\"    Synopsis Section
.\" **************************
.SH SYNOPSIS
.
.PP
Single Process Multiple Data (SPMD) Model:

.B mpirun
.R [ options ]
.B <program>
.R [ <args> ]

Multiple Instruction Multiple Data (MIMD) Model:

.B mpirun
.R [ global_options ] [ local_options1 ]
.B <program1>
.R [ <args1> ] : [ local_options2 ]
.B <program2>
.R [ <args2> ] : ... : [ local_optionsN ]
.B <programN>
.R [ <argsN> ]
.P
Note that in both models, invoking \fImpirun\fR via an absolute pathname
is equivalent to specifying the \fI--prefix\fR option with a
\fI<dir>\fR value equivalent to the directory where \fImpirun\fR
resides, minus its last subdirectory.  For example:

    \fBshell$\fP /usr/local/bin/mpirun ...

is equivalent to

    \fBshell$\fP mpirun --prefix /usr/local
.
.\" **************************
.\"    Quick Summary Section
.\" **************************
.SH QUICK SUMMARY
.
If you are simply looking for how to run an MPI application, you
probably want to use a command line of the following form:

    \fBshell$\fP mpirun [ -np X ] [ --hostfile <filename> ] <program>

This will run X copies of \fI<program>\fR in your current run-time
environment (if running under a supported resource manager, Open MPI's
\fImpirun\fR will usually automatically use the corresponding resource
manager process starter, as opposed to, for example, \fIrsh\fR or
\fIssh\fR, which require the use of a hostfile, or will default to
running all X copies on the localhost), scheduling (by default) in a
round-robin fashion by CPU slot.
See the rest of this page for more details.
.
.\" **************************
.\"    Options Section
.\" **************************
.SH OPTIONS
.
.I mpirun
will send the name of the directory where it was invoked on the local
node to each of the remote nodes, and attempt to change to that
directory.  See the "Current Working Directory" section below for
further details.
.\"
.\" Start options listing
.\" Indent 10 characters from start of first column to start of second column
.TP 10
.B <args>
Pass these run-time arguments to every new process.  These must always
be the last arguments to \fImpirun\fP.  If an app context file is used,
\fI<args>\fP will be ignored.
.
.
.TP
.B <program>
The program executable.  This is identified as the first non-recognized
argument to mpirun.
.
.
.TP
.B -aborted\fR,\fP --aborted \fR<#>\fP
Set the maximum number of aborted processes to display.
.
.
.TP
.B --app \fR<appfile>\fP
Provide an appfile, ignoring all other command line options.
.
.
.TP
.B -bynode\fR,\fP --bynode
Allocate (map) the processes by node in a round-robin scheme.
.
.
.TP
.B -byslot\fR,\fP --byslot
Allocate (map) the processes by slot in a round-robin scheme.  This is
the default.
.
.
.TP
.B -c \fR<#>\fP
Synonym for \fI-np\fP.
.
.
.TP
.B -debug\fR,\fP --debug
Invoke the user-level debugger indicated by the
\fIorte_base_user_debugger\fP MCA parameter.
.
.
.TP
.B -debugger\fR,\fP --debugger
Sequence of debuggers to search for when \fI--debug\fP is used (i.e., a
synonym for the \fIorte_base_user_debugger\fP MCA parameter).
.
.
.TP
.B -gmca\fR,\fP --gmca \fR<key> <value>\fP
Pass global MCA parameters that are applicable to all contexts.
\fI<key>\fP is the parameter name; \fI<value>\fP is the parameter value.
.
.
.TP
.B -h\fR,\fP --help
Display help for this command.
.
.TP
.B -H \fR<host1,host2,...,hostN>\fP
Synonym for \fI-host\fP.
.
.
.TP
.B -host\fR,\fP --host \fR<host1,host2,...,hostN>\fP
List of hosts on which to invoke processes.
.
.
.TP
.B -hostfile\fR,\fP --hostfile \fR<hostfile>\fP
Provide a hostfile to use.
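As a sketch of hostfile syntax (hostnames here are placeholders; the
meaning of "slots" and "max_slots" is described in the "Process Slots"
section, below):

```
# Lines beginning with '#' are comments.
# One host per line; slot counts are optional.
node1.example.com
node2.example.com slots=2
node3.example.com slots=4 max_slots=4
```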
.\" JJH - Should have man page for how to format a hostfile properly.
.
.
.TP
.B -machinefile\fR,\fP --machinefile \fR<machinefile>\fP
Synonym for \fI-hostfile\fP.
.
.
.TP
.B -mca\fR,\fP --mca <key> <value>
Send arguments to various MCA modules.  See the "MCA" section, below.
.
.
.TP
.B -n\fR,\fP --n \fR<#>\fP
Synonym for \fI-np\fP.
.
.
.TP
.B -nolocal\fR,\fP --nolocal
Do not run any copies of the launched application on the same node as
orterun is running.  This option will override listing the localhost
with \fB--host\fR or any other host-specifying mechanism.
.
.
.TP
.B -nooversubscribe\fR,\fP --nooversubscribe
Do not oversubscribe any nodes; error (without starting any processes)
if the requested number of processes would cause oversubscription.
This option implicitly sets "max_slots" equal to the "slots" value for
each node.
.
.
.TP
.B -np \fR<#>\fP
Run this many copies of the program on the given nodes.  This option
indicates that the specified file is an executable program and not an
application context.  If no value is provided for the number of copies
to execute (i.e., neither the "-np" option nor its synonyms are
provided on the command line), Open MPI will automatically execute a
copy of the program on each process slot (see below for a description
of a "process slot").  This feature, however, can only be used in the
SPMD model and will return an error (without beginning execution of
the application) otherwise.
.
.
.TP
.B -nw\fR,\fP --nw
Launch the processes and do not wait for their completion.  mpirun will
complete as soon as successful launch occurs.
.
.
.TP
.B -path\fR,\fP --path \fR<path>\fP
<path> that will be used when attempting to locate requested
executables.
.
.
.TP
.B --prefix \fR<dir>\fP
Prefix directory that will be used to set the \fIPATH\fR and
\fILD_LIBRARY_PATH\fR on the remote node before invoking Open MPI or
the target process.
See the "Remote Execution" section, below.
.
.
.TP
.B -q\fR,\fP --quiet
Suppress informative messages from orterun during application
execution.
.
.
.TP
.B --tmpdir \fR<dir>\fP
Set the root for the session directory tree for mpirun only.
.
.
.TP
.B -tv\fR,\fP --tv
Launch processes under the TotalView debugger.  Deprecated backwards
compatibility flag.  Synonym for \fI--debug\fP.
.
.
.TP
.B --universe \fR<username@hostname:universe_name>\fP
For this application, set the universe name as:
username@hostname:universe_name
.
.TP
.B -v\fR,\fP --verbose
Be verbose.
.TP
.B -V\fR,\fP --version
Print version number.  If no other arguments are given, this will also
cause orterun to exit.
.
.
.TP
.B -wd \fR<dir>\fP
Synonym for \fI-wdir\fP.
.
.
.TP
.B -wdir \fR<dir>\fP
Change to the directory <dir> before the user's program executes.
See the "Current Working Directory" section for notes on relative
paths.
.B Note:
If the \fI-wdir\fP option appears both on the command line and in an
application context, the context will take precedence over the command
line.
.
.
.TP
.B -x \fR<env>\fP
Export the specified environment variables to the remote nodes before
executing the program.  Existing environment variables can be
specified (see the Examples section, below), or new variable names
specified with corresponding values.  The parser for the \fI-x\fP
option is not very sophisticated; it does not even understand quoted
values.
Users are advised to set variables in the environment, and then use
\fI-x\fP to export (not define) them.
.
.P
The following options are useful for developers; they are not generally
useful to most ORTE and/or MPI users:
.
.TP
.B -d\fR,\fP --debug-devel
Enable debugging of OpenRTE (the run-time layer in Open MPI).  This is
not generally useful for most users.
.
.
.TP
.B --debug-daemons
Enable debugging of any OpenRTE daemons used by this application.
.
.
.TP
.B --debug-daemons-file
Enable debugging of any OpenRTE daemons used by this application,
storing output in files.
.
.
.TP
.B --no-daemonize
Do not detach OpenRTE daemons used by this application.
.
.\" **************************
.\"    Description Section
.\" **************************
.SH DESCRIPTION
.
One invocation of \fImpirun\fP starts an MPI application running under
Open MPI.  If the application is single process multiple data (SPMD),
the application can be specified on the \fImpirun\fP command line.
If the application is multiple instruction multiple data (MIMD),
comprising multiple programs, the set of programs and arguments can be
specified in one of two ways: extended command line arguments, and an
application context.
.PP
An application context describes the MIMD program set, including all
arguments, in a separate file.
.\" See appcontext(5) for a description of the application context syntax.
This file essentially contains multiple \fImpirun\fP command lines,
less the command name itself.  The ability to specify different
options for different instantiations of a program is another reason to
use an application context.
.PP
Extended command line arguments allow for the description of the
application layout on the command line using colons (\fI:\fP) to
separate the specification of programs and arguments.  Some options
are globally set across all specified programs (e.g., --hostfile),
while others are specific to a single program (e.g., -np).
.
.
.
.SS Process Slots
.
Open MPI uses "slots" to represent a potential location for a process.
Hence, a node with 2 slots means that 2 processes can be launched on
that node.  For performance, the community typically equates a "slot"
with a physical CPU, thus ensuring that any process assigned to that
slot has a dedicated processor.  This is not, however, a requirement
for the operation of Open MPI.
.PP
Slots can be specified in hostfiles after the hostname.  For example:
.
.TP 4
host1.example.com slots=4
Indicates that there are 4 process slots on host1.
.
.PP
If no slots value is specified, then Open MPI will automatically
assign a default value of "slots=1" to that host.
.
.PP
When running under resource managers (e.g., SLURM, Torque, etc.), Open
MPI will obtain both the hostnames and the number of slots directly
from the resource manager.  For example, if running under a SLURM job,
Open MPI will automatically receive the hosts that SLURM has allocated
to the job as well as how many slots on each node SLURM says are
usable; in most high-performance environments, the slots will equate
to the number of processors on the node.
.
.PP
When deciding where to launch processes, Open MPI will first fill up
all available slots before oversubscribing (see "Location
Nomenclature", below, for more details on the scheduling algorithms
available).  Unless told otherwise, Open MPI will arbitrarily
oversubscribe nodes.  For example, if the only node available is the
localhost, Open MPI will run as many processes as specified by the -n
(or one of its variants) command line option on the localhost
(although they may run quite slowly, since they'll all be competing
for CPU and other resources).
.
.PP
Limits can be placed on oversubscription with the "max_slots"
attribute in the hostfile.  For example:
.
.TP 4
host2.example.com slots=4 max_slots=6
Indicates that there are 4 process slots on host2.
Further, Open MPI is limited to launching a maximum of 6 processes on
host2.
.
.TP
host3.example.com slots=2 max_slots=2
Indicates that there are 2 process slots on host3 and that no
oversubscription is allowed (similar to the \fI--nooversubscribe\fR
option).
.
.TP
host4.example.com max_slots=2
Shorthand; same as listing "slots=2 max_slots=2".
.
.
.PP
Note that Open MPI's support for resource managers does not currently
set the "max_slots" values for hosts.  If you wish to prevent
oversubscription in such scenarios, use the \fI--nooversubscribe\fR
option.
.
.PP
In scenarios where the user wishes to launch an application across all
available slots by not providing a "-n" option on the mpirun command
line, Open MPI will launch a process on each process slot for each
host within the provided environment.  For example, if a hostfile has
been provided, then Open MPI will spawn processes on each identified
host up to the "slots=x" limit if oversubscription is not allowed.  If
oversubscription is allowed (the default), then Open MPI will spawn
processes on each host up to the "max_slots=y" limit if that value is
provided.  In all cases, the "-bynode" and "-byslot" mapping
directives will be enforced to ensure proper placement of process
ranks.
.
.
.
.SS Location Nomenclature
.
As described above, \fImpirun\fP can specify arbitrary locations in
the current Open MPI universe.  Locations can be specified either by
CPU or by node.
.B Note:
This nomenclature does not force Open MPI to bind processes to CPUs;
specifying a location "by CPU" is really a convenience mechanism for
SMPs that ultimately maps down to a specific node.
.PP
Specifying locations by node will launch one copy of an executable per
specified node.
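The mapping and hostfile behavior described above can be sketched with
a few command lines.  These are illustrative only: they assume a
working Open MPI installation, and the hostfile name ("myhosts") and
program names ("a.out", "master", "worker") are placeholders.

```shell
# Run 4 copies of a.out, mapping round-robin by slot (the default):
mpirun -np 4 --hostfile myhosts ./a.out

# Same four processes, but mapped round-robin by node instead:
mpirun -np 4 -bynode --hostfile myhosts ./a.out

# MIMD model: one master plus two workers, separated by colons:
mpirun -np 1 ./master : -np 2 ./worker

# Export an existing environment variable to the remote nodes:
mpirun -np 2 -x LD_LIBRARY_PATH ./a.out
```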