ROMIO: A High-Performance, Portable MPI-IO Implementation

Version 2005-06-09

Major Changes in this version:
------------------------------

* Fixed performance problems with the darray and subarray datatypes
  when using MPICH2.

* Better support for building against existing MPICH and MPICH2
  versions.  When building against an existing MPICH installation, use
  the "--with-mpi=mpich" option to ROMIO configure.  For MPICH2, use
  the "--with-mpi=mpich2" option.  These options allow ROMIO to take
  advantage of internal features of these implementations.

* Deprecation of the SFS, HFS, and PIOFS implementations.  These are
  no longer actively supported, although the code will continue to be
  distributed for now.

* Initial support for the Panasas PanFS file system.  PanFS allows
  users to specify the layout of a file at file-creation time.  Layout
  information includes the number of StorageBlades (SBs) across which
  the data is stored, the number of SBs across which a parity stripe
  is written, and the number of consecutive stripes that are placed on
  the same set of SBs.  The panfs_layout_* hints are used only if
  supplied at file-creation time (a short C sketch illustrating these
  hints appears after this list).

  panfs_layout_type - specifies the layout of a file:
                      2 = RAID0
                      3 = RAID5 parity stripes
  panfs_layout_stripe_unit - the size of the stripe unit in bytes
  panfs_layout_total_num_comps - the total number of StorageBlades a
                      file is striped across
  panfs_layout_parity_stripe_width - if the layout type is RAID5
                      parity stripes, the number of StorageBlades in a
                      parity stripe
  panfs_layout_parity_stripe_depth - if the layout type is RAID5
                      parity stripes, the number of contiguous parity
                      stripes written across the same set of SBs
  panfs_layout_visit_policy - if the layout type is RAID5 parity
                      stripes, the policy used to determine the parity
                      stripe to which a given file offset is written:
                      1 = round robin

  PanFS supports a "concurrent write" (CW) mode, in which groups of
  cooperating clients can disable the PanFS consistency mechanisms and
  use their own consistency protocol.  Clients participating in
  concurrent-write mode use application-specific information to
  improve performance while maintaining file consistency.  All clients
  accessing the file(s) must enable concurrent-write mode; if any
  client does not enable it, the PanFS consistency protocol is
  invoked.  Once a file is opened in CW mode on a machine, attempts to
  open it in non-CW mode will fail with EACCES, and vice versa.  The
  following hint enables concurrent-write mode:

  panfs_concurrent_write - if set to 1 at file-open time, the file is
                      opened using the PanFS concurrent-write mode
                      flag.  Concurrent-write mode is not a persistent
                      attribute of the file.

  Below is an example PanFS layout using the following parameters:

  - panfs_layout_type                = 3
  - panfs_layout_total_num_comps     = 100
  - panfs_layout_parity_stripe_width = 10
  - panfs_layout_parity_stripe_depth = 8
  - panfs_layout_visit_policy        = 1

  Parity Stripe Group 1    Parity Stripe Group 2   . . .  Parity Stripe Group 10
  ---------------------    ---------------------          ----------------------
  SB1   SB2  ...  SB10     SB11  SB12 ...  SB20           SB91  SB92  ...  SB100
  ---------------------    ---------------------          ----------------------
  D1    D2   ...  D10      D91   D92  ...  D100           D181  D182  ...  D190
  D11   D12       D20      D101  D102      D110           D191  D192        D193
  D21   D22       D30       .     .         .              .     .           .
  D31   D32       D40
  D41   D42       D50
  D51   D52       D60
  D61   D62       D70
  D71   D72       D80
  D81   D82       D90      D171  D172      D180           D261  D262        D270
  D271  D272      D273      .     .         .              .     .           .

* Initial support for the Globus GridFTP filesystem.  Work
  contributed by Troy Baer (troy@osc.edu).
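As an illustration, here is a minimal C sketch, not part of the ROMIO
distribution, of supplying the PanFS layout and concurrent-write hints
through an MPI_Info object at file-creation time.  The path
/panfs/home/user/testfile and the stripe-unit value are hypothetical;
the layout values mirror the example above.

      #include <mpi.h>

      int main(int argc, char **argv)
      {
          MPI_File fh;
          MPI_Info info;

          MPI_Init(&argc, &argv);
          MPI_Info_create(&info);

          /* layout hints take effect only at file-creation time */
          MPI_Info_set(info, "panfs_layout_type", "3");  /* RAID5 parity stripes */
          MPI_Info_set(info, "panfs_layout_stripe_unit", "65536");  /* hypothetical */
          MPI_Info_set(info, "panfs_layout_total_num_comps", "100");
          MPI_Info_set(info, "panfs_layout_parity_stripe_width", "10");
          MPI_Info_set(info, "panfs_layout_parity_stripe_depth", "8");
          MPI_Info_set(info, "panfs_layout_visit_policy", "1");  /* round robin */

          /* every client that will access the file must set this for
             concurrent-write mode to take effect */
          MPI_Info_set(info, "panfs_concurrent_write", "1");

          MPI_File_open(MPI_COMM_WORLD, "/panfs/home/user/testfile",
                        MPI_MODE_CREATE | MPI_MODE_RDWR, info, &fh);

          MPI_File_close(&fh);
          MPI_Info_free(&info);
          MPI_Finalize();
          return 0;
      }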
Major Changes in Version 1.2.5:
-------------------------------

* Initial support for MPICH2.

* Fixed a bug in which ROMIO would get confused by some permutations
  of the aggregator list.

* Direct I/O on IRIX's XFS should work now.

* Fixed an issue with the Fortran bindings that caused them to fail
  to build with some compilers.

* Initial support for deferred opens.

Major Changes in Version 1.2.4:
-------------------------------

* Added a section describing ROMIO's MPI_FILE_SYNC and MPI_FILE_CLOSE
  behavior to the User's Guide.

* Removed a bug from the PVFS ADIO implementation regarding resize
  operations.

* Added support for PVFS listio operations, including hints to control
  their use.

Major Changes in Version 1.2.3:
-------------------------------

* Enhanced aggregation control via the cb_config_list, romio_cb_read,
  and romio_cb_write hints.

* Asynchronous I/O can be enabled under Linux with the --enable-aio
  argument to configure.

* Additional PVFS support.

* Additional control over data sieving with the romio_ds_read hint.

* NTFS ADIO implementation integrated into the source tree.

* testfs ADIO implementation added for debugging purposes.

Major Changes in Version 1.0.3:
-------------------------------

* When used with MPICH 1.2.1, the MPI-IO functions return proper error
  codes and classes, and the status object is filled in.

* On SGI's XFS file system, ROMIO can use direct I/O even if the
  user's request does not meet the various restrictions needed to use
  direct I/O.  ROMIO does this by doing part of the request with
  buffered I/O (until all the restrictions are met) and doing the rest
  with direct I/O.  (This feature hasn't been tested rigorously.
  Please check for errors.)

  By default, ROMIO will use only buffered I/O.  Direct I/O can be
  enabled either by setting the environment variables MPIO_DIRECT_READ
  and/or MPIO_DIRECT_WRITE to TRUE, or on a per-file basis by using
  the info keys "direct_read" and "direct_write".

  Direct I/O will result in higher performance only if you are
  accessing a high-bandwidth disk system.  Otherwise, buffered I/O is
  better and is therefore used as the default.

* Miscellaneous bug fixes.

Major Changes in Version 1.0.2:
-------------------------------

* Implemented the shared file pointer functions (Section 9.4.4 of
  MPI-2) and the split collective I/O functions (Section 9.4.5).  The
  main components of the MPI-2 I/O chapter not yet implemented are
  therefore file interoperability and error handling.

* Added support for using "direct I/O" on SGI's XFS file system.
  Direct I/O is an optional feature of XFS in which data is moved
  directly between the user's buffer and the storage devices,
  bypassing the file-system cache.  This can improve performance
  significantly on systems with high disk bandwidth.  Without high
  disk bandwidth, regular I/O (which uses the file-system cache)
  performs better; ROMIO therefore does not use direct I/O by default.

  The user can turn on direct I/O (separately for reading and writing)
  either by using environment variables or by using MPI's hints
  mechanism (info).  To use the environment-variables method, do

      setenv MPIO_DIRECT_READ TRUE
      setenv MPIO_DIRECT_WRITE TRUE

  To use the hints method, the two keys are "direct_read" and
  "direct_write".  By default their values are "false"; to turn on
  direct I/O, set them to "true".  The environment variables have
  priority over the info keys: if the environment variables are set to
  TRUE, direct I/O will be used even if the info keys say "false", and
  vice versa.  (A short C sketch of the hints method appears after
  this list.)  Note that direct I/O must be turned on separately for
  reading and writing.  The environment-variables method assumes that
  the environment variables can be read by each process in the MPI
  job.  This is not guaranteed by the MPI Standard, but it works with
  SGI's MPI and the ch_shmem device of MPICH.

* Added support (a new ADIO device, ad_pvfs) for the PVFS parallel
  file system for Linux clusters, developed at Clemson University (see
  http://www.parl.clemson.edu/pvfs ).  To use it, you must first
  install PVFS and then, when configuring ROMIO, specify
  "-file_system=pvfs" in addition to any other options to configure.
  (As usual, you can configure for multiple file systems by using "+";
  for example, "-file_system=pvfs+ufs+nfs".)  You will need to specify
  the path to the PVFS include files via the "-cflags" option to
  configure, for example, "configure -cflags=-I/usr/pvfs/include".
  You will also need to specify the full path name of the PVFS
  library.  The best way to do this is via the "-lib" option to
  MPICH's configure script (assuming you are using ROMIO from within
  MPICH).

* Uses weak symbols (where available) for building the profiling
  version, i.e., the PMPI routines.  As a result, the size of the
  library is reduced considerably.

* The Makefiles use "virtual paths" if supported by the make utility
  (GNU make supports them, for example).  This feature allows you to
  untar the distribution in one directory, say a slow NFS directory,
  and compile the library (the .o files) in another directory, say on
  a faster local disk.  For example, if the tar file has been untarred
  in an NFS directory called /home/thakur/romio, you can compile it in
  a different directory, say /tmp/thakur, as follows:

      cd /tmp/thakur
      /home/thakur/romio/configure
      make

  The .o files will be created in /tmp/thakur; the library will be
  created in /home/thakur/romio/lib/$ARCH/libmpio.a .  This method
  works only if the make utility supports virtual paths.  If the
  default make does not, you can install GNU make, which does, and
  specify it to configure as

      /home/thakur/romio/configure -make=/usr/gnu/bin/gmake

  (or whatever).

* Lots of miscellaneous bug fixes and other enhancements.

* This version is included in MPICH 1.2.0.  If you are using MPICH,
  you need not download ROMIO separately; it gets built as part of
  MPICH.  The previous version of ROMIO is included in LAM, HP MPI,
  SGI MPI, and NEC MPI.  NEC has also implemented the MPI-IO functions
  missing in ROMIO, and therefore NEC MPI has a complete
  implementation of MPI-IO.
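Here is a minimal C sketch, not from the ROMIO distribution, of the
hints method for direct I/O described in the Version 1.0.2 notes
above.  The path is supplied by the caller and the keys take effect
only on XFS; the MPIO_DIRECT_READ and MPIO_DIRECT_WRITE environment
variables, if set, still override these keys.

      #include <mpi.h>

      /* open a file with direct I/O requested for both reading and
         writing via the "direct_read" and "direct_write" info keys */
      void open_with_direct_io(const char *path, MPI_File *fh)
      {
          MPI_Info info;
          MPI_Info_create(&info);
          MPI_Info_set(info, "direct_read", "true");
          MPI_Info_set(info, "direct_write", "true");
          MPI_File_open(MPI_COMM_WORLD, (char *) path,
                        MPI_MODE_CREATE | MPI_MODE_RDWR, info, fh);
          MPI_Info_free(&info);
      }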
Major Changes in Version 1.0.1:
-------------------------------

* This version is included in MPICH 1.1.1 and HP MPI 1.4.

* Added support for the NEC SX-4 and created a new device, ad_sfs, for
  the NEC SFS file system.

* New devices: ad_hfs for the HP/Convex HFS file system and ad_xfs for
  the SGI XFS file system.

* Users no longer need to prefix the filename with the type of file
  system; ROMIO determines the file-system type on its own.

* Added support for 64-bit file sizes on IBM PIOFS, SGI XFS, HP/Convex
  HFS, and NEC SFS file systems.

* MPI_Offset is an 8-byte integer on machines that support 8-byte
  integers.  It is of type "long long" in C and "integer*8" in
  Fortran.  With a Fortran 90 compiler, you can use either integer*8
  or integer(kind=MPI_OFFSET_KIND).  If you printf an MPI_Offset in C,
  remember to use %lld or %ld as required by your compiler.  (See what
  is used in the test program romio/test/misc.c; the sketch after this
  list also illustrates this.)

* On some machines, ROMIO detects at configure time that "long long"
  is either not supported by the C compiler or doesn't work properly.
  In such cases, configure sets MPI_Offset to long in C and integer in
  Fortran.  This happens on Intel Paragon, Sun4, and FreeBSD.

* Added support for passing hints to the implementation via the
  MPI_Info parameter.  ROMIO understands the following hints (keys in
  an MPI_Info object):

  /* on all file systems */
  cb_buffer_size - buffer size for collective I/O
  cb_nodes - no. of processes that actually perform I/O in collective
             I/O
  ind_rd_buffer_size - buffer size for data sieving in independent
             reads

  /* on all file systems except IBM PIOFS */
  ind_wr_buffer_size - buffer size for data sieving in independent
             writes.  (ind_wr_buffer_size is ignored on PIOFS because
             data sieving cannot be done for writes, since PIOFS
             doesn't support file locking.)

  /* on Intel PFS and IBM PIOFS only; these hints are understood only
     if supplied at file-creation time */
  striping_factor - no. of I/O devices to stripe the file across
  striping_unit - the striping unit in bytes
  start_iodevice - the number of the I/O device from which to start
             striping (between 0 and striping_factor-1)

  /* on Intel PFS only */
  pfs_svr_buf - turn PFS server buffering on or off by setting the
             value to "true" or "false" (case-sensitive)

  If ROMIO doesn't understand a hint, or if the value is invalid, the
  hint is ignored.  The values of the hints being used by ROMIO at any
  time can be obtained via MPI_File_get_info, as the sketch after this
  list shows.
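The following is a minimal C sketch, not part of the ROMIO
distribution, of the hint mechanism described above: it supplies two
collective-I/O hints at open time, queries the hints ROMIO is actually
using via MPI_File_get_info, and prints an MPI_Offset with %lld as
discussed in the MPI_Offset item.  The file name and hint values are
illustrative only.

      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char **argv)
      {
          MPI_File fh;
          MPI_Info info, info_used;
          MPI_Offset size;
          int i, nkeys, flag;
          char key[MPI_MAX_INFO_KEY + 1], value[MPI_MAX_INFO_VAL + 1];

          MPI_Init(&argc, &argv);

          /* supply hints at open time */
          MPI_Info_create(&info);
          MPI_Info_set(info, "cb_buffer_size", "4194304");  /* 4-MB collective buffer */
          MPI_Info_set(info, "cb_nodes", "4");  /* 4 processes perform I/O */
          MPI_File_open(MPI_COMM_WORLD, "testfile",
                        MPI_MODE_CREATE | MPI_MODE_RDWR, info, &fh);
          MPI_Info_free(&info);

          /* query and print the hints actually being used */
          MPI_File_get_info(fh, &info_used);
          MPI_Info_get_nkeys(info_used, &nkeys);
          for (i = 0; i < nkeys; i++) {
              MPI_Info_get_nthkey(info_used, i, key);
              MPI_Info_get(info_used, key, MPI_MAX_INFO_VAL, value, &flag);
              printf("%s = %s\n", key, value);
          }
          MPI_Info_free(&info_used);

          /* print an MPI_Offset; use %ld and a (long) cast instead
             if your system lacks "long long" */
          MPI_File_get_size(fh, &size);
          printf("file size = %lld bytes\n", (long long) size);

          MPI_File_close(&fh);
          MPI_Finalize();
          return 0;
      }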
General Information
-------------------

ROMIO is a high-performance, portable implementation of MPI-IO (the
I/O chapter in MPI-2).  ROMIO's home page is at
http://www.mcs.anl.gov/romio .  The MPI-2 standard is available at
http://www.mpi-forum.org/docs/docs.html .

This version of ROMIO includes everything defined in the MPI-2 I/O
chapter except support for file interoperability (Sec. 9.5 of MPI-2)
and user-defined error handlers for files (Sec. 4.13.3).  The subarray
and distributed-array datatype constructor functions from Chapter 4
(Sec. 4.14.4 and 4.14.5) have been implemented.  They are useful for
accessing arrays stored in files (a sketch using the subarray
constructor appears at the end of this file).  The functions
MPI_File_f2c and MPI_File_c2f (Sec. 4.12.4) are also implemented.

C, Fortran, and profiling interfaces are provided for all functions
that have been implemented.

Please read the limitations of this version of ROMIO that are listed
below (e.g., the MPIO_Request object and the restriction to
homogeneous environments).

This version of ROMIO runs on at least the following machines: IBM SP;
Intel Paragon; HP Exemplar; SGI Origin2000; Cray T3E; NEC SX-4; other
symmetric multiprocessors from HP, SGI, DEC, Sun, and IBM; and
networks of workstations (Sun, SGI, HP, IBM, DEC, Linux, and FreeBSD).
Supported file systems are IBM PIOFS, Intel PFS, HP/Convex HFS, SGI
XFS, NEC SFS, PVFS, NFS, and any Unix file system (UFS).

This version of ROMIO is included in MPICH 1.2.3; an earlier version
is included in at least the following MPI implementations: LAM, HP
MPI, SGI MPI, and NEC MPI.

Note that proper I/O error codes and classes are returned and the
status object is filled in only when ROMIO is used with MPICH 1.2.1 or
later.
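As an illustration of the subarray datatype constructor mentioned
above, here is a minimal C sketch, not part of the ROMIO distribution,
in which four processes collectively read one 50x50 quadrant each of a
100x100 integer array stored in row-major order.  The file name
"testfile" and the array sizes are hypothetical.

      #include <stdio.h>
      #include <mpi.h>

      /* assumes exactly 4 processes, arranged as a 2x2 grid */
      int main(int argc, char **argv)
      {
          MPI_File fh;
          MPI_Datatype filetype;
          int rank;
          int gsizes[2] = {100, 100};  /* global array size */
          int lsizes[2] = {50, 50};    /* local block size */
          int starts[2];
          int buf[50 * 50];

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          /* this process's starting offsets in the global array */
          starts[0] = (rank / 2) * 50;
          starts[1] = (rank % 2) * 50;

          MPI_Type_create_subarray(2, gsizes, lsizes, starts,
                                   MPI_ORDER_C, MPI_INT, &filetype);
          MPI_Type_commit(&filetype);

          MPI_File_open(MPI_COMM_WORLD, "testfile", MPI_MODE_RDONLY,
                        MPI_INFO_NULL, &fh);
          MPI_File_set_view(fh, 0, MPI_INT, filetype, "native",
                            MPI_INFO_NULL);
          MPI_File_read_all(fh, buf, 50 * 50, MPI_INT,
                            MPI_STATUS_IGNORE);

          MPI_File_close(&fh);
          MPI_Type_free(&filetype);
          MPI_Finalize();
          return 0;
      }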