
fiber

MPICH is an important implementation of MPI; it provides a set of interface functions and a programming environment for parallel computing.
Notes on using p4 with Fiber Channel

We are just beginning to experiment with fiber channel links on our SP1, and
are currently using p4 as a testbed.  p4 can use the fiber channel links in
two ways: TCP/IP and "direct".  This interface is also being tried on some
other systems.  The file lib/p4_fc.c has the code.

TCP/IP
------
To use the TCP/IP interface, all you need to do is change the hostnames in
a standard p4 procgroup file.  Note that you have to use a specific hostname
in the first line, rather than "local", so that the fiber channel port on the
machine where the program is originally started can be identified.  On the
Argonne SP1, every fourth node starting with node 2 has a fiber channel
connection, so a valid procgroup file might look like:

fcnode2 1 /sphome/lusk/p4test/systest
fcnode6 1 /sphome/lusk/p4test/systest

Then start systest on spnode2.

Our initial measurements (using the systest ring test with two processes) show
a bandwidth of about 2.5 Mbytes/sec.

Direct
------
To use the "direct" interface, you need three additional calls in your
p4 program, at least until we have it better integrated into p4.  After
calling p4_initenv and p4_create_procgroup to start the processes and set
up a few initial socket links, you must call p4_initfc() to initialize the
fiber channel data structures.  Then, instead of p4_send and p4_recv, use
p4_sendfc and p4_recvfc, with the same arguments as p4_send and p4_recv.  You
must supply a buffer for the receive (rather than having p4 allocate it for
you), and you cannot use the wild card option on the receive (you must specify
the source of the message you are receiving).  The program systest_fc.c in the
messages directory is an example; a minimal sketch also appears at the end of
these notes.

The procgroup file should have spnodes, not fcnodes or swnodes.

Using the direct interface, we measured a bandwidth of about 17 Mbytes/sec
with messages ranging from 256K to 1M.  Once high bandwidth has been achieved,
however, the FC card seems to go into a bad state and cannot handle even small
messages.

Both measurements were done with a load of about 2.0 on each node.

This is still a very preliminary implementation, not thoroughly tested or
tuned.  We are currently relying on the programmer to have receives ready when
sends occur.  This has not been a problem with our current examples.  In the
long run we are going to use the IBM DCE thread package to keep an outstanding
receive going all the time.
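For concreteness, here is a minimal sketch of the direct-interface call
sequence, assuming the p4_sendfc/p4_recvfc argument lists mirror p4_send and
p4_recv as stated above, with the receive buffer allocated by the caller.  The
message type, buffer size, and node ids are made up for illustration, and the
exact p4_recvfc parameter form is an assumption; systest_fc.c in the messages
directory remains the authoritative example.

    /*
     * Two-process exchange over the "direct" fiber channel interface,
     * following the call sequence in the notes above.  Assumptions not
     * confirmed by the notes: p4_recvfc takes the caller-supplied buffer
     * directly and treats len as in/out; MSG_TYPE, BUF_SIZE, and the
     * node ids are illustrative only.
     */
    #include <stdio.h>
    #include <string.h>
    #include "p4.h"

    #define MSG_TYPE 100
    #define BUF_SIZE (256*1024)

    static char buf[BUF_SIZE];      /* receiver supplies its own buffer */

    int main(int argc, char **argv)
    {
        int type, from, len;

        p4_initenv(&argc, argv);    /* normal p4 startup */
        p4_create_procgroup();      /* procgroup lists spnodes, not fcnodes */
        p4_initfc();                /* initialize fiber channel structures */

        if (p4_get_my_id() == 0) {
            strcpy(buf, "hello over fiber channel");
            /* same arguments as p4_send */
            p4_sendfc(MSG_TYPE, 1, buf, (int) strlen(buf) + 1);
        } else {
            type = MSG_TYPE;        /* no wild cards: type and source */
            from = 0;               /* must both be given explicitly  */
            len  = BUF_SIZE;
            p4_recvfc(&type, &from, buf, &len);
            printf("node 1 got: %s\n", buf);
        }

        p4_wait_for_end();
        return 0;
    }

Run it the same way as the TCP/IP case, except that the procgroup file should
name spnodes rather than fcnodes.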
