Network Working Group                                         M. Lambert
Request for Comments: 1030        M.I.T. Laboratory for Computer Science
                                                            November 1987

          On Testing the NETBLT Protocol over Divers Networks

STATUS OF THIS MEMO

   This RFC describes the results gathered from testing NETBLT over three networks of differing bandwidths and round-trip delays.  While the results are not complete, the information gathered so far has been very promising and supports RFC-998's assertion that NETBLT can provide very high throughput over networks with very different characteristics.  Distribution of this memo is unlimited.

1. Introduction

   NETBLT (NETwork BLock Transfer) is a transport level protocol intended for the rapid transfer of a large quantity of data between computers.  It provides a transfer that is reliable and flow controlled, and is designed to provide maximum throughput over a wide variety of networks.  The NETBLT protocol is specified in RFC-998; this document assumes an understanding of the specification as described in RFC-998.

   Tests over three different networks are described in this document.  The first network, a 10 megabit-per-second Proteon Token Ring, served as a "reference environment" to determine NETBLT's best possible performance.  The second network, a 10 megabit-per-second Ethernet, served as an access path to the third network, the 3 megabit-per-second Wideband satellite network.  Determining NETBLT's performance over the Ethernet allowed us to account for Ethernet-caused behaviour in NETBLT transfers that used the Wideband network.  Test results for each network are described in separate sections.  The final section presents some conclusions and further directions of research.  The document's appendices list test results in detail.

2. Acknowledgements

   Many thanks are due Bob Braden, Stephen Casner, and Annette DeSchon of ISI for the time they spent analyzing and commenting on test results gathered at the ISI end of the NETBLT Wideband network tests.  Bob Braden was also responsible for porting the IBM PC/AT NETBLT implementation to a SUN-3 workstation running UNIX.  Thanks are also due Mike Brescia, Steven Storch, Claudio Topolcic, and others at BBN who provided much useful information about the Wideband network and helped monitor it during testing.

3. Implementations and Test Programs

   This section briefly describes the NETBLT implementations and test programs used in the testing.  Currently, NETBLT runs on three machine types: Symbolics LISP machines, IBM PC/ATs, and SUN-3s.  The test results described in this paper were gathered using the IBM PC/AT and SUN-3 NETBLT implementations.  The IBM and SUN implementations are very similar; most differences lie in timer and multi-tasking library implementations.  The SUN NETBLT implementation uses UNIX's user-accessible raw IP socket; it is not implemented in the UNIX kernel.

   The test application performs a simple memory-to-memory transfer of an arbitrary amount of data.  All data are actually allocated by the application, given to the protocol layer, and copied into NETBLT packets.  The results are therefore fairly realistic and, with appropriately large amounts of buffering, could be attained by disk-based applications as well.
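   The general shape of such a test can be sketched in C as below.  The sketch is illustrative only: the protocol layer is reduced to a stub that merely segments each application buffer into packet-sized data segments, and the names and constants are not taken from the IBM PC/AT or SUN-3 implementations.

      /*
       * Skeleton of a memory-to-memory test driver.  send_buffer() is a
       * stand-in for handing one NETBLT buffer to the protocol layer; it
       * only performs the buffer-to-packet copies, which is where the
       * real implementations spend their per-packet copy time.
       */
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      #define PACKET_DATA_BYTES 1990    /* data segment of one DATA packet */
      #define BUFFER_BYTES      19900   /* one NETBLT buffer = 10 packets  */

      static void send_buffer(const unsigned char *buf, size_t len)
      {
          unsigned char packet[PACKET_DATA_BYTES];
          size_t off;
          for (off = 0; off < len; off += PACKET_DATA_BYTES) {
              size_t n = len - off;
              if (n > PACKET_DATA_BYTES)
                  n = PACKET_DATA_BYTES;
              memcpy(packet, buf + off, n);   /* buffer-to-packet copy */
              /* a real implementation would format the packet header
                 and transmit the packet here */
          }
      }

      int main(void)
      {
          size_t transfer_size = 2 * 1000 * 1000;   /* 2 million bytes */
          size_t sent;
          unsigned char *buf = malloc(BUFFER_BYTES);

          if (buf == NULL)
              return 1;
          memset(buf, 'x', BUFFER_BYTES);           /* data live only in memory */

          for (sent = 0; sent < transfer_size; ) {
              size_t n = transfer_size - sent;
              if (n > BUFFER_BYTES)
                  n = BUFFER_BYTES;                 /* last buffer may be smaller */
              send_buffer(buf, n);
              sent += n;
          }
          printf("handed %lu bytes to the stub protocol layer\n",
                 (unsigned long)transfer_size);
          free(buf);
          return 0;
      }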
   The test application provides several parameters that can be varied to alter NETBLT's performance characteristics.  The most important of these parameters are:

      burst interval  The number of milliseconds from the start of one
                      burst transmission to the start of the next burst
                      transmission.

      burst size      The number of packets transmitted per burst.

      buffer size     The number of bytes in a NETBLT buffer (all
                      buffers must be the same size, save the last,
                      which can be any size required to complete the
                      transfer).

      data packet size
                      The number of bytes contained in a NETBLT DATA
                      packet's data segment.

      number of outstanding buffers
                      The number of buffers which can be in
                      transmission/error recovery at any given moment.

   The protocol's throughput is measured in two ways.  First, the "real throughput" is throughput as viewed by the user: the number of bits transferred divided by the time from program start to program finish.  Although this is a useful measurement from the user's point of view, another throughput measurement is more useful for analyzing NETBLT's performance.  The "steady-state throughput" is the rate at which data is transmitted as the transfer size approaches infinity.  It does not take into account connection setup time and, more importantly, does not take into account the time spent recovering from packet-loss errors that occur after the last buffer in the transmission is sent out.  For NETBLT transfers using networks with long round-trip delays (and consequently with large numbers of outstanding buffers), this "late" recovery phase can add large amounts of time to the transmission, time which does not reflect NETBLT's peak transmission rate.  The throughputs listed in the test cases that follow are all steady-state throughputs.

4. Implementation Performance

   This section describes the theoretical performance of the IBM PC/AT NETBLT implementation on both the transmitting and receiving sides.  Theoretical performance was measured on two LANs: a 10 megabit-per-second Proteon Token Ring and a 10 megabit-per-second Ethernet.  "Theoretical performance" is defined to be the performance achieved if the sending NETBLT did nothing but transmit data packets, and the receiving NETBLT did nothing but receive data packets.

   Measuring the send side's theoretical performance is fairly easy, since the sending NETBLT does very little more than transmit packets at a predetermined rate.  There are few, if any, factors which can influence the processing speed one way or another.

   Using a Proteon P1300 interface on a Proteon Token Ring, the IBM PC/AT NETBLT implementation can copy a maximum-sized packet (1990 bytes excluding protocol headers) from NETBLT buffer to NETBLT data packet, format the packet header, and transmit the packet onto the network in about 8 milliseconds.  This translates to a maximum theoretical throughput of 1.99 megabits per second.
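   As a quick check on that figure, the maximum theoretical throughput is simply the payload bits per packet divided by the per-packet processing time.  The short calculation below is a sketch using the 1990-byte and 8-millisecond figures quoted above; it reproduces the 1.99 megabit-per-second number.

      /*
       * Maximum theoretical send-side throughput on the Proteon Token
       * Ring: payload bits per DATA packet divided by the ~8 ms the IBM
       * PC/AT needs to copy, format, and transmit one maximum-sized
       * packet.
       */
      #include <stdio.h>

      int main(void)
      {
          const double payload_bytes = 1990.0;  /* data bytes per packet     */
          const double per_packet_ms = 8.0;     /* copy + header + transmit  */

          double mbps = payload_bytes * 8.0 / (per_packet_ms / 1000.0) / 1e6;
          printf("maximum theoretical throughput: %.2f Mb/s\n", mbps); /* ~1.99 */
          return 0;
      }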
   Using a 3COM 3C500 interface on an Ethernet LAN, the same implementation can transmit a maximum-sized packet (1438 bytes excluding protocol headers) in 6.0 milliseconds, for a maximum theoretical throughput of 1.92 megabits per second.

   Measuring the receive side's theoretical performance is more difficult.  Since all timer management and message ACK overhead is incurred at the receiving NETBLT's end, the processing speed can be slightly slower than the sending NETBLT's processing speed (this does not even take into account the demultiplexing overhead that the receiver incurs while matching packets with protocol handling functions and connections).  In fact, the amount by which the two processing speeds differ depends on several factors, the most important of which are the length of the NETBLT buffer list, the number of data timers which may need to be set, and the number of control messages which are ACKed by the data packet.  Almost all of this added overhead is directly related to the number of outstanding buffers allowed during the transfer.  The fewer the outstanding buffers, the shorter the NETBLT buffer list, the faster a scan through that list, and the shorter the list of unacknowledged control messages.

   Assuming a single-outstanding-buffer transfer, the receiving-side NETBLT can DMA a maximum-sized data packet from the Proteon Token Ring into its network interface, copy it from the interface into a packet buffer, and finally copy the packet into the correct NETBLT buffer in 8 milliseconds: the same speed as the sender of data.  Under the same conditions, the implementation can receive a maximum-sized packet from the Ethernet in 6.1 milliseconds, for a maximum theoretical throughput of 1.89 megabits per second.

5. Testing on a Proteon Token Ring

   The Proteon Token Ring used for testing is a 10 megabit-per-second LAN supporting about 40 hosts.  The machines on either end of the transfer were IBM PC/ATs using Proteon P1300 network interfaces.  The Token Ring provides high bandwidth with low round-trip delay and negligible packet loss, making it a good debugging environment compared with situations where packet loss, packet reordering, and long round-trip times would hinder debugging.  Also contributing to high performance is the large (maximum 2046 bytes) network MTU.  The larger packets take somewhat longer to transmit than do smaller packets (8 milliseconds per 2046-byte packet versus 6 milliseconds per 1500-byte packet), but the lessened per-byte computational overhead increases throughput somewhat.

   The fastest single-outstanding-buffer transmission rate was 1.49 megabits per second, achieved using a test case with the following parameters:

      transfer size   2-5 million bytes

      data packet size
                      1990 bytes

      buffer size     19900 bytes

      burst size      5 packets

      burst interval  40 milliseconds.  The timer code on the IBM PC/AT
                      is accurate to within 1 millisecond, so a 40
                      millisecond burst can be timed very accurately.
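   The burst-rate ceiling implied by these parameters can be worked out directly.  The sketch below is illustrative arithmetic only, using the parameter values listed above; it shows how far the measured 1.49 megabits per second falls short of the rate the bursts alone would allow.

      /*
       * Burst-rate ceiling for the single-outstanding-buffer Proteon
       * test: burst_size packets of packet_bytes payload sent every
       * burst_interval milliseconds.
       */
      #include <stdio.h>

      int main(void)
      {
          const int    burst_size        = 5;      /* packets per burst       */
          const double packet_bytes      = 1990.0; /* data bytes per packet   */
          const double burst_interval_ms = 40.0;   /* ms between burst starts */

          double bits_per_burst = burst_size * packet_bytes * 8.0;
          double ceiling_mbps   = bits_per_burst
                                  / (burst_interval_ms / 1000.0) / 1e6;

          /* prints ~1.99 Mb/s; the measured steady-state rate was 1.49 Mb/s */
          printf("burst-rate ceiling: %.2f Mb/s\n", ceiling_mbps);
          return 0;
      }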
   Allowing only one outstanding buffer reduced the protocol to running "lock-step" (the receiver of data sends a GO, the sender sends data, the receiver sends an OK, followed by a GO for the next buffer).  Since the lock-step test incurred one round-trip-delay's worth of overhead per buffer (between transmission of a buffer's last data packet and receipt of an OK for that buffer/GO for the next buffer), a test with two outstanding buffers (providing essentially constant packet transmission) should have resulted in higher throughput.

   A second test, this time with two outstanding buffers, was performed, with the above parameters identical save for an increased burst interval of 43 milliseconds.  The highest throughput recorded was 1.75 megabits per second.  This represents 95% efficiency (5 1990-byte packets every 43 milliseconds gives a maximum theoretical throughput of 1.85 megabits per second).  The increase in throughput over a single-outstanding-buffer transmission occurs because, with two outstanding buffers, there is no round-trip-delay lag between buffer transmissions and the sending NETBLT can transmit constantly.  Because the P1300 interface can transmit and receive concurrently, no packets were dropped due to collision on the interface.

   As mentioned previously, the minimum transmission time for a maximum-sized packet on the Proteon Ring is 8 milliseconds.  One would expect, therefore, that the maximum throughput for a double-buffered transmission would occur with a burst interval of 8 milliseconds times 5 packets per burst, or 40 milliseconds.  This would allow the sender of data to transmit bursts with no "dead time" in between bursts.  Unfortunately, the sender of data must take time to process incoming control messages, which typically forces a 2-3 millisecond gap between bursts, lowering the throughput.  With a burst interval of 43 milliseconds, the incoming packets are processed during the 3 millisecond-per-burst "dead time", making the protocol more efficient.

6. Testing on an Ethernet

   The network used in performing this series of tests was a 10 megabit-per-second Ethernet supporting about 150 hosts.  The machines at either end of the NETBLT connection were IBM PC/ATs using 3COM 3C500 network interfaces.  As with the Proteon Token Ring, the Ethernet provides high bandwidth with low delay.  Unfortunately, the particular Ethernet used for testing (MIT's infamous Subnet 26) is known for being somewhat noisy.  In addition, the 3COM 3C500 Ethernet interfaces are relatively unsophisticated, with only a single hardware packet buffer for both transmitting and receiving packets.  This gives the interface an annoying tendency to drop packets under heavy load.  The combination of these factors made protocol performance analysis somewhat more difficult than on the Proteon Ring.
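   A rough, back-of-the-envelope illustration of why that single buffer matters is sketched below.  The arithmetic is not taken from the measurements in this memo, and the assumption that the buffer must be drained before the next frame arrives is a simplification; it only shows that frames within a burst arrive on the wire far faster than the roughly 6 milliseconds of per-packet receive processing measured earlier.

      /*
       * On a 10 Mb/s Ethernet a ~1500-byte frame occupies the wire for
       * only about 1.2 ms, so a single-buffer interface must be drained
       * several times over during the ~6.1 ms the PC/AT spends receiving
       * one packet, or later frames in the burst are lost.
       */
      #include <stdio.h>

      int main(void)
      {
          const double frame_bytes   = 1500.0;  /* data + headers, approximate  */
          const double ethernet_mbps = 10.0;    /* raw Ethernet bandwidth       */
          const double recv_ms       = 6.1;     /* per-packet receive time      */

          double wire_ms = frame_bytes * 8.0 / (ethernet_mbps * 1e6) * 1000.0;

          printf("frame time on wire:      %.2f ms\n", wire_ms);       /* ~1.2 */
          printf("receive processing:      %.1f ms per packet\n", recv_ms);
          printf("frames arriving per packet processed: about %.0f\n",
                 recv_ms / wire_ms);                                   /* ~5   */
          return 0;
      }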
