rfc1030.txt
   The fastest single-buffer transmission rate was 1.45 megabits per
   second, and was achieved using a test case with the following
   parameters:

      transfer size   2-5 million bytes
      data packet size
                      1438 bytes (maximum size excluding protocol
                      headers).
      buffer size     14380 bytes
      burst size      5 packets
      burst interval  30 milliseconds (6.0 milliseconds x 5 packets).

   A second test, this one with parameters identical to the first save
   for the number of outstanding buffers (2 instead of 1), resulted in
   substantially lower throughput (994 kilobits per second), with a
   large number of packets retransmitted (10%).  The retransmissions
   occurred because the 3COM 3C500 network interface has only one
   hardware packet buffer and cannot hold a transmitting and a
   receiving packet at the same time.  With two outstanding buffers,
   the sender of data can transmit constantly; this means that when
   the receiver of data attempts to send a packet, its interface's
   receive hardware goes deaf to the network and any packets being
   transmitted at the time by the sender of data are lost.  A
   symmetrical problem occurs with control messages sent from the
   receiver of data to the sender of data, but the number of control
   messages sent is small enough, and the retransmission algorithm
   redundant enough, that little performance degradation occurs due to
   control message loss.

   When the burst interval was lengthened from 30 milliseconds per
   5-packet burst to 45 milliseconds per 5-packet burst, a third as
   many packets were dropped, and throughput climbed accordingly, to
   1.12 megabits per second.  Presumably, the longer burst interval
   allowed more dead time between bursts and made it less likely that
   the data receiver's interface would be deaf to the net while the
   sender of data was transmitting a packet.  An interesting note is
   that, when the same test was conducted on a dedicated Ethernet LAN
   whose only two attached hosts were the two NETBLT machines, no
   packets were dropped once the burst interval rose above 40
   milliseconds per 5-packet burst.  The improved performance was
   doubtless due to the absence of extra network traffic.
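   The burst size and burst interval together fix the rate at which
   NETBLT offers data to the network: one burst of packets is sent per
   interval.  The short Python sketch below is an illustration added
   for the reader (the function name and structure are illustrative,
   not part of the NETBLT implementation); it computes the offered bit
   rate for the 30-millisecond and 45-millisecond cases above, assuming
   the 1438-byte data packets used in these tests, and shows that the
   measured throughputs of 1.45 and 1.12 megabits per second sit below
   those offered rates.

   # Illustrative sketch: offered data rate implied by NETBLT burst
   # parameters.  Protocol headers and retransmissions are ignored, so
   # these figures are upper bounds on user-data throughput.

   PACKET_BYTES = 1438        # data bytes per packet, excluding headers

   def offered_rate_bps(burst_size, burst_interval_ms):
       """Bits of user data offered to the network per second."""
       bits_per_burst = burst_size * PACKET_BYTES * 8
       return bits_per_burst / (burst_interval_ms / 1000.0)

   # 5-packet bursts every 30 ms: ~1.92 Mbit/s offered (1.45 Mbit/s measured)
   # 5-packet bursts every 45 ms: ~1.28 Mbit/s offered (1.12 Mbit/s measured)
   print(offered_rate_bps(5, 30), offered_rate_bps(5, 45))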
7.  Testing on the Wideband Network

   The following section describes results gathered using the Wideband
   network.  The Wideband network is a satellite-based network with
   ten stations competing for a raw satellite channel bandwidth of 3
   megabits per second.  Since the various tests resulted in
   substantial changes to the NETBLT specification and implementation,
   some of the major changes are described along with the results and
   problems that forced those changes.

   The Wideband network has several characteristics that make it an
   excellent environment for testing NETBLT.  First, it has an
   extremely long round-trip delay (1.8 seconds).  This provides a
   good test of NETBLT's rate control and multiple-buffering
   capabilities.  NETBLT's rate control allows the packet transmission
   rate to be regulated independently of the maximum allowable amount
   of outstanding data, providing flow control as well as very large
   "windows".  NETBLT's multiple-buffering capability enables data to
   still be transmitted while earlier data are awaiting retransmission
   and subsequent data are being prepared for transmission.  On a
   network with a long round-trip delay, the alternative "lock-step"
   approach would require a 1.8-second gap between each buffer
   transmission, degrading performance.
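   To make the lock-step comparison concrete, the following sketch is
   an illustration added here; it uses the 14320-byte buffers and the
   roughly 1.05 megabit-per-second usable channel rate that appear in
   the Wideband tests described below, and estimates the throughput
   ceiling if the sender had to wait a full round trip after each
   buffer.

   # Illustrative sketch: throughput ceiling of a lock-step scheme on
   # the Wideband network, where the sender waits one full round trip
   # after transmitting each buffer before sending the next.

   BUFFER_BYTES = 14320       # one NETBLT buffer (as in the tests below)
   RTT_SECONDS = 1.8          # Wideband network round-trip delay
   CHANNEL_BPS = 1.05e6       # approximate usable channel rate (see below)

   buffer_bits = BUFFER_BYTES * 8
   transmit_time = buffer_bits / CHANNEL_BPS    # ~0.11 s to send one buffer
   lockstep_bps = buffer_bits / (RTT_SECONDS + transmit_time)

   print(round(lockstep_bps))  # roughly 60,000 bit/s, far below the channel rate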
   Another interesting characteristic of the Wideband network is its
   throughput.  Although its raw bandwidth is 3 megabits per second,
   at the time of these tests fully 2/3 of that was consumed by
   low-level network overhead and hardware limitations.  (A detailed
   analysis of the overhead appears at the end of this document.)
   This reduces the available bandwidth to just over 1 megabit per
   second.  Since the NETBLT implementation can run substantially
   faster than that, testing over the Wideband net allows us to
   measure NETBLT's ability to utilize very high percentages of
   available bandwidth.

   Finally, the Wideband net has some interesting packet reorder and
   delay characteristics that provide a good test of NETBLT's ability
   to deal with these problems.

   Testing progressed in several phases.  The first phase involved
   using source-routed packets in a path from an IBM PC/AT on MIT's
   Subnet 26, through a BBN Butterfly Gateway, over a T1 link to BBN,
   onto the Wideband network, back down into a BBN Voice Funnel, and
   onto ISI's Ethernet to another IBM PC/AT.  Testing proceeded fairly
   slowly, due to gateway software and source-routing bugs.  Once a
   connection was finally established, we recorded a best throughput
   of approximately 90 kilobits per second.

   Several problems contributed to the low throughput.  First, the
   gateways at either end were forwarding packets onto their
   respective LANs faster than the IBM PC/ATs could accept them (the
   3COM 3C500 interface would not have time to re-enable input before
   another packet would arrive from the gateway).  Even with bursts of
   size 1, spaced 6 milliseconds apart, the gateways would aggregate
   groups of packets coming from the same satellite frame, and send
   them faster than the PC could receive them.  The obvious result was
   many dropped packets, and degraded performance.  Also, the
   half-duplex nature of the 3COM interface caused incoming packets to
   be dropped while packets were being sent.

   The number of packets dropped on the sending NETBLT side due to the
   long interface re-enable time was reduced by packing as many
   control messages as possible into a single control packet (rather
   than placing only one message in a control packet).  This reduced
   the number of control packets transmitted to one per buffer
   transmission, which the PC was able to handle.  In particular,
   messages of the form OK(n) were combined with messages of the form
   GO(n + 1), in order to prevent two control packets from arriving
   too close together to both be received.

   Performance degradation from dropped control packets was also
   minimized by changing to a highly redundant control packet
   transmission algorithm.  Control messages are now stored in a
   single long-lived packet, with ACKed messages continuously bumped
   off the head of the packet and new messages added at the tail of
   the packet.  Every time a new message needs to be transmitted, any
   unACKed old messages are transmitted as well.  The sending NETBLT,
   which receives these control messages, is tuned to ignore duplicate
   messages with almost no overhead.  This transmission redundancy
   puts little reliance on the NETBLT control timer, further reducing
   performance degradation from lost control packets.
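   The following Python sketch (a hypothetical illustration; the class
   and method names are not those of the actual implementation) shows
   the shape of this control-message handling: new messages such as
   OK(n) and GO(n+1) are appended to one long-lived control packet,
   every transmission resends whatever has not yet been acknowledged,
   and acknowledged messages are dropped from the head.

   # Hypothetical sketch of the receiving NETBLT's control-message queue
   # described above; names and structure are illustrative only.
   from collections import deque

   class ControlQueue:
       def __init__(self, send_packet):
           self.send_packet = send_packet  # puts one control packet on the wire
           self.next_seq = 0               # sequence number for the next message
           self.pending = deque()          # unACKed (seq, message) pairs, oldest first

       def queue_and_send(self, *messages):
           """Add new messages (e.g. 'OK(3)', 'GO(4)') and transmit a single
           packet carrying every message not yet acknowledged."""
           for msg in messages:
               self.pending.append((self.next_seq, msg))
               self.next_seq += 1
           self.send_packet(list(self.pending))

       def ack_through(self, acked_seq):
           """Drop messages the data sender has acknowledged; anything still
           pending simply rides along in the next packet as well."""
           while self.pending and self.pending[0][0] <= acked_seq:
               self.pending.popleft()

   # Example: the OK for buffer 3 and the GO for buffer 4 travel together,
   # so the PC's interface need only receive one control packet per buffer.
   cq = ControlQueue(send_packet=lambda msgs: print("control packet:", msgs))
   cq.queue_and_send("OK(3)", "GO(4)")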
   Although the effect of dropped packets on the receiving NETBLT
   could not be completely eliminated, it was reduced somewhat by some
   changes to the implementation.  Data packets from the sending
   NETBLT are guaranteed to be transmitted by buffer number, lowest
   number first.  In some cases, this allowed the receiving NETBLT to
   make retransmit-request decisions for a buffer N if packets for N
   were expected but none had been received by the time packets for a
   buffer N+M arrived.  This optimization was somewhat complicated,
   but improved NETBLT's performance in the face of missing packets.
   Unfortunately, the dropped-packet problem remained until the NETBLT
   implementation was ported to a SUN-3 workstation.  The SUN is able
   to handle the incoming packets quite well, dropping only 0.5% of
   the data packets (as opposed to the PC's 15-20%).

   Another problem with the Wideband network was its tendency to
   re-order and delay packets.  Dealing with these problems required
   several changes in the implementation.  Previously, the NETBLT
   implementation was "optimized" to generate retransmit requests as
   soon as possible, if possible without relying on expiration of a
   data timer.  For instance, when the receiving NETBLT received an
   LDATA packet for a buffer N, and other packets in buffer N had not
   arrived, the receiver would immediately generate a RESEND for the
   missing packets.  Similarly, under certain circumstances, the
   receiver would generate a RESEND for a buffer N if packets for N
   were expected and had not arrived before packets for a buffer N+M.
   Obviously, packet reordering made these "optimizations" generate
   retransmit requests unnecessarily.  In the first case, the
   implementation was changed to no longer generate a retransmit
   request on receipt of an LDATA with other packets missing in the
   buffer.  In the second case, a data timer was set with an updated
   (and presumably more accurate) value, hopefully allowing any
   re-ordered packets to arrive before the timer expired and generated
   a retransmit request.

   It is difficult to accommodate Wideband network packet delay in the
   NETBLT implementation.  Packet delays tend to occur in multiples of
   600 milliseconds, due to the Wideband network's datagram
   reservation scheme.  A timer value calculation algorithm that used
   a fixed variance on the order of 600 milliseconds would cause
   performance degradation when packets were lost.  On the other hand,
   short fixed variance values would not react well to the long delays
   possible on the Wideband net.  Our solution has been to use an
   adaptive data timer value calculation algorithm.  The algorithm
   maintains an average inter-packet arrival value, and uses that to
   determine the data timer value.  If the inter-packet arrival time
   increases, the data timer value will lengthen.
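   A minimal sketch of such an adaptive calculation appears below; it
   is an illustration only, and the exponentially weighted average,
   the smoothing constant ALPHA, and the safety multiplier are
   assumptions rather than the actual implementation's formula.  The
   essential behaviour matches the description above: as the observed
   inter-packet arrival time grows, so does the data timer.

   # Illustrative adaptive data-timer sketch; the constants and the use
   # of an exponentially weighted average are assumptions, not the
   # NETBLT implementation's actual formula.
   import time

   ALPHA = 0.125          # weight given to each new inter-packet sample
   SAFETY_FACTOR = 4      # timer margin over the expected arrival time

   class AdaptiveDataTimer:
       def __init__(self, initial_interval_s=0.1):
           self.avg_interval = initial_interval_s  # smoothed inter-packet time
           self.last_arrival = None

       def packet_arrived(self, now=None):
           """Update the running average on each arriving data packet."""
           now = time.monotonic() if now is None else now
           if self.last_arrival is not None:
               sample = now - self.last_arrival
               self.avg_interval += ALPHA * (sample - self.avg_interval)
           self.last_arrival = now

       def timer_value(self, packets_outstanding):
           """Data timer for a buffer still missing this many packets; it
           lengthens automatically if packets start arriving more slowly."""
           return SAFETY_FACTOR * packets_outstanding * self.avg_interval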
   At this point, testing proceeded between NETBLT implementations on
   a SUN-3 workstation and an IBM PC/AT.  The arrival of a Butterfly
   Gateway at ISI eliminated the need for source-routed packets; some
   performance improvement was also expected because the Butterfly
   Gateway is optimized for IP datagram traffic.

   In order to put the best Wideband network test results in context,
   a short analysis follows, showing the best throughput expected on a
   fully loaded channel.  Again, a detailed analysis of the numbers
   that follow appears at the end of this document.

   The best possible datagram rate over the current Wideband
   configuration is 24,054 bits per channel frame, or 3006 bytes every
   21.22 milliseconds.  Since the transmission route begins and ends
   on an Ethernet, the largest amount of data transmissible (after
   accounting for packet header overhead) is 1438 bytes per packet.
   This translates to approximately 2 packets per frame.  Since we
   want to avoid overflowing the channel, we should transmit slightly
   slower than the channel frame rate of 21.2 milliseconds.  We
   therefore arrived at a best possible throughput of two 1438-byte
   packets every 22 milliseconds, or 1.05 megabits per second.

   Because of possible software bugs in either the Butterfly Gateway
   or the BSAT (gateway-to-earth-station interface), 1438-byte packets
   were fragmented before transmission over the Wideband network,
   causing packet delay and poor performance.  The best throughput was
   achieved with the following values:

      transfer size   500,000 - 750,000 bytes
      data packet size
                      1432 bytes
      buffer size     14320 bytes
      burst size      5 packets
      burst interval  55 milliseconds

   Steady-state throughputs ranged from 926 kilobits per second to 942
   kilobits per second, approximately 90% channel utilization.  The
   amount of data transmitted should have been an order of magnitude
   higher, in order to get a longer steady-state period;
   unfortunately, at the time we were testing, the Ethernet interface
   of ISI's Butterfly Gateway would lock up fairly quickly (in 40-60
   seconds) at packet rates of approximately 90 per second, forcing a
   gateway reset.  Transmissions therefore had to take less than this
   amount of time.  This problem has reportedly been fixed since the
   tests were conducted.

   In order to test the Wideband network under overload conditions, we
   attempted several tests at rates of five 1432-byte packets every 50
   milliseconds.  At this rate, the Wideband network ground to a halt
   as four of the ten network BSATs immediately crashed and reset
   their channel processor nodes.  Apparently, the BSATs crash because
   the ESI (Earth Station Interface), which sends data from the BSAT
   to the satellite, stops its transmit clock to the BSAT if it runs
   out of buffer space.  The BIO interface connecting BSAT and ESI
   does not tolerate this clock-stopping, and typically locks up,
   forcing the channel processor node to reset.  A more sophisticated
   interface, allowing faster transmissions, is being installed in the
   near future.

8.  Future Directions

   Some more testing needs to be performed over the Wideband network
   in order to get a complete analysis of NETBLT's performance.  Once
   the Butterfly Gateway Ethernet interface lockup problem described
   earlier has been fixed, we want to perform transmissions of 10 to
   50 million bytes to get accurate steady-state throughput results.
   We also want to run several NETBLT processes in parallel, each
   tuned to take a fraction of the Wideband network's available
   bandwidth.  Hopefully, this will demonstrate whether burst
   synchronization across different NETBLT processes will cause
   network congestion or failure.  Once the BIO BSAT-ESI interface is
   upgraded, we will want to try for higher throughputs, as well as
   greater hardware stability under overload conditions.
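   As a rough planning aid for these parallel-transfer tests, the
   sketch below is an illustration only; the function is hypothetical
   and the fixed 5-packet burst is an assumption carried over from the
   single-process tests.  It estimates the burst interval that would
   give each of N concurrent NETBLT processes an equal share of the
   roughly 1.05 megabit-per-second usable channel derived above.  For
   a single process the same formula yields about 55 milliseconds,
   matching the burst interval used in the best-throughput test.

   # Hypothetical planning sketch: burst interval per process so that
   # n parallel NETBLT transfers together fill the usable Wideband
   # channel.  The channel rate and packet size come from the analysis
   # above; the burst size is an assumption from the earlier tests.

   CHANNEL_BPS = 1.05e6       # approximate usable Wideband channel rate
   PACKET_BYTES = 1432        # data bytes per packet over the Wideband path
   BURST_SIZE = 5             # packets per burst (assumed)

   def burst_interval_ms(n_processes):
       """Per-process burst interval whose aggregate rate fills the channel."""
       per_process_bps = CHANNEL_BPS / n_processes
       bits_per_burst = BURST_SIZE * PACKET_BYTES * 8
       return 1000.0 * bits_per_burst / per_process_bps

   # One process: ~55 ms.  Two processes: ~109 ms each.  Four: ~218 ms each.
   print([round(burst_interval_ms(n)) for n in (1, 2, 4)])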
