Another problem, noticed while testing the effects of slow start over
a satellite link, was that at times the retransmission timer was set
so restrictively that, milliseconds before a naked packet's ack was
received, the retransmission timer would go off due to a timed packet
within the send window.  The timer was set to the round trip delay of
the network, allowing no time for packet processing.  If this timer
goes off due to congestion, then backing off is the right thing to
do; otherwise, to avoid the scenario discovered by experimentation,
the retransmission timer should be set a little longer so that it
does not go off too early.  Care must be taken in the implementation
in question so that a packet is not retransmitted too soon and the
loss blamed on congestion when, in fact, the ack is on its way.
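The timer policy described above can be sketched as a smoothed-RTT
estimator in the spirit of [1].  This is an illustrative sketch only:
the function name, gains, and margin constant are assumptions, not
taken from any shipped TCP implementation.

```python
# Sketch of a retransmission timeout (RTO) that leaves headroom beyond
# the bare round-trip delay, roughly following the smoothed estimators
# of [1].  All constants are illustrative.

def update_rto(srtt, rttvar, sample, alpha=0.125, beta=0.25, margin=0.1):
    """Update smoothed RTT and deviation from a new RTT sample (seconds)
    and return an RTO strictly larger than the measured delay."""
    err = sample - srtt
    srtt = srtt + alpha * err                      # smoothed round-trip estimate
    rttvar = rttvar + beta * (abs(err) - rttvar)   # smoothed deviation
    # Setting the timer to srtt alone reproduces the failure described
    # above: the timer fires milliseconds before the ack arrives.  The
    # deviation term plus a small floor covers packet processing time.
    rto = srtt + 4 * rttvar + margin
    return srtt, rttvar, rto
```

For a steady 580 ms satellite round trip with little variance, this
keeps the timer comfortably above the raw delay instead of equal to it.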
Fox [Page 9]
RFC 1106 TCP Big Window and Nak Options June 1989
4.3 Duplicate Acks
Another problem found in the 4.3bsd implementation is in the area of
duplicate acks.  When the sender of data receives a certain number of
acks (3 in the current Berkeley release) that acknowledge previously
acked data, it assumes that a packet has been lost, resends the one
packet assumed lost, and closes its send window as if the network
were congested; the slow start algorithm mentioned above is then used
to reopen the send window.  This facility is no longer needed, since
the sender can use the reception of a nak as its indicator that a
particular packet was dropped.  If the nak packet is lost, the
retransmit timer will go off and the packet will be retransmitted by
normal means.  If a sender's algorithm continues to count duplicate
acks, the sender may receive many duplicate acks after it has already
resent the packet in response to a nak, because of the large size of
the data pipe.  On receiving all of these duplicate acks, the sender
may find itself doing nothing but resending the same packet of data
unnecessarily while keeping the send window closed for no reason.  By
removing this feature of the implementation, a user can expect a
satellite connection to work much better in the face of errors, while
other connections should see no performance loss, and perhaps a
slight improvement.
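The sender behavior argued for above can be sketched as follows.  The
class and method names are hypothetical, chosen only to illustrate the
policy: retransmit once per nak, and treat duplicate acks as carrying
no new loss information.

```python
# Illustrative sender logic once naks are available: a lost segment is
# resent on the nak, not on a count of duplicate acks.  Structure is
# hypothetical, not from any particular TCP implementation.

class NakSender:
    def __init__(self):
        self.retransmitted = set()   # sequence numbers already resent

    def on_nak(self, seq):
        """Resend the naked segment exactly once per loss event."""
        if seq not in self.retransmitted:
            self.retransmitted.add(seq)
            return f"retransmit {seq}"
        return None                  # already resent; ignore the repeat

    def on_duplicate_ack(self, seq):
        """Duplicate acks are expected after a loss on a long, fat pipe.
        With naks in use they signal nothing new, so neither retransmit
        nor shrink the send window here."""
        return None
```

This avoids the pathology described above, where the large data pipe
delivers a long train of duplicate acks and the old algorithm keeps
resending the same segment with the window closed.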
5. Conclusion
This paper has described two new options that, if used, will make TCP
a more efficient protocol in the face of errors and a more efficient
protocol over networks that have a high bandwidth*delay product,
without decreasing performance over more common networks.  If a
system that implements the options talks with one that does not, the
two systems should still be able to communicate without problems,
provided the non-implementing system does not use the option numbers
defined in this paper in some other way and does not panic when faced
with an option it does not implement.  Currently at NASA, many
machines that do not implement either option communicate just fine
with the systems that do.
The drive for implementing big windows has been the direct result of
trying to make TCP more efficient over large delay networks [2,3,4,5]
such as a T1 satellite.  However, another practical use of large
windows is becoming more apparent as the local area networks being
developed become faster and support much larger MTUs.  Hyperchannel,
for instance, has been stated to support 1 megabit MTUs in its new
line of products.  With the current implementation of TCP,
Hyperchannel is not used as efficiently as it could be, because the
physical medium's MTU is larger than the maximum window of the
protocol being used.  By increasing the TCP window size, better
utilization of networks like Hyperchannel will be gained immediately,
because the sender can send 64 Kbyte packets (an IP limitation)
without having to operate in a stop-and-wait fashion.
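The window needed to keep a pipe full is the bandwidth*delay product,
and a quick calculation shows that for the T1 satellite link measured
in the Appendix it already exceeds TCP's standard 64 Kbyte maximum
window.  The figures below are the link parameters used elsewhere in
this memo; the script is only a back-of-the-envelope check.

```python
# Back-of-the-envelope check that a standard 64K TCP window cannot
# fill a T1 satellite pipe.

T1_BPS = 1.544e6          # T1 line rate, bits per second
RTT_S = 0.580             # satellite round-trip delay, seconds

bdp_bytes = T1_BPS * RTT_S / 8          # bandwidth*delay product, bytes
print(f"pipe holds {bdp_bytes / 1024:.0f} KB")            # ~109 KB
print(f"64K window fills {64 * 1024 / bdp_bytes:.0%}")    # ~59% of it
```

So even with no losses, a sender limited to a 64K window idles for
roughly 40% of every round trip on this link.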
Future work is being started to increase the IP maximum datagram size
so that even better utilization of fast local area networks can be
achieved, by having the TCP/IP protocols send large packets over
mediums with very large MTUs.  This will hopefully eliminate the
network protocol as the bottleneck in data transfers as workstations
and workstation file system technology advance even further.
An area of concern when using the big window mechanism is the use of
machine resources.  When running over a satellite and a packet is
dropped such that 2Z (where Z is the round trip delay) worth of data
is unacknowledged, both ends of the connection need to be able to
buffer the data using machine mbufs (or whatever mechanism the
machine uses), usually a valuable and scarce commodity.  If the
window size is not chosen properly, some machines will crash when the
memory is exhausted, or will keep other parts of the system from
running.  Thus, setting the window to some fairly large arbitrary
number is not a good idea, especially on a general purpose machine
where many users may be logged on at any time.  What is currently
being engineered at NASA is the ability for certain programs to use
the setsockopt feature of 4.3bsd to ask for big windows, such that
the average user does not have access to the large windows.  This
limits the use of big windows to applications that absolutely need
them and protects a valuable system resource.
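The per-application policy described above might look something like
the following sketch, using the 4.3bsd-style socket buffer options.
The privilege check and function name are hypothetical, illustrating
the resource-protection policy rather than any code shipped at NASA.

```python
# Sketch: grant large send/receive buffers only to applications that
# are allowed to consume that much mbuf memory.  The "allowed" flag
# stands in for whatever privilege mechanism the site chooses.

import socket

BIG_WINDOW = 128 * 1024        # bytes; beyond the standard 64K limit

def request_big_window(sock, allowed):
    """Set big socket buffers via the 4.3bsd-style setsockopt call,
    but only for privileged applications."""
    if not allowed:
        return False           # ordinary users keep the default window
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BIG_WINDOW)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BIG_WINDOW)
    return True
```

Gating the buffer request this way keeps an arbitrary user process
from pinning a large fraction of the machine's mbuf pool.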
6. References
[1] Jacobson, V., "Congestion Avoidance and Control", SIGCOMM 88,
Stanford, Ca., August 1988.
[2] Jacobson, V., and R. Braden, "TCP Extensions for Long-Delay
Paths", LBL, USC/Information Sciences Institute, RFC 1072,
October 1988.
[3] Cheriton, D., "VMTP: Versatile Message Transaction Protocol", RFC
1045, Stanford University, February 1988.
[4] Clark, D., M. Lambert, and L. Zhang, "NETBLT: A Bulk Data
Transfer Protocol", RFC 998, MIT, March 1987.
[5] Fox, R., "Draft of Proposed Solution for High Delay Circuit File
Transfer", GE/NAS Internal Document, March 1988.
[6] Postel, J., "Transmission Control Protocol - DARPA Internet
Program Protocol Specification", RFC 793, DARPA, September 1981.
[7] Leiner, B., "Critical Issues in High Bandwidth Networking", RFC
    1077, DARPA, November 1988.
7. Appendix
Both options have been implemented and tested.  Contained in this
section is some performance data gathered to support the use of these
two options.  The satellite channel used was a 1.544 Mbit link with a
580 ms round trip delay.  All values are given in units of bytes.
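The no-error columns in the figures below can be sanity-checked
against the window-limited ceiling of one window per round trip; the
short script below is an illustrative check, not part of the measured
results.

```python
# A TCP sender in steady state moves at most one window per round
# trip, so the no-error throughput ceiling for each window size in
# Figure 1 is simply window / RTT.

RTT_S = 0.580                  # round-trip delay of the satellite link

def ceiling_kbytes_per_s(window_kbytes, rtt=RTT_S):
    return window_kbytes / rtt

print(f"{ceiling_kbytes_per_s(64):.0f}")   # ~110 KB/s ceiling for a 64K window
```

The measured 94K for the 64K window in Figure 1 sits reasonably close
to this ~110 KB/s ceiling; the shortfall is per-packet processing and
protocol overhead.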
TCP with Big Windows, No Naks:
|---------------transfer rates----------------------|
Window Size | no error | 10e-7 error rate | 10e-6 error rate |
-----------------------------------------------------------------
64K | 94K | 53K | 14K |
-----------------------------------------------------------------
72K | 106K | 51K | 15K |
-----------------------------------------------------------------
80K | 115K | 42K | 14K |
-----------------------------------------------------------------
92K | 115K | 43K | 14K |
-----------------------------------------------------------------
100K | 135K | 66K | 15K |
-----------------------------------------------------------------
112K | 126K | 53K | 17K |
-----------------------------------------------------------------
124K | 154K | 45K | 14K |
-----------------------------------------------------------------
136K | 160K | 66K | 15K |
-----------------------------------------------------------------
156K | 167K | 45K | 14K |
-----------------------------------------------------------------
Figure 1.
TCP with Big Windows, and Naks:
|---------------transfer rates----------------------|
Window Size | no error | 10e-7 error rate | 10e-6 error rate |
-----------------------------------------------------------------
64K | 95K | 83K | 43K |
-----------------------------------------------------------------
72K | 104K | 87K | 49K |
-----------------------------------------------------------------
80K | 117K | 96K | 62K |
-----------------------------------------------------------------
92K | 124K | 119K | 39K |
-----------------------------------------------------------------
100K | 140K | 124K | 35K |
-----------------------------------------------------------------
112K | 151K | 126K | 53K |
-----------------------------------------------------------------
124K | 160K | 140K | 36K |
-----------------------------------------------------------------
136K | 167K | 148K | 38K |
-----------------------------------------------------------------
156K | 167K | 160K | 38K |
-----------------------------------------------------------------
Figure 2.
With a 10e-6 error rate, many naks as well as data packets were
dropped, causing the wide swings in transfer rates.  Also, please
note that the machines used were SGI Iris 2500 Turbos running the 3.6
OS with the new TCP enhancements.  The Irises are slower than a Sun
3/260, but due to some source code restrictions the Iris was used.
Initial results on the Sun showed slightly higher performance and
less variance.
Author's Address
Richard Fox
950 Linden #208
Sunnyvale, Cal, 94086
EMail: rfox@tandem.com