RFC 1016        Source Quench Introduced Delay -- SQuID        July 1987

and a window size of 20 datagrams.  We assumed that we could send data over the LAN at a sustained average rate of 1 Mb/s, or about 18 times as fast as over the WAN.  When TCP sends a burst of 20 datagrams to node 1, they make it to node 2 in 81 msec.  The transit time from node 2 to node 3 is 73 msec; therefore, in 81 msec, only one datagram is forwarded to node 3.  Thus the 17th, 18th, 19th, and 20th datagrams are lost every time we send a whole window.  More are lost when the queue is not empty.

If a sequence of acks comes back in response to the sent data, the acks tend to return at the rate at which data can traverse the net, thus pacing new send data by opening the window at the rate at which the network can accept it.  However, as soon as one datagram is lost, all of the subsequent acks are deferred and batched until receipt of the missing data block, which acks all of the datagrams and opens the window to 20 again.  This causes the max queue size to be exceeded again.

If we assume a window smaller than the max queue size in the bottleneck node, any time we send a window's worth of data, it is done independently of the current size of the queue.  The larger the send window, the larger a percentage of the stressed queue we send.  If we send 50% of the stressed queue size any time that queue is more than 50% full, we threaten to overflow the queue.  Evenly spaced single-datagram bursts have the least chance of overflowing the queue, since they represent the minimum percentage of the max queue one may send.

When a big window opens up (that is, a missing datagram at the head of a 40-datagram send queue gets retransmitted and acked), the perceived round trip time for datagrams subsequently sent hits a minimum value and then goes up linearly.  The SRTT goes down and then back up in a nice smooth curve.  This is caused by the fact that IP will not add delay if the queue is empty and IP has not sent any datagrams to the destination for our introduced delay time.  But when many datagrams are added to the IP pre-staged send queue in a burst, all of them have the same send time as far as TCP is concerned.  IP will delay each datagram at the head of the queue by the introduced delay amount.  The first may be undelayed, as just described, but all of the others are delayed by their ordinal number on the queue times the introduced delay amount.

It seems as though, in a race between a TCP session which delays sending to IP and one which does not, the delayer will get better throughput because fewer datagrams are lost.  The send window may also be increased to keep the pipeline full.  If, however, the non-delayer uses windowing to reduce the chance of SQ datagram loss, its throughput may possibly be better, because no fair queuing algorithm is in place.

If gateways send SQs early, don't toss data until it's critical, and keep sending SQs until a low water mark is hit, effective throughput seems to go up.

At the startup of our tests throughput was very high, then dropped off quickly as the last of the window got clobbered.  Our model should have used a slow startup algorithm to minimize the startup shock.  However, the learning curve to estimate the proper value for D was probably quicker.
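To make the linearly increasing delay described above concrete, the following sketch (ours, not part of the original memo) computes the departure times that the introduced delay D gives a burst of datagrams handed to an idle IP send queue.  The values of D, the base round trip time, and the burst size are illustrative assumptions, not measurements from our tests.

   #include <stdio.h>

   /* Illustrative sketch only: how the introduced delay D spaces a
    * burst of N datagrams handed to IP at time 0, assuming the IP
    * send queue was empty and idle.  The first datagram goes out
    * undelayed; the datagram at position k then leaves about k*D
    * later, so the RTT that TCP perceives rises roughly linearly
    * with the datagram's position in the queue.  D, base_rtt, and
    * N are assumed values, not measurements from this memo. */
   int main(void)
   {
       double D = 73.0;           /* assumed introduced delay, msec   */
       double base_rtt = 250.0;   /* assumed network round trip, msec */
       int    N = 20;             /* assumed burst (window) size      */
       int    k;

       for (k = 0; k < N; k++) {
           double depart = k * D;               /* 0, D, 2D, ...      */
           printf("datagram %2d: leaves IP at %6.1f msec, "
                  "TCP-perceived RTT about %6.1f msec\n",
                  k + 1, depart, depart + base_rtt);
       }
       return 0;
   }

The last column is the linear ramp that the smoothed round trip time algorithm then has a chance to adapt to.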
A large part of the perceived RTT is due to the delay getting off the TCP2IP (TCP transitional) queue when we used large windows.  If IP were to apply some back-pressure to TCP in a real implementation, this could be significantly reduced.  Reducing the window would do this for us, at the expense of throughput.

After an SQ burst which tosses datagrams, the sender gets into a mode where TCP may only send one or two datagrams per RTT until the queued but not acked segments fall into sequence and are acked.  This assumes only the head of the retransmission queue is retransmitted on a timeout.  We can send one datagram upon timeout.  When the ack for the retransmission is received, the window opens, allowing sending a second.  We then wait for the next lost datagram to time out.

If we stop sending data for a while but allow D to be decreased, our algorithm causes the introduced delay to dwindle away.  We would thus go through a new startup learning curve and network oscillation sequence.

One thing not observed often was TCP timing out a segment before the source IP even sent the datagram the first time.  As discussed above, the first datagram on the queue of a large burst is delayed minimally and succeeding datagrams have linearly increasing delays.  The smoothed round trip delay algorithm has a chance to adapt to the perceived increasing round trip times.

Unstructured Thoughts and Comments

The further down a route a datagram traverses before being clobbered, the greater the waste of network resources.  SQs which do not destroy the datagram referred to are better than ones that do, if return path resources are available.

Any fix must be implementable piecemeal.  A fix cannot be installed in all or most nodes at one time.  The SQuID algorithm fulfills this requirement.  It could be implemented, installed in one location, and used effectively.

If it can be shown that by using the new algorithm effective throughput can be increased over implementations which do not implement it, that may well be effective impetus to get vendors to implement it.

Once a source host has an established average minimum inter-datagram delay to a destination (see Appendix A), this information should be stored across system restarts.  This value might be used each time data is sent to the given host as a minimum inter-datagram delay value.

Window closing algorithms increase the average inter-datagram delay and reduce the burst size, but do not affect the minimum inter-datagram spacing imposed by TCP.

Currently an IP gateway node can know if it is in a critical path because its queues stay high or keep building up.  Its optimum queue size is one, because it then always has something to do and the through-node delay is at a minimum.  It is very important that the gateway at the critical path not discourage data flow so much that its queue size drops to zero.  If the gateway tosses datagrams, this stops data flow for TCP for a while (as described in Observed Results above).  This argues for the gateway algorithm described above, which SQs but does not toss datagrams unless necessary.  Optimally we should try to have a queue size somewhat larger than 1 but less than, say, 50% of the max queue size.  Large queues lead to large delay.
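The gateway behavior argued for here and above (send SQs early, keep sending them until a low water mark is reached, and toss datagrams only when the queue is critically full) might be sketched as follows.  The threshold fractions, the hysteresis flag, and the names are our own illustrative choices; the memo only suggests keeping the queue above one datagram and below roughly 50% of its maximum.

   #include <stdbool.h>
   #include <stddef.h>
   #include <stdio.h>

   /* Sketch only: quench early, drop late.  All thresholds below are
    * illustrative assumptions, not values given in this memo. */
   #define MAX_QUEUE      40                /* assumed max queue size  */
   #define SQ_HIGH_WATER  (MAX_QUEUE / 2)   /* start sending SQs here  */
   #define SQ_LOW_WATER   (MAX_QUEUE / 4)   /* stop sending SQs here   */
   #define DROP_LEVEL     (MAX_QUEUE - 1)   /* toss only when critical */

   struct gw_state {
       size_t queue_len;   /* datagrams currently queued for the WAN  */
       bool   quenching;   /* true between high and low water marks   */
   };

   /* Handle one arriving datagram.  Returns true if it is accepted,
    * false if it is tossed.  *send_sq is set when a Source Quench
    * should be returned to the source. */
   static bool on_arrival(struct gw_state *gw, bool *send_sq)
   {
       *send_sq = false;

       if (gw->queue_len >= DROP_LEVEL) {    /* critical: queue full   */
           *send_sq = true;
           return false;                     /* toss the datagram      */
       }

       if (gw->queue_len >= SQ_HIGH_WATER)
           gw->quenching = true;             /* start quenching early  */
       else if (gw->queue_len <= SQ_LOW_WATER)
           gw->quenching = false;            /* low water mark reached */

       if (gw->quenching)
           *send_sq = true;                  /* SQ without tossing data */

       gw->queue_len++;                      /* enqueue for forwarding */
       return true;
   }

   int main(void)
   {
       struct gw_state gw = { 0, false };
       int i;

       /* Feed an over-long burst and let the WAN drain one datagram
        * for every three that arrive, just to exercise the policy. */
       for (i = 0; i < 60; i++) {
           bool sq;
           bool kept = on_arrival(&gw, &sq);
           printf("arrival %2d: queue=%2u %s%s\n", i + 1,
                  (unsigned) gw.queue_len,
                  kept ? "accepted" : "tossed  ",
                  sq ? ", SQ sent" : "");
           if (i % 3 == 2 && gw.queue_len > 0)
               gw.queue_len--;
       }
       return 0;
   }

The hysteresis between the two marks is what keeps the gateway from discouraging traffic so much that its queue drains to zero.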
TCP's SRTT is made artificially large by introducing delay at IP, but the perceived round trip time variance is probably smaller, allowing a smaller Beta value for the timeout value.

So that a decrease timer is not needed for the "D" decrease function, upon the next sent datagram to a delayed destination just decrease the delay by the amount of time since we last did this divided by the decrease timer interval.  An alternate algorithm would be to decrease it by only one decrease unit amount if more than the timer interval has gone by.  This eliminates the problem caused by the delay, "D", dwindling away if we stop sending for a while.  The longer we send using this "D", the more likely it is that it is too large a delay and the more we should decrease it.

It is better for the network and the sender for our introduced delay to be a little on the high side.  It minimizes the chances of getting a datagram clobbered by sending it into a congested gateway.  A lost datagram scenario described above showed that one lost datagram can reduce our effective delay by one to two orders of magnitude temporarily.  Also, if the delay is a little high, the net is less stressed and the queues get smaller, reducing through-network delay.

The RTT experienced at a given time versus the minimum RTT possible for the given route does give a good measure of congestion.  If we ever get congestion control working, RTT may have little to do with the amount of congestion.  Effective throughput, when compared with the possible throughput (or some other measure), is the only real measure of congestion.

Slow startup of TCP is a good thing and should be encouraged as an additional mechanism for alleviating network overload.

The network dynamics tend to bunch datagrams.  If we properly space data, instead of bunching it the way windowing techniques do to control overflow of queues, then greater throughput is accomplished, because the absolute rate at which we can send is pacing our sending, not the RTT.  We eliminate "stochastic bunching" [6].

The longer the RTT, the more network resources the data takes to traverse the net.

Should "fair queuing" say that a longer-route data transfer should get less bandwidth than a shorter one (since it consumes more of the net)?  Being fair locally on each node may be unfair overall to datagrams traversing many nodes.

If we solve congestion problems today, we will start loading up the net with more data tomorrow.  When this causes congestion in a year, will that type of congestion be harder to solve than today's, or is it not our problem?  John Nagle suggests: "In a large net, we may well try to force congestion out to the fringes and keep the interior of the net uncongested by controlling entry to the net.  The IMP-based systems work that way, or at least used to.  This has the effect of concentrating congestion at the entrance to the long-haul system.  That's where we want it; the Source Quench / congestion window / fair queuing set of strategies are able to handle congestion at the LAN to WAN bottleneck" [7].  Our algorithm should try to push the network congestion out to the extremities and keep the interior network congestion free.
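Returning to the two timer-free ways described above of decreasing "D", the following sketch (ours, not part of the original memo) writes both out.  The decrease interval, the size of one decrease unit, and the interpretation that the elapsed-time ratio scales that unit are all our own assumptions.

   #include <stdio.h>

   /* Sketch only.  Units are milliseconds; the interval and unit
    * values, and the assumption that the elapsed/interval ratio is
    * applied to one decrease unit, are ours, not the memo's. */
   #define DECREASE_INTERVAL_MS  5000.0  /* nominal decrease timer period */
   #define DECREASE_UNIT_MS        10.0  /* one "decrease unit" of D      */

   /* Variant 1: on the next send to a delayed destination, scale the
    * decrease by the time elapsed since the last decrease.  This
    * reproduces the effect of the periodic decrease timer without the
    * timer itself, but a long idle period still makes D dwindle away. */
   static double decrease_d_proportional(double d, double elapsed_ms)
   {
       d -= (elapsed_ms / DECREASE_INTERVAL_MS) * DECREASE_UNIT_MS;
       return d > 0.0 ? d : 0.0;
   }

   /* Variant 2: decrease by at most one unit, and only if at least a
    * full interval has gone by.  This keeps a pause in sending from
    * making D dwindle away. */
   static double decrease_d_one_unit(double d, double elapsed_ms)
   {
       if (elapsed_ms >= DECREASE_INTERVAL_MS)
           d -= DECREASE_UNIT_MS;
       return d > 0.0 ? d : 0.0;
   }

   int main(void)
   {
       double d = 200.0;        /* assumed current introduced delay    */
       double idle = 60000.0;   /* assume we stopped sending for 60 s  */

       printf("proportional: D %.1f -> %.1f msec\n",
              d, decrease_d_proportional(d, idle));
       printf("one unit:     D %.1f -> %.1f msec\n",
              d, decrease_d_one_unit(d, idle));
       return 0;
   }

With these assumed values, a minute of silence cuts D by 120 msec under the proportional form but by only 10 msec under the one-unit form, which is the dwindling-away difference noted above.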
Use of the algorithm is aesthetically appealing because the data is sitting in our local queue instead of consuming resources inside the net.  We give data to the network only when it is ready to accept it.

An averaged minimum inter-datagram arrival value will give a measure of the network bottleneck speed at the receiver.  If the receiver does not defer or batch together acks, the same would be learned from the inter-datagram arrival time of the acks.  A problem is that IP doesn't have knowledge of the datagram contents.  However, IP does know from which host a datagram comes.

If SQuID limits the size of its pre-net buffering properly (causes back-pressure to TCP), then artificially high RTT measurements would not occur.

TCP might, in the future, get a way to query IP for the current introduced delay, D, for a given destination and, if the value is excessive, abort or not start a session.

With the new algorithm TCP could have an arbitrarily large window to send into without fear of overrunning queue sizes in intermediate nodes (not that any TCP ever considered having this fear before).  Thus it could have a window size which would allow it to always be sending, keeping the pipe full and seldom getting into the stop-and-wait mode of sending.  This presupposes that the local IP is able to cause some sort of back-pressure so that the local IP's queues are not overrun.  TCP would still be operating in the burst mode of sending, but the local IP would be sending a datagram for the TCP as often as the network could accept it, keeping the data flow continuous, though potentially slow.

Experience implementing protocols suggests avoiding timers in protocols whenever possible.  IP, as currently defined, does not use timers.  The SQuID algorithm uses two at the IP level.  A way to eliminate the introduced delay decrease timer is to decrease the D value when we check the send queue for data to send.  We would
