
rfc813.txt

network delivers the data to the recipient sufficiently slowly that it
can process the data fast enough to keep the buffer from overflowing.
If the receiver is faster than the sender, one could, with luck, permit
an infinitely optimistic window, in which the sender is simply permitted
to send full-speed.  If the sender is faster than the receiver, however,
and the window is too optimistic, then some segments will cause a buffer
overflow, and will be discarded.  Therefore, the correct strategy to
implement an optimistic window is to increase the window size until
segments start to be lost.  This only works if it is possible to detect
that the segment has been lost.  In some cases, it is easy to do,
because the segment is partially processed inside the receiving host
before it is thrown away.  In other cases, overflows may actually cause
the network interface to be clogged, which will cause the segments to be
lost elsewhere in the net.  It is inadvisable to attempt an optimistic
window strategy unless one is certain that the algorithm can detect the
resulting lost segments.  However, the increase in throughput which is
possible from optimistic windows is quite substantial.  Any systems with
small buffer space should seriously consider the merit of optimistic
windows.

The selection of an appropriate window algorithm is actually more
complicated than even the above discussion suggests.  The following
considerations are not presented with the intention that they be
incorporated in current implementations of TCP, but as background for
the sophisticated designer who is attempting to understand how his TCP
will respond to a variety of networks, with different speed and delay
characteristics.  The particular pattern of windows and acknowledgements
sent from receiver to sender influences two characteristics of the data
being sent.  First, they control the average data rate.  Clearly, the
average rate of the sender cannot exceed the average rate of the
receiver, or long-term buffer overflow will occur.  Second, they
influence the burstiness of the data coming from the sender.  Burstiness
has both advantages and disadvantages.  The advantage of burstiness is
that it reduces the CPU processing necessary to send the data.  This
follows from the observed fact, especially on large machines, that most
of the cost of sending a segment is not the TCP or IP processing, but
the scheduling overhead of getting started.

On the other hand, the disadvantage of burstiness is that it may cause
buffers to overflow, either in the eventual recipient, which was
discussed above, or in an intermediate gateway, a problem ignored in
this paper.  The algorithms described above attempt to strike a balance
between excessive burstiness, which in the extreme cases can cause
delays because a burst is not requested soon enough, and excessive
fragmentation of the data stream into small segments, which we
identified as Silly Window Syndrome.

Under conditions of extreme delay in the network, none of the algorithms
described above will achieve adequate throughput.  Conservative window
algorithms have a predictable throughput limit, which is one windowful
per roundtrip delay.  Attempts to solve this by optimistic window
strategies may cause buffer overflows due to the bursty nature of the
arriving data.
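As a concrete illustration of that ceiling (an editorial sketch, not
part of the original memo), the short program below computes the
best-case rate of a conservative window, which cannot exceed one
windowful per roundtrip delay; the window sizes and delays are made-up
examples.

    /* Best-case throughput of a conservative window: at most one
     * windowful per roundtrip delay.  The values are illustrative. */
    #include <stdio.h>

    int main(void)
    {
        struct { long window_bytes; double rtt_seconds; } cases[] = {
            { 2048,  0.10 },   /* small window, short terrestrial delay */
            { 2048,  1.00 },   /* same window, satellite-like delay     */
            { 16384, 1.00 },   /* larger window, same long delay        */
        };

        for (int i = 0; i < 3; i++) {
            double limit = cases[i].window_bytes / cases[i].rtt_seconds;
            printf("window %6ld bytes, rtt %.2f s -> at most %8.0f bytes/s\n",
                   cases[i].window_bytes, cases[i].rtt_seconds, limit);
        }
        return 0;
    }

No matter how fast the underlying network is, a 2048 byte window over a
one second roundtrip path cannot move more than 2048 bytes per second.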
A very sophisticated way to solve this is for the receiver, having
measured by some means the roundtrip delay and intersegment arrival rate
of the actual connection, to open his window, not in one optimistic
increment of gigantic proportion, but in a number of smaller optimistic
increments, which have been carefully spaced using a timer so that the
resulting smaller bursts which arrive are each sufficiently small to fit
into the existing buffers.  One could visualize this as a number of
requests flowing backwards through the net which trigger in return a
number of bursts which flow back spaced evenly from the sender to the
receiver.  The overall result is that the receiver uses the window
mechanism to control the burstiness of the arrivals, and the average
rate.

To my knowledge, no such strategy has been implemented in any TCP.
First, we do not normally have delays high enough to require this kind
of treatment.  Second, the strategy described above is probably not
stable unless it is very carefully balanced.  Just as buses on a single
bus route tend to bunch up, bursts which start out equally spaced could
well end up piling into each other, and forming the single large burst
which the receiver was hoping to avoid.  It is important to understand
this extreme case, however, in order to understand the limits beyond
which TCP, as normally implemented, with either conservative or simple
optimistic windows, can be expected to deliver throughput which is a
reasonable percentage of the actual network capacity.

7.  Conclusions

This paper describes three simple algorithms for performance enhancement
in TCP, one at the sender end and two at the receiver.  The sender
algorithm is to refrain from sending if the useable window is smaller
than 25 percent of the offered window.  The receiver algorithms are
first, to artificially reduce the offered window when processing new
data if the resulting reduction does not represent more than some
fraction, say 50 percent, of the actual space available, and second, to
refrain from sending an acknowledgment at all if two simple conditions
hold.

Either of these algorithms will prevent the worst aspects of Silly
Window Syndrome, and when these algorithms are used together, they will
produce substantial improvement in CPU utilization, by eliminating the
process of excess acknowledgements.
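The sender-side rule summarized above is simple enough to state as code.
The sketch below is an editorial illustration, not taken from any
particular implementation: the structure and field names are assumed,
and a real sender checks further conditions before transmitting.

    #include <stdint.h>
    #include <stdio.h>

    /* State the sender needs for the test; the names are illustrative. */
    struct send_state {
        uint32_t offered_window;   /* window advertised by the receiver  */
        uint32_t bytes_in_flight;  /* data sent but not yet acknowledged */
    };

    /* Useable window: offered window minus data already outstanding. */
    static uint32_t useable_window(const struct send_state *s)
    {
        return s->offered_window > s->bytes_in_flight
                   ? s->offered_window - s->bytes_in_flight
                   : 0;
    }

    /* Refrain from sending while the useable window is smaller than
     * 25 percent of the offered window. */
    static int ok_to_send(const struct send_state *s)
    {
        return 4 * useable_window(s) >= s->offered_window;
    }

    int main(void)
    {
        struct send_state s = { .offered_window = 8192,
                                .bytes_in_flight = 7000 };

        printf("useable %u of %u -> %s\n",
               (unsigned)useable_window(&s), (unsigned)s.offered_window,
               ok_to_send(&s) ? "send" : "wait");
        return 0;
    }

With 7000 of 8192 bytes outstanding, the useable window is well under 25
percent of the offered window, so the sketch waits rather than emitting
a small segment.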
Preliminary experiments with these algorithms suggest that they work,
and work very well.  Both the sender and receiver algorithms have been
shown to eliminate SWS, even when talking to fairly silly algorithms at
the other end.  The Multics mailer, in particular, had suffered
substantial attacks of SWS while sending large mail to a number of
hosts.  We believe that implementation of the sender side algorithm has
eliminated every known case of SWS detected in our mailer.
Implementation of the receiver side algorithm produced substantial
improvements of CPU time when Multics was the sending system.  Multics
is a typical large operating system, with scheduling costs which are
large compared to the actual processing time for protocol handlers.
Tests were done sending from Multics to a host which implemented the SWS
suppression algorithm, and which could either refrain or not from
sending acknowledgements on each segment.  As predicted, suppressing the
return acknowledgements did not influence the throughput for large data
transfer at all, since the throttling effect was elsewhere.  However,
the CPU time required to process the data at the Multics end was cut by
a factor of four.  (In this experiment, the bursts of data which were
being sent were approximately eight segments.  Thus, the number of
acknowledgements in the two experiments differed by a factor of eight.)

An important consideration in evaluating these algorithms is that they
must not cause the protocol implementations to deadlock.  All of the
recommendations in this document have the characteristic that they
suggest one refrain from doing something even though the protocol
specification permits one to do it.  The possibility exists that if one
refrains from doing something now one may never get to do it later, and
both ends will halt, even though it would appear superficially that the
transaction can continue.

Formally, the idea that things continue to work is referred to as
"liveness".  One of the defects of ad hoc solutions to performance
problems is the possibility that two different approaches will interact
to prevent liveness.  It is believed that the algorithms described in
this paper are always live, and that is one of the reasons why there is
a strong advantage in uniform use of this particular proposal, except in
cases where it is explicitly demonstrated not to work.

The argument for liveness in these solutions proceeds as follows.
First, the sender algorithm can only be stopped by one thing, a refusal
of the receiver to acknowledge sent data.  As long as the receiver
continues to acknowledge data, the ratio of useable window to offered
window will approach one, and eventually the sender must continue to
send.  However, notice that the receiver algorithm we have advocated
involves refraining from acknowledging.  Therefore, we certainly do have
a situation where improper operation of this algorithm can prevent
liveness.

What we must show is that the receiver of the data, if it chooses to
refrain from acknowledging, will do so only for a short time, and not
forever.  The design of the algorithm described above was intended to
achieve precisely this goal:  whenever the receiver of data refrained
from sending an acknowledgement it was required to set a timer.  The
only event that was permitted to clear that timer was the receipt of
another segment, which essentially reset the timer, and started it going
again.  Thus, an acknowledgement will be sent as soon as no data has
been received for the timer interval.  This has precisely the effect
desired:  if the data flow appears to be disrupted for any reason, the
receiver responds by sending an up-to-date acknowledgement.  In fact,
the receiver algorithm is designed to be more robust than this, for
transmission of an acknowledgment is triggered by two events, either a
cessation of data or a reduction in the amount of offered window to 50
percent of the actual value.  This is the condition which will normally
trigger the transmission of this acknowledgement.
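To make the receiver's trigger concrete, here is an editorial sketch;
the names and the timer interface are assumptions, not taken from a
specific implementation.  Every arriving segment either provokes an
immediate acknowledgement, because the offered window has shrunk to half
of the actual space, or simply restarts the acknowledgement timer.

    #include <stdint.h>
    #include <stdio.h>

    /* Stub hooks so the sketch is self-contained; a real TCP would tie
     * these to the host's timer machinery and output path. */
    static void start_ack_timer(void) { printf("ack timer (re)started\n"); }
    static void send_ack(void)        { printf("acknowledgement sent\n");  }

    /* Receiver state; the field names are assumed for the example. */
    struct recv_state {
        uint32_t actual_space;    /* buffer space really available         */
        uint32_t offered_window;  /* window currently advertised to sender */
    };

    /* Called for every arriving data segment. */
    static void on_segment_arrival(struct recv_state *r)
    {
        /* If the offered window has fallen to 50 percent or less of the
         * space actually available, acknowledge at once and re-open it. */
        if (2 * r->offered_window <= r->actual_space) {
            r->offered_window = r->actual_space;
            send_ack();
            return;
        }

        /* Otherwise postpone the acknowledgement: (re)start the timer,
         * so an acknowledgement goes out once data stops arriving. */
        start_ack_timer();
    }

    int main(void)
    {
        struct recv_state r = { .actual_space = 8192,
                                .offered_window = 3000 };

        on_segment_arrival(&r);   /* window at half or less -> ack now   */
        on_segment_arrival(&r);   /* window re-opened -> just set timer  */
        return 0;
    }

The timer is what guarantees liveness here: an acknowledgement can be
postponed, but never withheld indefinitely.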
APPENDIX A

Dynamic Calculation of Acknowledgement Delay

The text suggested that when setting a timer to postpone the sending of
an acknowledgement, a fixed interval of 200 to 300 milliseconds would
work properly in practice.  This has not been verified over a wide
variety of network delays, and clearly if there is a very slow net which
stretches out the intersegment arrival time, a fixed interval will fail.
In a sophisticated TCP, which is expected to adjust dynamically (rather
than manually) to changing network conditions, it would be appropriate
to measure this interval and respond dynamically.

The following algorithm, which has been relegated to an appendix because
it has not been tested, seems sensible.  Whenever a segment arrives
which does not have the push bit on in it, start a timer, which runs
until the next segment arrives.  Average these interarrival intervals,
using an exponential decay smoothing function tuned to take into account
perhaps the last ten or twenty segments that have come in.
Occasionally, there will be a long interarrival period, even for a
segment which does not terminate a piece of data being pushed, perhaps
because a window has gone to zero or some glitch in the sender or the
network has held up the data.  Therefore, examine each interarrival
interval, and discard it from the smoothing algorithm if it exceeds the
current estimate by some amount, perhaps a ratio of two or four times.
By rejecting the larger intersegment arrival intervals, one should
obtain a smoothed estimate of the interarrival of segments inside a
burst.  The number need not be exact, since the timer which triggers
acknowledgement can add a fairly generous fudge factor to this without
causing trouble with the sender's estimate of the retransmission
interval, so long as the fudge factor is constant.
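Although the algorithm above is untested, an editorial sketch of it
might look like the following.  The smoothing gain corresponds to
averaging over roughly the last sixteen segments, the rejection ratio of
four is one of the values mentioned in the text, and all of the names
are illustrative.

    #include <stdio.h>

    /* Smoothed estimate of the interarrival time of non-pushed
     * segments, with obvious stalls discarded. */
    struct arrival_estimator {
        double estimate;      /* smoothed interarrival time, seconds     */
        double gain;          /* smoothing weight, ~ last 10-20 segments */
        double reject_ratio;  /* drop samples above ratio times estimate */
    };

    /* Feed one measured interarrival interval into the estimator. */
    static void record_interval(struct arrival_estimator *e, double sample)
    {
        if (e->estimate > 0.0 && sample > e->reject_ratio * e->estimate)
            return;                /* window closed, glitch, etc.: ignore */
        if (e->estimate <= 0.0)
            e->estimate = sample;  /* first usable sample seeds the estimate */
        else
            e->estimate += e->gain * (sample - e->estimate);
    }

    /* The acknowledgement delay can then be the estimate plus a
     * constant, fairly generous fudge factor. */
    static double ack_delay(const struct arrival_estimator *e, double fudge)
    {
        return e->estimate + fudge;
    }

    int main(void)
    {
        struct arrival_estimator e = { 0.0, 1.0 / 16.0, 4.0 };
        double samples[] = { 0.05, 0.06, 0.05, 2.00 /* stall */, 0.05, 0.07 };
        int n = sizeof samples / sizeof samples[0];

        for (int i = 0; i < n; i++)
            record_interval(&e, samples[i]);

        printf("smoothed interarrival %.3f s, ack delay %.3f s\n",
               e.estimate, ack_delay(&e, 0.1));
        return 0;
    }

The resulting delay, estimate plus a constant fudge factor, would take
the place of the fixed 200 to 300 millisecond interval suggested in the
body of the paper.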
