at once, just as in the no-control case.  The user will see no
visible delay.  Thus, our scheme performs as well as the no-control
scheme and provides better responsiveness than the timer scheme.

The second case to examine is the same Telnet test but over a
long-haul link with a 5-second round trip time.  Without any
mechanism to prevent small-packet congestion, 25 new packets would be
sent in 5 seconds.*  Overhead here is 4000%.  With the classic timer
scheme, and the same limit of 2 packets per second, there would still
be 10 packets outstanding and contributing to congestion.  Round-trip
time will not be improved by sending many packets, of course; in
general it will be worse since the packets will contend for line
time.  Overhead now drops to 1500%.  With our scheme, however, the
first character from the user would find an idle TCP connection and
would be sent immediately.  The next 24 characters, arriving from the
user at 200ms intervals, would be held pending a message from the
distant host.  When an ACK arrived for the first packet at the end of
5 seconds, a single packet with the 24 queued characters would be
sent.  Our scheme thus results in an overhead reduction to 320% with
no penalty in response time.  Response time will usually be improved
with our scheme because packet overhead is reduced, here by a factor
of 4.7 over the classic timer scheme.  Congestion will be reduced by
this factor and round-trip delay will decrease sharply.  For this
case, our scheme has a striking advantage over either of the other
approaches.

________
  * This problem is not seen in the pure ARPANET case because the
    IMPs will block the host when the count of packets outstanding
    becomes excessive, but in the case where a pure datagram local
    net (such as an Ethernet) or a pure datagram gateway (such as an
    ARPANET / MILNET gateway) is involved, it is possible to have
    large numbers of tiny packets outstanding.
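
The send discipline analyzed above can be captured in a few lines of
code.  The fragment below is a rough sketch in C, not code taken from
any implementation discussed in this memo; the structure layout,
buffer sizes, and function names are assumptions made purely for
illustration.  New data from the user goes out at once when the
connection is idle (or when a full segment has accumulated), and is
otherwise queued until the data already in flight has been
acknowledged, at which point everything queued goes out as a single
packet.

    /* Rough illustrative sketch of the send decision described above.
     * Not code from any implementation in this memo; window handling
     * and error checks are omitted for brevity. */

    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    #define MSS 512                    /* assumed maximum segment size */

    struct conn {
        char   queue[4096];            /* user data not yet transmitted */
        size_t queued;                 /* bytes waiting in the queue    */
        size_t unacked;                /* bytes sent but not yet ACKed  */
    };

    /* Take one packet's worth of queued data and put it on the wire. */
    static size_t send_one_packet(struct conn *c)
    {
        size_t n = c->queued > MSS ? MSS : c->queued;
        c->unacked += n;
        memmove(c->queue, c->queue + n, c->queued - n);
        c->queued -= n;
        return n;                      /* bytes handed to the network */
    }

    /* Called for each user write: send at once only if the connection
     * is idle or a full segment has accumulated; otherwise just queue. */
    static size_t tcp_user_write(struct conn *c, const char *buf, size_t len)
    {
        memcpy(c->queue + c->queued, buf, len);
        c->queued += len;
        if (c->unacked == 0 || c->queued >= MSS)
            return send_one_packet(c);
        return 0;                      /* held pending an ACK */
    }

    /* Called when an ACK arrives: everything queued during the round
     * trip is released as a single packet. */
    static size_t tcp_ack_received(struct conn *c, size_t acked)
    {
        c->unacked -= acked;
        if (c->unacked == 0 && c->queued > 0)
            return send_one_packet(c);
        return 0;
    }

    int main(void)
    {
        struct conn c = {{0}, 0, 0};
        printf("first keystroke sends %zu byte(s)\n",
               tcp_user_write(&c, "a", 1));
        for (int i = 0; i < 24; i++)   /* next 24 keystrokes just queue */
            tcp_user_write(&c, "b", 1);
        printf("ACK releases %zu queued byte(s)\n",
               tcp_ack_received(&c, 1));
        return 0;
    }

Run against the 5-second Telnet example, the first keystroke is sent
immediately and the next 24 are released together when the ACK
arrives, which is where the 320% overhead figure above comes from.
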
We use our scheme for all TCP connections, not just Telnet
connections.  Let us see what happens for a file transfer data
connection using our technique.  The two extreme cases will again be
considered.

As before, we first consider the Ethernet case.  The user is now
writing data to TCP in 512 byte blocks as fast as TCP will accept
them.  The user's first write to TCP will start things going; our
first datagram will be 512+40 bytes or 552 bytes long.  The user's
second write to TCP will not cause a send but will cause the block to
be buffered.  Assume that the user fills up TCP's outgoing buffer
area before the first ACK comes back.  Then when the ACK comes in,
all queued data up to the window size will be sent.  From then on,
the window will be kept full, as each ACK initiates a sending cycle
and queued data is sent out.  Thus, after a one round-trip time
initial period when only one block is sent, our scheme settles down
into a maximum-throughput condition.  The delay in startup is only
50ms on the Ethernet, so the startup transient is insignificant.  All
three schemes provide equivalent performance for this case.

Finally, let us look at a file transfer over the 5-second round trip
time connection.  Again, only one packet will be sent until the first
ACK comes back; the window will then be filled and kept full.  Since
the round-trip time is 5 seconds, only 512 bytes of data are
transmitted in the first 5 seconds.  Assuming a 2K window, once the
first ACK comes in, 2K of data will be sent and a steady rate of 2K
per 5 seconds will be maintained thereafter.  Only for this case is
our scheme inferior to the timer scheme, and the difference is only
in the startup transient; steady-state throughput is identical.  The
naive scheme and the timer scheme would both take 250 seconds to
transmit a 100K byte file under the above conditions and our scheme
would take 254 seconds, a difference of 1.6%.

Thus, for all cases examined, our scheme provides at least 98% of the
performance of both other schemes, and provides a dramatic
improvement in Telnet performance over paths with long round trip
times.  We use our scheme in the Ford Aerospace Software Engineering
Network, and are able to run screen editors over Ethernet and talk to
distant TOPS-20 hosts with improved performance in both cases.

                  Congestion control with ICMP

Having solved the small-packet congestion problem and with it the
problem of excessive small-packet congestion within our own network,
we turned our attention to the problem of general congestion control.
Since our own network is pure datagram with no node-to-node flow
control, the only mechanism available to us under the IP standard was
the ICMP Source Quench message.  With careful handling, we find this
adequate to prevent serious congestion problems.  We do find it
necessary to be careful about the behavior of our hosts and switching
nodes regarding Source Quench messages.

               When to send an ICMP Source Quench

The present ICMP standard* specifies that an ICMP Source Quench
message should be sent whenever a packet is dropped, and additionally
may be sent when a gateway finds itself becoming short of resources.
There is some ambiguity here but clearly it is a violation of the
standard to drop a packet without sending an ICMP message.

________
  * ARPANET RFC 792 is the present standard.  We are advised by the
    Defense Communications Agency that the description of ICMP in
    MIL-STD-1777 is incomplete and will be deleted from future
    revision of that standard.

Our basic assumption is that packets ought not to be dropped during
normal network operation.  We therefore want to throttle senders back
before they overload switching nodes and gateways.  All our switching
nodes send ICMP Source Quench messages well before buffer space is
exhausted; they do not wait until it is necessary to drop a message
before sending an ICMP Source Quench.  As demonstrated in our
analysis of the small-packet problem, merely providing large amounts
of buffering is not a solution.  In general, our experience is that
Source Quench should be sent when about half the buffering space is
exhausted; this is not based on extensive experimentation but appears
to be a reasonable engineering decision.  One could argue for an
adaptive scheme that adjusted the quench generation threshold based
on recent experience; we have not found this necessary as yet.

There exist other gateway implementations that generate Source
Quenches only after more than one packet has been discarded.  We
consider this approach undesirable since any system for controlling
congestion based on the discarding of packets is wasteful of
bandwidth and may be susceptible to congestion collapse under heavy
load.  Our understanding is that the decision to generate Source
Quenches with great reluctance stems from a fear that acknowledge
traffic will be quenched and that this will result in connection
failure.  As will be shown below, appropriate handling of Source
Quench in host implementations eliminates this possibility.
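
The quench-generation policy just described is easy to state in code.
The sketch below is illustrative only and is not the Ford Aerospace
switching-node implementation; the buffer pool size, structures, and
function names are assumptions.  A Source Quench is sent to the
source of an arriving packet once roughly half of the buffer pool is
in use, and a quench always accompanies the (rare) forced drop when
the pool is actually exhausted.

    /* Rough illustrative sketch of the half-full quench threshold
     * described above.  Pool size, structures, and names are assumed. */

    #include <stdbool.h>
    #include <stdio.h>

    #define POOL_SIZE        64                 /* total packet buffers */
    #define QUENCH_THRESHOLD (POOL_SIZE / 2)    /* "about half" in use  */

    struct packet { unsigned long src; };       /* originating host     */

    static int buffers_in_use;    /* a real node decrements this as
                                     queued packets are forwarded      */

    static void send_source_quench(unsigned long src)
    {
        printf("ICMP Source Quench -> host %#lx\n", src);
    }

    /* Returns true if the packet was accepted for forwarding. */
    static bool enqueue_for_forwarding(const struct packet *p)
    {
        if (buffers_in_use >= POOL_SIZE) {
            /* Out of buffers: the standard requires a quench with
             * every drop. */
            send_source_quench(p->src);
            return false;
        }
        if (buffers_in_use >= QUENCH_THRESHOLD) {
            /* Becoming short of resources: throttle the sender well
             * before any drop is necessary. */
            send_source_quench(p->src);
        }
        buffers_in_use++;
        return true;
    }

    int main(void)
    {
        struct packet p = { 0x0a000001UL };
        for (int i = 0; i < POOL_SIZE + 2; i++)
            enqueue_for_forwarding(&p);
        return 0;
    }
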
        What to do when an ICMP Source Quench is received

We inform TCP or any other protocol at that layer when ICMP receives
a Source Quench.  The basic action of our TCP implementations is to
reduce the amount of data outstanding on connections to the host
mentioned in the Source Quench.  This control is applied by causing
the sending TCP to behave as if the distant host's window size has
been reduced.  Our first implementation was simplistic but effective;
once a Source Quench has been received our TCP behaves as if the
window size is zero whenever the window isn't empty.  This behavior
continues until some number (at present 10) of ACKs have been
received, at which time TCP returns to normal operation.*  David
Mills of Linkabit Corporation has since implemented a similar but
more elaborate throttle on the count of outstanding packets in his
DCN systems.  The additional sophistication seems to produce a modest
gain in throughput, but we have not made formal tests.  Both
implementations effectively prevent congestion collapse in switching
nodes.

________
  * This follows the control engineering dictum "Never bother with
    proportional control unless bang-bang doesn't work".

Source Quench thus has the effect of limiting the connection to a
limited number (perhaps one) of outstanding messages.  Thus,
communication can continue but at a reduced rate, which is exactly
the effect desired.

This scheme has the important property that Source Quench doesn't
inhibit the sending of acknowledges or retransmissions.
Implementations of Source Quench entirely within the IP layer are
usually unsuccessful because IP lacks enough information to throttle
a connection properly.  Holding back acknowledges tends to produce
retransmissions and thus unnecessary traffic.  Holding back
retransmissions may cause loss of a connection by a retransmission
timeout.  Our scheme will keep connections alive under severe
overload but at reduced bandwidth per connection.

Other protocols at the same layer as TCP should also be responsive to
Source Quench.  In each case we would suggest that new traffic should
be throttled but acknowledges should be treated normally.  The only
serious problem comes from the User Datagram Protocol, not normally a
major traffic generator.  We have not implemented any throttling in
these protocols as yet; all are passed Source Quench messages by ICMP
but ignore them.
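
The window-clamping response described at the start of this section
can be sketched roughly as follows.  This is not the actual Ford
Aerospace or DCN code; the control-block fields, the constant, and
the function names are assumptions made for illustration.  Only the
window used for transmitting new data is clamped; acknowledgements
and retransmissions are never gated by it, which is what keeps
connections alive while throttled.

    /* Rough illustrative sketch of the host-side Source Quench
     * throttle described above.  Names and fields are assumed. */

    #include <stdio.h>
    #include <stdint.h>

    #define QUENCH_ACK_COUNT 10      /* ACKs to count before resuming */

    struct tcb {
        uint32_t offered_window;     /* window advertised by the peer */
        int      quench_acks_left;   /* > 0 while throttled           */
    };

    /* Called by ICMP when a Source Quench names this connection's peer. */
    static void tcp_source_quench(struct tcb *t)
    {
        t->quench_acks_left = QUENCH_ACK_COUNT;
    }

    /* Called for every ACK received; counts the throttle down. */
    static void tcp_ack_received(struct tcb *t)
    {
        if (t->quench_acks_left > 0)
            t->quench_acks_left--;
    }

    /* Window available for NEW data only.  While throttled, behave as
     * if the window were zero whenever data is already outstanding, so
     * the connection is held to roughly one message in flight.  ACKs
     * and retransmissions are never gated by this value. */
    static uint32_t tcp_usable_window(const struct tcb *t,
                                      uint32_t bytes_outstanding)
    {
        if (t->quench_acks_left > 0 && bytes_outstanding > 0)
            return 0;
        return t->offered_window;
    }

    int main(void)
    {
        struct tcb t = { 2048, 0 };          /* 2K window, not quenched */
        tcp_source_quench(&t);
        printf("usable window while quenched: %u\n",
               (unsigned)tcp_usable_window(&t, 512));
        for (int i = 0; i < QUENCH_ACK_COUNT; i++)
            tcp_ack_received(&t);
        printf("usable window after %d ACKs:  %u\n",
               QUENCH_ACK_COUNT, (unsigned)tcp_usable_window(&t, 512));
        return 0;
    }

After the tenth ACK the peer's offered window is honored again and
the connection returns to normal operation.
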
                    Self-defense for gateways

As we have shown, gateways are vulnerable to host mismanagement of
congestion.  Host misbehavior by excessive traffic generation can
prevent not only the host's own traffic from getting through, but can
interfere with other unrelated traffic.  The problem can be dealt
with at the host level but since one malfunctioning host can
interfere with others, future gateways should be capable of defending
themselves against such behavior by obnoxious or malicious hosts.  We
offer some basic self-defense techniques.

On one occasion in late 1983, a TCP bug in an ARPANET host caused the
host to frantically generate retransmissions of the same datagram as
fast as the ARPANET would accept them.  The gateway that connected
our net with the ARPANET was saturated and little useful traffic
could get through, since the gateway had more bandwidth to the
ARPANET than to our net.  The gateway busily sent ICMP Source Quench
messages but the malfunctioning host ignored them.  This continued
for several hours, until the malfunctioning host crashed.  During
this period, our network was effectively disconnected from the
ARPANET.

When a gateway is forced to discard a packet, the packet is selected
at the discretion of the gateway.  Classic techniques for making this
decision are to discard the most recently received packet, or the
packet at the end of the longest outgoing queue.  We suggest that a
worthwhile practical measure is to discard the latest packet from the
host that originated the most packets currently queued within the
gateway.  This strategy will tend to balance throughput amongst the
hosts using the gateway.  We have not yet tried this strategy, but it
seems a reasonable starting point for gateway self-protection.  (A
rough sketch of this policy appears after the Conclusion below.)

Another strategy is to discard a newly arrived packet if the packet
duplicates a packet already in the queue.  The computational load for
this check is not a problem if hashing techniques are used.  This
check will not protect against malicious hosts but will provide some
protection against TCP implementations with poor retransmission
control.  Gateways between fast local networks and slower long-haul
networks may find this check valuable if the local hosts are tuned to
work well with the local network.

Ideally the gateway should detect malfunctioning hosts and squelch
them; such detection is difficult in a pure datagram system.  Failure
to respond to an ICMP Source Quench message, though, should be
regarded as grounds for action by a gateway to disconnect a host.
Detecting such failure is non-trivial but is a worthwhile area for
further research.

                           Conclusion

The congestion control problems associated with pure datagram
networks are difficult, but effective solutions exist.  If IP/TCP
networks are to be operated under heavy load, TCP implementations
must address several key issues in ways at least as effective as the
ones described here.
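
As noted in the self-defense discussion above, here is a rough sketch
of the suggested per-host discard policy.  It is not a tested gateway
implementation; the fixed-size queue, the linear scans, and all names
are assumptions chosen only to make the policy concrete.  When a
buffer must be reclaimed, the most recently queued packet from
whichever source host has the most packets currently queued is the
one dropped.

    /* Rough illustrative sketch of the suggested discard policy; not
     * a tested gateway implementation. */

    #include <stdio.h>

    #define QLEN 8                              /* queue slots assumed  */

    struct qpkt { unsigned long src; int id; }; /* queued packet stub   */

    static struct qpkt queue[QLEN];
    static int qcount;

    /* Drop the most recently queued packet belonging to whichever
     * source host has the most packets currently queued. */
    static void discard_from_heaviest_source(void)
    {
        unsigned long worst_src = 0;
        int worst_count = 0;

        for (int i = 0; i < qcount; i++) {
            int n = 0;
            for (int j = 0; j < qcount; j++)
                if (queue[j].src == queue[i].src)
                    n++;
            if (n > worst_count) {
                worst_count = n;
                worst_src = queue[i].src;
            }
        }
        for (int i = qcount - 1; i >= 0; i--) {
            if (queue[i].src == worst_src) {
                printf("dropping packet %d from host %#lx\n",
                       queue[i].id, worst_src);
                for (int j = i; j < qcount - 1; j++)
                    queue[j] = queue[j + 1];
                qcount--;
                return;
            }
        }
    }

    static void enqueue(unsigned long src, int id)
    {
        if (qcount == QLEN)                     /* must free a buffer   */
            discard_from_heaviest_source();
        queue[qcount].src = src;
        queue[qcount].id  = id;
        qcount++;
    }

    int main(void)
    {
        for (int i = 0; i < 6; i++) enqueue(0x0a000001UL, i); /* greedy */
        for (int i = 0; i < 3; i++) enqueue(0x0a000002UL, i); /* polite */
        return 0;
    }

Because the drop is charged to the heaviest current user of the
gateway's buffers, throughput tends to balance across hosts rather
than penalizing whichever packet happens to arrive last.
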
