   (5) Wrong Type of Optimization - This type of flow control optimizes
       for the case where every connection starts sending large amounts
       of data almost simultaneously, exactly the case that just about
       never occurs. As a result the NCP operates almost as slowly
       under light load as it does under heavy load.

   (6) Loss of Allocation Synchrony Due to Message Length Indeterminacy
       - If this plan is to be workable, the allocation must apply to
       the entire message, including header, padding, text, and
       marking; otherwise, a plethora of small messages could overflow
       the buffers, even though the text allocation had not been
       exceeded. Thomas Barkalow of Lincoln Laboratories has pointed
       out that this also fails, because the sending HOST cannot know
       how many bits of padding the receiving HOST's system will add to
       the message. After several messages, the allocation counters of
       the sending and receiving HOSTs will get seriously out of
       synchrony. This will have serious consequences.

       It has been argued that the allocation need apply only to the
       text portion, because the header, padding, and marking are
       deleted soon after receipt of the message. This imposes an
       implementation restriction on the NCP: it must delete all but
       the text part of the message as soon as it gets it from the IMP.
       In both the TX2 and Multics implementations it is planned to
       keep most of the message in the buffers until it is read by the
       receiving process.

The advantages of demand allocation using the "cease on link" flow
control method are pretty much the converse of the disadvantages of
fixed allocation. There is much greater resource utilization,
flexibility, and bandwidth, and less overhead, primarily because flow
control restrictions are imposed only when needed, not on normal flow.

One would expect very good performance under light and moderate load,
and I won't belabor this point.

The real test is what happens under heavy load conditions. The chief
disadvantage of this demand allocation scheme is that the "cease on
link" IMP message cannot quench flow over a link soon enough to prevent
an NCP's buffers from filling completely under extreme overload.

This is true. However, it is not a critical disadvantage, for three
reasons:

   (i)   An overload that would fill an NCP's buffers is expected to
         occur at infrequent intervals.

   (ii)  When it does occur, the situation is generally self-correcting
         and lasts for only a few seconds. Flow over individual
         connections may be temporarily delayed, but this is unlikely
         to be serious.

   (iii) In a few of these situations radical action by the NCP may be
         needed to unblock the logjam. However, this will have serious
         consequences only for those connections directly responsible
         for the tie-up.

Let's examine the operation of an NCP employing demand allocation and
using "cease on link" for flow control. The following discussion is
based on a flow control algorithm in which the maximum permissible
queue length (MQL) is calculated to be a certain fraction (say 10%) of
the total empty buffer space currently available. The NCP will block a
connection if the input queue length exceeds the MQL. This can happen
either due to new input to the queue or a new calculation of the MQL at
a lower value. Under light load conditions, the MQL is reasonably high,
and relatively long input queues can be maintained without the
connection being blocked.
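A minimal sketch may make this algorithm easier to follow. The fragment
below (in Python, purely illustrative and not part of any NCP
implementation) assumes a per-link record of queued input; the names
FRACTION, Link, recompute_mql, and send_cease_on_link, and the policy
of issuing "cease on link" the moment a queue first exceeds the MQL,
are assumptions made for the example only.

    # Illustrative sketch: the MQL is a fraction of the empty buffer
    # space, recomputed as load changes, and "cease on link" is sent
    # for any connection whose input queue exceeds the current MQL.

    FRACTION = 0.10                  # "a certain fraction (say 10%)"

    class Link:
        def __init__(self, link_id):
            self.link_id = link_id
            self.queued_bits = 0     # input queued, not yet read by the process
            self.blocked = False     # True once "cease on link" has been sent

    def send_cease_on_link(link):
        """Stand-in for the IMP "cease on link" message; details omitted."""
        pass

    def recompute_mql(total_buffer_bits, links):
        """Recompute the MQL from the empty buffer space and retest every link."""
        empty = total_buffer_bits - sum(l.queued_bits for l in links)
        mql = FRACTION * empty
        for link in links:
            if not link.blocked and link.queued_bits > mql:
                link.blocked = True
                send_cease_on_link(link)     # quench flow over this link
            elif link.blocked and link.queued_bits <= mql:
                link.blocked = False         # flow may resume as the MQL rises
        return mql

Tying the MQL to empty buffer space, rather than to a fixed
per-connection quota, is what lets long input queues persist under
light load while the limit clamps down automatically as the buffers
fill.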
As load increases, the remaining available buffer space goes down, and
the MQL is constantly recalculated at a lower value. This hardly
affects console communications with short queues, but more and more of
the queues for high volume connections rise above the MQL. As they do,
"cease on link" messages are sent out for these connections.

If the flow control algorithm constants are set properly, there is a
high probability that flow over the quenched links will indeed cease
before the scarcity of buffer space reaches critical proportions.
However, there is a finite probability that the data still coming in
over the quenched links will fill the buffers. This is most likely
under already heavy load conditions, when previously inactive links
start transmitting all at once at high volume.

Once the NCP's buffers are filled, it must stop taking all messages
from its IMP. This is serious because now the NCP can no longer receive
control messages sent by other NCPs, and because the IMP may soon be
forced to stop accepting messages from the NCP. (Fortunately, the NCP
has already sent out "cease on link" messages for virtually all of the
high volume connections before it stopped taking data from the IMP.)

In most cases input from the IMP remains stopped for only a few
seconds. After a very short interval, some process with data queued for
input is likely to come in and pick it up from the NCP's buffers. The
NCP immediately starts taking data from the IMP again. The IMP output
may stop and start several times as the buffers are opened and then
refilled. However, more processes are now reading data queued for their
receive sockets, and the NCP's buffers are emptying at an accelerating
rate. Soon the reading processes make enough buffer space to take in
all the messages still pending for blocked links, and normal IMP
communications resume.

As the reading processes catch up with the sudden burst of data, the
MQL becomes higher, and more and more links become unblocked. The
crisis cannot immediately reappear, because each link generally becomes
unblocked at a different time. If new data suddenly shoots through, the
link immediately goes blocked again. The MQL goes down, with the result
that other links do not become unblocked.

The worst case appears at a HOST with relatively small total buffer
space (less than 8000 bits per active receive link) under heavy load
conditions. Suppose that high volume transmission suddenly comes in
over more than a half-dozen links simultaneously, and the processes
attached to the receive links take little or none of the input. The
input buffers may completely fill, and the NCP must stop all input from
the IMP. If some processes are reading links, buffer space soon opens
up. Shortly it is filled again, this time with messages over links
which are not being read. At this point the NCP is blocked, and could
remain so indefinitely. The NCP waits for a time, hoping that some
process starts reading data. If this happens, the crisis may soon ease.

If buffer space does not open up of its own accord, after a limited
interval the NCP must take drastic action to get back into
communication with its IMP. It selects the worst offending link on the
basis of the amount of data queued and the interval since data was last
read by the attached process, and totally deletes the input queue for
this link. This should break the logjam and start communications again.
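The selection of the worst offender can be sketched in the same
illustrative way. The scoring rule below (amount of data queued
multiplied by the time since the attached process last read) is an
assumption made for the example; the discussion above names the two
criteria but not how they are combined, and the names LinkState,
worst_offender, and drastic_action are likewise hypothetical.

    import time
    from dataclasses import dataclass, field

    @dataclass
    class LinkState:                 # minimal stand-in for the per-link record
        link_id: int
        queued_bits: int = 0         # amount of data queued for this link
        last_read_time: float = 0.0  # when the attached process last read data
        input_queue: list = field(default_factory=list)

    def worst_offender(links, now=None):
        """Rank links by (data queued) * (seconds since last read)."""
        now = time.time() if now is None else now
        return max(links, key=lambda l: l.queued_bits * (now - l.last_read_time))

    def drastic_action(links):
        """Totally delete the input queue of the worst offending link,
        so that the NCP can get back into communication with its IMP."""
        victim = worst_offender(links)
        victim.input_queue.clear()
        victim.queued_bits = 0
        return victim

Because only the one queue is deleted, the damage is confined to the
link most responsible for the tie-up, while every other connection
keeps whatever input it has already accumulated.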
This type of situation is expected to occur most often when a process
deliberately tries to block an NCP in this manner. The solution has
serious consequences only for "bad" processes: those that flood the
network with high volume transmission over multiple links which are not
being read by the receiving processes.

Because of the infrequency and tolerability of this situation, the
overall performance of the network using this scheme of demand
allocation should still be much superior to that of a network employing
a fixed allocation scheme.

        [ This RFC was put into machine readable form for entry ]
        [ into the online RFC archives by William Lewis 6/97 ]