RFC 1254           Gateway Congestion Control Survey         August 1991

similar.  This may be one reason that the combination of Bit-Round
Fair Queueing with Congestion Indication had particularly good
simulated performance in [DKS89].

The aggregation of users has the expected drawbacks as well.  A
TELNET user in a queue with an FTP user does not get delay benefits,
and host pairs with a high volume of connections are treated the same
as host pairs with a small number of connections, and as a result
receive unfair service.

A further problem with Fair Queueing can be mentioned, though it is
related to implementation and perhaps not properly part of a policy
discussion.  The concern is that the processing resources used to
determine where to queue incoming packets would themselves be subject
to congestion, but not to the benefits of the Fair Queueing
discipline.  In a situation where the gateway processor was not
adequate to the demands on it, the gateway would need an alternative
policy for congestion control of the queue awaiting processing.  A
clever implementation can probably find an efficient way to route
packets to the queues they belong in before other input processing is
done, so that processing resources can be controlled too.  There is,
in addition, the concern that bit-by-bit round FQ requires non-FCFS
queueing even within the same source-destination pair, to allow for
fairness among different connections between these end systems.

Another potential concern about Fair Queueing is whether it can scale
up to gateways with very large source-destination populations.  For
example, the state in a Fair Queueing implementation scales with the
number of active end-to-end paths, which will be high in backbone
gateways.

3.5.2  Stochastic Fairness Queueing

Stochastic Fairness Queueing (SFQ) has been suggested as a technique
[McK90] to address the implementation issues of Fair Queueing.  The
first overhead that is reduced is that of looking up the source-
destination address pair in an incoming packet and determining which
queue that packet must be placed in.  SFQ does not require as many
memory accesses as Fair Queueing to place the packet in the
appropriate queue, and is thus claimed to be more amenable to
implementation for high-speed networks [McK90].

SFQ uses a simple hash function to map from the source-destination
address pair to a fixed set of queues.  Since the assignment of an
address pair to a queue is probabilistic, there is the likelihood of
multiple address pairs colliding and mapping to the same queue.  This
would potentially degrade the additional fairness that is gained with
Fairness Queueing.  If two or more address pairs collide, they would
continue to do so.  To deal with such collisions, SFQ periodically
perturbs the hash function so that these address pairs become
unlikely to collide subsequently.
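To make the mechanism concrete, the following Python sketch shows one
way an SFQ enqueue and dequeue path might look.  It is only an
illustration: the queue count, the CRC32-based keyed hash, the
round-robin service order, and the perturbation interval are
assumptions made for this example, not details taken from [McK90];
the hash function actually studied in [McK90] and its tuning are
described below.

    # Minimal SFQ sketch (illustrative only; parameters and the hash
    # are assumptions, not the scheme evaluated in [McK90]).
    import time
    import zlib
    from collections import deque

    class SFQ:
        def __init__(self, n_queues=1024, perturb_interval=10.0):
            self.n_queues = n_queues          # many more queues than active s-d pairs
            self.queues = [deque() for _ in range(n_queues)]
            self.perturb_interval = perturb_interval  # seconds between perturbations
            self.salt = 1                     # perturbation key for the hash
            self.last_perturb = time.monotonic()
            self.next_queue = 0               # round-robin service pointer

        def _queue_index(self, src, dst):
            # Hash the source-destination pair, keyed by the current salt.
            key = ("%d:%s:%s" % (self.salt, src, dst)).encode()
            return zlib.crc32(key) % self.n_queues

        def enqueue(self, src, dst, packet):
            # Periodically perturb the hash so that address pairs which
            # happen to collide are unlikely to keep colliding.
            now = time.monotonic()
            if now - self.last_perturb > self.perturb_interval:
                self.salt = (self.salt % 255) + 1
                self.last_perturb = now
            self.queues[self._queue_index(src, dst)].append(packet)

        def dequeue(self):
            # Serve non-empty queues round-robin, one packet at a time,
            # as a rough approximation of fair service.
            for _ in range(self.n_queues):
                q = self.queues[self.next_queue]
                self.next_queue = (self.next_queue + 1) % self.n_queues
                if q:
                    return q.popleft()
            return None

In this sketch, packets already queued when the hash is perturbed
simply drain from their old queues; only newly arriving packets use
the new mapping.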
The price paid for this implementation efficiency is that SFQ
requires a potentially large number of queues.  (We note, however,
that these queues may be organized orthogonally to the buffers in
which packets are stored.  The buffers themselves may be drawn from a
common pool, with the buffers in each queue organized as a linked
list pointed to from the queue header.)  For example, [McK90]
indicates that to get good, consistent performance, we may need 5 to
10 times as many queues as active source-destination pairs.  In a
typical gateway, this may require around 1000 to 2000 queues.

[McK90] reports simulation results with SFQ.  The particular hash
function studied uses the HDLC cyclic redundancy check (CRC).  The
hash function is perturbed by multiplying each byte by a sequence
number in the range 1 to 255 before applying the CRC.  The metric
considered is the standard deviation of the number of packets output
per source-destination pair.  A perfectly fair policy would have a
standard deviation of zero, and an unfair policy would have a large
standard deviation.  In the example studied (which has up to 20
source-destination (s-d) pairs going through a single overloaded
gateway), SFQ with 1280 queues (i.e., 64 times the number of source-
destination pairs) approaches about 3 times the standard deviation of
Fairness Queueing.  This must be compared to an FCFS queueing
discipline, whose standard deviation is almost 175 times that of
Fairness Queueing.

It is conjectured in [McK90] that a good value for the interval
between perturbations of the hash function lies between twice the
queue flush time of the stochastic fairness queue and one-tenth the
average conversation time between a source-destination pair.

SFQ may also alleviate the anticipated scaling problems of Fair
Queueing, because it does not strictly require one queue per possible
source-destination pair presenting a load on a particular gateway.
However, SFQ achieves this property by trading off some of the
fairness for implementation efficiency.

[McK90] examines alternative strategies for implementing Fair
Queueing and SFQ and estimates their complexity on common hardware
platforms (e.g., a Motorola 68020).  It is suggested that mapping an
IP address pair may require around 24 instructions per packet for
Fair Queueing in the best case; in contrast, SFQ requires 10
instructions in the worst case.  The primary source of this gain is
that SFQ avoids comparing the s-d address pair on the packet to the
identity of the queue header.  The relative benefit of SFQ over Fair
Queueing is anticipated to be greater when the addresses are longer.

SFQ offers promising implementation benefits.  However, the price to
be paid in terms of having a significantly larger number of queues
(and the consequent data structures and their management) than the
number of s-d pairs placing a load on the gateway is a concern.  SFQ
is also attractive in that it may be used in concert with the DEC-bit
scheme for Selective Feedback Congestion Indication to provide
fairness as well as congestion avoidance.
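As a concrete illustration of the fairness metric used in the [McK90]
comparison above, the following Python fragment computes the standard
deviation of per-pair output counts from a trace of forwarded
packets.  The trace format and names are invented for this example:

    # Fairness metric discussed above: the standard deviation of the
    # number of packets forwarded per source-destination pair (zero
    # would indicate perfectly fair service; a large value indicates
    # unfair service).
    from collections import Counter
    from statistics import pstdev

    def fairness_stddev(forwarded):
        """forwarded: iterable of (src, dst) pairs, one entry per packet
        sent on the output link during the measurement interval.
        Note: pairs that received no service at all do not appear in the
        trace and would need to be added with a count of zero."""
        counts = Counter(forwarded)
        return pstdev(counts.values())

    # Hypothetical trace: pair A got far more service than B and C.
    trace = [("A", "X")] * 90 + [("B", "X")] * 5 + [("C", "X")] * 5
    print(fairness_stddev(trace))   # large value -> unfair sharing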
4.  END-SYSTEM CONGESTION CONTROL POLICIES

Ideally, in gateway congestion control, the end-system transport
entities adjust (decrease) their demand in response to control
exerted by the gateway.  Schemes have been put into practice for
transport entities to adjust their demand dynamically in response to
congestion feedback.  To accomplish this, in general, they call for
the user to gradually increase demand until information is received
that the load on the gateway is too high.  In response to this
information, the user decreases load, then begins exploratory
increases again.  This cycle is repeated continuously, with the goal
that the total demand will oscillate around the optimal level.

We have already noted that a Slow-start connection may give up
considerable bandwidth when sharing a gateway with aggressive
transport entities.  There is currently no way to enforce that end-
systems use a congestion avoidance policy, though the Host
Requirements RFC [HR89] has defined Slow-start as mandatory for TCP.
This adverse effect on Slow-start connections is mitigated by the
Fair Queueing policy.  Our conclusions discuss further the
coexistence of different end-system strategies.

This section briefly presents two fielded transport congestion
control and avoidance schemes, Slow-start and End-System Congestion
Indication, and the responses by means of which they cooperate with
gateway policies.  They both use the control paradigm described
above.  Slow-start, as mentioned in Section 1, was developed by
[Jac88] and widely fielded in the BSD TCP implementation.  End-System
Congestion Indication was developed by [JRC87].  It is fielded in
DEC's Digital Network Architecture and has been specified as well for
ISO TP4 [NIST88].

Both Slow-start and End-System Congestion Indication view the
relationship between users and gateways as a control system.  They
have feedback and control components, the latter further broken down
into a procedure bringing a new connection to equilibrium and a
procedure maintaining demand at the proper level.  They make use of
policies for increasing and decreasing the transport sender's window
size.  These require the sender to follow a set of self-restraining
rules which dynamically adjust the send window whenever congestion is
sensed.

A predecessor of these, CUTE, developed by [Jai86], introduced the
use of retransmission timeouts as congestion feedback.  The Slow-
start scheme was also designed to use timeouts in the same way.  The
End-System policies for Congestion Indication use the Congestion
Experienced Bit delivered in the network header as the primary
feedback, but retransmission timeouts also provoke an end-system
congestion response.

In reliable transport protocols like TCP and TP4, the retransmission
timer must do its best to satisfy two conflicting goals: on one hand,
the timer must rapidly detect lost packets; on the other hand, it
must minimize false alarms.  Since the retransmit timer is used as a
congestion signal in these end-system policies, it is all the more
important that timeouts reliably correspond to packet drops.

One important rule for retransmission is to avoid bad sampling in the
measurements that will be used to estimate the round trip delay.
[KP87] describes techniques to ensure accurate sampling.  The Host
Requirements RFC [HR89] makes these techniques mandatory for TCP.

The utilization of a resource can be defined as the ratio of its
average arrival rate to its mean service rate.  This metric varies
between 0 and 1.0.  In a state of congestion, one or more resources
(link, gateway buffer, gateway CPU) has a utilization approaching
1.0, and the average delay (round trip time) and its variance
approach infinity.  Therefore, as the utilization of a network
increases, it becomes increasingly important to take the variance of
the round trip time into account when estimating it for the
retransmission timeout.

The TCP retransmission timer is based on an estimate of the round
trip time.  [Jac88] calls the round trip time estimator the single
most important feature of any protocol implementation that expects to
survive a heavy load.  The retransmit timeout procedure in RFC-793
[Pos81b] includes a fixed parameter, beta, to account for the
variance in the delay.  An estimate of round trip time using the
suggested values for beta is valid for a network which is at most 30%
utilized.  Thus, the RFC-793 retransmission timeout estimator will
fail under heavy congestion, causing unnecessary retransmissions that
add to the load, and making congestive loss detection impossible.
Slow-start TCP uses the mean deviation as an estimate of the variance
to improve the estimate.  As a rough view of what happens with the
Slow-start retransmission calculation, delays can change by
approximately two standard deviations without the timer going off in
a false alarm.  The same method of estimation may be applicable to
TP4.  The procedure is:

        Error     = Measured - Estimated
        Estimated = Estimated + Gain_1 * Error
        Deviation = Deviation + Gain_2 * (|Error| - Deviation)
        Timeout   = Estimated + 2 * Deviation

        Where:  Gain_1, Gain_2 are gain factors.
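The procedure above can be restated as a small runnable sketch in
Python.  The gain values (1/8 and 1/4) and the initialization of the
estimate are assumptions chosen for illustration; the text here does
not specify them.

    # Runnable sketch of the retransmission timeout estimator above.
    # Gain_1 = 1/8 and Gain_2 = 1/4 are commonly used values, assumed
    # here for illustration; the initialization is also an assumption.
    class RttEstimator:
        def __init__(self, gain_1=0.125, gain_2=0.25):
            self.gain_1 = gain_1
            self.gain_2 = gain_2
            self.estimated = None   # smoothed round trip time estimate
            self.deviation = 0.0    # smoothed mean deviation

        def sample(self, measured):
            """Feed one round trip time measurement (seconds)."""
            if self.estimated is None:
                self.estimated = measured
                self.deviation = measured / 2.0
                return
            error = measured - self.estimated
            self.estimated += self.gain_1 * error
            self.deviation += self.gain_2 * (abs(error) - self.deviation)

        def timeout(self):
            """Retransmission timeout: Estimated + 2 * Deviation."""
            return self.estimated + 2.0 * self.deviation

    # Example: a run of longer delays raises the timeout, so delays can
    # grow by roughly two deviations without a false retransmission.
    est = RttEstimator()
    for rtt in (0.10, 0.12, 0.11, 0.30, 0.32):
        est.sample(rtt)
    print(round(est.timeout(), 3))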
4.1  Response to No Policy in Gateway

Since packets must be dropped during congestion because of the finite
buffer space, feedback of congestion to the users exists even when
there is no gateway congestion policy.  Dropping packets is an
attempt to recover from congestion, though it should be noted that
packet drops do not prevent congestion collapse if end-systems
accelerate their sending rate under these conditions.  The accurate
detection of congestive loss by all retransmission timers in the
end-systems is extremely important for gateway congestion recovery.

4.2  Response to Source Quench

Given that a Source Quench message has an ambiguous meaning, but
usually is issued for congestion recovery, the response of Slow-start
to a Source Quench is to return to the beginning of the cycle of
increase.  This is an early response, since the Source Quench on
overflow will also lead to a retransmission timeout later.

4.3  Response to Random Drop
