RFC 1254                Gateway Congestion Control Survey           August 1991

   little traffic.  The selection of packets to be dropped is completely uniform.  Therefore, a user who generates traffic of an amount below the "fair share" (as defined in Section 2.3) may also experience a small amount of packet loss at a congested gateway.  This uniformity is in fact a primary goal that Random Drop attempts to achieve.

   The other primary goal that Random Drop attempts to achieve is a theoretical overhead that scales with the number of shared resources in the gateway rather than with the number of its users.  If a gateway congestion algorithm requires more computation as the number of users grows, this can lead to processing demands that rise just as congestion increases.  The low-overhead goal of Random Drop also addresses concerns about the scale of gateway processing that will be required in the mid-term Internet, as gateways with fast processors and links come to be shared by very large active sets of users.

3.2.1  For Congestion Recovery

   Random Drop has been proposed as an improvement to packet dropping at the operating point where the gateway's packet buffers overflow.  This is using Random Drop strictly as a congestion recovery mechanism.

   In Random Drop congestion recovery, instead of dropping the last packet to arrive at the queue, a packet is selected randomly from the queue.  Measurements of an implementation of Random Drop Congestion Recovery (RDCR) [Man90] showed that a user with a low demand, due to a longer round-trip time path than other users of the gateway, had a higher drop rate with RDCR than without it.  The throughput accorded to users with the same round-trip time paths was nearly equal with RDCR as compared to without it.  These results suggest that RDCR should be avoided unless it is used within a scheme that groups traffic more or less by round-trip time.
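   As a concrete illustration, a gateway's enqueue path under Random Drop congestion recovery might look like the following sketch, contrasted with conventional tail drop: when a packet arrives at a full buffer, the victim is chosen uniformly at random from the queued packets rather than always being the new arrival.  The buffer size and names here are illustrative, not taken from the RFC.

      import random

      BUFFER_LIMIT = 64              # illustrative buffer size; not specified by the RFC
      queue = []                     # packets awaiting transmission on the output link

      def enqueue_tail_drop(packet):
          # Conventional overflow behaviour: the arriving packet is the one discarded.
          if len(queue) >= BUFFER_LIMIT:
              return
          queue.append(packet)

      def enqueue_random_drop(packet):
          # Random Drop congestion recovery: on overflow, discard a packet chosen
          # uniformly at random from the queue, then accept the new arrival.
          if len(queue) >= BUFFER_LIMIT:
              del queue[random.randrange(len(queue))]
          queue.append(packet)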
3.2.2  For Congestion Avoidance

   Random Drop is also proposed as a congestion avoidance policy [Jac89].  The intent is to begin dropping packets when the gateway is anticipated to become congested, and to remain so unless some control is exercised.  This implies selecting incoming packets to be randomly dropped at a rate derived from the identified level of congestion at the gateway.  The rate is the number of arrivals allowed between drops.  It depends on the current operating point and the prediction of congestion.

   Part of the policy is to determine that congestion will soon occur and that the gateway is beginning to operate beyond the knee of the power curve.  With a suitably chosen interval (Section 2.1), the number of packets from each individual user in a sample over that interval is proportional to each user's demand on the gateway.  Dropping one or more random packets then indicates to some user(s) the need to reduce the level of demand that is driving the gateway beyond the desired operating point.  This is the goal that a policy of Random Drop for congestion avoidance attempts to achieve.

   There are several parameters to be determined for a Random Drop congestion avoidance policy.  The first is an interval, in terms of number of packet arrivals, over which packets are dropped with uniform probability.  For instance, in a sample implementation, if this interval spanned 2000 packet arrivals and a suitable probability of drop was 0.001, then two random variables would be drawn from a uniform distribution over the range 1 to 2,000.  The values drawn would be used, by counting arrivals, to select the packets dropped on arrival.  The second parameter is the value of the probability of drop itself.  This parameter would be a function of an estimate of the number of users, their appropriate control intervals, and possibly the length of time that congestion has persisted.  [Jac89] has suggested successively increasing the probability of drop when congestion persists over multiple control intervals.  The motivation for increasing the packet drop probability is that the implicit estimate of the number of users and the random selection of their packets to drop do not guarantee that all users have received enough signals to decrease demand.  Increasing the probability of drop increases the probability that enough feedback is provided.
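   The selection step can be made concrete with a short sketch using the sample numbers above: an interval of 2000 arrivals and a drop probability of 0.001, hence two drops per interval.  The congestion-detection test that activates dropping is left abstract here, and all names are illustrative rather than taken from any implementation.

      import random

      INTERVAL = 2000            # packet arrivals per control interval (sample value above)
      DROP_PROBABILITY = 0.001   # sample value above; [Jac89] suggests raising this
                                 # while congestion persists over successive intervals

      def schedule_drops(interval=INTERVAL, p=DROP_PROBABILITY):
          # The number of drops per interval is interval * p (two for the sample numbers);
          # each drop position is drawn uniformly from 1..interval.
          n_drops = round(interval * p)
          return {random.randint(1, interval) for _ in range(n_drops)}

      def forward_one_interval(arrivals, approaching_congestion):
          # Count arrivals through one interval and drop the selected ones,
          # but only while congestion is predicted (detection is not shown here).
          drop_at = schedule_drops() if approaching_congestion else set()
          forwarded = []
          for count, packet in enumerate(arrivals, start=1):
              if count not in drop_at:
                  forwarded.append(packet)
          return forwarded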
   Congestion detection is also needed in Random Drop congestion avoidance, and it could be implemented in a variety of ways.  The simplest is a static threshold, but dynamically averaged measures of demand or utilization have been suggested.

   The packets dropped in Random Drop congestion avoidance would not be selected from the initial inputs to the gateway.  We suggest that they would be selected only from packets destined for the resource that is predicted to be approaching congestion.  For example, in the case of a gateway with multiple outbound links, access to each individual link is treated as a separate resource, and the Random Drop policy is applied at each link independently.  Random Drop congestion avoidance would provide uniform treatment of all cooperating transport users, even over individual patterns of traffic multiplexed within one user's stream.  There is no aggregation of users.

   Simulation studies [Zha89], [Has90] have presented evidence that Random Drop is not fair across cooperating and non-cooperating transport users.  A transport user whose sending policies included Go-Back-N retransmissions and did not include Slow-start received an excessive share of bandwidth from a simple implementation of Random Drop, while the simultaneously active Slow-start users received unfairly low shares.  It follows that when users do not respond to control over a prolonged period, the Random Drop congestion avoidance mechanism has an increased probability of penalizing users with lower demand: once their packets are dropped, these users exercise the controls that lead to their giving up bandwidth.

   Another problem arises in Random Drop [She89] across users whose communication paths are of different lengths.  If a path spans congested resources at multiple gateways, then the user's probability of receiving an unfair drop and subsequent control (if cooperating) is exponentially increased.  This is a significant scaling problem.

   Unequal paths cause problems for other congestion avoidance policies as well.  Selective Feedback Congestion Indication was devised to enhance Congestion Indication specifically because of the problem of unequal paths.  In Fair Queueing by source-destination pairs, each path gets its own queue in all the gateways.

3.3  Congestion Indication

   The Congestion Indication policy is often referred to as the DEC Bit policy.  It was developed at DEC [JRC87], originally for the Digital Network Architecture (DNA).  It has also been specified for the congestion avoidance of the ISO protocols TP4 and CLNP [NIST88].  Like Source Quench, it uses explicit communication from the congested gateway to the user.  However, to use the lowest possible network resources for indicating congestion, the information is communicated in a single bit, the Congestion Experienced Bit, set in the network header of packets already being forwarded by the gateway.  Based on the condition of this bit, the end-system user makes an adjustment to the sending window.  In the NSP transport protocol of DECNET, the source adjusts its own window; in the ISO transport protocol, TP4, the destination makes this adjustment in the window offered to the sender.

   This policy attempts to avoid congestion by setting the bit whenever the average queue length over the previous queue regeneration cycle, plus part of the current cycle, is one or more.  The feedback is determined independently at each resource.
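   A sketch of that averaging rule follows.  The gateway accumulates the time integral of queue length over the previous regeneration cycle (a busy plus idle period) and over the cycle in progress, and marks forwarded packets while the combined average is one or more.  The class and method names are illustrative, not drawn from [JRC87].

      class DecBitQueue:
          # Congestion Indication: the average queue length over the previous
          # regeneration cycle plus the current partial cycle decides the bit.

          def __init__(self):
              self.prev_area = 0.0   # integral of queue length over the previous cycle
              self.prev_time = 0.0   # duration of the previous cycle
              self.cur_area = 0.0    # same quantities for the cycle in progress
              self.cur_time = 0.0

          def accumulate(self, queue_len, dt):
              # Called as time advances; dt is the time spent at queue_len.
              self.cur_area += queue_len * dt
              self.cur_time += dt

          def end_of_cycle(self):
              # The queue has gone idle and become busy again: start a new cycle.
              self.prev_area, self.prev_time = self.cur_area, self.cur_time
              self.cur_area = self.cur_time = 0.0

          def congestion_experienced(self):
              # Set the Congestion Experienced Bit when the average is one or more.
              total_time = self.prev_time + self.cur_time
              if total_time == 0:
                  return False
              avg_queue = (self.prev_area + self.cur_area) / total_time
              return avg_queue >= 1.0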
3.4  Selective Feedback Congestion Indication

   The simple Congestion Indication policy works from the total demand on the gateway.  The total number of users, and the fact that only a few of the users might be causing congestion, is not considered.  For fairness, only those users who are sending more than their fair share should be asked to reduce their load, while others could attempt to increase where possible.  In Selective Feedback Congestion Indication, the Congestion Experienced Bit is used to carry out this goal.

   Selective Feedback works by keeping a count of the number of packets sent by different users since the beginning of the queue averaging interval.  This is equivalent to monitoring their throughputs.  Based on the total throughput, a fair share for each user is determined, and the congestion bit is set, when congestion approaches, for the users whose demand is higher than their fair share.  If the gateway is operating below the throughput-delay knee, congestion indications are not set.

   A min-max algorithm used to determine the fair share of capacity, and other details of this policy, are described in [RJC87].  One parameter to be computed is the capacity of each resource to be divided among the users.  This metric depends on the distribution of inter-arrival times and packet sizes of the users.  Attempting to determine these in real time in the gateway is unacceptable.  The capacity is instead estimated from the throughput seen when the gateway is operating in congestion, as indicated by the average queue length.  In congestion (above the knee), the service rate of the gateway limits its throughput.  Multiplying the throughput obtained at this operating point by a capacity factor (between 0.5 and 0.9) to adjust for the distributions yields an acceptable capacity estimate in simulations.

   Selective Feedback Congestion Indication takes as input a vector of the number of packets sent by each source-destination pair of end-systems.  Other alternatives include 1) destination address, 2) input/output link, and 3) transport connection (source/destination addresses and ports).

   These alternatives give different granularities for fairness.  In the case where paths are the same or the round-trip times of users are close together, throughputs are equalized similarly by basing the selective feedback on source or destination address.  In fact, if the RTTs are close enough, the simple Congestion Indication policy would result in a fair allocation.  Counts based on source/destination pairs ensure that paths with different lengths and network conditions get a fair throughput at the individual gateways.  Counting packets based on link pairs has a low overhead, but may result in unfairness to users whose demand is below the fair share and who are using a long path.  Counts based on transport-layer connection identifiers, if this information were available to Internet gateways, would make good distinctions, since the differing demands of different applications, and of instances of applications, would be separately monitored.

   One problem with Selective Feedback Congestion Indication is that the gateway has significant processing to do.  Another is that, with the feasible choice of aggregation at the gateway, unfairness across users within a group is likely.  For example, an interactive connection aggregated with one or more bulk-transfer connections will receive congestion indications even though its own use of the gateway's resources is very low.

3.5  Fair Queueing

   Fair Queueing is the policy of maintaining separate gateway output queues for individual end-systems, by source-destination pair.  In the policy as proposed by [Nag85], the gateway's processing and link resources are distributed to the end-systems on a round-robin basis.  On congestion, packets are dropped from the longest queue.  This policy leads to equal allocations of resources to each source-destination pair.  A source-destination pair that demands more than a fair share simply increases its own queueing delay and congestion drops.

3.5.1  Bit-Round Fair Queueing

   An enhancement of Nagle Fair Queueing, the Bit-Round Fair Queueing algorithm described and simulated by [DKS89] addresses several shortcomings of Nagle's scheme.  It computes the order of service to packets using their lengths, with a technique that emulates a bit-by-bit round-robin discipline, so that long packets do not get an advantage over short ones.  Otherwise the round-robin would be unfair, for example giving more bandwidth to hosts whose traffic is mainly long packets than to hosts sourcing short packets.

   The aggregation of users of a source-destination pair by Fair Queueing has the property of grouping the users whose round-trips are

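   To make the queueing policies of Sections 3.5 and 3.5.1 concrete, the following sketch implements Nagle-style Fair Queueing as described above: one queue per source-destination pair, round-robin service, and drops taken from the longest queue when the shared buffer fills.  The bit-round refinement of [DKS89] would replace the simple round-robin with a service order computed from per-packet finish tags that account for packet lengths; that bookkeeping is not shown here.  The buffer limit and all names are illustrative.

      from collections import defaultdict, deque

      TOTAL_BUFFER = 256            # illustrative shared buffer limit, in packets

      class FairQueueingGateway:
          # Nagle Fair Queueing sketch: a queue per source-destination pair,
          # round-robin service, and drops from the longest queue on congestion.

          def __init__(self):
              self.queues = defaultdict(deque)   # (src, dst) -> queued packets
              self.rr_order = []                 # round-robin rotation of active pairs

          def enqueue(self, src, dst, packet):
              pair = (src, dst)
              if pair not in self.rr_order:      # make the pair part of the rotation
                  self.rr_order.append(pair)
              if sum(len(q) for q in self.queues.values()) >= TOTAL_BUFFER:
                  # congestion: drop from the longest queue rather than the arrival
                  longest = max(self.queues, key=lambda p: len(self.queues[p]))
                  self.queues[longest].pop()
              self.queues[pair].append(packet)

          def dequeue(self):
              # Serve active pairs in round-robin order, one packet per turn.
              for _ in range(len(self.rr_order)):
                  pair = self.rr_order.pop(0)
                  if self.queues[pair]:
                      self.rr_order.append(pair)   # keep the pair in the rotation
                      return self.queues[pair].popleft()
                  # the pair has gone idle; it rejoins the rotation when it next sends
              return None                          # nothing queued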