be used to control the queue size for each individual flow or
class, so that they do not experience unnecessarily high delays.
Therefore, active queue management should be applied across the
classes or flows as well as within each class or flow.
In short, scheduling algorithms and queue management should be
seen as complementary, not as replacements for each other. In
particular, there have been implementations of queue management
added to FQ, and work is in progress to add RED queue management
to CBQ.
Braden, et. al. Informational [Page 6]
RFC 2309 Internet Performance Recommendations April 1998

3. THE QUEUE MANAGEMENT ALGORITHM "RED"

Random Early Detection, or RED, is an active queue management
algorithm for routers that will provide the Internet performance
advantages cited in the previous section [RED93]. In contrast to
traditional queue management algorithms, which drop packets only when
the buffer is full, the RED algorithm drops arriving packets
probabilistically. The probability of drop increases as the
estimated average queue size grows. Note that RED responds to a
time-averaged queue length, not an instantaneous one. Thus, if the
queue has been mostly empty in the "recent past", RED won't tend to
drop packets (unless the queue overflows, of course!). On the other
hand, if the queue has recently been relatively full, indicating
persistent congestion, newly arriving packets are more likely to be
dropped.
The RED algorithm itself consists of two main parts: estimation of
the average queue size and the decision of whether or not to drop an
incoming packet.
(a) Estimation of Average Queue Size
RED estimates the average queue size, either in the forwarding
path using a simple exponentially weighted moving average (such
as presented in Appendix A of [Jacobson88]), or in the
background (i.e., not in the forwarding path) using a similar
mechanism.
Note: The queue size can be measured either in units of
packets or of bytes. This issue is discussed briefly in
[RED93] in the "Future Work" section.
Note: when the average queue size is computed in the
forwarding path, there is a special case when a packet
arrives and the queue is empty. In this case, the
computation of the average queue size must take into account
how much time has passed since the queue went empty. This is
discussed further in [RED93].
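As an illustrative sketch (not the RFC's normative pseudocode), the exponentially weighted moving average, including the empty-queue correction noted above, might look like the following; the weight w_q and the idle-time handling are simplified assumptions:

```python
def update_avg(avg, qlen, w_q=0.002, idle_periods=0):
    # Exponentially weighted moving average of the queue size, in the
    # spirit of [RED93]; w_q is an illustrative weight, not a
    # recommended setting.
    if qlen == 0 and idle_periods > 0:
        # Special case: the queue went empty.  Decay the average as if
        # idle_periods small packets had arrived to an empty queue.
        return avg * (1.0 - w_q) ** idle_periods
    return (1.0 - w_q) * avg + w_q * qlen
```

The idle-period decay stands in for the time-based correction discussed in [RED93]; a real implementation would derive it from the elapsed time since the queue emptied.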
(b) Packet Drop Decision
In the second portion of the algorithm, RED decides whether or
not to drop an incoming packet. It is RED's particular
algorithm for dropping that results in performance improvement
for responsive flows. Two RED parameters, minth (minimum
threshold) and maxth (maximum threshold), figure prominently in
this decision process. Minth specifies the average queue size
*below which* no packets will be dropped, while maxth specifies
the average queue size *above which* all packets will be
dropped. As the average queue size varies from minth to maxth,
packets will be dropped with a probability that varies linearly
from 0 to maxp.
Note: a simplistic method of implementing this would be to
calculate a new random number at each packet arrival, then
compare that number with the above probability which varies
from 0 to maxp. A more efficient implementation, described
in [RED93], computes a random number *once* for each dropped
packet.
Note: the decision whether or not to drop an incoming packet
can be made in "packet mode", ignoring packet sizes, or in
"byte mode", taking into account the size of the incoming
packet. The performance implications of the choice between
packet mode and byte mode are discussed further in [Floyd97].
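In "packet mode", the simplistic per-arrival method described above can be sketched as follows; the threshold values and maxp shown are illustrative, not recommended settings:

```python
import random

def red_drop(avg, minth=5.0, maxth=15.0, maxp=0.1):
    # RED drop decision in "packet mode" (packet sizes ignored).
    # minth, maxth, and maxp here are illustrative values only.
    if avg < minth:
        return False                 # below minth: never drop
    if avg >= maxth:
        return True                  # above maxth: always drop
    # Drop probability rises linearly from 0 at minth to maxp at maxth.
    p = maxp * (avg - minth) / (maxth - minth)
    return random.random() < p
```

This sketch draws a new random number for every arriving packet; as noted above, the more efficient implementation in [RED93] computes a random number once per dropped packet.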
RED effectively controls the average queue size while still
accommodating bursts of packets without loss. RED's use of
randomness breaks up synchronized processes that lead to lock-out
phenomena.
There have been several implementations of RED in routers, and papers
have been published reporting on experience with these
implementations ([Villamizar94], [Gaynor96]). Additional reports of
implementation experience would be welcome, and will be posted on the
RED web page [REDWWW].
All available empirical evidence shows that the deployment of active
queue management mechanisms in the Internet would have substantial
performance benefits. There are seemingly no disadvantages to using
the RED algorithm, and numerous advantages. Consequently, we believe
that the RED active queue management algorithm should be widely
deployed.
We should note that there are some extreme scenarios for which RED
will not be a cure, although it won't hurt and may still help. An
example of such a scenario would be a very large number of flows,
each so tiny that its fair share would be less than a single packet
per RTT.

4. MANAGING AGGRESSIVE FLOWS

One of the keys to the success of the Internet has been the
congestion avoidance mechanisms of TCP. Because TCP "backs off"
during congestion, a large number of TCP connections can share a
single, congested link in such a way that bandwidth is shared
reasonably equitably among similarly situated flows. The equitable
sharing of bandwidth among flows depends on the fact that all flows
are running basically the same congestion avoidance algorithms,
conformant with the current TCP specification [HostReq89].
We introduce the term "TCP-compatible" for a flow that behaves under
congestion like a flow produced by a conformant TCP. A TCP-
compatible flow is responsive to congestion notification, and in
steady-state it uses no more bandwidth than a conformant TCP running
under comparable conditions (drop rate, RTT, MTU, etc.).
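This dependence on drop rate, RTT, and MTU is often summarized by a well-known rule-of-thumb approximation for steady-state TCP throughput from the TCP analysis literature (it is not a formula given in this document):

```python
import math

def tcp_friendly_rate(mtu_bytes, rtt_s, loss_rate):
    # Approximate steady-state throughput (bytes/s) of a conformant TCP:
    #   rate ~= 1.22 * MTU / (RTT * sqrt(p))
    # The constant 1.22 and the formula itself are a rule of thumb from
    # the literature, not part of RFC 2309.
    return 1.22 * mtu_bytes / (rtt_s * math.sqrt(loss_rate))
```

Under this approximation, a TCP-compatible flow experiencing the same drop rate, RTT, and MTU should use no more bandwidth than this estimate.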
It is convenient to divide flows into three classes: (1) TCP-
compatible flows, (2) unresponsive flows, i.e., flows that do not
slow down when congestion occurs, and (3) flows that are responsive
but are not TCP-compatible. The last two classes contain more
aggressive flows that pose significant threats to Internet
performance, as we will now discuss.
o Non-Responsive Flows
There is a growing set of UDP-based applications whose
congestion avoidance algorithms are inadequate or nonexistent
(i.e., the flow does not throttle back upon receipt of congestion
notification). Such UDP applications include streaming
applications like packet voice and video, and also multicast
bulk data transport [SRM96]. If no action is taken, such
unresponsive flows could lead to a new congestion collapse.
In general, all UDP-based streaming applications should
incorporate effective congestion avoidance mechanisms. For
example, recent research has shown the possibility of
incorporating congestion avoidance mechanisms such as Receiver-
driven Layered Multicast (RLM) within UDP-based streaming
applications such as packet video [McCanne96; Bolot94]. Further
research and development on ways to accomplish congestion
avoidance for streaming applications will be very important.
However, it will also be important for the network to be able to
protect itself against unresponsive flows, and mechanisms to
accomplish this must be developed and deployed. Deployment of
such mechanisms would provide incentive for every streaming
application to become responsive by incorporating its own
congestion control.
o Non-TCP-Compatible Transport Protocols
The second threat is posed by transport protocol implementations
that are responsive to congestion notification but, either
deliberately or through faulty implementations, are not TCP-
compatible. Such applications can grab an unfair share of the
network bandwidth.
For example, the popularity of the Internet has caused a
proliferation in the number of TCP implementations. Some of
these may fail to implement the TCP congestion avoidance
mechanisms correctly because of poor implementation. Others may
deliberately be implemented with congestion avoidance algorithms
that are more aggressive in their use of bandwidth than other
TCP implementations; this would allow a vendor to claim to have
a "faster TCP". The logical consequence of such implementations
would be a spiral of increasingly aggressive TCP
implementations, leading back to the point where there is
effectively no congestion avoidance and the Internet is
chronically congested.
Note that there is a well-known way to achieve more aggressive
TCP performance without even changing TCP: open multiple
connections to the same place, as has been done in some Web
browsers.
The projected increase in more aggressive flows of both these
classes, as a fraction of total Internet traffic, clearly poses a
threat to the future Internet. There is an urgent need for
measurements of current conditions and for further research into the
various ways of managing such flows. There are many difficult issues
in identifying and isolating unresponsive or non-TCP-compatible flows
at an acceptable router overhead cost. Finally, there is little
measurement or simulation evidence available about the rate at which
these threats are likely to be realized, or about the expected
benefit of router algorithms for managing such flows.
There is an issue about the appropriate granularity of a "flow".
There are a few "natural" answers: 1) a TCP or UDP connection (source
address/port, destination address/port); 2) a source/destination host
pair; 3) a given source host or a given destination host. We would
guess that the source/destination host pair gives the most
appropriate granularity in many circumstances. However, it is
possible that different vendors/providers could set different
granularities for defining a flow (as a way of "distinguishing"
themselves from one another), or that different granularities could
be chosen for different places in the network. It may be the case
that the granularity is less important than the fact that we are
dealing with more unresponsive flows at *some* granularity. The
granularity of flows for congestion management is, at least in part,
a policy question that needs to be addressed in the wider IETF
community.
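The three "natural" granularities above can be made concrete as flow-key extractors; the packet representation and field names below are illustrative assumptions, not an API from this document:

```python
def flow_key(pkt, granularity):
    # pkt is a dict with illustrative field names; a real router would
    # parse these values from the IP and TCP/UDP headers.
    if granularity == "connection":    # (1) a TCP or UDP connection
        return (pkt["src"], pkt["sport"], pkt["dst"], pkt["dport"])
    if granularity == "host_pair":     # (2) a source/destination host pair
        return (pkt["src"], pkt["dst"])
    if granularity == "src_host":      # (3) a given source host
        return (pkt["src"],)
    raise ValueError("unknown granularity: %s" % granularity)
```

The coarser the granularity, the more traffic aggregates under one key, which changes how unresponsive behavior is attributed and accounted for.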

5. CONCLUSIONS AND RECOMMENDATIONS

This discussion leads us to make the following recommendations to the
IETF and to the Internet community as a whole.
o RECOMMENDATION 1:
Internet routers should implement some active queue management
mechanism to manage queue lengths, reduce end-to-end latency,
reduce packet dropping, and avoid lock-out phenomena within the
Internet.
The default mechanism for managing queue lengths to meet these
goals in FIFO queues is Random Early Detection (RED) [RED93].
Unless a developer has reasons to provide another equivalent
mechanism, we recommend that RED be used.
o RECOMMENDATION 2:
It is urgent to begin or continue research, engineering, and
measurement efforts contributing to the design of mechanisms to
deal with flows that are unresponsive to congestion notification
or are responsive but more aggressive than TCP.
Although there has already been some limited deployment of RED in the
Internet, we may expect that widespread implementation and deployment
of RED in accordance with Recommendation 1 will expose a number of
engineering issues. For example, such issues may include:
implementation questions for Gigabit routers, the use of RED in layer
2 switches, and the possible use of additional considerations, such
as priority, in deciding which packets to drop.
We again emphasize that the widespread implementation and deployment
of RED would not, in and of itself, achieve the goals of
Recommendation 2.
Widespread implementation and deployment of RED will also enable the
introduction of other new functionality into the Internet. One
example of an enabled functionality would be the addition of explicit
congestion notification [Ramakrishnan97] to the Internet
architecture, as a mechanism for congestion notification in addition
to packet drops. A second example of new functionality would be
implementation of queues with packets of different drop priorities;
packets would be transmitted in the order in which they arrived, but
during times of congestion packets of the lower drop priority would
be preferentially dropped.
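One way to sketch that second example is a FIFO whose admission test evicts the lowest drop-priority packet under congestion; the queue limit, list-based queue, and field names are illustrative assumptions:

```python
def enqueue(queue, pkt, limit):
    # FIFO transmission order is preserved; under congestion the packet
    # with the lowest drop priority (queued or arriving) is dropped.
    # "prio" and the plain-list queue are illustrative only.
    if len(queue) < limit:
        queue.append(pkt)
        return None                        # accepted; nothing dropped
    lowest = min(range(len(queue)), key=lambda i: queue[i]["prio"])
    if queue[lowest]["prio"] < pkt["prio"]:
        victim = queue.pop(lowest)         # evict lower-priority packet
        queue.append(pkt)
        return victim
    return pkt                             # arriving packet is dropped
```

The surviving packets are still transmitted in arrival order; only the choice of which packet to drop is priority-aware.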

6. References

[Bolot94] Bolot, J.-C., Turletti, T., and Wakeman, I., Scalable
Feedback Control for Multicast Video Distribution in the Internet,
ACM SIGCOMM '94, Sept. 1994.

[Demers90] Demers, A., Keshav, S., and Shenker, S., Analysis and
Simulation of a Fair Queueing Algorithm, Internetworking: Research
and Experience, Vol. 1, 1990, pp. 3-26.