   congestion control is one based on round-trip times.  The rationale
   for it is as follows:  control mechanisms must adapt to changes in
   Internet congestion as quickly as possible.  Even on an uncongested
   path, changed conditions will not be detected by the sender faster
   than a round-trip time.  Likewise, the effect of a sending
   end-system's control will not be seen, either along the path or at
   the end systems, in less than a round-trip time.  For the control
   mechanism to be
   adaptive, new information on the path is needed before making a
   modification to the control exerted.  The statistics and metrics used
   in congestion control must be able to provide information to the
   control mechanism so that it can make an informed decision.
   Transient information, which may be obsolete before the end-system
   makes a change, should be avoided.  This implies that the
   monitoring/estimating interval should last one or more round trips.
   The requirements described here give bounds on the interval (see the
   sketch after the list):

      How short an interval:  not so short that obsolete (transient)
      information is used for control;

      How long:  not longer than the period at which the end-system
      makes changes.
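
   As an illustration only (the names below are hypothetical and not
   part of this survey), these bounds amount to clamping a candidate
   monitoring/estimating interval, sketched here in Python:

      def bounded_interval(candidate, rtt_estimate, end_system_period):
          """Clamp a candidate estimation interval to the bounds above.

          Both arguments are assumed, gateway-side estimates."""
          # Not so short that only transient, soon-obsolete information
          # is used for control.
          interval = max(candidate, rtt_estimate)
          # Not longer than the period at which the end-system makes
          # changes to the control it exerts.
          return min(interval, end_system_period)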

   But, from the point of view of the gateway congestion control policy,
   what is a round-trip time?  If all the users of a given gateway have
   the same path through the Internet, they also have the same round-
   trip time through the gateway.  But this is rarely the case.

   A meaningful interval must be found for users with both short and
   long paths.  Two approaches have been suggested for estimating this
   dynamically:  the queue regeneration cycle and frequency analysis.

   Use of the queue regeneration cycle has been described as part of the
   Congestion Indication policy.  The time period used for averaging
   here begins when a resource goes from the idle to the busy state.
   The basic interval for averaging is a "regeneration cycle", which
   consists of a busy interval followed by an idle interval.  Because
   an average based on a
   single previous regeneration may become old information, the
   recommendation in [JRC87] is to average over the sum of two
   intervals, that is, the previous (busy and idle) period, and the time
   since the beginning of the current busy period.
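
   A minimal sketch of this averaging rule, in Python, is given below.
   It is hypothetical and is not taken from [JRC87]; it assumes the
   gateway can accumulate the time-weighted queue length (the area
   under the queue-length curve) as the queue changes.

      class RegenerationAverager:
          """Average queue length over the previous (busy + idle)
          cycle plus the portion of the current cycle so far."""

          def __init__(self):
              self.prev_area = 0.0    # queue-length integral, previous cycle
              self.prev_len = 0.0     # duration of the previous cycle
              self.cur_area = 0.0     # integral over the current cycle so far
              self.cycle_start = 0.0  # when the current busy period began
              self.last_t = 0.0       # time of the last sample
              self.qlen = 0           # queue length since the last sample

          def sample(self, now, qlen):
              """Record the queue length observed at time `now`."""
              self.cur_area += self.qlen * (now - self.last_t)
              if self.qlen == 0 and qlen > 0:
                  # Idle -> busy: the previous regeneration cycle is
                  # complete and a new one begins.
                  self.prev_area = self.cur_area
                  self.prev_len = now - self.cycle_start
                  self.cur_area, self.cycle_start = 0.0, now
              self.last_t, self.qlen = now, qlen

          def average(self, now):
              """Average over the previous cycle plus the time since
              the start of the current busy period."""
              span = self.prev_len + (now - self.cycle_start)
              area = (self.prev_area + self.cur_area
                      + self.qlen * (now - self.last_t))
              return area / span if span > 0 else 0.0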

   If the gateway users are window-based transport entities, it is
   possible to see how the regeneration interval responds to their
   round-trip times.  If a user with a long round-trip time has the
   dominant traffic, the queue length may be zero only when that user is
   awaiting acknowledgements.  Then the users with short paths will
   receive gateway congestion information that is averaged over several
   of their round-trip times.  If the short path traffic dominates the
   activity in the gateway, i.e., the connections with shorter round-
   trip times are the dominant users of the gateway resources, then the
   regeneration interval is shorter and the information communicated to
   them can be more timely. In this case, users with longer paths
   receive, in one of their round-trip times, multiple samples of the
   dominant traffic; the end-system averaging is based on each
   individual user's intervals, so that these multiple samples are
   integrated appropriately for these connections with longer paths.

   The use of frequency analysis has been described by [Jac89]. In this
   approach, the gateway congestion control is done at intervals based
   on spectral analysis of the traffic arrivals.  It is possible for
   users to have round-trip times close to each other but out of phase
   with each other; a spectral analysis algorithm can detect this.
   Otherwise, if multiple round-trip times are significant, multiple
   intervals will be identified; either one of these will be
   predominant, or several will be comparable.  A still-difficult
   problem in designing algorithms for this technique is the likelihood
   of "locking" onto the frequency of low-intensity periodic traffic,
   such as routing updates.
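
   A short, hypothetical sketch of a binning-and-transform step (not
   the algorithm of [Jac89]) is shown below in Python, using NumPy's
   FFT; strong peaks in the spectrum of the binned arrival counts
   suggest candidate control intervals.

      import numpy as np

      def dominant_periods(arrival_times, bin_width, top_n=3):
          """Estimate dominant periodicities of an arrival process from
          a discrete Fourier transform of binned arrival counts."""
          arrivals = np.asarray(arrival_times, dtype=float)
          span = arrivals.max() - arrivals.min()
          nbins = max(2, int(np.ceil(span / bin_width)))
          counts, _ = np.histogram(arrivals, bins=nbins)
          # Remove the mean so the zero-frequency term does not dominate.
          spectrum = np.abs(np.fft.rfft(counts - counts.mean()))
          freqs = np.fft.rfftfreq(nbins, d=bin_width)
          # Strongest non-zero frequencies.  Note that sharply periodic
          # but low-intensity traffic (e.g., routing updates) can still
          # produce a strong peak: the "locking" hazard mentioned above.
          order = np.argsort(spectrum[1:])[::-1][:top_n] + 1
          return [1.0 / freqs[i] for i in order]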

2.2  Power and its Relationship to the Operating Point

   Performance goals for a congestion control/avoidance strategy embody
   a conflict in that they call for as high a throughput as possible,
   with as little delay as possible.  A measure that is often used to
   reflect the tradeoff between these goals is power, the ratio of
   throughput to delay.  We would like to maximize the value of power
   for a given resource.  In the standard expression for power,

     Power = (Throughput^alpha)/Delay

   the exponent alpha is chosen for throughput, based on the relative
   emphasis placed on throughput versus delay: if throughput is more
   important, then a value of alpha greater than one is chosen.  If
   throughput and delay are equally important (e.g., both bulk transfer
   traffic and interactive traffic are equally important), then alpha
   equal to one is chosen.  The operating point where power is
   maximized is the "knee" in the throughput and delay curves, and it
   is desirable that the operating point of the resource be driven
   towards it.  A useful property of power is that it decreases whether
   the resource is under- or over-utilized relative to the knee.
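
   As a hedged numerical illustration (the single-server queueing model
   here is an assumption, not something specified in this survey): for
   a queue with unit service rate and utilization U, delay grows as
   1/(1 - U), so with alpha equal to one, power is proportional to
   U*(1 - U), which peaks at 50% utilization; that is the knee for this
   model.  The short Python fragment below shows the peak.

      def power(utilization, alpha=1.0):
          """Power for an assumed single-server queue with unit service
          rate: throughput = utilization, delay = 1/(1 - utilization)."""
          throughput = utilization
          delay = 1.0 / (1.0 - utilization)
          return throughput ** alpha / delay

      # Power rises until the knee (50% utilization for alpha = 1),
      # then falls as queueing delay grows.
      print([round(power(u), 3) for u in (0.25, 0.5, 0.75, 0.9)])
      # -> [0.188, 0.25, 0.188, 0.09]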

   In an internetwork comprising nodes and links of diverse speeds and
   utilization, bottlenecks or concentrations of demand may form.  Any
   particular user can see a single bottleneck, which is the slowest or
   busiest link or gateway in the path (or possibly identical "balanced"
   bottlenecks).  The throughput that the path can sustain is limited by
   the bottleneck.  The delay for packets through a particular path is
   determined by the service times and queueing at each individual hop.
   The queueing delay is dominated by the queueing at the bottleneck
   resource(s).  The contribution to the delay over other hops is
   primarily the service time, although the propagation delay over
   certain hops, such as a satellite link, can be significant.  We would
   like to operate all shared resources at their knee and maximize the
   power of every user's bottleneck.

   The above goal underscores the significance of gateway congestion
   control.  If techniques can be found to operate gateways at their
   resource knee, they can improve Internet performance broadly.

2.3  Fairness

   We would like gateways to allocate resources fairly to users.  A
   concept of fairness is only relevant when multiple users share a
   gateway and their total demand is greater than its capacity.  If
   demands were equal, a fair allocation of the resource would be to
   provide an equal share to each user.  But even over short intervals,
   demands are not equal.  Identifying the fair share of the resource
   for each user becomes hard.  Once it has been identified, it is
   desirable to allocate at least this fair share to each user.
   However, not all
   users may take advantage of this allocation.  The unused capacity can
   be given to other users.  The resulting final allocation is termed a
   maximally fair allocation.  [RJC87] gives a quantitative method for
   comparing the allocation by a given policy to the maximally fair
   allocation.
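
   The maximally fair allocation can be illustrated with the familiar
   max-min procedure; the Python sketch below is a hypothetical
   illustration and is not the quantitative method of [RJC87].  Users
   demanding less than an equal share keep their full demand, and the
   unused capacity is repeatedly redistributed among the rest.

      def max_min_allocation(demands, capacity):
          """Max-min fair shares for a dict of user -> demand."""
          alloc = {user: 0.0 for user in demands}
          remaining = dict(demands)
          while remaining and capacity > 0:
              share = capacity / len(remaining)
              satisfied = [u for u, d in remaining.items() if d <= share]
              if not satisfied:
                  # No remaining demand fits within the equal share:
                  # split what is left evenly and stop.
                  for u in remaining:
                      alloc[u] += share
                  break
              for u in satisfied:
                  alloc[u] += remaining.pop(u)
                  capacity -= alloc[u]
          return alloc

      # Capacity 10 shared by demands 2, 4 and 8 yields 2, 4 and 4.
      print(max_min_allocation({"a": 2, "b": 4, "c": 8}, 10))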

   It is known that the Internet environment has heterogeneous transport
   entities, which do not follow the same congestion control policies
   (our definition of cooperating transports).  As a result, the
   controls exerted by a gateway may not affect all users, and the
   congestion control policy may have unequal effects.  Is "fairness"
   obtainable in such a
   heterogeneous community?  In Fair Queueing, transport entities with
   differing congestion control policies can be insulated from each
   other and each given a set share of gateway bandwidth.

   It is important to realize that since Internet gateways cannot refuse
   new users, fairness in gateway congestion control can lead to all
   users receiving small (sub-divided) amounts of the gateway resources
   inadequate to meet their performance requirements.  None of the
   policies described in this paper currently addresses this.  In
   addition, there may be policy reasons for unequal allocation of the
   gateway resources; this has been addressed by Bit-Round Fair
   Queueing.
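
   A much-simplified Python sketch of the bit-round idea follows; it is
   hypothetical, and it approximates the round number of the original
   algorithm with the finish tag of the most recently sent packet
   rather than tracking the bit-by-bit round exactly.  The per-flow
   weights show how deliberately unequal shares could be configured.

      import heapq

      class BitRoundFairQueue:
          """Packets are stamped with a finish tag, as if the link
          served one bit per flow per round, and sent in tag order."""

          def __init__(self):
              self.heap = []           # (finish_tag, seq, flow, length)
              self.flow_finish = {}    # last finish tag issued per flow
              self.virtual_time = 0.0  # stand-in for the round number
              self.seq = 0             # tie-breaker for equal tags

          def enqueue(self, flow, length, weight=1.0):
              start = max(self.virtual_time,
                          self.flow_finish.get(flow, 0.0))
              finish = start + length / weight
              self.flow_finish[flow] = finish
              heapq.heappush(self.heap, (finish, self.seq, flow, length))
              self.seq += 1

          def dequeue(self):
              finish, _, flow, length = heapq.heappop(self.heap)
              self.virtual_time = finish
              return flow, length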

2.4  Network Management

   Network performance goals may be assessed by measurements in either
   the end-system or gateway frame of reference.  Performance goals are
   often resource-centered, and the global performance of "the network"
   is not only difficult to measure but also difficult to define.
   Resource-centered metrics are more easily obtained and do not
   require synchronization.  [Jaf81] shows that resource-centered
   metrics are appropriate for use in optimizing power.

   For the goal of developing effective gateway congestion handling, it
   would be valuable to define Management Information Base (MIB)
   objects useful for evaluating gateway congestion.  The
   reflections on the control interval described above should be applied
   when network management applications are designed for this purpose.
   In particular, obtaining an instantaneous queue length from the
   managed gateway is not meaningful for the purposes of congestion
   management.
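
   As a hypothetical example (not a proposed MIB object): a gateway
   could instead export a queue length averaged over a configurable
   interval of one or more round-trip times, along the lines of the
   Python sketch below.

      class AveragedQueueLengthGauge:
          """Time-weighted average queue length over an interval,
          instead of an instantaneous sample."""

          def __init__(self, interval):
              self.interval = interval  # e.g., one or more round trips
              self.samples = []         # (timestamp, queue_length)

          def record(self, now, queue_length):
              self.samples.append((now, queue_length))
              # Discard history older than the averaging interval.
              while (len(self.samples) > 1
                     and self.samples[1][0] <= now - self.interval):
                  self.samples.pop(0)

          def read(self, now):
              """Time-weighted average of the retained samples."""
              if not self.samples:
                  return 0.0
              area = 0.0
              prev_t, prev_q = self.samples[0]
              for t, q in self.samples[1:]:
                  area += prev_q * (t - prev_t)
                  prev_t, prev_q = t, q
              area += prev_q * (now - prev_t)
              span = now - self.samples[0][0]
              return area / span if span > 0 else float(prev_q)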

3.  Gateway Congestion Control Policies

   A handful of approaches to dealing with congestion in the gateway
   have been proposed.  Some of these are Source Quench, Random
   Drop, Congestion Indication, Selective Feedback Congestion
   Indication, Fair Queueing, and Bit-Round Fair Queueing.  They differ
   in whether they use a control message, and indeed, whether they view
   control of the end-systems as necessary, but none of them in itself
   lowers the demand of users and consequent load on the network.  End-
   system policies that reduce demand in conjunction with gateway
   congestion control are described in Section 4.

3.1  Source Quench

   The method of gateway congestion control currently used in the
   Internet is the Source Quench message of the RFC-792 [Pos81a]
   Internet Control Message Protocol (ICMP). When a gateway responds to
   congestion by dropping datagrams, it may send an ICMP Source Quench
   message to the source of the dropped datagram.  This is a congestion
   recovery policy.

   The Gateway Requirements RFC, RFC-1009 [GREQ87], specifies that
   gateways should only send Source Quench messages with a limited
   frequency, to conserve CPU resources during times of heavy load.
   We note that operating the gateway for long periods under such loaded
   conditions should be averted by a gateway congestion control policy.
   A revised Gateway Requirements RFC is being prepared by the IETF.
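
   The limited-frequency rule might be realized with something like the
   hypothetical token-bucket sketch below, in Python; RFC-1009 requires
   only that the frequency be limited and does not prescribe a
   mechanism, and the rate and burst values here are illustrative.

      import time

      class SourceQuenchLimiter:
          """Token-bucket cap on how often a gateway originates ICMP
          Source Quench messages."""

          def __init__(self, rate_per_sec=10.0, burst=5.0):
              self.rate = rate_per_sec
              self.burst = burst
              self.tokens = burst
              self.last = time.monotonic()

          def may_send(self):
              now = time.monotonic()
              elapsed = now - self.last
              self.tokens = min(self.burst,
                                self.tokens + elapsed * self.rate)
              self.last = now
              if self.tokens >= 1.0:
                  self.tokens -= 1.0
                  return True
              return False  # suppress this Source Quench to save CPU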

   Another significant drawback of the Source Quench policy is that its
   details are discretionary, or, alternatively, that the policy is
   really a family of varied policies.  Major Internet gateway
   manufacturers have implemented a variety of source quench
   frequencies.  It is impossible for an end-system, on receiving a
   Source Quench, to be certain of the circumstances in which it was
   issued.  This makes the needed end-system response problematic:  is
   the Source Quench an indication of heavy congestion, approaching
   congestion, a burst causing massive overload, or a burst slightly
   exceeding reasonable load?

   To the extent that gateways drop the last arrived datagram on
   overload, Source Quench messages may be distributed unfairly.  This
   is because the position at the end of the queue may be occupied
   disproportionately often by the packets of low-demand, intermittent
   users, since such users do not send the regular bursts of packets
   that can preempt most of the queue space.

   [Fin89] developed algorithms for when to issue Source Quench and for
   responding to it with a rate-reduction in the IP layer on the sending