approximately two standard deviations without the timer going off in
a false alarm. The same method of estimation may be applicable to
TP4. The procedure is:
      Error     = Measured  - Estimated
      Estimated = Estimated + Gain_1 * Error
      Deviation = Deviation + Gain_2 * (|Error| - Deviation)
      Timeout   = Estimated + 2 * Deviation

Where: Gain_1, Gain_2 are gain factors.
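As a sketch only, the estimator can be written as follows in Python.
The gain values of 1/8 and 1/4, the initial estimate, and the class
and method names are assumptions for illustration; the survey does not
specify them.

      class RetransmitTimer:
          """Retransmission timeout estimator following the procedure
          above.  Gain_1 = 1/8 and Gain_2 = 1/4 are assumed values;
          all times are in seconds."""

          def __init__(self, initial_estimate=1.0, gain_1=0.125, gain_2=0.25):
              self.estimated = initial_estimate   # smoothed round-trip time
              self.deviation = 0.0                # smoothed mean deviation
              self.gain_1 = gain_1
              self.gain_2 = gain_2

          def update(self, measured):
              """Fold one round-trip time measurement into the estimate."""
              error = measured - self.estimated
              self.estimated += self.gain_1 * error
              self.deviation += self.gain_2 * (abs(error) - self.deviation)

          def timeout(self):
              """Timeout set roughly two mean deviations above the estimate."""
              return self.estimated + 2 * self.deviation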
4.1 Response to No Policy in Gateway
Since packets must be dropped during congestion because of the finite
buffer space, feedback of congestion to the users exists even when
there is no gateway congestion policy. Dropping the packets is an
attempt to recover from congestion, though it needs to be noted that
congestion collapse is not prevented by packet drops if end-systems
accelerate their sending rate in these conditions. The accurate
detection of congestive loss by all retransmission timers in the
end-systems is extremely important for gateway congestion recovery.
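As an aside, a retransmission timer that backs off while losses
persist avoids the acceleration described above.  A minimal sketch,
assuming the common doubling policy with an upper bound (the factor of
two, the cap, and the function name are assumptions, not taken from
this survey):

      def backed_off_timeout(base_timeout, retransmissions, cap=64.0):
          """Back the retransmission timeout off exponentially.

          base_timeout    -- timeout from an estimator such as the one
                             above, in seconds
          retransmissions -- number of times this segment has been resent
          cap             -- assumed upper bound on the timeout (seconds)
          """
          return min(cap, base_timeout * (2 ** retransmissions))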
4.2 Response to Source Quench
Given that a Source Quench message has ambiguous meaning, but usually
is issued for congestion recovery, the response of Slow-start to a
Source Quench is to return to the beginning of the cycle of increase.
This is an early response, since the Source Quench on overflow will
also lead to a retransmission timeout later.
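As a sketch only, the Slow-start response described here amounts to
the following, where state.cwnd is an assumed name for the congestion
window measured in packets:

      def on_source_quench(state):
          """Respond to an ICMP Source Quench by returning to the
          beginning of the Slow-start cycle of increase, i.e. the
          congestion window drops back to one packet."""
          state.cwnd = 1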
4.3 Response to Random Drop
The end-system's view of Random Drop is the same as its view of loss
caused by overflow at the gateway. This is a drawback of the use of
packet drops as congestion feedback for congestion avoidance: the
decrease policy on congestion feedback cannot be made more drastic
for overflows than for the drops the gateway does for congestion
avoidance. Slow-start responds rapidly to gateway feedback. In a
case where the users are cooperating (all Slow-start), a desired
outcome would be that this responsiveness would lead quickly to a
decreased probability of drop. There would be, as an ideal, a self-
adjusting overall amount of control.
4.4 Response to Congestion Indication
Since the Congestion Indication mechanism attempts to keep gateways
uncongested, cooperating end-system congestion control policies need
not reduce demand as much as with other gateway policies. The
difference between the Slow-start response to packet drops and the
End-System Congestion Indication response to Congestion Experienced
Bits is primarily in the decrease policy. Slow-start decreases the
window to one packet on a retransmission timeout. End-System
Congestion Indication decreases the window by a fraction (for
instance, to 7/8 of the original value), when the Congestion
Experienced Bit is set in half of the packets in a sample spanning
two round-trip times (two windows full).
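A sketch of the two decrease policies side by side; windows are
counted in packets, the 7/8 factor and the half-of-the-sample test
come from the text above, and the function names and integer
arithmetic are illustrative assumptions:

      def slow_start_decrease(cwnd):
          """Slow-start decrease policy: on a retransmission timeout
          the window collapses to one packet."""
          return 1

      def congestion_indication_decrease(cwnd, sample_bits):
          """End-System Congestion Indication decrease policy.

          sample_bits -- Congestion Experienced Bit values (0 or 1)
          from a sample spanning two round-trip times (two windows
          full of acknowledged packets).
          """
          if sample_bits and sum(sample_bits) * 2 >= len(sample_bits):
              return max(1, (cwnd * 7) // 8)   # decrease to 7/8 of the window
          return cwnd                          # otherwise leave the window alone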
4.5 Response to Fair Queueing
A Fair Queueing policy may issue control indications, as in the
simulated Bit-Round Fair Queueing with DEC Bit, or it may depend only
on the passive effects of the queueing. When the passive control is
the main effect (perhaps because most users are not responsive to
control indications), the behavior of retransmission timers will be
very important. If retransmitting users send more packets in
response to increases in their delay and drops, Fair Queueing will be
prone to degraded performance, though collapse (zero throughput for
all users) may be prevented for a longer period of time.
5. Conclusions
The impact of users with excessive demand is a driving force as
proposed gateway policies come closer to implementation. Random Drop
and Congestion Indication can be fair only if the transport entities
sharing the gateway are all cooperative and reduce demand as needed.
Within some portions of the Internet, good behavior of end-systems
eventually may be enforced through administration. But for various
reasons, we can expect non-cooperating transports to be a persistent
population in the Internet. Congestion avoidance mechanisms will not
be deployed all at once, even if they are adopted as standards.
Without enforcement, or even with penalties for excessive demand,
some end-systems will never implement congestion avoidance
mechanisms.
Since it is outside the context of any of the gateway policies, we
mention here a suggestion, made experimentally by [Jac89], for a
relatively small-scale response to users that implement especially
aggressive policies.  It would implement a low priority queue, to
which the majority of traffic is not routed.  The
candidate traffic to be queued there would be identified by a cache
of recent recipients of whatever control indications the gateway
policy makes. Remaining in the cache over multiple control intervals
is the criterion for being routed to the low priority queue. In
approaching or established congestion, the bandwidth given to the
low-service queue is decreased.
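A sketch of the bookkeeping this suggestion requires; the flow key,
the threshold of three intervals, and all names are assumptions for
illustration, since [Jac89] is an experimental suggestion rather than
a specification:

      class AggressiveFlowCache:
          """Cache of recent recipients of the gateway's control
          indications.  Remaining in the cache over several control
          intervals routes a flow to the low priority queue."""

          def __init__(self, penalty_intervals=3):
              self.penalty_intervals = penalty_intervals
              self.seen_this_interval = set()  # flows indicated this interval
              self.consecutive = {}            # flow -> consecutive intervals

          def record_indication(self, flow):
              """Note that `flow` received a control indication (a drop,
              a Source Quench, or a Congestion Experienced Bit)."""
              self.seen_this_interval.add(flow)

          def end_of_interval(self):
              """Close out one control interval."""
              for flow in self.seen_this_interval:
                  self.consecutive[flow] = self.consecutive.get(flow, 0) + 1
              # flows with no new indication drop out of the cache
              for flow in list(self.consecutive):
                  if flow not in self.seen_this_interval:
                      del self.consecutive[flow]
              self.seen_this_interval.clear()

          def use_low_priority_queue(self, flow):
              """True if `flow` should be routed to the low priority queue."""
              return self.consecutive.get(flow, 0) >= self.penalty_intervals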
The goal of end-system cooperation itself has been questioned. As
[She89] points out, it is difficult to define. A TCP implementation
that retransmits before it has determined that a loss has been
indicated, and does so in a Go-Back-N manner, is clearly
non-cooperating. On the other
hand, a transport entity with selective acknowledgement may demand
more from the gateways than TCP, even while responding to congestion
in a cooperative way.
Fair Queueing maintains its control of allocations regardless of the
end-system congestion avoidance policies. [Nag85] and [DKS89] argue
that the extra delays and congestion drops that result from
misbehavior could work to enforce good end-system policies. Are the
rewards and penalties less sharply defined when one or more
misbehaving systems degrade the performance of the whole gateway?
While the tax on all users imposed by the "over-users" is much less
than in gateways with other policies, how can it be made sufficiently
low?
In the sections on Selective Feedback Congestion Indication and Bit-
Round Fair Queueing we have pointed out that more needs to be done on
two particular fronts:
How can increased computational overhead be avoided?
The allocation of resources to source-destination pairs is, in
many scenarios, unfair to individual users. Bit-Round Fair
Queueing offers a potential administrative remedy, but even if it
is applied, how should the unequal allocations be propagated
through multiple gateways?
The first question has been taken up by [McK90].
Since Selective Feedback Congestion Indication (or Congestion
Indication used with Fair Queueing) uses a network bit, its use in
the Internet requires protocol support; the transition of some
portions of the Internet to OSI protocols may make such a change
attractive in the future. The interactions between heterogeneous
congestion control policies in the Internet will need to be explored.
The goals of Random Drop Congestion Avoidance are presented in this
survey, but without any claim that the problems of this policy can be
solved. These goals themselves, of uniform, dynamic treatment of
users (streams/flows), of low overhead, and of good scaling
characteristics in large and loaded networks, are significant.
Appendix: Congestion and Connection-oriented Subnets
This section presents a recommendation for gateway implementation in
an area that unavoidably interacts with gateway congestion control:
the impact of connection-oriented subnets, such as those based on
X.25.
The need to manage a connection-oriented service (X.25) in order to
transport datagram traffic (IP) can cause problems for gateway
congestion control. Being a pure datagram protocol, IP provides no
information delimiting when a pair of IP entities need to establish a
session between themselves. The solution involves compromise among
delay, cost, and resources. Delay is introduced by call
establishment when a new X.25 SVC (Switched Virtual Circuit) needs to
be established, and also by queueing delays for the physical line.
Cost includes any charges by the X.25 network service provider.
Besides the resources most gateways have (CPU, memory, links), a
maximum supported or permitted number of virtual circuits may also be
in contention.
SVCs are established on demand when an IP packet needs to be sent and
there is no SVC established or pending establishment to the
destination IP entity. Optionally, when cost considerations and a
sufficient number of unused virtual circuits allow, redundant SVCs
may be established between the same pair of IP entities. Redundant
SVCs can have the effect of improving performance when coping with
large end-to-end delay, small maximum packet sizes and small maximum
packet windows. It is generally preferred, though, to negotiate large
packet sizes and packet windows on a single SVC. Redundant SVCs must
especially be discouraged when virtual circuit resources are small
compared with the number of destination IP entities among the active
users of the gateway.
SVCs are closed after some period of inactivity indicates that
communication may have been suspended between the IP entities. This
timeout is significant in the operation of the interface. Setting
the value too low can result in closing of the SVC even though the
traffic has not stopped. This results in potentially large delays
for the packets which reopen the SVC and may also incur charges
associated with SVC calls. Also, clearing of SVCs is, by definition,
nongraceful. If an SVC is closed before the last packets are
acknowledged, there is no guarantee of delivery. Packet losses are
introduced by this destructive close independent of gateway traffic
and congestion.
When a new circuit is needed and all available circuits are currently
in use, there is a temptation to pick one to close (possibly using
some Least Recently Used criterion). But if connectivity demands are
larger than available circuit resources, this strategy can lead to a
type of thrashing where circuits are constantly being closed and
reopened. In the worst case, a circuit is opened, a single packet
sent and the circuit closed (without a guarantee of packet delivery).
To counteract this, each circuit should be allowed to remain open a
minimum amount of time.
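A sketch of that guard, assuming each circuit record carries the time
it was opened and the time it was last used (the names and the use of
wall-clock time are assumptions):

      import time

      def pick_circuit_to_close(circuits, min_open_time):
          """Choose the least recently used circuit, but only if it has
          been open for at least `min_open_time` seconds; otherwise
          close nothing, to avoid thrashing.

          `circuits` is assumed to be a list of objects with
          `opened_at` and `last_used` timestamps (seconds)."""
          now = time.time()
          eligible = [c for c in circuits if now - c.opened_at >= min_open_time]
          if not eligible:
              return None                                  # wait rather than thrash
          return min(eligible, key=lambda c: c.last_used)  # LRU among eligible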
One possible SVC strategy is to dynamically change the time a circuit
will be allowed to remain open based on the number of circuits in
use. Three administratively controlled variables are used, a minimum
time, a maximum time and an adaptation factor in seconds per
available circuit. In this scheme, a circuit is closed after it has
been idle for a time period equal to the minimum plus the adaptation
factor times the number of available circuits, limited by the maximum
time. By administratively adjusting these variables, one has
considerable flexibility in adjusting the SVC utilization to meet the
needs of a specific gateway.
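A sketch of the calculation, with times in seconds; the numbers in the
example comment are illustrative administrative settings, not
recommendations from this survey:

      def svc_idle_timeout(min_time, max_time, adaptation_factor,
                           available_circuits):
          """Idle time after which an SVC is closed.

          min_time           -- administratively set minimum idle time
          max_time           -- administratively set maximum idle time
          adaptation_factor  -- seconds per available (unused) circuit
          available_circuits -- number of circuits currently not in use
          """
          return min(max_time,
                     min_time + adaptation_factor * available_circuits)

      # Hypothetical settings: a 30 second floor, a 600 second ceiling,
      # 5 seconds per free circuit, and 10 circuits currently free:
      # svc_idle_timeout(30, 600, 5, 10) -> 80 seconds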
Acknowledgements
This paper is the outcome of discussions in the Performance and
Congestion Control Working Group between April 1988 and July 1989.
Both PCC WG and plenary IETF members gave us helpful reviews of
earlier drafts. Several of the ideas described here were contributed
by the members of the PCC WG. The Appendix was written by Art
Berggreen. We are particularly thankful also to Van Jacobson, Scott
Shenker, Bruce Schofield, Paul McKenney, Matt Mathis, Geof Stone, and
Lixia Zhang for participation and reviews.
References