rfc2990.txt
distinguished service path to the destination? No mechanisms
exist within either of these architectures to query the
network for the potential to support a specific service profile. Such
a query would need to examine a number of candidate paths, rather
than simply examining the lowest metric routing path, so that this
discovery function is likely to be associated with some form of QoS
routing functionality.
From this perspective, there is still further refinement that may be
required in the model of service discovery and the associated task of
resource reservation.
3.4 QoS Routing and Resource Management
To date, QoS routing has been developed at some distance from the
development of QoS architectures. The implicit assumption within
the current QoS architectural models is that the best effort routing
path will be used for both best effort traffic and distinguished
service traffic.
There is no explicit architectural option to allow the network
service path to be aligned along other than the single best routing
metric path, so that available network resources can be efficiently
applied to meet service requests. Considerations of maximizing
network efficiency would imply that some form of path selection is
necessary within a QoS architecture, allowing the set of service
requirements to be optimally supported within the network's aggregate
resource capability.
In addition to path selection, SPF-based interior routing protocols
allow for the flooding of link metric information across all network
elements. This mechanism appears to be a productive direction to
provide the control-level signaling between the interior of the
network and the network admission elements, allowing the admission
Huston Informational [Page 10]
RFC 2990 Next Steps for QoS Architecture November 2000
systems to admit traffic based on current resource availability
rather than on necessarily conservative statically defined admission
criteria.
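The admission decision implied above can be sketched as follows. This is an illustrative sketch only, assuming a per-link view of capacity and current load learned from the flooded metrics; the dictionary fields and the `admit` helper are hypothetical, not part of any defined protocol:

```python
# Sketch: admission control driven by dynamically flooded link-state
# metrics rather than a static admission table. Field names assumed.

def admit(path_links, requested_rate):
    """Admit a flow only if every link on the candidate path has
    enough residual capacity for the requested rate."""
    residual = min(l["capacity"] - l["load"] for l in path_links)
    return residual >= requested_rate

path = [
    {"capacity": 100.0, "load": 40.0},   # Mbit/s, learned via flooding
    {"capacity": 100.0, "load": 85.0},   # bottleneck: 15 Mbit/s free
]
print(admit(path, 10.0))   # fits within the bottleneck's residual
print(admit(path, 20.0))   # exceeds it: reject
```

The point of the dynamic metrics is visible in the second call: a static, conservatively provisioned admission table could not distinguish these two cases at the network edge.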
There is a more fundamental issue here concerning resource management
and traffic engineering. The approach of single path selection with
static load characteristics does not match a networked environment
which contains a richer mesh of connectivity and dynamic load
characteristics. In order to make efficient use of a rich
connectivity mesh, it is necessary to be able to direct traffic with
a common ingress and egress point across a set of available network
paths, spreading the load across a broader collection of network
links. In its basic form, this is essentially a traffic engineering
problem. To support this function it is necessary to calculate per-
path dynamic load metrics, and allow the network's ingress system the
ability to distribute incoming traffic across these paths in
accordance with some model of desired traffic balance. To apply this
approach to a QoS architecture would imply that each path has some
form of vector of quality attributes, and incoming traffic is
balanced across a subset of available paths where the quality
attribute of the traffic is matched with the quality vector of each
available path. This augmentation to the semantics of the traffic
engineering is matched by a corresponding shift in the calculation
and interpretation of the path's quality vector. In this approach
what needs to be measured is not the path's resource availability
level (or idle proportion), but the path's potential to carry
additional traffic at a certain level of quality. This potential
metric is one that allows existing lower priority traffic to be
displaced to alternative paths. The path's quality metric can be
interpreted as a metric describing the displacement capability of the
path, rather than a resource availability metric.
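The path selection step this paragraph describes can be sketched as below. The quality-vector fields and the displacement model here are assumptions for illustration; the document does not define a concrete encoding:

```python
# Illustrative sketch: match a traffic class's quality attributes
# against each path's quality vector, where the per-priority figure
# is a displacement capability (how much lower-priority traffic the
# path could shed), not an idle-capacity figure.

def eligible_paths(paths, traffic_class, demand):
    """Return the subset of paths whose quality vector can absorb
    the demand at the traffic's priority level."""
    return [p for p in paths
            if p["max_delay_ms"] <= traffic_class["delay_bound_ms"]
            and p["displaceable"][traffic_class["priority"]] >= demand]

paths = [
    {"name": "A", "max_delay_ms": 20, "displaceable": {1: 30.0, 2: 5.0}},
    {"name": "B", "max_delay_ms": 80, "displaceable": {1: 50.0, 2: 40.0}},
]
premium = {"delay_bound_ms": 50, "priority": 1}
print([p["name"] for p in eligible_paths(paths, premium, 25.0)])  # ['A']
```

Path B has more displaceable capacity, but its delay bound disqualifies it for this class; the ingress would then balance the demand across the surviving subset.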
This area of active network resource management, coupled with dynamic
network resource discovery, and the associated control level
signaling to network admission systems appears to be a topic for
further research at this point in time.
3.5 TCP and QoS
A congestion-managed rate-adaptive traffic flow (such as used by TCP)
uses the feedback from the ACK packet stream to time subsequent data
transmissions. The resultant traffic flow rate is an outcome of the
service quality provided to both the forward data packets and the
reverse ACK packets. If the ACK stream is treated by the network
with a different service profile to the outgoing data packets, it
remains an open question as to what extent will the data forwarding
service be compromised in terms of achievable throughput. High rates
of jitter on the ACK stream can cause ACK compression, which in turn
will cause high burst rates on the subsequent data send. Such bursts
will stress the service capacity of the network and will compromise
TCP throughput rates.
One way to address this is to use some form of symmetric service,
where the ACK packets are handled using the same service class as the
forward data packets. If symmetric service profiles are important
for TCP sessions, how can this be structured in a fashion that does
not incorrectly account for service usage? In other words, how can
both directions of a TCP flow be accurately accounted to one party?
Additionally, there is the interaction between the routing system and
the two TCP data flows. The Internet routing architecture does not
intrinsically preserve TCP flow symmetry, and the network path taken
by the forward packets of a TCP session may not exactly correspond to
the path used by the reverse packet flow.
TCP also exposes an additional performance constraint in the manner
of the traffic conditioning elements in a QoS-enabled network.
Traffic conditioners within QoS architectures are typically specified
using a rate enforcement mechanism of token buckets. Token bucket
traffic conditioners behave in a manner that is analogous to a First
In First Out queue. Such traffic conditioning systems impose tail
drop behavior on TCP streams. This tail drop behavior can produce
TCP timeout retransmission, unduly penalizing the average TCP goodput
rate to a level that may be well below the level specified by the
token bucket traffic conditioner. Token buckets can be considered as
TCP-hostile network elements.
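A minimal token-bucket conditioner sketch makes the tail-drop analogy concrete. The parameter names are illustrative; the point is that once the bucket drains, every packet of a burst is dropped outright, which a TCP sender experiences as a loss burst and may answer with a retransmission timeout:

```python
# Minimal token-bucket traffic conditioner. Out-of-profile packets
# are dropped, mirroring the tail-drop FIFO behavior described above.

class TokenBucket:
    def __init__(self, rate, depth):
        self.rate = rate        # tokens (bytes) added per second
        self.depth = depth      # maximum bucket depth in bytes
        self.tokens = depth     # bucket starts full
        self.last = 0.0

    def conform(self, now, size):
        # Refill according to elapsed time, capped at the depth.
        self.tokens = min(self.depth,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True         # in profile: forward
        return False            # out of profile: drop

tb = TokenBucket(rate=1000.0, depth=1500.0)
# A back-to-back burst of five 500-byte packets at t=0:
print([tb.conform(0.0, 500) for _ in range(5)])
# [True, True, True, False, False] -- the tail of the burst is dropped
```

The consecutive drops at the tail of the burst are precisely the pattern that defeats TCP's fast-retransmit recovery and pushes the session into timeout.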
The larger issue exposed in this consideration is that provision of
some form of assured service to congestion-managed traffic flows
requires traffic conditioning elements that operate using weighted
RED-like control behaviors within the network, with less
deterministic traffic patterns as an outcome. The task of keeping
TCP burst behavior within token bucket control parameters is most
appropriately performed in the sender's TCP stack.
There are a number of open areas in this topic that would benefit
from further research. The nature of the interaction between the
end-to-end TCP control system and a collection of service
differentiation mechanisms within a network has a large number of
variables. The issues concern the time constants of the control
systems, the amplitude of feedback loops, and the extent to which
each control system assumes an operating model of other active
control systems that are applied to the same traffic flow, and the
mode of convergence to a stable operational state for each control
system.
3.6 Per-Flow States and Per-Packet classifiers
Both the IntServ and DiffServ architectures use packet classifiers as
an intrinsic part of their architecture. These classifiers can be
considered as coarse or fine level classifiers. Fine-grained
classifiers can be considered as classifiers that attempt to isolate
elements of traffic from an invocation of an application (a `micro-
flow') and use a number of fields in the IP packet header to assist
in this, typically including the source and destination IP addresses
and the source and destination port addresses. Coarse-grained
classifiers attempt to isolate traffic that belongs to an aggregated
service state, and typically use the DiffServ code field as the
classifying field. In the case of DiffServ there is the potential to
use fine-grained classifiers as part of the network ingress element,
and coarse-grained classifiers within the interior of the network.
Within flow-sensitive IntServ deployments, every active network
element that undertakes active service discrimination is required
to operate fine-grained packet classifiers. The granularity of the
classifiers can be relaxed with the specification of aggregate
classifiers [5], but at the expense of the precision and accuracy of
the service response.
Within the IntServ architecture the fine-grained classifiers are
defined to the level of granularity of an individual traffic flow,
using the packet's 5-tuple of (source address, destination address,
source port, destination port, protocol) as the means to identify an
individual traffic flow. The DiffServ Multi-Field (MF) classifiers
are also able to use this 5-tuple to map individual traffic flows
into supported behavior aggregates.
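The fine-grained classification described above amounts to keying each packet by its 5-tuple. A sketch, using an assumed dictionary representation of a packet rather than a real header parser:

```python
# Sketch of a fine-grained (micro-flow) classifier using the 5-tuple
# (source address, destination address, source port, destination
# port, protocol), as used by IntServ and by DiffServ MF classifiers.

def five_tuple(pkt):
    return (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"],
            pkt["proto"])

flows = {}
for pkt in [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 1234, "dport": 80, "proto": 6},
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 1234, "dport": 80, "proto": 6},
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 4321, "dport": 80, "proto": 6},
]:
    flows.setdefault(five_tuple(pkt), []).append(pkt)
print(len(flows))   # 2 distinct micro-flows (the source ports differ)
```

Per-packet cost and per-flow state of exactly this kind is what makes fine-grained classification expensive in every interior element, and is the motivation for the aggregate classifiers mentioned above.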
The use of IPSEC, NAT and various forms of IP tunnels results in an
occlusion of the flow identification within the IP packet header,
combining individual flows into a larger aggregate state that may be
too coarse for the network's service policies. The issue with such
mechanisms is that they may occur within the network path in a
fashion that is not visible to the end application, compromising the
ability for the application to determine whether the requested
service profile is being delivered by the network. In the case of
IPSEC there is a proposal to carry the IPSEC Security Parameter Index
(SPI) in the RSVP object [10], as a surrogate for the port addresses.
In the case of NAT and various forms of IP tunnels, there appears to
be no coherent way to preserve fine-grained classification
characteristics across NAT devices, or across tunnel encapsulation.
IP packet fragmentation also affects the ability of the network to
identify individual flows, as the trailing fragments of the IP packet
will not include the TCP or UDP port address information. This admits
the possibility of trailing fragments of a packet within a
distinguished service class being classified into the base best
effort service category, and delaying the ultimate delivery of the IP
packet to the destination until the trailing best effort delivered
fragments have arrived.
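The fragmentation problem can be stated as a one-line test: only the first fragment of an IP packet carries the transport header, so any fragment with a non-zero fragment offset cannot be matched by a port-based classifier. A sketch, with assumed field names:

```python
# Only fragments with offset 0 contain the TCP/UDP port fields, so a
# 5-tuple (port-based) classifier cannot match trailing fragments.

def port_classifiable(pkt):
    """True if the fragment still carries L4 port information."""
    return pkt["frag_offset"] == 0

fragments = [
    {"frag_offset": 0,   "more_fragments": True},    # head fragment
    {"frag_offset": 185, "more_fragments": True},    # trailing
    {"frag_offset": 370, "more_fragments": False},   # trailing, last
]
print([port_classifiable(f) for f in fragments])  # [True, False, False]
```

The two trailing fragments would fall through to the best effort class, producing exactly the reordered delivery described above.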
The observation made here is that QoS services do have a number of
caveats that should be placed on both the application and the
network. Applications should perform path MTU discovery in order to
avoid packet fragmentation. Deployment of various forms of payload
encryption, header address translation and header encapsulation
should be undertaken with due attention to their potential impacts on
service delivery packet classifiers.
3.7 The Service Set
The underlying question posed here is how many distinguished service
responses are needed to provide a functionally adequate range of
service responses?
The Differentiated Services architecture does not make any limiting
restrictions on the number of potential services that a network
operator can offer. The network operator may be limited to a choice
of up to 64 discrete services in terms of the 6 bit service code
point in the IP header, but as the mapping from service to code point
can be defined by each network operator, there can be any number of
potential services.
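The 64-service limit follows directly from the field width: per RFC 2474, the 6-bit code point occupies the upper six bits of the former IPv4 TOS octet, with the remaining two bits used for ECN. A small illustration:

```python
# The DiffServ code point (DSCP) is the upper 6 bits of the former
# IPv4 TOS octet (per RFC 2474); the lower 2 bits carry ECN.

def dscp(tos_byte):
    return tos_byte >> 2      # discard the 2 ECN bits

print(2 ** 6)                 # 64: the maximum number of code points
print(dscp(0xB8))             # 46: the well-known EF marking
```

Because each operator maps code points to services independently, the same 6-bit value can denote entirely different services in adjacent domains, which is the interoperability concern raised in the following paragraph.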
As always, there is such a thing as too much of a good thing, and a
large number of potential services leads to a set of issues around
end-to-end service coherency when spanning multiple network domains.
A small set of distinguished services can be supported across a large
set of service providers by equipment vendors and by application
designers alike. An ill-defined large set of potential services
often serves little productive purpose. This does point to a
potential refinement of the QoS architecture to define a small core
set of service profiles as "well-known" service profiles, and place
all other profiles within a "private use" category.
3.8 Measuring Service Delivery
There is a strong requirement within any QoS architecture for network
management approaches that provide a coherent view of the operating
state of the network. This differs from a conventional element-by-
element management view of the network in that the desire here is to
be able to provide a view of the available resources along a