provider to support service marking semantics [DSFIELD].
Examples of the label switching (or virtual circuit) model include
Frame Relay, ATM, and MPLS [FRELAY, ATM]. In this model, path
forwarding state and traffic management or QoS state are established
for traffic streams on each hop along a network path. Traffic
aggregates of varying granularity are associated with a label
switched path at an ingress node, and packets/cells within each label
switched path are marked with a forwarding label that is used to
look up the next-hop node, the per-hop forwarding behavior, and the
replacement label at each hop. This model permits finer granularity
resource allocation to traffic streams, since label values are not
globally significant but are only significant on a single link;
therefore resources can be reserved for the aggregate of packets/
cells received on a link with a particular label, and the label
switching semantics govern the next-hop selection, allowing a traffic
stream to follow a specially engineered path through the network.
This improved granularity comes at the cost of additional management
and configuration requirements to establish and maintain the label
switched paths. In addition, the amount of forwarding state
maintained at each node scales in proportion to the number of edge
nodes of the network in the best case (assuming multipoint-to-point
label switched paths), and it scales in proportion to the square of
the number of edge nodes in the worst case, when edge-edge label
switched paths with provisioned resources are employed.
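
For illustration only (the labels, next-hop names, and behavior names
below are invented, and a Python dictionary stands in for a real
incoming-label map), the per-link label semantics described above
amount to a lookup of the form:

    # Minimal sketch of label-swap forwarding.  Labels, next hops, and
    # behavior names are hypothetical and significant only on one link.
    INCOMING_LABEL_MAP = {
        # incoming label -> (next-hop node, per-hop behavior, new label)
        17: ("node-B", "low-delay-queue", 42),
        23: ("node-C", "best-effort-queue", 9),
    }

    def forward(incoming_label):
        """Resolve one label-switched hop; unknown labels raise KeyError."""
        next_hop, behavior, outgoing_label = INCOMING_LABEL_MAP[incoming_label]
        # The packet/cell would be queued according to `behavior` and sent
        # to `next_hop` carrying `outgoing_label`, which that node
        # interprets against its own link-local label map.
        return next_hop, behavior, outgoing_label

    print(forward(17))   # ('node-B', 'low-delay-queue', 42)
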
The Integrated Services/RSVP model relies upon traditional datagram
forwarding in the default case, but allows sources and receivers to
exchange signaling messages which establish additional packet
classification and forwarding state on each node along the path
between them [RFC1633, RSVP]. In the absence of state aggregation,
the amount of state on each node scales in proportion to the number
of concurrent reservations, which can be potentially large on high-
speed links. This model also requires application support for the
RSVP signaling protocol. Differentiated services mechanisms can be
utilized to aggregate Integrated Services/RSVP state in the core of
the network [Bernet].
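
As a rough sketch of why this state grows with the number of
concurrent reservations (the flow keys and rates below are invented,
not taken from [RFC1633] or [RSVP]), each node on the path would hold
something like:

    # Sketch of per-node reservation state installed by hop-by-hop
    # signaling.  Without aggregation, one entry is held per
    # concurrent reservation crossing the node.
    reservations = {}   # (src, dst, proto, src_port, dst_port) -> bits/s

    def install_reservation(flow, rate_bps):
        reservations[flow] = rate_bps

    install_reservation(("192.0.2.1", "198.51.100.9", "udp", 5004, 5004),
                        2_000_000)
    install_reservation(("192.0.2.2", "198.51.100.9", "udp", 5006, 5006),
                        1_000_000)

    # State scales with the number of active reservations on the link.
    print(len(reservations))   # 2
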
A variant of the Integrated Services/RSVP model eliminates the
requirement for hop-by-hop signaling by utilizing only "static"
classification and forwarding policies which are implemented in each
node along a network path. These policies are updated on
administrative timescales and not in response to the instantaneous
mix of microflows active in the network. The state requirements for
this variant are potentially worse than those encountered when RSVP
is used, especially in backbone nodes, since the number of static
policies that might be applicable at a node over time may be larger
than the number of active sender-receiver sessions that might have
installed reservation state on a node. Although the support of large
numbers of classifier rules and forwarding policies may be
computationally feasible, the management burden associated with
installing and maintaining these rules on each node within a backbone
network which might be traversed by a traffic stream is substantial.
Although we contrast our architecture with these alternative models
of service differentiation, it should be noted that links and nodes
employing these techniques may be utilized to extend differentiated
services behaviors and semantics across a layer-2 switched
infrastructure (e.g., 802.1p LANs, Frame Relay/ATM backbones)
interconnecting DS nodes, and in the case of MPLS may be used as an
alternative intra-domain implementation technology. The constraints
imposed by the use of a specific link-layer technology in particular
regions of a DS domain (or in a network providing access to DS
domains) may imply the differentiation of traffic on a coarser-grained
basis. Depending on the mapping of PHBs to different link-layer
services and the way in which packets are scheduled over a restricted
set of priority classes (or virtual circuits of different category
and capacity), all or a subset of the PHBs in use may be supportable
(or may be indistinguishable).
2. Differentiated Services Architectural Model
The differentiated services architecture is based on a simple model
where traffic entering a network is classified and possibly
conditioned at the boundaries of the network, and assigned to
different behavior aggregates. Each behavior aggregate is identified
by a single DS codepoint. Within the core of the network, packets
are forwarded according to the per-hop behavior associated with the
DS codepoint. In this section, we discuss the key components within
a differentiated services region, traffic classification and
conditioning functions, and how differentiated services are achieved
through the combination of traffic conditioning and PHB-based
forwarding.
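
For illustration (the codepoint values and field names below are
assumptions, not normative), the division of labor can be sketched as
a boundary node that classifies and marks, and an interior node that
consults only the resulting codepoint:

    # Hypothetical values; the DS codepoint written at the boundary is
    # the only per-packet state interior nodes consult.
    EF_DSCP = 0b101110    # e.g. a codepoint selecting a low-delay PHB

    def boundary_node(packet):
        """Classify (and possibly condition) at the edge, then mark."""
        packet["dscp"] = EF_DSCP if packet.get("dst_port") == 5060 else 0
        return packet

    def interior_node(packet, queues):
        """In the core, forwarding behavior keys off the DSCP alone."""
        queues.setdefault(packet["dscp"], []).append(packet)

    queues = {}
    interior_node(boundary_node({"dst_port": 5060}), queues)
    print(list(queues))   # [46]
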
2.1 Differentiated Services Domain
A DS domain is a contiguous set of DS nodes which operate with a
common service provisioning policy and set of PHB groups implemented
on each node. A DS domain has a well-defined boundary consisting of
DS boundary nodes which classify and possibly condition ingress
traffic to ensure that packets which transit the domain are
appropriately marked to select a PHB from one of the PHB groups
supported within the domain. Nodes within the DS domain select the
forwarding behavior for packets based on their DS codepoint, mapping
that value to one of the supported PHBs using either the recommended
codepoint->PHB mapping or a locally customized mapping [DSFIELD].
Inclusion of non-DS-compliant nodes within a DS domain may result in
unpredictable performance and may impede the ability to satisfy
service level agreements (SLAs).
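
A minimal sketch of that selection step, assuming (per [DSFIELD]) that
the DS codepoint occupies the six most significant bits of the DS
field, and using invented recommended and locally customized mappings:

    # Hypothetical mappings; actual recommended codepoints are defined
    # in the PHB specifications, and local customization is a domain
    # choice.
    RECOMMENDED = {0b101110: "EF", 0b000000: "Default"}
    LOCAL_OVERRIDES = {0b001010: "Gold-AF"}   # locally customized entry

    def select_phb(ds_field_byte):
        dscp = ds_field_byte >> 2   # DSCP is the six MSBs of the DS field
        # A local mapping takes precedence over the recommended one;
        # unknown codepoints fall back to the default PHB.
        return LOCAL_OVERRIDES.get(dscp) or RECOMMENDED.get(dscp, "Default")

    print(select_phb(0b10111000))   # EF
    print(select_phb(0b00101000))   # Gold-AF
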
A DS domain normally consists of one or more networks under the same
administration; for example, an organization's intranet or an ISP.
The administration of the domain is responsible for ensuring that
adequate resources are provisioned and/or reserved to support the
SLAs offered by the domain.
2.1.1 DS Boundary Nodes and Interior Nodes
A DS domain consists of DS boundary nodes and DS interior nodes. DS
boundary nodes interconnect the DS domain to other DS or non-DS-
capable domains, whilst DS interior nodes only connect to other DS
interior or boundary nodes within the same DS domain.
Both DS boundary nodes and interior nodes must be able to apply the
appropriate PHB to packets based on the DS codepoint; otherwise
unpredictable behavior may result. In addition, DS boundary nodes
may be required to perform traffic conditioning functions as defined
by a traffic conditioning agreement (TCA) between their DS domain and
the peering domain which they connect to (see Sec. 2.3.3).
Interior nodes may be able to perform limited traffic conditioning
functions such as DS codepoint re-marking. Interior nodes which
implement more complex classification and traffic conditioning
functions are analogous to DS boundary nodes (see Sec. 2.3.4.4).
A host in a network containing a DS domain may act as a DS boundary
node for traffic from applications running on that host; we therefore
say that the host is within the DS domain. If a host does not act as
a boundary node, then the DS node topologically closest to that host
acts as the DS boundary node for that host's traffic.
2.1.2 DS Ingress Node and Egress Node
DS boundary nodes act both as a DS ingress node and as a DS egress
node for different directions of traffic. Traffic enters a DS domain
at a DS ingress node and leaves a DS domain at a DS egress node. A
DS ingress node is responsible for ensuring that the traffic entering
the DS domain conforms to any TCA between it and the other domain to
which the ingress node is connected. A DS egress node may perform
traffic conditioning functions on traffic forwarded to a directly
connected peering domain, depending on the details of the TCA between
the two domains. Note that a DS boundary node may act as a DS
interior node for some set of interfaces.
2.2 Differentiated Services Region
A differentiated services region (DS Region) is a set of one or more
contiguous DS domains. DS regions are capable of supporting
differentiated services along paths which span the domains within the
region.
The DS domains in a DS region may support different PHB groups
internally and different codepoint->PHB mappings. However, to permit
services which span the domains, the peering DS domains must
each establish a peering SLA which defines (either explicitly or
implicitly) a TCA which specifies how transit traffic from one DS
domain to another is conditioned at the boundary between the two DS
domains.
It is possible that several DS domains within a DS region may adopt a
common service provisioning policy and may support a common set of
PHB groups and codepoint mappings, thus eliminating the need for
traffic conditioning between those DS domains.
2.3 Traffic Classification and Conditioning
Differentiated services are extended across a DS domain boundary by
establishing a SLA between an upstream network and a downstream DS
domain. The SLA may specify packet classification and re-marking
rules and may also specify traffic profiles and actions to apply to
traffic streams which are in- or out-of-profile (see Sec. 2.3.2). The TCA
between the domains is derived (explicitly or implicitly) from this
SLA.
The packet classification policy identifies the subset of traffic
which may receive a differentiated service by being conditioned and/
or mapped to one or more behavior aggregates (by DS codepoint re-
marking) within the DS domain.
Traffic conditioning performs metering, shaping, policing and/or re-
marking to ensure that the traffic entering the DS domain conforms to
the rules specified in the TCA, in accordance with the domain's
service provisioning policy. The extent of traffic conditioning
required is dependent on the specifics of the service offering, and
may range from simple codepoint re-marking to complex policing and
shaping operations. The details of traffic conditioning policies
which are negotiated between networks are outside the scope of this
document.
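
As one non-normative sketch of such conditioning (the thresholds,
codepoints, and the choice to drop out-of-profile packets are all
assumptions standing in for a real TCA), a simple meter-plus-re-marker
might look like:

    # Hypothetical conditioner logic: meter each packet against a
    # profile, re-mark in-profile packets, and (as one possible local
    # policy) police out-of-profile packets by discarding them.
    class SimpleMeter:
        """Stand-in meter with a byte budget; a token bucket
        (Sec. 2.3.2) would normally fill this role."""
        def __init__(self, budget_bytes):
            self.budget = budget_bytes
        def in_profile(self, size):
            if size <= self.budget:
                self.budget -= size
                return True
            return False

    IN_PROFILE_DSCP = 0b001010   # illustrative codepoint for the aggregate

    def condition(packet, meter):
        if meter.in_profile(len(packet["payload"])):
            packet["dscp"] = IN_PROFILE_DSCP   # re-mark into the aggregate
            return packet
        return None                            # police: drop out-of-profile

    meter = SimpleMeter(budget_bytes=1500)
    print(condition({"payload": b"x" * 1000}, meter) is not None)  # True
    print(condition({"payload": b"x" * 1000}, meter) is not None)  # False
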
2.3.1 Classifiers
Packet classifiers select packets in a traffic stream based on the
content of some portion of the packet header. We define two types of
classifiers. The BA (Behavior Aggregate) Classifier classifies
packets based on the DS codepoint only. The MF (Multi-Field)
classifier selects packets based on the value of a combination of one
or more header fields, such as source address, destination address,
DS field, protocol ID, source port and destination port numbers, and
other information such as incoming interface.
Classifiers are used to "steer" packets matching some specified rule
to an element of a traffic conditioner for further processing.
Classifiers must be configured by some management procedure in
accordance with the appropriate TCA.
The classifier should authenticate the information which it uses to
classify the packet (see Sec. 6).
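
As a rough sketch (the field names, rule format, and conditioner names
are illustrative only), the two classifier types differ in the keys
they match on:

    # Illustrative classifiers that "steer" matching packets to a named
    # conditioner element; rule contents would come from the TCA.
    def ba_classify(packet, ba_rules):
        """BA classifier: select on the DS codepoint only."""
        return ba_rules.get(packet["dscp"], "default-conditioner")

    def mf_classify(packet, mf_rules):
        """MF classifier: select on a combination of header fields."""
        for match, conditioner in mf_rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return conditioner
        return "default-conditioner"

    mf_rules = [({"src_addr": "192.0.2.1", "proto": "udp",
                  "dst_port": 5060}, "voice-conditioner")]
    pkt = {"dscp": 0, "src_addr": "192.0.2.1", "proto": "udp",
           "dst_port": 5060}
    print(mf_classify(pkt, mf_rules))              # voice-conditioner
    print(ba_classify(pkt, {46: "ef-conditioner"}))  # default-conditioner
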
Note that in the event of upstream packet fragmentation, MF
classifiers which examine the contents of transport-layer header
fields may incorrectly classify packet fragments subsequent to the
first. A possible solution to this problem is to maintain
fragmentation state; however, this is not a general solution due to
the possibility of upstream fragment re-ordering or divergent routing
paths. The policy to apply to packet fragments is outside the scope
of this document.
2.3.2 Traffic Profiles
A traffic profile specifies the temporal properties of a traffic
stream selected by a classifier. It provides rules for determining
whether a particular packet is in-profile or out-of-profile. For
example, a profile based on a token bucket may look like:
    codepoint=X, use token-bucket r, b
The above profile indicates that all packets marked with DS codepoint
X should be measured against a token bucket meter with rate r and
burst size b.
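
A minimal, non-normative token bucket sketch of such a meter (the
byte-per-token granularity and the use of a monotonic wall clock are
implementation assumptions):

    import time

    class TokenBucket:
        """Meter packets against rate r (bytes/s) and burst size b (bytes)."""
        def __init__(self, r, b):
            self.r, self.b = r, b
            self.tokens = b                 # bucket starts full
            self.last = time.monotonic()

        def in_profile(self, pkt_bytes):
            now = time.monotonic()
            # Refill at rate r, capped at the burst size b.
            self.tokens = min(self.b, self.tokens + (now - self.last) * self.r)
            self.last = now
            if pkt_bytes <= self.tokens:    # enough tokens: in-profile
                self.tokens -= pkt_bytes
                return True
            return False                    # insufficient tokens: out-of-profile

    bucket = TokenBucket(r=125_000, b=3_000)   # ~1 Mbit/s with a 3 kB burst
    print(bucket.in_profile(1_500))            # True (within initial burst)
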