     - The "merged" TSpec parameters are used as the traffic flow's     TSpec at the local node.     - The merged parameters are passed upstream to traffic source(s) to     describe characteristics of the actually installed reservation     along the data path.   For the controlled-load service, a merged TSpec may be calculated   over a set of TSpecs by taking:     (1) the largest token bucket rate r;     (2) the largest token bucket size b;     (3) the largest peak rate p;     (4) the smallest minimal policed unit m;     (5) the *smallest* maximum packet size M;   across all members of the set.Wroclawski                 Standards Track                     [Page 10]RFC 2211                Controlled-Load Network           September 1997   A Least Common TSpec is a TSpec adequate to describe the traffic from   any one of a number of traffic flows. The least common TSpec may be   useful when creating a shared reservation for a number of flows using   SNMP or another management protocol. This differs from the merged   TSpec described above in that the computed parameters are not passed   upstream to the sources of traffic.   For the controlled-load service, the Least Common TSpec may be   calculated over a set of TSpecs by taking:     (1) the largest token bucket rate r;     (2) the largest token bucket size b;     (3) the largest peak rate p;     (4) the smallest minimal policed unit m;     (5) the largest maximum packet size M;   across all members of the set.   The sum of n controlled-load service TSpecs is used when computing   the TSpec for a shared reservation of n flows. It is computed by   taking:     - The sum across all TSpecs of the token bucket rate parameter r.     - The sum across all TSpecs of the token bucket size parameter b.     - The sum across all TSpecs of the peak rate parameter p.     - The minimum across all TSpecs of the minimum policed unit       parameter m.     - The maximum across all TSpecs of the maximum packet size       parameter M.   
   The minimum of two TSpecs differs according to whether the TSpecs
   can be ordered according to the "greater than or equal to" rule
   above.  If one TSpec is less than the other TSpec, the smaller
   TSpec is the minimum.  For unordered TSpecs, a different rule is
   used.  The minimum of two unordered TSpecs is determined by
   comparing the respective values in the two TSpecs and choosing:

     (1) the smaller token bucket rate r;
     (2) the *larger* token bucket size b;
     (3) the smaller peak rate p;
     (4) the *smaller* minimum policed unit m;
     (5) the smaller maximum packet size M;

9. Guidelines for Implementors

   REQUIREMENTS PLACED ON ADMISSION CONTROL ALGORITHM: The intention
   of this service specification is that network elements deliver a
   level of service closely approximating best-effort service under
   unloaded conditions. As with best-effort service under these
   conditions, it is not required that every single packet be
   successfully delivered with zero queueing delay. Network elements
   providing controlled-load service are permitted to oversubscribe
   the available resources to some extent, in the sense that the
   bandwidth and buffer requirements indicated by summing the TSpec
   token buckets of all controlled-load flows may exceed the maximum
   capabilities of the network element.  However, this
   oversubscription may only be done in cases where the element is
   quite sure that actual utilization is less than the sum of the
   token buckets would suggest, so that the implementor's performance
   goals will be met. This information may come from measurement of
   the aggregate traffic flow, specific knowledge of application
   traffic statistics, or other means.
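   The minimum-of-two-TSpecs rules given earlier can be sketched as
   follows. The ordering predicate restates the "greater than or equal
   to" rule defined earlier in the RFC (outside this excerpt), so
   treat it as this sketch's assumption; the unordered branch follows
   rules (1)-(5) directly:

```python
from collections import namedtuple

# Five-parameter controlled-load TSpec (restated so this sketch
# stands on its own).
TSpec = namedtuple("TSpec", ["r", "b", "p", "m", "M"])

def leq(a, b):
    """a <= b under the ordering assumed here: each of a's r, b, p,
    and M is no larger than b's, and a's minimum policed unit m is no
    smaller (a smaller m being the more demanding value).  This
    predicate is an assumption restated from the RFC's ordering rule,
    which lies outside the excerpted text."""
    return (a.r <= b.r and a.b <= b.b and a.p <= b.p
            and a.m >= b.m and a.M <= b.M)

def tspec_min(a, b):
    """Minimum of two TSpecs per the rules above."""
    if leq(a, b):      # ordered: the smaller TSpec is the minimum
        return a
    if leq(b, a):
        return b
    # Unordered: compare parameter by parameter, rules (1)-(5).
    return TSpec(r=min(a.r, b.r),  # (1) smaller token bucket rate
                 b=max(a.b, b.b),  # (2) *larger* token bucket size
                 p=min(a.p, b.p),  # (3) smaller peak rate
                 m=min(a.m, b.m),  # (4) *smaller* minimum policed unit
                 M=min(a.M, b.M))  # (5) smaller maximum packet size
```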
   The most conservative approach, rejection of new flows whenever the
   addition of their traffic would cause the strict sum of the token
   buckets to exceed the capacity of the network element (including
   consideration of resources needed to maintain the delay and loss
   characteristics specified by the service), may be appropriate in
   other circumstances.

   Specific issues related to this subject are discussed in the
   "Evaluation Criteria" and "Examples of Implementation" sections
   below.

   INTERACTION WITH BEST-EFFORT TRAFFIC: Implementors of this service
   should clearly understand that in certain circumstances (routers
   acting as the "split points" of a multicast distribution tree
   supporting a shared reservation) large numbers of a flow's packets
   may fail the TSpec conformance test *as a matter of normal
   operation*.  According to the requirements of Section 7, these
   packets should be forwarded on a best-effort basis if resources
   permit.

   If the network element's best-effort queueing algorithm does not
   distinguish between these packets and elastic best-effort traffic
   such as TCP flows, THE EXCESS CONTROLLED-LOAD PACKETS WILL "BACK
   OFF" THE ELASTIC TRAFFIC AND DOMINATE THE BEST-EFFORT BANDWIDTH
   USAGE. The integrated services framework does not currently address
   this issue.  However, several possible solutions to the problem are
   known [RED, xFQ].  Network elements supporting the controlled-load
   service should implement some mechanism in their best-effort
   queueing path to discriminate between classes of best-effort
   traffic and provide elastic traffic with protection from inelastic
   best-effort flows.

   Two basic approaches are available to meet this requirement.
   The network element can maintain separate resource allocations for
   different classes of best-effort traffic, so that no one class will
   excessively dominate the loaded best-effort mix. Alternatively, an
   element can process excess controlled-load traffic at somewhat
   lower priority than elastic best-effort traffic, so as to
   completely avoid the back-off effect discussed above.

   If most or all controlled-load traffic arises from non-rate-adaptive
   real-time applications, the use of priority mechanisms might be
   desirable. If most controlled-load traffic arises from rate-adaptive
   real-time or elastic applications attempting to establish a bounded
   minimum level of service, the use of separate resource classes
   might be preferable. However, this is not a firm guideline. In
   practice, the network element designer's choice of mechanism will
   depend heavily on both the goals of the design and the
   implementation techniques appropriate for the designer's platform.
   This version of the service specification does not specify one or
   the other behavior, but leaves the choice to the implementor.

   FORWARDING BEHAVIOR IN PRESENCE OF NONCONFORMANT TRAFFIC: As
   indicated in Section 7, the controlled-load service does not define
   the QoS behavior delivered to flows with non-conformant arriving
   traffic.  It is permissible either to degrade the service delivered
   to all of the flow's packets equally, or to sort the flow's packets
   into a conformant set and a nonconformant set and deliver different
   levels of service to the two sets.

   In the first case, expected queueing delay and packet loss
   probability will rise for all packets in the flow, but packet
   delivery reordering will, in general, remain at low levels. This
   behavior is preferable for those applications or transport
   protocols which are sensitive to excessive packet reordering.
   A possible example is an unmodified TCP connection, which would see
   reordering as lost packets, triggering duplicate acks and hence
   excessive retransmissions.

   In the second case, some subset of the flow's packets will be
   delivered with low loss and delay, while some other subset will be
   delivered with higher loss and potentially higher delay. The
   delayed packets will appear to the receiver to have been reordered
   in the network, while the non-delayed packets will, on average,
   arrive in a more timely fashion than if all packets were treated
   equally. This might be preferable for applications which are highly
   time-sensitive, such as interactive conferencing tools.

10. Evaluation Criteria

   The basic requirement placed on an implementation of controlled-load
   service is that, under all conditions, it provide accepted data
   flows with service closely similar to the service that same flow
   would receive using best-effort service under unloaded conditions.

   This suggests a simple two-step evaluation strategy. Step one is to
   compare the service given best-effort traffic and controlled-load
   traffic under unloaded conditions.

     - Measure the packet loss rate and delay characteristics of a
       test flow using best-effort service and with no load on the
       network element.

     - Compare those measurements with measurements of the same flow
       receiving controlled-load service with no load on the network
       element.

   Closer measurements indicate higher evaluation ratings. A
   substantial difference in the delay characteristics, such as the
   smoothing which would be seen in an implementation which scheduled
   the controlled-load flow using a fixed, constant-bitrate algorithm,
   should result in a somewhat lower rating.
   Step two is to observe the change in service received by a
   controlled-load flow as the load increases.

     - Increase the background traffic load on the network element,
       while continuing to measure the loss and delay characteristics
       of the controlled-load flow. Characteristics which remain
       essentially constant as the element is driven into overload
       indicate a high evaluation rating. Minor changes in the delay
       distribution indicate a somewhat lower rating. Significant
       increases in delay or loss indicate a poor evaluation rating.

   This simple model is not adequate to fully evaluate the performance
   of controlled-load service. Three additional variables affect the
   evaluation. The first is the short-term burstiness of the traffic
   stream used to perform the tests outlined above. The second is the
   degree of long-term change in the controlled-load traffic within
   the bounds of its TSpec.  (Changes in this characteristic will have
   great effect on the effectiveness of certain admission control
   algorithms.)  The third is the ratio of controlled-load traffic to
   other traffic at the network element (either best-effort or other
   controlled services).
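   The admission control discussion in Section 9 contrasts
   measurement-based oversubscription with a strict-sum policy. The
   strict-sum test can be sketched as follows; the capacity model and
   all names are this sketch's assumptions, and a real element would
   also account for the resources needed to maintain the service's
   delay and loss characteristics:

```python
from collections import namedtuple

# Five-parameter controlled-load TSpec (restated so this sketch
# stands on its own).
TSpec = namedtuple("TSpec", ["r", "b", "p", "m", "M"])

def strict_sum_admit(candidate, admitted, link_bytes_per_sec,
                     buffer_bytes):
    """Most conservative admission test: reject a new flow whenever
    the strict sum of token bucket rates or sizes, including the
    candidate, would exceed the element's bandwidth or buffer
    capacity."""
    total_rate = candidate.r + sum(t.r for t in admitted)
    total_burst = candidate.b + sum(t.b for t in admitted)
    return (total_rate <= link_bytes_per_sec
            and total_burst <= buffer_bytes)
```

   An element using measurement-based admission control would replace
   the strict sums with an estimate of actual aggregate utilization,
   admitting flows beyond the strict sum when measurements justify it.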
