RFC 2212             Guaranteed Quality of Service        September 1997

   Reshaping is done by combining a buffer with a token bucket and peak rate regulator and buffering data until it can be sent in conformance with the token bucket and peak rate parameters.  (The token bucket regulator MUST start with its token bucket full of tokens.)  Under guaranteed service, the amount of buffering required to reshape any conforming traffic back to its original token bucket shape is b+Csum+(Dsum*r), where Csum and Dsum are the sums of the parameters C and D between the last reshaping point and the current reshaping point.  Note that knowledge of the peak rate at the reshapers can be used to reduce these buffer requirements (see the section "Guidelines for Implementors" below).  A network element MUST provide the necessary buffers to ensure that conforming traffic is not lost at the reshaper.

      NOTE: Observe that a router that is not reshaping can still identify non-conforming datagrams (and discard them or schedule them at lower priority) by observing when the queued traffic for the flow exceeds b+Csum+(Dsum*r).

   If a datagram arrives to find the reshaping buffer full, the datagram is non-conforming.  Observe that this means a reshaper is effectively policing too.  As with a policer, the reshaper SHOULD relegate non-conforming datagrams to best effort.  [If marking is available, the non-conforming datagrams SHOULD be marked.]

      NOTE: As with policers, it SHOULD be possible to configure how reshapers handle non-conforming datagrams.

   Note that while the large buffer makes it appear that reshapers add considerable delay, this is not the case.  Given a valid TSpec that accurately describes the traffic, reshaping will cause little extra actual delay at the reshaping point (and will not affect the delay bound at all).  Furthermore, in the normal case, reshaping will not cause the loss of any data.

   However, it may happen (typically at merge or branch points) that the TSpec is smaller than the actual traffic.  If this happens, reshaping will cause a large queue to develop at the reshaping point, which both causes substantial additional delays and forces some datagrams to be treated as non-conforming.  This scenario makes an unpleasant denial-of-service attack possible, in which a receiver who is successfully receiving a flow's traffic via best-effort service is pre-empted by a new receiver who requests a reservation for the flow, but with an inadequate TSpec and RSpec.  The flow's traffic will now be policed and possibly reshaped.  If the policing function were chosen to discard datagrams, the best-effort receiver would stop receiving traffic.  For this reason, in the normal case, policers simply treat non-conforming datagrams as best effort (and mark them if marking is implemented).  While this protects against denial of service, it is still true that the bad TSpec may cause queueing delays to increase.

      NOTE: To minimize problems of reordering datagrams, reshaping points may wish to forward a best-effort datagram from the front of the reshaping queue when a new datagram arrives and the reshaping buffer is full.

      Readers should also observe that reclassifying datagrams as best effort (as opposed to dropping them) also makes support for elastic flows easier.  They can reserve a modest token bucket and, when their traffic exceeds the token bucket, the excess traffic will be sent best effort.
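   The buffer sizing and conformance rules above can be illustrated with a small sketch.  The following Python fragment is illustrative only, not part of the specification: the names (reshaping_buffer_bytes, Reshaper) are hypothetical, units are assumed to be bytes and seconds, and the peak rate regulator is omitted for brevity.

      # Illustrative sketch only: buffer sizing and arrival classification
      # for a token-bucket reshaper.  Parameter names follow the text;
      # the class and function names are hypothetical.

      def reshaping_buffer_bytes(b, r, csum, dsum):
          """Buffer needed to reshape conforming traffic back to its
          original token bucket shape: b + Csum + (Dsum * r)."""
          return b + csum + dsum * r

      class Reshaper:
          def __init__(self, r, b, csum, dsum):
              self.r = r                      # token rate, bytes/s
              self.b = b                      # bucket depth, bytes
              self.tokens = float(b)          # the bucket MUST start full
              self.capacity = reshaping_buffer_bytes(b, r, csum, dsum)
              self.queued = 0                 # bytes currently buffered

          def refill(self, dt):
              """Accumulate tokens for dt seconds, capped at the bucket depth."""
              self.tokens = min(self.b, self.tokens + self.r * dt)

          def on_arrival(self, size):
              """Classify an arriving datagram of `size` bytes.  A datagram
              that finds the reshaping buffer full is non-conforming and is
              relegated to best effort (and would be marked if available)."""
              if self.queued + size > self.capacity:
                  return "best-effort"
              self.queued += size
              return "conforming"

          def dequeue(self, size):
              """Release up to `size` buffered bytes, limited by available
              tokens (the peak-rate regulator is omitted in this sketch)."""
              n = min(size, self.queued, int(self.tokens))
              self.tokens -= n
              self.queued -= n
              return n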
   A related issue is that at all network elements, datagrams bigger than the MTU of the network element MUST be considered non-conformant and SHOULD be classified as best effort (and will then either be fragmented or dropped according to the element's handling of best-effort traffic).  [Again, if marking is available, these reclassified datagrams SHOULD be marked.]

Ordering and Merging

   TSpecs are ordered according to the following rules.

   TSpec A is a substitute ("as good or better than") for TSpec B if (1) both the token rate r and bucket depth b for TSpec A are greater than or equal to those of TSpec B; (2) the peak rate p is at least as large in TSpec A as it is in TSpec B; (3) the minimum policed unit m is at least as small for TSpec A as it is for TSpec B; and (4) the maximum datagram size M is at least as large for TSpec A as it is for TSpec B.

   TSpec A is "less than or equal to" TSpec B if (1) both the token rate r and bucket depth b for TSpec A are less than or equal to those of TSpec B; (2) the peak rate p in TSpec A is at least as small as the peak rate in TSpec B; (3) the minimum policed unit m is at least as large for TSpec A as it is for TSpec B; and (4) the maximum datagram size M is at least as small for TSpec A as it is for TSpec B.

   A merged TSpec may be calculated over a set of TSpecs by taking (1) the largest token bucket rate, (2) the largest bucket size, (3) the largest peak rate, (4) the smallest minimum policed unit, and (5) the smallest maximum datagram size across all members of the set.  This use of the word "merging" is similar to that in the RSVP protocol [10]; a merged TSpec is one which is adequate to describe the traffic from any one of the constituent TSpecs.

   A summed TSpec may be calculated over a set of TSpecs by computing (1) the sum of the token bucket rates, (2) the sum of the bucket sizes, (3) the sum of the peak rates, (4) the smallest minimum policed unit, and (5) the largest maximum datagram size.

   A least common TSpec is one that is sufficient to describe the traffic of any one flow in a set of traffic flows.  A least common TSpec may be calculated over a set of TSpecs by computing (1) the largest token bucket rate, (2) the largest bucket size, (3) the largest peak rate, (4) the smallest minimum policed unit, and (5) the largest maximum datagram size across all members of the set.

   The minimum of two TSpecs differs according to whether the TSpecs can be ordered.  If one TSpec is less than the other, the smaller TSpec is the minimum.  Otherwise, the minimum of two TSpecs is determined by comparing the respective values in the two TSpecs and choosing (1) the smaller token bucket rate, (2) the larger token bucket size, (3) the smaller peak rate, (4) the smaller minimum policed unit, and (5) the smaller maximum datagram size.
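   The ordering and combination rules above are mechanical enough to express directly in code.  The following Python sketch is illustrative only; the TSpec record and the function names (substitutes, merged, summed, least_common, minimum) are hypothetical, and the fields follow the five parameters named in the text.

      # Illustrative sketch only: the TSpec ordering and combination rules
      # described above.  The record and function names are hypothetical.
      from dataclasses import dataclass

      @dataclass
      class TSpec:
          r: float   # token bucket rate (bytes/s)
          b: float   # token bucket depth (bytes)
          p: float   # peak rate (bytes/s)
          m: int     # minimum policed unit (bytes)
          M: int     # maximum datagram size (bytes)

      def substitutes(x: TSpec, y: TSpec) -> bool:
          """True if TSpec x is 'as good or better than' TSpec y."""
          return (x.r >= y.r and x.b >= y.b and x.p >= y.p
                  and x.m <= y.m and x.M >= y.M)

      def merged(tspecs):
          """Merged TSpec: adequate to describe any one constituent flow."""
          return TSpec(r=max(t.r for t in tspecs), b=max(t.b for t in tspecs),
                       p=max(t.p for t in tspecs), m=min(t.m for t in tspecs),
                       M=min(t.M for t in tspecs))

      def summed(tspecs):
          """Summed TSpec: describes the aggregate of all the flows."""
          return TSpec(r=sum(t.r for t in tspecs), b=sum(t.b for t in tspecs),
                       p=sum(t.p for t in tspecs), m=min(t.m for t in tspecs),
                       M=max(t.M for t in tspecs))

      def least_common(tspecs):
          """Least common TSpec: like merged, but with the largest M."""
          t = merged(tspecs)
          t.M = max(x.M for x in tspecs)
          return t

      def minimum(x: TSpec, y: TSpec) -> TSpec:
          """Minimum of two TSpecs, per the ordering rules above."""
          if substitutes(y, x):       # x is "less than or equal to" y
              return x
          if substitutes(x, y):       # y is "less than or equal to" x
              return y
          return TSpec(r=min(x.r, y.r), b=max(x.b, y.b), p=min(x.p, y.p),
                       m=min(x.m, y.m), M=min(x.M, y.M))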
   RSpecs are merged in a similar manner to TSpecs: a set of RSpecs is merged into a single RSpec by taking the largest rate R and the smallest slack S.  More precisely, RSpec A is a substitute for RSpec B if the value of the reserved service rate, R, in RSpec A is greater than or equal to the value in RSpec B, and the value of the slack, S, in RSpec A is smaller than or equal to that in RSpec B.

   Each network element receives a service request of the form (TSpec, RSpec), where the RSpec is of the form (Rin, Sin).  The network element processes this request and performs one of two actions:

    a. it accepts the request and returns a new RSpec of the form (Rout, Sout);

    b. it rejects the request.

   The processing rules for generating the new RSpec are governed by the delay constraint

          Sout + b/Rout + Ctot_i/Rout <= Sin + b/Rin + Ctot_i/Rin,

   where Ctot_i is the cumulative sum of the error terms, C, for all the network elements that are upstream of and including the current element, i.  In other words, this element consumes (Sin - Sout) of slack and can use it to reduce its reservation level, provided that the above inequality is satisfied.  Rin and Rout MUST also satisfy the constraint

                         r <= Rout <= Rin.

   When several RSpecs, each with rate Rj, j = 1, 2, ..., are to be merged at a split point, the value of Rout is the maximum over all the rates Rj, and the value of Sout is the minimum over all the slack terms Sj.

      NOTE: The various TSpec functions described above are used by applications which desire to combine TSpecs.  It is important to observe, however, that the properties of the actual reservation are determined by combining the TSpec with the RSpec rate (R).  Because the guaranteed reservation requires both the TSpec and the RSpec rate, there exist some difficult problems for shared reservations in RSVP, particularly where two or more source streams meet.  Upstream of the meeting point, it would be desirable to reduce the TSpec and RSpec to use only as much bandwidth and buffering as is required by the individual source's traffic.  (Indeed, it may be necessary if the sender is transmitting over a low-bandwidth link.)

      However, the RSpec's rate is set to achieve a particular delay bound (and is not just a function of the TSpec), so changing the RSpec may cause the reservation to fail to meet the receiver's delay requirements.  At the same time, not adjusting the RSpec rate means that "shared" RSVP reservations using guaranteed service will fail whenever the bandwidth available at a particular link is less than the receiver's requested rate R, even if the bandwidth is adequate to support the number of senders actually using the link.  At this time, this limitation is an open problem in using the guaranteed service with RSVP.
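   The per-element slack and rate computation above can be sketched as follows.  This Python fragment is illustrative only, assuming consistent units (b and Ctot_i in bytes, rates in bytes per second, slack in seconds); the function names are hypothetical.

      # Illustrative sketch only: the per-element slack/rate check derived
      # from the delay constraint above.  Function names are hypothetical.

      def rspec_acceptable(b, r, ctot_i, r_in, s_in, r_out, s_out):
          """True if the proposed (Rout, Sout) satisfies both
               Sout + b/Rout + Ctot_i/Rout <= Sin + b/Rin + Ctot_i/Rin
               r <= Rout <= Rin."""
          if not (r <= r_out <= r_in):
              return False
          return s_out + (b + ctot_i) / r_out <= s_in + (b + ctot_i) / r_in

      def max_slack_out(b, ctot_i, r_in, s_in, r_out):
          """Largest Sout this element may pass on after lowering the rate
          to Rout; the slack it consumes is Sin minus this value."""
          return s_in - (b + ctot_i) * (1.0 / r_out - 1.0 / r_in)

      def merge_rspecs_at_split(rspecs):
          """Merge RSpecs (Rj, Sj) at a split point: largest rate, smallest slack."""
          return (max(rj for rj, _ in rspecs), min(sj for _, sj in rspecs))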
Guidelines for Implementors

   This section discusses a number of important implementation issues in no particular order.

   It is important to note that individual subnetworks are network elements, and both routers and subnetworks MUST support the guaranteed service model to achieve guaranteed service.  Since subnetworks typically are not capable of negotiating service using IP-based protocols, as part of providing guaranteed service routers will have to act as proxies for the subnetworks they are attached to.

   In some cases, this proxy service will be easy.  For instance, on a leased line managed by a WFQ scheduler on the upstream node, the proxy need simply ensure that the sum of all the flows' RSpec rates does not exceed the bandwidth of the line, and advertise the rate-based and non-rate-based delays of the link as the values of C and D.

   In other cases, this proxy service will be complex.  In an ATM network, for example, it may require establishing an ATM VC for the flow and computing the C and D terms for that VC.  Readers may observe that the token bucket and peak rate used by guaranteed service map directly to the Sustained Cell Rate, Burst Size, and Peak Cell Rate of ATM's Q.2931 QoS parameters for Variable Bit Rate traffic.

   The assurance that datagrams will not be lost is obtained by setting the router buffer space B equal to the token bucket b plus some error term (described below).

   Another issue related to subnetworks is that the TSpec's token bucket rates measure IP traffic and do not (and cannot) account for link-level headers.  So the subnetwork network elements MUST adjust the rate, and possibly the bucket size, to account for the addition of link-level headers.  Tunnels MUST also account for the additional IP headers that they add.

   For datagram networks, a maximum header rate can usually be computed by dividing the rate and bucket sizes by the minimum policed unit.  For networks that do internal fragmentation, such as ATM, the computation may be more complex, since one MUST account for both per-fragment overhead and any wastage (padding bytes transmitted) due to mismatches between datagram sizes and fragment sizes.  For instance, a conservative estimate of the additional data rate imposed by ATM AAL5 plus ATM segmentation and reassembly is

                        ((r/48)*5) + ((r/m)*(8+52)),

   which represents the rate divided into 48-byte cells multiplied by the 5-byte ATM cell header, plus the maximum datagram rate (r/m) multiplied by the cost of the 8-byte AAL5 trailer plus the maximum space that can be wasted by ATM segmentation of a datagram (the 52 bytes wasted in a cell that contains one byte).  But this estimate is likely to be wildly high, especially if m is small, since ATM wastage is usually much less than 52 bytes.  (ATM implementors should be warned that the token bucket may also have to be scaled when setting the VC parameters for call setup, and that this example does not account for overhead incurred by encapsulations such as those specified in RFC 1483.)

   To ensure no loss, network elements will have to allocate some buffering for bursts.  If every hop implemented the fluid model perfectly, this buffering would simply be b (the token bucket size).  However, as noted in the discussion of reshaping earlier, implementations are approximations and we expect that traffic will become more bursty as it goes through the network.  However, as with
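   The AAL5 overhead estimate given above is easy to evaluate numerically.  The following Python fragment is illustrative only; the function name is hypothetical, r is the token rate in bytes per second, and m is the minimum policed unit in bytes, as in the formula.

      # Illustrative sketch only: evaluating the conservative AAL5/ATM
      # overhead estimate ((r/48)*5) + ((r/m)*(8+52)) from the text.

      def aal5_overhead_rate(r, m):
          """Conservative extra data rate (bytes/s) from AAL5 plus ATM
          segmentation and reassembly."""
          cell_headers = (r / 48.0) * 5             # 5-byte cell header per 48 payload bytes
          per_datagram = (r / float(m)) * (8 + 52)  # AAL5 trailer + worst-case cell padding
          return cell_headers + per_datagram

      # Example: r = 125000 bytes/s (1 Mbit/s) and m = 512 bytes gives about
      # 13021 + 14648 = 27669 bytes/s of estimated overhead; the estimate
      # grows quickly as m shrinks, as the text warns.
      print(aal5_overhead_rate(125000, 512))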
