RFC 1633            Integrated Services Architecture           June 1994

      mechanics of the interface, and the actual I/O driver that is
      only concerned with the grittiness of the hardware.  The
      estimator lives somewhere in between.  We only note this fact,
      without suggesting that it be elevated to a principle.)

      [Figure 1 (ASCII diagram not reproduced): the background code --
      a Routing Agent with its Routing Database, a Reservation Setup
      Agent, a Management Agent, and the Admission Control module
      feeding a Traffic Control Database -- sits above the forwarding
      path, in which packets pass from the Input Driver through the
      Internet Forwarder (containing the Classifier) to the Packet
      Scheduler and on to the Output Driver.]

             Figure 1: Implementation Reference Model for Routers

      The background code is simply loaded into router memory and
      executed by a general-purpose CPU.  These background routines
      create data structures that control the forwarding path.  The
      routing agent implements a particular routing protocol and builds
      a routing database.  The reservation setup agent implements the
      protocol used to set up resource reservations; see Section .  If
      admission control gives the "OK" for a new request, the
      appropriate changes are made to the classifier and packet
      scheduler database to implement the desired QoS.  Finally, every
      router supports an agent for network management.  This agent must
      be able to modify the classifier and packet scheduler databases
      to set up controlled link-sharing and to set admission control
      policies.

      The implementation framework for a host is generally similar to
      that for a router, with the addition of applications.  Rather
      than being forwarded, host data originates and terminates in an
      application.  An application needing a real-time QoS for a flow
      must somehow invoke a local reservation setup agent.  The best
      way to interface to applications is still to be determined.
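      (Purely as an illustration of one of the alternatives mentioned
      in the next few sentences, an explicit setup API, a host-side
      call into such an agent might look roughly like the C sketch
      below.  Every name, type, and field in the sketch is invented
      for exposition; none of it is specified by this architecture.)

      /* Hypothetical host-side interface to a local reservation
       * setup agent.  Nothing here is specified by this memo; all
       * names, fields, and values are invented for illustration. */
      #include <stdio.h>

      enum service_type { SERVICE_GUARANTEED = 1, SERVICE_PREDICTIVE };

      struct flowspec {             /* desired QoS for one flow       */
          unsigned token_rate_bps;  /* sustained (token bucket) rate  */
          unsigned bucket_bytes;    /* burst (bucket) size            */
          unsigned delay_bound_ms;  /* requested per-packet delay     */
          enum service_type service;
      };

      /* Stub standing in for the local setup agent; a real agent
       * would run a reservation setup protocol and consult admission
       * control before answering. */
      static int reserve_flow(int flow_id, const struct flowspec *fs)
      {
          printf("flow %d: %s, %u bps, %u ms bound\n", flow_id,
                 fs->service == SERVICE_GUARANTEED ? "guaranteed"
                                                   : "predictive",
                 fs->token_rate_bps, fs->delay_bound_ms);
          return 0;             /* pretend admission control said OK */
      }

      int main(void)
      {
          struct flowspec fs = { 64000, 1500, 100, SERVICE_PREDICTIVE };
          return reserve_flow(42, &fs) == 0 ? 0 : 1;
      }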
      For example, there might be an explicit API for network resource
      setup, or the setup might be invoked implicitly as part of the
      operating system scheduling function.  The IP output routine of a
      host may need no classifier, since the class assignment for a
      packet can be specified in the local I/O control structure
      corresponding to the flow.

      In routers, integrated service will require changes to both the
      forwarding path and the background functions.  The forwarding
      path, which may depend upon hardware acceleration for
      performance, will be the more difficult and costly to change.  It
      will be vital to choose a set of traffic control mechanisms that
      is general and adaptable to a wide variety of policy requirements
      and future circumstances, and that can be implemented
      efficiently.

3. Integrated Services Model

   A service model is embedded within the network service interface
   invoked by applications to define the set of services they can
   request.  While both the underlying network technology and the
   overlying suite of applications will evolve, the need for
   compatibility requires that this service interface remain relatively
   stable (or, more properly, extensible; we do expect to add new
   services in the future but we also expect that it will be hard to
   change existing services).  Because of its enduring impact, the
   service model should not be designed in reference to any specific
   network artifact but rather should be based on fundamental service
   requirements.

   We now briefly describe a proposal for a core set of services for
   the Internet; this proposed core service model is more fully
   described in [SCZ93a, SCZ93b].  This core service model addresses
   those services which relate most directly to the time-of-delivery of
   packets.  We leave the remaining services (such as routing,
   security, or stream synchronization) for other standardization
   venues.  A service model consists of a set of service commitments;
   in response to a service request the network commits to deliver some
   service.  These service commitments can be categorized by the entity
   to whom they are made: they can be made to either individual flows
   or to collective entities (classes of flows).  The service
   commitments made to individual flows are intended to provide
   reasonable application performance, and thus are driven by the
   ergonomic requirements of the applications; these service
   commitments relate to the quality of service delivered to an
   individual flow.  The service commitments made to collective
   entities are driven by resource-sharing, or economic, requirements;
   these service commitments relate to the aggregate resources made
   available to the various entities.

   In this section we start by exploring the service requirements of
   individual flows and propose a corresponding set of services.  We
   then discuss the service requirements and services for resource
   sharing.  Finally, we conclude with some remarks about packet
   dropping.

   3.1 Quality of Service Requirements

      The core service model is concerned almost exclusively with the
      time-of-delivery of packets.  Thus, per-packet delay is the
      central quantity about which the network makes quality of service
      commitments.
We make the even more restrictive assumption that the only
      quantity about which we make quantitative service commitments are
      bounds on the maximum and minimum delays.

      The degree to which application performance depends on low delay
      service varies widely, and we can make several qualitative
      distinctions between applications based on the degree of their
      dependence.  One class of applications needs the data in each
      packet by a certain time and, if the data has not arrived by
      then, the data is essentially worthless; we call these real-time
      applications.  Another class of applications will always wait for
      data to arrive; we call these "elastic" applications.  We now
      consider the delay requirements of these two classes separately.

      3.1.1 Real-Time Applications

         An important class of such real-time applications, which are
         the only real-time applications we explicitly consider in the
         arguments that follow, are "playback" applications.  In a
         playback application, the source takes some signal, packetizes
         it, and then transmits the packets over the network.  The
         network inevitably introduces some variation in the delay of
         the delivered packets.  The receiver depacketizes the data and
         then attempts to faithfully play back the signal.  This is
         done by buffering the incoming data and then replaying the
         signal at some fixed offset delay from the original departure
         time; the term "playback point" refers to the point in time
         which is offset from the original departure time by this fixed
         delay.  Any data that arrives before its associated playback
         point can be used to reconstruct the signal; data arriving
         after the playback point is essentially useless in
         reconstructing the real-time signal.

         In order to choose a reasonable value for the offset delay, an
         application needs some "a priori" characterization of the
         maximum delay its packets will experience.  This "a priori"
         characterization could either be provided by the network in a
         quantitative service commitment to a delay bound, or through
         the observation of the delays experienced by the previously
         arrived packets; the application needs to know what delays to
         expect, but this expectation need not be constant for the
         entire duration of the flow.

         The performance of a playback application is measured along
         two dimensions:  latency and fidelity.  Some playback
         applications, in particular those that involve interaction
         between the two ends of a connection such as a phone call, are
         rather sensitive to the latency; other playback applications,
         such as transmitting a movie or lecture, are not.  Similarly,
         applications exhibit a wide range of sensitivity to loss of
         fidelity.  We will consider two somewhat artificially
         dichotomous classes: intolerant applications, which require an
         absolutely faithful playback, and tolerant applications, which
         can tolerate some loss of fidelity.  We expect that the vast
         bulk of audio and video applications will be tolerant, but we
         also suspect that there will be other applications, such as
         circuit emulation, that are intolerant.
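         (The playback-point rule described above can be restated
         schematically: a packet stamped with departure time t is
         played at t plus the offset delay, and is usable only if it
         arrives by that playback point.  The C sketch below is
         illustrative only; its names and structures are not part of
         the service model.)

      /* Schematic playback-point check: a packet that left the sender
       * at time 'departure_ms' is played at departure_ms + offset_ms
       * (its playback point) and is usable only if it arrives by then.
       * All names here are illustrative. */
      #include <stdio.h>

      struct packet {
          double departure_ms;   /* departure time stamped by sender  */
          double arrival_ms;     /* arrival time seen by the receiver */
      };

      /* Returns 1 if the packet arrived before its playback point. */
      static int usable(const struct packet *p, double offset_ms)
      {
          double playback_point = p->departure_ms + offset_ms;
          return p->arrival_ms <= playback_point;
      }

      int main(void)
      {
          const double offset_ms = 150.0;       /* fixed offset delay */
          struct packet on_time = { 0.0, 120.0 };  /* 120 ms of delay */
          struct packet late    = { 0.0, 180.0 };  /* 180 ms of delay */

          printf("on time: %d\n", usable(&on_time, offset_ms)); /* 1 */
          printf("late:    %d\n", usable(&late, offset_ms));    /* 0 */
          return 0;
      }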
         Delay can affect the performance of playback applications in
         two ways.  First, the value of the offset delay, which is
         determined by predictions about the future packet delays,
         determines the latency of the application.  Second, the delays
         of individual packets can decrease the fidelity of the
         playback by exceeding the offset delay; the application then
         can either change the offset delay in order to play back late
         packets (which introduces distortion) or merely discard late
         packets (which creates an incomplete signal).  The two
         different ways of coping with late packets offer a choice
         between an incomplete signal and a distorted one, and the
         optimal choice will depend on the details of the application,
         but the important point is that late packets necessarily
         decrease fidelity.

         Intolerant applications must use a fixed offset delay, since
         any variation in the offset delay will introduce some
         distortion in the playback.  For a given distribution of
         packet delays, this fixed offset delay must be larger than the
         absolute maximum delay, to avoid the possibility of late
         packets.  Such an application can only set its offset delay
         appropriately if it is given a perfectly reliable upper bound
         on the maximum delay of each packet.  We call a service
         characterized by a perfectly reliable upper bound on delay
         "guaranteed service", and propose this as the appropriate
         service model for intolerant playback applications.

         In contrast, tolerant applications need not set their offset
         delay greater than the absolute maximum delay, since they can
         tolerate some late packets.  Moreover, instead of using a
         single fixed value for the offset delay, they can attempt to
         reduce their latency by varying their offset delays in
         response to the actual packet delays experienced in the recent
         past.  We call applications which vary their offset delays in
         this manner "adaptive" playback applications.

         For tolerant applications we propose a service model called
         "predictive service" which supplies a fairly reliable, but not
         perfectly reliable, delay bound.  This bound, in contrast to
         the bound in the guaranteed service, is not based on worst
         case assumptions on the behavior of other flows.  Instead,
         this bound might be computed with properly conservative
         predictions about the behavior of other flows.  If the network
         turns out to be wrong and the bound is violated, the
         application's performance will perhaps suffer, but the users
         are willing to tolerate such interruptions in service in
         return for the
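         (To make the adaptive behavior described above concrete: an
         adaptive playback application might derive its offset delay
         from delays measured over the recent past, for instance by
         tracking a smoothed mean and deviation of observed delays and
         keeping its playback point a few deviations above the mean.
         The estimator sketched below is one arbitrary way to do this;
         its gain and safety margin are illustrative choices, not part
         of the proposed service model.)

      /* One possible adaptive offset-delay estimator: track an
       * exponentially weighted mean and deviation of recently
       * observed packet delays and keep the playback point a few
       * deviations above the mean.  The estimator, its gain, and its
       * margin are arbitrary choices shown only for illustration. */
      #include <stdio.h>
      #include <math.h>

      struct delay_estimator {
          double mean;   /* smoothed estimate of packet delay (ms)    */
          double dev;    /* smoothed estimate of delay variation (ms) */
      };

      /* Fold one observed delay into the estimate (gain g = 1/16). */
      static void observe(struct delay_estimator *e, double delay_ms)
      {
          const double g = 1.0 / 16.0;
          double err = delay_ms - e->mean;
          e->mean += g * err;
          e->dev  += g * (fabs(err) - e->dev);
      }

      /* Offset delay for the playback point: the smoothed mean plus
       * a safety margin of four deviations. */
      static double offset_delay(const struct delay_estimator *e)
      {
          return e->mean + 4.0 * e->dev;
      }

      int main(void)
      {
          struct delay_estimator e = { 100.0, 10.0 }; /* initial guess */
          double samples[] = { 95.0, 110.0, 105.0, 140.0, 98.0 };
          for (int i = 0; i < 5; i++)
              observe(&e, samples[i]);
          printf("offset delay: %.1f ms\n", offset_delay(&e));
          return 0;
      }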
