RFC 2884                   ECN in IP Networks                  July 2000

   of Linux and have exactly the same hardware configuration. The
   server is always ECN capable (and can handle NON ECN flows as well,
   using the standard congestion algorithms). The machine labeled "C"
   is used to create congestion in the network. Router R2 acts as a
   path-delay controller; with it we adjust the RTT the clients see.
   Router R1 has RED implemented in it and is capable of supporting
   ECN flows. The path-delay router is a PC running the Nistnet [16]
   package on a Linux platform. The latency of the link was set to 20
   milliseconds for the experiments.

4.2. Validating the Implementation

   We spent time validating that the implementation conformed to the
   specification in RFC 2481. To do this, the popular tcpdump sniffer
   [24] was modified to show the packets being marked. We visually
   inspected tcpdump traces to validate conformance to the RFC under
   many different scenarios. We also modified tcptrace [25] in order
   to plot the marked packets for visualization and analysis. Both
   tcpdump and tcptrace revealed that the implementation was
   conformant to the RFC.

4.3. Terminology used

   This section presents background terminology used in the next few
   sections.

   * Congesting flows: These are TCP flows that are started in the
   background so as to create congestion from R1 towards R2. We use
   the laptop labeled "C" to introduce congesting flows. Note that
   "C", as is the case with the other clients, retrieves data from
   the server.

   * Low, Moderate and High congestion: For the case of low congestion
   we start two congesting flows in the background, for moderate
   congestion we start five congesting flows, and for the case of high
   congestion we start ten congesting flows in the background.

   * Competing flows: These are the flows that we are interested in.
   They are either ECN TCP flows from/to "ECN ON" or NON ECN TCP flows
   from/to "ECN OFF".
   * Maximum drop rate: This is the RED parameter that sets the
   maximum probability of a packet being marked at the router. This
   corresponds to maxp as explained in Section 2.1.

Salim & Ahmed                Informational                      [Page 7]

   Our tests were repeated for varying levels of congestion with
   varying maximum drop rates. The results are presented in the
   subsequent sections.

   * Low, Medium and High drop probability: We use the term low
   probability to mean a drop probability maxp of 0.02, medium
   probability for 0.2, and high probability for 0.5. We also
   experimented with drop probabilities of 0.05, 0.1 and 0.3.

   * Goodput: We define goodput as the effective data rate as observed
   by the user, i.e., if we transmitted 4 data packets of which two
   were retransmitted packets, the efficiency is 50% and the resulting
   goodput is 2*packet size/time taken to transmit.

   * RED Region: When the router's average queue size is between minth
   and maxth, we say that we are operating in the RED region.

4.4. RED parameter selection

   In our initial testing we noticed that as we increase the number of
   congesting flows, the RED queue degenerates into a simple Tail Drop
   queue, i.e., the average queue exceeds the maximum threshold most
   of the time. Note that this phenomenon has also been observed by
   [5], which proposes a dynamic solution to alleviate it by adjusting
   the packet dropping probability "maxp" based on the past history of
   the average queue size. Hence, it is necessary that in the course
   of our experiments the router operate in the RED region, i.e., we
   have to make sure that the average queue is maintained between
   minth and maxth. If this is not maintained, the queue acts like a
   Tail Drop queue and the advantages of ECN diminish. Our goal is to
   validate ECN's benefits when used with RED at the router.
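The RED behavior this section relies on (no marking below minth, marking probability growing linearly up to maxp inside the RED region, Tail Drop behavior above maxth) can be sketched as follows; the threshold values in the example are illustrative, not the configuration used in these experiments:

```python
def red_mark_probability(avg_queue, minth, maxth, maxp):
    """Probability that classic RED (without the gentle variant) marks
    an ECN packet or drops a NON ECN packet, given the current average
    queue size."""
    if avg_queue < minth:
        return 0.0   # below the RED region: never mark
    if avg_queue >= maxth:
        return 1.0   # above the RED region: behaves like Tail Drop
    # inside the RED region: probability grows linearly up to maxp
    return maxp * (avg_queue - minth) / (maxth - minth)

# Illustrative values: with maxp = 0.2 ("medium" in the text) and the
# average queue halfway between the thresholds, the probability is 0.1.
print(red_mark_probability(avg_queue=15, minth=10, maxth=20, maxp=0.2))
```

This also illustrates why the average queue must stay between minth and maxth: outside that band the marking probability is pinned at 0 or 1 and ECN confers no signaling benefit.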
   To ensure that we were operating in the RED region, we monitored
   the average queue size and the actual queue size in times of low,
   moderate and high congestion, and fine-tuned the RED parameters so
   that the average queue stayed within the RED region before running
   the experiment proper. Our results are, therefore, not influenced
   by operating in the wrong RED region.

5. The Experiments

   We start by making sure that the background flows do not bias our
   results by computing the fairness index [12] in Section 5.1. We
   proceed to carry out the experiments for bulk transfer, presenting
   the results and analysis in Section 5.2. In Section 5.3 the results
   for transactional transfers are presented along with analysis. More
   details on the experimental results can be found in [27].

5.1. Fairness

   In the course of the experiments we wanted to make sure that our
   choice of the type of background flows did not bias the results we
   collected. Hence we initially carried out some tests with both ECN
   and NON ECN flows as the background flows. We repeated the
   experiments for different drop probabilities and calculated the
   fairness index [12]. We also noticed (when there were equal numbers
   of ECN and NON ECN flows) that the number of packets dropped for
   the NON ECN flows was equal to the number of packets marked for the
   ECN flows, thereby showing that the RED algorithm was fair to both
   kinds of flows.

   Fairness index: The fairness index is a performance metric
   described in [12]. Jain [12] postulates that the network is a
   multi-user system, and derives a metric to see how fairly each user
   is treated. He defines fairness as a function of the variability of
   throughput across users.
   For a given set of user throughputs (x1, x2, ..., xn), the fairness
   index of the set is defined as follows:

   f(x1,x2,...,xn) = square(sum[i=1..n] xi) / (n * sum[i=1..n] square(xi))

   The fairness index always lies between 0 and 1. A value of 1
   indicates that all flows got exactly the same throughput. Each of
   the tests was carried out 10 times to gain confidence in our
   results. To compute the fairness index we used FTP to generate
   traffic.

   Experiment details: At time t=0 we start 2 NON ECN FTP sessions in
   the background to create congestion. At time t=20 seconds we start
   two competing flows. We note the throughput of all the flows in the
   network and calculate the fairness index. The experiment was
   carried out for various maximum drop probabilities and various
   congestion levels. The same procedure is repeated with ECN
   background flows. The fairness index was fairly constant whether
   the background flows were ECN or NON ECN, indicating that the
   choice of background flows introduced no bias.

   Max     Fairness                Fairness
   Drop    With BG                 With BG
   Prob    flows ECN               flows NON ECN

   0.02    0.996888                0.991946
   0.05    0.995987                0.988286
   0.1     0.985403                0.989726
   0.2     0.979368                0.983342

   With the observation that the nature of the background flows does
   not alter the results, we use NON ECN background flows for the rest
   of the experiments.

5.2. Bulk transfers

   The metric we chose for bulk transfer is end user throughput.

   Experiment details: All TCP flows used are RENO TCP. For the case
   of low congestion we start 2 FTP flows in the background at time 0.
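Jain's fairness index formula above translates directly into code; a minimal sketch:

```python
def jain_fairness(throughputs):
    """Jain's fairness index: square(sum(x_i)) / (n * sum(square(x_i))).
    Always between 0 and 1; exactly 1.0 when every flow sees the same
    throughput."""
    n = len(throughputs)
    return sum(throughputs) ** 2 / (n * sum(x * x for x in throughputs))

print(jain_fairness([5.0, 5.0, 5.0, 5.0]))  # identical flows -> 1.0
print(jain_fairness([8.0, 4.0, 4.0, 4.0]))  # unequal flows -> below 1.0
```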
   Then, after about 20 seconds, we start the competing flows: one
   data transfer to the ECN machine and a second to the NON ECN
   machine. The size of the file used is 20MB. For the case of
   moderate congestion we start 5 FTP flows in the background, and for
   the case of high congestion we start 10 FTP flows in the
   background. We repeat the experiments for various maximum drop
   rates, each repeated for a number of sets.

   Observation and Analysis:

   We make three key observations:

   1) As the congestion level increases, the relative advantage for
   ECN increases but the absolute advantage decreases (expected, since
   there are more flows competing for the same link resource). ECN
   still does better than NON ECN even under high congestion.
   Inferring a sample from the collected results: at a maximum drop
   probability of 0.1, for example, the relative advantage of ECN
   increases from 23% to 50% as the congestion level increases from
   low to high.

   2) Maintaining congestion levels and varying the maximum drop
   probability (MDP) reveals that the relative advantage of ECN
   increases with increasing MDP. As an example, for the case of high
   congestion, as we vary the drop probability from 0.02 to 0.5 the
   relative advantage of ECN increases from 10% to 60%.

   3) There were hardly any retransmissions for ECN flows (except the
   occasional packet drop in a minority of the tests for the case of
   high congestion and low maximum drop probability).

   We analyzed tcpdump traces for NON ECN with the help of tcptrace
   and observed that there were hardly any retransmits due to
   timeouts. (Retransmits due to timeouts are inferred by counting the
   number of 3-DUPACK retransmits and subtracting them from the total
   recorded number of retransmits.) This means that over a long period
   of time (as is the case for long bulk transfers), the data-driven
   loss recovery mechanism of the Fast Retransmit/Recovery algorithm
   is very effective.
   The algorithm for ECN on congestion notification from ECE is the
   same as that for a Fast Retransmit for NON ECN. Since both are
   operating in the RED region, ECN barely gets any advantage over NON
   ECN from the signaling (packet drop vs. marking).

   It is clear, however, from the results that ECN flows benefit in
   bulk transfers. We believe that the main advantage of ECN for bulk
   transfers is that less time is spent recovering (whereas NON ECN
   spends time retransmitting), and timeouts are avoided altogether.
   [23] has shown that even with RED deployed, TCP RENO could suffer
   from multiple packet drops within the same window of data, likely
   to lead to multiple congestion reactions or timeouts (these
   problems are alleviated by ECN). However, while TCP Reno has
   performance problems with multiple packets dropped in a window of
   data, New Reno and SACK have no such problems.

   Thus, for scenarios with very high levels of congestion, the
   advantages of ECN for TCP Reno flows could be more dramatic than
   the advantages of ECN for NewReno or SACK flows. An important
   observation to make from our results is that we do not notice
   multiple drops within a single window of data. Thus, we would
   expect that our results are not heavily influenced by Reno's
   performance problems with multiple packets dropped from a window of
   data. We repeated these tests with ECN-patched newer Linux kernels.
   As mentioned earlier, these kernels use a SACK/FACK combination
   with a fallback to New Reno. SACK can be selectively turned off
   (defaulting to New Reno). Our results indicate that ECN still
   improves performance for bulk transfers. More results are available
   in the PDF version [27].
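The relative-advantage percentages quoted throughout these results follow the usual definition of relative gain over the NON ECN baseline; a minimal sketch (the function name and example figures are ours, not from the RFC):

```python
def relative_advantage(ecn_metric, non_ecn_metric):
    """Relative advantage of ECN over NON ECN, as a percentage of the
    NON ECN figure (throughput for bulk transfers, transactions per
    second for transactional tests)."""
    return 100.0 * (ecn_metric - non_ecn_metric) / non_ecn_metric

# e.g. an ECN flow at 1.5 Mb/s vs a NON ECN flow at 1.0 Mb/s -> 50.0
print(relative_advantage(1.5, 1.0))
```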
   As in 1) above, maintaining a maximum drop probability of 0.1 and
   increasing the congestion level, it is observed that ECN-SACK
   improves performance from about 5% at low congestion to about 15%
   at high congestion. In the scenario where high congestion is
   maintained and the maximum drop probability is moved from 0.02 to
   0.5, the relative advantage of ECN-SACK improves from 10% to 40%.
   Although these numbers are lower than the ones exhibited by Reno,
   they do reflect the improvement that ECN offers even in the
   presence of robust recovery mechanisms such as SACK.

5.3. Transactional transfers

   We model transactional transfers by sending a small request and
   getting a response from a server before sending the next request.
   To generate transactional transfer traffic we use Netperf [17] with
   the CRR (Connect Request Response) option. As an example, assume we
   are retrieving a small file of, say, 5-20 KB; in effect we send a
   small request to the server and the server responds by sending us
   the file. The transaction is complete when we receive the complete
   file. To gain confidence in our results we run each test for about
   one hour; for each test there are a few thousand of these requests
   and responses taking place. Although it does not exactly model HTTP
   1.0 traffic, where several concurrent sessions are opened,
   Netperf-CRR is nevertheless a close approximation. Since
   Netperf-CRR waits for one connection to complete before opening the
   next one (0 think time), that single connection could be viewed as
   the slowest response in the set of opened concurrent sessions (in
   HTTP).
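The Netperf-CRR measurement loop described above (back-to-back transactions with zero think time over a fixed interval) can be approximated as follows; this is a simplified sketch of the idea, not Netperf itself, and the stand-in transaction body is ours:

```python
import time

def measure_transaction_rate(do_transaction, duration_seconds):
    """Run back-to-back transactions (zero think time, as with
    Netperf-CRR) for a fixed interval; return completed transactions
    per second."""
    completed = 0
    start = time.monotonic()
    while time.monotonic() - start < duration_seconds:
        do_transaction()  # one full request plus complete response
        completed += 1
    return completed / (time.monotonic() - start)

# A trivial stand-in for the transaction body; a real test would open
# a TCP connection, send a ~1KB request, and read a 5-20KB response.
rate = measure_transaction_rate(lambda: None, duration_seconds=0.1)
print(rate > 0)  # some positive transactions-per-second figure
```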
   The transactional data sizes were selected based on [2], which
   indicates that the average web transaction is around 8-10 KB; the
   smaller (5KB) size was selected to guesstimate the size of
   transactional processing that may become prevalent with policy
   management schemes in the diffserv [4] context. Using Netperf we
   are able to initiate these kinds of transactional transfers for a
   variable length of time. The main metric of interest in this case
   is the transaction rate, which is recorded by Netperf.

   * Transaction rate: The number of requests and complete responses
   for a particular requested size that we are able to do per second.
   For example, if our request is of 1KB and the response is 5KB, then
   we define the transaction rate as the number of such complete
   transactions that we can accomplish per second.

   Experiment details: As in the case of bulk transfers, we start the
   background FTP flows to introduce congestion in the network at time
   0. About 20 seconds later we start the transactional transfers and
   run each test for three minutes. We record the transactions per
   second that are completed. We repeat the test for about an hour and
   plot the various transactions per second, averaged over the runs.
   The experiment is repeated for various maximum drop probabilities,
   file sizes and levels of congestion.

   Observation and Analysis:

   There are three key observations:

   1) As congestion increases (with fixed drop probability) the
   relative advantage for ECN increases (again, the absolute advantage
   does not increase, since more flows are sharing the same
   bandwidth). For example, from the results, if we consider the 5KB
   transactional flow, as we increase the congestion from medium
   congestion (5 congesting flows) to high congestion (10 congesting
   flows) for a maximum drop probability of 0.1, the relative gain for
   ECN increases from 42% to 62%.
   2) Maintaining the congestion level while adjusting the maximum
   drop probability indicates that the relative advantage for ECN
   flows increases. From the case of high congestion for the 5KB flow
   we
