<html><head><title>Microbenchmark comparing poll, kqueue, and /dev/poll - 24 Oct 2000</title></head><body>
<h1>Microbenchmark comparing poll, kqueue, and /dev/poll - 24 Oct 2000</h1>
<ul><li><a href="#goal">Goal</a><li><a href="#desc">Description</a><li><a href="#setup.linux">Setup - Linux</a><li><a href="#setup.solaris">Setup - Solaris</a><li><a href="#procedure">Procedure</a><li><a href="#results">Results</a><li><a href="#discussion">Discussion</a><li><a href="#profiling">Results - kernel profiling</a><li><a href="#lmbench">lmbench results</a></ul>
<h2><a name="goal">Goal</a></h2>
Various mechanisms for handling multiple sockets with a single server thread are available; not all mechanisms are available on all platforms. A C++ class, Poller, has been written which abstracts these mechanisms into a common interface, and should provide both high performance and portability. This microbenchmark compares the performance of select(), poll(), kqueue(), and /dev/poll using Poller on Solaris 7, Linux 2.2.14, Linux 2.4.0-test10-pre4, and FreeBSD 4.x-STABLE.
<p>Note that this is a synthetic microbenchmark, not a real-world benchmark. In the real world, other effects often swamp the kinds of things measured here.
<h2><a name="desc">Description</a></h2>
Poller and Poller_bench are GPL'd software; you can <a href="../dkftpbench/">download the source</a> or <a href="doc/Poller.html">view the doc online</a>.
<p>Poller_bench sets up an array of socketpairs, has a Poller monitor the read end of each socketpair, and measures how long it takes to execute the following snippet of code with various styles of Poller:
<pre>
    for (k=0; k&lt;num_active; k++)
        write(fdpairs[k * spacing + spacing/2][1], "a", 1);
    poller.waitAndDispatchEvents(0);
</pre>
where spacing = num_pipes / num_active.
<p><code>poller.waitAndDispatchEvents()</code> calls <code>poll()</code> or <code>ioctl(m_dpfd, DP_POLL, &amp;dopoll)</code>, as appropriate, then calls an event handler for each ready socket.
<p>The event handler
for this benchmark just executes
<pre>
    read(event->fd, buf, 1);
</pre>
<h2><a name="setup.linux">Setup - Linux</a></h2>
<ul>
<li>Download the /dev/poll patch from <a href="http://www.citi.umich.edu/projects/linux-scalability/patches/">http://www.citi.umich.edu/projects/linux-scalability/patches/</a>. Be careful to get the right version for your kernel; not all kernels are supported. I used vanilla kernel 2.2.14.
<li>Apply the patch, configure your kernel to enable /dev/poll support (with 'make menuconfig'), and rebuild the kernel.
<li>Create an entry in /dev for the /dev/poll driver with
<pre>
cd /dev
mknod poll u 15 125
chmod 666 /dev/poll
</pre>
where 15 is MISC_MAJOR and 125 is DEVPOLL_MINOR from the kernel sources; your MISC_MAJOR may differ, so be sure to check /usr/src/linux/include/linux/major.h for the definition of MISC_MAJOR on your system.
<li>Create a symbolic link so the benchmark (which includes usr/include/sys/devpoll.h) can be compiled:
<pre>
cd /usr/include/asm
ln -s ../linux/devpoll.h
</pre>
</ul>
<h2><a name="setup.solaris">Setup - Solaris</a></h2>
On Solaris 7, you may need to install a patch to get /dev/poll (or at least to get it to work properly); it's standard in Solaris 8. See also <a href="http://www.kegel.com/c10k.html#nb./dev/poll">my notes on /dev/poll</a>.
<p>Also, a line near the end of /usr/include/sys/poll_impl.h may need to be moved to get it to compile when included from C++ programs.
<h2><a name="procedure">Procedure</a></h2>
Download the dkftpbench source tarball from <a href="http://www.kegel.com/dkftpbench/">http://www.kegel.com/dkftpbench/</a> and unpack.
<p>On Linux, if you want kernel profile results, boot with the argument 'profile=2' to enable the kernel's builtin profiler.
<p>Run the shell script Poller_bench.sh as follows:
<pre>
  su
  sh Poller_bench.sh
</pre>
<p>The script raises file descriptor limits, then runs the command
<pre>
	./Poller_bench 5 1 spd 100 1000 10000
</pre>
<p>It should be run on an idle machine, with no email client, web browser, or X server
running.  The Pentium III machine at my disposal was running a single sshd; the Solaris machine was running two sshd's and an idle X Window server, so it wasn't quite as idle.
<h2><a name="results">Results</a></h2>
With 1 active socket amongst 100, 1000, or 10000 total sockets, waitAndDispatchEvents takes the following amount of wall-clock time, in microseconds (lower is faster):
<p>On a 167MHz sun4u Sparc Ultra-1 running SunOS 5.7 (Solaris 7) Generic_106541-11:
<b><pre>
    pipes    100    1000   10000
   select    151       -       -
     poll    470     676    3742
/dev/poll     61      70      92
165133 microseconds to open each of 10000 socketpairs
29646 microseconds to close each of 10000 socketpairs
</pre></b>
<p>On a 4x400MHz Enterprise 450 running Solaris 8 (results contributed by Doug Lea):
<b><pre>
    pipes    100    1000   10000
   select     60       -       -
     poll    273     388    1559
/dev/poll     27      28      34
116586 microseconds to open each of 10000 socketpairs
19235 microseconds to close each of 10000 socketpairs
</pre></b>
(The machine wasn't idle, but at most one CPU was doing other stuff during the test, and the test seemed to occupy only one CPU.)
<p>On an idle 650 MHz dual Pentium III running Red Hat Linux 6.2 with kernel 2.2.14smp plus the /dev/poll patch plus <a href="close.patch">Dave Miller's patch to speed up close()</a>:
<b><pre>
    pipes    100    1000   10000
   select     28       -       -
     poll     23     890   11333
/dev/poll     19     146    4264
</pre></b>
(Time to open or close socketpairs was not recorded, but was under 14 microseconds.)
<p>On the same machine as above, but with kernel 2.4.0-test10-pre4 smp:
<b><pre>
    pipes    100    1000   10000
   select     52       -       -
     poll     49    1184   14660
26 microseconds to open each of 10000 socketpairs
14 microseconds to close each of 10000 socketpairs
</pre></b>
(Note: the /dev/poll patch does not apply cleanly to recent 2.4.0-test kernels, I believe, and I did not
try it.)
<p>On a single-processor 600MHz Pentium III with 512MB of memory, running FreeBSD 4.x-STABLE (results contributed by <a href="http://www.freebsd.org/~jlemon">Jonathan Lemon</a>):
<b><pre>
    pipes    100    1000   10000   30000
   select     54       -       -       -
     poll     50     552   11559   35178
   kqueue      8       8       8       8
</pre></b>
(Note: Jonathan also varied the number of <i>active</i> pipes, and found that kqueue's time scaled linearly with that number, whereas poll's time scaled linearly with the number of <i>total</i> pipes.)
<p>The test was also run with pipes instead of socketpairs (results not shown); the performance on Solaris was about the same, but the /dev/poll driver on Linux did not perform well with pipes. According to Niels Provos,
<blockquote><i>The hinting code which causes a considerable speedup for /dev/poll only applies to network sockets. If there are any serious applications that make use of pipes in a manner that would benefit from /dev/poll then the pipe code needs to return hints too.</i></blockquote>
<h2><a name="discussion">Discussion</a></h2>
<h3>Miscellany</h3>
Running the benchmark was painfully slow on Solaris 7 because the time to create or close socketpairs was outrageous. Likewise, on unpatched 2.2.14, the time to close socketpairs was outrageous, but the recent patch from Dave Miller fixes that nicely.<p>
