and instructions for creating <code>type-map</code> files.</p>
<h3>Memory-mapping</h3>
<p>In situations where Apache 2.0 needs to look at the contents
of a file being delivered--for example, when doing server-side-include
processing--it normally memory-maps the file if the OS supports
some form of <code>mmap(2)</code>.</p>
<p>On some platforms, this memory-mapping improves performance.
However, there are cases where memory-mapping can hurt the performance
or even the stability of the httpd:</p>
<ul>
<li>
<p>On some operating systems, <code>mmap</code> does not scale
as well as <code>read(2)</code> when the number of CPUs increases.
On multiprocessor Solaris servers, for example, Apache 2.0 sometimes
delivers server-parsed files faster when <code>mmap</code> is disabled.</p>
</li>
<li>
<p>If you memory-map a file located on an NFS-mounted filesystem
and a process on another NFS client machine deletes or truncates
the file, your process may get a bus error the next time it tries
to access the mapped file content.</p>
</li>
</ul>
<p>For installations where either of these factors applies, you
should use <code>EnableMMAP off</code> to disable the memory-mapping
of delivered files. (Note: This directive can be overridden on
a per-directory basis.)</p>
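<p>As an illustration only (the directory path below is hypothetical),
the directive can be scoped so that memory-mapping stays enabled in
general but is turned off for content served from an NFS mount:</p>
<div class="example"><p><code>
# Hypothetical NFS-mounted document directory<br />
&lt;Directory "/mnt/nfs/htdocs"&gt;<br />
&nbsp;&nbsp;EnableMMAP Off<br />
&lt;/Directory&gt;
</code></p></div>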
<h3>Sendfile</h3>
<p>In situations where Apache 2.0 can ignore the contents of the file
to be delivered -- for example, when serving static file content --
it normally uses the kernel sendfile support to deliver the file if the OS
supports the <code>sendfile(2)</code> operation.</p>
<p>On most platforms, using sendfile improves performance by eliminating
separate read and send mechanics. However, there are cases where using
sendfile can harm the stability of the httpd:</p>
<ul>
<li>
<p>Some platforms may have broken sendfile support that the build
system did not detect, especially if the binaries were built on
another box and moved to such a machine with broken sendfile support.</p>
</li>
<li>
<p>With NFS-mounted files, the kernel may be unable
to reliably serve the network file through its own cache.</p>
</li>
</ul>
<p>For installations where either of these factors applies, you
should use <code>EnableSendfile off</code> to disable sendfile
delivery of file contents. (Note: This directive can be overridden
on a per-directory basis.)</p>
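<p>A minimal sketch, again with a purely illustrative path, disables
sendfile delivery just for the network-mounted directory:</p>
<div class="example"><p><code>
&lt;Directory "/mnt/nfs/htdocs"&gt;<br />
&nbsp;&nbsp;EnableSendfile Off<br />
&lt;/Directory&gt;
</code></p></div>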
<h3><a name="process" id="process">Process Creation</a></h3>
<p>Prior to Apache 1.3 the <code class="directive"><a href="../mod/prefork.html#minspareservers">MinSpareServers</a></code>, <code class="directive"><a href="../mod/prefork.html#maxspareservers">MaxSpareServers</a></code>, and <code class="directive"><a href="../mod/mpm_common.html#startservers">StartServers</a></code> settings all had drastic effects on
benchmark results. In particular, Apache required a "ramp-up"
period in order to reach a number of children sufficient to serve
the load being applied. After the initial spawning of
<code class="directive"><a href="../mod/mpm_common.html#startservers">StartServers</a></code> children,
only one child per second would be created to satisfy the
<code class="directive"><a href="../mod/prefork.html#minspareservers">MinSpareServers</a></code>
setting. So a server being accessed by 100 simultaneous
clients, using the default <code class="directive"><a href="../mod/mpm_common.html#startservers">StartServers</a></code> of <code>5</code> would take on
the order of 95 seconds to spawn enough children to handle
the load. This works fine in practice on real-life servers,
because they aren't restarted frequently. But it does really
poorly on benchmarks which might only run for ten minutes.</p>
<p>The one-per-second rule was implemented in an effort to
avoid swamping the machine with the startup of new children. If
the machine is busy spawning children it can't service
requests. But it has such a drastic effect on the perceived
performance of Apache that it had to be replaced. As of Apache
1.3, the code will relax the one-per-second rule. It will spawn
one, wait a second, then spawn two, wait a second, then spawn
four, and it will continue exponentially until it is spawning
32 children per second. It will stop whenever it satisfies the
<code class="directive"><a href="../mod/prefork.html#minspareservers">MinSpareServers</a></code>
setting.</p>
<p>This appears to be responsive enough that it's almost
unnecessary to twiddle the <code class="directive"><a href="../mod/prefork.html#minspareservers">MinSpareServers</a></code>, <code class="directive"><a href="../mod/prefork.html#maxspareservers">MaxSpareServers</a></code> and <code class="directive"><a href="../mod/mpm_common.html#startservers">StartServers</a></code> knobs. When more than 4 children are
spawned per second, a message will be emitted to the
<code class="directive"><a href="../mod/core.html#errorlog">ErrorLog</a></code>. If you
see a lot of these errors then consider tuning these settings.
Use the <code class="module"><a href="../mod/mod_status.html">mod_status</a></code> output as a guide.</p>
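<p>As a sketch only (the numbers below are arbitrary and should be
derived from your own <code class="module"><a href="../mod/mod_status.html">mod_status</a></code>
observations), a prefork tuning might look like:</p>
<div class="example"><p><code>
StartServers 10<br />
MinSpareServers 10<br />
MaxSpareServers 25
</code></p></div>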
<p>Related to process creation is process death induced by the
<code class="directive"><a href="../mod/mpm_common.html#maxrequestsperchild">MaxRequestsPerChild</a></code>
setting. By default this is <code>0</code>,
which means that there is no limit to the number of requests
handled per child. If your configuration currently has this set
to some very low number, such as <code>30</code>, you may want to bump this
up significantly. If you are running SunOS or an old version of
Solaris, limit this to <code>10000</code> or so because of memory leaks.</p>
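<p>For example (the value is only illustrative), a configuration that
had been capping children at a few dozen requests could be raised to:</p>
<div class="example"><p><code>
MaxRequestsPerChild 10000
</code></p></div>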
<p>When keep-alives are in use, children will be kept busy
doing nothing waiting for more requests on the already open
connection. The default <code class="directive"><a href="../mod/core.html#keepalivetimeout">KeepAliveTimeout</a></code> of <code>15</code>
seconds attempts to minimize this effect. The tradeoff here is
between network bandwidth and server resources. In no event
should you raise this above about <code>60</code> seconds, as <a href="http://www.research.digital.com/wrl/techreports/abstracts/95.4.html">
most of the benefits are lost</a>.</p>
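<p>For instance, keeping keep-alives enabled with the default timeout
stays well within the range discussed above:</p>
<div class="example"><p><code>
KeepAlive On<br />
KeepAliveTimeout 15
</code></p></div>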
</div><div class="top"><a href="#page-header"><img alt="top" src="../images/up.gif" /></a></div>
<div class="section">
<h2><a name="compiletime" id="compiletime">Compile-Time Configuration Issues</a></h2>
<h3>Choosing an MPM</h3>
<p>Apache 2.x supports pluggable concurrency models, called
<a href="../mpm.html">Multi-Processing Modules</a> (MPMs).
When building Apache, you must choose an MPM to use. There
are platform-specific MPMs for some platforms:
<code class="module"><a href="../mod/beos.html">beos</a></code>, <code class="module"><a href="../mod/mpm_netware.html">mpm_netware</a></code>,
<code class="module"><a href="../mod/mpmt_os2.html">mpmt_os2</a></code>, and <code class="module"><a href="../mod/mpm_winnt.html">mpm_winnt</a></code>. For
general Unix-type systems, there are several MPMs from which
to choose. The choice of MPM can affect the speed and scalability
of the httpd:</p>
<ul>
<li>The <code class="module"><a href="../mod/worker.html">worker</a></code> MPM uses multiple child
processes with many threads each. Each thread handles
one connection at a time. Worker generally is a good
choice for high-traffic servers because it has a smaller
memory footprint than the prefork MPM.</li>
<li>The <code class="module"><a href="../mod/prefork.html">prefork</a></code> MPM uses multiple child
processes with one thread each. Each process handles
one connection at a time. On many systems, prefork is
comparable in speed to worker, but it uses more memory.
Prefork's threadless design has advantages over worker
in some situations: it can be used with non-thread-safe
third-party modules, and it is easier to debug on platforms
with poor thread debugging support.</li>
</ul>
<p>For more information on these and other MPMs, please
see the MPM <a href="../mpm.html">documentation</a>.</p>
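<p>The MPM is selected when <code>configure</code> is run, for example
(mirroring the build example later in this document):</p>
<div class="example"><p><code>
./configure --with-mpm=worker
</code></p></div>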
<h3><a name="modules" id="modules">Modules</a></h3>
<p>Since memory usage is such an important consideration in
performance, you should attempt to eliminate modules that you are
not actually using. If you have built the modules as <a href="../dso.html">DSOs</a>, eliminating modules is a simple
matter of commenting out the associated <code class="directive"><a href="../mod/mod_so.html#loadmodule">LoadModule</a></code> directive for that module.
This allows you to experiment with removing modules and seeing
if your site still functions in their absence.</p>
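<p>As a hypothetical illustration (module file names and paths depend
on how your server was built), commenting out a single <code class="directive"><a href="../mod/mod_so.html#loadmodule">LoadModule</a></code> line is enough to keep that DSO from
being loaded:</p>
<div class="example"><p><code>
LoadModule mime_module modules/mod_mime.so<br />
LoadModule dir_module modules/mod_dir.so<br />
# LoadModule speling_module modules/mod_speling.so
</code></p></div>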
<p>If, on the other hand, you have modules statically linked
into your Apache binary, you will need to recompile Apache in
order to remove unwanted modules.</p>
<p>An associated question that arises here is, of course, what
modules you need, and which ones you don't. The answer here
will, of course, vary from one web site to another. However, the
<em>minimal</em> list of modules which you can get by with tends
to include <code class="module"><a href="../mod/mod_mime.html">mod_mime</a></code>, <code class="module"><a href="../mod/mod_dir.html">mod_dir</a></code>,
and <code class="module"><a href="../mod/mod_log_config.html">mod_log_config</a></code>. <code>mod_log_config</code> is,
of course, optional, as you can run a web site without log
files. This is, however, not recommended.</p>
<h3>Atomic Operations</h3>
<p>Some modules, such as <code class="module"><a href="../mod/mod_cache.html">mod_cache</a></code> and
recent development builds of the worker MPM, use APR's
atomic API. This API provides atomic operations that can
be used for lightweight thread synchronization.</p>
<p>By default, APR implements these operations using the
most efficient mechanism available on each target
OS/CPU platform. Many modern CPUs, for example, have
an instruction that does an atomic compare-and-swap (CAS)
operation in hardware. On some platforms, however, APR
defaults to a slower, mutex-based implementation of the
atomic API in order to ensure compatibility with older
CPU models that lack such instructions. If you are
building Apache for one of these platforms, and you plan
to run only on newer CPUs, you can select a faster atomic
implementation at build time by configuring Apache with
the <code>--enable-nonportable-atomics</code> option:</p>
<div class="example"><p><code>
./buildconf<br />
./configure --with-mpm=worker --enable-nonportable-atomics=yes
</code></p></div>
<p>The <code>--enable-nonportable-atomics</code> option is
relevant for the following platforms:</p>
<ul>
<li>Solaris on SPARC<br />
By default, APR uses mutex-based atomics on Solaris/SPARC.
If you configure with <code>--enable-nonportable-atomics</code>,
however, APR generates code that uses a SPARC v8plus opcode for
fast hardware compare-and-swap. If you configure Apache with
this option, the atomic operations will be more efficient
(allowing for lower CPU utilization and higher concurrency),
but the resulting executable will run only on UltraSPARC
chips.
</li>
<li>Linux on x86<br />
By default, APR uses mutex-based atomics on Linux. If you
configure with <code>--enable-nonportable-atomics</code>,
however, APR generates code that uses a 486 opcode for fast
hardware compare-and-swap. This will result in more efficient
atomic operations, but the resulting executable will run only
on 486 and later chips (and not on 386).
</li>
</ul>
<h3>mod_status and ExtendedStatus On</h3>
<p>If you include <code class="module"><a href="../mod/mod_status.html">mod_status</a></code> and you also set
<code>ExtendedStatus On</code> when building and running
Apache, then on every request Apache will perform two calls to
<code>gettimeofday(2)</code> (or <code>times(2)</code>
depending on your operating system), and (pre-1.3) several
extra calls to <code>time(2)</code>. This is all done so that
the status report contains timing indications. For highest
performance, set <code>ExtendedStatus off</code> (which is the
default).</p>
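<p>If you do not need the timing information, the report can be kept
in its cheaper form with:</p>
<div class="example"><p><code>
ExtendedStatus Off
</code></p></div>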