    ulimit -n 32768

increases the current process's limit. I verified that a process on
Red Hat 6.0 (2.2.5 or so plus patches) can open at least 31000 file
descriptors this way. Another fellow has verified that a process on
2.2.12 can open at least 90000 file descriptors this way (with
appropriate limits). The upper bound seems to be available memory.
(A setrlimit() sketch for doing this from inside a program appears
below, after the tips.)

Stephen C. Tweedie posted about how to set ulimit limits globally or
per-user at boot time using initscript and pam_limit.

In older 2.2 kernels, though, the number of open files per process is
still limited to 1024, even with the above changes.

See also Oskar's 1998 post, which talks about the per-process and
system-wide limits on file descriptors in the 2.0.36 kernel.

Limits on threads

On any architecture, you may need to reduce the amount of stack space
allocated for each thread to avoid running out of virtual memory. You
can set this at runtime with pthread_attr_init() if you're using
pthreads (sketch below, after the tips).

* Solaris: it supports as many threads as will fit in memory, I hear.

* FreeBSD: ?

* Linux: Even the 2.2.13 kernel limits the number of threads, at
  least on Intel. I don't know what the limits are on other
  architectures. Mingo posted a patch for 2.1.131 on Intel that
  removed this limit. It appears to be integrated into 2.3.20.

  See also Volano's detailed instructions for raising file, thread,
  and FD_SET limits in the 2.2 kernel. Wow. This stunning little
  document steps you through a lot of stuff that would be hard to
  figure out yourself. This is a must-read, even though some of its
  advice is already out of date.

* Java: See Volano's detailed benchmark info, plus their info on how
  to tune various systems to handle lots of threads.

Other limits/tips

* select() is limited to FD_SETSIZE handles. This limit is compiled
  into the standard library and user programs. The similar call
  poll() has no comparable limit, and can have less overhead than
  select(). (Sketch below.)

* Old system libraries might use 16-bit variables to hold file
  handles, which causes trouble above 32767 handles. glibc2.1 should
  be ok.

* Many systems use 16-bit variables to hold process or thread id's.
  It would be interesting to port the Volano scalability benchmark to
  C and see what the upper limit on the number of threads is for the
  various operating systems.

* Too much thread-local memory is preallocated by some operating
  systems; if each thread gets 1MB, and total VM space is 2GB, that
  creates an upper limit of 2000 threads.

* Normally, data gets copied many times on its way from here to
  there. mmap() and sendfile() can be used to reduce this overhead in
  some cases. IO-Lite is a proposal (already implemented on FreeBSD)
  for a set of I/O primitives that gets rid of the need for many
  copies. It's sexy; go read it. But see also Alan Cox's opinion of
  zero-copy.

* The sendfile() function in Linux and FreeBSD lets you tell the
  kernel to send part or all of a file. This lets the OS do it as
  efficiently as possible. It can be used equally well in servers
  using threads or servers using nonblocking I/O. (In Linux, it's
  poorly documented at the moment; use _syscall4 to call it. Andi
  Kleen is writing new man pages that cover this. See also the sketch
  below.) Rumor has it, ftp.cdrom.com benefited noticeably from
  sendfile().

* A new socket option under Linux, TCP_CORK, tells the kernel to
  avoid sending partial frames, which helps a bit, e.g. when there
  are lots of little write() calls you can't bundle together for some
  reason. Unsetting the option flushes the buffer. (Sketch below.)

* Not all threads are created equal. The clone() function in Linux
  (and its friends in other operating systems) lets you create a
  thread that has its own current working directory, for instance,
  which can be very helpful when implementing an ftp server. (Sketch
  below.) See Hoser FTPd for an example of the use of native threads
  rather than pthreads.

* To keep the number of filehandles per process down, servers can
  fork() once they reach the desired maximum; the child finishes
  serving the existing clients, and the parent accepts and services
  new clients. (If the desired maximum is 1, this degenerates to the
  classical one-process-per-client model.)

* One developer using sendfile() with FreeBSD reports that using
  POLLWRBAND instead of POLLOUT makes a big difference.

* Look at the performance comparison graph at the bottom of
  http://www.acme.com/software/thttpd/benchmarks.html. Notice how
  various servers have trouble above 128 connections, even on Solaris
  2.6? Anyone who figures out why, let me know.

  Note: if the TCP stack has a bug that causes a short (200ms) delay
  at SYN or FIN time, as Linux 2.2.0-2.2.6 had, and the OS or http
  daemon has a hard limit on the number of connections open, you
  would expect exactly this behavior. There may be other causes.

* "Re: fix for hybrid server problems" by Vivek Sadananda Pai
  (vivek@cs.rice.edu) on new-httpd, May 9th, notes:

      "I've compared the raw performance of a select-based server
      with a multiple-process server on both FreeBSD and Solaris/x86.
      On microbenchmarks, there's only a marginal difference in
      performance stemming from the software architecture. The big
      performance win for select-based servers stems from doing
      application-level caching. While multiple-process servers can
      do it at a higher cost, it's harder to get the same benefits on
      real workloads (vs microbenchmarks). I'll be presenting those
      measurements as part of a paper that'll appear at the next
      Usenix conference. If you've got postscript, the paper is
      available at http://www.cs.rice.edu/~vivek/flash99/"
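Here's a minimal sketch of the descriptor-limit trick from the top of
this section, done from inside a program with the standard
getrlimit()/setrlimit() calls instead of the shell's ulimit; the
32768 figure just mirrors the example above.

    #include <stdio.h>
    #include <sys/time.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        rl.rlim_cur = 32768;            /* soft limit, like ulimit -n 32768 */
        if (rl.rlim_max < rl.rlim_cur)
            rl.rlim_max = rl.rlim_cur;  /* raising the hard limit needs root */
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        printf("descriptor limit now %lu\n", (unsigned long)rl.rlim_cur);
        return 0;
    }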
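For the thread-stack advice under "Limits on threads", a sketch using
pthread_attr_setstacksize(), the attribute setter that goes with the
pthread_attr_init() mentioned there (assuming your pthreads
implementation supports it). The 64kB stack is an assumption; size it
to what your threads actually use.

    #include <limits.h>
    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        (void)arg;
        /* ... serve one client ... */
        return NULL;
    }

    int main(void)
    {
        pthread_attr_t attr;
        pthread_t tid;
        size_t stack = 64 * 1024;       /* assumption: 64kB per thread */

    #ifdef PTHREAD_STACK_MIN
        if (stack < PTHREAD_STACK_MIN)
            stack = PTHREAD_STACK_MIN;  /* don't go below the system floor */
    #endif
        pthread_attr_init(&attr);
        pthread_attr_setstacksize(&attr, stack);
        if (pthread_create(&tid, &attr, worker, NULL) != 0) {
            fprintf(stderr, "pthread_create failed\n");
            return 1;
        }
        pthread_join(tid, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }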
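A sketch of the poll() alternative to select(): the caller allocates
the pollfd array, so nothing ties it to FD_SETSIZE the way select()'s
fd_set is.

    #include <poll.h>
    #include <stdio.h>

    /* Wait for readability on nfds descriptors; unlike select(),
     * nfds may exceed FD_SETSIZE.  The caller fills in fds[i].fd
     * and sets fds[i].events = POLLIN. */
    int wait_readable(struct pollfd *fds, nfds_t nfds, int timeout_ms)
    {
        nfds_t i;
        int n = poll(fds, nfds, timeout_ms);

        if (n < 0) {
            perror("poll");
            return -1;
        }
        for (i = 0; i < nfds; i++) {
            if (fds[i].revents & POLLIN) {
                /* fds[i].fd is ready; safe to read() without blocking */
            }
        }
        return n;
    }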
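A sendfile() sketch, with a caveat: the man pages were still being
written when the tip above was noted, so this uses the prototype
glibc later shipped in <sys/sendfile.h> (out fd, in fd, offset
pointer, count). On a libc old enough to lack it, you'd be back to
the _syscall4 route mentioned above.

    #include <stdio.h>
    #include <sys/sendfile.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    /* Send the whole of file_fd down sock_fd, letting the kernel
     * do the copying.  sendfile() may send less than asked, so loop. */
    int send_whole_file(int sock_fd, int file_fd)
    {
        struct stat st;
        off_t offset = 0;

        if (fstat(file_fd, &st) != 0)
            return -1;
        while (offset < st.st_size) {
            ssize_t n = sendfile(sock_fd, file_fd, &offset,
                                 st.st_size - offset);
            if (n <= 0) {
                perror("sendfile");
                return -1;
            }
            /* the kernel advances offset for us */
        }
        return 0;
    }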
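The TCP_CORK dance looks like this: set the cork, do the little
writes, then unset it to flush. Linux-specific; error checking on the
write() calls is omitted to keep the sketch short.

    #include <netinet/in.h>     /* IPPROTO_TCP */
    #include <netinet/tcp.h>    /* TCP_CORK */
    #include <sys/socket.h>
    #include <unistd.h>

    /* Bundle several small writes into full-sized frames. */
    int write_corked(int sock_fd, const char *hdr, size_t hdrlen,
                     const char *body, size_t bodylen)
    {
        int on = 1, off = 0;

        if (setsockopt(sock_fd, IPPROTO_TCP, TCP_CORK, &on, sizeof(on)) != 0)
            return -1;
        write(sock_fd, hdr, hdrlen);    /* partial frames held back... */
        write(sock_fd, body, bodylen);
        /* ...until the cork comes out, which flushes the buffer */
        return setsockopt(sock_fd, IPPROTO_TCP, TCP_CORK, &off, sizeof(off));
    }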
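And a sketch of the clone() trick from the ftp-server tip: omit
CLONE_FS and the child keeps its own current directory, even though
CLONE_VM shares memory like a thread. Linux-specific; the directory
path is a made-up example.

    #define _GNU_SOURCE         /* for clone() */
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define STACKSIZE (64 * 1024)

    static int session(void *arg)
    {
        /* CLONE_VM shares memory with the parent, but since CLONE_FS
         * is omitted, this chdir() does not affect the parent. */
        if (chdir((const char *)arg) != 0)
            return 1;
        /* ... serve one ftp client from its own cwd ... */
        return 0;
    }

    int main(void)
    {
        char *stack = malloc(STACKSIZE);
        pid_t pid;

        if (stack == NULL)
            return 1;
        /* the stack grows down on most architectures, so pass the top */
        pid = clone(session, stack + STACKSIZE, CLONE_VM | SIGCHLD,
                    "/var/ftp/pub");    /* hypothetical directory */
        if (pid < 0) {
            perror("clone");
            return 1;
        }
        waitpid(pid, NULL, 0);
        free(stack);
        return 0;
    }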
<BR> A new socket option under Linux, TCP_CORK, tells the kernel to <BR>avoid sending partial frames, which helps a bit e.g. <BR> when there are lots of little write() calls you can't bundle <BR>together for some reason. Unsetting the option flushes the buffer. <BR> Not all threads are created equal. The clone() function in Linux <BR>(and its friends in other operating systems) lets you create a <BR> thread that has its own current working directory, for instance, <BR>which can be very helpful when implementing an ftp server. <BR> See Hoser FTPd for an example of the use of native threads rather <BR>than pthreads. <BR> To keep the number of filehandles per process down, servers can <BR>fork() once they reach the desired maximum; the child <BR> finishes serving the existing clients, and the parent accepts and <BR>services new clients. (If the desired maximum is 1, this <BR> degenerates to the classical one-process-per-client model.) <BR> One developer using sendfile() with Freebsd reports that using <BR>POLLWRBAND instead of POLLOUT makes a big <BR> difference. <BR> Look at the performance comparison graph at the bottom of http: <BR>//www.acme.com/software/thttpd/benchmarks.html. <BR> Notice how various servers have trouble above 128 connections, even <BR> on Solaris 2.6? Anyone who figures out why, let me <BR> know. <BR> Note: if the TCP stack has a bug that causes a short (200ms) <BR>delay at SYN or FIN time, as Linux 2.2.0-2.2.6 had, and <BR> the OS or http daemon has a hard limit on the number of connections <BR> open, you would expect exactly this behavior. There <BR> may be other causes. <BR> "Re: fix for hybrid server problems" by Vivek Sadananda Pai <BR>(<A HREF="mailto:vivek@cs.rice.edu)">vivek@cs.rice.edu)</A> on new-httpd, May 9th, notes: <BR> <BR> "I've compared the raw performance of a select-based server <BR>with a multiple-process server on both <BR> FreeBSD and Solaris/x86. On microbenchmarks, there's only a <BR>marginal difference in performance stemming <BR> from the software architecture. The big performance win for <BR>select-based servers stems from doing <BR> application-level caching. While multiple-process servers <BR>can do it at a higher cost, it's harder to get the same <BR> benefits on real workloads (vs microbenchmarks). I'll be <BR>presenting those measurements as part of a paper <BR> that'll appear at the next Usenix conference. If you've got <BR>postscript, the paper is available at <BR> <A HREF="http://www.cs.rice.edu/~vivek/flash99/"">http://www.cs.rice.edu/~vivek/flash99/"</A> <BR> <BR>Kernel Issues <BR> <BR>For Linux, it looks like kernel bottlenecks are being fixed constantly. <BR> See Linux HQ, Kernel Traffic, and the Linux-Kernel mailing <BR>list (Example interesting posts by a user asking how to tune, and Dean <BR>