threads.txt
Multithreading support in memcached

OVERVIEW

By default, memcached is compiled as a single-threaded application. This is the most CPU-efficient mode of operation, and it is appropriate for memcached instances running on single-processor servers or whose request volume is low enough that available CPU power is not a bottleneck.

More heavily-used memcached instances can benefit from multithreaded mode. To enable it, use the "--enable-threads" option to the configure script:

./configure --enable-threads

You must have the POSIX thread functions (pthread_*) on your system in order to use memcached's multithreaded mode.

Once you have a thread-capable memcached executable, you can control the number of threads using the "-t" option; the default is 4. On a machine that's dedicated to memcached, you will typically want one thread per processor core. Due to memcached's nonblocking architecture, there is no real advantage to using more threads than the number of CPUs on the machine; doing so will increase lock contention and is likely to degrade performance.

INTERNALS

The threading support is mostly implemented as a series of wrapper functions that protect calls to underlying code with one of a small number of locks. In single-threaded mode, the wrappers are replaced with direct invocations of the target code using #define; that is done in memcached.h. This approach allows memcached to be compiled in either single- or multi-threaded mode.

Each thread has its own instance of libevent ("base" in libevent terminology). The only direct interaction between threads is for new connections. One of the threads handles the TCP listen socket; each new connection is passed to a different thread on a round-robin basis. After that, each thread operates on its set of connections as if it were running in single-threaded mode, using libevent to manage nonblocking I/O as usual.

UDP requests are a bit different, since there is only one UDP socket that's shared by all clients.
The UDP socket is monitored by all of the threads. When a datagram comes in, all the threads that aren't already processing another request will receive "socket readable" callbacks from libevent. Only one thread will successfully read the request; the others will go back to sleep or, in the case of a very busy server, will read whatever other UDP requests are waiting in the socket buffer. Note that on moderately busy servers this results in increased CPU consumption, since threads will constantly wake up and find no input waiting for them. But short of much more major surgery on the I/O code, this is not easy to avoid.

TO DO

The locking is currently very coarse-grained. There is, for example, one lock that protects all the calls to the hashtable-related functions. Since memcached spends much of its CPU time on command parsing and response assembly, rather than on managing the hashtable per se, this is not a huge bottleneck for small numbers of processors. However, the locking will likely have to be refined if memcached needs to run well on massively-parallel machines.

One cheap optimization to reduce contention on that lock: move the hash value computation so it occurs before the lock is obtained whenever possible. Right now the hash is performed at the lowest levels of the functions in assoc.c. If it were instead computed in memcached.c, then passed along with the key and length into the items.c code and down into assoc.c, that would reduce the amount of time each thread needs to hold the hashtable lock.
