
memory_management.txt

memcached is a high-performance, distributed memory object caching system
Date: Fri, 5 Sep 2003 20:31:03 +0300
From: Anatoly Vorobey <mellon@pobox.com>
To: memcached@lists.danga.com
Subject: Re: Memory Management

On Fri, Sep 05, 2003 at 12:07:48PM -0400, Kyle R. Burton wrote:

> prefixing keys with a container identifier). We have just begun to
> look at the implementation of the memory management subsystem with
> regard to its allocation, de-allocation and compaction approaches.
> Is there any documentation or discussion of how this subsystem
> operates? (slabs.c?)

There's no documentation yet, and it's worth mentioning that this subsystem is the most active area of memcached development at the moment (however, none of the changes will modify the way memcached presents itself to clients; they're primarily directed at making memcached use memory more efficiently).

Here's a quick recap of what it does now and what is being worked on.

The primary goal of the slabs subsystem in memcached was to eliminate memory fragmentation entirely by using fixed-size memory chunks drawn from a few predetermined size classes (early versions of memcached relied on malloc()'s handling of fragmentation, which proved woefully inadequate for our purposes). For instance, suppose we decide at the outset that the list of possible sizes is 64 bytes, 128 bytes, 256 bytes, etc., doubling all the way up to 1MB. For each size class in this list (each possible size) we maintain a list of free chunks of that size. Whenever a request comes in for a particular size, it is rounded up to the closest size class and a free chunk is taken from that class. In the example above, if you request 100 bytes of memory from the slabs subsystem, you'll actually get a 128-byte chunk, from the 128-byte size class.
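The rounding-up step described above can be sketched in a few lines of Python. This is an illustration of the doubling scheme from this mail, not memcached's actual slabs.c code; the class list and function name are assumptions:

```python
# Hypothetical sketch: power-of-two size classes from 64 bytes up to 1 MB,
# as in the "naive" doubling scheme described above.
SIZE_CLASSES = [64 << i for i in range(15)]  # 64, 128, 256, ..., 1048576

def pick_class(requested_bytes):
    """Round a request up to the smallest size class that fits it."""
    for cls in SIZE_CLASSES:
        if requested_bytes <= cls:
            return cls
    raise ValueError("object larger than the 1 MB maximum")

print(pick_class(100))  # a 100-byte request gets a 128-byte chunk
```

A 100-byte request lands in the 128-byte class, matching the example in the text.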
If there are no free chunks of the needed size at the moment, there are two ways to get one: 1) free an existing chunk in the same size class, using LRU queues to evict the least-needed objects; 2) get more memory from the system, which we currently always do in _slabs_ of 1MB each; we malloc() a slab, divide it into chunks of the needed size, and use them.

The tradeoff is between memory fragmentation and memory utilisation. In the scheme we're using now, we have zero fragmentation, but a relatively high percentage of memory is wasted. The most efficient way to reduce the waste is to use a list of size classes that closely matches (if that's at all possible) the common sizes of objects that the clients of this particular memcached installation are likely to store. For example, if your installation is going to store hundreds of thousands of objects of exactly 120 bytes, you'd be much better off changing, in the "naive" list of sizes outlined above, the 128-byte class to something a bit higher (because the overhead of storing an item, while not large, will push those 120-byte objects over 128 bytes of storage internally, requiring 256 bytes for each of them in the naive scheme and forcing you to waste almost 50% of memory). Such tinkering with the list of size classes is not currently possible in memcached, but enabling it is one of the immediate goals.

Ideally, the slabs subsystem would analyze at runtime the common sizes of objects being requested, and would be able to modify the list of sizes dynamically to improve memory utilisation. This is not planned for the immediate future, however. What is planned is the ability to reassign slabs to different classes. Here's what that means.
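The 120-byte example works out as follows. The per-item overhead figure here is hypothetical (the mail only says it is "not large"); everything else follows the doubling scheme above:

```python
# Size classes as in the "naive" doubling scheme above.
SIZE_CLASSES = [64 << i for i in range(15)]  # 64, 128, ..., 1 MB

ITEM_OVERHEAD = 32  # hypothetical per-item header size, for illustration only

def chunk_size_for(payload_bytes):
    """Payload plus overhead is rounded up to the next size class."""
    needed = payload_bytes + ITEM_OVERHEAD
    return next(cls for cls in SIZE_CLASSES if needed <= cls)

payload = 120
chunk = chunk_size_for(payload)       # overhead pushes 120 past 128 -> 256
waste = 1 - payload / chunk           # fraction of the chunk not holding payload
print(chunk, f"{waste:.0%}")
```

With any nonzero overhead the 120-byte payload spills past the 128-byte class into the 256-byte one, and only 120 of those 256 bytes hold data, i.e. the "almost 50% waste" from the text. Raising the 128-byte class slightly would let these items fit and cut the waste dramatically.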
Currently, the total amount of memory allocated to each size class is determined by how clients interact with memcached during the initial phase of its execution, when it keeps malloc()'ing more slabs and dividing them into chunks, until it hits the specified memory limit (say, 2GB, or whatever else was specified). Once it hits the limit, to allocate a new chunk it will always delete an existing chunk of the same size (using the LRU queues), and will never malloc() or free() any memory from/to the system. So if, for example, during those initial few hours of execution your clients mainly wanted to store very small items, the bulk of the allocated memory will be divided into small chunks, and the large size classes will get less memory; with this instance of memcached, the lifetime of the large objects you store will therefore always be much shorter (their LRU queues will be shorter and they'll be pushed out much more often). In general, if your system starts producing a different pattern of common object sizes, the memcached servers will become less efficient unless you restart them. Slab reassignment, which is the next feature being worked on, will give the server the ability to reclaim a slab (1MB of memory) from one size class and move it to another size class, where it's needed more.

-- avva
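The reassignment being described could be modeled like this: take one 1MB slab away from a size class, dropping its chunks, and re-divide that megabyte into chunks of another class's size. A hypothetical sketch, not the actual slabs.c implementation (class and method names are invented):

```python
SLAB_SIZE = 1 << 20  # 1 MB slabs, as described above

class SlabPool:
    """Toy model: each size class owns whole slabs divided into free chunks."""

    def __init__(self):
        self.slabs = {}        # size class -> number of 1 MB slabs owned
        self.free_chunks = {}  # size class -> count of free chunks

    def grow(self, cls):
        """Take a fresh slab from the system and divide it for class `cls`."""
        self.slabs[cls] = self.slabs.get(cls, 0) + 1
        self.free_chunks[cls] = self.free_chunks.get(cls, 0) + SLAB_SIZE // cls

    def reassign(self, src, dst):
        """Reclaim one slab from `src` and re-divide it for `dst`."""
        assert self.slabs.get(src, 0) > 0, "no slab to reclaim"
        self.slabs[src] -= 1
        self.free_chunks[src] -= SLAB_SIZE // src  # its chunks are given up
        self.slabs[dst] = self.slabs.get(dst, 0) + 1
        self.free_chunks[dst] = self.free_chunks.get(dst, 0) + SLAB_SIZE // dst

pool = SlabPool()
pool.grow(64)            # initial traffic stored small items
pool.reassign(64, 1024)  # later, shift that megabyte to 1 KB objects
print(pool.free_chunks)
```

In the real server the reclaimed slab's live items would have to be evicted first; the point of the sketch is only that the total memory stays fixed while a megabyte moves from one class's LRU pool to another's.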
