/* malloc.h */
#ifndef DEFAULT_MMAP_THRESHOLD
#define DEFAULT_MMAP_THRESHOLD (256 * 1024)
#endif

/*
  M_MMAP_MAX is the maximum number of requests to simultaneously
  service using mmap.  This parameter exists because some systems
  have a limited number of internal tables for use by mmap, and
  using more than a few of them may degrade performance.

  The default is set to a value that serves only as a safeguard.
  Setting to 0 disables use of mmap for servicing large requests.  If
  HAVE_MMAP is not set, the default value is 0, and attempts to set it
  to non-zero values in mallopt will fail.
*/
#define M_MMAP_MAX             -4
#ifndef DEFAULT_MMAP_MAX
#define DEFAULT_MMAP_MAX       (65536)
#endif

/* ------------------ MMAP support ------------------ */
#include <fcntl.h>
#include <sys/mman.h>

#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
#define MAP_ANONYMOUS MAP_ANON
#endif

#ifdef __ARCH_HAS_MMU__
#define MMAP(addr, size, prot) \
 (mmap((addr), (size), (prot), MAP_PRIVATE|MAP_ANONYMOUS, 0, 0))
#else
#define MMAP(addr, size, prot) \
 (mmap((addr), (size), (prot), MAP_SHARED|MAP_ANONYMOUS, 0, 0))
#endif


/* ----------------------- Chunk representations ----------------------- */

/*
  This struct declaration is misleading (but accurate and necessary).
  It declares a "view" into memory allowing access to necessary
  fields at known offsets from a given base.  See explanation below.
*/
struct malloc_chunk {
  size_t               prev_size;  /* Size of previous chunk (if free).  */
  size_t               size;       /* Size in bytes, including overhead. */

  struct malloc_chunk* fd;         /* double links -- used only if free. */
  struct malloc_chunk* bk;
};

typedef struct malloc_chunk* mchunkptr;

/*
   malloc_chunk details:

    (The following includes lightly edited explanations by Colin Plumb.)

    Chunks of memory are maintained using a `boundary tag' method as
    described in e.g., Knuth or Standish.  (See the paper by Paul
    Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
    survey of such techniques.)  Sizes of free chunks are stored both
    in the front of each chunk and at the end.  This makes
    consolidating fragmented chunks into bigger chunks very fast.  The
    size fields also hold bits representing whether chunks are free or
    in use.

    An allocated chunk looks like this:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk, if allocated            | |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             User data starts here...                          .
            .                                                               .
            .             (malloc_usable_space() bytes)                     .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of chunk                                     |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

    Where "chunk" is the front of the chunk for the purpose of most of
    the malloc code, but "mem" is the pointer that is returned to the
    user.  "Nextchunk" is the beginning of the next contiguous chunk.

    Chunks always begin on even word boundaries, so the mem portion
    (which is returned to the user) is also on an even word boundary,
    and thus at least double-word aligned.
*/
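/*
  [Editor's illustrative sketch -- not part of the original header.]
  The diagram above boils down to a fixed offset of two size_t words
  between the "chunk" pointer and the "mem" pointer handed to the user.
  The standalone example below (guarded out of normal compilation)
  mimics that arithmetic on a fake chunk built in a local buffer; it
  relies only on the struct malloc_chunk declaration above and makes
  no claim about the rest of the allocator.
*/
#if 0  /* example only: build separately together with the struct above */
#include <stdio.h>

static void chunk_layout_demo(void)
{
    size_t buffer[8];                               /* pretend heap storage  */
    struct malloc_chunk *chunk = (struct malloc_chunk *)buffer;

    chunk->prev_size = 0;
    chunk->size      = 32 | 0x1;                    /* 32 bytes, PREV_INUSE  */

    /* what malloc() would hand back: header skipped, payload starts here */
    void *mem = (char *)chunk + 2 * sizeof(size_t);

    /* what free()/realloc() do internally: step back to the header */
    struct malloc_chunk *recovered =
        (struct malloc_chunk *)((char *)mem - 2 * sizeof(size_t));

    printf("chunk=%p  mem=%p  recovered=%p  size=%zu\n",
           (void *)chunk, mem, (void *)recovered,
           recovered->size & ~(size_t)0x1);
}
#endif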
/*
    Free chunks are stored in circular doubly-linked lists, and look
    like this:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Forward pointer to next chunk in list             |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Back pointer to previous chunk in list            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Unused space (may be 0 bytes long)                .
            .                                                               .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

    The P (PREV_INUSE) bit, stored in the unused low-order bit of the
    chunk size (which is always a multiple of two words), is an in-use
    bit for the *previous* chunk.  If that bit is *clear*, then the
    word before the current chunk size contains the previous chunk
    size, and can be used to find the front of the previous chunk.
    The very first chunk allocated always has this bit set, preventing
    access to non-existent (or non-owned) memory.  If prev_inuse is set
    for any given chunk, then you CANNOT determine the size of the
    previous chunk, and might even get a memory addressing fault when
    trying to do so.

    Note that the `foot' of the current chunk is actually represented
    as the prev_size of the NEXT chunk.  This makes it easier to deal
    with alignments etc but can be very confusing when trying to
    extend or adapt this code.

    The two exceptions to all this are

     1. The special chunk `top' doesn't bother using the trailing size
        field since there is no next contiguous chunk that would have
        to index off it.  After initialization, `top' is forced to
        always exist.  If it would become less than MINSIZE bytes
        long, it is replenished.

     2. Chunks allocated via mmap, which have the second-lowest-order
        bit (IS_MMAPPED) set in their size fields.  Because they are
        allocated one-by-one, each must contain its own trailing size
        field.
*/

/* ---------- Size and alignment checks and conversions ---------- */

/* conversion from malloc headers to user pointers, and back */
#define chunk2mem(p)   ((void*)((char*)(p) + 2*(sizeof(size_t))))
#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*(sizeof(size_t))))

/* The smallest possible chunk */
#define MIN_CHUNK_SIZE        (sizeof(struct malloc_chunk))

/* The smallest size we can malloc is an aligned minimal chunk */
#define MINSIZE  \
  (unsigned long)(((MIN_CHUNK_SIZE+MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK))

/* Check if m has acceptable alignment */
#define aligned_OK(m)  (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)

/*
   Check if a request is so large that it would wrap around zero when
   padded and aligned.  To simplify some other code, the bound is made
   low enough so that adding MINSIZE will also not wrap around zero.
*/
#define REQUEST_OUT_OF_RANGE(req)                                 \
  ((unsigned long)(req) >=                                        \
   (unsigned long)(size_t)(-2 * MINSIZE))

/* pad request bytes into a usable size -- internal version */
#define request2size(req)                                         \
  (((req) + (sizeof(size_t)) + MALLOC_ALIGN_MASK < MINSIZE)  ?    \
   MINSIZE :                                                      \
   ((req) + (sizeof(size_t)) + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK)
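/*
  [Editor's illustrative sketch -- not part of the original header.]
  request2size() adds one size_t of bookkeeping overhead, rounds up to
  the malloc alignment, and never returns less than MINSIZE.  The
  guarded-out example below redoes that arithmetic with a local
  align_mask standing in for MALLOC_ALIGN_MASK (assumed here to be
  2*sizeof(size_t) - 1, i.e. double-word alignment; the real mask is
  derived from MALLOC_ALIGNMENT elsewhere in this allocator).  It also
  relies on the struct malloc_chunk declaration above.
*/
#if 0  /* example only: not compiled as part of the allocator */
#include <stdio.h>

int main(void)
{
    const size_t align_mask = 2 * sizeof(size_t) - 1; /* assumed MALLOC_ALIGN_MASK */
    const size_t min_size   =                         /* mirrors MINSIZE           */
        (sizeof(struct malloc_chunk) + align_mask) & ~align_mask;

    const size_t requests[] = { 1, 13, 24, 100 };
    for (size_t i = 0; i < sizeof(requests) / sizeof(requests[0]); i++) {
        size_t req = requests[i];
        /* same shape as the request2size() macro above */
        size_t chunk_size =
            (req + sizeof(size_t) + align_mask < min_size)
                ? min_size
                : ((req + sizeof(size_t) + align_mask) & ~align_mask);
        printf("request %3zu -> chunk size %3zu\n", req, chunk_size);
    }
    return 0;
}
#endif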
/* Same, except also perform argument check */
#define checked_request2size(req, sz)       \
  if (REQUEST_OUT_OF_RANGE(req)) {          \
    errno = ENOMEM;                         \
    return 0;                               \
  }                                         \
  (sz) = request2size(req);

/* --------------- Physical chunk operations --------------- */

/* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
#define PREV_INUSE 0x1

/* extract inuse bit of previous chunk */
#define prev_inuse(p)       ((p)->size & PREV_INUSE)

/* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
#define IS_MMAPPED 0x2

/* check for mmap()'ed chunk */
#define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)

/*
  Bits to mask off when extracting size

  Note: IS_MMAPPED is intentionally not masked off from size field in
  macros for which mmapped chunks should never be seen.  This should
  cause helpful core dumps to occur if it is tried by accident by
  people extending or adapting this malloc.
*/
#define SIZE_BITS (PREV_INUSE|IS_MMAPPED)

/* Get size, ignoring use bits */
#define chunksize(p)        ((p)->size & ~(SIZE_BITS))

/* Ptr to next physical malloc_chunk. */
#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))

/* Ptr to previous physical malloc_chunk */
#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))

/* Treat space at ptr + offset as a chunk */
#define chunk_at_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))

/* extract p's inuse bit */
#define inuse(p)\
((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)

/* set/clear chunk as being inuse without otherwise disturbing */
#define set_inuse(p)\
((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE

#define clear_inuse(p)\
((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)

/* check/set/clear inuse bits in known places */
#define inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)

#define set_inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)

#define clear_inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))

/* Set size at head, without disturbing its use bit */
#define set_head_size(p, s)  ((p)->size = (((p)->size & PREV_INUSE) | (s)))

/* Set size/use field */
#define set_head(p, s)       ((p)->size = (s))

/* Set size at footer (only when chunk is not in use) */
#define set_foot(p, s)       (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
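/*
  [Editor's illustrative sketch -- not part of the original header.]
  How the physical-chunk macros above compose: given one chunk, the
  allocator reaches its neighbour with next_chunk()/chunk_at_offset(),
  learns whether the chunk itself is in use from the *next* chunk's
  PREV_INUSE bit (that is what inuse() does), and only when PREV_INUSE
  is clear may step back with prev_chunk().  The guarded-out walk below
  is a reading aid over a hand-built two-chunk region, not code taken
  from the allocator itself.
*/
#if 0  /* example only */
#include <stdio.h>

static void walk_two_chunks(void)
{
    size_t region[32] = { 0 };                 /* a little zeroed fake heap */

    mchunkptr first  = (mchunkptr)region;
    set_head(first, 48 | PREV_INUSE);          /* 48-byte chunk; nothing before it  */

    mchunkptr second = chunk_at_offset(first, 48);
    set_head(second, 48 | PREV_INUSE);         /* this bit marks "first" as in use  */

    printf("first  at %p, size %zu, inuse=%d\n",
           (void *)first, chunksize(first), inuse(first) != 0);

    clear_inuse(first);                        /* pretend "first" was freed...      */
    set_foot(first, 48);                       /* ...so its size is copied into the */
                                               /* prev_size (foot) slot of "second" */
    printf("after free: inuse=%d, prev_chunk(second)=%p\n",
           inuse(first) != 0, (void *)prev_chunk(second));
}
#endif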
/* -------------------- Internal data structures -------------------- */

/*
  Bins

    An array of bin headers for free chunks.  Each bin is doubly
    linked.  The bins are approximately proportionally (log) spaced.
    There are a lot of these bins (128).  This may look excessive, but
    works very well in practice.  Most bins hold sizes that are
    unusual as malloc request sizes, but are more usual for fragments
    and consolidated sets of chunks, which is what these bins hold, so
    they can be found quickly.  All procedures maintain the invariant
    that no consolidated chunk physically borders another one, so each
    chunk in a list is known to be preceded and followed by either
    inuse chunks or the ends of memory.

    Chunks in bins are kept in size order, with ties going to the
    approximately least recently used chunk.  Ordering isn't needed
    for the small bins, which all contain the same-sized chunks, but
    facilitates best-fit allocation for larger chunks.  These lists
    are just sequential.  Keeping them in order almost never requires
    enough traversal to warrant using fancier ordered data structures.

    Chunks of the same size are linked with the most recently freed at
    the front, and allocations are taken from the back.  This results
    in LRU (FIFO) allocation order, which tends to give each chunk an
    equal opportunity to be consolidated with adjacent freed chunks,
    resulting in larger free chunks and less fragmentation.

    To simplify use in double-linked lists, each bin header acts as a
    malloc_chunk.  This avoids special-casing for headers.  But to
    conserve space and improve locality, we allocate only the fd/bk
    pointers of bins, and then use repositioning tricks to treat these
    as the fields of a malloc_chunk*.
*/

typedef struct malloc_chunk* mbinptr;

/* addressing -- note that bin_at(0) does not exist */
#define bin_at(m, i) ((mbinptr)((char*)&((m)->bins[(i)<<1]) - ((sizeof(size_t))<<1)))

/* analog of ++bin */
#define next_bin(b)  ((mbinptr)((char*)(b) + (sizeof(mchunkptr)<<1)))

/* Reminders about list directionality within bins */
#define first(b)     ((b)->fd)
#define last(b)      ((b)->bk)
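/*
  [Editor's illustrative sketch -- not part of the original header.]
  The "repositioning trick": a bin header stores only two pointers
  (fd/bk), yet code treats it as a full malloc_chunk.  bin_at() makes
  that work by pointing 2*sizeof(size_t) bytes *before* the fd slot, so
  that ->fd and ->bk land on the stored pointers while ->prev_size and
  ->size would fall on the previous bin's storage and must never be
  touched.  The guarded-out demo below reproduces the idea with a
  hypothetical bare array of bin pointers; the real allocator keeps
  them inside its malloc_state structure, which this excerpt does not
  show, so bin_at() itself is not used here.
*/
#if 0  /* example only */
#include <stdio.h>

static void bin_header_demo(void)
{
    /* hypothetical storage for two bins: bin i uses slots 2*i (fd) and 2*i+1 (bk) */
    mchunkptr bins[4] = { 0 };

    /* reposition so that ->fd/->bk of the "header chunk" hit bins[2]/bins[3] */
    mbinptr bin1 = (mbinptr)((char *)&bins[1 << 1] - (sizeof(size_t) << 1));

    /* an empty bin points back at its own header in both directions */
    bins[2] = bins[3] = (mchunkptr)bin1;

    printf("bin1 empty? %s\n", (first(bin1) == (mchunkptr)bin1) ? "yes" : "no");
    printf("first(bin1)=%p last(bin1)=%p header=%p\n",
           (void *)first(bin1), (void *)last(bin1), (void *)bin1);
}
#endif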