
📄 malloc.c

📁 A good piece of memory-leak detection code
💻 C
📖 Page 1 of 5
  should instead use a single regular malloc, and assign pointers at
  particular offsets in the aggregate space. (In this case though, you
  cannot independently free elements.)

  independent_comalloc differs from independent_calloc in that each
  element may have a different size, and also that it does not
  automatically clear elements.

  independent_comalloc can be used to speed up allocation in cases
  where several structs or objects must always be allocated at the
  same time.  For example:

  struct Head { ... }
  struct Foot { ... }

  void send_message(char* msg) {
    int msglen = strlen(msg);
    size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
    void* chunks[3];
    if (independent_comalloc(3, sizes, chunks) == 0)
      die();
    struct Head* head = (struct Head*)(chunks[0]);
    char*        body = (char*)(chunks[1]);
    struct Foot* foot = (struct Foot*)(chunks[2]);
    // ...
  }

  In general though, independent_comalloc is worth using only for
  larger values of n_elements. For small values, you probably won't
  detect enough difference from series of malloc calls to bother.

  Overuse of independent_comalloc can increase overall memory usage,
  since it cannot reuse existing noncontiguous small chunks that
  might be available for some of the elements.
*/
#if __STD_C
Void_t** public_iCOMALLOc(size_t, size_t*, Void_t**);
#else
Void_t** public_iCOMALLOc();
#endif

/*
  pvalloc(size_t n);
  Equivalent to valloc(minimum-page-that-holds(n)), that is,
  round up n to nearest pagesize.
 */
#if __STD_C
Void_t*  public_pVALLOc(size_t);
#else
Void_t*  public_pVALLOc();
#endif

/*
  cfree(Void_t* p);
  Equivalent to free(p).
  cfree is needed/defined on some systems that pair it with calloc,
  for odd historical reasons (such as: cfree is used in example
  code in the first edition of K&R).
*/
#if __STD_C
void     public_cFREe(Void_t*);
#else
void     public_cFREe();
#endif

/*
  malloc_trim(size_t pad);

  If possible, gives memory back to the system (via negative
  arguments to sbrk) if there is unused memory at the `high' end of
  the malloc pool. You can call this after freeing large blocks of
  memory to potentially reduce the system-level memory requirements
  of a program. However, it cannot guarantee to reduce memory. Under
  some allocation patterns, some large free blocks of memory will be
  locked between two used chunks, so they cannot be given back to
  the system.

  The `pad' argument to malloc_trim represents the amount of free
  trailing space to leave untrimmed. If this argument is zero,
  only the minimum amount of memory to maintain internal data
  structures will be left (one page or less). Non-zero arguments
  can be supplied to maintain enough trailing space to service
  future expected allocations without having to re-obtain memory
  from the system.

  Malloc_trim returns 1 if it actually released any memory, else 0.
  On systems that do not support "negative sbrks", it will always
  return 0.
*/
#if __STD_C
int      public_mTRIm(size_t);
#else
int      public_mTRIm();
#endif
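/*
  [Editor's illustrative sketch; not part of the original malloc.c.
  It shows how application code might call the public malloc_trim()
  wrapper (declared above as public_mTRIm) after freeing large blocks,
  assuming a glibc-style <malloc.h> that exposes malloc_trim(size_t pad).]

  #include <stdlib.h>
  #include <malloc.h>

  void drop_caches(char** bufs, size_t n) {
    size_t i;
    for (i = 0; i < n; i++)
      free(bufs[i]);          // release a batch of large buffers
    (void) malloc_trim(0);    // pad = 0: keep only the minimum the allocator
                              // needs internally; returns 1 if any memory
                              // was actually given back to the system
  }
*/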
/*
  malloc_usable_size(Void_t* p);

  Returns the number of bytes you can actually use in
  an allocated chunk, which may be more than you requested (although
  often not) due to alignment and minimum size constraints.
  You can use this many bytes without worrying about
  overwriting other allocated objects. This is not a particularly great
  programming practice. malloc_usable_size can be more useful in
  debugging and assertions, for example:

  p = malloc(n);
  assert(malloc_usable_size(p) >= 256);
*/
#if __STD_C
size_t   public_mUSABLe(Void_t*);
#else
size_t   public_mUSABLe();
#endif

/*
  malloc_stats();

  Prints on stderr the amount of space obtained from the system (both
  via sbrk and mmap), the maximum amount (which may be more than
  current if malloc_trim and/or munmap got called), and the current
  number of bytes allocated via malloc (or realloc, etc) but not yet
  freed. Note that this is the number of bytes allocated, not the
  number requested. It will be larger than the number requested
  because of alignment and bookkeeping overhead. Because it includes
  alignment wastage as being in use, this figure may be greater than
  zero even when no user-level chunks are allocated.

  The reported current and maximum system memory can be inaccurate if
  a program makes other calls to system memory allocation functions
  (normally sbrk) outside of malloc.

  malloc_stats prints only the most commonly interesting statistics.
  More information can be obtained by calling mallinfo.
*/
#if __STD_C
void     public_mSTATs(void);
#else
void     public_mSTATs();
#endif

/*
  malloc_get_state(void);

  Returns the state of all malloc variables in an opaque data
  structure.
*/
#if __STD_C
Void_t*  public_gET_STATe(void);
#else
Void_t*  public_gET_STATe();
#endif

/*
  malloc_set_state(Void_t* state);

  Restore the state of all malloc variables from data obtained with
  malloc_get_state().
*/
#if __STD_C
int      public_sET_STATe(Void_t*);
#else
int      public_sET_STATe();
#endif

#ifdef _LIBC
/*
  posix_memalign(void **memptr, size_t alignment, size_t size);

  POSIX wrapper like memalign(), checking for validity of size.
*/
int      __posix_memalign(void **, size_t, size_t);
#endif

/* mallopt tuning options */

/*
  M_MXFAST is the maximum request size used for "fastbins", special bins
  that hold returned chunks without consolidating their spaces. This
  enables future requests for chunks of the same size to be handled
  very quickly, but can increase fragmentation, and thus increase the
  overall memory footprint of a program.

  This malloc manages fastbins very conservatively yet still
  efficiently, so fragmentation is rarely a problem for values less
  than or equal to the default.  The maximum supported value of MXFAST
  is 80. You wouldn't want it any higher than this anyway.  Fastbins
  are designed especially for use with many small structs, objects or
  strings -- the default handles structs/objects/arrays with sizes up
  to 8 4byte fields, or small strings representing words, tokens,
  etc. Using fastbins for larger objects normally worsens
  fragmentation without improving speed.

  M_MXFAST is set in REQUEST size units. It is internally used in
  chunksize units, which adds padding and alignment.  You can reduce
  M_MXFAST to 0 to disable all use of fastbins.  This causes the malloc
  algorithm to be a closer approximation of fifo-best-fit in all cases,
  not just for larger requests, but will generally cause it to be
  slower.
*/

/* M_MXFAST is a standard SVID/XPG tuning option, usually listed in malloc.h */
#ifndef M_MXFAST
#define M_MXFAST            1
#endif

#ifndef DEFAULT_MXFAST
#define DEFAULT_MXFAST     64
#endif
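/*
  [Editor's illustrative sketch; not part of the original malloc.c.
  It shows how a program might tune M_MXFAST through the standard
  mallopt() interface described above.  It assumes M_MXFAST is visible
  to the caller, either from malloc.h as the comment above suggests,
  or via the fallback definition here.  mallopt() returns 1 on success
  and 0 on error.]

  #include <malloc.h>

  void tune_fastbins(int use_fastbins) {
    if (!use_fastbins)
      mallopt(M_MXFAST, 0);    // disable fastbins: behaves closer to
                               // fifo-best-fit, but is generally slower
    else
      mallopt(M_MXFAST, 80);   // 80 bytes is the largest supported
                               // request size kept in fastbins
  }
*/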
/*
  M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
  to keep before releasing via malloc_trim in free().

  Automatic trimming is mainly useful in long-lived programs.
  Because trimming via sbrk can be slow on some systems, and can
  sometimes be wasteful (in cases where programs immediately
  afterward allocate more large chunks) the value should be high
  enough so that your overall system performance would improve by
  releasing this much memory.

  The trim threshold and the mmap control parameters (see below)
  can be traded off with one another. Trimming and mmapping are
  two different ways of releasing unused memory back to the
  system. Between these two, it is often possible to keep
  system-level demands of a long-lived program down to a bare
  minimum. For example, in one test suite of sessions measuring
  the XF86 X server on Linux, using a trim threshold of 128K and a
  mmap threshold of 192K led to near-minimal long term resource
  consumption.

  If you are using this malloc in a long-lived program, it should
  pay to experiment with these values.  As a rough guide, you
  might set to a value close to the average size of a process
  (program) running on your system.  Releasing this much memory
  would allow such a process to run in memory.  Generally, it's
  worth it to tune for trimming rather than memory mapping when a
  program undergoes phases where several large chunks are
  allocated and released in ways that can reuse each other's
  storage, perhaps mixed with phases where there are no such
  chunks at all.  And in well-behaved long-lived programs,
  controlling release of large blocks via trimming versus mapping
  is usually faster.

  However, in most programs, these parameters serve mainly as
  protection against the system-level effects of carrying around
  massive amounts of unneeded memory. Since frequent calls to
  sbrk, mmap, and munmap otherwise degrade performance, the default
  parameters are set to relatively high values that serve only as
  safeguards.

  The trim value must be greater than page size to have any useful
  effect.  To disable trimming completely, you can set to
  (unsigned long)(-1)

  Trim settings interact with fastbin (MXFAST) settings: Unless
  TRIM_FASTBINS is defined, automatic trimming never takes place upon
  freeing a chunk with size less than or equal to MXFAST. Trimming is
  instead delayed until subsequent freeing of larger chunks. However,
  you can still force an attempted trim by calling malloc_trim.

  Also, trimming is not generally possible in cases where
  the main arena is obtained via mmap.

  Note that the trick some people use of mallocing a huge space and
  then freeing it at program startup, in an attempt to reserve system
  memory, doesn't have the intended effect under automatic trimming,
  since that memory will immediately be returned to the system.
*/
#define M_TRIM_THRESHOLD       -1

#ifndef DEFAULT_TRIM_THRESHOLD
#define DEFAULT_TRIM_THRESHOLD (128 * 1024)
#endif
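/*
  [Editor's illustrative sketch; not part of the original malloc.c.
  It shows how a long-lived program might trade the trim and mmap
  thresholds off against each other via mallopt(), using the 128K/192K
  values quoted above for the XF86 X server test.  It assumes a
  glibc-style <malloc.h> that exposes M_TRIM_THRESHOLD and
  M_MMAP_THRESHOLD.]

  #include <malloc.h>

  void tune_for_long_lived_process(void) {
    mallopt(M_TRIM_THRESHOLD, 128 * 1024);  // trim heap top once 128K is unused
    mallopt(M_MMAP_THRESHOLD, 192 * 1024);  // service requests >= 192K via mmap
                                            // so they can be released individually
  }
*/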
/*
  M_TOP_PAD is the amount of extra `padding' space to allocate or
  retain whenever sbrk is called. It is used in two ways internally:

  * When sbrk is called to extend the top of the arena to satisfy
  a new malloc request, this much padding is added to the sbrk
  request.

  * When malloc_trim is called automatically from free(),
  it is used as the `pad' argument.

  In both cases, the actual amount of padding is rounded
  so that the end of the arena is always a system page boundary.

  The main reason for using padding is to avoid calling sbrk so
  often. Having even a small pad greatly reduces the likelihood
  that nearly every malloc request during program start-up (or
  after trimming) will invoke sbrk, which needlessly wastes
  time.

  Automatic rounding-up to page-size units is normally sufficient
  to avoid measurable overhead, so the default is 0.  However, in
  systems where sbrk is relatively slow, it can pay to increase
  this value, at the expense of carrying around more memory than
  the program needs.
*/
#define M_TOP_PAD              -2

#ifndef DEFAULT_TOP_PAD
#define DEFAULT_TOP_PAD        (0)
#endif

/*
  M_MMAP_THRESHOLD is the request size threshold for using mmap()
  to service a request. Requests of at least this size that cannot
  be allocated using already-existing space will be serviced via mmap.
  (If enough normal freed space already exists it is used instead.)

  Using mmap segregates relatively large chunks of memory so that
  they can be individually obtained and released from the host
  system. A request serviced through mmap is never reused by any
  other request (at least not directly; the system may just so
  happen to remap successive requests to the same locations).

  Segregating space in this way has the benefits that:

   1. Mmapped space can ALWAYS be individually released back
      to the system, which helps keep the system level memory
      demands of a long-lived program low.
   2. Mapped memory can never become `locked' between
      other chunks, as can happen with normally allocated chunks, which
      means that even trimming via malloc_trim would not release them.
   3. On some systems with "holes" in address spaces, mmap can obtain
      memory that sbrk cannot.

  However, it has the disadvantages that:

   1. The space cannot be reclaimed, consolidated, and then
      used to service later requests, as happens with normal chunks.
   2. It can lead to more wastage because of mmap page alignment
      requirements
   3. It causes malloc performance to be more dependent on host
      system memory management support routines which may vary in
      implementation quality and may impose arbitrary
      limitations. Generally, servicing a request via normal
      malloc steps is faster than going through a system's mmap.

  The advantages of mmap nearly always outweigh disadvantages for
  "large" chunks, but the value of "large" varies across systems.  The
  default is an empirically derived value that works well in most
  systems.
*/
#define M_MMAP_THRESHOLD      -3

#ifndef DEFAULT_MMAP_THRESHOLD
#define DEFAULT_MMAP_THRESHOLD (128 * 1024)
#endif

/*
  M_MMAP_MAX is the maximum number of requests to simultaneously
  service using mmap. This parameter exists because
  some systems have a limited number of internal tables for
