
📄 mallocr.c

📁 Standard C library functions for embedded Linux systems
💻 C
📖 Page 1 of 5
      chunks at all.  And in well-behaved long-lived programs,
      controlling release of large blocks via trimming versus mapping
      is usually faster.

      However, in most programs, these parameters serve mainly as
      protection against the system-level effects of carrying around
      massive amounts of unneeded memory. Since frequent calls to
      sbrk, mmap, and munmap otherwise degrade performance, the default
      parameters are set to relatively high values that serve only as
      safeguards.

      The default trim value is high enough to cause trimming only in
      fairly extreme (by current memory consumption standards) cases.
      It must be greater than page size to have any useful effect.  To
      disable trimming completely, you can set to (unsigned long)(-1);
*/


#ifndef DEFAULT_TOP_PAD
#define DEFAULT_TOP_PAD        (0)
#endif

/*
    M_TOP_PAD is the amount of extra `padding' space to allocate or
      retain whenever sbrk is called. It is used in two ways internally:

      * When sbrk is called to extend the top of the arena to satisfy
        a new malloc request, this much padding is added to the sbrk
        request.

      * When malloc_trim is called automatically from free(),
        it is used as the `pad' argument.

      In both cases, the actual amount of padding is rounded
      so that the end of the arena is always a system page boundary.

      The main reason for using padding is to avoid calling sbrk so
      often. Having even a small pad greatly reduces the likelihood
      that nearly every malloc request during program start-up (or
      after trimming) will invoke sbrk, which needlessly wastes
      time.

      Automatic rounding-up to page-size units is normally sufficient
      to avoid measurable overhead, so the default is 0.
      However, in
      systems where sbrk is relatively slow, it can pay to increase
      this value, at the expense of carrying around more memory than
      the program needs.
*/


#ifndef DEFAULT_MMAP_THRESHOLD
#define DEFAULT_MMAP_THRESHOLD (128 * 1024)
#endif

/*
    M_MMAP_THRESHOLD is the request size threshold for using mmap()
      to service a request. Requests of at least this size that cannot
      be allocated using already-existing space will be serviced via mmap.
      (If enough normal freed space already exists it is used instead.)

      Using mmap segregates relatively large chunks of memory so that
      they can be individually obtained and released from the host
      system. A request serviced through mmap is never reused by any
      other request (at least not directly; the system may just so
      happen to remap successive requests to the same locations).

      Segregating space in this way has the benefit that mmapped space
      can ALWAYS be individually released back to the system, which
      helps keep the system level memory demands of a long-lived
      program low. Mapped memory can never become `locked' between
      other chunks, as can happen with normally allocated chunks, which
      means that even trimming via malloc_trim would not release them.

      However, it has the disadvantages that:

        1. The space cannot be reclaimed, consolidated, and then
           used to service later requests, as happens with normal chunks.
        2. It can lead to more wastage because of mmap page alignment
           requirements
        3. It causes malloc performance to be more dependent on host
           system memory management support routines which may vary in
           implementation quality and may impose arbitrary
           limitations. Generally, servicing a request via normal
           malloc steps is faster than going through a system's mmap.
      All together, these considerations should lead you to use mmap
      only for relatively large requests.
*/


#ifndef DEFAULT_MMAP_MAX
#if HAVE_MMAP
#define DEFAULT_MMAP_MAX       (64)
#else
#define DEFAULT_MMAP_MAX       (0)
#endif
#endif

/*
    M_MMAP_MAX is the maximum number of requests to simultaneously
      service using mmap. This parameter exists because:

        1. Some systems have a limited number of internal tables for
           use by mmap.
        2. In most systems, overreliance on mmap can degrade overall
           performance.
        3. If a program allocates many large regions, it is probably
           better off using normal sbrk-based allocation routines that
           can reclaim and reallocate normal heap memory. Using a
           small value allows transition into this mode after the
           first few allocations.

      Setting to 0 disables all use of mmap.  If HAVE_MMAP is not set,
      the default value is 0, and attempts to set it to non-zero values
      in mallopt will fail.
*/


/*
   Special defines for linux libc

  Except when compiled using these special defines for Linux libc
  using weak aliases, this malloc is NOT designed to work in
  multithreaded applications.  No semaphores or other concurrency
  control are provided to ensure that multiple malloc or free calls
  don't run at the same time, which could be disastrous. A single
  semaphore could be used across malloc, realloc, and free (which is
  essentially the effect of the linux weak alias approach).
  It would
  be hard to obtain finer granularity.
*/


#ifdef INTERNAL_LINUX_C_LIB

#if __STD_C
Void_t * __default_morecore_init (ptrdiff_t);
Void_t *(*__morecore)(ptrdiff_t) = __default_morecore_init;
#else
Void_t * __default_morecore_init ();
Void_t *(*__morecore)() = __default_morecore_init;
#endif

#define MORECORE (*__morecore)
#define MORECORE_FAILURE 0
#define MORECORE_CLEARS 1

#else /* INTERNAL_LINUX_C_LIB */

#ifndef INTERNAL_NEWLIB
#if __STD_C
extern Void_t*     sbrk(ptrdiff_t);
#else
extern Void_t*     sbrk();
#endif
#endif

#ifndef MORECORE
#define MORECORE sbrk
#endif

#ifndef MORECORE_FAILURE
#define MORECORE_FAILURE -1
#endif

#ifndef MORECORE_CLEARS
#define MORECORE_CLEARS 1
#endif

#endif /* INTERNAL_LINUX_C_LIB */

#if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__)

#define cALLOc		__libc_calloc
#define fREe		__libc_free
#define mALLOc		__libc_malloc
#define mEMALIGn	__libc_memalign
#define rEALLOc		__libc_realloc
#define vALLOc		__libc_valloc
#define pvALLOc		__libc_pvalloc
#define mALLINFo	__libc_mallinfo
#define mALLOPt		__libc_mallopt

#pragma weak calloc = __libc_calloc
#pragma weak free = __libc_free
#pragma weak cfree = __libc_free
#pragma weak malloc = __libc_malloc
#pragma weak memalign = __libc_memalign
#pragma weak realloc = __libc_realloc
#pragma weak valloc = __libc_valloc
#pragma weak pvalloc = __libc_pvalloc
#pragma weak mallinfo = __libc_mallinfo
#pragma weak mallopt = __libc_mallopt

#else

#ifdef INTERNAL_NEWLIB

#define cALLOc		_calloc_r
#define fREe		_free_r
#define mALLOc		_malloc_r
#define mEMALIGn	_memalign_r
#define rEALLOc		_realloc_r
#define vALLOc		_valloc_r
#define pvALLOc		_pvalloc_r
#define mALLINFo	_mallinfo_r
#define mALLOPt		_mallopt_r

#define malloc_stats			_malloc_stats_r
#define malloc_trim			_malloc_trim_r
#define malloc_usable_size		_malloc_usable_size_r

#define malloc_update_mallinfo		__malloc_update_mallinfo
#define malloc_av_			__malloc_av_
#define malloc_current_mallinfo		__malloc_current_mallinfo
#define malloc_max_sbrked_mem		__malloc_max_sbrked_mem
#define malloc_max_total_mem		__malloc_max_total_mem
#define malloc_sbrk_base		__malloc_sbrk_base
#define malloc_top_pad			__malloc_top_pad
#define malloc_trim_threshold		__malloc_trim_threshold

#else /* ! INTERNAL_NEWLIB */

#define cALLOc		calloc
#define fREe		free
#define mALLOc		malloc
#define mEMALIGn	memalign
#define rEALLOc		realloc
#define vALLOc		valloc
#define pvALLOc		pvalloc
#define mALLINFo	mallinfo
#define mALLOPt		mallopt

#endif /* ! INTERNAL_NEWLIB */
#endif

/* Public routines */

#if __STD_C

Void_t* mALLOc(RARG size_t);
void    fREe(RARG Void_t*);
Void_t* rEALLOc(RARG Void_t*, size_t);
Void_t* mEMALIGn(RARG size_t, size_t);
Void_t* vALLOc(RARG size_t);
Void_t* pvALLOc(RARG size_t);
Void_t* cALLOc(RARG size_t, size_t);
void    cfree(Void_t*);
int     malloc_trim(RARG size_t);
size_t  malloc_usable_size(RARG Void_t*);
void    malloc_stats(RONEARG);
int     mALLOPt(RARG int, int);
struct mallinfo mALLINFo(RONEARG);

#else

Void_t* mALLOc();
void    fREe();
Void_t* rEALLOc();
Void_t* mEMALIGn();
Void_t* vALLOc();
Void_t* pvALLOc();
Void_t* cALLOc();
void    cfree();
int     malloc_trim();
size_t  malloc_usable_size();
void    malloc_stats();
int     mALLOPt();
struct mallinfo mALLINFo();

#endif

#ifdef __cplusplus
};  /* end of extern "C" */
#endif

/* ---------- To make a malloc.h, end cutting here ------------ */


/*
  Emulation of sbrk for WIN32

  All code within the ifdef WIN32 is untested by me.
*/

#ifdef WIN32

#define AlignPage(add) (((add) + (malloc_getpagesize-1)) & \
~(malloc_getpagesize-1))

/* reserve 64MB to ensure large contiguous space */
#define RESERVED_SIZE (1024*1024*64)
#define NEXT_SIZE (2048*1024)
#define TOP_MEMORY ((unsigned long)2*1024*1024*1024)

struct GmListElement;
typedef struct GmListElement GmListElement;

struct GmListElement
{
	GmListElement* next;
	void* base;
};

static GmListElement* head = 0;
static unsigned int gNextAddress = 0;
static unsigned int gAddressBase = 0;
static unsigned int gAllocatedSize = 0;

static
GmListElement* makeGmListElement (void* bas)
{
	GmListElement* this;
	this = (GmListElement*)(void*)LocalAlloc (0,
		sizeof (GmListElement));
	ASSERT (this);
	if (this)
	{
		this->base = bas;
		this->next = head;
		head = this;
	}
	return this;
}

void gcleanup ()
{
	BOOL rval;
	ASSERT ( (head == NULL) || (head->base == (void*)gAddressBase));
	if (gAddressBase && (gNextAddress - gAddressBase))
	{
		rval = VirtualFree ((void*)gAddressBase,
							gNextAddress - gAddressBase,
							MEM_DECOMMIT);
		ASSERT (rval);
	}
	while (head)
	{
		GmListElement* next = head->next;
		rval = VirtualFree (head->base, 0, MEM_RELEASE);
		ASSERT (rval);
		LocalFree (head);
		head = next;
	}
}

static
void* findRegion (void* start_address, unsigned long size)
{
	MEMORY_BASIC_INFORMATION info;
	while ((unsigned long)start_address < TOP_MEMORY)
	{
		VirtualQuery (start_address, &info, sizeof (info));
		if (info.State != MEM_FREE)
			start_address = (char*)info.BaseAddress + info.RegionSize;
		else if (info.RegionSize >= size)
			return start_address;
		else
			start_address = (char*)info.BaseAddress + info.RegionSize;
	}
	return NULL;
}

void* wsbrk (long size)
{
	void* tmp;
	if (size > 0)
	{
		if (gAddressBase == 0)
		{
			gAllocatedSize = max (RESERVED_SIZE, AlignPage (size));
			gNextAddress = gAddressBase =
				(unsigned int)VirtualAlloc (NULL, gAllocatedSize,
											MEM_RESERVE, PAGE_NOACCESS);
		} else if (AlignPage (gNextAddress + size) > (gAddressBase +
