
📄 vxmalloc.c

📁 This version of malloc for VxWorks contains two different algorithms: the BSD-based Kingsley malloc and a port of Doug Lea's malloc, from which the routines shown on this page are taken.
💻 C
📖 Page 1 of 5
      old_top_size = (old_top_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
      chunk_at_offset(old_top, old_top_size)->size =
        SIZE_SZ|PREV_INUSE;
      chunk_at_offset(old_top, old_top_size + SIZE_SZ)->size =
        SIZE_SZ|PREV_INUSE;
      set_head_size(old_top, old_top_size);
      /* If possible, release the rest. */
      if (old_top_size >= MINSIZE)
      {
        fREe(chunk2mem(old_top));
      }
    }
  }

  if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
    max_sbrked_mem = sbrked_mem;

  /* We always land on a page boundary */
  ASSERT(((unsigned long)((char*)top + top_size) & (pagesz - 1)) == 0);
}

/* Main public routines */

/*
  Malloc Algorithm:

    The requested size is first converted into a usable form, `nb'.
    This currently means to add 4 bytes overhead plus possibly more to
    obtain 8-byte alignment and/or to obtain a size of at least
    MINSIZE (currently 16 bytes), the smallest allocatable size.
    (All fits are considered `exact' if they are within MINSIZE bytes.)

    From there, the first of the following steps that succeeds is taken:

      1. The bin corresponding to the request size is scanned, and if
         a chunk of exactly the right size is found, it is taken.

      2. The most recently remaindered chunk is used if it is big
         enough.  This is a form of (roving) first fit, used only in
         the absence of exact fits. Runs of consecutive requests use
         the remainder of the chunk used for the previous such request
         whenever possible. This limited use of a first-fit style
         allocation strategy tends to give contiguous chunks
         coextensive lifetimes, which improves locality and can reduce
         fragmentation in the long run.

      3. Other bins are scanned in increasing size order, using a
         chunk big enough to fulfill the request, and splitting off
         any remainder.  This search is strictly by best-fit; i.e.,
         the smallest (with ties going to approximately the least
         recently used) chunk that fits is selected.

      4. If large enough, the chunk bordering the end of memory
         (`top') is split off. (This use of `top' is in accord with
         the best-fit search rule.  In effect, `top' is treated as
         larger (and thus less well fitting) than any other available
         chunk since it can be extended to be as large as necessary,
         up to system limitations.)

      5. If the request size meets the mmap threshold and the
         system supports mmap, and there are few enough currently
         allocated mmapped regions, and a call to mmap succeeds,
         the request is allocated via direct memory mapping.

      6. Otherwise, the top of memory is extended by
         obtaining more space from the system (normally using sbrk,
         but definable to anything else via the MORECORE macro).
         Memory is gathered from the system (in system page-sized
         units) in a way that allows chunks obtained across different
         sbrk calls to be consolidated, but does not require
         contiguous memory. Thus, it should be safe to intersperse
         mallocs with other sbrk calls.

      All allocations are made from the `lowest' part of any found
      chunk.  (The implementation invariant is that prev_inuse is
      always true of any allocated chunk; i.e., that each allocated
      chunk borders either a previously allocated and still in-use
      chunk, or the base of its memory arena.)
*/
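/*
  A minimal sketch (not part of the original file) of the size
  normalization described above, assuming the 4-byte overhead, 8-byte
  alignment and 16-byte MINSIZE stated in the comment.  The name
  `demo_request2size' is hypothetical; the real request2size used
  below is a macro defined earlier in this file.
*/
static size_t demo_request2size(size_t bytes)
{
  size_t nb = (bytes + 4 + 7) & ~(size_t)7;  /* add overhead, round up to 8 */
  return (nb < 16) ? 16 : nb;                /* never below MINSIZE */
}
/* e.g. demo_request2size(1) == 16, demo_request2size(13) == 24 */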
Void_t* mALLoc(size_t bytes)
{
  mchunkptr victim;                  /* inspected/selected chunk */
  INTERNAL_SIZE_T victim_size;       /* its size */
  int       idx;                     /* index for bin traversal */
  mbinptr   bin;                     /* associated bin */
  mchunkptr remainder;               /* remainder from a split */
  long      remainder_size;          /* its size */
  int       remainder_index;         /* its bin index */
  unsigned long block;               /* block traverser bit */
  int       startidx;                /* first bin of a traversed block */
  mchunkptr fwd;                     /* misc temp for linking */
  mchunkptr bck;                     /* misc temp for linking */
  mbinptr   q;                       /* misc temp */

  INTERNAL_SIZE_T nb;

  nb = request2size(bytes);          /* padded request size */

  semTake(dl_mem_sid, WAIT_FOREVER);

  /* Check for exact match in a bin */

  if (is_small_request(nb))  /* Faster version for small requests */
  {
    idx = smallbin_index(nb);

    /* No traversal or size check necessary for small bins.  */

    q = bin_at(idx);
    victim = last(q);

    /* Also scan the next one, since it would have a remainder < MINSIZE */
    if (victim == q)
    {
      q = next_bin(q);
      victim = last(q);
    }

    if (victim != q)
    {
      victim_size = chunksize(victim);
      unlink(victim, bck, fwd);
      set_inuse_bit_at_offset(victim, victim_size);
      check_malloced_chunk(victim, nb);
      cumblocks++;
      cumbytes += nb;
      semGive(dl_mem_sid);
      return chunk2mem(victim);
    }

    idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
  }
  else
  {
    idx = bin_index(nb);
    bin = bin_at(idx);

    for (victim = last(bin); victim != bin; victim = victim->bk)
    {
      victim_size = chunksize(victim);
      remainder_size = victim_size - nb;

      if (remainder_size >= (long)MINSIZE) /* too big */
      {
        --idx; /* adjust to rescan below after checking last remainder */
        break;
      }
      else if (remainder_size >= 0) /* exact fit */
      {
        unlink(victim, bck, fwd);
        set_inuse_bit_at_offset(victim, victim_size);
        check_malloced_chunk(victim, nb);
        cumblocks++;
        cumbytes += nb;
        semGive(dl_mem_sid);
        return chunk2mem(victim);
      }
    }

    ++idx;
  }

  /* Try to use the last split-off remainder */

  if ( (victim = last_remainder->fd) != last_remainder)
  {
    victim_size = chunksize(victim);
    remainder_size = victim_size - nb;

    if (remainder_size >= (long)MINSIZE) /* re-split */
    {
      remainder = chunk_at_offset(victim, nb);
      set_head(victim, nb | PREV_INUSE);
      link_last_remainder(remainder);
      set_head(remainder, remainder_size | PREV_INUSE);
      set_foot(remainder, remainder_size);
      check_malloced_chunk(victim, nb);
      cumblocks++;
      cumbytes += nb;
      semGive(dl_mem_sid);
      return chunk2mem(victim);
    }

    clear_last_remainder;

    if (remainder_size >= 0)  /* exhaust */
    {
      set_inuse_bit_at_offset(victim, victim_size);
      check_malloced_chunk(victim, nb);
      cumblocks++;               /* update stats before releasing the lock */
      cumbytes += nb;
      semGive(dl_mem_sid);
      return chunk2mem(victim);
    }

    /* Else place in bin */

    frontlink(victim, victim_size, remainder_index, bck, fwd);
  }

  /*
     If there are any possibly nonempty big-enough blocks,
     search for best fitting chunk by scanning bins in blockwidth units.
  */

  if ( (block = idx2binblock(idx)) <= binblocks)
  {
    /* Get to the first marked block */

    if ( (block & binblocks) == 0)
    {
      /* force to an even block boundary */
      idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
      block <<= 1;
      while ((block & binblocks) == 0)
      {
        idx += BINBLOCKWIDTH;
        block <<= 1;
      }
    }

    /* For each possibly nonempty block ... */
    for (;;)
    {
      startidx = idx;            /* (track incomplete blocks) */
      q = bin = bin_at(idx);

      /* For each bin in this block ... */
      do
      {
        /* Find and use first big enough chunk ... */
        for (victim = last(bin); victim != bin; victim = victim->bk)
        {
          victim_size = chunksize(victim);
          remainder_size = victim_size - nb;

          if (remainder_size >= (long)MINSIZE) /* split */
          {
            remainder = chunk_at_offset(victim, nb);
            set_head(victim, nb | PREV_INUSE);
            unlink(victim, bck, fwd);
            link_last_remainder(remainder);
            set_head(remainder, remainder_size | PREV_INUSE);
            set_foot(remainder, remainder_size);
            check_malloced_chunk(victim, nb);
            cumblocks++;
            cumbytes += nb;
            semGive(dl_mem_sid);
            return chunk2mem(victim);
          }
          else if (remainder_size >= 0)  /* take */
          {
            set_inuse_bit_at_offset(victim, victim_size);
            unlink(victim, bck, fwd);
            check_malloced_chunk(victim, nb);
            cumblocks++;
            cumbytes += nb;
            semGive(dl_mem_sid);
            return chunk2mem(victim);
          }
        }

        bin = next_bin(bin);

      } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);

      /* Clear out the block bit. */

      do   /* Possibly backtrack to try to clear a partial block */
      {
        if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
        {
          binblocks &= ~block;
          break;
        }
        --startidx;
        q = prev_bin(q);
      } while (first(q) == q);

      /* Get to the next possibly nonempty block */

      if ( (block <<= 1) <= binblocks && (block != 0) )
      {
        while ((block & binblocks) == 0)
        {
          idx += BINBLOCKWIDTH;
          block <<= 1;
        }
      }
      else
        break;
    }
  }

  /* Try to use top chunk */

  /* Require that there be a remainder, ensuring top always exists */
  if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
  {
    /* Try to extend */
    malloc_extend_top(nb);
    if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
    {
      dbg_printf(" malloc failed %u\n", (unsigned)bytes);
      semGive(dl_mem_sid);
      return 0; /* propagate failure */
    }
  }

  victim = top;
  set_head(victim, nb | PREV_INUSE);
  top = chunk_at_offset(victim, nb);
  set_head(top, remainder_size | PREV_INUSE);
  check_malloced_chunk(victim, nb);
  cumblocks++;
  cumbytes += nb;
  semGive(dl_mem_sid);
  return chunk2mem(victim);
}
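/*
  Sketch (illustrative, not from the original source) of what the
  unlink() macro used above does: chunks in a bin form a circular
  doubly-linked list through their fd/bk fields, so removal is O(1).
  The demo_* names are hypothetical.
*/
struct demo_link { struct demo_link *fd, *bk; };

static void demo_unlink(struct demo_link *victim)
{
  struct demo_link *bck = victim->bk;   /* neighbours in the bin list */
  struct demo_link *fwd = victim->fd;
  bck->fd = fwd;                        /* splice victim out of the ring */
  fwd->bk = bck;
}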
/*
  free() algorithm:

    cases:

       1. free(0) has no effect.

       2. If the chunk was allocated via mmap, it is released via
          munmap().

       3. If a returned chunk borders the current high end of memory,
          it is consolidated into the top, and if the total unused
          topmost memory exceeds the trim threshold, malloc_trim is
          called.

       4. Other chunks are consolidated as they arrive, and
          placed in corresponding bins.  (This includes the case of
          consolidating with the current `last_remainder').
*/
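/*
  Sketch (illustrative only) of the boundary-tag arithmetic that the
  consolidation below relies on: a free chunk stores its size at its
  "foot", which is the prev_size field of the chunk right after it,
  so free() can step back to a free neighbour in constant time.  The
  demo_* names are hypothetical.
*/
struct demo_tag { size_t prev_size; size_t size; };

static struct demo_tag *demo_prev_chunk(struct demo_tag *p)
{
  /* valid only when the previous chunk is free (its PREV_INUSE clear) */
  return (struct demo_tag *)((char *)p - p->prev_size);
}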
void fREe(Void_t* mem)
{
  mchunkptr p;         /* chunk corresponding to mem */
  INTERNAL_SIZE_T hd;  /* its head field */
  INTERNAL_SIZE_T sz;  /* its size */
  int       idx;       /* its bin index */
  mchunkptr next;      /* next contiguous chunk */
  INTERNAL_SIZE_T nextsz; /* its size */
  INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
  mchunkptr bck;       /* misc temp for linking */
  mchunkptr fwd;       /* misc temp for linking */
  int       islr;      /* track whether merging with last_remainder */

  if (mem == 0)                              /* free(0) has no effect */
    return;

  /* Take the heap lock before reading shared chunk metadata */
  semTake(dl_mem_sid, WAIT_FOREVER);

  p = mem2chunk(mem);
  hd = p->size;

  check_inuse_chunk(p);

  sz = hd & ~PREV_INUSE;
  next = chunk_at_offset(p, sz);
  nextsz = chunksize(next);

  if (next == top)                           /* merge with top */
  {
    sz += nextsz;

    if (!(hd & PREV_INUSE))                  /* consolidate backward */
    {
      prevsz = p->prev_size;
      p = chunk_at_offset(p, -prevsz);
      sz += prevsz;
      unlink(p, bck, fwd);
    }

    set_head(p, sz | PREV_INUSE);
    top = p;
    if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
      malloc_trim(top_pad);
    semGive(dl_mem_sid);
    return;
  }

  set_head(next, nextsz);                    /* clear inuse bit */

  islr = 0;
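  /*
     A summary sketch (not part of the original source) of the locking
     discipline visible in mALLoc and fREe above: every public entry
     point brackets heap-metadata work with the VxWorks mutex
     dl_mem_sid, and every early return releases it:

         semTake(dl_mem_sid, WAIT_FOREVER);   take the lock, block forever
           ... read or modify bins, top, chunk headers ...
         semGive(dl_mem_sid);                 release on every exit path
  */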
