
📄 sbm.c

📁 minix software source code
💻 C
📖 Page 1 of 3
		return(0);	/* Found none that minimum would fit */

	if(sm->smflags&SM_NXM)
	  {	/* Found free area, but it's marked NXM and the system
		 * must be persuaded (via sbrk) to let us use that portion
		 * of our address space.  Grab a good-sized chunk.
		 */
		if(sbm_nfl == 0)	/* Verify a spare SM node is avail */
			goto getnod;	/* Nope, must get one. */

		/* Decide amount of mem to ask system for, via sbrk.
		 * The fine point here is the check of sbm_nxtra to make sure
		 * that, when building more freelist nodes, we don't have
		 * to use more than one SM node in the process.  If we
		 * asked for too much mem, we'd have to use a SM node
		 * to hold the excess after splitting.
		 */
		csiz = cmax;
		if(sbm_nxtra		/* If normal then try for big chunk */
		  && csiz < sbm_chksiz) csiz = sbm_chksiz;	/* Max */
		if (csiz > sm->smlen)  csiz = sm->smlen;	/* Min */

		/* Get the NXM mem */
		if((addr = (SBMA)SBM_SBRK(csiz)) != sm->smaddr)
		  {     /* Unexpected value returned from SBRK! */
			if((int)addr != 0 && (int)addr != -1)
			  {	return(sbm_err(0,"SBRK %o != %o", addr,
						sm->smaddr));
#if 0
			/* If value indicates couldn't get the stuff, then
			 * we have probably hit our limit and the rest of
			 * NXM should be declared "used" to prevent further
			 * hopeless sbrk calls.  We split off the portion
			 * of NXM that is known for sure to be unavailable,
			 * and mark it "used".  If a "used NXM" area already
			 * exists following this one, the two are merged.
			 * The chunk size is then reduced by half, so
			 * only log2(SMCHUNKSIZ) attempts will be made, and
			 * we try again.
			 */
				/* If returned some mem which starts outside
				 * the NXM then something is screwed up. */
				if(addr < sm->smaddr
				  || (addr >= sm->smaddr+sm->smlen))
					return(sbm_err(0,"SBRK %o != %o",
						addr, sm->smaddr));
				/* Got some mem, falls within NXM.
				 * Presumably someone else has called sbrk
				 * since last time, so we need to fence off
				 * the intervening area. */
				sm = sbm_split((sml=sm),(addr - sm->smaddr));
				sml->smflags |= SM_USE|SM_EXT;
				return(sbm_mget(cmin,cmax));
#endif /*COMMENT*/
			  }

			/* Handle case of SBRK claiming no more memory.
			 * Gobble as much as we can, and then turn this NXM
			 * block into a free-mem block, and leave the
			 * remainder in the used-NXM block (which should
			 * immediately follow this free-NXM block!)
			 */
			if(!(sml = sm->smforw)	/* Ensure have used-NXM blk */
			  || (sml->smflags&(SM_USE|SM_NXM))
					!= (SM_USE|SM_NXM))
				return(sbm_err(0,"No uNXM node!"));
			xaddr = sm->smaddr;	/* Use this for checking */
			sm->smuse = 0;		/* Use this for sum */
			for(csiz = sm->smlen; csiz > 0;)
			  {	addr = SBM_SBRK(csiz);
				if((int)addr == 0 || (int)addr == -1)
				  {	csiz >>= 1;
					continue;
				  }
				if(addr != xaddr)
					return(sbm_err(0,"SBRK %o != %o", addr,
						xaddr));
				sm->smuse += csiz;
				xaddr += csiz;
			  }

			/* Have gobbled as much from SBRK as we could.
			 * Turn the free-NXM block into a free-mem block,
			 * unless we got nothing, in which case just merge
			 * it into the used-NXM block and continue
			 * searching from this point.
			 */
			if(!(csiz = sm->smuse))	/* Get total added */
			  {	sm->smflags = sml->smflags;	/* Ugh. */
				sbm_mmrg(sm);
				goto retry;		/* Keep looking */
			  }
			else
			  {	sml->smaddr = sm->smaddr + csiz;
				sml->smlen += sm->smlen - csiz;
				sm->smlen = csiz;
				sm->smflags &= ~SM_NXM;	/* No longer NXM */
			  }
		  }

		/* Here when we've acquired CSIZ more memory from sbrk.
		 * If preceding mem area is not in use, merge new mem
		 * into it.
		 */
		if((sml = sm->smback)
		  && (sml->smflags&(SM_USE|SM_NXM))==0)    /* Previous free? */
		  {     sml->smlen += csiz;		/* Yes, simple! */
			sm->smaddr += csiz;		/* Fix up */
			if((sm->smlen -= csiz) == 0)	/* If no NXM left,*/
				sbm_mmrg(sml);	/* Merge NXM node w/prev */
			sm = sml;		/* Prev is now winning node */
		  }
		else
		  {	/* Prev node isn't a free area.  Split up the NXM
			 * node to account for acquired mem, unless we
			 * gobbled all the mem available.
			 */
			if(sm->smlen > csiz	/* Split unless all used */
			  && !sbm_split(sm,csiz)) /* Call shd always win */
				return(sbm_err(0,"getsplit err: %o",sm));
			sm->smflags &= ~SM_NXM;	/* Node is now real mem */
		  }

		/* Now make a final check that we have enough memory.
		 * This can fail because SBRK may not have been able
		 * to gobble enough memory, either because (1) not
		 * as much NXM was available as we thought,
		 * or (2) we noticed the free-NXM area and immediately
		 * gambled on trying it without checking any lengths.
		 * In any case, we try again starting from the current SM
		 * because there may be more free mem higher up (eg on
		 * stack).
		 */
		if(sm->smlen < cmin)
			goto retry;
	  }

	/* Check to see if node has too much mem.  This is especially true
	 * for memory just acquired via sbrk, which gobbles a huge chunk each
	 * time.  If there's too much, we split up the area.
	 */
	if(sm->smlen > cmax+FUDGE)	/* Got too much?  (Allow some fudge)*/
		/* Yes, split up so don't gobble too much. */
		if(sbm_nfl)                     /* If success guaranteed, */
			sbm_split(sm,cmax);     /* split it, all's well. */
		else goto getnod;

	sm->smuse = 0;
	sm->smflags |= SM_USE;  /* Finally seize it by marking "in-use". */
	return(sm);

	/* Come here when we will need to get another SM node but the
	 * SM freelist is empty.  We have to forget about using the area
	 * we just found, because sbm_nget may gobble it for the
	 * freelist.  So, we first force a refill of the freelist, and then
	 * invoke ourselves again on what's left.
	 */
getnod:	if(sml = sbm_nget())		/* Try to build freelist */
	  {	sbm_nfre(sml);		/* Won, give node back, */
		sm = sbm_list;		/* and retry, starting over! */
		goto retry;
	  }

	/* Failed.  Not enough memory for both this request
	 * and one more block of SM nodes.  Since such a SM_MNODS
	 * block isn't very big, we are so close to the limits that it
	 * isn't worth trying to do something fancy here to satisfy the
	 * original request.  So we just fail.
	 */
	return(0);
}

#ifdef DBG_SIZE
/* Code for debugging stuff by imposing an artificial limitation on size
 * of available memory.
 */
SBMO sbm_dlim = MAXSBMO;	/* Amount of mem to allow (default is max) */

char *
sbm_brk(size)
unsigned size;
{	register char *addr;

	if(size > sbm_dlim) return(0);
	addr = sbrk(size);
	if((int)addr == 0 || (int)addr == -1)
		return(0);
	sbm_dlim -= size;
	return(addr);
}
#endif /*DBG_SIZE*/

/* SBM_MFREE(sm) - Free up an allocated memory area.
 */
sbm_mfree(sm)
register struct smblk *sm;
{       register struct smblk *smx;
	register SBMO crem;

	sm->smflags &= ~SM_USE;			/* Say mem is free */
	if((smx = sm->smback)                   /* Check preceding mem */
	  && (smx->smflags&(SM_USE|SM_NXM))==0) /*   If it's free, */
		sbm_mmrg(sm = smx);		/*   then merge 'em. */
	if((smx = sm->smforw)			/* Check following mem */
	  && (smx->smflags&(SM_USE|SM_NXM))==0) /*   Again, if free, */
		sbm_mmrg(sm);                   /*   merge them. */

	if(sm->smlen == 0)              /* Just in case, chk for null blk */
	  {     if(smx = sm->smback)            /* If pred exists, */
			sbm_mmrg(smx);          /* merge quietly. */
		else {
			sbm_list = sm->smforw;  /* 1st node on list, so */
			sbm_nfre(sm);           /* simply flush it. */
		  }
		return;
	  }

	/* This code is slightly over-general for some machines.
	 * The pointer subtraction is done in order to get a valid integer
	 * offset value regardless of the internal representation of a pointer.
	 * We cannot reliably force alignment via casts; some C implementations
	 * treat that as a no-op.
	 */
	if(crem = rndrem(sm->smaddr - sbm_lowaddr))	/* On word bndry? */
	  {     /* No -- must adjust.  All free mem blks MUST, by fiat,
		 * start on word boundary.  Here we fix things by
		 * making the leftover bytes belong to the previous blk,
		 * no matter what it is used for.  Prev blk is guaranteed to
		 * (1) Exist (this cannot be 1st blk since 1st is known to
		 * start on wd boundary) and to be (2) Non-free (else it would
		 * have been merged).
		 */
		if((smx = sm->smback) == 0)     /* Get ptr to prev blk */
		  {	sbm_err(0,"Align err");	/* Catch screws */
			return;
		  }
		crem = WDSIZE - crem;	/* Find # bytes to flush */
		if(crem >= sm->smlen)	/* Make sure node has that many */
		  {	sbm_mmrg(smx);  /* Flush node to avoid zero length */
			return;
		  }
		smx->smlen += crem;	/* Make stray bytes part of prev */
		sm->smaddr += crem;	/* And flush from current. */
		sm->smlen -= crem;
	  }
}

/* SBM_EXP - Expand (or shrink) size of an allocated memory chunk.
 *	"nsize" is desired new size; may be larger or smaller than current
 *	size.
 */
struct smblk *
sbm_exp(sm,size)
register struct smblk *sm;
register SBMO size;
{       register struct smblk *smf;
	register SBMO mexp, pred, succ;

	if(sm->smlen >= size)		/* Do we want truncation? */
		goto realo2;		/* Yup, go split block */

	/* Block is expanding. */
	mexp = size - sm->smlen;		/* Get # bytes to expand by */
	pred = succ = 0;
	if((smf = sm->smforw)           	/* See if free mem follows */
	 && (smf->smflags&(SM_USE|SM_NXM)) == 0)
		if((succ = smf->smlen) >= mexp)
			goto realo1;		/* Quick stuff if succ OK */

	if((smf = sm->smback)			/* See if free mem precedes */
	 && (smf->smflags&(SM_USE|SM_NXM)) == 0)
		pred = smf->smlen;

	/* If not enough free space combined on both sides of this chunk,
	 * we have to look for a completely new block.
	 */
	if(pred+succ < mexp)
	  {	if((smf = sbm_mget(size,size)) == 0)
			return(0);              /* Couldn't find one */
		else pred = 0;			/* Won, indicate new block */
	  }

	/* OK, must copy either into new block or down into predecessor
	 * (overlap is OK as long as bcopy moves 1st byte first)
	 */
	bcopy(sm->smaddr, smf->smaddr, sm->smlen);
	smf->smflags = sm->smflags;     /* Copy extra attribs */
	smf->smuse = sm->smuse;
	if(!pred)			/* If invoked sbm_mget */
	  {	sbm_mfree(sm);		/* then must free up old area */
		return(smf);		/* and can return immediately. */
	  }
	sbm_mmrg(smf);			/* Merge current into pred blk */
	sm = smf;			/* Now pred is current blk. */
	if(succ)
realo1:		sbm_mmrg(sm);		/* Merge succ into current blk */

realo2: if(sm->smlen > size		/* If now have too much, */
	  && sbm_split(sm, size))       /* split up and possibly */
		sbm_mfree(sm->smforw);  /* free up unused space. */
	return(sm);

	/* Note that sbm_split can fail if it can't get a free node,
	 * which is only possible if we are reducing the size of an area.
	 * If it fails, we just return anyway without truncating the area.
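
The fragment above already exposes the allocator's caller-facing entry points: sbm_mget(cmin,cmax) returns a struct smblk describing a chunk of at least cmin bytes (and not much more than cmax, give or take FUDGE), sbm_exp(sm,size) grows or shrinks that chunk, and sbm_mfree(sm) hands it back and merges it with any free neighbours. The following is a minimal caller-side sketch, assuming only what this listing shows; the extern declarations, the demo() wrapper, and the byte counts are illustrative and not part of sbm.c.

extern struct smblk *sbm_mget(), *sbm_exp();	/* entry points defined in sbm.c */

demo()
{	register struct smblk *sm, *smn;

	if((sm = sbm_mget(512, 1024)) == 0)	/* ask for 512..~1024 bytes */
		return(0);			/* nothing left, even after sbrk */
	/* sm->smaddr now points to sm->smlen usable bytes. */
	if((smn = sbm_exp(sm, 2048)) == 0)	/* try to resize (data may move) */
		sbm_mfree(sm);			/* failed: original chunk still valid */
	else
		sbm_mfree(smn);			/* done: free it, neighbours merge back */
	return(1);
}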
