
freespace.c

Project: PostgreSQL7.4.6 for Linux
Language: C
		if (newAlloc > FreeSpaceMap->totalChunks - nextChunkIndex)
			newAlloc = FreeSpaceMap->totalChunks - nextChunkIndex;
		if (fsmrel->isIndex)
			newAllocPages = newAlloc * INDEXCHUNKPAGES;
		else
			newAllocPages = newAlloc * CHUNKPAGES;

		/*
		 * Determine current size, current and new locations
		 */
		curChunks = fsm_current_chunks(fsmrel);
		oldChunkIndex = fsmrel->firstChunk;
		oldLocation = FreeSpaceMap->arena + oldChunkIndex * CHUNKBYTES;
		newChunkIndex = nextChunkIndex;
		newLocation = FreeSpaceMap->arena + newChunkIndex * CHUNKBYTES;

		/*
		 * It's possible that we have to move data down, not up, if the
		 * allocations of previous rels expanded.  This normally means that
		 * our allocation expanded too (or at least got no worse), and
		 * ditto for later rels.  So there should be room to move all our
		 * data down without dropping any --- but we might have to push down
		 * following rels to acquire the room.  We don't want to do the push
		 * more than once, so pack everything against the end of the arena
		 * if so.
		 *
		 * In corner cases where we are on the short end of a roundoff choice
		 * that we were formerly on the long end of, it's possible that we
		 * have to move down and compress our data too.  In fact, even after
		 * pushing down the following rels, there might not be as much space
		 * as we computed for this rel above --- that would imply that some
		 * following rel(s) are also on the losing end of roundoff choices.
		 * We could handle this fairly by doing the per-rel compactions
		 * out-of-order, but that seems like way too much complexity to deal
		 * with a very infrequent corner case.  Instead, we simply drop pages
		 * from the end of the current rel's data until it fits.
		 */
		if (newChunkIndex > oldChunkIndex)
		{
			int			limitChunkIndex;

			if (newAllocPages < fsmrel->storedPages)
			{
				/* move and compress --- just drop excess pages */
				fsmrel->storedPages = newAllocPages;
				curChunks = fsm_current_chunks(fsmrel);
			}
			/* is there enough space? */
			if (fsmrel->nextPhysical != NULL)
				limitChunkIndex = fsmrel->nextPhysical->firstChunk;
			else
				limitChunkIndex = FreeSpaceMap->totalChunks;
			if (newChunkIndex + curChunks > limitChunkIndex)
			{
				/* not enough space, push down following rels */
				if (!did_push)
				{
					push_fsm_rels_after(fsmrel);
					did_push = true;
				}
				/* now is there enough space? */
				if (fsmrel->nextPhysical != NULL)
					limitChunkIndex = fsmrel->nextPhysical->firstChunk;
				else
					limitChunkIndex = FreeSpaceMap->totalChunks;
				if (newChunkIndex + curChunks > limitChunkIndex)
				{
					/* uh-oh, forcibly cut the allocation to fit */
					newAlloc = limitChunkIndex - newChunkIndex;

					/*
					 * If newAlloc < 0 at this point, we are moving the rel's
					 * firstChunk into territory currently assigned to a later
					 * rel.  This is okay so long as we do not copy any data.
					 * The rels will be back in nondecreasing firstChunk order
					 * at completion of the compaction pass.
					 */
					if (newAlloc < 0)
						newAlloc = 0;
					if (fsmrel->isIndex)
						newAllocPages = newAlloc * INDEXCHUNKPAGES;
					else
						newAllocPages = newAlloc * CHUNKPAGES;
					fsmrel->storedPages = newAllocPages;
					curChunks = fsm_current_chunks(fsmrel);
				}
			}
			memmove(newLocation, oldLocation, curChunks * CHUNKBYTES);
		}
		else if (newAllocPages < fsmrel->storedPages)
		{
			/*
			 * Need to compress the page data.  For an index,
			 * "compression" just means dropping excess pages; otherwise
			 * we try to keep the ones with the most space.
			 */
			if (fsmrel->isIndex)
			{
				fsmrel->storedPages = newAllocPages;
				/* may need to move data */
				if (newChunkIndex != oldChunkIndex)
					memmove(newLocation, oldLocation, newAlloc * CHUNKBYTES);
			}
			else
			{
				pack_existing_pages((FSMPageData *) newLocation,
									newAllocPages,
									(FSMPageData *) oldLocation,
									fsmrel->storedPages);
				fsmrel->storedPages = newAllocPages;
			}
		}
		else if (newChunkIndex != oldChunkIndex)
		{
			/*
			 * No compression needed, but must copy the data up
			 */
			memmove(newLocation, oldLocation, curChunks * CHUNKBYTES);
		}
		fsmrel->firstChunk = newChunkIndex;
		nextChunkIndex += newAlloc;
	}
	Assert(nextChunkIndex <= FreeSpaceMap->totalChunks);
	FreeSpaceMap->usedChunks = nextChunkIndex;
}

/*
 * Push all FSMRels physically after afterRel to the end of the storage arena.
 *
 * We sometimes have to do this when deletion or truncation of a relation
 * causes the allocations of remaining rels to expand markedly.  We must
 * temporarily push existing data down to the end so that we can move it
 * back up in an orderly fashion.
 */
static void
push_fsm_rels_after(FSMRelation *afterRel)
{
	int			nextChunkIndex = FreeSpaceMap->totalChunks;
	FSMRelation *fsmrel;

	FreeSpaceMap->usedChunks = FreeSpaceMap->totalChunks;

	for (fsmrel = FreeSpaceMap->lastRel;
		 fsmrel != NULL;
		 fsmrel = fsmrel->priorPhysical)
	{
		int			chunkCount;
		int			newChunkIndex;
		int			oldChunkIndex;
		char	   *newLocation;
		char	   *oldLocation;

		if (fsmrel == afterRel)
			break;

		chunkCount = fsm_current_chunks(fsmrel);
		nextChunkIndex -= chunkCount;
		newChunkIndex = nextChunkIndex;
		oldChunkIndex = fsmrel->firstChunk;
		if (newChunkIndex < oldChunkIndex)
		{
			/* we're pushing down, how can it move up? */
			elog(PANIC, "inconsistent entry sizes in FSM");
		}
		else if (newChunkIndex > oldChunkIndex)
		{
			/* need to move it */
			newLocation = FreeSpaceMap->arena + newChunkIndex * CHUNKBYTES;
			oldLocation = FreeSpaceMap->arena + oldChunkIndex * CHUNKBYTES;
			memmove(newLocation, oldLocation, chunkCount * CHUNKBYTES);
			fsmrel->firstChunk = newChunkIndex;
		}
	}
	Assert(nextChunkIndex >= 0);
}

/*
 * Pack a set of per-page freespace data into a smaller amount of space.
 *
 * The method is to compute a low-resolution histogram of the free space
 * amounts, then determine which histogram bin contains the break point.
 * We then keep all pages above that bin, none below it, and just enough
 * of the pages in that bin to fill the output area exactly.
 */
#define HISTOGRAM_BINS	64

static void
pack_incoming_pages(FSMPageData *newLocation, int newPages,
					PageFreeSpaceInfo *pageSpaces, int nPages)
{
	int			histogram[HISTOGRAM_BINS];
	int			above,
				binct,
				i;
	Size		thresholdL,
				thresholdU;

	Assert(newPages < nPages);	/* else I shouldn't have been called */
	/* Build histogram */
	MemSet(histogram, 0, sizeof(histogram));
	for (i = 0; i < nPages; i++)
	{
		Size		avail = pageSpaces[i].avail;

		if (avail >= BLCKSZ)
			elog(ERROR, "bogus freespace amount");
		avail /= (BLCKSZ / HISTOGRAM_BINS);
		histogram[avail]++;
	}
	/* Find the breakpoint bin */
	above = 0;
	for (i = HISTOGRAM_BINS - 1; i >= 0; i--)
	{
		int			sum = above + histogram[i];

		if (sum > newPages)
			break;
		above = sum;
	}
	Assert(i >= 0);
	thresholdL = i * BLCKSZ / HISTOGRAM_BINS;	/* low bound of bp bin */
	thresholdU = (i + 1) * BLCKSZ / HISTOGRAM_BINS;		/* hi bound */
	binct = newPages - above;	/* number to take from bp bin */
	/* And copy the appropriate data */
	for (i = 0; i < nPages; i++)
	{
		BlockNumber page = pageSpaces[i].blkno;
		Size		avail = pageSpaces[i].avail;

		/* Check caller provides sorted data */
		if (i > 0 && page <= pageSpaces[i - 1].blkno)
			elog(ERROR, "free-space data is not in page order");
		/* Save this page? */
		if (avail >= thresholdU ||
			(avail >= thresholdL && (--binct >= 0)))
		{
			FSMPageSetPageNum(newLocation, page);
			FSMPageSetSpace(newLocation, avail);
			newLocation++;
			newPages--;
		}
	}
	Assert(newPages == 0);
}

/*
 * Pack a set of per-page freespace data into a smaller amount of space.
 *
 * This is algorithmically identical to pack_incoming_pages(), but accepts
 * a different input representation.  Also, we assume the input data has
 * previously been checked for validity (size in bounds, pages in order).
 *
 * Note: it is possible for the source and destination arrays to overlap.
 * The caller is responsible for making sure newLocation is at lower addresses
 * so that we can copy data moving forward in the arrays without problem.
 */
static void
pack_existing_pages(FSMPageData *newLocation, int newPages,
					FSMPageData *oldLocation, int oldPages)
{
	int			histogram[HISTOGRAM_BINS];
	int			above,
				binct,
				i;
	Size		thresholdL,
				thresholdU;

	Assert(newPages < oldPages);	/* else I shouldn't have been called */
	/* Build histogram */
	MemSet(histogram, 0, sizeof(histogram));
	for (i = 0; i < oldPages; i++)
	{
		Size		avail = FSMPageGetSpace(oldLocation + i);

		/* Shouldn't happen, but test to protect against stack clobber */
		if (avail >= BLCKSZ)
			elog(ERROR, "bogus freespace amount");
		avail /= (BLCKSZ / HISTOGRAM_BINS);
		histogram[avail]++;
	}
	/* Find the breakpoint bin */
	above = 0;
	for (i = HISTOGRAM_BINS - 1; i >= 0; i--)
	{
		int			sum = above + histogram[i];

		if (sum > newPages)
			break;
		above = sum;
	}
	Assert(i >= 0);
	thresholdL = i * BLCKSZ / HISTOGRAM_BINS;	/* low bound of bp bin */
	thresholdU = (i + 1) * BLCKSZ / HISTOGRAM_BINS;		/* hi bound */
	binct = newPages - above;	/* number to take from bp bin */
	/* And copy the appropriate data */
	for (i = 0; i < oldPages; i++)
	{
		BlockNumber page = FSMPageGetPageNum(oldLocation + i);
		Size		avail = FSMPageGetSpace(oldLocation + i);

		/* Save this page? */
		if (avail >= thresholdU ||
			(avail >= thresholdL && (--binct >= 0)))
		{
			FSMPageSetPageNum(newLocation, page);
			FSMPageSetSpace(newLocation, avail);
			newLocation++;
			newPages--;
		}
	}
	Assert(newPages == 0);
}

/*
 * Calculate number of chunks "requested" by a rel.
 *
 * Rel's lastPageCount and isIndex settings must be up-to-date when called.
 *
 * See notes at top of file for details.
 */
static int
fsm_calc_request(FSMRelation *fsmrel)
{
	int			chunkCount;

	/* Convert page count to chunk count */
	if (fsmrel->isIndex)
		chunkCount = (fsmrel->lastPageCount - 1) / INDEXCHUNKPAGES + 1;
	else
		chunkCount = (fsmrel->lastPageCount - 1) / CHUNKPAGES + 1;
	/* "Request" is anything beyond our one guaranteed chunk */
	if (chunkCount <= 0)
		return 0;
	else
		return chunkCount - 1;
}

/*
 * Calculate target allocation (number of chunks) for a rel
 *
 * Parameter is the result from fsm_calc_request().  The global sumRequests
 * and numRels totals must be up-to-date already.
 *
 * See notes at top of file for details.
 */
static int
fsm_calc_target_allocation(int myRequest)
{
	double		spareChunks;
	int			extra;

	spareChunks = FreeSpaceMap->totalChunks - FreeSpaceMap->numRels;
	Assert(spareChunks > 0);
	if (spareChunks >= FreeSpaceMap->sumRequests)
	{
		/* We aren't oversubscribed, so allocate exactly the request */
		extra = myRequest;
	}
	else
	{
		extra = (int) rint(spareChunks * myRequest / FreeSpaceMap->sumRequests);
		if (extra < 0)			/* shouldn't happen, but make sure */
			extra = 0;
	}
	return 1 + extra;
}

/*
 * Calculate number of chunks actually used to store current data
 */
static int
fsm_current_chunks(FSMRelation *fsmrel)
{
	int			chunkCount;

	/* Make sure storedPages==0 produces right answer */
	if (fsmrel->storedPages <= 0)
		return 0;
	/* Convert page count to chunk count */
	if (fsmrel->isIndex)
		chunkCount = (fsmrel->storedPages - 1) / INDEXCHUNKPAGES + 1;
	else
		chunkCount = (fsmrel->storedPages - 1) / CHUNKPAGES + 1;
	return chunkCount;
}

/*
 * Calculate current actual allocation (number of chunks) for a rel
 */
static int
fsm_current_allocation(FSMRelation *fsmrel)
{
	if (fsmrel->nextPhysical != NULL)
		return fsmrel->nextPhysical->firstChunk - fsmrel->firstChunk;
	else if (fsmrel == FreeSpaceMap->lastRel)
		return FreeSpaceMap->usedChunks - fsmrel->firstChunk;
	else
	{
		/* it's not in the storage-order list */
		Assert(fsmrel->firstChunk < 0 && fsmrel->storedPages == 0);
		return 0;
	}
}


#ifdef FREESPACE_DEBUG
/*
 * Dump contents of freespace map for debugging.
 *
 * We assume caller holds the FreeSpaceLock, or is otherwise unconcerned
 * about other processes.
 */
void
DumpFreeSpace(void)
{
	FSMRelation *fsmrel;
	FSMRelation *prevrel = NULL;
	int			relNum = 0;
	int			nPages;

	for (fsmrel = FreeSpaceMap->usageList; fsmrel; fsmrel = fsmrel->nextUsage)
	{
		relNum++;
		fprintf(stderr, "Map %d: rel %u/%u isIndex %d avgRequest %u lastPageCount %d nextPage %d\nMap= ",
				relNum, fsmrel->key.tblNode, fsmrel->key.relNode,
				(int) fsmrel->isIndex, fsmrel->avgRequest,
				fsmrel->lastPageCount, fsmrel->nextPage);
		if (fsmrel->isIndex)
		{
			IndexFSMPageData *page;

			page = (IndexFSMPageData *)
				(FreeSpaceMap->arena + fsmrel->firstChunk * CHUNKBYTES);
			for (nPages = 0; nPages < fsmrel->storedPages; nPages++)
			{
				fprintf(stderr, " %u",
						IndexFSMPageGetPageNum(page));
				page++;
			}
		}
		else
		{
			FSMPageData *page;

			page = (FSMPageData *)
				(FreeSpaceMap->arena + fsmrel->firstChunk * CHUNKBYTES);
			for (nPages = 0; nPages < fsmrel->storedPages; nPages++)
			{
				fprintf(stderr, " %u:%u",
						FSMPageGetPageNum(page),
						FSMPageGetSpace(page));
				page++;
			}
		}
		fprintf(stderr, "\n");
		/* Cross-check list links */
		if (prevrel != fsmrel->priorUsage)
			fprintf(stderr, "DumpFreeSpace: broken list links\n");
		prevrel = fsmrel;
	}
	if (prevrel != FreeSpaceMap->usageListTail)
		fprintf(stderr, "DumpFreeSpace: broken list links\n");
	/* Cross-check global counters */
	if (relNum != FreeSpaceMap->numRels)
		fprintf(stderr, "DumpFreeSpace: %d rels in list, but numRels = %d\n",
				relNum, FreeSpaceMap->numRels);
}

#endif   /* FREESPACE_DEBUG */
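The pack_incoming_pages()/pack_existing_pages() routines above decide which pages survive compression with a 64-bin histogram of free-space amounts: keep every page above the breakpoint bin, none below it, and just enough pages from the bin itself to fill the output exactly. The standalone sketch below replays that selection on made-up data; the BLCKSZ value, the page array, and the printout are illustrative assumptions, not PostgreSQL's definitions.

/*
 * Standalone sketch of the histogram-based page selection used by
 * pack_incoming_pages()/pack_existing_pages().  All data here is invented
 * for illustration.
 */
#include <stdio.h>
#include <string.h>

#define BLCKSZ			8192	/* assumed default PostgreSQL block size */
#define HISTOGRAM_BINS	64

int
main(void)
{
	/* free-space amounts (bytes) for 10 hypothetical pages, in block order */
	unsigned	avail[] = {100, 7000, 250, 4096, 64, 5120, 300, 6000, 128, 2048};
	int			nPages = 10;
	int			newPages = 4;	/* room to keep only 4 entries */
	int			histogram[HISTOGRAM_BINS];
	int			above = 0, binct, i;
	unsigned	thresholdL, thresholdU;

	/* Build histogram: bin width is BLCKSZ / HISTOGRAM_BINS = 128 bytes */
	memset(histogram, 0, sizeof(histogram));
	for (i = 0; i < nPages; i++)
		histogram[avail[i] / (BLCKSZ / HISTOGRAM_BINS)]++;

	/* Scan from the fullest bin down until taking one more would overflow */
	for (i = HISTOGRAM_BINS - 1; i >= 0; i--)
	{
		int			sum = above + histogram[i];

		if (sum > newPages)
			break;				/* bin i is the breakpoint bin */
		above = sum;
	}
	/* (the real code Asserts i >= 0 here) */
	thresholdL = i * (BLCKSZ / HISTOGRAM_BINS);			/* low bound of bp bin */
	thresholdU = (i + 1) * (BLCKSZ / HISTOGRAM_BINS);	/* high bound */
	binct = newPages - above;	/* how many to take from the bp bin */

	/* Keep everything above the bin, plus binct pages from within it */
	for (i = 0; i < nPages; i++)
	{
		if (avail[i] >= thresholdU ||
			(avail[i] >= thresholdL && --binct >= 0))
			printf("keep page %d (%u bytes free)\n", i, avail[i]);
	}
	return 0;
}

Note also why pack_existing_pages() can pack in place: per its header comment, newLocation is guaranteed to be at or below oldLocation, and both read and write positions advance forward, so each entry is read before its slot can be overwritten.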
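fsm_calc_target_allocation() grants each rel one guaranteed chunk plus its full "extra" request when the arena has enough spare chunks, and otherwise scales the request proportionally to what is available. A minimal worked example, with totals invented purely for illustration:

/*
 * Worked example of fsm_calc_target_allocation()'s proportional scaling.
 * The FSM totals below are hypothetical, not taken from a real instance.
 */
#include <math.h>
#include <stdio.h>

int
main(void)
{
	double		totalChunks = 1000;	/* hypothetical arena size in chunks */
	double		numRels = 10;		/* each rel holds 1 guaranteed chunk */
	double		sumRequests = 1980;	/* total extra chunks requested */
	int			myRequest = 99;		/* this rel's request */
	double		spareChunks = totalChunks - numRels;	/* 990 */
	int			extra;

	if (spareChunks >= sumRequests)
		extra = myRequest;		/* not oversubscribed: grant in full */
	else
		extra = (int) rint(spareChunks * myRequest / sumRequests);

	/* 990 * 99 / 1980 = 49.5; rint() rounds half to even, giving 50 */
	printf("target allocation = %d chunks\n", 1 + extra);
	return 0;
}

With these inputs the arena is oversubscribed (990 spare chunks against 1980 requested), so the 99-chunk request is scaled down to 50 extra chunks, and the target allocation is 51 including the guaranteed chunk.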
