
📄 pruneheap.c

📁 PostgreSQL 8.3.4 source code, open-source database
💻 C
			continue;
		}

		/*
		 * Likewise, a dead item pointer can't be part of the chain. (We
		 * already eliminated the case of dead root tuple outside this
		 * function.)
		 */
		if (ItemIdIsDead(lp))
			break;

		Assert(ItemIdIsNormal(lp));
		htup = (HeapTupleHeader) PageGetItem(dp, lp);

		/*
		 * Check the tuple XMIN against prior XMAX, if any
		 */
		if (TransactionIdIsValid(priorXmax) &&
			!TransactionIdEquals(HeapTupleHeaderGetXmin(htup), priorXmax))
			break;

		/*
		 * OK, this tuple is indeed a member of the chain.
		 */
		chainitems[nchain++] = offnum;

		/*
		 * Check tuple's visibility status.
		 */
		tupdead = recent_dead = false;

		switch (HeapTupleSatisfiesVacuum(htup, OldestXmin, buffer))
		{
			case HEAPTUPLE_DEAD:
				tupdead = true;
				break;

			case HEAPTUPLE_RECENTLY_DEAD:
				recent_dead = true;

				/*
				 * This tuple may soon become DEAD.  Update the hint field so
				 * that the page is reconsidered for pruning in future.
				 */
				heap_prune_record_prunable(prstate,
										   HeapTupleHeaderGetXmax(htup));
				break;

			case HEAPTUPLE_DELETE_IN_PROGRESS:

				/*
				 * This tuple may soon become DEAD.  Update the hint field so
				 * that the page is reconsidered for pruning in future.
				 */
				heap_prune_record_prunable(prstate,
										   HeapTupleHeaderGetXmax(htup));
				break;

			case HEAPTUPLE_LIVE:
			case HEAPTUPLE_INSERT_IN_PROGRESS:

				/*
				 * If we wanted to optimize for aborts, we might consider
				 * marking the page prunable when we see INSERT_IN_PROGRESS.
				 * But we don't.  See related decisions about when to mark the
				 * page prunable in heapam.c.
				 */
				break;

			default:
				elog(ERROR, "unexpected HeapTupleSatisfiesVacuum result");
				break;
		}

		/*
		 * Remember the last DEAD tuple seen.  We will advance past
		 * RECENTLY_DEAD tuples just in case there's a DEAD one after them;
		 * but we can't advance past anything else.  (XXX is it really worth
		 * continuing to scan beyond RECENTLY_DEAD?  The case where we will
		 * find another DEAD tuple is a fairly unusual corner case.)
		 */
		if (tupdead)
			latestdead = offnum;
		else if (!recent_dead)
			break;

		/*
		 * If the tuple is not HOT-updated, then we are at the end of this
		 * HOT-update chain.
		 */
		if (!HeapTupleHeaderIsHotUpdated(htup))
			break;

		/*
		 * Advance to next chain member.
		 */
		Assert(ItemPointerGetBlockNumber(&htup->t_ctid) ==
			   BufferGetBlockNumber(buffer));
		offnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
		priorXmax = HeapTupleHeaderGetXmax(htup);
	}

	/*
	 * If we found a DEAD tuple in the chain, adjust the HOT chain so that all
	 * the DEAD tuples at the start of the chain are removed and the root line
	 * pointer is appropriately redirected.
	 */
	if (OffsetNumberIsValid(latestdead))
	{
		/*
		 * Mark as unused each intermediate item that we are able to remove
		 * from the chain.
		 *
		 * When the previous item is the last dead tuple seen, we are at the
		 * right candidate for redirection.
		 */
		for (i = 1; (i < nchain) && (chainitems[i - 1] != latestdead); i++)
		{
			heap_prune_record_unused(prstate, chainitems[i]);
			ndeleted++;
		}

		/*
		 * If the root entry had been a normal tuple, we are deleting it, so
		 * count it in the result.  But changing a redirect (even to DEAD
		 * state) doesn't count.
		 */
		if (ItemIdIsNormal(rootlp))
			ndeleted++;

		/*
		 * If the DEAD tuple is at the end of the chain, the entire chain is
		 * dead and the root line pointer can be marked dead.  Otherwise just
		 * redirect the root to the correct chain member.
		 */
		if (i >= nchain)
			heap_prune_record_dead(prstate, rootoffnum);
		else
		{
			heap_prune_record_redirect(prstate, rootoffnum, chainitems[i]);

			/* If the redirection will be a move, need more processing */
			if (redirect_move)
				redirect_target = chainitems[i];
		}
	}
	else if (nchain < 2 && ItemIdIsRedirected(rootlp))
	{
		/*
		 * We found a redirect item that doesn't point to a valid follow-on
		 * item.  This can happen if the loop in heap_page_prune caused us to
		 * visit the dead successor of a redirect item before visiting the
		 * redirect item.  We can clean up by setting the redirect item to
		 * DEAD state.
		 */
		heap_prune_record_dead(prstate, rootoffnum);
	}
	else if (redirect_move && ItemIdIsRedirected(rootlp))
	{
		/*
		 * If we desire to eliminate LP_REDIRECT items by moving tuples,
		 * make a redirection entry for each redirected root item; this
		 * will cause heap_page_prune_execute to actually do the move.
		 * (We get here only when there are no DEAD tuples in the chain;
		 * otherwise the redirection entry was made above.)
		 */
		heap_prune_record_redirect(prstate, rootoffnum, chainitems[1]);
		redirect_target = chainitems[1];
	}

	/*
	 * If we are going to implement a redirect by moving tuples, we have
	 * to issue a cache invalidation against the redirection target tuple,
	 * because its CTID will be effectively changed by the move.  Note that
	 * CacheInvalidateHeapTuple only queues the request, it doesn't send it;
	 * if we fail before reaching EndNonTransactionalInvalidation, nothing
	 * happens and no harm is done.
	 */
	if (OffsetNumberIsValid(redirect_target))
	{
		ItemId		firstlp = PageGetItemId(dp, redirect_target);
		HeapTupleData firsttup;

		Assert(ItemIdIsNormal(firstlp));
		/* Set up firsttup to reference the tuple at its existing CTID */
		firsttup.t_data = (HeapTupleHeader) PageGetItem(dp, firstlp);
		firsttup.t_len = ItemIdGetLength(firstlp);
		ItemPointerSet(&firsttup.t_self,
					   BufferGetBlockNumber(buffer),
					   redirect_target);
		firsttup.t_tableOid = RelationGetRelid(relation);
		CacheInvalidateHeapTuple(relation, &firsttup);
	}

	return ndeleted;
}


/* Record lowest soon-prunable XID */
static void
heap_prune_record_prunable(PruneState *prstate, TransactionId xid)
{
	/*
	 * This should exactly match the PageSetPrunable macro.  We can't store
	 * directly into the page header yet, so we update working state.
	 */
	Assert(TransactionIdIsNormal(xid));
	if (!TransactionIdIsValid(prstate->new_prune_xid) ||
		TransactionIdPrecedes(xid, prstate->new_prune_xid))
		prstate->new_prune_xid = xid;
}

/* Record item pointer to be redirected */
static void
heap_prune_record_redirect(PruneState *prstate,
						   OffsetNumber offnum, OffsetNumber rdoffnum)
{
	Assert(prstate->nredirected < MaxHeapTuplesPerPage);
	prstate->redirected[prstate->nredirected * 2] = offnum;
	prstate->redirected[prstate->nredirected * 2 + 1] = rdoffnum;
	prstate->nredirected++;
	Assert(!prstate->marked[offnum]);
	prstate->marked[offnum] = true;
	Assert(!prstate->marked[rdoffnum]);
	prstate->marked[rdoffnum] = true;
}

/* Record item pointer to be marked dead */
static void
heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum)
{
	Assert(prstate->ndead < MaxHeapTuplesPerPage);
	prstate->nowdead[prstate->ndead] = offnum;
	prstate->ndead++;
	Assert(!prstate->marked[offnum]);
	prstate->marked[offnum] = true;
}

/* Record item pointer to be marked unused */
static void
heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum)
{
	Assert(prstate->nunused < MaxHeapTuplesPerPage);
	prstate->nowunused[prstate->nunused] = offnum;
	prstate->nunused++;
	Assert(!prstate->marked[offnum]);
	prstate->marked[offnum] = true;
}


/*
 * Perform the actual page changes needed by heap_page_prune.
 * It is expected that the caller has suitable pin and lock on the
 * buffer, and is inside a critical section.
 *
 * This is split out because it is also used by heap_xlog_clean()
 * to replay the WAL record when needed after a crash.  Note that the
 * arguments are identical to those of log_heap_clean().
 */
void
heap_page_prune_execute(Relation reln, Buffer buffer,
						OffsetNumber *redirected, int nredirected,
						OffsetNumber *nowdead, int ndead,
						OffsetNumber *nowunused, int nunused,
						bool redirect_move)
{
	Page		page = (Page) BufferGetPage(buffer);
	OffsetNumber *offnum;
	int			i;

	/* Update all redirected or moved line pointers */
	offnum = redirected;
	for (i = 0; i < nredirected; i++)
	{
		OffsetNumber fromoff = *offnum++;
		OffsetNumber tooff = *offnum++;
		ItemId		fromlp = PageGetItemId(page, fromoff);

		if (redirect_move)
		{
			/* Physically move the "to" item to the "from" slot */
			ItemId		tolp = PageGetItemId(page, tooff);
			HeapTupleHeader htup;

			*fromlp = *tolp;
			ItemIdSetUnused(tolp);

			/*
			 * Change heap-only status of the tuple because after the line
			 * pointer manipulation, it's no longer a heap-only tuple, but is
			 * directly pointed to by index entries.
			 */
			Assert(ItemIdIsNormal(fromlp));
			htup = (HeapTupleHeader) PageGetItem(page, fromlp);
			Assert(HeapTupleHeaderIsHeapOnly(htup));
			HeapTupleHeaderClearHeapOnly(htup);
		}
		else
		{
			/* Just insert a REDIRECT link at fromoff */
			ItemIdSetRedirect(fromlp, tooff);
		}
	}

	/* Update all now-dead line pointers */
	offnum = nowdead;
	for (i = 0; i < ndead; i++)
	{
		OffsetNumber off = *offnum++;
		ItemId		lp = PageGetItemId(page, off);

		ItemIdSetDead(lp);
	}

	/* Update all now-unused line pointers */
	offnum = nowunused;
	for (i = 0; i < nunused; i++)
	{
		OffsetNumber off = *offnum++;
		ItemId		lp = PageGetItemId(page, off);

		ItemIdSetUnused(lp);
	}

	/*
	 * Finally, repair any fragmentation, and update the page's hint bit about
	 * whether it has free pointers.
	 */
	PageRepairFragmentation(page);
}


/*
 * For all items in this page, find their respective root line pointers.
 * If item k is part of a HOT-chain with root at item j, then we set
 * root_offsets[k - 1] = j.
 *
 * The passed-in root_offsets array must have MaxHeapTuplesPerPage entries.
 * We zero out all unused entries.
 *
 * The function must be called with at least share lock on the buffer, to
 * prevent concurrent prune operations.
 *
 * Note: The information collected here is valid only as long as the caller
 * holds a pin on the buffer. Once pin is released, a tuple might be pruned
 * and reused by a completely unrelated tuple.
 */
void
heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
{
	OffsetNumber offnum,
				maxoff;

	MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));

	maxoff = PageGetMaxOffsetNumber(page);
	for (offnum = FirstOffsetNumber; offnum <= maxoff; offnum++)
	{
		ItemId		lp = PageGetItemId(page, offnum);
		HeapTupleHeader htup;
		OffsetNumber nextoffnum;
		TransactionId priorXmax;

		/* skip unused and dead items */
		if (!ItemIdIsUsed(lp) || ItemIdIsDead(lp))
			continue;

		if (ItemIdIsNormal(lp))
		{
			htup = (HeapTupleHeader) PageGetItem(page, lp);

			/*
			 * Check if this tuple is part of a HOT-chain rooted at some other
			 * tuple. If so, skip it for now; we'll process it when we find
			 * its root.
			 */
			if (HeapTupleHeaderIsHeapOnly(htup))
				continue;

			/*
			 * This is either a plain tuple or the root of a HOT-chain.
			 * Remember it in the mapping.
			 */
			root_offsets[offnum - 1] = offnum;

			/* If it's not the start of a HOT-chain, we're done with it */
			if (!HeapTupleHeaderIsHotUpdated(htup))
				continue;

			/* Set up to scan the HOT-chain */
			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
			priorXmax = HeapTupleHeaderGetXmax(htup);
		}
		else
		{
			/* Must be a redirect item. We do not set its root_offsets entry */
			Assert(ItemIdIsRedirected(lp));

			/* Set up to scan the HOT-chain */
			nextoffnum = ItemIdGetRedirect(lp);
			priorXmax = InvalidTransactionId;
		}

		/*
		 * Now follow the HOT-chain and collect other tuples in the chain.
		 *
		 * Note: Even though this is a nested loop, the complexity of the
		 * function is O(N) because a tuple in the page should be visited not
		 * more than twice, once in the outer loop and once in HOT-chain
		 * chases.
		 */
		for (;;)
		{
			lp = PageGetItemId(page, nextoffnum);

			/* Check for broken chains */
			if (!ItemIdIsNormal(lp))
				break;

			htup = (HeapTupleHeader) PageGetItem(page, lp);

			if (TransactionIdIsValid(priorXmax) &&
				!TransactionIdEquals(priorXmax, HeapTupleHeaderGetXmin(htup)))
				break;

			/* Remember the root line pointer for this item */
			root_offsets[nextoffnum - 1] = offnum;

			/* Advance to next chain member, if any */
			if (!HeapTupleHeaderIsHotUpdated(htup))
				break;

			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
			priorXmax = HeapTupleHeaderGetXmax(htup);
		}
	}
}
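
The final function above, heap_get_root_tuples, fills a per-page mapping from each item's offset to the offset of its HOT-chain root. Below is a minimal caller-side sketch of how that mapping might be consumed. It is not part of pruneheap.c: the helper name root_of_item is hypothetical, and it assumes the PostgreSQL 8.3 backend headers and a buffer the caller has already pinned.

#include "postgres.h"
#include "access/heapam.h"
#include "storage/bufmgr.h"

/*
 * Hypothetical helper, for illustration only: return the root line pointer
 * of the item at 'offnum' on the page in 'buf'.  'buf' must already be
 * pinned; heap_get_root_tuples requires at least a share lock.
 */
static OffsetNumber
root_of_item(Buffer buf, OffsetNumber offnum)
{
	OffsetNumber root_offsets[MaxHeapTuplesPerPage];

	LockBuffer(buf, BUFFER_LOCK_SHARE);
	heap_get_root_tuples(BufferGetPage(buf), root_offsets);
	LockBuffer(buf, BUFFER_LOCK_UNLOCK);

	/*
	 * The mapping is indexed by (offset - 1).  Entries left at zero
	 * (InvalidOffsetNumber) had no root recorded, e.g. redirect items.
	 */
	return root_offsets[offnum - 1];
}

Per the comment on heap_get_root_tuples, the result is only trustworthy while the pin on the buffer is held; once the pin is released, a pruned slot may be reused by an unrelated tuple.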
