
transaction.c

Linux kernel source code
Language: C
}

/*
 * This buffer is no longer needed.  If it is on an older transaction's
 * checkpoint list we need to record it on this transaction's forget list
 * to pin this buffer (and hence its checkpointing transaction) down until
 * this transaction commits.  If the buffer isn't on a checkpoint list, we
 * release it.
 * Returns non-zero if JBD no longer has an interest in the buffer.
 *
 * Called under j_list_lock.
 *
 * Called under jbd_lock_bh_state(bh).
 */
static int __dispose_buffer(struct journal_head *jh, transaction_t *transaction)
{
        int may_free = 1;
        struct buffer_head *bh = jh2bh(jh);

        __journal_unfile_buffer(jh);

        if (jh->b_cp_transaction) {
                JBUFFER_TRACE(jh, "on running+cp transaction");
                __journal_file_buffer(jh, transaction, BJ_Forget);
                clear_buffer_jbddirty(bh);
                may_free = 0;
        } else {
                JBUFFER_TRACE(jh, "on running transaction");
                journal_remove_journal_head(bh);
                __brelse(bh);
        }
        return may_free;
}

/*
 * journal_invalidatepage
 *
 * This code is tricky.  It has a number of cases to deal with.
 *
 * There are two invariants which this code relies on:
 *
 * i_size must be updated on disk before we start calling invalidatepage on the
 * data.
 *
 *  This is done in ext3 by defining an ext3_setattr method which
 *  updates i_size before truncate gets going.  By maintaining this
 *  invariant, we can be sure that it is safe to throw away any buffers
 *  attached to the current transaction: once the transaction commits,
 *  we know that the data will not be needed.
 *
 *  Note however that we can *not* throw away data belonging to the
 *  previous, committing transaction!
 *
 * Any disk blocks which *are* part of the previous, committing
 * transaction (and which therefore cannot be discarded immediately) are
 * not going to be reused in the new running transaction
 *
 *  The bitmap committed_data images guarantee this: any block which is
 *  allocated in one transaction and removed in the next will be marked
 *  as in-use in the committed_data bitmap, so cannot be reused until
 *  the next transaction to delete the block commits.  This means that
 *  leaving committing buffers dirty is quite safe: the disk blocks
 *  cannot be reallocated to a different file and so buffer aliasing is
 *  not possible.
 *
 *
 * The above applies mainly to ordered data mode.  In writeback mode we
 * don't make guarantees about the order in which data hits disk --- in
 * particular we don't guarantee that new dirty data is flushed before
 * transaction commit --- so it is always safe just to discard data
 * immediately in that mode.  --sct
 */

/*
 * The journal_unmap_buffer helper function returns zero if the buffer
 * concerned remains pinned as an anonymous buffer belonging to an older
 * transaction.
 *
 * We're outside-transaction here.  Either or both of j_running_transaction
 * and j_committing_transaction may be NULL.
 */
static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh)
{
        transaction_t *transaction;
        struct journal_head *jh;
        int may_free = 1;
        int ret;

        BUFFER_TRACE(bh, "entry");

        /*
         * It is safe to proceed here without the j_list_lock because the
         * buffers cannot be stolen by try_to_free_buffers as long as we are
         * holding the page lock. --sct
         */

        if (!buffer_jbd(bh))
                goto zap_buffer_unlocked;

        spin_lock(&journal->j_state_lock);
        jbd_lock_bh_state(bh);
        spin_lock(&journal->j_list_lock);

        jh = journal_grab_journal_head(bh);
        if (!jh)
                goto zap_buffer_no_jh;

        transaction = jh->b_transaction;
        if (transaction == NULL) {
                /* First case: not on any transaction.  If it
                 * has no checkpoint link, then we can zap it:
                 * it's a writeback-mode buffer so we don't care
                 * if it hits disk safely. */
                if (!jh->b_cp_transaction) {
                        JBUFFER_TRACE(jh, "not on any transaction: zap");
                        goto zap_buffer;
                }

                if (!buffer_dirty(bh)) {
                        /* bdflush has written it.  We can drop it now */
                        goto zap_buffer;
                }

                /* OK, it must be in the journal but still not
                 * written fully to disk: it's metadata or
                 * journaled data... */

                if (journal->j_running_transaction) {
                        /* ... and once the current transaction has
                         * committed, the buffer won't be needed any
                         * longer. */
                        JBUFFER_TRACE(jh, "checkpointed: add to BJ_Forget");
                        ret = __dispose_buffer(jh,
                                        journal->j_running_transaction);
                        journal_put_journal_head(jh);
                        spin_unlock(&journal->j_list_lock);
                        jbd_unlock_bh_state(bh);
                        spin_unlock(&journal->j_state_lock);
                        return ret;
                } else {
                        /* There is no currently-running transaction. So the
                         * orphan record which we wrote for this file must have
                         * passed into commit.  We must attach this buffer to
                         * the committing transaction, if it exists. */
                        if (journal->j_committing_transaction) {
                                JBUFFER_TRACE(jh, "give to committing trans");
                                ret = __dispose_buffer(jh,
                                        journal->j_committing_transaction);
                                journal_put_journal_head(jh);
                                spin_unlock(&journal->j_list_lock);
                                jbd_unlock_bh_state(bh);
                                spin_unlock(&journal->j_state_lock);
                                return ret;
                        } else {
                                /* The orphan record's transaction has
                                 * committed.  We can cleanse this buffer */
                                clear_buffer_jbddirty(bh);
                                goto zap_buffer;
                        }
                }
        } else if (transaction == journal->j_committing_transaction) {
                JBUFFER_TRACE(jh, "on committing transaction");
                if (jh->b_jlist == BJ_Locked) {
                        /*
                         * The buffer is on the committing transaction's locked
                         * list.  We have the buffer locked, so I/O has
                         * completed.  So we can nail the buffer now.
                         */
                        may_free = __dispose_buffer(jh, transaction);
                        goto zap_buffer;
                }
                /*
                 * If it is committing, we simply cannot touch it.  We
                 * can remove it's next_transaction pointer from the
                 * running transaction if that is set, but nothing
                 * else. */
                set_buffer_freed(bh);
                if (jh->b_next_transaction) {
                        J_ASSERT(jh->b_next_transaction ==
                                        journal->j_running_transaction);
                        jh->b_next_transaction = NULL;
                }
                journal_put_journal_head(jh);
                spin_unlock(&journal->j_list_lock);
                jbd_unlock_bh_state(bh);
                spin_unlock(&journal->j_state_lock);
                return 0;
        } else {
                /* Good, the buffer belongs to the running transaction.
                 * We are writing our own transaction's data, not any
                 * previous one's, so it is safe to throw it away
                 * (remember that we expect the filesystem to have set
                 * i_size already for this truncate so recovery will not
                 * expose the disk blocks we are discarding here.) */
                J_ASSERT_JH(jh, transaction == journal->j_running_transaction);
                JBUFFER_TRACE(jh, "on running transaction");
                may_free = __dispose_buffer(jh, transaction);
        }

zap_buffer:
        journal_put_journal_head(jh);
zap_buffer_no_jh:
        spin_unlock(&journal->j_list_lock);
        jbd_unlock_bh_state(bh);
        spin_unlock(&journal->j_state_lock);
zap_buffer_unlocked:
        clear_buffer_dirty(bh);
        J_ASSERT_BH(bh, !buffer_jbddirty(bh));
        clear_buffer_mapped(bh);
        clear_buffer_req(bh);
        clear_buffer_new(bh);
        bh->b_bdev = NULL;
        return may_free;
}

/**
 * void journal_invalidatepage()
 * @journal: journal to use for flush...
 * @page:    page to flush
 * @offset:  length of page to invalidate.
 *
 * Reap page buffers containing data after offset in page.
 *
 */
void journal_invalidatepage(journal_t *journal,
                      struct page *page,
                      unsigned long offset)
{
        struct buffer_head *head, *bh, *next;
        unsigned int curr_off = 0;
        int may_free = 1;

        if (!PageLocked(page))
                BUG();
        if (!page_has_buffers(page))
                return;

        /* We will potentially be playing with lists other than just the
         * data lists (especially for journaled data mode), so be
         * cautious in our locking. */

        head = bh = page_buffers(page);
        do {
                unsigned int next_off = curr_off + bh->b_size;
                next = bh->b_this_page;

                if (offset <= curr_off) {
                        /* This block is wholly outside the truncation point */
                        lock_buffer(bh);
                        may_free &= journal_unmap_buffer(journal, bh);
                        unlock_buffer(bh);
                }
                curr_off = next_off;
                bh = next;

        } while (bh != head);

        if (!offset) {
                if (may_free && try_to_free_buffers(page))
                        J_ASSERT(!page_has_buffers(page));
        }
}

/*
 * File a buffer on the given transaction list.
 */
void __journal_file_buffer(struct journal_head *jh,
                        transaction_t *transaction, int jlist)
{
        struct journal_head **list = NULL;
        int was_dirty = 0;
        struct buffer_head *bh = jh2bh(jh);

        J_ASSERT_JH(jh, jbd_is_locked_bh_state(bh));
        assert_spin_locked(&transaction->t_journal->j_list_lock);

        J_ASSERT_JH(jh, jh->b_jlist < BJ_Types);
        J_ASSERT_JH(jh, jh->b_transaction == transaction ||
                                jh->b_transaction == NULL);

        if (jh->b_transaction && jh->b_jlist == jlist)
                return;

        /* The following list of buffer states needs to be consistent
         * with __jbd_unexpected_dirty_buffer()'s handling of dirty
         * state. */

        if (jlist == BJ_Metadata || jlist == BJ_Reserved ||
            jlist == BJ_Shadow || jlist == BJ_Forget) {
                if (test_clear_buffer_dirty(bh) ||
                    test_clear_buffer_jbddirty(bh))
                        was_dirty = 1;
        }

        if (jh->b_transaction)
                __journal_temp_unlink_buffer(jh);
        jh->b_transaction = transaction;

        switch (jlist) {
        case BJ_None:
                J_ASSERT_JH(jh, !jh->b_committed_data);
                J_ASSERT_JH(jh, !jh->b_frozen_data);
                return;
        case BJ_SyncData:
                list = &transaction->t_sync_datalist;
                break;
        case BJ_Metadata:
                transaction->t_nr_buffers++;
                list = &transaction->t_buffers;
                break;
        case BJ_Forget:
                list = &transaction->t_forget;
                break;
        case BJ_IO:
                list = &transaction->t_iobuf_list;
                break;
        case BJ_Shadow:
                list = &transaction->t_shadow_list;
                break;
        case BJ_LogCtl:
                list = &transaction->t_log_list;
                break;
        case BJ_Reserved:
                list = &transaction->t_reserved_list;
                break;
        case BJ_Locked:
                list = &transaction->t_locked_list;
                break;
        }

        __blist_add_buffer(list, jh);
        jh->b_jlist = jlist;

        if (was_dirty)
                set_buffer_jbddirty(bh);
}

void journal_file_buffer(struct journal_head *jh,
                                transaction_t *transaction, int jlist)
{
        jbd_lock_bh_state(jh2bh(jh));
        spin_lock(&transaction->t_journal->j_list_lock);
        __journal_file_buffer(jh, transaction, jlist);
        spin_unlock(&transaction->t_journal->j_list_lock);
        jbd_unlock_bh_state(jh2bh(jh));
}

/*
 * Remove a buffer from its current buffer list in preparation for
 * dropping it from its current transaction entirely.  If the buffer has
 * already started to be used by a subsequent transaction, refile the
 * buffer on that transaction's metadata list.
 *
 * Called under journal->j_list_lock
 *
 * Called under jbd_lock_bh_state(jh2bh(jh))
 */
void __journal_refile_buffer(struct journal_head *jh)
{
        int was_dirty;
        struct buffer_head *bh = jh2bh(jh);

        J_ASSERT_JH(jh, jbd_is_locked_bh_state(bh));
        if (jh->b_transaction)
                assert_spin_locked(&jh->b_transaction->t_journal->j_list_lock);

        /* If the buffer is now unused, just drop it. */
        if (jh->b_next_transaction == NULL) {
                __journal_unfile_buffer(jh);
                return;
        }

        /*
         * It has been modified by a later transaction: add it to the new
         * transaction's metadata list.
         */

        was_dirty = test_clear_buffer_jbddirty(bh);
        __journal_temp_unlink_buffer(jh);
        jh->b_transaction = jh->b_next_transaction;
        jh->b_next_transaction = NULL;
        __journal_file_buffer(jh, jh->b_transaction,
                                was_dirty ? BJ_Metadata : BJ_Reserved);
        J_ASSERT_JH(jh, jh->b_transaction->t_state == T_RUNNING);

        if (was_dirty)
                set_buffer_jbddirty(bh);
}

/*
 * For the unlocked version of this call, also make sure that any
 * hanging journal_head is cleaned up if necessary.
 *
 * __journal_refile_buffer is usually called as part of a single locked
 * operation on a buffer_head, in which the caller is probably going to
 * be hooking the journal_head onto other lists.  In that case it is up
 * to the caller to remove the journal_head if necessary.  For the
 * unlocked journal_refile_buffer call, the caller isn't going to be
 * doing anything else to the buffer so we need to do the cleanup
 * ourselves to avoid a jh leak.
 *
 * *** The journal_head may be freed by this call! ***
 */
void journal_refile_buffer(journal_t *journal, struct journal_head *jh)
{
        struct buffer_head *bh = jh2bh(jh);

        jbd_lock_bh_state(bh);
        spin_lock(&journal->j_list_lock);

        __journal_refile_buffer(jh);
        jbd_unlock_bh_state(bh);
        journal_remove_journal_head(bh);

        spin_unlock(&journal->j_list_lock);
        __brelse(bh);
}
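For context, the sketch below shows how a client filesystem of this era might hook journal_invalidatepage() into its ->invalidatepage address_space operation on the truncate path, loosely modeled on ext3. It is a minimal sketch, not part of this file: EXT3_JOURNAL(), ClearPageChecked() and the surrounding ext3 wiring are assumed to come from the matching ext3/VFS headers of the same kernel generation.

/*
 * Sketch: a filesystem-side ->invalidatepage hook that forwards to
 * journal_invalidatepage().  EXT3_JOURNAL() is assumed to resolve the
 * inode's journal from the ext3 superblock; it is not defined here.
 */
static void ext3_invalidatepage(struct page *page, unsigned long offset)
{
        /* The journal that covers this inode's buffers. */
        journal_t *journal = EXT3_JOURNAL(page->mapping->host);

        /* On a whole-page truncate nothing on the page still needs to
         * be journalled, so any pending "checked" mark can be dropped. */
        if (offset == 0)
                ClearPageChecked(page);

        /* Let JBD unmap, forget or pin the buffers beyond 'offset'. */
        journal_invalidatepage(journal, page, offset);
}

The caller is expected to hold the page lock (journal_invalidatepage() calls BUG() otherwise), which is also what lets journal_unmap_buffer() begin without taking j_list_lock, as its opening comment explains.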
