
📄 xfs_mru_cache.c

📁 Linux kernel source code
💻 C
📖 Page 1 of 2
/*
 * Copyright (c) 2006-2007 Silicon Graphics, Inc.
 * All Rights Reserved.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it would be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write the Free Software Foundation,
 * Inc.,  51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
 */
#include "xfs.h"
#include "xfs_mru_cache.h"

/*
 * The MRU Cache data structure consists of a data store, an array of lists and
 * a lock to protect its internal state.  At initialisation time, the client
 * supplies an element lifetime in milliseconds and a group count, as well as a
 * function pointer to call when deleting elements.  A data structure for
 * queueing up work in the form of timed callbacks is also included.
 *
 * The group count controls how many lists are created, and thereby how finely
 * the elements are grouped in time.  When reaping occurs, all the elements in
 * all the lists whose time has expired are deleted.
 *
 * To give an example of how this works in practice, consider a client that
 * initialises an MRU Cache with a lifetime of ten seconds and a group count of
 * five.  Five internal lists will be created, each representing a two second
 * period in time.  When the first element is added, time zero for the data
 * structure is initialised to the current time.
 *
 * All the elements added in the first two seconds are appended to the first
 * list.  Elements added in the third second go into the second list, and so on.
 * If an element is accessed at any point, it is removed from its list and
 * inserted at the head of the current most-recently-used list.
 *
 * The reaper function will have nothing to do until at least twelve seconds
 * have elapsed since the first element was added.  The reason for this is that
 * if it were called at t=11s, there could be elements in the first list that
 * have only been inactive for nine seconds, so it still does nothing.  If it is
 * called anywhere between t=12 and t=14 seconds, it will delete all the
 * elements that remain in the first list.  It's therefore possible for elements
 * to remain in the data store even after they've been inactive for up to
 * (t + t/g) seconds, where t is the inactive element lifetime and g is the
 * number of groups.
 *
 * The above example assumes that the reaper function gets called at least once
 * every (t/g) seconds.  If it is called less frequently, unused elements will
 * accumulate in the reap list until the reaper function is eventually called.
 * The current implementation uses work queue callbacks to carefully time the
 * reaper function calls, so this should happen rarely, if at all.
 *
 * From a design perspective, the primary reason for the choice of a list array
 * representing discrete time intervals is that it's only practical to reap
 * expired elements in groups of some appreciable size.  This automatically
 * introduces a granularity to element lifetimes, so there's no point storing an
 * individual timeout with each element that specifies a more precise reap time.
 * The bonus is a saving of sizeof(long) bytes of memory per element stored.
 *
 * The elements could have been stored in just one list, but an array of
 * counters or pointers would need to be maintained to allow them to be divided
 * up into discrete time groups.  More critically, the process of touching or
 * removing an element would involve walking large portions of the entire list,
 * which would have a detrimental effect on performance.  The additional memory
 * requirement for the array of list heads is minimal.
 *
 * When an element is touched or deleted, it needs to be removed from its
 * current list.  Doubly linked lists are used to make the list maintenance
 * portion of these operations O(1).  Since reaper timing can be imprecise,
 * inserts and lookups can occur when there are no free lists available.  When
 * this happens, all the elements on the LRU list need to be migrated to the end
 * of the reap list.  To keep the list maintenance portion of these operations
 * O(1) also, list tails need to be accessible without walking the entire list.
 * This is the reason why doubly linked list heads are used.
 */

/*
 * An MRU Cache is a dynamic data structure that stores its elements in a way
 * that allows efficient lookups, but also groups them into discrete time
 * intervals based on insertion time.  This allows elements to be efficiently
 * and automatically reaped after a fixed period of inactivity.
 *
 * When a client data pointer is stored in the MRU Cache it needs to be added to
 * both the data store and to one of the lists.  It must also be possible to
 * access each of these entries via the other, i.e. to:
 *
 *    a) Walk a list, removing the corresponding data store entry for each item.
 *    b) Look up a data store entry, then access its list entry directly.
 *
 * To achieve both of these goals, each entry must contain both a list entry and
 * a key, in addition to the user's data pointer.  Note that it's not a good
 * idea to have the client embed one of these structures at the top of their own
 * data structure, because inserting the same item more than once would most
 * likely result in a loop in one of the lists.  That's a sure-fire recipe for
 * an infinite loop in the code.
 */
typedef struct xfs_mru_cache_elem
{
	struct list_head list_node;
	unsigned long	key;
	void		*value;
} xfs_mru_cache_elem_t;

static kmem_zone_t		*xfs_mru_elem_zone;
static struct workqueue_struct	*xfs_mru_reap_wq;

/*
 * When inserting, destroying or reaping, it's first necessary to update the
 * lists relative to a particular time.  In the case of destroying, that time
 * will be well in the future to ensure that all items are moved to the reap
 * list.  In all other cases though, the time will be the current time.
 *
 * This function enters a loop, moving the contents of the LRU list to the reap
 * list again and again until either a) the lists are all empty, or b) time zero
 * has been advanced sufficiently to be within the immediate element lifetime.
 *
 * Case a) above is detected by counting how many groups are migrated and
 * stopping when they've all been moved.  Case b) is detected by monitoring the
 * time_zero field, which is updated as each group is migrated.
 *
 * The return value is the earliest time that more migration could be needed, or
 * zero if there's no need to schedule more work because the lists are empty.
 */
STATIC unsigned long
_xfs_mru_cache_migrate(
	xfs_mru_cache_t	*mru,
	unsigned long	now)
{
	unsigned int	grp;
	unsigned int	migrated = 0;
	struct list_head *lru_list;

	/* Nothing to do if the data store is empty. */
	if (!mru->time_zero)
		return 0;

	/* While time zero is older than the time spanned by all the lists. */
	while (mru->time_zero <= now - mru->grp_count * mru->grp_time) {

		/*
		 * If the LRU list isn't empty, migrate its elements to the tail
		 * of the reap list.
		 */
		lru_list = mru->lists + mru->lru_grp;
		if (!list_empty(lru_list))
			list_splice_init(lru_list, mru->reap_list.prev);

		/*
		 * Advance the LRU group number, freeing the old LRU list to
		 * become the new MRU list; advance time zero accordingly.
		 */
		mru->lru_grp = (mru->lru_grp + 1) % mru->grp_count;
		mru->time_zero += mru->grp_time;

		/*
		 * If reaping is so far behind that all the elements on all the
		 * lists have been migrated to the reap list, it's now empty.
		 */
		if (++migrated == mru->grp_count) {
			mru->lru_grp = 0;
			mru->time_zero = 0;
			return 0;
		}
	}

	/* Find the first non-empty list from the LRU end. */
	for (grp = 0; grp < mru->grp_count; grp++) {

		/* Check the grp'th list from the LRU end. */
		lru_list = mru->lists + ((mru->lru_grp + grp) % mru->grp_count);
		if (!list_empty(lru_list))
			return mru->time_zero +
			       (mru->grp_count + grp) * mru->grp_time;
	}

	/* All the lists must be empty. */
	mru->lru_grp = 0;
	mru->time_zero = 0;
	return 0;
}

/*
 * When inserting or doing a lookup, an element needs to be inserted into the
 * MRU list.  The lists must be migrated first to ensure that they're
 * up-to-date, otherwise the new element could be given a shorter lifetime in
 * the cache than it should.
 */
STATIC void
_xfs_mru_cache_list_insert(
	xfs_mru_cache_t		*mru,
	xfs_mru_cache_elem_t	*elem)
{
	unsigned int	grp = 0;
	unsigned long	now = jiffies;

	/*
	 * If the data store is empty, initialise time zero, leave grp set to
	 * zero and start the work queue timer if necessary.  Otherwise, set grp
	 * to the number of group times that have elapsed since time zero.
	 */
	if (!_xfs_mru_cache_migrate(mru, now)) {
		mru->time_zero = now;
		if (!mru->queued) {
			mru->queued = 1;
			queue_delayed_work(xfs_mru_reap_wq, &mru->work,
			                   mru->grp_count * mru->grp_time);
		}
	} else {
		grp = (now - mru->time_zero) / mru->grp_time;
		grp = (mru->lru_grp + grp) % mru->grp_count;
	}

	/* Insert the element at the tail of the corresponding list. */
	list_add_tail(&elem->list_node, mru->lists + grp);
}

/*
 * When destroying or reaping, all the elements that were migrated to the reap
 * list need to be deleted.
 For each element this involves removing it from the
 * data store, removing it from the reap list, calling the client's free
 * function and deleting the element from the element zone.
 */
STATIC void
_xfs_mru_cache_clear_reap_list(
	xfs_mru_cache_t		*mru)
{
	xfs_mru_cache_elem_t	*elem, *next;
	struct list_head	tmp;

	INIT_LIST_HEAD(&tmp);
	list_for_each_entry_safe(elem, next, &mru->reap_list, list_node) {

		/* Remove the element from the data store. */
		radix_tree_delete(&mru->store, elem->key);

		/*
		 * remove to temp list so it can be freed without
		 * needing to hold the lock
		 */
		list_move(&elem->list_node, &tmp);
	}
	mutex_spinunlock(&mru->lock, 0);

	list_for_each_entry_safe(elem, next, &tmp, list_node) {

		/* Remove the element from the reap list. */
		list_del_init(&elem->list_node);

		/* Call the client's free function with the key and value pointer. */
		mru->free_func(elem->key, elem->value);

		/* Free the element structure. */
		kmem_zone_free(xfs_mru_elem_zone, elem);
	}
	mutex_spinlock(&mru->lock);
}

/*
 * We fire the reap timer every group expiry interval so
 * we always have a reaper ready to run. This makes shutdown
 * and flushing of the reaper easy to do. Hence we need to
 * keep when the next reap must occur so we can determine
 * at each interval whether there is anything we need to do.
 */
STATIC void
_xfs_mru_cache_reap(
	struct work_struct	*work)
{
	xfs_mru_cache_t		*mru = container_of(work, xfs_mru_cache_t, work.work);
	unsigned long		now, next;

	ASSERT(mru && mru->lists);
	if (!mru || !mru->lists)
		return;

	mutex_spinlock(&mru->lock);
	next = _xfs_mru_cache_migrate(mru, jiffies);
	_xfs_mru_cache_clear_reap_list(mru);

	mru->queued = next;
	if (mru->queued > 0) {
		now = jiffies;
		if (next <= now)
			next = 0;
		else
			next -= now;
		queue_delayed_work(xfs_mru_reap_wq, &mru->work, next);
	}

	mutex_spinunlock(&mru->lock, 0);
