
📄 sched.c

📁 Linux kernel source code, provided as a compressed archive: the source code accompanying the book 《Linux内核》 (The Linux Kernel)
💻 C
📖 Page 1 of 2
/*
 * linux/net/sunrpc/sched.c
 *
 * Scheduling for synchronous and asynchronous RPC requests.
 *
 * Copyright (C) 1996 Olaf Kirch, <okir@monad.swb.de>
 *
 * TCP NFS related read + write fixes
 * (C) 1999 Dave Airlie, University of Limerick, Ireland <airlied@linux.ie>
 */

#include <linux/module.h>

#define __KERNEL_SYSCALLS__
#include <linux/sched.h>
#include <linux/interrupt.h>
#include <linux/malloc.h>
#include <linux/unistd.h>
#include <linux/smp.h>
#include <linux/smp_lock.h>
#include <linux/spinlock.h>

#include <linux/sunrpc/clnt.h>

#ifdef RPC_DEBUG
#define RPCDBG_FACILITY		RPCDBG_SCHED
static int			rpc_task_id;
#endif

/*
 * We give RPC the same get_free_pages priority as NFS
 */
#define GFP_RPC			GFP_NFS

static void			__rpc_default_timer(struct rpc_task *task);
static void			rpciod_killall(void);

/*
 * When an asynchronous RPC task is activated within a bottom half
 * handler, or while executing another RPC task, it is put on
 * schedq, and rpciod is woken up.
 */
static struct rpc_wait_queue	schedq = RPC_INIT_WAITQ("schedq");

/*
 * RPC tasks that create another task (e.g. for contacting the portmapper)
 * will wait on this queue for their child's completion
 */
static struct rpc_wait_queue	childq = RPC_INIT_WAITQ("childq");

/*
 * RPC tasks sit here while waiting for conditions to improve.
 */
static struct rpc_wait_queue	delay_queue = RPC_INIT_WAITQ("delayq");

/*
 * All RPC tasks are linked into this list
 */
static struct rpc_task *	all_tasks;

/*
 * rpciod-related stuff
 */
static DECLARE_WAIT_QUEUE_HEAD(rpciod_idle);
static DECLARE_WAIT_QUEUE_HEAD(rpciod_killer);
static DECLARE_MUTEX(rpciod_sema);
static unsigned int		rpciod_users;
static pid_t			rpciod_pid;
static int			rpc_inhibit;

/*
 * Spinlock for wait queues. Access to the latter also has to be
 * interrupt-safe in order to allow timers to wake up sleeping tasks.
 */
spinlock_t rpc_queue_lock = SPIN_LOCK_UNLOCKED;

/*
 * Spinlock for other critical sections of code.
 */
spinlock_t rpc_sched_lock = SPIN_LOCK_UNLOCKED;

/*
 * This is the last-ditch buffer for NFS swap requests
 */
static u32			swap_buffer[PAGE_SIZE >> 2];
static long			swap_buffer_used;

/*
 * Make allocation of the swap_buffer SMP-safe
 */
static __inline__ int rpc_lock_swapbuf(void)
{
	return !test_and_set_bit(1, &swap_buffer_used);
}
static __inline__ void rpc_unlock_swapbuf(void)
{
	clear_bit(1, &swap_buffer_used);
}

/*
 * Disable the timer for a given RPC task. Should be called with
 * rpc_queue_lock and bh_disabled in order to avoid races within
 * rpc_run_timer().
 */
static inline void
__rpc_disable_timer(struct rpc_task *task)
{
	dprintk("RPC: %4d disabling timer\n", task->tk_pid);
	task->tk_timeout_fn = NULL;
	task->tk_timeout = 0;
}

/*
 * Run a timeout function.
 * We use the callback in order to allow __rpc_wake_up_task()
 * and friends to disable the timer synchronously on SMP systems
 * without calling del_timer_sync(). The latter could cause a
 * deadlock if called while we're holding spinlocks...
 */
static void
rpc_run_timer(struct rpc_task *task)
{
	void (*callback)(struct rpc_task *);

	spin_lock_bh(&rpc_queue_lock);
	callback = task->tk_timeout_fn;
	task->tk_timeout_fn = NULL;
	spin_unlock_bh(&rpc_queue_lock);
	if (callback) {
		dprintk("RPC: %4d running timer\n", task->tk_pid);
		callback(task);
	}
}
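/*
 * [Illustrative sketch added by the editor; not part of the original
 * sched.c.] The pattern in rpc_run_timer() above (snapshot the
 * callback under the lock, clear it, then invoke it with no locks
 * held) is what lets __rpc_disable_timer() cancel a pending timer
 * without calling del_timer_sync() and risking deadlock. A minimal
 * userspace analogue using POSIX threads; all names are hypothetical:
 */
#if 0	/* example only, never compiled */
#include <pthread.h>
#include <stddef.h>

struct toy_timer {
	pthread_mutex_t lock;
	void (*timeout_fn)(struct toy_timer *);
};

static void toy_run_timer(struct toy_timer *t)
{
	void (*cb)(struct toy_timer *);

	pthread_mutex_lock(&t->lock);
	cb = t->timeout_fn;		/* snapshot the callback...    */
	t->timeout_fn = NULL;		/* ...and disarm it atomically */
	pthread_mutex_unlock(&t->lock);

	if (cb)				/* run with no locks held      */
		cb(t);
}
#endif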
/*
 * Set up a timer for the current task.
 */
static inline void
__rpc_add_timer(struct rpc_task *task, rpc_action timer)
{
	if (!task->tk_timeout)
		return;

	dprintk("RPC: %4d setting alarm for %lu ms\n",
			task->tk_pid, task->tk_timeout * 1000 / HZ);

	if (timer)
		task->tk_timeout_fn = timer;
	else
		task->tk_timeout_fn = __rpc_default_timer;
	mod_timer(&task->tk_timer, jiffies + task->tk_timeout);
}

/*
 * Set up a timer for an already sleeping task.
 */
void rpc_add_timer(struct rpc_task *task, rpc_action timer)
{
	spin_lock_bh(&rpc_queue_lock);
	if (!(RPC_IS_RUNNING(task) || task->tk_wakeup))
		__rpc_add_timer(task, timer);
	spin_unlock_bh(&rpc_queue_lock);
}

/*
 * Delete any timer for the current task. Because we use del_timer_sync(),
 * this function should never be called while holding rpc_queue_lock.
 */
static inline void
rpc_delete_timer(struct rpc_task *task)
{
	if (timer_pending(&task->tk_timer)) {
		dprintk("RPC: %4d deleting timer\n", task->tk_pid);
		del_timer_sync(&task->tk_timer);
	}
}

/*
 * Add new request to wait queue.
 *
 * Swapper tasks always get inserted at the head of the queue.
 * This should avoid many nasty memory deadlocks and hopefully
 * improve overall performance.
 * Everyone else gets appended to the queue to ensure proper FIFO behavior.
 */
static inline int
__rpc_add_wait_queue(struct rpc_wait_queue *queue, struct rpc_task *task)
{
	if (task->tk_rpcwait == queue)
		return 0;

	if (task->tk_rpcwait) {
		printk(KERN_WARNING "RPC: doubly enqueued task!\n");
		return -EWOULDBLOCK;
	}
	if (RPC_IS_SWAPPER(task))
		rpc_insert_list(&queue->task, task);
	else
		rpc_append_list(&queue->task, task);
	task->tk_rpcwait = queue;

	dprintk("RPC: %4d added to queue %p \"%s\"\n",
				task->tk_pid, queue, rpc_qname(queue));

	return 0;
}

int
rpc_add_wait_queue(struct rpc_wait_queue *q, struct rpc_task *task)
{
	int		result;

	spin_lock_bh(&rpc_queue_lock);
	result = __rpc_add_wait_queue(q, task);
	spin_unlock_bh(&rpc_queue_lock);
	return result;
}

/*
 * Remove request from queue.
 * Note: must be called with spin lock held.
 */
static inline void
__rpc_remove_wait_queue(struct rpc_task *task)
{
	struct rpc_wait_queue *queue = task->tk_rpcwait;

	if (!queue)
		return;

	rpc_remove_list(&queue->task, task);
	task->tk_rpcwait = NULL;

	dprintk("RPC: %4d removed from queue %p \"%s\"\n",
				task->tk_pid, queue, rpc_qname(queue));
}

void
rpc_remove_wait_queue(struct rpc_task *task)
{
	if (!task->tk_rpcwait)
		return;
	spin_lock_bh(&rpc_queue_lock);
	__rpc_remove_wait_queue(task);
	spin_unlock_bh(&rpc_queue_lock);
}

/*
 * Make an RPC task runnable.
 *
 * Note: If the task is ASYNC, this must be called with
 * the spinlock held to protect the wait queue operation.
 */
static inline void
rpc_make_runnable(struct rpc_task *task)
{
	if (task->tk_timeout_fn) {
		printk(KERN_ERR "RPC: task w/ running timer in rpc_make_runnable!!\n");
		return;
	}
	rpc_set_running(task);
	if (RPC_IS_ASYNC(task)) {
		if (RPC_IS_SLEEPING(task)) {
			int status;
			status = __rpc_add_wait_queue(&schedq, task);
			if (status < 0) {
				printk(KERN_WARNING "RPC: failed to add task to queue: error: %d!\n", status);
				task->tk_status = status;
				return;
			}
			rpc_clear_sleeping(task);
			if (waitqueue_active(&rpciod_idle))
				wake_up(&rpciod_idle);
		}
	} else {
		rpc_clear_sleeping(task);
		if (waitqueue_active(&task->tk_wait))
			wake_up(&task->tk_wait);
	}
}
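/*
 * [Illustrative sketch added by the editor; not part of the original
 * sched.c.] __rpc_add_wait_queue() above queues swapper tasks at the
 * head and everyone else at the tail. rpc_insert_list() and
 * rpc_append_list() are defined elsewhere in the kernel; the toy
 * singly linked list below only illustrates the head-vs-tail policy:
 */
#if 0	/* example only, never compiled */
struct toy_task {
	struct toy_task	*next;
	int		is_swapper;
};

static void toy_enqueue(struct toy_task **head, struct toy_task *t)
{
	if (t->is_swapper) {		/* urgent: jump the queue  */
		t->next = *head;
		*head = t;
	} else {			/* normal: FIFO append     */
		struct toy_task **p = head;

		while (*p)
			p = &(*p)->next;
		t->next = NULL;
		*p = t;
	}
}
#endif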
/*
 * Place a newly initialized task on the schedq.
 */
static inline void
rpc_schedule_run(struct rpc_task *task)
{
	/* Don't run a child twice! */
	if (RPC_IS_ACTIVATED(task))
		return;
	task->tk_active = 1;
	rpc_set_sleeping(task);
	rpc_make_runnable(task);
}

/*
 *	For other people who may need to wake the I/O daemon
 *	but should (for now) know nothing about its innards
 */
void rpciod_wake_up(void)
{
	if(rpciod_pid==0)
		printk(KERN_ERR "rpciod: wot no daemon?\n");
	if (waitqueue_active(&rpciod_idle))
		wake_up(&rpciod_idle);
}

/*
 * Prepare for sleeping on a wait queue.
 * By always appending tasks to the list we ensure FIFO behavior.
 * NB: An RPC task will only receive interrupt-driven events as long
 * as it's on a wait queue.
 */
static void
__rpc_sleep_on(struct rpc_wait_queue *q, struct rpc_task *task,
			rpc_action action, rpc_action timer)
{
	int status;

	dprintk("RPC: %4d sleep_on(queue \"%s\" time %ld)\n", task->tk_pid,
				rpc_qname(q), jiffies);

	if (!RPC_IS_ASYNC(task) && !RPC_IS_ACTIVATED(task)) {
		printk(KERN_ERR "RPC: Inactive synchronous task put to sleep!\n");
		return;
	}

	/* Mark the task as being activated if so needed */
	if (!RPC_IS_ACTIVATED(task)) {
		task->tk_active = 1;
		rpc_set_sleeping(task);
	}

	status = __rpc_add_wait_queue(q, task);
	if (status) {
		printk(KERN_WARNING "RPC: failed to add task to queue: error: %d!\n", status);
		task->tk_status = status;
	} else {
		rpc_clear_running(task);
		if (task->tk_callback) {
			dprintk(KERN_ERR "RPC: %4d overwrites an active callback\n", task->tk_pid);
			BUG();
		}
		task->tk_callback = action;
		__rpc_add_timer(task, timer);
	}
}

void
rpc_sleep_on(struct rpc_wait_queue *q, struct rpc_task *task,
				rpc_action action, rpc_action timer)
{
	/*
	 * Protect the queue operations.
	 */
	spin_lock_bh(&rpc_queue_lock);
	__rpc_sleep_on(q, task, action, timer);
	spin_unlock_bh(&rpc_queue_lock);
}

void
rpc_sleep_locked(struct rpc_wait_queue *q, struct rpc_task *task,
		 rpc_action action, rpc_action timer)
{
	/*
	 * Protect the queue operations.
	 */
	spin_lock_bh(&rpc_queue_lock);
	__rpc_sleep_on(q, task, action, timer);
	__rpc_lock_task(task);
	spin_unlock_bh(&rpc_queue_lock);
}
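/*
 * [Illustrative sketch added by the editor; not part of the original
 * sched.c.] A typical caller pairs rpc_sleep_on() with a later
 * wake-up from another context. The queue name and the two demo_*
 * functions below are hypothetical; the rpc_* calls, RPC_INIT_WAITQ,
 * and the tk_* fields are the ones used in this file:
 */
#if 0	/* example only, never compiled */
static struct rpc_wait_queue	demo_queue = RPC_INIT_WAITQ("demoq");

static void demo_timeout(struct rpc_task *task)
{
	task->tk_status = -ETIMEDOUT;	/* mirrors __rpc_default_timer() */
	rpc_wake_up_task(task);
}

static void demo_wait_for_event(struct rpc_task *task)
{
	task->tk_timeout = 5 * HZ;	/* give up after ~5 seconds */
	/* Sleep until demo_timeout fires or some other context calls
	 * rpc_wake_up_task(); the FSM then resumes at tk_action. */
	rpc_sleep_on(&demo_queue, task, NULL, demo_timeout);
}
#endif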
/**
 * __rpc_wake_up_task - wake up a single rpc_task
 * @task: task to be woken up
 *
 * If the task is locked, it is merely removed from the queue, and
 * 'task->tk_wakeup' is set. rpc_unlock_task() will then ensure
 * that it is woken up as soon as the lock count goes to zero.
 *
 * Caller must hold rpc_queue_lock
 */
static void
__rpc_wake_up_task(struct rpc_task *task)
{
	dprintk("RPC: %4d __rpc_wake_up_task (now %ld inh %d)\n",
					task->tk_pid, jiffies, rpc_inhibit);
#ifdef RPC_DEBUG
	if (task->tk_magic != 0xf00baa) {
		printk(KERN_ERR "RPC: attempt to wake up non-existing task!\n");
		rpc_debug = ~0;
		rpc_show_tasks();
		return;
	}
#endif
	/* Has the task been executed yet? If not, we cannot wake it up! */
	if (!RPC_IS_ACTIVATED(task)) {
		printk(KERN_ERR "RPC: Inactive task (%p) being woken up!\n", task);
		return;
	}
	if (RPC_IS_RUNNING(task))
		return;

	__rpc_disable_timer(task);
	if (task->tk_rpcwait != &schedq)
		__rpc_remove_wait_queue(task);

	/* If the task has been locked, then set tk_wakeup so that
	 * rpc_unlock_task() wakes us up... */
	if (task->tk_lock) {
		task->tk_wakeup = 1;
		return;
	} else
		task->tk_wakeup = 0;

	rpc_make_runnable(task);

	dprintk("RPC:      __rpc_wake_up_task done\n");
}

/*
 * Default timeout handler if none specified by user
 */
static void
__rpc_default_timer(struct rpc_task *task)
{
	dprintk("RPC: %d timeout (default timer)\n", task->tk_pid);
	task->tk_status = -ETIMEDOUT;
	rpc_wake_up_task(task);
}

/*
 * Wake up the specified task
 */
void
rpc_wake_up_task(struct rpc_task *task)
{
	if (RPC_IS_RUNNING(task))
		return;
	spin_lock_bh(&rpc_queue_lock);
	__rpc_wake_up_task(task);
	spin_unlock_bh(&rpc_queue_lock);
}

/*
 * Wake up the next task on the wait queue.
 */
struct rpc_task *
rpc_wake_up_next(struct rpc_wait_queue *queue)
{
	struct rpc_task	*task;

	dprintk("RPC:      wake_up_next(%p \"%s\")\n", queue, rpc_qname(queue));
	spin_lock_bh(&rpc_queue_lock);
	if ((task = queue->task) != 0)
		__rpc_wake_up_task(task);
	spin_unlock_bh(&rpc_queue_lock);

	return task;
}

/**
 * rpc_wake_up - wake up all rpc_tasks
 * @queue: rpc_wait_queue on which the tasks are sleeping
 *
 * Grabs rpc_queue_lock
 */
void
rpc_wake_up(struct rpc_wait_queue *queue)
{
	spin_lock_bh(&rpc_queue_lock);
	while (queue->task)
		__rpc_wake_up_task(queue->task);
	spin_unlock_bh(&rpc_queue_lock);
}

/**
 * rpc_wake_up_status - wake up all rpc_tasks and set their status value.
 * @queue: rpc_wait_queue on which the tasks are sleeping
 * @status: status value to set
 *
 * Grabs rpc_queue_lock
 */
void
rpc_wake_up_status(struct rpc_wait_queue *queue, int status)
{
	struct rpc_task	*task;

	spin_lock_bh(&rpc_queue_lock);
	while ((task = queue->task) != NULL) {
		task->tk_status = status;
		__rpc_wake_up_task(task);
	}
	spin_unlock_bh(&rpc_queue_lock);
}

/*
 * Lock down a sleeping task to prevent it from waking up
 * and disappearing from beneath us.
 *
 * This function should always be called with the
 * rpc_queue_lock held.
 */
int
__rpc_lock_task(struct rpc_task *task)
{
	if (!RPC_IS_RUNNING(task))
		return ++task->tk_lock;
	return 0;
}

void
rpc_unlock_task(struct rpc_task *task)
{
	spin_lock_bh(&rpc_queue_lock);
	if (task->tk_lock && !--task->tk_lock && task->tk_wakeup)
		__rpc_wake_up_task(task);
	spin_unlock_bh(&rpc_queue_lock);
}

/*
 * Run a task at a later time
 */
static void	__rpc_atrun(struct rpc_task *);
void
rpc_delay(struct rpc_task *task, unsigned long delay)
{
	task->tk_timeout = delay;
	rpc_sleep_on(&delay_queue, task, NULL, __rpc_atrun);
}

static void
__rpc_atrun(struct rpc_task *task)
{
	task->tk_status = 0;
	rpc_wake_up_task(task);
}
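/*
 * [Illustrative sketch added by the editor; not part of the original
 * sched.c.] rpc_delay() above is how an FSM step backs off: the task
 * is parked on delay_queue and __rpc_atrun() makes it runnable again
 * once the timeout expires. A state routine might retry like this
 * (demo_retry_step is hypothetical):
 */
#if 0	/* example only, never compiled */
static void demo_retry_step(struct rpc_task *task)
{
	if (task->tk_status == -ENOMEM) {
		task->tk_status = 0;
		/* tk_action is left unchanged, so __rpc_execute()
		 * re-enters this same step in about one second */
		rpc_delay(task, HZ);
		return;
	}
	/* otherwise advance the state machine by setting the next
	 * tk_action here... */
}
#endif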
/*
 * This is the RPC `scheduler' (or rather, the finite state machine).
 */
static int
__rpc_execute(struct rpc_task *task)
{
	int		status = 0;

	dprintk("RPC: %4d rpc_execute flgs %x\n",
				task->tk_pid, task->tk_flags);

	if (!RPC_IS_RUNNING(task)) {
		printk(KERN_WARNING "RPC: rpc_execute called for sleeping task!!\n");
		return 0;
	}

 restarted:
	while (1) {
		/*
		 * Execute any pending callback.
		 */
		if (RPC_DO_CALLBACK(task)) {
			/* Define a callback save pointer */
			void (*save_callback)(struct rpc_task *);

			/*
			 * If a callback exists, save it, reset it,
			 * call it.
			 * The save is needed to stop from resetting
			 * another callback set within the callback handler
			 * - Dave
			 */
			save_callback=task->tk_callback;
			task->tk_callback=NULL;
			save_callback(task);
		}

		/*
		 * Perform the next FSM step.
		 * tk_action may be NULL when the task has been killed
		 * by someone else.
		 */
		if (RPC_IS_RUNNING(task)) {
			/*
			 * Garbage collection of pending timers...
			 */
			rpc_delete_timer(task);
			if (!task->tk_action)
				break;
			task->tk_action(task);
		}

		/*
		 * Check whether task is sleeping.
		 */
		spin_lock_bh(&rpc_queue_lock);
		if (!RPC_IS_RUNNING(task)) {
			rpc_set_sleeping(task);
			if (RPC_IS_ASYNC(task)) {
				spin_unlock_bh(&rpc_queue_lock);
				return 0;
			}
		}
		spin_unlock_bh(&rpc_queue_lock);

		while (RPC_IS_SLEEPING(task)) {
			/* sync task: sleep here */
			dprintk("RPC: %4d sync task going to sleep\n",
							task->tk_pid);
			if (current->pid == rpciod_pid)
				printk(KERN_ERR "RPC: rpciod waiting on sync task!\n");

			__wait_event(task->tk_wait, !RPC_IS_SLEEPING(task));
			dprintk("RPC: %4d sync task resuming\n", task->tk_pid);

			/*
			 * When a sync task receives a signal, it exits with
			 * -ERESTARTSYS. In order to catch any callbacks that
			 * clean up after sleeping on some queue, we don't
			 * break the loop here, but go around once more.
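/*
 * [Illustrative sketch added by the editor; not part of the original
 * sched.c. The listing above is cut off mid-comment by the page
 * break: __rpc_execute() continues on page 2.] The core of the loop
 * above is a plain finite state machine: each tk_action either sets
 * the next tk_action or clears it to terminate. In miniature:
 */
#if 0	/* example only, never compiled */
struct toy_fsm {
	void (*action)(struct toy_fsm *);	/* current FSM step */
	int running;				/* cleared to sleep */
};

static void toy_execute(struct toy_fsm *t)
{
	/* each step either installs the next step or ends the run */
	while (t->running && t->action)
		t->action(t);
}
#endif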
