	list_del(&p->run_list);
	list_add_tail(&p->run_list, &runqueue_head);
}

static inline void move_first_runqueue(struct task_struct * p)
{
	list_del(&p->run_list);
	list_add(&p->run_list, &runqueue_head);
}

2.5 Wait Queues

When a process requests the kernel to do something which is currently impossible but that may become possible later, the process is put to sleep and is woken up when the request is more likely to be satisfied. One of the kernel mechanisms used for this is called a 'wait queue'.

The Linux implementation allows wake-one semantics by means of the TASK_EXCLUSIVE flag. With wait queues, you can either use a well-known queue and then simply sleep_on/sleep_on_timeout/interruptible_sleep_on/interruptible_sleep_on_timeout, or you can define your own wait queue and use add/remove_wait_queue to add and remove yourself from it, along with wake_up/wake_up_interruptible to wake up sleepers when needed.

An example of the first usage of wait queues is the interaction between the page allocator (in mm/page_alloc.c:__alloc_pages()) and the kswapd kernel daemon (in mm/vmscan.c:kswapd()), by means of the well-known wait queue kswapd_wait, declared in mm/vmscan.c: the kswapd daemon sleeps on this queue and is woken up whenever the page allocator needs to free up some pages.

An example of autonomous wait queue usage is the interaction between a user process requesting data via the read(2) system call and the kernel running in interrupt context to supply the data.
An interrupt handler might look like (simplified from drivers/char/rtc.c:rtc_interrupt()):

static DECLARE_WAIT_QUEUE_HEAD(rtc_wait);

void rtc_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
	spin_lock(&rtc_lock);
	rtc_irq_data = CMOS_READ(RTC_INTR_FLAGS);
	spin_unlock(&rtc_lock);
	wake_up_interruptible(&rtc_wait);
}

So, the interrupt handler obtains the data by reading from some device-specific I/O port (the CMOS_READ() macro turns into a couple of outb/inb instructions) and then wakes up whoever is sleeping on the rtc_wait wait queue.

Now, the read(2) system call could be implemented as:

ssize_t rtc_read(struct file *file, char *buf, size_t count, loff_t *ppos)
{
	DECLARE_WAITQUEUE(wait, current);
	unsigned long data;
	ssize_t retval;

	add_wait_queue(&rtc_wait, &wait);
	current->state = TASK_INTERRUPTIBLE;
	do {
		spin_lock_irq(&rtc_lock);
		data = rtc_irq_data;
		rtc_irq_data = 0;
		spin_unlock_irq(&rtc_lock);

		if (data != 0)
			break;

		if (file->f_flags & O_NONBLOCK) {
			retval = -EAGAIN;
			goto out;
		}
		if (signal_pending(current)) {
			retval = -ERESTARTSYS;
			goto out;
		}
		schedule();
	} while (1);
	retval = put_user(data, (unsigned long *)buf);
	if (!retval)
		retval = sizeof(unsigned long);

out:
	current->state = TASK_RUNNING;
	remove_wait_queue(&rtc_wait, &wait);
	return retval;
}

What happens in rtc_read() is this:

1. We declare a wait queue element pointing to the current process context.
2. We add this element to the rtc_wait wait queue.
3. We mark the current context as TASK_INTERRUPTIBLE, which means it will not be rescheduled after the next time it sleeps.
4. We check whether data is available; if it is, we break out, copy the data to the user buffer, mark ourselves as TASK_RUNNING, remove ourselves from the wait queue and return.
5. 
If there is no data yet, we check whether the user specified non-blocking I/O and, if so, we fail with EAGAIN (which is the same as EWOULDBLOCK).
6. We also check whether a signal is pending and, if so, inform the "higher layers" to restart the system call if necessary. By "if necessary" I mean the details of signal disposition as specified in the sigaction(2) system call.
7. Then we "switch out", i.e. fall asleep, until woken up by the interrupt handler. If we didn't mark ourselves as TASK_INTERRUPTIBLE, then the scheduler could schedule us sooner than when the data is available and cause unneeded processing.

It is also worth pointing out that, using wait queues, it is rather easy to implement the poll(2) system call:

static unsigned int rtc_poll(struct file *file, poll_table *wait)
{
	unsigned long l;

	poll_wait(file, &rtc_wait, wait);

	spin_lock_irq(&rtc_lock);
	l = rtc_irq_data;
	spin_unlock_irq(&rtc_lock);

	if (l != 0)
		return POLLIN | POLLRDNORM;
	return 0;
}

All the work is done by the device-independent function poll_wait(), which performs the necessary wait queue manipulations; all we need to do is point it to the wait queue which is woken up by our device-specific interrupt handler.

2.6 Kernel Timers

Now let us turn our attention to kernel timers. Kernel timers are used to dispatch the execution of a particular function (called a 'timer handler') at a specified time in the future. The main data structure is 'struct timer_list', declared in include/linux/timer.h:

struct timer_list {
	struct list_head list;
	unsigned long expires;
	unsigned long data;
	void (*function)(unsigned long);
};