...on UP, every process on the runqueue is eligible to be scheduled; on SMP, only a process not already running on another cpu is eligible to be scheduled on this cpu. The goodness is calculated by a function called goodness(), which treats realtime processes by making their goodness very high (1000 + p->rt_priority); being greater than 1000 guarantees that no SCHED_OTHER process can win, so realtime processes contend only with other realtime processes that may have a greater p->rt_priority. The goodness function returns 0 if the process' time slice (p->counter) is over. For non-realtime processes, the initial value of goodness is set to p->counter - this way the process is less likely to get the CPU if it already had it for a while, i.e. interactive processes are favoured over cpu-bound number crunchers. The arch-specific constant PROC_CHANGE_PENALTY attempts to implement "cpu affinity", i.e. give an advantage to a process staying on the same cpu. Goodness also gives a slight advantage to processes whose mm points to the current active_mm and to processes with no (user) address space, i.e. kernel threads.

13. If the current value of goodness is 0, then the entire list of processes (not just the runqueue!) is examined and their dynamic priorities are recalculated using a simple algorithm:

recalculate:
        {
                struct task_struct *p;

                spin_unlock_irq(&runqueue_lock);
                read_lock(&tasklist_lock);
                for_each_task(p)
                        p->counter = (p->counter >> 1) + p->priority;
                read_unlock(&tasklist_lock);
                spin_lock_irq(&runqueue_lock);
        }

Note that we drop the runqueue_lock before we recalculate, because we go through the entire set of processes, which can take a long time; meanwhile schedule() could be called on another cpu and select a process with goodness good enough for that cpu, whilst we on this cpu were forced to recalculate. Admittedly, this is somewhat inconsistent, because while we (on this cpu) are selecting the process with the best goodness, schedule() running on another cpu could be recalculating dynamic priorities.

14. From this point on it is certain that 'next' points to the task to be scheduled, so we initialise next->has_cpu to 1 and next->processor to this_cpu. The runqueue_lock can now be unlocked.

15. If we are switching back to the same task (next == prev), then we can simply reacquire the global kernel lock and return, i.e. skip all the hardware-level (registers, stack etc.) and VM-related (switch page directory, recalculate active_mm etc.) work.

16. The macro switch_to() is architecture specific and (on i386) is concerned with a) FPU handling, b) LDT handling, c) reloading segment registers, d) TSS handling and e) reloading debug registers.
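To make these weighting rules concrete, here is a minimal user-space sketch of the goodness logic just described. It is not the kernel's goodness() verbatim: the 'struct task' type and its fields are pared-down stand-ins, the realtime check is simplified, and the PROC_CHANGE_PENALTY value is merely illustrative.

        /* Sketch of the goodness() weighting described above.
         * Simplified for illustration; not the kernel's actual code. */
        #include <stdio.h>

        #define SCHED_OTHER         0
        #define SCHED_FIFO          1
        #define PROC_CHANGE_PENALTY 15   /* arch-specific; illustrative value */

        struct task {
                int  policy;        /* SCHED_OTHER or a realtime policy       */
                int  rt_priority;   /* realtime priority                      */
                int  counter;       /* remaining time slice                   */
                int  processor;     /* cpu the task last ran on               */
                void *mm;           /* user address space, NULL for kthreads  */
        };

        static int goodness(const struct task *p, int this_cpu, void *this_mm)
        {
                int weight;

                /* Realtime: always above 1000, so no SCHED_OTHER task can
                 * win; realtime tasks contend via rt_priority. */
                if (p->policy != SCHED_OTHER)
                        return 1000 + p->rt_priority;

                /* Time slice used up: goodness is 0. */
                weight = p->counter;
                if (!weight)
                        return 0;

                /* cpu affinity: prefer a task that last ran on this cpu. */
                if (p->processor == this_cpu)
                        weight += PROC_CHANGE_PENALTY;

                /* Slight advantage when no address-space switch is needed:
                 * same mm as the current active one, or a kernel thread. */
                if (p->mm == this_mm || !p->mm)
                        weight += 1;

                return weight;
        }

        int main(void)
        {
                struct task interactive = { SCHED_OTHER, 0, 11, 0, NULL };
                struct task batch       = { SCHED_OTHER, 0,  2, 1, NULL };
                struct task realtime    = { SCHED_FIFO, 50,  5, 1, NULL };

                printf("interactive: %d\n", goodness(&interactive, 0, NULL));
                printf("batch:       %d\n", goodness(&batch,       0, NULL));
                printf("realtime:    %d\n", goodness(&realtime,    0, NULL));
                return 0;
        }

Run on cpu 0, the interactive task scores its counter plus the affinity bonus, the cpu-bound task that last ran elsewhere scores little more than its remaining counter, and the realtime task scores above 1000 and beats both, exactly as the 1000 + p->rt_priority rule requires.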
2.4 Linux linked list implementation

Before we go on to examine the implementation of wait queues, we must acquaint ourselves with the standard Linux doubly-linked list implementation, because wait queues (as well as almost everything else in Linux) make heavy use of it. In jargon it is called the "list.h implementation", because the most relevant file is include/linux/list.h.

The fundamental data structure here is 'struct list_head':

struct list_head {
        struct list_head *next, *prev;
};

#define LIST_HEAD_INIT(name) { &(name), &(name) }

#define LIST_HEAD(name) \
        struct list_head name = LIST_HEAD_INIT(name)

#define INIT_LIST_HEAD(ptr) do { \
        (ptr)->next = (ptr); (ptr)->prev = (ptr); \
} while (0)

#define list_entry(ptr, type, member) \
        ((type *)((char *)(ptr) - (unsigned long)(&((type *)0)->member)))

#define list_for_each(pos, head) \
        for (pos = (head)->next; pos != (head); pos = pos->next)

The first three macros initialise an empty list by pointing both the next and prev pointers to the list head itself. It is obvious from C syntactical restrictions which one should be used where - for example, LIST_HEAD_INIT() can be used to initialise a structure's element in a declaration, the second can be used for static variable initialising declarations and the third can be used inside a function.

The macro list_entry() gives access to an individual list element, for example (from fs/file_table.c:fs_may_remount_ro()):

struct super_block {
        ...
        struct list_head s_files;
        ...
} *sb = &some_super_block;

struct file {
        ...
        struct list_head f_list;
        ...
} *file;

struct list_head *p;

for (p = sb->s_files.next; p != &sb->s_files; p = p->next) {
        struct file *file = list_entry(p, struct file, f_list);
        do something to 'file'
}

A good example of the use of the list_for_each() macro is in the scheduler, where we walk the runqueue looking for the process with the highest goodness:

static LIST_HEAD(runqueue_head);
struct list_head *tmp;
struct task_struct *p;

list_for_each(tmp, &runqueue_head) {
        p = list_entry(tmp, struct task_struct, run_list);
        if (can_schedule(p)) {
                int weight = goodness(p, this_cpu, prev->active_mm);
                if (weight > c)
                        c = weight, next = p;
        }
}

Here, p->run_list is declared as 'struct list_head run_list' inside the task_struct structure and serves as an anchor into the list. Removing an element from the list and adding one (to the head or tail of the list) is done by the list_del()/list_add()/list_add_tail() macros. The examples below add a task to, remove a task from, and move a task to the end of the runqueue:

static inline void del_from_runqueue(struct task_struct * p)
{
        nr_running--;
        list_del(&p->run_list);
        p->run_list.next = NULL;
}

static inline void add_to_runqueue(struct task_struct * p)
{
        list_add(&p->run_list, &runqueue_head);
        nr_running++;
}

static inline void move_last_runqueue(struct task_struct * p)
{
        list_del(&p->run_list);
        list_add_tail(&p->run_list, &runqueue_head);
}
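Because these macros are plain C, they can be exercised outside the kernel. The following self-contained sketch builds a short list, walks it with list_for_each()/list_entry() and removes an element with list_del(); the 'struct item' type and its fields are invented for illustration, and the helper bodies follow the usual list.h pattern rather than being copied from any particular kernel version.

        #include <stdio.h>

        struct list_head {
                struct list_head *next, *prev;
        };

        #define LIST_HEAD_INIT(name) { &(name), &(name) }
        #define LIST_HEAD(name) struct list_head name = LIST_HEAD_INIT(name)
        #define list_entry(ptr, type, member) \
                ((type *)((char *)(ptr) - (unsigned long)(&((type *)0)->member)))
        #define list_for_each(pos, head) \
                for (pos = (head)->next; pos != (head); pos = pos->next)

        /* splice 'new' between two known consecutive entries */
        static inline void __list_add(struct list_head *new,
                                      struct list_head *prev,
                                      struct list_head *next)
        {
                next->prev = new;
                new->next = next;
                new->prev = prev;
                prev->next = new;
        }

        /* insert right after 'head' (stack-like) */
        static inline void list_add(struct list_head *new, struct list_head *head)
        {
                __list_add(new, head, head->next);
        }

        /* insert right before 'head', i.e. at the tail (queue-like) */
        static inline void list_add_tail(struct list_head *new, struct list_head *head)
        {
                __list_add(new, head->prev, head);
        }

        /* unlink an entry by making its neighbours point past it */
        static inline void list_del(struct list_head *entry)
        {
                entry->next->prev = entry->prev;
                entry->prev->next = entry->next;
        }

        /* invented type: a task-like item with an embedded list anchor */
        struct item {
                int id;
                struct list_head run_list;
        };

        int main(void)
        {
                LIST_HEAD(queue_head);
                struct item a = { 1 }, b = { 2 }, c = { 3 };
                struct list_head *pos;

                list_add_tail(&a.run_list, &queue_head);  /* queue: 1     */
                list_add_tail(&b.run_list, &queue_head);  /* queue: 1 2   */
                list_add(&c.run_list, &queue_head);       /* queue: 3 1 2 */

                list_del(&a.run_list);                    /* queue: 3 2   */

                list_for_each(pos, &queue_head) {
                        struct item *it = list_entry(pos, struct item, run_list);
                        printf("id=%d\n", it->id);
                }
                return 0;
        }

Note the design that makes this work: the list_head is embedded inside the item rather than pointing at it, and list_entry() recovers the containing structure by subtracting the member's offset, which is why one and the same set of macros serves files, tasks, wait queues and everything else.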