__cache_add(obj);
</programlisting>

<para>
Note that I decide that the <structfield>popularity</structfield>
count should be protected by the <symbol>cache_lock</symbol> rather
than the per-object lock: this is because it (like the
<structname>struct list_head</structname> inside the object) is
logically part of the infrastructure.  This way, I don't need to grab
the lock of every object in <function>__cache_add</function> when
seeking the least popular.
</para>

<para>
I also decided that the <structfield>id</structfield> member is
unchangeable, so I don't need to grab each object lock in
<function>__cache_find()</function> to examine the
<structfield>id</structfield>: the object lock is only used by a
caller who wants to read or write the <structfield>name</structfield>
field.
</para>

<para>
Note also that I added a comment describing what data was protected by
which locks.  This is extremely important, as it describes the runtime
behavior of the code, and can be hard to gain from just reading.  And
as Alan Cox says, <quote>Lock data, not code</quote>.
</para>
</sect1>
</chapter>

<chapter id="common-problems">
<title>Common Problems</title>

<sect1 id="deadlock">
<title>Deadlock: Simple and Advanced</title>

<para>
There is a coding bug where a piece of code tries to grab a
spinlock twice: it will spin forever, waiting for the lock to
be released (spinlocks, rwlocks and semaphores are not
recursive in Linux).  This is trivial to diagnose: not a
stay-up-five-nights-talk-to-fluffy-code-bunnies kind of
problem.
</para>

<para>
For a slightly more complex case, imagine you have a region
shared by a softirq and user context.  If you use a
<function>spin_lock()</function> call to protect it, it is
possible that the user context will be interrupted by the
softirq while it holds the lock, and the softirq will then
spin forever trying to get the same lock.
</para>

<para>
Both of these are called deadlock, and as shown above, it can
occur even with a single CPU (although not on UP compiles,
since spinlocks vanish on kernel compiles with
<symbol>CONFIG_SMP</symbol>=n.  You'll still get data corruption
in the second example).
</para>

<para>
This complete lockup is easy to diagnose: on SMP boxes the
watchdog timer or compiling with <symbol>DEBUG_SPINLOCKS</symbol>
set (<filename>include/linux/spinlock.h</filename>) will show this
up immediately when it happens.
</para>

<para>
A more complex problem is the so-called 'deadly embrace',
involving two or more locks.  Say you have a hash table: each
entry in the table is a spinlock, and a chain of hashed
objects.  Inside a softirq handler, you sometimes want to
alter an object from one place in the hash to another: you
grab the spinlock of the old hash chain and the spinlock of
the new hash chain, and delete the object from the old one,
and insert it in the new one.
</para>

<para>
There are two problems here.  First, if your code ever tries to
move the object to the same chain, it will deadlock with itself
as it tries to lock it twice.  Secondly, if the same softirq on
another CPU is trying to move another object in the reverse
direction, the following could happen:
</para>

<table>
<title>Consequences</title>
<tgroup cols="2" align="left">
<thead>
<row>
<entry>CPU 1</entry>
<entry>CPU 2</entry>
</row>
</thead>
<tbody>
<row>
<entry>Grab lock A -> OK</entry>
<entry>Grab lock B -> OK</entry>
</row>
<row>
<entry>Grab lock B -> spin</entry>
<entry>Grab lock A -> spin</entry>
</row>
</tbody>
</tgroup>
</table>

<para>
The two CPUs will spin forever, waiting for the other to give up
their lock.  It will look, smell, and feel like a crash.
</para>
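<para>
To make the embrace concrete, here is a sketch of the sort of code
that produces it.  The names (<function>move_object()</function>,
<symbol>bucket_lock</symbol>, <symbol>NR_BUCKETS</symbol>) are
hypothetical, not from the example cache or any real driver:
</para>

<programlisting>
/* DELIBERATELY BUGGY: a sketch of the deadly embrace above. */
static spinlock_t bucket_lock[NR_BUCKETS];  /* each set up with
                                               spin_lock_init() (elided) */
static struct list_head bucket[NR_BUCKETS];

static void move_object(struct object *obj,
                        unsigned int old, unsigned int new)
{
        spin_lock(&bucket_lock[old]);   /* "lock A" on this CPU */
        spin_lock(&bucket_lock[new]);   /* Deadlocks against itself if
                                           old == new, and against another
                                           CPU moving in the reverse
                                           direction (its "lock A" is our
                                           "lock B"). */
        list_del(&obj->list);
        list_add(&obj->list, &bucket[new]);
        spin_unlock(&bucket_lock[new]);
        spin_unlock(&bucket_lock[old]);
}
</programlisting>

<para>
Always taking the lower-numbered lock first (and only one lock when
<symbol>old == new</symbol>) would avoid both problems: this is the
lock ordering idea discussed next.
</para>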
</sect1>

<sect1 id="techs-deadlock-prevent">
<title>Preventing Deadlock</title>

<para>
Textbooks will tell you that if you always lock in the same order, you
will never get this kind of deadlock.  Practice will tell you that this
approach doesn't scale: when I create a new lock, I don't understand
enough of the kernel to figure out where in the 5000 lock hierarchy it
will fit.
</para>

<para>
The best locks are encapsulated: they never get exposed in headers, and
are never held around calls to non-trivial functions outside the same
file.  You can read through this code and see that it will never
deadlock, because it never tries to grab another lock while it has
that one.  People using your code don't even need to know you are
using a lock.
</para>

<para>
A classic problem here is when you provide callbacks or hooks: if you
call these with the lock held, you risk simple deadlock, or a deadly
embrace (who knows what the callback will do?).  Remember, the other
programmers are out to get you, so don't do this.
</para>

<sect2 id="techs-deadlock-overprevent">
<title>Overzealous Prevention Of Deadlocks</title>

<para>
Deadlocks are problematic, but not as bad as data corruption.  Code
which grabs a read lock, searches a list, fails to find what it wants,
drops the read lock, grabs a write lock and inserts the object has a
race condition.
</para>

<para>
If you don't see why, please stay the fuck away from my code.
</para>
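<para>
For everyone else, here is a sketch of the race and the fix.  The lock
and helper names (<symbol>list_lock</symbol>,
<function>__find_obj()</function>, <function>__insert_obj()</function>)
are hypothetical:
</para>

<programlisting>
/* BUGGY: in the window between read_unlock() and write_lock(),
   another CPU can insert an object with the same id, so we
   insert a duplicate. */
        read_lock(&list_lock);
        obj = __find_obj(id);
        read_unlock(&list_lock);
        if (!obj) {
                write_lock(&list_lock);
                __insert_obj(new);
                write_unlock(&list_lock);
        }

/* CORRECT: search again once we hold the write lock. */
        write_lock(&list_lock);
        obj = __find_obj(id);
        if (!obj)
                __insert_obj(new);
        write_unlock(&list_lock);
</programlisting>

<para>
The point is that dropping the lock throws away everything you learned
while holding it: whoever grabs the lock next can change the list
before you reacquire it.
</para>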
</sect2>
</sect1>

<sect1 id="racing-timers">
<title>Racing Timers: A Kernel Pastime</title>

<para>
Timers can produce their own special problems with races.  Consider a
collection of objects (list, hash, etc) where each object has a timer
which is due to destroy it.
</para>

<para>
If you want to destroy the entire collection (say on module removal),
you might do the following:
</para>

<programlisting>
        /* THIS CODE BAD BAD BAD BAD: IF IT WAS ANY WORSE IT
           WOULD USE HUNGARIAN NOTATION */
        spin_lock_bh(&list_lock);

        while (list) {
                struct foo *next = list->next;
                del_timer(&list->timer);
                kfree(list);
                list = next;
        }

        spin_unlock_bh(&list_lock);
</programlisting>

<para>
Sooner or later, this will crash on SMP, because a timer can have just
gone off before the <function>spin_lock_bh()</function>, and it will
only get the lock after we <function>spin_unlock_bh()</function>, and
then try to free the element (which has already been freed!).
</para>

<para>
This can be avoided by checking the result of
<function>del_timer()</function>: if it returns
<returnvalue>1</returnvalue>, the timer has been deleted.  If
<returnvalue>0</returnvalue>, it means (in this case) that it is
currently running, so we can do:
</para>

<programlisting>
        retry:
                spin_lock_bh(&list_lock);

                while (list) {
                        struct foo *next = list->next;
                        if (!del_timer(&list->timer)) {
                                /* Give timer a chance to delete this */
                                spin_unlock_bh(&list_lock);
                                goto retry;
                        }
                        kfree(list);
                        list = next;
                }

                spin_unlock_bh(&list_lock);
</programlisting>

<para>
Another common problem is deleting timers which restart themselves (by
calling <function>add_timer()</function> at the end of their timer
function).  Because this is a fairly common case which is prone to
races, you should use <function>del_timer_sync()</function>
(<filename class="headerfile">include/linux/timer.h</filename>) to
handle this case.  It returns the number of times the timer had to be
deleted before we finally stopped it from adding itself back in.
</para>
</sect1>
</chapter>

<chapter id="Efficiency">
<title>Locking Speed</title>

<para>
There are three main things to worry about when considering the speed
of some code which does locking.  First is concurrency: how many
things are going to be waiting while someone else is holding a lock.
Second is the time taken to actually acquire and release an
uncontended lock.  Third is using fewer, or smarter locks.  I'm
assuming that the lock is used fairly often: otherwise, you wouldn't
be concerned about efficiency.
</para>

<para>
Concurrency depends on how long the lock is usually held: you should
hold the lock for as long as needed, but no longer.  In the cache
example, we always create the object without the lock held, and then
grab the lock only when we are ready to insert it in the list.
</para>

<para>
Acquisition times depend on how much damage the lock operations do to
the pipeline (pipeline stalls) and how likely it is that this CPU was
the last one to grab the lock (ie. is the lock cache-hot for this
CPU): on a machine with more CPUs, this likelihood drops fast.
Consider a 700MHz Intel Pentium III: an instruction takes about 0.7ns,
an atomic increment takes about 58ns, a lock which is cache-hot on
this CPU takes 160ns, and a cacheline transfer from another CPU takes
an additional 170 to 360ns.  (These figures are from Paul McKenney's
<ulink url="http://www.linuxjournal.com/article.php?sid=6993">Linux
Journal RCU article</ulink>.)
</para>

<para>
These two aims conflict: holding a lock for a short time might be done
by splitting locks into parts (such as in our final per-object-lock
example), but this increases the number of lock acquisitions, and the
results are often slower than having a single lock.  This is another
reason to advocate locking simplicity.
</para>

<para>
The third concern is addressed below: there are some methods to reduce
the amount of locking which needs to be done.
</para>

<sect1 id="efficiency-rwlocks">
<title>Read/Write Lock Variants</title>

<para>
Both spinlocks and semaphores have read/write variants:
<type>rwlock_t</type> and <structname>struct rw_semaphore</structname>.
These divide users into two classes: the readers and the writers.  If
you are only reading the data, you can get a read lock, but to write to
the data you need the write lock.  Many people can hold a read lock,
but a writer must be sole holder.
</para>

<para>
If your code divides neatly along reader/writer lines (as our cache
code does), and the lock is held by readers for significant lengths of
time, using these locks can help.  They are slightly slower than the
normal locks though, so in practice <type>rwlock_t</type> is not
usually worthwhile.
</para>
</sect1>

<sect1 id="efficiency-read-copy-update">
<title>Avoiding Locks: Read Copy Update</title>

<para>
There is a special method of read/write locking called Read Copy
Update.  Using RCU, the readers can avoid taking a lock altogether: as
we expect our cache to be read more often than updated (otherwise the
cache is a waste of time), it is a candidate for this optimization.
</para>

<para>
How do we get rid of read locks?  Getting rid of read locks means that
writers may be changing the list underneath the readers.  That is
actually quite simple: we can read a linked list while an element is
being added if the writer adds the element very carefully.  For
example, adding <symbol>new</symbol> to a single linked list called
<symbol>list</symbol>:
</para>

<programlisting>
        new->next = list->next;
        wmb();
        list->next = new;
</programlisting>

<para>
The <function>wmb()</function> is a write memory barrier.  It ensures
that the first operation (setting the new element's
<symbol>next</symbol> pointer) is complete and will be seen by all
CPUs before the second operation (putting the new element into the
list).  This is important, since modern compilers and modern CPUs can
both reorder instructions unless told otherwise: we want a reader to
either not see the new element at all, or see the new element with the
<symbol>next</symbol> pointer correctly pointing at the rest of the
list.
</para>

<para>
Fortunately, there is a function to do this for standard
<structname>struct list_head</structname> lists:
<function>list_add_rcu()</function>
(<filename>include/linux/list.h</filename>).
</para>

<para>
Removing an element from the list is even simpler: we replace the
pointer to the old element with a pointer to its successor, and
readers will either see it, or skip over it.
</para>

<programlisting>
        list->next = old->next;
</programlisting>

<para>
There is <function>list_del_rcu()</function>
(<filename>include/linux/list.h</filename>) which does this (the
normal version poisons the old object, which we don't want).
</para>

<para>
The reader must also be careful: some CPUs can look through the
<symbol>next</symbol> pointer to start reading the contents of the
next element early, but don't realize that the pre-fetched contents
are wrong when the <symbol>next</symbol> pointer changes underneath
them.  Once again, there is a
<function>list_for_each_entry_rcu()</function>
(<filename>include/linux/list.h</filename>) to help you.  Of course,
writers can just use <function>list_for_each_entry()</function>, since
there cannot be two simultaneous writers.
</para>

<para>
Our final dilemma is this: when can we actually destroy the removed
element?  Remember, a reader might be stepping through this element in
the list right now: if we free this element and the
<symbol>next</symbol> pointer changes, the reader will jump off into
garbage and crash.  We need to wait until we know that all the readers
who were traversing the list when we deleted the element are finished.
We use <function>call_rcu()</function> to register a callback which
will actually destroy the object once the readers are finished.
</para>

<para>
But how does Read Copy Update know when the readers are finished?  The
method is this: firstly, the readers always traverse the list inside
<function>rcu_read_lock()</function>/<function>rcu_read_unlock()</function>
pairs: these simply disable preemption so the reader won't go to sleep
while reading the list.
</para>

<para>
RCU then waits until every other CPU has slept at least once: since
readers cannot sleep, we know that any readers which were traversing
the list during the deletion are finished, and the callback is
triggered.  The real Read Copy Update code is a little more optimized
than this, but this is the fundamental idea.
</para>
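<para>
Putting the read side together: here is a sketch of what a lookup
might look like, assuming the <symbol>cache</symbol> list and object
layout from the earlier examples (the helper name is illustrative):
</para>

<programlisting>
/* A sketch of an RCU reader: no locks, no atomic operations.
   rcu_read_lock() just disables preemption around the traversal. */
static int cache_find_name(unsigned int id, char *name)
{
        struct object *i;
        int ret = -ENOENT;

        rcu_read_lock();
        list_for_each_entry_rcu(i, &cache, list) {
                if (i->id == id) {
                        /* Copy out before rcu_read_unlock(): once we
                           stop reading, a writer is free to destroy
                           the object. */
                        strcpy(name, i->name);
                        ret = 0;
                        break;
                }
        }
        rcu_read_unlock();
        return ret;
}
</programlisting>

<para>
Note that the data must be copied out (or otherwise used) before
<function>rcu_read_unlock()</function>.  The diff below shows the
earlier per-object-lock cache being converted to use RCU:
</para>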
<programlisting>
--- cache.c.perobjectlock	2003-12-11 17:15:03.000000000 +1100
+++ cache.c.rcupdate	2003-12-11 17:55:14.000000000 +1100
@@ -1,15 +1,18 @@
 #include <linux/list.h>
 #include <linux/slab.h>
 #include <linux/string.h>
+#include <linux/rcupdate.h>
 #include <asm/semaphore.h>
 #include <asm/errno.h>

 struct object {
-	/* These two protected by cache_lock. */
</programlisting>
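<para>
The deletion side of such a conversion uses
<function>call_rcu()</function>.  Here is a minimal sketch, assuming
the object is given a <structname>struct rcu_head</structname> member
(called <structfield>rcu</structfield> here):
</para>

<programlisting>
/* Runs once all readers which could see the object are finished. */
static void cache_delete_rcu(void *arg)
{
        struct object *obj = arg;
        kfree(obj);
}

/* Caller holds cache_lock: unlink now, free later. */
static void __cache_delete(struct object *obj)
{
        list_del_rcu(&obj->list);
        call_rcu(&obj->rcu, cache_delete_rcu, obj);
}
</programlisting>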