$PostgreSQL: pgsql/src/backend/storage/buffer/README,v 1.12 2007/05/30 20:11:58 tgl Exp $

Notes about shared buffer access rules
--------------------------------------

There are two separate access control mechanisms for shared disk buffers:
reference counts (a/k/a pin counts) and buffer content locks. (Actually,
there's a third level of access control: one must hold the appropriate kind
of lock on a relation before one can legally access any page belonging to
the relation. Relation-level locks are not discussed here.)

Pins: one must "hold a pin on" a buffer (increment its reference count)
before being allowed to do anything at all with it. An unpinned buffer is
subject to being reclaimed and reused for a different page at any instant,
so touching it is unsafe. Normally a pin is acquired via ReadBuffer and
released via ReleaseBuffer. It is OK and indeed common for a single
backend to pin a page more than once concurrently; the buffer manager
handles this efficiently. It is considered OK to hold a pin for long
intervals --- for example, sequential scans hold a pin on the current page
until done processing all the tuples on the page, which could be quite a
while if the scan is the outer scan of a join. Similarly, btree index
scans hold a pin on the current index page. This is OK because normal
operations never wait for a page's pin count to drop to zero. (Anything
that might need to do such a wait is instead handled by waiting to obtain
the relation-level lock, which is why you'd better hold one first.) Pins
may not be held across transaction boundaries, however.

Buffer content locks: there are two kinds of buffer lock, shared and
exclusive, which act just as you'd expect: multiple backends can hold
shared locks on the same buffer, but an exclusive lock prevents anyone
else from holding either shared or exclusive lock. (These can
alternatively be called READ and WRITE locks.) These locks are intended
to be short-term: they should not be held for long.
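As a toy illustration of the pin rule above (this is not PostgreSQL's
actual code; the struct and function names here are invented for the
sketch, and the real BufferDesc carries far more state):

```c
/* Hypothetical, simplified model of a buffer header's pin count. */
#include <assert.h>
#include <stdbool.h>

typedef struct BufferDescModel
{
    int refcount;   /* total pins held on this buffer, across all backends */
    int page_id;    /* which disk page the buffer currently holds */
} BufferDescModel;

/* Pinning increments the reference count; a pinned buffer cannot be
 * reclaimed and reused for a different page. */
static void
pin_buffer(BufferDescModel *buf)
{
    buf->refcount++;
}

static void
unpin_buffer(BufferDescModel *buf)
{
    assert(buf->refcount > 0);
    buf->refcount--;
}

/* Only a completely unpinned buffer is a candidate for reclaiming. */
static bool
can_reclaim(const BufferDescModel *buf)
{
    return buf->refcount == 0;
}
```

Note that the model also shows why a single backend may pin the same page
twice: the count simply goes to two, and the buffer stays unreclaimable
until both pins are released.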
Buffer locks are acquired and released by LockBuffer(). It will *not* work
for a single backend to try to acquire multiple locks on the same buffer.
One must pin a buffer before trying to lock it.

Buffer access rules:

1. To scan a page for tuples, one must hold a pin and either shared or
exclusive content lock. To examine the commit status (XIDs and status bits)
of a tuple in a shared buffer, one must likewise hold a pin and either
shared or exclusive lock.

2. Once one has determined that a tuple is interesting (visible to the
current transaction) one may drop the content lock, yet continue to access
the tuple's data for as long as one holds the buffer pin. This is what is
typically done by heap scans, since the tuple returned by heap_fetch
contains a pointer to tuple data in the shared buffer. Therefore the
tuple cannot go away while the pin is held (see rule #5). Its state could
change, but that is assumed not to matter after the initial determination
of visibility is made.

3. To add a tuple or change the xmin/xmax fields of an existing tuple,
one must hold a pin and an exclusive content lock on the containing buffer.
This ensures that no one else might see a partially-updated state of the
tuple while they are doing visibility checks.

4. It is considered OK to update tuple commit status bits (ie, OR the
values HEAP_XMIN_COMMITTED, HEAP_XMIN_INVALID, HEAP_XMAX_COMMITTED, or
HEAP_XMAX_INVALID into t_infomask) while holding only a shared lock and
pin on a buffer. This is OK because another backend looking at the tuple
at about the same time would OR the same bits into the field, so there
is little or no risk of conflicting update; what's more, if there did
manage to be a conflict it would merely mean that one bit-update would
be lost and need to be done again later. These four bits are only hints
(they cache the results of transaction status lookups in pg_clog), so no
great harm is done if they get reset to zero by conflicting updates.

5. To physically remove a tuple or compact free space on a page, one
must hold a pin and an exclusive lock, *and* observe while holding the
exclusive lock that the buffer's shared reference count is one (ie,
no other backend holds a pin). If these conditions are met then no other
backend can perform a page scan until the exclusive lock is dropped, and
no other backend can be holding a reference to an existing tuple that it
might expect to examine again. Note that another backend might pin the
buffer (increment the refcount) while one is performing the cleanup, but
it won't be able to actually examine the page until it acquires shared
or exclusive content lock.

Rule #5 only affects VACUUM operations. Obtaining the necessary lock is
done by the bufmgr routine LockBufferForCleanup(). It first gets an
exclusive lock and then checks to see if the shared pin count is
currently 1. If not, it releases the exclusive lock (but not the
caller's pin) and waits until signaled by another backend, whereupon it
tries again. The signal will occur when UnpinBuffer decrements the shared
pin count to 1. As indicated above, this operation might have to wait a
good while before it acquires lock, but that shouldn't matter much for
concurrent VACUUM. The current implementation only supports a single
waiter for pin-count-1 on any particular shared buffer. This is enough
for VACUUM's use, since we don't allow multiple VACUUMs concurrently on a
single relation anyway.

Buffer manager's internal locking
---------------------------------

Before PostgreSQL 8.1, all operations of the shared buffer manager itself
were protected by a single system-wide lock, the BufMgrLock, which
unsurprisingly proved to be a source of contention. The new locking scheme
avoids grabbing system-wide exclusive locks in common code paths.
It works like this:

* There is a system-wide LWLock, the BufMappingLock, that notionally
protects the mapping from buffer tags (page identifiers) to buffers.
(Physically, it can be thought of as protecting the hash table maintained
by buf_table.c.) To look up whether a buffer exists for a tag, it is
sufficient to obtain share lock on the BufMappingLock. Note that one
must pin the found buffer, if any, before releasing the BufMappingLock.
To alter the page assignment of any buffer, one must hold exclusive lock
on the BufMappingLock. This lock must be held across adjusting the buffer's
header fields and changing the buf_table hash table. The only common
operation that needs exclusive lock is reading in a page that was not
in shared buffers already, which will require at least a kernel call
and usually a wait for I/O, so it will be slow anyway.

* As of PG 8.2, the BufMappingLock has been split into NUM_BUFFER_PARTITIONS
separate locks, each guarding a portion of the buffer tag space. This allows
further reduction of contention in the normal code paths. The partition
that a particular buffer tag belongs to is determined from the low-order
bits of the tag's hash value. The rules stated above apply to each partition
independently. If it is necessary to lock more than one partition at a time,
they must be locked in partition-number order to avoid risk of deadlock.

* A separate system-wide LWLock, the BufFreelistLock, provides mutual
exclusion for operations that access the buffer free list or select
buffers for replacement. This is always taken in exclusive mode since
there are no read-only operations on those data structures. The buffer
management policy is designed so that BufFreelistLock need not be taken
except in paths that will require I/O, and thus will be slow anyway.
(Details appear below.)
It is never necessary to hold the BufMappingLock and the BufFreelistLock
at the same time.

* Each buffer header contains a spinlock that must be taken when examining
or changing fields of that buffer header. This allows operations such as
ReleaseBuffer to make local state changes without taking any system-wide
lock. We use a spinlock, not an LWLock, since there are no cases where
the lock needs to be held for more than a few instructions.

Note that a buffer header's spinlock does not control access to the data
held within the buffer. Each buffer header also contains an LWLock, the
"buffer content lock", that *does* represent the right to access the data
in the buffer. It is used per the rules above.

There is yet another set of per-buffer LWLocks, the io_in_progress locks,
that are used to wait for I/O on a buffer to complete. The process doing
a read or write takes exclusive lock for the duration, and processes that
need to wait for completion try to take shared locks (which they release
immediately upon obtaining). XXX on systems where an LWLock represents
nontrivial resources, it's fairly annoying to need so many locks. Possibly
we could use per-backend LWLocks instead (a buffer header would then contain
a field to show which backend is doing its I/O).

Normal buffer replacement strategy
----------------------------------

There is a "free list" of buffers that are prime candidates for replacement.
In particular, buffers that are completely free (contain no valid page) are
always in this list. We could also throw buffers into this list if we
consider their pages unlikely to be needed soon; however, the current
algorithm never does that. The list is singly-linked using fields in the
buffer headers; we maintain head and tail pointers in global variables.
(Note: although the list links are in the buffer headers, they are
considered to be protected by the BufFreelistLock, not the buffer-header
spinlocks.)
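The free-list layout just described (links stored in the buffer headers
themselves, head and tail pointers in globals) can be sketched as below.
All names here are invented for illustration, and in the real code every
one of these operations would be performed while holding BufFreelistLock:

```c
#include <assert.h>

#define NBUFFERS_MODEL 4
#define FREENEXT_END   (-1)     /* marks end of list / empty list */

/* The link field lives in each buffer header; head and tail are globals.
 * Conceptually, all of this is protected by BufFreelistLock, not by the
 * per-buffer header spinlocks. */
static int free_next[NBUFFERS_MODEL];
static int free_head = FREENEXT_END;
static int free_tail = FREENEXT_END;

static void
freelist_push_tail(int buf)
{
    free_next[buf] = FREENEXT_END;
    if (free_tail == FREENEXT_END)
        free_head = buf;        /* list was empty */
    else
        free_next[free_tail] = buf;
    free_tail = buf;
}

static int
freelist_pop_head(void)
{
    int buf = free_head;

    if (buf == FREENEXT_END)
        return FREENEXT_END;    /* free list is empty */
    free_head = free_next[buf];
    if (free_head == FREENEXT_END)
        free_tail = FREENEXT_END;
    return buf;
}
```

Keeping the links inside the headers means no separate list nodes need
allocating; the cost is that the list fields must follow the freelist's
locking rule rather than the header spinlock's, as the note above says.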
To choose a victim buffer to recycle when there are no free buffers
available, we use a simple clock-sweep algorithm, which avoids the
need to take system-wide locks during common operations. It works like
this:

Each buffer header contains a usage counter, which is incremented (up to a
small limit value) whenever the buffer is unpinned. (This requires only the
buffer header spinlock, which would have to be taken anyway to decrement the
buffer reference count, so it's nearly free.)

The "clock hand" is a buffer index, NextVictimBuffer, that moves circularly
through all the available buffers. NextVictimBuffer is protected by the
BufFreelistLock.

The algorithm for a process that needs to obtain a victim buffer is:

1. Obtain BufFreelistLock.

2. If buffer free list is nonempty, remove its head buffer. If the buffer
is pinned or has a nonzero usage count, it cannot be used; ignore it and
return to the start of step 2. Otherwise, pin the buffer, release
BufFreelistLock, and return the buffer.

3. Otherwise, select the buffer pointed to by NextVictimBuffer, and
circularly advance NextVictimBuffer for next time.

4. If the selected buffer is pinned or has a nonzero usage count, it cannot
be used. Decrement its usage count (if nonzero) and return to step 3 to
examine the next buffer.

5. Pin the selected buffer, release BufFreelistLock, and return the buffer.

(Note that if the selected buffer is dirty, we will have to write it out
before we can recycle it; if someone else pins the buffer meanwhile we will
have to give up and try another buffer.
This however is not a concern of the basic select-a-victim-buffer
algorithm.)

Buffer ring replacement strategy
--------------------------------

When running a query that needs to access a large number of pages just once,
such as VACUUM or a large sequential scan, a different strategy is used.
A page that has been touched only by such a scan is unlikely to be needed
again soon, so instead of running the normal clock sweep algorithm and
blowing out the entire buffer cache, a small ring of buffers is allocated
using the normal clock sweep algorithm and those buffers are reused for the
whole scan. This also implies that much of the write traffic caused by such
a statement will be done by the backend itself and not pushed off onto other
processes.

For sequential scans, a 256KB ring is used. That's small enough to fit in L2
cache, which makes transferring pages from OS cache to shared buffer cache
efficient. Even less would often be enough, but the ring must be big enough
to accommodate all pages in the scan that are pinned concurrently. 256KB
should also be enough to leave a small cache trail for other backends to
join in a synchronized seq scan. If a ring buffer is dirtied and its LSN
updated, we would normally have to write and flush WAL before we could
re-use the buffer; in this case we instead discard the buffer from the ring
and (later) choose a replacement using the normal clock-sweep algorithm.
Hence this strategy works best for scans that are read-only (or at worst
update hint bits). In a scan that modifies every page in the scan, like a
bulk UPDATE or DELETE, the buffers in the ring will always be dirtied and
the ring strategy effectively degrades to the normal strategy.

VACUUM uses a 256KB ring like sequential scans, but dirty pages are not
removed from the ring. Instead, WAL is flushed if needed to allow reuse of
the buffers.
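The ring-reuse decision described above (seqscans discard a
WAL-requiring buffer from the ring; VACUUM keeps it and pays for the
flush) can be sketched as follows. This is an invented, simplified
interface, not the bufmgr's actual one, and it assumes 8KB pages, so the
256KB ring holds 32 buffers:

```c
#include <assert.h>
#include <stdbool.h>

#define RING_BUFFERS_MODEL (256 * 1024 / 8192)  /* 256KB of 8KB pages = 32 */

typedef enum { RING_SEQSCAN, RING_VACUUM } RingKind;

/* Decide whether a ring buffer whose reuse would require a WAL flush is
 * kept (returning true, with the flush paid for) or discarded from the
 * ring (returning false; a replacement is later chosen via the normal
 * clock sweep). */
static bool
reuse_ring_buffer(RingKind kind, bool needs_wal_flush, bool *must_flush_wal)
{
    *must_flush_wal = false;
    if (!needs_wal_flush)
        return true;            /* clean or hint-bit-only page: just reuse */
    if (kind == RING_VACUUM)
    {
        *must_flush_wal = true; /* VACUUM flushes WAL to keep its ring */
        return true;
    }
    return false;               /* seqscan: drop the buffer from the ring */
}
```

In the bulk-UPDATE case mentioned above, every call would take the
discard path, which is how the ring strategy degrades back to the normal
replacement strategy.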
Before introducing the buffer ring strategy in 8.3, VACUUM's buffers were
sent to the freelist, which was effectively a buffer ring of 1 buffer,
resulting in excessive WAL flushing. Allowing VACUUM to update 256KB
between WAL flushes should be more efficient.

Background writer's processing
------------------------------

The background writer is designed to write out pages that are likely to be
recycled soon, thereby offloading the writing work from active backends.
To do this, it scans forward circularly from the current position of
NextVictimBuffer (which it does not change!), looking for buffers that are
dirty and not pinned nor marked with a positive usage count. It pins,
writes, and releases any such buffer.

If we can assume that reading NextVictimBuffer is an atomic action, then
the writer doesn't even need to take the BufFreelistLock in order to look
for buffers to write; it needs only to spinlock each buffer header for long
enough to check the dirtybit. Even without that assumption, the writer
only needs to take the lock long enough to read the variable value, not
while scanning the buffers. (This is a very substantial improvement in
the contention cost of the writer compared to PG 8.0.)

During a checkpoint, the writer's strategy must be to write every dirty
buffer (pinned or not!). We may as well make it start this scan from
NextVictimBuffer, however, so that the first-to-be-written pages are the
ones that backends might otherwise have to write for themselves soon.

The background writer takes shared content lock on a buffer while writing it
out (and anyone else who flushes buffer contents to disk must do so too).
This ensures that the page image transferred to disk is reasonably consistent.
We might miss a hint-bit update or two but that isn't a problem, for the same
reasons mentioned under buffer access rules.
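The background writer's selection rule can be sketched as below. The
struct and function names are invented for this sketch, and the real
writer inspects each header under its spinlock rather than reading a
plain array:

```c
#include <assert.h>
#include <stdbool.h>

typedef struct
{
    bool dirty;
    int  refcount;      /* pinned if > 0 */
    int  usage_count;
} BufModel;

/* Scan forward circularly from the clock hand (NextVictimBuffer), which
 * the writer reads but never advances, collecting buffers that are dirty,
 * unpinned, and have zero usage count -- exactly the ones the clock sweep
 * is likely to recycle soon. */
static int
bgwriter_candidates(const BufModel bufs[], int nbufs, int next_victim,
                    int out[], int max_out)
{
    int found = 0;

    for (int i = 0; i < nbufs && found < max_out; i++)
    {
        int idx = (next_victim + i) % nbufs;

        if (bufs[idx].dirty && bufs[idx].refcount == 0 &&
            bufs[idx].usage_count == 0)
            out[found++] = idx;
    }
    return found;
}
```

Starting at the hand rather than at buffer zero is what makes the
writer's work line up with the pages backends would otherwise soon have
to write for themselves.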