Source: http://www.cs.wisc.edu/~cs537-1/memory.html

Combining a block of size 2^n with its buddy creates a properly aligned block of size 2^(n+1). For example, blocks of size 4 could start at addresses 0, 4, 8, 12, 16, 20, etc. The blocks at 0 and 4 are buddies; combining them gives a block at 0 of length 8. Similarly, 8 and 12 are buddies, 16 and 20 are buddies, and so on. The blocks at 4 and 8 are not buddies even though they are neighbors: combining them would give a block of size 8 starting at address 4, which is not a multiple of 8. The address of a block's buddy can be easily calculated by flipping the nth bit from the right in the binary representation of the block's address. For example, the pairs of buddies (0,4), (8,12), (16,20) in binary are (00000,00100), (01000,01100), (10000,10100). In each case, the two addresses in the pair differ only in the third bit from the right. In short, you can find the address of the buddy of a block by taking the exclusive or of the address of the block with its size.

To allocate a block of a given size, first round the size up to the next power of two and look on the list of blocks of that size. If that list is empty, split a block from the next higher list (if that list is empty, first add two blocks to it by splitting a block from the next higher list, and so on). When deallocating a block, first check to see whether the block's buddy is free. If so, combine the block with its buddy and add the resulting block to the next higher free list. As with allocations, deallocations can cascade to higher and higher lists.
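The exclusive-or trick and the cascading coalesce are easy to express in code. The following is a minimal sketch in C, not part of the original notes: block addresses are treated as byte offsets from the start of the managed region, and the is_free() callback stands in for whatever bookkeeping (a bitmap or per-size free lists) a real buddy allocator would keep.

    #include <stdint.h>

    /* Sketch only: all names and the is_free() callback are assumptions.
     * An address is a byte offset from the start of the managed region,
     * so a block of size 2^order always starts at a multiple of its size. */

    /* The buddy of a block is found by flipping the bit that corresponds
     * to the block's size, i.e. by XOR-ing the address with the size. */
    static uint32_t buddy_of(uint32_t addr, unsigned order) {
        return addr ^ (1u << order);   /* e.g. buddy_of(8, 2) == 12, buddy_of(4, 2) == 0 */
    }

    /* When a block of size 2^order is freed, combine it with its buddy as
     * long as the buddy is itself free; each combination doubles the block
     * and may enable another combination one level up. */
    static void coalesce_on_free(uint32_t addr, unsigned order, unsigned max_order,
                                 int (*is_free)(uint32_t addr, unsigned order),
                                 uint32_t *out_addr, unsigned *out_order) {
        while (order < max_order && is_free(buddy_of(addr, order), order)) {
            if (buddy_of(addr, order) < addr)
                addr = buddy_of(addr, order);   /* merged block starts at the lower buddy */
            order++;                            /* merged block is twice as large */
        }
        *out_addr  = addr;    /* caller puts this block on free list number `order` */
        *out_order = order;
    }

Keeping the lower of the two addresses ensures that the merged block again starts at a multiple of its (doubled) size, so the same test can be applied at the next level up.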
Compaction and Garbage Collection

What do you do when you run out of memory? Any of these methods can fail because all the memory is allocated, or because there is too much fragmentation. Malloc, which is used to allocate space in the data segment of a Unix process, just gives up and calls the (expensive) OS service to expand the data segment. A memory manager allocating real physical memory doesn't have that luxury: the allocation attempt simply fails. There are two ways of delaying this catastrophe, compaction and garbage collection.

Compaction attacks the problem of fragmentation by moving all the allocated blocks to one end of memory, thus combining all the holes. Aside from the obvious cost of all that copying, there is an important limitation to compaction: any pointers to a block need to be updated when the block is moved. Unless it is possible to find all such pointers, compaction is not possible. Pointers can be stored in the allocated blocks themselves, as well as in other places in the client of the memory manager. In some situations, pointers can point not only to the start of blocks but also into their bodies. For example, if a block contains executable code, a branch instruction might be a pointer to another location in the same block. Compaction is performed in three phases. First, the new location of each block is calculated to determine the distance the block will be moved. Then each pointer is updated by adding to it the amount that the block it is pointing (in)to will be moved. Finally, the data is actually moved. There are various clever tricks possible to combine these operations.
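The three phases can be made concrete with a small simulation. The sketch below is not from the notes: the heap array, the block table, and the external list of pointer slots (locations known to hold heap offsets) are all assumptions made so that "find all pointers" is trivially possible.

    #include <stddef.h>
    #include <string.h>

    /* Toy model: `heap` is the raw memory, `blocks` describes every block
     * in address order, and `ptr_slots` lists every word known to hold a
     * pointer (represented here as an offset into the heap). */
    struct block {
        size_t old_off, size;   /* current position and length         */
        size_t new_off;         /* computed in phase 1                 */
        int    in_use;          /* free blocks are simply squeezed out */
    };

    void compact(unsigned char *heap,
                 struct block *blocks, size_t nblocks,
                 size_t **ptr_slots, size_t nslots) {
        /* Phase 1: compute each block's new location, i.e. how far it slides. */
        size_t next = 0;
        for (size_t i = 0; i < nblocks; i++) {
            if (!blocks[i].in_use) continue;
            blocks[i].new_off = next;
            next += blocks[i].size;
        }

        /* Phase 2: update every pointer by the distance the block it points
         * (in)to will move; a pointer into the middle of a block moves by
         * exactly as much as the block itself. */
        for (size_t s = 0; s < nslots; s++) {
            size_t p = *ptr_slots[s];
            for (size_t i = 0; i < nblocks; i++) {
                if (blocks[i].in_use &&
                    p >= blocks[i].old_off && p < blocks[i].old_off + blocks[i].size) {
                    *ptr_slots[s] = p - blocks[i].old_off + blocks[i].new_off;
                    break;
                }
            }
        }

        /* Phase 3: actually move the data.  Blocks only slide toward lower
         * addresses and are processed in address order, so no block is
         * overwritten before it has been moved; memmove tolerates overlap. */
        for (size_t i = 0; i < nblocks; i++) {
            if (!blocks[i].in_use) continue;
            memmove(heap + blocks[i].new_off, heap + blocks[i].old_off, blocks[i].size);
            blocks[i].old_off = blocks[i].new_off;
        }
    }

A real compactor would avoid the quadratic pointer lookup (for example with a table indexed by address range) and, as the notes mention, often folds the phases together.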
Garbage collection finds blocks of memory that are inaccessible and returns them to the free list. As with compaction, garbage collection normally assumes we can find all pointers to blocks, both within the blocks themselves and "from the outside." If that is not possible, we can still do "conservative" garbage collection, in which every word in memory that contains a value that appears to be a pointer is treated as a pointer. The conservative approach may fail to collect blocks that are garbage, but it will never mistakenly collect accessible blocks. There are three main approaches to garbage collection: reference counting, mark-and-sweep, and generational algorithms.

Reference counting keeps in each block a count of the number of pointers to the block. When the count drops to zero, the block may be freed. This approach is only practical in situations where there is some "higher level" software to keep track of the counts (it's much too hard to do by hand), and even then it will not detect cyclic structures of garbage: consider a cycle of blocks, each of which is pointed to only by its predecessor in the cycle. Each block has a reference count of 1, but the entire cycle is garbage.

Mark-and-sweep works in two passes. First we mark all non-garbage blocks by doing a depth-first search starting from each pointer "from outside":

    void mark(Address b) {
        mark block b;
        for (each pointer p in block b) {
            if (the block pointed to by p is not marked)
                mark(p);
        }
    }

The second pass sweeps through all blocks and returns the unmarked ones to the free list. The sweep pass usually also does compaction, as described above.
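To complement the mark() pseudocode, here is a sketch of the sweep pass in C. The header layout (a link through all blocks in address order, a separate free-list link, and a mark bit) is an assumption made for the example, not the layout used in the notes.

    #include <stddef.h>

    /* Assumed layout: every block, allocated or free, carries a header
     * linking it to the next block in address order, plus the mark bit
     * that the mark pass sets.  All names are invented for this sketch. */
    struct header {
        struct header *next_block;   /* every block, in address order     */
        struct header *next_free;    /* used only while on the free list  */
        int marked;                  /* set by mark(), cleared by sweep() */
    };

    static struct header *free_list; /* toy free list for the example */

    static void add_to_free_list(struct header *blk) {
        /* A real allocator would also coalesce neighbors or compact here. */
        blk->next_free = free_list;
        free_list = blk;
    }

    /* The sweep pass: every block the mark pass did not reach is garbage. */
    void sweep(struct header *first_block) {
        for (struct header *b = first_block; b != NULL; b = b->next_block) {
            if (b->marked)
                b->marked = 0;        /* reachable: reset for the next collection */
            else
                add_to_free_list(b);  /* unreachable: reclaim it */
        }
    }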
There are two problems with mark-and-sweep. First, the amount of work in the mark pass is proportional to the amount of non-garbage. Thus if memory is nearly full, it will do a lot of work with very little payoff. Second, the mark phase does a lot of jumping around in memory, which is bad for virtual memory systems, as we will soon see.

The third approach to garbage collection is called generational collection. Memory is divided into spaces. When a space is chosen for garbage collection, all subsequent references to objects in that space cause the object to be copied to a new space. After a while, the old space either becomes empty and can be returned to the free list all at once, or at least it becomes so sparse that a mark-and-sweep garbage collection on it will be cheap. As an empirical fact, objects tend to be either short-lived or long-lived; in other words, an object that has survived for a while is likely to live a lot longer. By carefully choosing where to move objects when they are referenced, we can arrange to have some spaces filled only with long-lived objects, which are very unlikely to become garbage. We garbage-collect these spaces seldom, if ever.

Swapping

When all else fails, allocate simply fails. In the case of an application program, it may be adequate to simply print an error message and exit. An OS must be able to recover more gracefully.

We motivated memory management by the desire to have many processes in memory at once. In a batch system, if the OS cannot allocate memory to start a new job, it can "recover" by simply delaying starting the job. If there is a queue of jobs waiting to be created, the OS might want to go down the list, looking for a smaller job that can be created right away. This approach maximizes utilization of memory, but can starve large jobs. The situation is analogous to short-term CPU scheduling, in which SJF gives optimal CPU utilization but can starve long bursts. The same trick works here: aging. As a job waits longer and longer, increase its priority, until its priority is so high that the OS refuses to skip over it looking for a more recently arrived but smaller job.

An alternative way of avoiding starvation is to use a memory-allocation scheme with fixed partitions (holes are not split or combined). Assuming no job is bigger than the biggest partition, there will be no starvation, provided that each time a partition is freed, we start the first job in line that is smaller than that partition. However, we have another choice analogous to the difference between first-fit and best-fit. Of course we want to use the "best" hole for each job (the smallest free partition that is at least as big as the job), but suppose the next job in line is small and all the small partitions are currently in use. We might want to delay starting that job and look through the arrival queue for a job that makes better use of the partitions currently available. This policy re-introduces the possibility of starvation, which we can combat by aging, as above.

If a disk is available, we can also swap blocked jobs out to disk. When a job finishes, we first swap back jobs from disk before allowing new jobs to start. When a job is blocked (either because it wants to do I/O or because our short-term scheduling algorithm says to switch to another job), we have a choice of leaving it in memory or swapping it out. One way of looking at this scheme is that it increases the multiprogramming level (the number of jobs "in memory") at the cost of making it (much) more expensive to switch jobs. A variant of the MLFQ (multi-level feedback queues) CPU scheduling algorithm is particularly attractive for this situation. The queues are numbered from 0 up to some maximum. When a job becomes ready, it enters queue zero. The CPU scheduler always runs a job from the lowest-numbered non-empty queue (i.e., the priority is the negative of the queue number). It runs a job from queue i for a maximum of 2^i quanta. If the job does not block or complete within that time limit, it is added to the next higher queue. This algorithm behaves like RR with short quanta in that short bursts get high priority, but does not incur the overhead of frequent swaps between jobs with long bursts. The number of swaps is limited to the logarithm of the burst size.

solomon@cs.wisc.edu
Thu Oct 31 15:38:54 CST 1996
Copyright © 1996 by Marvin Solomon. All rights reserved.
