📄 016_mm_page_alloc_c.html
} @page { @top { content: flow(header); } @bottom { content: flow(footer); } } /* end default print css */ /* custom css *//* end custom css */ /* ui edited css */ body { font-family: Verdana; font-size: 10.0pt; line-height: normal; background-color: #ffffff; } .documentBG { background-color: #ffffff; } /* end ui edited css */</style> </head> <body revision="dcbsxfpf_57cfvdwzgr:5"> <table align=center cellpadding=0 cellspacing=0 height=5716 width=768>
<tbody>
<tr>
<td height=5716 valign=top width=100%>
<pre>2006-6-1 <br>mm/page_alloc.c<br> Taken as a whole, this module is the heart of the buddy algorithm. Apart from the few<br>core buddy functions, everything else is just a thin wrapper.<br><br>I) Globals, overview<br><br>
 Let us read from top to bottom. First come a few variable declarations; these variables<br>are key components of the LRU lists:<br><br>
int nr_swap_pages; /* number of available (free) swap pages */<br>int nr_active_pages;/* number of RAM pages on active_list */<br>int nr_inactive_dirty_pages; /* number of RAM pages on inactive_dirty_list */<br>pg_data_t *pgdat_list; /* the node list */<br><br>
struct list_head active_list; /* pages that are accessed frequently */<br>struct list_head inactive_dirty_list; /* modified pages that look unlikely to be accessed<br> soon; usually already unmapped from the process page tables */<br> <br>
 The LRU lists were analyzed together with mm/filemap.c, and mm/memory.c touched on them<br>as well, mainly on restoring page mappings from the LRU lists.<br> Here, once more, is a diagram of the LRU queues and the roles of the key functions that<br>operate on them:<br><br>
 mm->rss *********************************<br> |swap_out->try_to_swap_out<br>active_list->******************************* |<br> /\ | refill_inactive_scan <br> |page_launder \ / \ /<br>inactive_dirty_list->*****************************************************<br> | page_launder <br> \ / (write back to disk if necessary)<br>zone_t->inactive_clean_pages->************* --- reclaim_page->used directly<br> | kreclaimd->reclaim_page<br> \ /<br>zone_t->free_area(buddy)->***********************<br><br>
 Apart from defining the LRU queues there is nothing else LRU-related here, so this<br>brief mention will do.<br><br> Next comes a group of zone parameters (nothing needs explaining):<br><br>
/* zone parameters */<br>static char *zone_names[MAX_NR_ZONES] = { "DMA", "Normal", "HighMem" };<br>static int zone_balance_ratio[MAX_NR_ZONES] = { 32, 128, 128, };<br>static int zone_balance_min[MAX_NR_ZONES] = { 10 , 10, 10, };<br>static int zone_balance_max[MAX_NR_ZONES] = { 255 , 255, 255, };<br><br><br><br><br>
II) The buddy algorithm core<br> At last we reach the most important part, the buddy system. To make clear how the<br>algorithm actually works, we first introduce a few concepts:<br><br>
1) page group: <br> Suppose order is 2; then the pages with page_idx 0-3 form one group and 4-7 another.<br>The lowest-addressed page of a group links the group into the area's list, and the other<br>pages of the group never appear on the list of any area.<br><br>
2) group pair (buddy groups):<br> An area holds two important data structures:<br> (1) list : the list of the pages, or page groups, belonging to this order (area)<br> (2) *map : each bit stands for one pair of related groups (buddies). A related pair<br> is two adjacent page groups that can be merged into one group of the next<br> higher order.<br> <br>
 We defer the exact meaning of a pair's bit in map. For now: when freeing, 0 means the<br>other group of the pair is still in use and no merge is possible, while 1 means the two<br>can be merged. The initial value is 0, and the corresponding lists start out empty.<br> <br> <br> <br> 
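The pair-bit rule just stated can be tried out in a small user-space sketch (the names here are hypothetical, not kernel code): the bit at index page_idx >> (1 + order) is toggled on every allocation and free of either buddy, so its old value at free time tells us whether the buddy is still busy (0) or already free (1):

```c
#include <assert.h>

#define MAX_ORDER_SKETCH 4

/* one bitmap word per order is plenty for this toy example */
static unsigned long pair_map[MAX_ORDER_SKETCH];

/* toggle the pair bit for page_idx at this order and return its previous
 * value -- the same effect test_and_change_bit() has in __free_pages_ok() */
static int toggle_pair_bit(int order, unsigned long page_idx)
{
    unsigned long index = page_idx >> (1 + order);
    int old = (int)((pair_map[order] >> index) & 1UL);

    pair_map[order] ^= 1UL << index;
    return old;
}

/* freeing a group: previous bit 0 -> buddy still allocated, no merge;
 * previous bit 1 -> buddy already free, the pair can merge upward */
static int free_can_merge(int order, unsigned long page_idx)
{
    return toggle_pair_bit(order, page_idx);
}

/* allocating a group toggles the same bit */
static void mark_allocated(int order, unsigned long page_idx)
{
    (void)toggle_pair_bit(order, page_idx);
}
```

Starting from "both buddies allocated, bit 0", freeing one group returns 0 (no merge) and leaves the bit at 1; freeing the partner then returns 1 (merge), which is exactly the XOR-of-states reading developed below.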
 The meaning of the bits in map is a great help in understanding the buddy algorithm;<br>unfortunately I never managed to pin it down completely before. Let's hope for better<br>luck this time.<br><br> <br>
 Watch how the bits in map change during initialization:<br>1) Initially 0 - does that also mean the paired groups cannot be merged? (Clearly not.)<br>The lists are empty too. Memory initialization calls the free routine on every page,<br>queueing them on area 0's list and setting the corresponding bits to 1.<br>2) Once the partner page is freed as well, the bit flips back to 0; the group that was<br>freed first is taken off the list and the two go into area 1 together. (Freeing some<br>page or page group twice would cause a serious error.)<br><br>
3) On allocation, a group is dequeued from area X's list and the map bit goes from 0 to 1;<br>if the just-allocated group is freed right away, 1 means a merge is possible. If the other<br>group gets allocated too, the bit flips back to 0, meaning no merge at free time. Neither<br>group can be handed out again at this point, because both have left the list.<br><br>
 So the order of operations, the bitmap and the list cooperate to implement the buddy<br>bookkeeping.<br> <br>
 Thinking it through carefully, the meaning of a bit in map is really:<br> 0: the two buddy groups are in the same state<br> 1: the two buddy groups are in different states<br> (god, it's an XOR)<br> <br>
 Hence the bit is flipped twice over an allocation and twice over a free. Rereading the<br>analysis above:<br>1) Initially 0: both pages are in use. When one of them is freed the bit reads 0, meaning<br>the states are the same, i.e. both in use (and we are freeing one right now), so after our<br>free no merge is possible. Completing the free flips the bit to 1, recording that the two<br>states now differ.<br><br>
2) When the partner is freed in turn, it finds the bit at 1, meaning the states differ.<br>Since this one is still in use, the other must already be free, so after this free the<br>pair can be merged. That is why, at free time, a bit of 1 means the pair can be merged.<br><br>3) Work this one out yourself.<br><br><br> <br> <br>
 One of the buddy algorithm's core functions, __free_pages_ok: free a page group, use<br>the bitmap to judge the state of the buddy page group, and merge buddy page groups up<br>into higher orders.<br>
/*<br> * Buddy system. Hairy. You really aren't expected to understand this<br> * (the original author really doesn't expect you to understand this code)<br> * Hint: -mask = 1+~mask<br> */<br>static void FASTCALL(__free_pages_ok (struct page *page, unsigned long order));<br>static void __free_pages_ok (struct page *page, unsigned long order)<br>{<br> unsigned long index, page_idx, mask, flags;<br> free_area_t *area;<br> struct page *base;<br> zone_t *zone;<br><br>
 //the page being freed must have no remaining users at all<br> if (page->buffers)<br> BUG();<br> ..........//omitted<br> <br>
 //clear the referenced and dirty bits, reset the page to its starting age<br> page->flags &= ~((1<<PG_referenced) | (1<<PG_dirty));<br> page->age = PAGE_AGE_START;<br> <br> zone = page->zone;<br><br>
 mask = (~0UL) << order; // like: 111111110000, the mask for this order<br> base = mem_map + zone->offset;<br> page_idx = page - base; <br> if (page_idx & ~mask) //~mask: 0000000001111,<br> BUG();// page_idx must be a multiple of 2^order, xxxxxxxx0000<br><br>
 index = page_idx >> (1 + order); //index is the bitmap index of the pair that<br> //page_idx belongs to, hence idx>>order>>1<br> <br> area = zone->free_area + order;<br><br> spin_lock_irqsave(&zone->lock, flags);<br><br> zone->free_pages -= mask; //-mask :00010000, i.e. 2^order<br><br>
 while (mask + (1 << (MAX_ORDER-1))) {// 11xxxx00+001000000=000xxx00<br> struct page *buddy1, *buddy2;<br><br> if (area >= zone->free_area + MAX_ORDER)<br> BUG();<br> if (!test_and_change_bit(index, area->map))<br> //at free time 0 means the other group is still in use; the bit is flipped either way<br> /*<br> * the buddy page is still allocated.<br> */<br> break;<br><br>
 /*<br> * Move the buddy up one level.<br> */<br> buddy1 = base + (page_idx ^ -mask);//-mask:00010000, the XOR picks out the other group<br> buddy2 = base + page_idx;<br> if (BAD_RANGE(zone,buddy1))<br> BUG();<br> if (BAD_RANGE(zone,buddy2))<br> BUG();<br><br>
 memlist_del(&buddy1->list);<br> mask <<= 1; //11xxxx00->11xxx000, one order larger<br> area++;<br> index >>= 1; <br> page_idx &= mask;//the page index standing for the pair at the new order<br> //(clears one bit, i.e. keeps the group with the smaller idx)<br> }<br> memlist_add_head(&(base + page_idx)->list, &area->free_list);<br><br> spin_unlock_irqrestore(&zone->lock, flags);<br><br> ......<br>}<br><br><br><br>
 The buddy system's other core function, expand: when the requested order low has no<br>pages to hand out, allocate from the area of order high and split one of its page groups<br>step by step down through the lower-order areas, half at a time, until the area of order<br>low is reached. E.g. if we want an order-1 group and allocate starting from order 3:<br>8=4+2+(2), where (2) is the group we wanted.<br>
static inline struct page * expand (zone_t *zone, struct page *page,<br> unsigned long index, int low, int high, free_area_t * area)<br>{<br> //low : the target order high: allocate from the area of this order<br> //page : a page group of order high<br> //index: the group's bitmap index in the order-high area<br> unsigned long size = 1 << high;<br><br>
 while (high > low) {//split down to low<br> if (BAD_RANGE(zone,page))<br> BUG();<br> area--; //lower area<br> high--; //lower order<br> size >>= 1;//halve the size<br> memlist_add_head(&(page)->list, &(area)->free_list);//queue the first of the two<br> //halves into the lower area<br> MARK_USED(index, high, area);//flip the pair's state bit in that lower area<br> index += size;//the bitmap index one order down<br> page += size; //keep the second half of the pair for the caller<br> }<br> if (BAD_RANGE(zone,page))<br> BUG();<br> return page;<br>}<br><br>
 With expand analyzed, rmqueue should pose no difficulty:<br>static struct page * rmqueue(zone_t *zone, unsigned long order)<br>{<br> free_area_t * area = zone->free_area + order;<br> unsigned long curr_order = 
order;<br> struct list_head *head, *curr;<br> unsigned long flags;<br> struct page *page;<br><br>
 spin_lock_irqsave(&zone->lock, flags); /*only __free_pages_ok show_free_areas_core and this */<br> do {//the requested order may be out of pages; walk up through the higher-order areas<br> head = &area->free_list;<br> curr = memlist_next(head);<br><br>
 if (curr != head) {//there are still pages to allocate here<br> unsigned int index;<br> page = memlist_entry(curr, struct page, list);<br> if (BAD_RANGE(zone,page))<br> BUG();<br> memlist_del(curr);<br> index = (page - mem_map) - zone->offset;<br> MARK_USED(index, curr_order, area);//flip the pair's state bit<br> zone->free_pages -= 1 << order;<br><br>
 page = expand(zone, page, index, order, curr_order, area);<br> spin_unlock_irqrestore(&zone->lock, flags);<br><br> set_page_count(page, 1);<br> if (BAD_RANGE(zone,page))<br> BUG();<br> DEBUG_ADD_PAGE<br> return page; <br> }<br> //if this order's area has no pages, try the area one order up<br> curr_order++;<br> area++;<br> } while (curr_order < MAX_ORDER);<br> spin_unlock_irqrestore(&zone->lock, flags);<br><br> return NULL;<br>}<br><br><br><br>
III) __alloc_pages: the core policy of zone-based buddy allocation<br><br>
 First look at an internal interface function, which searches for a suitable zone to<br>allocate pages from according to a given watermark of reclaimable (free + inactive<br>clean) pages:<br>static struct page * __alloc_pages_limit(zonelist_t *zonelist,<br> unsigned long order, int limit, int direct_reclaim)<br>{ <br>//limit selects which watermark to test; there are three: PAGES_MIN, PAGES_LOW, PAGES_HIGH<br>zone_t **zone = zonelist->zones;<br><br>
 for (;;) {<br> zone_t *z = *(zone++);<br> unsigned long water_mark;<br><br> if (!z)<br> break;<br> if (!z->size)<br> BUG();<br><br>
 /*<br> * We allocate if the number of free + inactive_clean<br> * pages is above the watermark.<br> */<br> switch (limit) {<br> default:<br> case PAGES_MIN: //owning z->pages_min free pages is already enough<br> water_mark = z->pages_min;<br> break;<br> case PAGES_LOW:<br> water_mark = z->pages_low;<br> break;<br> case PAGES_HIGH:<br> water_mark = z->pages_high;<br> }<br><br>
 if (z->free_pages + z->inactive_clean_pages > water_mark) {<br> struct page *page = NULL;<br> /* If possible, reclaim a page directly. */<br> if (direct_reclaim && z->free_pages < z->pages_min + 8)<br> page = reclaim_page(z);<br> /* If that fails, fall back to rmqueue. */<br> if (!page)<br> page = rmqueue(z, order);<br> if (page)<br> return page;<br> }<br> }<br><br> /* Found nothing. */<br> return NULL;<br><br>}<br>
 The reclaimable pages spoken of here have two components: free_pages +<br>inactive_clean_pages.<br><br>
 __alloc_pages sits at the very bottom of kernel memory management: slab, vmalloc,<br>kmalloc, mmap, brk, as well as the page cache and buffer cache, all obtain their basic<br>physical pages through __alloc_pages.<br> linux carries out a memory management policy of the following kind:<br>
 a) Use physical memory to the full, building all sorts of caches to improve program<br>performance and cut down disk operations. This differs from windows, where a lot of<br>memory always sits idle, even after heavy disk activity, whereas on linux genuinely<br>free physical memory is almost never seen.<br><br>
 b) Guarantee a sufficient stock of potential physical memory (pages) that can be<br>reclaimed at a moment's notice, also called potentially allocatable pages. The periodic<br>work of the kernel daemons kswapd, bdflush and kreclaimd, plus the adjustments made on<br>every allocation - i.e. the various kinds of allocation pressure felt through<br>__alloc_pages - keep steering the daemons' work so that the system always holds enough<br>potentially reclaimable memory.<br> <br>
 First see what stocks of pages are required:<br> 1) Required stock of allocatable pages: inactive_clean + free pages (in buddy pages)<br> <br>
 The system's target is freepages.high + inactive_target / 3, where inactive_target is<br>min((memory_pressure >> INACTIVE_SHIFT), num_physpages / 4)). So the target stock has a<br>dynamic component.<br> The current stock is nr_free_pages() + nr_inactive_clean_pages();<br>
 The function free_shortage in mm/vmscan.c computes the gap between the target and the<br>actual stock of allocatable pages. Even when the global stock is adequate, it still<br>checks whether any zone's in-buddy free pages fall below that zone's target. As soon as<br>any one stock is inadequate, it must be corrected immediately. Please read free_shortage<br>yourself.<br><br>
 2) Required stock of potentially allocatable pages: (buddy free + inactive clean +<br>inactive_dirty)<br> Target stock: freepages.high + inactive_target<br> Current stock:<br> nr_free_pages()+nr_inactive_clean_pages()+nr_inactive_dirty_pages.<br> <br><br>
The analysis has been folded into the code as comments:<br>/*<br> * The core policy of the zone-based buddy system<br> * This is the 'heart' of the zoned buddy allocator:<br> */<br>struct page * __alloc_pages(zonelist_t *zonelist, unsigned long order)<br>{<br> zone_t **zone;<br> int direct_reclaim = 0;<br> unsigned int gfp_mask = zonelist->gfp_mask;<br> struct page * page;<br><br>
 /*<br> * Allocations put pressure on the VM subsystem.<br> */<br> memory_pressure++;<br><br> /*<br> * (If anyone calls gfp from interrupts nonatomically then it<br> * will sooner or later tripped up by a schedule().)<br> *<br> * We are falling back to lower-level 
zones if allocation<br> * in a higher zone fails.<br> */<br><br>
 /*<br> * Can we take pages directly from the inactive_clean<br> * list?<br> */<br> /* PF_MEMALLOC means the pages are requested for management purposes */<br> if (order == 0 && (gfp_mask & __GFP_WAIT) &&<br> !(current->flags & PF_MEMALLOC))<br> direct_reclaim = 1;<br><br>
 /*<br> * If we are about to get low on free pages and we also have<br> * an inactive page shortage, wake up kswapd.<br> */<br> if (inactive_shortage() > inactive_target / 2 && free_shortage())<br> wakeup_kswapd(0);/*use every available means to keep up the stock of potentially allocatable pages*/<br>
 /*<br> * If we are about to get low on free pages and cleaning<br> * the inactive_dirty pages would fix the situation,<br> * wake up bdflush.<br> */<br> else if (free_shortage() && nr_inactive_dirty_pages > free_shortage()<br> && nr_inactive_dirty_pages >= freepages.high)<br> wakeup_bdflush(0);/*speed up writing the buffered data back to disk*/<br><br>
try_again:<br> /*<br> * First, pick the zones that still have plenty of free memory<br> * We allocate free memory first because it doesn't contain<br> * any data ... DUH!<br> */<br> /* this round looks only at the watermark of truly free pages */<br> zone = zonelist->zones;<br> for (;;) {<br> zone_t *z = *(zone++);<br> if (!z)<br> break;<br> if (!z->size)<br> BUG();<br><br>
 if (z->free_pages >= z->pages_low) {//this zone's stock of free pages is adequate<br> page = rmqueue(z, order); <br> if (page)<br> return page;<br> } else if (z->free_pages < z->pages_min &&<br> waitqueue_active(&kreclaimd_wait)) {<br> wake_up_interruptible(&kreclaimd_wait); <br> /* kreclaimd: reclaims pages from the zone_t->inactive_clean_list queue */<br> }<br> }<br><br><br>
 /* If there is a lot of activity, inactive_target<br> * will be high and we'll have a good chance of<br> * finding a page using the HIGH limit.<br> */<br> /*since no zone is rich in free pages, try the zones that are<br> *rich in inactive_clean pages instead<br> */<br> page = __alloc_pages_limit(zonelist, order, PAGES_HIGH, direct_reclaim);<br> if (page)<br> return page;<br><br>
 /*<br> * Still nothing; settle for zones whose inactive_clean stock is merely acceptable<br> * zone->pages_low < free + inactive_clean<br> * When the working set is very large and VM activity<br> * is low, we're most likely to have our allocation<br> * succeed here.<br> */<br> page = __alloc_pages_limit(zonelist, order, PAGES_LOW, direct_reclaim);<br> if (page)<br> return page;<br><br>
 /*<br> * No zone can satisfy the request from its free pages (buddy + inactive clean) any more<br> * <br> * We wake up kswapd, in the hope that kswapd will<br> * resolve this situation before memory gets tight.<br> *<br> * We also yield the CPU, because that:<br> * - gives kswapd a chance to do something<br> * - slows down allocations, in particular the<br> * allocations from the fast allocator that's<br> * causing the problems ...<br> * - ... which minimises the impact the "bad guys"<br> * have on the rest of the system<br> * - if we don't have __GFP_IO set, kswapd may be<br> * able to free some memory we can't free ourselves<br> */<br>
 wakeup_kswapd(0); /* argument 0 means do not sleep */<br> /* kswapd --> devoted to keeping up the stock of potentially allocatable pages */<br> if (gfp_mask & __GFP_WAIT) {<br> __set_current_state(TASK_RUNNING);<br> current->policy |= SCHED_YIELD;<br> schedule();<br> }<br><br>
 /*<br> * After waking up kswapd, we try to allocate a page<br> * from any zone which isn't critical yet.<br> *<br> * Perhaps we cannot wait for kswapd to finish its work,<br> * so first retry with a lower watermark requirement<br> */<br> page = __alloc_pages_limit(zonelist, order, PAGES_MIN, direct_reclaim);<br> if (page)<br> return page;<br><br><br>
 /*<br> * Damn, we didn't succeed.<br> * <br> */<br> /* for ordinary processes there are still some options left to consider */<br> if (!(current->flags & PF_MEMALLOC)) { <br> <br> if (order > 0 && (gfp_mask & __GFP_WAIT)) {<br> /* we are handling a higher-order allocation and may wait */<br> zone = zonelist->zones;<br> /*write the dirty pages back to disk*/<br> current->flags |= PF_MEMALLOC; //page_launder may allocate pages too;<br> page_launder(gfp_mask, 1); //this process is its calling context, so raise its<br> current->flags &= ~PF_MEMALLOC;//privilege to avoid recursing back into this path<br> for (;;) {<br> zone_t *z = *(zone++);<br> if (!z)<br> break;<br> if (!z->size)<br> continue;<br> while (z->inactive_clean_pages) {<br> /*feed free pages back into the buddy*/<br> struct page * page;<br> /* Move one page to the free list. 
*/<br> page = reclaim_page(z); <br> if (!page)<br> break;<br> __free_page(page); //free it into the buddy<br> /*maybe that has produced contiguous pages*/<br> /* Try if the allocation succeeds. */<br> page = rmqueue(z, order); //retry the high_order allocation<br> if (page)<br> return page;<br> }<br> }<br> }<br><br><br>
 /*<br> * We have to do this because something else might eat<br> * the memory kswapd frees for us and we need to be<br> * reliable. <br> */<br> if ((gfp_mask & (__GFP_WAIT|__GFP_IO)) == (__GFP_WAIT|__GFP_IO)) {<br> /* if io is permitted and we may wait, wake kswapd<br> * and wait for kswapd to restore the memory balance<br> */<br> wakeup_kswapd(1); /* argument 1 means we may block */<br> memory_pressure++;<br> if (!order) //* note: we do not 'try_again' for higher orders,<br> // because kswapd might never ( *ever* ) manage to<br> // free a large contiguous area for us.<br> goto try_again;<br>
 /*<br> * If __GFP_IO isn't set, we can't wait on kswapd because<br> * kswapd just might need some IO locks /we/ are holding ...<br> *<br> * SUBTLE: The scheduling point above makes sure that<br> * kswapd does get the chance to free memory we can't<br> * free ourselves...<br> */<br> } else if (gfp_mask & __GFP_WAIT) {<br> //when io is not allowed, do part of kswapd's work<br> //in its place, making the effort without doing io<br> try_to_free_pages(gfp_mask);<br> memory_pressure++;<br> if (!order)<br> goto try_again;<br> }<br><br> }<br><br>
 /*<br> * Final phase: allocate anything we can!<br> *<br> * Higher order allocations, GFP_ATOMIC allocations and<br> * recursive allocations (PF_MEMALLOC) end up here.<br> *<br> * Only recursive allocations can use the very last pages<br> * in the system, otherwise it would be just too easy to<br> * deadlock the system...<br> */<br> zone = zonelist->zones;<br> for (;;) {<br> zone_t *z = *(zone++);<br> struct page * page = NULL;<br> if (!z)<br> break;<br> if (!z->size)<br> BUG();<br><br>
 /*<br> * SUBTLE: direct_reclaim is only possible if the task<br> * becomes PF_MEMALLOC while looping above. This will<br> * happen when the OOM killer selects this task for<br> * instant execution...(the English text says it all)<br> */<br> if (direct_reclaim) {<br> page = reclaim_page(z);<br> if (page)<br> return page;<br> }<br><br>
 /* XXX: is pages_min/4 a good amount to reserve for this? */<br> if (z->free_pages < z->pages_min / 4 &&<br> !(current->flags & PF_MEMALLOC))<br> continue;<br> page = rmqueue(z, order);<br> if (page)<br> return page;<br> }<br><br> /* No luck.. */<br> printk(KERN_ERR "__alloc_pages: %lu-order allocation failed.\n", order);<br> return NULL;<br>}<br> <br>
 Other functions concerned with memory allocation are:<br>unsigned long get_zeroed_page(int gfp_mask)<br>void __free_pages(struct page *page, unsigned long order)<br>void free_pages(unsigned long addr, unsigned long order)<br> There are also a few functions for gathering memory-pressure statistics:<br> unsigned int nr_free_pages (void)<br> unsigned int nr_inactive_clean_pages (void)<br> unsigned int nr_free_buffer_pages (void)<br> unsigned int nr_free_highpages (void)<br> These are simple enough to skip.<br><br><br>
IV) Functions related to zone initialization<br> Let us look closely at the 2.4 zonelist. You may have wondered about it: the 2.4<br>zonelist is in fact less useful than one would imagine - it is not yet mature, or<br>rather not yet finished.<br> The relationship between pgdat and zonelist in 2.4 is as follows:<br> +---------+ zone_list<br> pgdat->node_zonelist |zone_list|-----> +--------+ <br> +---------+ | normal?+<br> 256 of them + dma? +<br> +---------+ | high? +<br> +--------+<br>
 The type of the first zone in a zone_list is determined by the zone_list's index<br>within node_zonelist, i.e. by GFP_XXX.<br> Unfortunately, looking at the GFP definitions there cannot possibly be 256 of them;<br>that is the first problem. 
Secondly, the zone_lists in a pgdat all<br>point to that node's own zones rather than being ordered by distance from the node,<br>which defeats the original intent of node partitioning. Look at the function<br>alloc_pages (mm/numa.c): it merely iterates over the pgdats, with no notion of priority.<br> Now look at build_zonelists in mm/page_alloc.c:<br>
static inline void build_zonelists(pg_data_t *pgdat)<br>{<br> int i, j, k;<br><br> for (i = 0; i < NR_GFPINDEX; i++) {//set up pgdat->node_zonelist[GFP] one by one<br> zonelist_t *zonelist;<br> zone_t *zone;<br><br> zonelist = pgdat->node_zonelists + i;<br> memset(zonelist, 0, sizeof(*zonelist));<br><br> zonelist->gfp_mask = i;<br> j = 0;<br> k = ZONE_NORMAL;<br> if (i & __GFP_HIGHMEM)<br> k = ZONE_HIGHMEM;<br> if (i & __GFP_DMA)<br> k = ZONE_DMA;<br><br>
 switch (k) {<br> default:<br> BUG();<br> /*<br> * fallthrough: (note there is no break; lower zones act as fallbacks,<br> * but a DMA request has no substitute)<br> */<br> case ZONE_HIGHMEM:<br> zone = pgdat->node_zones + ZONE_HIGHMEM;<br> if (zone->size) {<br>#ifndef CONFIG_HIGHMEM<br> BUG();<br>#endif<br> zonelist->zones[j++] = zone;<br> }<br> case ZONE_NORMAL:<br> zone = pgdat->node_zones + ZONE_NORMAL;<br> if (zone->size)<br> zonelist->zones[j++] = zone;<br> case ZONE_DMA:<br> zone = pgdat->node_zones + ZONE_DMA;<br> if (zone->size)<br> zonelist->zones[j++] = zone;<br> }<br> zonelist->zones[j++] = NULL;<br> } <br>}<br><br>
Compare with the 2.6 kernel, where the structure becomes:<br> +---------+ zone_list<br> pgdat->node_zonelist |zone_list|-----> +--------+ <br> +---------+ | nodes*3+<br> 3 of them + +<br> +---------+ | +<br> | n.d.h | +--------+<br>We will not analyze the corresponding 2.6 code in detail.<br><br>
 That leaves the function free_area_init_core. The way it sets up mem_map was discussed<br>at length in the analysis of mm/memory.c; please refer to that to understand the mem_map<br>initialization in free_area_init_core. Here we briefly discuss the zone and buddy<br>initialization, without listing all of the code:<br>
/*<br> * Set up a node's zones:<br> * - mark all pages reserved<br> * - all memory queues in the pages are empty<br> * - clear the zones' buddy-system bitmaps<br> * nid node id pgdat the node<br> * gmap global mem map zones_size array of the node's zone sizes<br> * zone_start_paddr physical address where the node starts<br> * zholes_size array of the node's hole sizes<br> * lmem_map the node's local mem map array<br> */<br>
void __init free_area_init_core(int nid, pg_data_t *pgdat, <br> struct page **gmap, unsigned long *zones_size , <br> unsigned long zone_start_paddr, unsigned long *zholes_size, <br> struct page *lmem_map)<br>{<br>
 /* NON_NUMA:(like i386) pgdat:contig_page_data,gmap:&mem_map, lmem_map:0<br> * NUMA:(like i64-sn1): pgdat:NOD_PGDAT , gmap:&discard, lmem_map:0<br> * mem_map is fixed at PAGE_OFFSET<br> */<br> struct page *p;<br> unsigned long i, j;<br> unsigned long map_size;<br> unsigned long totalpages, offset, realtotalpages;<br> unsigned int cumulative = 0;<br><br>
 /*count the total number of pages*/<br> ....<br> /*subtract each zone's holes from the total*/<br> ...<br> /*<br> * initialization of lmem_map and the global mem_map; see the mm/memory.c analysis<br> */<br> map_size = (totalpages + 1)*sizeof(struct page);<br> if (lmem_map == (struct page *)0) {<br> lmem_map = (struct page *) alloc_bootmem_node(pgdat, map_size);<br> lmem_map = (struct page *)(PAGE_OFFSET + <br> MAP_ALIGN((unsigned long)lmem_map - PAGE_OFFSET));<br> }<br> *gmap = pgdat->node_mem_map = lmem_map;<br> pgdat->node_size = totalpages;<br> pgdat->node_start_paddr = zone_start_paddr;<br> pgdat->node_start_mapnr = (lmem_map - mem_map);<br>
 /*<br> * Initially all pages are reserved - the free ones are released<br> * in one go by free_all_bootmem() once the early boot-time<br> * initialization is done.<br> */<br> for (p = lmem_map; p < lmem_map + totalpages; p++) {<br> set_page_count(p, 0);<br> SetPageReserved(p);<br> init_waitqueue_head(&p->wait);<br> memlist_init(&p->list);<br> }<br><br>
 offset = lmem_map - mem_map; <br> for (j = 0; j < MAX_NR_ZONES; j++) {//initialize the zones, three of them<br> zone_t *zone = pgdat->node_zones + j;<br> unsigned long mask;<br> unsigned long size, realsize;<br><br> realsize = size = zones_size[j];<br> if (zholes_size)<br> realsize -= zholes_size[j];<br><br> printk("zone(%lu): %lu pages.\n", j, size);<br> zone->size = size;<br> ........//initialize the individual fields, omitted<br> zone->zone_mem_map = mem_map + offset;<br> zone->zone_start_mapnr = offset;<br> zone->zone_start_paddr = zone_start_paddr;<br><br>
 /*<br> * set the virtual address and owning zone of every page in this zone<br> */<br> for (i = 0; i < size; i++) {<br> struct page *page = mem_map + offset + i;<br> page->zone = zone;<br> if (j != ZONE_HIGHMEM) {<br> page->virtual = __va(zone_start_paddr);<br> zone_start_paddr += PAGE_SIZE;<br> }<br> }<br><br>
 /*<br> * initialize the buddy system inside this zone<br> */<br> offset += size;<br> mask = -1; //FFFFffff<br> for (i = 0; i < MAX_ORDER; i++) {<br> unsigned long bitmap_size;<br><br> memlist_init(&zone->free_area[i].free_list);<br> mask += mask;// 11110,11100,11000...<br> size = (size + ~mask) & mask;//round size up to a multiple of 2^(i+1)<br> bitmap_size = size >> i;<br> bitmap_size = (bitmap_size + 7) >> 3;<br> bitmap_size = LONG_ALIGN(bitmap_size);<br> zone->free_area[i].map = <br> (unsigned int *) alloc_bootmem_node(pgdat, bitmap_size);<br> }<br> }<br> build_zonelists(pgdat); //build the 2.4 zonelists.<br>}<br> <br><br><br> <br></pre>
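The bitmap sizing loop at the end of free_area_init_core can be checked in isolation. This user-space sketch (hypothetical helper names; LONG_ALIGN approximated with the host's sizeof(long)) reproduces the round-up and the bits-to-bytes conversion for a single order:

```c
#include <assert.h>

/* align a byte count up to sizeof(long), like the kernel's LONG_ALIGN */
#define LONG_ALIGN_SKETCH(x) (((x) + sizeof(long) - 1) & ~(sizeof(long) - 1))

/* bytes reserved for the order-`order` buddy bitmap of a zone of `size`
 * pages -- the same arithmetic as the loop in free_area_init_core() */
static unsigned long bitmap_bytes(unsigned long size, int order)
{
    unsigned long mask = ~0UL << (order + 1); /* value mask holds at iteration `order` */
    unsigned long bitmap_size;

    size = (size + ~mask) & mask;         /* round up to a multiple of 2^(order+1) */
    bitmap_size = size >> order;          /* number of bits kept for this order */
    bitmap_size = (bitmap_size + 7) >> 3; /* bits -> bytes */
    return LONG_ALIGN_SKETCH(bitmap_size);
}
```

For a zone of 1000 pages this gives 128 bytes at order 0 (1000 bits rounded up to a long boundary) and 16 bytes at order 3 (size first rounded up to 1008, a multiple of 16).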
</td>
</tr>
</tbody>
</table></body></html>