
📄 page-writeback.c

📁 Latest stable source code of the Linux memory management (writeback) module
💻 C
📖 Page 1 of 3
 * balance_dirty_pages() must be called by processes which are generating dirty
 * data.  It looks at the number of dirty pages in the machine and will force
 * the caller to perform writeback if the system is over `vm_dirty_ratio'.
 * If we're over `background_thresh' then pdflush is woken to perform some
 * writeout.
 */
static void balance_dirty_pages(struct address_space *mapping)
{
	long nr_reclaimable, bdi_nr_reclaimable;
	long nr_writeback, bdi_nr_writeback;
	unsigned long background_thresh;
	unsigned long dirty_thresh;
	unsigned long bdi_thresh;
	unsigned long pages_written = 0;
	unsigned long write_chunk = sync_writeback_pages();

	struct backing_dev_info *bdi = mapping->backing_dev_info;

	for (;;) {
		struct writeback_control wbc = {
			.bdi		= bdi,
			.sync_mode	= WB_SYNC_NONE,
			.older_than_this = NULL,
			.nr_to_write	= write_chunk,
			.range_cyclic	= 1,
		};

		get_dirty_limits(&background_thresh, &dirty_thresh,
				&bdi_thresh, bdi);

		nr_reclaimable = global_page_state(NR_FILE_DIRTY) +
					global_page_state(NR_UNSTABLE_NFS);
		nr_writeback = global_page_state(NR_WRITEBACK);

		bdi_nr_reclaimable = bdi_stat(bdi, BDI_RECLAIMABLE);
		bdi_nr_writeback = bdi_stat(bdi, BDI_WRITEBACK);

		if (bdi_nr_reclaimable + bdi_nr_writeback <= bdi_thresh)
			break;

		/*
		 * Throttle it only when the background writeback cannot
		 * catch-up. This avoids (excessively) small writeouts
		 * when the bdi limits are ramping up.
		 */
		if (nr_reclaimable + nr_writeback <
				(background_thresh + dirty_thresh) / 2)
			break;

		if (!bdi->dirty_exceeded)
			bdi->dirty_exceeded = 1;

		/* Note: nr_reclaimable denotes nr_dirty + nr_unstable.
		 * Unstable writes are a feature of certain networked
		 * filesystems (i.e. NFS) in which data may have been
		 * written to the server's write cache, but has not yet
		 * been flushed to permanent storage.
		 */
		if (bdi_nr_reclaimable) {
			writeback_inodes(&wbc);
			pages_written += write_chunk - wbc.nr_to_write;
			get_dirty_limits(&background_thresh, &dirty_thresh,
				       &bdi_thresh, bdi);
		}

		/*
		 * In order to avoid the stacked BDI deadlock we need
		 * to ensure we accurately count the 'dirty' pages when
		 * the threshold is low.
		 *
		 * Otherwise it would be possible to get thresh+n pages
		 * reported dirty, even though there are thresh-m pages
		 * actually dirty; with m+n sitting in the percpu
		 * deltas.
		 */
		if (bdi_thresh < 2*bdi_stat_error(bdi)) {
			bdi_nr_reclaimable = bdi_stat_sum(bdi, BDI_RECLAIMABLE);
			bdi_nr_writeback = bdi_stat_sum(bdi, BDI_WRITEBACK);
		} else if (bdi_nr_reclaimable) {
			bdi_nr_reclaimable = bdi_stat(bdi, BDI_RECLAIMABLE);
			bdi_nr_writeback = bdi_stat(bdi, BDI_WRITEBACK);
		}

		if (bdi_nr_reclaimable + bdi_nr_writeback <= bdi_thresh)
			break;
		if (pages_written >= write_chunk)
			break;		/* We've done our duty */

		congestion_wait(WRITE, HZ/10);
	}

	if (bdi_nr_reclaimable + bdi_nr_writeback < bdi_thresh &&
			bdi->dirty_exceeded)
		bdi->dirty_exceeded = 0;

	if (writeback_in_progress(bdi))
		return;		/* pdflush is already working this queue */

	/*
	 * In laptop mode, we wait until hitting the higher threshold before
	 * starting background writeout, and then write out all the way down
	 * to the lower threshold.  So slow writers cause minimal disk activity.
	 *
	 * In normal mode, we start background writeout at the lower
	 * background_thresh, to keep the amount of dirty memory low.
	 */
	if ((laptop_mode && pages_written) ||
			(!laptop_mode && (global_page_state(NR_FILE_DIRTY)
					  + global_page_state(NR_UNSTABLE_NFS)
					  > background_thresh)))
		pdflush_operation(background_writeout, 0);
}
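The loop above stops throttling in two situations: the backing device has dropped back under its own share (bdi_thresh), or the global dirty count has not yet crossed the midpoint between background_thresh and dirty_thresh, in which case background writeback is assumed to be able to catch up. The standalone sketch below reproduces just that decision in plain userspace C; should_throttle() and its integer arguments are hypothetical stand-ins for the kernel's global_page_state()/bdi_stat() counters, not part of the file above.

/*
 * Illustrative userspace sketch of the throttling decision made in
 * balance_dirty_pages().  All names and numbers here are invented for
 * the example.
 */
#include <stdio.h>

static int should_throttle(unsigned long nr_reclaimable,
			   unsigned long nr_writeback,
			   unsigned long bdi_dirty,
			   unsigned long bdi_thresh,
			   unsigned long background_thresh,
			   unsigned long dirty_thresh)
{
	/* This backing device is within its own share: no throttling. */
	if (bdi_dirty <= bdi_thresh)
		return 0;

	/*
	 * Only throttle once background writeback clearly cannot keep up,
	 * i.e. the global dirty count has passed the midpoint between the
	 * background and hard dirty thresholds.
	 */
	if (nr_reclaimable + nr_writeback <
			(background_thresh + dirty_thresh) / 2)
		return 0;

	return 1;
}

int main(void)
{
	/* Made-up page counts, purely to exercise the two conditions. */
	printf("%d\n", should_throttle(900, 200, 300, 250, 500, 1000)); /* 1 */
	printf("%d\n", should_throttle(300, 100, 300, 250, 500, 1000)); /* 0 */
	return 0;
}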
void set_page_dirty_balance(struct page *page, int page_mkwrite)
{
	if (set_page_dirty(page) || page_mkwrite) {
		struct address_space *mapping = page_mapping(page);

		if (mapping)
			balance_dirty_pages_ratelimited(mapping);
	}
}

/**
 * balance_dirty_pages_ratelimited_nr - balance dirty memory state
 * @mapping: address_space which was dirtied
 * @nr_pages_dirtied: number of pages which the caller has just dirtied
 *
 * Processes which are dirtying memory should call in here once for each page
 * which was newly dirtied.  The function will periodically check the system's
 * dirty state and will initiate writeback if needed.
 *
 * On really big machines, get_writeback_state is expensive, so try to avoid
 * calling it too often (ratelimiting).  But once we're over the dirty memory
 * limit we decrease the ratelimiting by a lot, to prevent individual processes
 * from overshooting the limit by (ratelimit_pages) each.
 */
void balance_dirty_pages_ratelimited_nr(struct address_space *mapping,
					unsigned long nr_pages_dirtied)
{
	static DEFINE_PER_CPU(unsigned long, ratelimits) = 0;
	unsigned long ratelimit;
	unsigned long *p;

	ratelimit = ratelimit_pages;
	if (mapping->backing_dev_info->dirty_exceeded)
		ratelimit = 8;

	/*
	 * Check the rate limiting. Also, we do not want to throttle real-time
	 * tasks in balance_dirty_pages(). Period.
	 */
	preempt_disable();
	p =  &__get_cpu_var(ratelimits);
	*p += nr_pages_dirtied;
	if (unlikely(*p >= ratelimit)) {
		*p = 0;
		preempt_enable();
		balance_dirty_pages(mapping);
		return;
	}
	preempt_enable();
}
EXPORT_SYMBOL(balance_dirty_pages_ratelimited_nr);

void throttle_vm_writeout(gfp_t gfp_mask)
{
	unsigned long background_thresh;
	unsigned long dirty_thresh;

        for ( ; ; ) {
		get_dirty_limits(&background_thresh, &dirty_thresh, NULL, NULL);

                /*
                 * Boost the allowable dirty threshold a bit for page
                 * allocators so they don't get DoS'ed by heavy writers
                 */
                dirty_thresh += dirty_thresh / 10;      /* wheeee... */

                if (global_page_state(NR_UNSTABLE_NFS) +
			global_page_state(NR_WRITEBACK) <= dirty_thresh)
                        	break;
                congestion_wait(WRITE, HZ/10);

		/*
		 * The caller might hold locks which can prevent IO completion
		 * or progress in the filesystem.  So we cannot just sit here
		 * waiting for IO to complete.
		 */
		if ((gfp_mask & (__GFP_FS|__GFP_IO)) != (__GFP_FS|__GFP_IO))
			break;
        }
}
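balance_dirty_pages_ratelimited_nr() keeps a cheap per-CPU counter of freshly dirtied pages and only calls the expensive balance_dirty_pages() when that counter crosses ratelimit_pages (or 8 once the dirty limit has been exceeded). Here is a minimal userspace sketch of the same batching idea; the _Thread_local counter stands in for the kernel's per-CPU variable, and expensive_balance() is a hypothetical hook, not a kernel function.

/*
 * Userspace sketch of the ratelimiting used by
 * balance_dirty_pages_ratelimited_nr().  Constants and names are
 * illustrative only.
 */
#include <stdio.h>

#define RATELIMIT_PAGES	1024	/* normal batching */
#define RATELIMIT_TIGHT	8	/* once the dirty limit has been exceeded */

static _Thread_local unsigned long dirtied_since_check;

static void expensive_balance(void)
{
	/* stands in for balance_dirty_pages(mapping) */
	printf("balancing dirty pages\n");
}

static void dirtied(unsigned long nr_pages, int dirty_exceeded)
{
	unsigned long limit = dirty_exceeded ? RATELIMIT_TIGHT : RATELIMIT_PAGES;

	dirtied_since_check += nr_pages;
	if (dirtied_since_check >= limit) {
		dirtied_since_check = 0;
		expensive_balance();
	}
}

int main(void)
{
	for (int i = 0; i < 5000; i++)
		dirtied(1, 0);	/* balances roughly every 1024 pages */
	return 0;
}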
/*
 * writeback at least _min_pages, and keep writing until the amount of dirty
 * memory is less than the background threshold, or until we're all clean.
 */
static void background_writeout(unsigned long _min_pages)
{
	long min_pages = _min_pages;
	struct writeback_control wbc = {
		.bdi		= NULL,
		.sync_mode	= WB_SYNC_NONE,
		.older_than_this = NULL,
		.nr_to_write	= 0,
		.nonblocking	= 1,
		.range_cyclic	= 1,
	};

	for ( ; ; ) {
		unsigned long background_thresh;
		unsigned long dirty_thresh;

		get_dirty_limits(&background_thresh, &dirty_thresh, NULL, NULL);
		if (global_page_state(NR_FILE_DIRTY) +
			global_page_state(NR_UNSTABLE_NFS) < background_thresh
				&& min_pages <= 0)
			break;
		wbc.more_io = 0;
		wbc.encountered_congestion = 0;
		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
		wbc.pages_skipped = 0;
		writeback_inodes(&wbc);
		min_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
		if (wbc.nr_to_write > 0 || wbc.pages_skipped > 0) {
			/* Wrote less than expected */
			if (wbc.encountered_congestion || wbc.more_io)
				congestion_wait(WRITE, HZ/10);
			else
				break;
		}
	}
}

/*
 * Start writeback of `nr_pages' pages.  If `nr_pages' is zero, write back
 * the whole world.  Returns 0 if a pdflush thread was dispatched.  Returns
 * -1 if all pdflush threads were busy.
 */
int wakeup_pdflush(long nr_pages)
{
	if (nr_pages == 0)
		nr_pages = global_page_state(NR_FILE_DIRTY) +
				global_page_state(NR_UNSTABLE_NFS);
	return pdflush_operation(background_writeout, nr_pages);
}

static void wb_timer_fn(unsigned long unused);
static void laptop_timer_fn(unsigned long unused);

static DEFINE_TIMER(wb_timer, wb_timer_fn, 0, 0);
static DEFINE_TIMER(laptop_mode_wb_timer, laptop_timer_fn, 0, 0);
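background_writeout() works in MAX_WRITEBACK_PAGES batches until the dirty count falls below background_thresh and at least _min_pages have been written. The toy model below mirrors that loop in plain C; nr_dirty and write_batch() are invented for illustration, and the congestion and pages_skipped handling is left out.

/*
 * Simplified userspace model of the background_writeout() loop.
 * write_batch() is a hypothetical stand-in for writeback_inodes().
 */
#include <stdio.h>

#define MAX_WB_PAGES	1024

static unsigned long nr_dirty = 10000;		/* pretend global dirty count */

static long write_batch(long want)
{
	long done = want < (long)nr_dirty ? want : (long)nr_dirty;

	nr_dirty -= done;
	return done;			/* pages actually written */
}

static void background_writeout_model(long min_pages,
				      unsigned long background_thresh)
{
	for (;;) {
		if (nr_dirty < background_thresh && min_pages <= 0)
			break;
		min_pages -= write_batch(MAX_WB_PAGES);
		if (nr_dirty == 0)
			break;		/* all clean */
	}
}

int main(void)
{
	background_writeout_model(0, 4000);
	printf("dirty pages left: %lu\n", nr_dirty);	/* ends below 4000 */
	return 0;
}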
/*
 * Periodic writeback of "old" data.
 *
 * Define "old": the first time one of an inode's pages is dirtied, we mark the
 * dirtying-time in the inode's address_space.  So this periodic writeback code
 * just walks the superblock inode list, writing back any inodes which are
 * older than a specific point in time.
 *
 * Try to run once per dirty_writeback_interval.  But if a writeback event
 * takes longer than a dirty_writeback_interval interval, then leave a
 * one-second gap.
 *
 * older_than_this takes precedence over nr_to_write.  So we'll only write back
 * all dirty pages if they are all attached to "old" mappings.
 */
static void wb_kupdate(unsigned long arg)
{
	unsigned long oldest_jif;
	unsigned long start_jif;
	unsigned long next_jif;
	long nr_to_write;
	struct writeback_control wbc = {
		.bdi		= NULL,
		.sync_mode	= WB_SYNC_NONE,
		.older_than_this = &oldest_jif,
		.nr_to_write	= 0,
		.nonblocking	= 1,
		.for_kupdate	= 1,
		.range_cyclic	= 1,
	};

	sync_supers();

	oldest_jif = jiffies - dirty_expire_interval;
	start_jif = jiffies;
	next_jif = start_jif + dirty_writeback_interval;
	nr_to_write = global_page_state(NR_FILE_DIRTY) +
			global_page_state(NR_UNSTABLE_NFS) +
			(inodes_stat.nr_inodes - inodes_stat.nr_unused);
	while (nr_to_write > 0) {
		wbc.more_io = 0;
		wbc.encountered_congestion = 0;
		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
		writeback_inodes(&wbc);
		if (wbc.nr_to_write > 0) {
			if (wbc.encountered_congestion || wbc.more_io)
				congestion_wait(WRITE, HZ/10);
			else
				break;	/* All the old data is written */
		}
		nr_to_write -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
	}
	if (time_before(next_jif, jiffies + HZ))
		next_jif = jiffies + HZ;
	if (dirty_writeback_interval)
		mod_timer(&wb_timer, next_jif);
}

/*
 * sysctl handler for /proc/sys/vm/dirty_writeback_centisecs
 */
int dirty_writeback_centisecs_handler(ctl_table *table, int write,
	struct file *file, void __user *buffer, size_t *length, loff_t *ppos)
{
	proc_dointvec_userhz_jiffies(table, write, file, buffer, length, ppos);
	if (dirty_writeback_interval)
		mod_timer(&wb_timer, jiffies + dirty_writeback_interval);
	else
		del_timer(&wb_timer);
	return 0;
}

static void wb_timer_fn(unsigned long unused)
{
	if (pdflush_operation(wb_kupdate, 0) < 0)
		mod_timer(&wb_timer, jiffies + HZ); /* delay 1 second */
}

static void laptop_flush(unsigned long unused)
{
	sys_sync();
}

static void laptop_timer_fn(unsigned long unused)
{
	pdflush_operation(laptop_flush, 0);
}

/*
 * We've spun up the disk and we're in laptop mode: schedule writeback
 * of all dirty data a few seconds from now.  If the flush is already scheduled
 * then push it back - the user is still using the disk.
 */
void laptop_io_completion(void)
{
	mod_timer(&laptop_mode_wb_timer, jiffies + laptop_mode);
}

/*
 * We're in laptop mode and we've just synced. The sync's writes will have
 * caused another writeback to be scheduled by laptop_io_completion.
 * Nothing needs to be written back anymore, so we unschedule the writeback.
 */
void laptop_sync_completion(void)
{
	del_timer(&laptop_mode_wb_timer);
}
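wb_kupdate() reschedules itself one dirty_writeback_interval after it started, but never sooner than one second from now if the pass overran its interval. The sketch below shows only that timer arithmetic, with plain integers standing in for jiffies, the kernel's wrap-safe time_before() reduced to a plain comparison, and assumed values for HZ and the interval.

/*
 * Sketch of the next_jif calculation in wb_kupdate().  Wraparound handling
 * (time_before) is deliberately omitted; values are examples only.
 */
#include <stdio.h>

#define HZ	100

static unsigned long next_run(unsigned long start_jif, unsigned long now_jif,
			      unsigned long writeback_interval)
{
	unsigned long next_jif = start_jif + writeback_interval;

	/* leave at least a one-second gap if the pass ran long */
	if (next_jif < now_jif + HZ)
		next_jif = now_jif + HZ;
	return next_jif;
}

int main(void)
{
	/* fast pass: next run lands a full interval after the start */
	printf("%lu\n", next_run(1000, 1010, 5 * HZ));	/* 1500 */
	/* slow pass: fall back to "one second from now" */
	printf("%lu\n", next_run(1000, 1600, 5 * HZ));	/* 1700 */
	return 0;
}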
/*
 * If ratelimit_pages is too high then we can get into dirty-data overload
 * if a large number of processes all perform writes at the same time.
 * If it is too low then SMP machines will call the (expensive)
 * get_writeback_state too often.
 *
 * Here we set ratelimit_pages to a level which ensures that when all CPUs are
 * dirtying in parallel, we cannot go more than 3% (1/32) over the dirty memory
 * thresholds before writeback cuts in.
 *
 * But the limit should not be set too high.  Because it also controls the
 * amount of memory which the balance_dirty_pages() caller has to write back.
 * If this is too large then the caller will block on the IO queue all the
 * time.  So limit it to four megabytes - the balance_dirty_pages() caller
 * will write six megabyte chunks, max.
 */
void writeback_set_ratelimit(void)
{
	ratelimit_pages = vm_total_pages / (num_online_cpus() * 32);
	if (ratelimit_pages < 16)
		ratelimit_pages = 16;
	if (ratelimit_pages * PAGE_CACHE_SIZE > 4096 * 1024)
		ratelimit_pages = (4096 * 1024) / PAGE_CACHE_SIZE;
}

static int __cpuinit
ratelimit_handler(struct notifier_block *self, unsigned long u, void *v)
{
	writeback_set_ratelimit();
	return NOTIFY_DONE;
}

static struct notifier_block __cpuinitdata ratelimit_nb = {
	.notifier_call	= ratelimit_handler,
	.next		= NULL,
};

/*
 * Called early on to tune the page writeback dirty limits.
 *
 * We used to scale dirty pages according to how total memory
 * related to pages that could be allocated for buffers (by
 * comparing nr_free_buffer_pages() to vm_total_pages.
 *
 * However, that was when we used "dirty_ratio" to scale with
 * all memory, and we don't do that any more. "dirty_ratio"
 * is now applied to total non-HIGHPAGE memory (by subtracting
 * totalhigh_pages from vm_total_pages), and as such we can't
 * get into the old insane situation any more where we had
 * large amounts of dirty pages compared to a small amount of
 * non-HIGHMEM memory.
 *
 * But we might still want to scale the dirty_ratio by how
 * much memory the box has..
 */
void __init page_writeback_init(void)
{
	int shift;

	mod_timer(&wb_timer, jiffies + dirty_writeback_interval);
	writeback_set_ratelimit();
	register_cpu_notifier(&ratelimit_nb);

	shift = calc_period_shift();
	prop_descriptor_init(&vm_completions, shift);
	prop_descriptor_init(&vm_dirties, shift);
}

/**
 * write_cache_pages - walk the list of dirty pages of the given address space and write all of them.
 * @mapping: address space structure to write
 * @wbc: subtract the number of written pages from *@wbc->nr_to_write
 * @writepage: function called for each page
 * @data: data passed to writepage function
 *
 * If a page is already under I/O, write_cache_pages() skips it, even
 * if it's dirty.  This is desirable behaviour for memory-cleaning writeback,
 * but it is INCORRECT for data-integrity system calls such as fsync().  fsync()
 * and msync() need to guarantee that all the data which was dirty at the time
 * the call was made get new I/O started against them.  If wbc->sync_mode is
 * WB_SYNC_ALL then we were called for data integrity and we must wait for
 * existing IO to complete.
 */
int write_cache_pages(struct address_space *mapping,
		      struct writeback_control *wbc, writepage_t writepage,
		      void *data)
{
	struct backing_dev_info *bdi = mapping->backing_dev_info;
	int ret = 0;
	int done = 0;
	struct pagevec pvec;
	int nr_pages;
	pgoff_t uninitialized_var(writeback_index);
	pgoff_t index;
	pgoff_t end;		/* Inclusive */
	pgoff_t done_index;
	int cycled;
	int range_whole = 0;
	long nr_to_write = wbc->nr_to_write;

	if (wbc->nonblocking && bdi_write_congested(bdi)) {
		wbc->encountered_congestion = 1;
		return 0;
	}

	pagevec_init(&pvec, 0);
	if (wbc->range_cyclic) {
		writeback_index = mapping->writeback_index; /* prev offset */
		index = writeback_index;
		if (index == 0)
			cycled = 1;
		else
			cycled = 0;
		end = -1;
	} else {
		index = wbc->range_start >> PAGE_CACHE_SHIFT;
		end = wbc->range_end >> PAGE_CACHE_SHIFT;
		if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX)
			range_whole = 1;
		cycled = 1; /* ignore range_cyclic tests */
	}
retry:
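The listing of write_cache_pages() continues on the next page. Before that, note how writeback_set_ratelimit() above sizes the per-CPU batching limit: total pages / (online CPUs * 32), clamped between 16 pages and 4MB worth of pages. A self-contained sketch of that sizing rule follows; PAGE_SIZE and the inputs are assumed example values, not queried from a live system.

/*
 * Userspace sketch of the sizing rule in writeback_set_ratelimit().
 * PAGE_SIZE and the inputs below are illustrative assumptions.
 */
#include <stdio.h>

#define PAGE_SIZE	4096UL

static unsigned long set_ratelimit(unsigned long total_pages, unsigned int ncpus)
{
	unsigned long ratelimit_pages = total_pages / (ncpus * 32);

	if (ratelimit_pages < 16)
		ratelimit_pages = 16;
	if (ratelimit_pages * PAGE_SIZE > 4096 * 1024)
		ratelimit_pages = (4096 * 1024) / PAGE_SIZE;
	return ratelimit_pages;
}

int main(void)
{
	/* 4GB of 4K pages on an 8-CPU box: capped at 4MB -> 1024 pages */
	printf("%lu\n", set_ratelimit(1048576, 8));
	/* tiny box: the floor of 16 pages applies */
	printf("%lu\n", set_ratelimit(2048, 8));
	return 0;
}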
