
📄 vm_map.c

📁 open bsd vm device design
💻 C
📖 Page 1 of 5
/*
 * Copyright (c) 1991, 1993
 *	The Regents of the University of California.  All rights reserved.
 *
 * This code is derived from software contributed to Berkeley by
 * The Mach Operating System project at Carnegie-Mellon University.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. All advertising materials mentioning features or use of this software
 *    must display the following acknowledgement:
 *	This product includes software developed by the University of
 *	California, Berkeley and its contributors.
 * 4. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 *	@(#)vm_map.c	8.9 (Berkeley) 5/17/95
 *
 *
 * Copyright (c) 1987, 1990 Carnegie-Mellon University.
 * All rights reserved.
 *
 * Authors: Avadis Tevanian, Jr., Michael Wayne Young
 *
 * Permission to use, copy, modify and distribute this software and
 * its documentation is hereby granted, provided that both the copyright
 * notice and this permission notice appear in all copies of the
 * software, derivative works or modified versions, and any portions
 * thereof, and that both notices appear in supporting documentation.
 *
 * CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
 * CONDITION.  CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND
 * FOR ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.
 *
 * Carnegie Mellon requests users of this software to return to
 *
 *  Software Distribution Coordinator  or  Software.Distribution@CS.CMU.EDU
 *  School of Computer Science
 *  Carnegie Mellon University
 *  Pittsburgh PA 15213-3890
 *
 * any improvements or extensions that they make and grant Carnegie the
 * rights to redistribute these changes.
 */

/*
 *	Virtual memory mapping module.
 */

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/malloc.h>

#include <vm/vm.h>
#include <vm/vm_page.h>

/*
 *	Virtual memory maps provide for the mapping, protection,
 *	and sharing of virtual memory objects.  In addition,
 *	this module provides for an efficient virtual copy of
 *	memory from one map to another.
 *
 *	Synchronization is required prior to most operations.
 *
 *	Maps consist of an ordered doubly-linked list of simple
 *	entries; a single hint is used to speed up lookups.
 *
 *	In order to properly represent the sharing of virtual
 *	memory regions among maps, the map structure is bi-level.
 *	Top-level ("address") maps refer to regions of sharable
 *	virtual memory.  These regions are implemented as
 *	("sharing") maps, which then refer to the actual virtual
 *	memory objects.  When two address maps "share" memory,
 *	their top-level maps both have references to the same
 *	sharing map.  When memory is virtual-copied from one
 *	address map to another, the references in the sharing
 *	maps are actually copied -- no copying occurs at the
 *	virtual memory object level.
 *
 *	Since portions of maps are specified by start/end addresses,
 *	which may not align with existing map entries, all
 *	routines merely "clip" entries to these start/end values.
 *	[That is, an entry is split into two, bordering at a
 *	start or end value.]  Note that these clippings may not
 *	always be necessary (as the two resulting entries are then
 *	not changed); however, the clipping is done for convenience.
 *	No attempt is currently made to "glue back together" two
 *	abutting entries.
 *
 *	As mentioned above, virtual copy operations are performed
 *	by copying VM object references from one sharing map to
 *	another, and then marking both regions as copy-on-write.
 *	It is important to note that only one writeable reference
 *	to a VM object region exists in any map -- this means that
 *	shadow object creation can be delayed until a write operation
 *	occurs.
 */
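/*
 * [Illustrative aside -- not part of the original vm_map.c.]
 * A minimal userland sketch of the "clipping" described above: an entry
 * covering the half-open range [start, end) is split in two at a given
 * address, so that later operations can act on exact boundaries.  The
 * struct and function names here are invented for illustration only.
 */
#include <assert.h>
#include <stdlib.h>

struct range_entry {
	unsigned long start, end;	/* half-open range [start, end) */
	struct range_entry *next;
};

/* Split *e at addr; *e keeps [start, addr), the new entry gets [addr, end). */
static struct range_entry *
clip_entry(struct range_entry *e, unsigned long addr)
{
	struct range_entry *tail;

	assert(addr > e->start && addr < e->end);
	tail = malloc(sizeof(*tail));	/* real code would check for NULL */
	tail->start = addr;
	tail->end = e->end;
	tail->next = e->next;
	e->end = addr;			/* original entry now ends at addr */
	e->next = tail;
	return (tail);
}
/* [End of aside.] */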
/*
 *	vm_map_startup:
 *
 *	Initialize the vm_map module.  Must be called before
 *	any other vm_map routines.
 *
 *	Map and entry structures are allocated from the general
 *	purpose memory pool with some exceptions:
 *
 *	- The kernel map and kmem submap are allocated statically.
 *	- Kernel map entries are allocated out of a static pool.
 *
 *	These restrictions are necessary since malloc() uses the
 *	maps and requires map entries.
 */

vm_offset_t	kentry_data;
vm_size_t	kentry_data_size;
vm_map_entry_t	kentry_free;
vm_map_t	kmap_free;

static void	_vm_map_clip_end __P((vm_map_t, vm_map_entry_t, vm_offset_t));
static void	_vm_map_clip_start __P((vm_map_t, vm_map_entry_t, vm_offset_t));

void
vm_map_startup()
{
	register int i;
	register vm_map_entry_t mep;
	vm_map_t mp;

	/*
	 * Static map structures for allocation before initialization of
	 * kernel map or kmem map.  vm_map_create knows how to deal with them.
	 */
	kmap_free = mp = (vm_map_t) kentry_data;
	i = MAX_KMAP;
	while (--i > 0) {
		mp->header.next = (vm_map_entry_t) (mp + 1);
		mp++;
	}
	mp++->header.next = NULL;

	/*
	 * Form a free list of statically allocated kernel map entries
	 * with the rest.
	 */
	kentry_free = mep = (vm_map_entry_t) mp;
	i = (kentry_data_size - MAX_KMAP * sizeof *mp) / sizeof *mep;
	while (--i > 0) {
		mep->next = mep + 1;
		mep++;
	}
	mep->next = NULL;
}

/*
 * Allocate a vmspace structure, including a vm_map and pmap,
 * and initialize those structures.  The refcnt is set to 1.
 * The remaining fields must be initialized by the caller.
 */
struct vmspace *
vmspace_alloc(min, max, pageable)
	vm_offset_t min, max;
	int pageable;
{
	register struct vmspace *vm;

	MALLOC(vm, struct vmspace *, sizeof(struct vmspace), M_VMMAP, M_WAITOK);
	bzero(vm, (caddr_t) &vm->vm_startcopy - (caddr_t) vm);
	vm_map_init(&vm->vm_map, min, max, pageable);
	pmap_pinit(&vm->vm_pmap);
	vm->vm_map.pmap = &vm->vm_pmap;		/* XXX */
	vm->vm_refcnt = 1;
	return (vm);
}

void
vmspace_free(vm)
	register struct vmspace *vm;
{
	if (--vm->vm_refcnt == 0) {
		/*
		 * Lock the map, to wait out all other references to it.
		 * Delete all of the mappings and pages they hold,
		 * then call the pmap module to reclaim anything left.
		 */
		vm_map_lock(&vm->vm_map);
		(void) vm_map_delete(&vm->vm_map, vm->vm_map.min_offset,
		    vm->vm_map.max_offset);
		pmap_release(&vm->vm_pmap);
		FREE(vm, M_VMMAP);
	}
}
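/*
 * [Illustrative aside -- not part of the original vm_map.c.]
 * vm_map_startup() above carves one static region into two free lists:
 * MAX_KMAP map structures first, then map entries in whatever space
 * remains.  A userland sketch of the same carving pattern, with invented
 * names (the kernel threads the raw region through structure pointers):
 */
#include <stddef.h>

#define NSLOTS	64

struct slot {
	struct slot *next;
	/* ... payload fields would follow ... */
};

static struct slot	pool[NSLOTS];	/* stands in for the static region */
static struct slot	*slot_free;

static void
slot_startup(void)
{
	struct slot *sp = pool;
	int i = NSLOTS;

	slot_free = sp;
	while (--i > 0) {		/* link each slot to its successor */
		sp->next = sp + 1;
		sp++;
	}
	sp->next = NULL;		/* last slot terminates the free list */
}
/* [End of aside.] */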
/*
 *	vm_map_create:
 *
 *	Creates and returns a new empty VM map with
 *	the given physical map structure, and having
 *	the given lower and upper address bounds.
 */
vm_map_t
vm_map_create(pmap, min, max, pageable)
	pmap_t		pmap;
	vm_offset_t	min, max;
	boolean_t	pageable;
{
	register vm_map_t	result;
	extern vm_map_t		kmem_map;

	if (kmem_map == NULL) {
		result = kmap_free;
		if (result == NULL)
			panic("vm_map_create: out of maps");
		kmap_free = (vm_map_t) result->header.next;
	} else
		MALLOC(result, vm_map_t, sizeof(struct vm_map),
		       M_VMMAP, M_WAITOK);

	vm_map_init(result, min, max, pageable);
	result->pmap = pmap;
	return(result);
}

/*
 * Initialize an existing vm_map structure
 * such as that in the vmspace structure.
 * The pmap is set elsewhere.
 */
void
vm_map_init(map, min, max, pageable)
	register struct vm_map *map;
	vm_offset_t	min, max;
	boolean_t	pageable;
{
	map->header.next = map->header.prev = &map->header;
	map->nentries = 0;
	map->size = 0;
	map->ref_count = 1;
	map->is_main_map = TRUE;
	map->min_offset = min;
	map->max_offset = max;
	map->entries_pageable = pageable;
	map->first_free = &map->header;
	map->hint = &map->header;
	map->timestamp = 0;
	lockinit(&map->lock, PVM, "thrd_sleep", 0, 0);
	simple_lock_init(&map->ref_lock);
	simple_lock_init(&map->hint_lock);
}

/*
 *	vm_map_entry_create:	[ internal use only ]
 *
 *	Allocates a VM map entry for insertion.
 *	No entry fields are filled in.  This routine is
 */
vm_map_entry_t
vm_map_entry_create(map)
	vm_map_t	map;
{
	vm_map_entry_t	entry;
#ifdef DEBUG
	extern vm_map_t		kernel_map, kmem_map, mb_map, pager_map;
	boolean_t		isspecial;

	isspecial = (map == kernel_map || map == kmem_map ||
		     map == mb_map || map == pager_map);
	if (isspecial && map->entries_pageable ||
	    !isspecial && !map->entries_pageable)
		panic("vm_map_entry_create: bogus map");
#endif
	if (map->entries_pageable) {
		MALLOC(entry, vm_map_entry_t, sizeof(struct vm_map_entry),
		       M_VMMAPENT, M_WAITOK);
	} else {
		if (entry = kentry_free)
			kentry_free = kentry_free->next;
	}
	if (entry == NULL)
		panic("vm_map_entry_create: out of map entries");

	return(entry);
}

/*
 *	vm_map_entry_dispose:	[ internal use only ]
 *
 *	Inverse of vm_map_entry_create.
 */
void
vm_map_entry_dispose(map, entry)
	vm_map_t	map;
	vm_map_entry_t	entry;
{
#ifdef DEBUG
	extern vm_map_t		kernel_map, kmem_map, mb_map, pager_map;
	boolean_t		isspecial;

	isspecial = (map == kernel_map || map == kmem_map ||
		     map == mb_map || map == pager_map);
	if (isspecial && map->entries_pageable ||
	    !isspecial && !map->entries_pageable)
		panic("vm_map_entry_dispose: bogus map");
#endif
	if (map->entries_pageable) {
		FREE(entry, M_VMMAPENT);
	} else {
		entry->next = kentry_free;
		kentry_free = entry;
	}
}

/*
 *	vm_map_entry_{un,}link:
 *
 *	Insert/remove entries from maps.
 */
#define	vm_map_entry_link(map, after_where, entry) \
		{ \
		(map)->nentries++; \
		(entry)->prev = (after_where); \
		(entry)->next = (after_where)->next; \
		(entry)->prev->next = (entry); \
		(entry)->next->prev = (entry); \
		}
#define	vm_map_entry_unlink(map, entry) \
		{ \
		(map)->nentries--; \
		(entry)->next->prev = (entry)->prev; \
		(entry)->prev->next = (entry)->next; \
		}
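/*
 * [Illustrative aside -- not part of the original vm_map.c.]
 * The link/unlink macros above maintain a circular doubly-linked list
 * whose head is the map header itself (see vm_map_init, where header.next
 * and header.prev point back at the header).  With a sentinel header,
 * insertion and removal never need NULL checks.  The same idea as plain
 * functions, with invented names:
 */
struct node {
	struct node *prev, *next;
};

static void
list_init(struct node *head)
{
	head->prev = head->next = head;	/* empty list points at itself */
}

static void
list_insert_after(struct node *after, struct node *n)
{
	n->prev = after;
	n->next = after->next;
	n->prev->next = n;
	n->next->prev = n;
}

static void
list_remove(struct node *n)
{
	n->next->prev = n->prev;
	n->prev->next = n->next;
}
/* [End of aside.] */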
/*
 *	vm_map_reference:
 *
 *	Creates another valid reference to the given map.
 *
 */
void
vm_map_reference(map)
	register vm_map_t	map;
{
	if (map == NULL)
		return;

	simple_lock(&map->ref_lock);
#ifdef DEBUG
	if (map->ref_count == 0)
		panic("vm_map_reference: zero ref_count");
#endif
	map->ref_count++;
	simple_unlock(&map->ref_lock);
}

/*
 *	vm_map_deallocate:
 *
 *	Removes a reference from the specified map,
 *	destroying it if no references remain.
 *	The map should not be locked.
 */
void
vm_map_deallocate(map)
	register vm_map_t	map;
{
	if (map == NULL)
		return;

	simple_lock(&map->ref_lock);
	if (--map->ref_count > 0) {
		simple_unlock(&map->ref_lock);
		return;
	}

	/*
	 *	Lock the map, to wait out all other references
	 *	to it.
	 */
	vm_map_lock_drain_interlock(map);

	(void) vm_map_delete(map, map->min_offset, map->max_offset);

	pmap_destroy(map->pmap);

	vm_map_unlock(map);

	FREE(map, M_VMMAP);
}

/*
 *	vm_map_insert:
 *
 *	Inserts the given whole VM object into the target
 *	map at the specified address range.  The object's
 *	size should match that of the address range.
 *
 *	Requires that the map be locked, and leaves it so.
 */
int
vm_map_insert(map, object, offset, start, end)
	vm_map_t	map;
	vm_object_t	object;
	vm_offset_t	offset;
	vm_offset_t	start;
	vm_offset_t	end;
{
	register vm_map_entry_t		new_entry;
	register vm_map_entry_t		prev_entry;
	vm_map_entry_t			temp_entry;

	/*
	 *	Check that the start and end points are not bogus.
	 */
	if ((start < map->min_offset) || (end > map->max_offset) ||
			(start >= end))
		return(KERN_INVALID_ADDRESS);

	/*
	 *	Find the entry prior to the proposed
	 *	starting address; if it's part of an
	 *	existing entry, this range is bogus.
	 */
	if (vm_map_lookup_entry(map, start, &temp_entry))
		return(KERN_NO_SPACE);

	prev_entry = temp_entry;

	/*
	 *	Assert that the next entry doesn't overlap the
	 *	end point.
	 */
	if ((prev_entry->next != &map->header) &&
			(prev_entry->next->start < end))
		return(KERN_NO_SPACE);

	/*
	 *	See if we can avoid creating a new entry by
	 *	extending one of our neighbors.
	 */
	if (object == NULL) {
		if ((prev_entry != &map->header) &&
		    (prev_entry->end == start) &&
		    (map->is_main_map) &&
		    (prev_entry->is_a_map == FALSE) &&
		    (prev_entry->is_sub_map == FALSE) &&
		    (prev_entry->inheritance == VM_INHERIT_DEFAULT) &&
		    (prev_entry->protection == VM_PROT_DEFAULT) &&
		    (prev_entry->max_protection == VM_PROT_DEFAULT) &&
		    (prev_entry->wired_count == 0)) {

			if (vm_object_coalesce(prev_entry->object.vm_object,
					NULL,
					prev_entry->offset,
					(vm_offset_t) 0,
					(vm_size_t)(prev_entry->end
						     - prev_entry->start),
					(vm_size_t)(end - prev_entry->end))) {
				/*
				 *	Coalesced the two objects - can extend
				 *	the previous map entry to include the
				 *	new range.
				 */
				map->size += (end - prev_entry->end);
				prev_entry->end = end;
				return(KERN_SUCCESS);
			}
		}
	}

	/*
	 *	Create a new entry
	 */
	new_entry = vm_map_entry_create(map);
	new_entry->start = start;
	new_entry->end = end;

	new_entry->is_a_map = FALSE;
	new_entry->is_sub_map = FALSE;
	new_entry->object.vm_object = object;
	new_entry->offset = offset;

	new_entry->copy_on_write = FALSE;
	new_entry->needs_copy = FALSE;

	if (map->is_main_map) {
		new_entry->inheritance = VM_INHERIT_DEFAULT;
		new_entry->protection = VM_PROT_DEFAULT;
		new_entry->max_protection = VM_PROT_DEFAULT;
		new_entry->wired_count = 0;
	}

	/*
	 *	Insert the new entry into the list
	 */
	vm_map_entry_link(map, prev_entry, new_entry);
	map->size += new_entry->end - new_entry->start;

	/*
	 *	Update the free space hint
	 */
	if ((map->first_free == prev_entry) && (prev_entry->end >= new_entry->start))
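/*
 * [Illustrative aside -- not part of the original vm_map.c; the listing
 * itself breaks off above and continues on page 2.]
 * vm_map_insert() rejects a range before touching the list: it must lie
 * within the map's bounds, must not begin inside an existing entry, and
 * the following entry must not overlap its end.  That validation, stated
 * as a standalone predicate over plain integers (invented names):
 */
static int
range_is_free(unsigned long map_min, unsigned long map_max,
    unsigned long prev_end, unsigned long next_start,
    unsigned long start, unsigned long end)
{
	if (start < map_min || end > map_max || start >= end)
		return (0);	/* bogus start/end points */
	if (start < prev_end)
		return (0);	/* begins inside an existing entry */
	if (next_start < end)
		return (0);	/* next entry overlaps the end point */
	return (1);
}
/* [End of aside.] */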
