
uvm_km.c

Project: OSKIT source code for component-based operating system development
Language: C
Page 1 of 2
/*	$NetBSD: uvm_km.c,v 1.41 2000/11/27 04:36:40 nisimura Exp $	*/

/*
 * Copyright (c) 1997 Charles D. Cranor and Washington University.
 * Copyright (c) 1991, 1993, The Regents of the University of California.
 *
 * All rights reserved.
 *
 * This code is derived from software contributed to Berkeley by
 * The Mach Operating System project at Carnegie-Mellon University.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. All advertising materials mentioning features or use of this software
 *    must display the following acknowledgement:
 *	This product includes software developed by Charles D. Cranor,
 *      Washington University, the University of California, Berkeley and
 *      its contributors.
 * 4. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 *	@(#)vm_kern.c   8.3 (Berkeley) 1/12/94
 * from: Id: uvm_km.c,v 1.1.2.14 1998/02/06 05:19:27 chs Exp
 *
 *
 * Copyright (c) 1987, 1990 Carnegie-Mellon University.
 * All rights reserved.
 *
 * Permission to use, copy, modify and distribute this software and
 * its documentation is hereby granted, provided that both the copyright
 * notice and this permission notice appear in all copies of the
 * software, derivative works or modified versions, and any portions
 * thereof, and that both notices appear in supporting documentation.
 *
 * CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
 * CONDITION.  CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND
 * FOR ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.
 *
 * Carnegie Mellon requests users of this software to return to
 *
 *  Software Distribution Coordinator  or  Software.Distribution@CS.CMU.EDU
 *  School of Computer Science
 *  Carnegie Mellon University
 *  Pittsburgh PA 15213-3890
 *
 * any improvements or extensions that they make and grant Carnegie the
 * rights to redistribute these changes.
 */

#include "opt_uvmhist.h"

/*
 * uvm_km.c: handle kernel memory allocation and management
 */

/*
 * overview of kernel memory management:
 *
 * the kernel virtual address space is mapped by "kernel_map."   kernel_map
 * starts at VM_MIN_KERNEL_ADDRESS and goes to VM_MAX_KERNEL_ADDRESS.
 * note that VM_MIN_KERNEL_ADDRESS is equal to vm_map_min(kernel_map).
 *
 * the kernel_map has several "submaps."   submaps can only appear in
 * the kernel_map (user processes can't use them).   submaps "take over"
 * the management of a sub-range of the kernel's address space.  submaps
 * are typically allocated at boot time and are never released.   kernel
 * virtual address space that is mapped by a submap is locked by the
 * submap's lock -- not the kernel_map's lock.
 *
 * thus, the useful feature of submaps is that they allow us to break
 * up the locking and protection of the kernel address space into smaller
 * chunks.
 *
 * the vm system has several standard kernel submaps, including:
 *   kmem_map => contains only wired kernel memory for the kernel
 *		malloc.   *** access to kmem_map must be protected
 *		by splimp() because we are allowed to call malloc()
 *		at interrupt time ***
 *   mb_map => memory for large mbufs,  *** protected by splimp ***
 *   pager_map => used to map "buf" structures into kernel space
 *   exec_map => used during exec to handle exec args
 *   etc...
 *
 * the kernel allocates its private memory out of special uvm_objects whose
 * reference count is set to UVM_OBJ_KERN (thus indicating that the objects
 * are "special" and never die).   all kernel objects should be thought of
 * as large, fixed-sized, sparsely populated uvm_objects.   each kernel
 * object is equal to the size of kernel virtual address space (i.e. the
 * value "VM_MAX_KERNEL_ADDRESS - VM_MIN_KERNEL_ADDRESS").
 *
 * most kernel private memory lives in kernel_object.   the only exception
 * to this is for memory that belongs to submaps that must be protected
 * by splimp().    each of these submaps has their own private kernel
 * object (e.g. kmem_object, mb_object).
 *
 * note that just because a kernel object spans the entire kernel virtual
 * address space doesn't mean that it has to be mapped into the entire space.
 * large chunks of a kernel object's space go unused either because
 * that area of kernel VM is unmapped, or there is some other type of
 * object mapped into that range (e.g. a vnode).    for submap's kernel
 * objects, the only part of the object that can ever be populated is the
 * offsets that are managed by the submap.
 *
 * note that the "offset" in a kernel object is always the kernel virtual
 * address minus the VM_MIN_KERNEL_ADDRESS (aka vm_map_min(kernel_map)).
 * example:
 *   suppose VM_MIN_KERNEL_ADDRESS is 0xf8000000 and the kernel does a
 *   uvm_km_alloc(kernel_map, PAGE_SIZE) [allocate 1 wired down page in the
 *   kernel map].    if uvm_km_alloc returns virtual address 0xf8235000,
 *   then that means that the page at offset 0x235000 in kernel_object is
 *   mapped at 0xf8235000.
 *
 * note that the offsets in kmem_object and mb_object also follow this
 * rule.   this means that the offsets for kmem_object must fall in the
 * range of [vm_map_min(kmem_object) - vm_map_min(kernel_map)] to
 * [vm_map_max(kmem_object) - vm_map_min(kernel_map)], so the offsets
 * in those objects will typically not start at zero.
 *
 * kernel objects have one other special property: when the kernel virtual
 * memory mapping them is unmapped, the backing memory in the object is
 * freed right away.   this is done with the uvm_km_pgremove() function.
 * this has to be done because there is no backing store for kernel pages
 * and no need to save them after they are no longer referenced.
 */
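/*
 * [editor's illustration -- not part of the original file: the offset rule
 *  above, worked through with the example numbers given in the comment.
 *
 *	offset in kernel_object = kernel VA - vm_map_min(kernel_map)
 *
 *	VM_MIN_KERNEL_ADDRESS        = 0xf8000000
 *	VA returned by uvm_km_alloc  = 0xf8235000
 *	offset                       = 0xf8235000 - 0xf8000000 = 0x00235000
 *
 *  i.e. the wired page backing that mapping sits at offset 0x235000 in
 *  kernel_object.]
 */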

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/proc.h>

#include <uvm/uvm.h>

/*
 * global data structures
 */

vm_map_t kernel_map = NULL;

struct vmi_list vmi_list;
simple_lock_data_t vmi_list_slock;

/*
 * local data structures
 */

#ifdef OSKIT
extern struct vmspace		vmspace0;
#define kernel_map_store	(vmspace0.vm_map)
#else
static struct vm_map		kernel_map_store;
#endif
static struct uvm_object	kmem_object_store;
static struct uvm_object	mb_object_store;

/*
 * All pager operations here are NULL, but the object must have
 * a pager ops vector associated with it; various places assume
 * it to be so.
 */
static struct uvm_pagerops	km_pager;

/*
 * uvm_km_init: init kernel maps and objects to reflect reality (i.e.
 * KVM already allocated for text, data, bss, and static data structures).
 *
 * => KVM is defined by VM_MIN_KERNEL_ADDRESS/VM_MAX_KERNEL_ADDRESS.
 *    we assume that [min -> start] has already been allocated and that
 *    "end" is the end.
 */
void
uvm_km_init(start, end)
	vaddr_t start, end;
{
	vaddr_t base = VM_MIN_KERNEL_ADDRESS;

	/*
	 * first, initialize the interrupt-safe map list.
	 */
	LIST_INIT(&vmi_list);
	simple_lock_init(&vmi_list_slock);

	/*
	 * next, init kernel memory objects.
	 */

	/* kernel_object: for pageable anonymous kernel memory */
	uao_init();
	uvm.kernel_object = uao_create(VM_MAX_KERNEL_ADDRESS -
				 VM_MIN_KERNEL_ADDRESS, UAO_FLAG_KERNOBJ);

	/*
	 * kmem_object: for use by the kernel malloc().  Memory is always
	 * wired, and this object (and the kmem_map) can be accessed at
	 * interrupt time.
	 */
	simple_lock_init(&kmem_object_store.vmobjlock);
	kmem_object_store.pgops = &km_pager;
	TAILQ_INIT(&kmem_object_store.memq);
	kmem_object_store.uo_npages = 0;
	/* we are special.  we never die */
	kmem_object_store.uo_refs = UVM_OBJ_KERN_INTRSAFE;
	uvmexp.kmem_object = &kmem_object_store;

	/*
	 * mb_object: for mbuf cluster pages on platforms which use the
	 * mb_map.  Memory is always wired, and this object (and the mb_map)
	 * can be accessed at interrupt time.
	 */
	simple_lock_init(&mb_object_store.vmobjlock);
	mb_object_store.pgops = &km_pager;
	TAILQ_INIT(&mb_object_store.memq);
	mb_object_store.uo_npages = 0;
	/* we are special.  we never die */
	mb_object_store.uo_refs = UVM_OBJ_KERN_INTRSAFE;
	uvmexp.mb_object = &mb_object_store;

	/*
	 * init the map and reserve already allocated kernel space
	 * before installing.
	 */
	uvm_map_setup(&kernel_map_store, base, end, VM_MAP_PAGEABLE);
	kernel_map_store.pmap = pmap_kernel();
	if (uvm_map(&kernel_map_store, &base, start - base, NULL,
	    UVM_UNKNOWN_OFFSET, 0, UVM_MAPFLAG(UVM_PROT_ALL, UVM_PROT_ALL,
	    UVM_INH_NONE, UVM_ADV_RANDOM, UVM_FLAG_FIXED)) != KERN_SUCCESS)
		panic("uvm_km_init: could not reserve space for kernel");

	/*
	 * install!
	 */
	kernel_map = &kernel_map_store;
}
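/*
 * [editor's sketch -- not part of the original file: a hypothetical
 *  boot-time call matching the contract described above.  the variable
 *  name first_avail_kva is made up; "start" is the first kernel virtual
 *  address not already claimed (the [VM_MIN_KERNEL_ADDRESS -> start]
 *  range is reserved as allocated by uvm_km_init itself), and "end" is
 *  the top of the kernel virtual address space.
 *
 *	uvm_km_init(first_avail_kva, VM_MAX_KERNEL_ADDRESS);
 * ]
 */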
/*
 * uvm_km_suballoc: allocate a submap in the kernel map.   once a submap
 * is allocated all references to that area of VM must go through it.  this
 * allows the locking of VAs in kernel_map to be broken up into regions.
 *
 * => if `fixed' is true, *min specifies where the region described
 *      by the submap must start
 * => if submap is non NULL we use that as the submap, otherwise we
 *	alloc a new map
 */
struct vm_map *
uvm_km_suballoc(map, min, max, size, flags, fixed, submap)
	struct vm_map *map;
	vaddr_t *min, *max;		/* OUT, OUT */
	vsize_t size;
	int flags;
	boolean_t fixed;
	struct vm_map *submap;
{
	int mapflags = UVM_FLAG_NOMERGE | (fixed ? UVM_FLAG_FIXED : 0);

	size = round_page(size);	/* round up to pagesize */

	/*
	 * first allocate a blank spot in the parent map
	 */
	if (uvm_map(map, min, size, NULL, UVM_UNKNOWN_OFFSET, 0,
	    UVM_MAPFLAG(UVM_PROT_ALL, UVM_PROT_ALL, UVM_INH_NONE,
	    UVM_ADV_RANDOM, mapflags)) != KERN_SUCCESS) {
	       panic("uvm_km_suballoc: unable to allocate space in parent map");
	}

	/*
	 * set VM bounds (min is filled in by uvm_map)
	 */
	*max = *min + size;

	/*
	 * add references to pmap and create or init the submap
	 */
	pmap_reference(vm_map_pmap(map));
	if (submap == NULL) {
		submap = uvm_map_create(vm_map_pmap(map), *min, *max, flags);
		if (submap == NULL)
			panic("uvm_km_suballoc: unable to create submap");
	} else {
		uvm_map_setup(submap, *min, *max, flags);
		submap->pmap = vm_map_pmap(map);
	}

	/*
	 * now let uvm_map_submap plug it in...
	 */
	if (uvm_map_submap(map, *min, *max, submap) != KERN_SUCCESS)
		panic("uvm_km_suballoc: submap allocation failed");

	return(submap);
}
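/*
 * [editor's sketch -- not part of the original file: a hypothetical caller
 *  of uvm_km_suballoc().  the names foo_map_create and foo_map_store are
 *  made up.  VM_MAP_PAGEABLE is simply the flag kernel_map itself is set
 *  up with in uvm_km_init() above; a wired, interrupt-safe submap such as
 *  kmem_map would pass different flags.  real callers normally keep the
 *  returned bounds (minva/maxva) in globals rather than discarding them.]
 */
static struct vm_map foo_map_store;

static struct vm_map *
foo_map_create(size)
	vsize_t size;
{
	vaddr_t minva, maxva;	/* filled in by uvm_km_suballoc() */

	return (uvm_km_suballoc(kernel_map, &minva, &maxva, size,
	    VM_MAP_PAGEABLE, FALSE, &foo_map_store));
}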
/*
 * uvm_km_pgremove: remove pages from a kernel uvm_object.
 *
 * => when you unmap a part of anonymous kernel memory you want to toss
 *    the pages right away.    (this gets called from uvm_unmap_...).
 */

#define UKM_HASH_PENALTY 4      /* a guess */

void
uvm_km_pgremove(uobj, start, end)
	struct uvm_object *uobj;
	vaddr_t start, end;
{
	boolean_t by_list;
	struct vm_page *pp, *ppnext;
	vaddr_t curoff;
	UVMHIST_FUNC("uvm_km_pgremove"); UVMHIST_CALLED(maphist);

	KASSERT(uobj->pgops == &aobj_pager);
	simple_lock(&uobj->vmobjlock);

	/* choose cheapest traversal */
	by_list = (uobj->uo_npages <=
	     ((end - start) >> PAGE_SHIFT) * UKM_HASH_PENALTY);

	if (by_list)
		goto loop_by_list;

	/* by hash */

	for (curoff = start ; curoff < end ; curoff += PAGE_SIZE) {
		pp = uvm_pagelookup(uobj, curoff);
		if (pp == NULL)
			continue;

		UVMHIST_LOG(maphist,"  page 0x%x, busy=%d", pp,
		    pp->flags & PG_BUSY, 0, 0);

		/* now do the actual work */
		if (pp->flags & PG_BUSY) {
			/* owner must check for this when done */
			pp->flags |= PG_RELEASED;
		} else {
			/* free the swap slot... */
			uao_dropswap(uobj, curoff >> PAGE_SHIFT);

			/*
			 * ...and free the page; note it may be on the
			 * active or inactive queues.
			 */
			uvm_lock_pageq();
			uvm_pagefree(pp);
			uvm_unlock_pageq();
		}
	}
	simple_unlock(&uobj->vmobjlock);
	return;

loop_by_list:

	for (pp = TAILQ_FIRST(&uobj->memq); pp != NULL; pp = ppnext) {
		ppnext = TAILQ_NEXT(pp, listq);
		if (pp->offset < start || pp->offset >= end) {
			continue;
		}

		UVMHIST_LOG(maphist,"  page 0x%x, busy=%d", pp,
		    pp->flags & PG_BUSY, 0, 0);

		if (pp->flags & PG_BUSY) {
			/* owner must check for this when done */
			pp->flags |= PG_RELEASED;
		} else {
			/* free the swap slot... */
			uao_dropswap(uobj, pp->offset >> PAGE_SHIFT);

			/*
			 * ...and free the page; note it may be on the
			 * active or inactive queues.
			 */
			uvm_lock_pageq();
			uvm_pagefree(pp);
			uvm_unlock_pageq();
		}
	}
	simple_unlock(&uobj->vmobjlock);
}

/*
 * uvm_km_pgremove_intrsafe: like uvm_km_pgremove(), but for "intrsafe"
 *    objects
 *
 * => when you unmap a part of anonymous kernel memory you want to toss
 *    the pages right away.    (this gets called from uvm_unmap_...).
 * => none of the pages will ever be busy, and none of them will ever
 *    be on the active or inactive queues (because these objects are
 *    never allowed to "page").
 */
void
uvm_km_pgremove_intrsafe(uobj, start, end)
	struct uvm_object *uobj;
	vaddr_t start, end;
{
	boolean_t by_list;
	struct vm_page *pp, *ppnext;
	vaddr_t curoff;
	UVMHIST_FUNC("uvm_km_pgremove_intrsafe"); UVMHIST_CALLED(maphist);

	KASSERT(UVM_OBJ_IS_INTRSAFE_OBJECT(uobj));
	simple_lock(&uobj->vmobjlock);		/* lock object */

	/* choose cheapest traversal */
	by_list = (uobj->uo_npages <=
	     ((end - start) >> PAGE_SHIFT) * UKM_HASH_PENALTY);

	if (by_list)
		goto loop_by_list;

	/* by hash */

	for (curoff = start ; curoff < end ; curoff += PAGE_SIZE) {
		pp = uvm_pagelookup(uobj, curoff);
		if (pp == NULL) {
			continue;
		}

		UVMHIST_LOG(maphist,"  page 0x%x, busy=%d", pp,
		    pp->flags & PG_BUSY, 0, 0);
		KASSERT((pp->flags & PG_BUSY) == 0);
		KASSERT((pp->pqflags & PQ_ACTIVE) == 0);
		KASSERT((pp->pqflags & PQ_INACTIVE) == 0);
		uvm_pagefree(pp);
	}
	simple_unlock(&uobj->vmobjlock);
	return;

loop_by_list:

	for (pp = TAILQ_FIRST(&uobj->memq); pp != NULL; pp = ppnext) {
		ppnext = TAILQ_NEXT(pp, listq);
		if (pp->offset < start || pp->offset >= end) {
			continue;
