pmap.c
From the OSKIT source code for component-based operating system development (C, page 1 of 5).
/*	$NetBSD: pmap.c,v 1.115 2000/12/07 21:53:46 thorpej Exp $	*/

/*
 *
 * Copyright (c) 1997 Charles D. Cranor and Washington University.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. All advertising materials mentioning features or use of this software
 *    must display the following acknowledgement:
 *      This product includes software developed by Charles D. Cranor and
 *      Washington University.
 * 4. The name of the author may not be used to endorse or promote products
 *    derived from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
 * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
 * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
 * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
 * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
 * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
 * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

/*
 * pmap.c: i386 pmap module rewrite
 * Chuck Cranor <chuck@ccrc.wustl.edu>
 * 11-Aug-97
 *
 * history of this pmap module: in addition to my own input, i used
 *    the following references for this rewrite of the i386 pmap:
 *
 * [1] the NetBSD i386 pmap.   this pmap appears to be based on the
 *     BSD hp300 pmap done by Mike Hibler at University of Utah.
 *     it was then ported to the i386 by William Jolitz of UUNET
 *     Technologies, Inc.   Then Charles M. Hannum of the NetBSD
 *     project fixed some bugs and provided some speed ups.
 *
 * [2] the FreeBSD i386 pmap.   this pmap seems to be the
 *     Hibler/Jolitz pmap, as modified for FreeBSD by John S. Dyson
 *     and David Greenman.
 *
 * [3] the Mach pmap.   this pmap, from CMU, seems to have migrated
 *     between several processors.   the VAX version was done by
 *     Avadis Tevanian, Jr., and Michael Wayne Young.    the i386
 *     version was done by Lance Berc, Mike Kupfer, Bob Baron,
 *     David Golub, and Richard Draves.    the alpha version was
 *     done by Alessandro Forin (CMU/Mach) and Chris Demetriou
 *     (NetBSD/alpha).
 */

#include "opt_cputype.h"
#include "opt_user_ldt.h"
#include "opt_largepages.h"

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/proc.h>
#include <sys/malloc.h>
#include <sys/pool.h>
#include <sys/user.h>
#include <sys/kernel.h>

#include <uvm/uvm.h>

#include <machine/cpu.h>
#include <machine/specialreg.h>
#include <machine/gdt.h>

#ifndef OSKIT
#include <dev/isa/isareg.h>
#include <machine/isa_machdep.h>
#endif

#ifdef OSKIT
#include <oskit/x86/base_paging.h>
#include "oskit_uvm_internal.h"
#endif

/*
 * general info:
 *
 *  - for an explanation of how the i386 MMU hardware works see
 *    the comments in <machine/pte.h>.
 *
 *  - for an explanation of the general memory structure used by
 *    this pmap (including the recursive mapping), see the comments
 *    in <machine/pmap.h>.
 *
 * this file contains the code for the "pmap module."   the module's
 * job is to manage the hardware's virtual to physical address mappings.
 * note that there are two levels of mapping in the VM system:
 *
 *  [1] the upper layer of the VM system uses vm_map's and vm_map_entry's
 *      to map ranges of virtual address space to objects/files.  for
 *      example, the vm_map may say: "map VA 0x1000 to 0x22000 read-only
 *      to the file /bin/ls starting at offset zero."   note that
 *      the upper layer mapping is not concerned with how individual
 *      vm_pages are mapped.
 *
 *  [2] the lower layer of the VM system (the pmap) maintains the mappings
 *      from virtual addresses.   it is concerned with which vm_page is
 *      mapped where.   for example, when you run /bin/ls and start
 *      at page 0x1000 the fault routine may lookup the correct page
 *      of the /bin/ls file and then ask the pmap layer to establish
 *      a mapping for it.
 *
 * note that information in the lower layer of the VM system can be
 * thrown away since it can easily be reconstructed from the info
 * in the upper layer.
 *
 * data structures we use include:
 *
 *  - struct pmap: describes the address space of one thread
 *  - struct pv_entry: describes one <PMAP,VA> mapping of a PA
 *  - struct pv_head: there is one pv_head per managed page of
 *	physical memory.   the pv_head points to a list of pv_entry
 *	structures which describe all the <PMAP,VA> pairs that this
 *      page is mapped in.    this is critical for page based operations
 *      such as pmap_page_protect() [change protection on _all_ mappings
 *      of a page]
 *  - pv_page/pv_page_info: pv_entry's are allocated out of pv_page's.
 *      if we run out of pv_entry's we allocate a new pv_page and free
 *      its pv_entrys.
 *  - pmap_remove_record: a list of virtual addresses whose mappings
 *	have been changed.   used for TLB flushing.
 */
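/*
 * for illustration: a rough sketch of the pv chain described above.  the
 * authoritative definitions live in <machine/pmap.h>; the field names below
 * are approximations, not the real layout.
 */
#if 0	/* sketch only, not compiled */
struct pv_entry {			/* one <PMAP,VA> mapping of a PA */
	struct pv_entry *pv_next;	/* next mapping of the same page */
	struct pmap *pv_pmap;		/* pmap this mapping belongs to */
	vaddr_t pv_va;			/* VA of the mapping */
	struct vm_page *pv_ptp;		/* PTP holding the PTE (if any) */
};

struct pv_head {			/* one per managed physical page */
	simple_lock_data_t pvh_lock;	/* protects pvh_list */
	struct pv_entry *pvh_list;	/* all <PMAP,VA> pairs for this page */
};
#endif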
/*
 * memory allocation
 *
 *  - there are three data structures that we must dynamically allocate:
 *
 * [A] new process' page directory page (PDP)
 *	- plan 1: done at pmap_create() we use
 *	  uvm_km_alloc(kernel_map, PAGE_SIZE)  [fka kmem_alloc] to do this
 *	  allocation.
 *
 * if we are low in free physical memory then we sleep in
 * uvm_km_alloc -- in this case this is ok since we are creating
 * a new pmap and should not be holding any locks.
 *
 * if the kernel is totally out of virtual space
 * (i.e. uvm_km_alloc returns NULL), then we panic.
 *
 * XXX: the fork code currently has no way to return an "out of
 * memory, try again" error code since uvm_fork [fka vm_fork]
 * is a void function.
 *
 * [B] new page tables pages (PTP)
 * 	- plan 1: call uvm_pagealloc()
 * 		=> success: zero page, add to pm_pdir
 * 		=> failure: we are out of free vm_pages
 * 	- plan 2: using a linked LIST of active pmaps we attempt
 * 	to "steal" a PTP from another process.   we lock
 * 	the target pmap with simple_lock_try so that if it is
 * 	busy we do not block.
 * 		=> success: remove old mappings, zero, add to pm_pdir
 * 		=> failure: highly unlikely
 * 	- plan 3: panic
 *
 * note: for kernel PTPs, we start with NKPTP of them.   as we map
 * kernel memory (at uvm_map time) we check to see if we've grown
 * the kernel pmap.   if so, we call the optional function
 * pmap_growkernel() to grow the kernel PTPs in advance.
 *
 * [C] pv_entry structures
 *	- plan 1: try to allocate one off the free list
 *		=> success: done!
 *		=> failure: no more free pv_entrys on the list
 *	- plan 2: try to allocate a new pv_page to add a chunk of
 *	pv_entrys to the free list
 *		[a] obtain a free, unmapped, VA in kmem_map.  either
 *		we have one saved from a previous call, or we allocate
 *		one now using a "vm_map_lock_try" in uvm_map
 *		=> success: we have an unmapped VA, continue to [b]
 *		=> failure: unable to lock kmem_map or out of VA in it.
 *			move on to plan 3.
 *		[b] allocate a page in kmem_object for the VA
 *		=> success: map it in, free the pv_entry's, DONE!
 *		=> failure: kmem_object locked, no free vm_pages, etc.
 *			save VA for later call to [a], go to plan 3.
 *	- plan 3: using the pv_entry/pv_head lists find a pv_entry
 *		structure that is part of a non-kernel lockable pmap
 *		and "steal" that pv_entry by removing the mapping
 *		and reusing that pv_entry.
 *		=> success: done
 *		=> failure: highly unlikely: unable to lock and steal
 *			pv_entry
 *	- plan 4: we panic.
 */
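/*
 * for illustration: the pv_entry allocation ladder above written out as a
 * schematic.  this is not the body of pmap_alloc_pv(); the helper names
 * (pv_free_list_get, pv_page_grow, pv_steal) and "mode" are placeholders,
 * and ALLOCPV_TRY is defined with the local prototypes below.
 */
#if 0	/* sketch only, not compiled */
	struct pv_entry *pv;

	if ((pv = pv_free_list_get()) != NULL)		/* plan 1 */
		return (pv);
	if (pv_page_grow() &&				/* plan 2 */
	    (pv = pv_free_list_get()) != NULL)
		return (pv);
	if (mode != ALLOCPV_TRY &&			/* plan 3 */
	    (pv = pv_steal()) != NULL)
		return (pv);
	if (mode != ALLOCPV_TRY)			/* plan 4 */
		panic("no pv_entries available");
	return (NULL);
#endif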
/*
 * locking
 *
 * we have the following locks that we must contend with:
 *
 * "normal" locks:
 *
 *  - pmap_main_lock
 *    this lock is used to prevent deadlock and/or provide mutex
 *    access to the pmap system.   most operations lock the pmap
 *    structure first, then they lock the pv_lists (if needed).
 *    however, some operations such as pmap_page_protect lock
 *    the pv_lists and then lock pmaps.   in order to prevent a
 *    cycle, we require a mutex lock when locking the pv_lists
 *    first.   thus, the "pmap => pv_list" lockers must gain a
 *    read-lock on pmap_main_lock before locking the pmap.   and
 *    the "pv_list => pmap" lockers must gain a write-lock on
 *    pmap_main_lock before locking.    since only one thread
 *    can write-lock a lock at a time, this provides mutex.
 *
 * "simple" locks:
 *
 *  - pmap lock (per pmap, part of uvm_object)
 *    this lock protects the fields in the pmap structure including
 *    the non-kernel PDEs in the PDP, and the PTEs.  it also locks
 *    in the alternate PTE space (since that is determined by the
 *    entry in the PDP).
 *
 *  - pvh_lock (per pv_head)
 *    this lock protects the pv_entry list which is chained off the
 *    pv_head structure for a specific managed PA.   it is locked
 *    when traversing the list (e.g. adding/removing mappings,
 *    syncing R/M bits, etc.)
 *
 *  - pvalloc_lock
 *    this lock protects the data structures which are used to manage
 *    the free list of pv_entry structures.
 *
 *  - pmaps_lock
 *    this lock protects the list of active pmaps (headed by "pmaps").
 *    we lock it when adding or removing pmaps from this list.
 *
 *  - pmap_copy_page_lock
 *    locks the tmp kernel PTE mappings we used to copy data
 *
 *  - pmap_zero_page_lock
 *    locks the tmp kernel PTE mapping we use to zero a page
 *
 *  - pmap_tmpptp_lock
 *    locks the tmp kernel PTE mapping we use to look at a PTP
 *    in another process
 *
 * XXX: would be nice to have per-CPU VAs for the above 4
 */

/*
 * locking data structures
 */

static struct lock pmap_main_lock;
static simple_lock_data_t pvalloc_lock;
static simple_lock_data_t pmaps_lock;
static simple_lock_data_t pmap_copy_page_lock;
static simple_lock_data_t pmap_zero_page_lock;
static simple_lock_data_t pmap_tmpptp_lock;

#define PMAP_MAP_TO_HEAD_LOCK() \
     (void) spinlockmgr(&pmap_main_lock, LK_SHARED, NULL)
#define PMAP_MAP_TO_HEAD_UNLOCK() \
     (void) spinlockmgr(&pmap_main_lock, LK_RELEASE, NULL)

#define PMAP_HEAD_TO_MAP_LOCK() \
     (void) spinlockmgr(&pmap_main_lock, LK_EXCLUSIVE, NULL)
#define PMAP_HEAD_TO_MAP_UNLOCK() \
     (void) spinlockmgr(&pmap_main_lock, LK_RELEASE, NULL)
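/*
 * for illustration: the two lock orderings described above, as a caller
 * would write them.  "pmap" and "pvh" are placeholders, and the per-pmap
 * lock is assumed to be the vmobjlock of the pmap's embedded uvm_object.
 */
#if 0	/* sketch only, not compiled */
	/* pmap => pv_list direction (e.g. pmap_enter): shared lock first */
	PMAP_MAP_TO_HEAD_LOCK();
	simple_lock(&pmap->pm_obj.vmobjlock);	/* lock the pmap */
	simple_lock(&pvh->pvh_lock);		/* then the pv list */
	/* ... modify mappings ... */
	simple_unlock(&pvh->pvh_lock);
	simple_unlock(&pmap->pm_obj.vmobjlock);
	PMAP_MAP_TO_HEAD_UNLOCK();

	/* pv_list => pmap direction (e.g. pmap_page_protect): exclusive lock */
	PMAP_HEAD_TO_MAP_LOCK();
	simple_lock(&pvh->pvh_lock);
	/* ... lock each pmap found on pvh_list and operate on it ... */
	simple_unlock(&pvh->pvh_lock);
	PMAP_HEAD_TO_MAP_UNLOCK();
#endif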
/*
 * global data structures
 */

struct pmap kernel_pmap_store;	/* the kernel's pmap (proc0) */

/*
 * nkpde is the number of kernel PTPs allocated for the kernel at
 * boot time (NKPTP is a compile time override).   this number can
 * grow dynamically as needed (but once allocated, we never free
 * kernel PTPs).
 */

int nkpde = NKPTP;
#ifdef NKPDE
#error "obsolete NKPDE: use NKPTP"
#endif

/*
 * pmap_pg_g: if our processor supports PG_G in the PTE then we
 * set pmap_pg_g to PG_G (otherwise it is zero).
 */

int pmap_pg_g = 0;

#ifdef LARGEPAGES
/*
 * pmap_largepages: if our processor supports PG_PS and we are
 * using it, this is set to TRUE.
 */

int pmap_largepages;
#endif

/*
 * i386 physical memory comes in a big contig chunk with a small
 * hole toward the front of it...  the following 4 paddr_t's
 * (shared with machdep.c) describe the physical address space
 * of this machine.
 */

paddr_t avail_start;	/* PA of first available physical page */
paddr_t avail_end;	/* PA of last available physical page */

/*
 * other data structures
 */

static pt_entry_t protection_codes[8];     /* maps MI prot to i386 prot code */
static boolean_t pmap_initialized = FALSE; /* pmap_init done yet? */

/*
 * the following two vaddr_t's are used during system startup
 * to keep track of how much of the kernel's VM space we have used.
 * once the system is started, the management of the remaining kernel
 * VM space is turned over to the kernel_map vm_map.
 */

static vaddr_t virtual_avail;	/* VA of first free KVA */
static vaddr_t virtual_end;	/* VA of last free KVA */

/*
 * pv_page management structures: locked by pvalloc_lock
 */

TAILQ_HEAD(pv_pagelist, pv_page);
static struct pv_pagelist pv_freepages;	/* list of pv_pages with free entrys */
static struct pv_pagelist pv_unusedpgs; /* list of unused pv_pages */
static int pv_nfpvents;			/* # of free pv entries */
static struct pv_page *pv_initpage;	/* bootstrap page from kernel_map */
static vaddr_t pv_cachedva;		/* cached VA for later use */

#define PVE_LOWAT (PVE_PER_PVPAGE / 2)	/* free pv_entry low water mark */
#define PVE_HIWAT (PVE_LOWAT + (PVE_PER_PVPAGE * 2))
					/* high water mark */

/*
 * linked list of all non-kernel pmaps
 */

static struct pmap_head pmaps;
static struct pmap *pmaps_hand = NULL;	/* used by pmap_steal_ptp */

/*
 * pool that pmap structures are allocated from
 */

struct pool pmap_pmap_pool;

/*
 * pool and cache that PDPs are allocated from
 */

struct pool pmap_pdp_pool;
struct pool_cache pmap_pdp_cache;

int	pmap_pdp_ctor(void *, void *, int);

/*
 * special VAs and the PTEs that map them
 */

static pt_entry_t *csrc_pte, *cdst_pte, *zero_pte, *ptp_pte;
static caddr_t csrcp, cdstp, zerop, ptpp;

caddr_t vmmap; /* XXX: used by mem.c... it should really uvm_map_reserve it */

#ifndef OSKIT
extern vaddr_t msgbuf_vaddr;
extern paddr_t msgbuf_paddr;
#endif

extern vaddr_t idt_vaddr;			/* we allocate IDT early */
extern paddr_t idt_paddr;

#if defined(I586_CPU)
/* stuff to fix the pentium f00f bug */
extern vaddr_t pentium_idt_vaddr;
#endif

/*
 * local prototypes
 */

static struct pv_entry	*pmap_add_pvpage __P((struct pv_page *, boolean_t));
static struct vm_page	*pmap_alloc_ptp __P((struct pmap *, int, boolean_t));
static struct pv_entry	*pmap_alloc_pv __P((struct pmap *, int)); /* see codes below */
#define ALLOCPV_NEED	0	/* need PV now */
#define ALLOCPV_TRY	1	/* just try to allocate, don't steal */
#define ALLOCPV_NONEED	2	/* don't need PV, just growing cache */
static struct pv_entry	*pmap_alloc_pvpage __P((struct pmap *, int));
static void		 pmap_enter_pv __P((struct pv_head *,
					    struct pv_entry *, struct pmap *,
					    vaddr_t, struct vm_page *));
static void		 pmap_free_pv __P((struct pmap *, struct pv_entry *));
static void		 pmap_free_pvs __P((struct pmap *, struct pv_entry *));
static void		 pmap_free_pv_doit __P((struct pv_entry *));
static void		 pmap_free_pvpage __P((void));
static struct vm_page	*pmap_get_ptp __P((struct pmap *, int, boolean_t));
static boolean_t	 pmap_is_curpmap __P((struct pmap *));
static pt_entry_t	*pmap_map_ptes __P((struct pmap *));
static struct pv_entry	*pmap_remove_pv __P((struct pv_head *, struct pmap *,
					     vaddr_t));
static void		 pmap_do_remove __P((struct pmap *, vaddr_t,
					     vaddr_t, int));
static boolean_t	 pmap_remove_pte __P((struct pmap *, struct vm_page *,
					      pt_entry_t *, vaddr_t, int));
static void		 pmap_remove_ptes __P((struct pmap *,
					       struct pmap_remove_record *,
					       struct vm_page *, vaddr_t,
					       vaddr_t, vaddr_t, int));
#define PMAP_REMOVE_ALL		0	/* remove all mappings */
#define PMAP_REMOVE_SKIPWIRED	1	/* skip wired mappings */

static struct vm_page	*pmap_steal_ptp __P((struct uvm_object *,
					     vaddr_t));
static vaddr_t		 pmap_tmpmap_pa __P((paddr_t));
static pt_entry_t	*pmap_tmpmap_pvepte __P((struct pv_entry *));
static void		 pmap_tmpunmap_pa __P((void));
static void		 pmap_tmpunmap_pvepte __P((struct pv_entry *));
#if 0
static boolean_t	 pmap_transfer_ptes __P((struct pmap *,
					 struct pmap_transfer_location *,
					 struct pmap *,
					 struct pmap_transfer_location *,
					 int, boolean_t));
#endif
static boolean_t	 pmap_try_steal_pv __P((struct pv_head *,
						struct pv_entry *,
						struct pv_entry *));
static void		 pmap_unmap_ptes __P((struct pmap *));

/*
 * p m a p   i n l i n e   h e l p e r   f u n c t i o n s
 */

/*
 * pmap_is_curpmap: is this pmap the one currently loaded [in %cr3]?
 *		of course the kernel is always loaded
 */

__inline static boolean_t
pmap_is_curpmap(pmap)
	struct pmap *pmap;
{
	return((pmap == pmap_kernel()) ||
	       (pmap->pm_pdirpa == (paddr_t) rcr3()));
}

/*
 * pmap_tmpmap_pa: map a page in for tmp usage
 *
 * => returns with pmap_tmpptp_lock held
 */

__inline static vaddr_t
pmap_tmpmap_pa(pa)
	paddr_t pa;
{
	simple_lock(&pmap_tmpptp_lock);
#if defined(DIAGNOSTIC)
	if (*ptp_pte)
		panic("pmap_tmpmap_pa: ptp_pte in use?");
#endif
	*ptp_pte = PG_V | PG_RW | pa;		/* always a new mapping */
	return((vaddr_t)ptpp);
}

/*
 * pmap_tmpunmap_pa: unmap a tmp use page (undoes pmap_tmpmap_pa)
 *
 * => we release pmap_tmpptp_lock
 */

__inline static void
pmap_tmpunmap_pa()
{
#if defined(DIAGNOSTIC)
	if (!pmap_valid_entry(*ptp_pte))
		panic("pmap_tmpunmap_pa: our pte invalid?");
#endif
	*ptp_pte = 0;		/* zap! */
	pmap_update_pg((vaddr_t)ptpp);
	simple_unlock(&pmap_tmpptp_lock);
}
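/*
 * for illustration: typical use of the pair above -- temporarily map a
 * physical page at the reserved ptpp VA, touch it, then drop the mapping
 * (pmap_tmpmap_pa() returns with pmap_tmpptp_lock held; pmap_tmpunmap_pa()
 * releases it).
 */
#if 0	/* sketch only, not compiled */
	vaddr_t va;

	va = pmap_tmpmap_pa(pa);		/* map "pa" for tmp usage */
	memset((void *)va, 0, PAGE_SIZE);	/* e.g. zero the page */
	pmap_tmpunmap_pa();			/* zap PTE, flush TLB, unlock */
#endif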
/*
 * pmap_tmpmap_pvepte: get a quick mapping of a PTE for a pv_entry
 *
