
vm_page.c

Digital's Unix operating system VAX 4.2 source code
C
Page 1 of 3
#ifndef lint
static	char	*sccsid = "@(#)vm_page.c	4.1	(ULTRIX)	7/2/90";
#endif lint
/************************************************************************
 *									*
 *			Copyright (c) 1986 by				*
 *		Digital Equipment Corporation, Maynard, MA		*
 *			All rights reserved.				*
 *									*
 *   This software is furnished under a license and may be used and	*
 *   copied  only  in accordance with the terms of such license and	*
 *   with the  inclusion  of  the  above  copyright  notice.   This	*
 *   software  or  any  other copies thereof may not be provided or	*
 *   otherwise made available to any other person.  No title to and	*
 *   ownership of the software is hereby transferred.			*
 *									*
 *   This software is  derived  from  software  received  from  the	*
 *   University    of   California,   Berkeley,   and   from   Bell	*
 *   Laboratories.  Use, duplication, or disclosure is  subject  to	*
 *   restrictions  under  license  agreements  with  University  of	*
 *   California and with AT&T.						*
 *									*
 *   The information in this software is subject to change  without	*
 *   notice  and should not be construed as a commitment by Digital	*
 *   Equipment Corporation.						*
 *									*
 *   Digital assumes no responsibility for the use  or  reliability	*
 *   of its software on equipment which is not supplied by Digital.	*
 *									*
 ************************************************************************/
/*
 *
 *   Modification history:
 *
 * 02 Feb 88 -- jaa
 *	Fixed checkpage so that if a proc is locked or exiting, it
 *	is not eligible for paging
 *
 * 04 Sep 87 -- depp
 *      Due to the demise of xflush_free_text(), checkpage no longer has
 *      to flush remote hashed pages on pageout.
 *
 * 14 Jul 87 -- cb
 *	Added in rr's changes.
 *
 * 20 Feb 87 -- depp
 *	Added check for GTRC flag in gnode.  If set, the pagein() routine
 * 	will not hash the page or look for the page in the hash lists.
 *
 * 21 Jan 87 -- jaw
 *	performance fixes to syscall.
 *
 * 15 Jan 87 -- depp
 *	Fixed SM bug in pagein().
 *
 * 15 Dec 86 -- depp
 *	Fixed problem with PG_M not properly propagating to all attached
 *	text PTEs (pagein()).
 *
 * 09 Oct 86 -- depp
 *	Changed checkpage() to remove from the hash lists any pages that
 *	are from a remote file on pageout.  Also, fixed problem in shared
 *	memory - on pagein, if a SM page is intrans, it was sleeping on an
 *	address in the proc structure, rather than a global address.
 *
 * 11 Sep 86 -- koehler
 *	a few mount ops became macros, gnode name change
 *
 * 18 Jun 86 -- depp
 *      Added shared memory ZFOD support.
 *
 * 29 Apr 86 -- depp
 *	converted to lock macros from calls routines.
 *
 * 02-Apr-86 -- jrs
 *	Add set of runrun so that halt of slaves will really take
 *	effect when we want it to
 *
 * 02 Apr 86 -- depp
 *	Added in performance enhancements from 4.3UCB.  The first is the
 *	2 hand clock algorithm for large memory configurations.  The second
 *	is klustering of reads from a gnode.
 *
 * 18-Mar-86 -- jrs
 *	Clean up cpu premption to use new intrcpu instead of intrslave
 *
 * 11 Mar 86 -- depp
 *	Added conditional call to "vcleanu" (vm_mem.c) to "pagein" routine
 *	to remove any "u" areas found on the "u" list to the freelist.
 *
 * 24 Feb 86 -- depp
 *	Moved the setting of the PG_M bit in the routine "pagein" until
 *	just after the page has be read in.  This move was necessary because
 *	the SRM considers the PG_M bit set and the PG_V reset as an invalid
 *	combination.
 *
 * 13 Nov 85 -- depp
 *	Added "cm" parameter to distsmpte call.  This parameter indicates that
 *	the "pg_m" bit is to be cleared in the processes PTEs that are sharing
 *	a data segment.  This replaces the "pg_cm" definition of "pg_alloc"
 *	which could cause a conflict.
 *
 * 11 Nov 85 -- depp
 *	Removed all conditional compiles for System V IPC.
 *
 * 30 Sep 85 -- depp
 *	Added checks for page locking into "pageout"
 *
 * 19 Jul 85 -- depp
 *	Added the setting of pg_cm (for shared memory only) if the
 *	page is to be pushed, so that in distsmpte, the attached process
 *	pte's will have their pg_m field properly cleared.
 *
 * 001 - March 11 1985 - Larry Cohen
 *     disable mapped in files so NOFILE can be larger than 32
 *
 *
 * 11 Mar 85 -- depp
 *	Added in System V shared memory.
 *
 */

#include "../machine/reg.h"
#include "../machine/pte.h"
#include "../h/param.h"
#include "../h/systm.h"
#include "../h/mount.h"
#include "../h/gnode.h"
#include "../h/dir.h"
#include "../h/user.h"
#include "../h/proc.h"
#include "../h/buf.h"
#include "../h/text.h"
#include "../h/cmap.h"
#include "../h/vm.h"
#include "../h/file.h"
#include "../h/trace.h"
#include "../h/ipc.h"
#include "../h/shm.h"
#include "../h/cpudata.h"
#ifdef mips
#include "../machine/cpu.h"
#endif mips

extern struct smem smem[];
extern struct sminfo sminfo;
extern int extracpu;
#ifdef GFSDEBUG
extern short GFS[];
#endif

int	nohash = 0;		/* turn on/off hashing */
int	nobufcache = 1;		/* turn on/off buf cache for data */

/*
 * Handle a page fault.
 *
 * Basic outline
 *	If page is allocated, but just not valid:
 *		Wait if intransit, else just revalidate
 *		Done
 *	Compute <dev,bn> from which page operation would take place
 *	If page is text page, and filling from file system or swap space:
 *		If in free list cache, reattach it and then done
 *	Allocate memory for page in
 *		If block here, restart because we could have swapped, etc.
 *	Lock process from swapping for duration
 *	Update pte's to reflect that page is intransit.
 *	If page is zero fill on demand:
 *		Clear pages and flush free list cache of stale cacheing
 *		for this swap page (e.g. before initializing again due
 *		to 407/410 exec).
 *	If page is fill from file and in buffer cache:
 *		Copy the page from the buffer cache.
 *	If not a fill on demand:
 *		Determine swap address and cluster to page in
 *	Do the swap to bring the page in
 *	Instrument the pagein
 *	After swap validate the required new page
 *	Leave prepaged pages reclaimable (not valid)
 *	Update shared copies of text page tables
 *	Complete bookkeeping on pages brought in:
 *		No longer intransit
 *		Hash text pages into core hash structure
 *		Unlock pages (modulo raw i/o requirements)
 *		Flush translation buffer
 *	Process pagein is done
 */
#ifdef TRACE
#define	pgtrace(e)	trace(e,v,u.u_procp->p_pid)
#else
#define	pgtrace(e)
#endif

int	preptofree = 1;		/* send pre-paged pages to free list */
int	buf_pagein_cnt = 0;
int	buf_pagein_bytes = 0;

pagein(virtaddr, dlyu)
	unsigned virtaddr;
	int dlyu;
{
	register struct proc *p;
	register struct pte *pte;
	register u_int v;
	register int i, j;
	register struct cmap *c;
	unsigned pf;
	int type, fileno;
	struct pte opte;
	dev_t dev;
	int klsize;
	unsigned vsave;
	int smindex;		/* SHMEM */
	struct smem *sp;	/* SHMEM */
	daddr_t bn, bncache, bnswap;
	int si, sk;
	int use_buffer_cache = 0;
	int klmax = KLMAX;	/* maybe less if paging in thru buffer cache */
#ifdef PGINPROF
#ifdef vax
#include "machine/mtpr.h"
#endif vax
	int otime, olbolt, oicr, a, s;
#ifdef vax
	s = spl6();
#endif vax
#ifdef mips
	s = splclock();
	XPRINTF(XPR_VM,"enter pagein",0,0,0,0);
#endif mips
	otime = time, olbolt = lbolt, oicr = mfpr(ICR);
#endif PGINPROF

	cnt.v_faults++;
	/*
	 * Classify faulted page into a segment and get a pte
	 * for the faulted page.
	 */
	vsave = v = clbase(btop(virtaddr));
	p = u.u_procp;
	if (isatsv(p, v)) {
		type = CTEXT;
		pte = tptopte(p, vtotp(p, v));
	} else if (isadsv(p, v)) {
		type = CDATA;
		pte = dptopte(p, vtodp(p, v));
#ifdef vax
		/* begin SHMEM */
		if (vtodp(p, v) >= p->p_dsize) {
			register int xp;

			type = CSMEM;
			if (p->p_sm == (struct p_sm *) NULL) {
				panic("pagin: p_sm");
			}
			/* translate the process data-space PTE	*/
			/* to the non-swapped shared memory PTE	*/
			xp = vtotp(p, v);
			if (p->p_sm != (struct p_sm *) NULL) {
				for (i = 0; i < sminfo.smseg; i++) {
					if (p->p_sm[i].sm_p == NULL)
						continue;
					if (xp >= p->p_sm[i].sm_spte  &&
					   xp < p->p_sm[i].sm_spte +
					   btoc(p->p_sm[i].sm_p->sm_size))
						break;
				}
				if (i >= sminfo.smseg)
					panic("pagein SMEM");
				sp = p->p_sm[i].sm_p;
				pte = sp->sm_ptaddr +
					(xp - p->p_sm[i].sm_spte);
				smindex = i;
				if (sp->sm_perm.mode & IPC_SYSTEM)
					panic("pagein: Attempt to pagein kernel/user shared memory page");
			}
		}
		/* end SHMEM */
#endif vax
#ifdef mips
	} else if (isasmsv(p, v, &smindex)) {
		struct p_sm *psm = &p->p_sm[smindex];

		type = CSMEM;
		sp = psm->sm_p;
		if (sp->sm_perm.mode & IPC_SYSTEM)
			panic("pagein: Attempt to pagein kernel/user shared memory page");
		pte = sp->sm_ptaddr + vtosmp(psm,v);
		XPRINTF(XPR_SM,"pagein: got one sp 0x%x pte 0x%x *pte 0x%x",
			sp, pte, *(int *) pte, 0);
#endif mips
	} else {
		type = CSTACK;
		pte = sptopte(p, vtosp(p, v));
	}
	if (pte->pg_v)
		return;
#ifdef notdef
		panic("pagein");
#endif notdef
#ifdef DEPPDEBUG
	if (*(int *) pte == 0)
		panic("pagein: *pte == 0");
#endif DEPPDEBUG
	/*
	 * If page is reclaimable, reclaim it.
	 * If page is text and intransit, sleep while it is intransit,
	 * If it is valid after the sleep, we are done.
	 * Otherwise we have to start checking again, since page could
	 * even be reclaimable now (we may have swapped for a long time).
	 */
restart:
	/* if any free "u" pages to be placed on free list, do it now */
	if (nucmap)
		vcleanu();
	if (pte->pg_fod == 0 && pte->pg_pfnum) {
		if (type == CTEXT && cmap[pgtocm(pte->pg_pfnum)].c_intrans) {
			pgtrace(TR_INTRANS);
			sleep((caddr_t)p->p_textp, PSWP+1);
			pgtrace(TR_EINTRANS);
			pte = vtopte(p, v);
			if (pte->pg_v) {
valid:
				if (dlyu) {
					c = &cmap[pgtocm(pte->pg_pfnum)];
					if (c->c_lock) {
						c->c_want = 1;
						sleep((caddr_t)c, PSWP+1);
						goto restart;
					}
					c->c_lock = 1;
				}
#ifdef mips
				newptes(p, v, CLSIZE);
#endif mips
#ifdef vax
				newptes(pte, v, CLSIZE);
#endif vax
				cnt.v_intrans++;
				return;
			}
			goto restart;
		}
		/* begin SHMEM */
		if (type == CSMEM  &&
				cmap[pgtocm(pte->pg_pfnum)].c_intrans) {
			pgtrace(TR_INTRANS);
			sleep((caddr_t)&sp->sm_flag, PSWP+1);
			pgtrace(TR_EINTRANS);
			/* recalculating the PTE is currently	*/
			/* not necessary because the SMEM page	*/
			/* tables are "wired-down". When (if)	*/
			/* the SMEM is generalized to allow the	*/
			/* page table to be swapped then 	*/
			/* recalculation will be necessary.	*/
			if (pte->pg_v) {
				if (dlyu) {
					c = &cmap[pgtocm(pte->pg_pfnum)];
					if (c->c_lock) {
						c->c_want = 1;
						sleep((caddr_t)c,
								PSWP+1);
						goto restart;
					}
					c->c_lock = 1;
				}
#ifdef mips
				newptes(p, v, CLSIZE);
#endif mips
#ifdef vax
				newptes(pte, v, CLSIZE);
#endif vax
				cnt.v_intrans++;
				return;
			}
			goto restart;
		}
		/* end SHMEM */
		/*
		 * If page is in the free list, then take
		 * it back into the resident set, updating
		 * the size recorded for the resident set.
		 */
		si = splimp();
		if (cmap[pgtocm(pte->pg_pfnum)].c_free) {
			pgtrace(TR_FRECLAIM);
			munlink(pte->pg_pfnum);
			cnt.v_pgfrec++;
			if (type == CTEXT) {
				p->p_textp->x_rssize += CLSIZE;
			/*  SHMEM */
			} else if (type == CSMEM)
				sp->sm_rssize += CLSIZE;
			else
				p->p_rssize += CLSIZE;
		} else
			pgtrace(TR_RECLAIM);
		splx(si);
		pte->pg_v = 1;
		if (anycl(pte, pg_m))
			pte->pg_m = 1;
		distcl(pte);
#ifdef vax
		if (type == CTEXT)
			distpte(p->p_textp, vtotp(p, v), pte);
		else if (type == CSMEM)		/* SHMEM */
			distsmpte(sp,
					vtotp(p, v) -
					p->p_sm[smindex].sm_spte,
					pte, PG_NOCLRM);
#endif vax
		u.u_ru.ru_minflt++;
		cnt.v_pgrec++;
		if (dlyu) {
			c = &cmap[pgtocm(pte->pg_pfnum)];
			if (c->c_lock) {
				c->c_want = 1;
				sleep((caddr_t)c, PSWP+1);
				goto restart;
			}
			c->c_lock = 1;
		}
#ifdef mips
		newptes(p, v, CLSIZE);
#endif mips
#ifdef vax
		newptes(pte, v, CLSIZE);
#endif vax
#ifdef PGINPROF
		a = vmtime(otime, olbolt, oicr);
		rectime += a;
		if (a >= 0)
			vmfltmon(rmon, a, rmonmin, rres, NRMON);
		splx(s);
#endif
		return;
	}
#ifdef PGINPROF
	splx(s);
#endif
	/*
	 * <dev,bn> is where data comes from/goes to.
	 * <dev,bncache> is where data is cached from/to.
	 * <swapdev,bnswap> is where data will eventually go.
	 */
	if (pte->pg_fod == 0) {
		fileno = -1;
		bnswap = bncache = bn = vtod(p, v, &u.u_dmap, &u.u_smap);
		dev = swapdev;
	} else {
		fileno = ((struct fpte *)pte)->pg_fileno;
#ifdef mips
		bn = PG_BLKNO(pte);
#endif mips
#ifdef vax
		bn = ((struct fpte *)pte)->pg_blkno;
#endif vax
		bnswap = vtod(p, v, &u.u_dmap, &u.u_smap);
		if (fileno > PG_FMAX)
			panic("pagein pg_fileno");
		if (fileno == PG_FTEXT) {
			if (p->p_textp == 0)
				panic("pagein PG_FTEXT");
			dev = p->p_textp->x_gptr->g_dev;
			bncache = bn;
		} else if (fileno == PG_FZERO) {
			dev = swapdev;
			bncache = bnswap;
		}
	}
	klsize = 1;
	opte = *pte;
	/*
	 * Check for text detached but in free list.
	 * This can happen only if the page is filling
	 * from a gnode or from the swap device, (e.g. not when reading
	 * in 407/410 execs to a zero fill page.)
	 */
	if (type == CTEXT && fileno != PG_FZERO && !nohash &&
				((p->p_textp->x_gptr->g_flag & GTRC) == 0)) {
		si = splimp();
		while ((c = mfind(dev, bncache, p->p_textp->x_gptr)) != 0) {
			if (c->c_lock == 0)
				break;
			MWAIT(c);
		}
		if (c) {
			if (c->c_type != CTEXT || c->c_gone == 0 ||
			    c->c_free == 0)
				panic("pagein mfind");
			p->p_textp->x_rssize += CLSIZE;
			/*
			 * Following code mimics memall().
			 */
			pf = cmtopg(c - cmap);	/* moved by rr from while loop*/
			munlink(pf);
			for (j = 0; j < CLSIZE; j++) {
#ifdef vax
				*(int *)pte = pf++;
#endif vax
#ifdef mips
				*(int *)pte = (pf++<<PTE_PFNSHIFT);
#endif mips
				pte->pg_prot = opte.pg_prot;
				pte++;
			}
			pte -= CLSIZE;
			c->c_free = 0;
			c->c_gone = 0;
			if (c->c_intrans || c->c_want)
				panic("pagein intrans|want");
			c->c_lock = 1;
			if (c->c_page != vtotp(p, v))
				panic("pagein c_page chgd");
			c->c_ndx = p->p_textp - &text[0];
			if (dev == swapdev) {
				cnt.v_xsfrec++;
				pgtrace(TR_XSFREC);
			} else {
				cnt.v_xifrec++;
				pgtrace(TR_XIFREC);
			}
			cnt.v_pgrec++;
			u.u_ru.ru_minflt++;
			if (dev != swapdev) {
				c = mfind(swapdev, bnswap, p->p_textp->x_gptr);
				if (c)
					munhash(swapdev, bnswap, p->p_textp->x_gptr);
#ifdef mips
				/* force it to backing store to retain clean copy */
				pte->pg_swapm = 1;
#endif mips
#ifdef vax
				pte->pg_m = 1;
#endif vax
			}
			splx(si);
			goto skipswap;
		}
		splx(si);
	}
	/*
	 * Wasn't reclaimable or reattachable.
	 * Have to prepare to bring the page in.
	 * We allocate the page before locking so we will
	 * be swappable if there is no free memory.
	 * If we block we have to start over, since anything
	 * could have happened.
	 */
	sk = splimp();			/* lock memalls until [fod]kluster */
	if (freemem < CLSIZE * KLMAX) {
		pgtrace(TR_WAITMEM);
		while (freemem < CLSIZE * KLMAX)
			sleep((caddr_t)&freemem, PSWP+2);
		pgtrace(TR_EWAITMEM);
		splx(sk);
		if (type != CSMEM)	/* SHMEM */
			pte = vtopte(p, v);
		if (pte->pg_v)
			goto valid;
		goto restart;
	}
	/*
	 * Now can get memory and committed to bringing in the page.
	 * Lock this process, get a page,
	 * construct the new pte, and increment
	 * the (process or text) resident set size.
	 */
	p->p_flag |= SPAGE;
	if (type == CSMEM) {	/* SHMEM */
		i = memall(pte, CLSIZE, sp, type, v, V_CACHE);
#ifdef vax
		pte->pg_alloc = 1;
#endif vax
	}
	else
		i = memall(pte, CLSIZE, p, type, v, V_CACHE);
	if (i == 0)
		panic("pagein: memall");
	pte->pg_prot = opte.pg_prot;
	pf = pte->pg_pfnum;
	cmap[pgtocm(pf)].c_intrans = 1;
	distcl(pte);
	if (type == CTEXT) {
		p->p_textp->x_rssize += CLSIZE;
#ifdef vax
		distpte(p->p_textp, vtotp(p, v), pte);
#endif vax
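The "Basic outline" comment at the top of pagein() above summarizes the fault-handling flow: reclaim the frame if the page is merely invalid, otherwise allocate memory, mark the page in transit, fill it (zero fill, buffer cache, or swap/file I/O), and finally validate the pte. The fragment below is a minimal, self-contained C sketch of that flow for orientation only; it is not part of the ULTRIX source listed above, and every type and helper in it (fake_pte, core_map, alloc_frame, and so on) is hypothetical.

#include <stdbool.h>

struct fake_pte  { bool valid, fill_on_demand, zero_fill; unsigned pfnum; };
struct fake_cmap { bool intrans; };

extern struct fake_cmap core_map[];	/* hypothetical core map, indexed by frame */
extern unsigned alloc_frame(void);	/* hypothetical page-frame allocator       */
extern void     zero_page(unsigned pf);	/* hypothetical zero-fill                  */
extern void     read_page(unsigned pf);	/* hypothetical swap/file read             */

void fake_pagein(struct fake_pte *pte)
{
	if (pte->valid)
		return;			/* another fault already resolved it */

	/* Reclaim path: a frame is still attached, it is just not valid
	 * (e.g. the page sits on the free list); revalidating is cheap. */
	if (!pte->fill_on_demand && pte->pfnum != 0 &&
	    !core_map[pte->pfnum].intrans) {
		pte->valid = true;
		return;
	}

	/* Slow path: allocate a frame and mark it in transit so that
	 * concurrent faults on the same page wait instead of racing. */
	unsigned pf = alloc_frame();
	core_map[pf].intrans = true;

	if (pte->zero_fill)
		zero_page(pf);		/* zero fill on demand           */
	else
		read_page(pf);		/* fill from swap device or file */

	core_map[pf].intrans = false;	/* bookkeeping: no longer in transit */
	pte->pfnum = pf;
	pte->valid = true;		/* fault resolved */
}

The real routine above layers onto this skeleton the text/data/stack/shared-memory classification, clustered (klustered) reads, the free-list text-page reattach via mfind(), and the distributed pte updates for shared text and shared memory.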
