
📄 vm_mem.c

📁 Source code of Digital's Unix operating system for the VAX, version 4.2
💻 C
📖 Page 1 of 3
#ifndef lint
static	char	*sccsid = "@(#)vm_mem.c	4.1	(ULTRIX)	7/2/90";
#endif lint
/************************************************************************
 *									*
 *			Copyright (c) 1986 by				*
 *		Digital Equipment Corporation, Maynard, MA		*
 *			All rights reserved.				*
 *									*
 *   This software is furnished under a license and may be used and	*
 *   copied  only  in accordance with the terms of such license and	*
 *   with the  inclusion  of  the  above  copyright  notice.   This	*
 *   software  or  any  other copies thereof may not be provided or	*
 *   otherwise made available to any other person.  No title to and	*
 *   ownership of the software is hereby transferred.			*
 *									*
 *   This software is  derived  from  software  received  from  the	*
 *   University    of   California,   Berkeley,   and   from   Bell	*
 *   Laboratories.  Use, duplication, or disclosure is  subject  to	*
 *   restrictions  under  license  agreements  with  University  of	*
 *   California and with AT&T.						*
 *									*
 *   The information in this software is subject to change  without	*
 *   notice  and should not be construed as a commitment by Digital	*
 *   Equipment Corporation.						*
 *									*
 *   Digital assumes no responsibility for the use  or  reliability	*
 *   of its software on equipment which is not supplied by Digital.	*
 *									*
 ************************************************************************/
/*
 *
 *   Modification history:
 *
 * 7-Jun-88  -- jaa
 *	Fixed get_sys_ptes() to round up to page boundary.
 *
 * 27-Apr-88 -- jaa
 *	Linted file, removed km_debug printf's in km_alloc
 *	added range checking in km_alloc/km_free
 *	corrected error leg in km_alloc that didn't release map resources
 *	km_alloc now sleeps on kmemmap if no map resources available
 *	and km_free wakes up anybody sleeping on kmemmap
 *
 * 02 Feb 88 -- jaa
 *	Moved M_requests[] into KMEMSTATS and made it a circular list
 *	panic string corrections
 *
 * 14 Dec 87 -- jaa
 *	Integrated new km_alloc/km_free code
 *
 * 04 Sep 87 -- depp
 *      A number of changes, all involved with removing the xflush_free_text()
 *      algorithm, and replacing it with an array (x_hcmap) to hold the
 *      indexes of remote cmap entries that are hashed.  Maunhash() was
 *      added to be rid of those silly "psuedo-munhash" code fragments.
 *      Also, mhash(), munhash(), maunhash(), and mpurge() now call macros
 *      to manipulate the remote hash array  (x_hcmap) in the text struct.
 *
 * 09 Jul 87 -- depp
 *	Removed conditionals around the collection of kernel memory stats.
 *	They will now be collected, and may be reported via vmstat -K.
 *
 * 12 Jan 86 -- depp
 *	Added changes to 2 routines, memfree() and vcleanu().  Memfree()
 *	will now check to see if the "u" pages list (see 11 Mar 86 comment
 *	below) is getting too long, if so, the list is flushed before new
 *	pages are added to it.	Vclearu() now does a wakeup() if memory
 *	has been low.
 *
 * 15 Dec 86 -- depp
 *	Changed kmemall() so that if the resource map is exhaused and
 *	KM_SLEEP set, then the process will sleep (and cycle) on lbolt.
 *	This means that kmemall() is guaranteed to return successfully
 *	if KM_SLEEP is set.
 *
 * 11 Sep 86 -- koehler
 *	gnode name change and more informative printf
 *
 * 27 Aug 86 -- depp
 *	Moved common code in kmemall and memall into a new routine pfclear
 *
 * 29 Apr 86 -- depp
 *	converted to locking macros from calls routines.  Plus added
 *	KM_CONTIG option to kmemall() {see that routine for more information}.
 *
 *	mlock/munlock/mwait have been converted to macros MLOCK/MUNLOCK/MWAIT
 *	and are now defined in /sys/h/vmmac.h.
 *
 *
 * 11 Mar 86 -- depp
 *	Fixed stale kernel stack problem by having "vrelu" and "vrelpt"
 *	indicate to [v]memfree to place "u" pages on a temporary list,
 *	to be cleared by a new routine "vcleanu" (called by pagein).
 *
 * 24 Feb 86 -- depp
 *	Added 6 new routines to this file:
 *		pfalloc/pffree		physical page allocator/deallocator
 *		kmemall/kmemfree	System virtual cluster alloc/dealloc
 *		km_alloc/km_free	System virtual block alloc/dealloc
 *	Also, to insure proper sequencing of memory requests, "vmemall" now
 *	raises the IPL whenever "freemem" is referenced.
 *
 * 13 Nov 85 -- depp
 *	Added "cm" parameter to distsmpte call.  This parameter indicates that
 *	the "pg_m" bit is to be cleared in the processes PTEs that are sharing
 *	a data segment.  This replaces the "pg_cm" definition of "pg_alloc"
 *	which could cause a conflict.
 *
 * 11 Nov 85 -- depp
 *	Removed all conditional compiles for System V IPC.
 *
 * 001 - March 11 1985 - Larry Cohen
 *     disable mapped in files so NOFILE can be larger than 32
 *
 *
 * 11 Mar 85 -- depp
 *	Added in System V shared memory support.
 *
 */

#include "../machine/pte.h"
#include "../machine/cpu.h"
#include "../h/param.h"
#include "../h/systm.h"
#include "../h/cmap.h"
#include "../h/dir.h"
#include "../h/user.h"
#include "../h/proc.h"
#include "../h/text.h"
#include "../h/vm.h"
#include "../h/file.h"
#include "../h/gnode.h"
#include "../h/buf.h"
#include "../h/mount.h"
#include "../h/trace.h"
#include "../h/map.h"
#include "../h/kernel.h"
#include "../h/ipc.h"
#include "../h/shm.h"
#include "../h/types.h"
#include "../h/kmalloc.h"

#ifdef vax
#include "../machine/mtpr.h"
#endif vax
#include "../machine/psl.h"

extern struct smem smem[];

#ifdef KMEMSTATS
int km_debug = 0;
#endif KMEMSTATS

/*
 * Allocate memory, and always succeed
 * by jolting page-out daemon
 * so as to obtain page frames.
 * To be used in conjunction with vmemfree().
 */
vmemall(pte, size, p, type)
	register struct pte *pte;
	register int size;
	register struct proc *p;
{
	register int m;
	register int s;

#ifdef mips
	XPRINTF(XPR_VM,"enter vmemall",0,0,0,0);
#endif mips
	if (size <= 0 || size > maxmem)
		panic("vmemall size");
	s = splimp();
	while (size > 0) {
		if (freemem < desfree)
			outofmem();
		while (freemem == 0) {
			sleep((caddr_t)&freemem, PSWP+2);
		}
		m = freemem;
		if (m > size) m = size;	/* m = min of freemem and size */
#ifdef mips
		(void) memall(pte, m, p, type, NULL, V_NOOP);
#endif mips
#ifdef vax
		(void) memall(pte, m, p, type);
#endif vax
		size -= m;
		pte += m;
	}
	if (freemem < desfree)
		outofmem();
	splx(s);
	/*
	 * Always succeeds, but return success for
	 * vgetu and vgetpt (e.g.) which call either
	 * memall or vmemall depending on context.
	 */
	return (1);
}

/*
 * Free valid and reclaimable page frames belonging to the
 * count pages starting at pte.  If a page is valid
 * or reclaimable and locked (but not a system page), then
 * we simply mark the page as c_gone and let the pageout
 * daemon free the page when it is through with it.
 * If a page is reclaimable, and already in the free list, then
 * we mark the page as c_gone, and (of course) don't free it.
 *
 * Determines the largest contiguous cluster of
 * valid pages and frees them in one call to memfree.
 */
vmemfree(pte, count)
	register struct pte *pte;
	register int count;
{
	register struct cmap *c;
	register struct pte *spte;
	register int j;
	int size, pcnt;
	register int flg = KMF_DETACH;

#ifdef mips
	XPRINTF(XPR_VM,"enter vmemfree",0,0,0,0);
#endif mips
	/* Are we deallocating "u" pages or it's PTEs ? */
	if (count < 0) {
		flg = KMF_UAREA;
		count = -count;
	}
	if (count % CLSIZE)
		panic("vmemfree");
	for (size = 0, pcnt = 0; count > 0; pte += CLSIZE, count -= CLSIZE) {
		if (pte->pg_fod == 0 && pte->pg_pfnum) {
			c = &cmap[pgtocm(pte->pg_pfnum)];
			if (c->c_lock && c->c_type != CSYS) {
				for (j = 0; j < CLSIZE; j++)
					*(int *)(pte+j) &= PG_PROT;
				c->c_gone = 1;
				pcnt += CLSIZE;
				goto free;
			}
			if (c->c_free) {
				for (j = 0; j < CLSIZE; j++)
					*(int *)(pte+j) &= PG_PROT;
#ifdef vax
				if (c->c_type == CTEXT)
					distpte(&text[c->c_ndx],
						(int)c->c_page, pte);
				/* SHMEM */
				else if (c->c_type == CSMEM)
					distsmpte(&smem[c->c_ndx],
						(int)c->c_page, pte,
						PG_NOCLRM);
#endif vax
				c->c_gone = 1;
				goto free;
			}
			pcnt += CLSIZE;
			if (size == 0)
				spte = pte;
			size += CLSIZE;
			continue;
		}
#ifdef notdef /* 001 */
		/* Don't do anything with mapped ptes */
		if (pte->pg_fod && pte->pg_v)
			goto free;
#endif
		if (pte->pg_fod) {
#ifdef notdef /* 001 */
			fileno = ((struct fpte *)pte)->pg_fileno;
			if (fileno < NOFILE)
				panic("vmemfree vread");
#endif notdef
			for (j = 0; j < CLSIZE; j++)
				*(int *)(pte+j) &= PG_PROT;
		}
free:
		if (size) {
			memfree(spte, size, flg);
			size = 0;
		}
	}
	if (size)
		memfree(spte, size, flg);
	return (pcnt);
}

/*
 * Unlink a page frame from the free list -
 *
 * Performed if the page being reclaimed
 * is in the free list.
 */
munlink(pf)
	unsigned pf;
{
	register int next, prev;

#ifdef mips
	XPRINTF(XPR_VM,"enter munlink",0,0,0,0);
#endif mips
	next = cmap[pgtocm(pf)].c_next;
	prev = cmap[pgtocm(pf)].c_prev;
	cmap[prev].c_next = next;
	cmap[next].c_prev = prev;
	cmap[pgtocm(pf)].c_free = 0;
	if (freemem < minfree)
		outofmem();
	freemem -= CLSIZE;
}

/*
 *****************************************************************************
 *****************************************************************************
 *
 * Function:
 *
 *	pfclear -- clears CMAP entry (on the free list) of encumberances
 *		   so that it may be reallocated.
 *
 * Function description:
 *
 *	This function is used by memall() to provide a common mechanism
 *	to clear CMAP entries prior to reallocation.
 *
 * Interface:
 *
 *	PFCLEAR(c);
 *	  struct cmap *c;	 CMAP entry to be cleared
 *
 * Return Value:
 *
 *	None
 *
 * Error Handling:
 *
 *	Panics only
 *
 * Panics:
 *
 *
 *	"pfclear: ecmap"
 *		The cmap entry should be on a hash list, but isn't.
 *
 *	"pfclear: mfind"
 *		Mfind indicates (by non-0 return) that this cmap entry has
 *		not been unhashed.
 *
 *****************************************************************************
 *****************************************************************************
 */
pfclear(c)
 register struct cmap *c;
{
	register int index = c->c_ndx;
	register struct proc *rp;
	struct pte *rpte;

#ifdef mips
	XPRINTF(XPR_VM,"enter pfclear",0,0,0,0);
#endif mips
	/* If reclaimable, then clear associated PTEs */
	if (c->c_gone == 0 && c->c_type != CSYS) {
		if (c->c_type == CTEXT)
			rp = text[index].x_caddr;
		else
			rp = &proc[index];
		if(c->c_type != CSMEM)
			while (rp->p_flag & SNOVM)
				rp = rp->p_xlink;
		switch (c->c_type) {
		case CTEXT:
			rpte = tptopte(rp, c->c_page);
			break;
		case CDATA:
			rpte = dptopte(rp, c->c_page);
			break;
		case CSMEM: /* SHMEM */
			rpte = smem[index].sm_ptaddr +
						c->c_page;
			break;
		case CSTACK:
			rpte = sptopte(rp, c->c_page);
			break;
		}
		zapcl(rpte, pg_pfnum) = 0;
#ifdef vax
		if (c->c_type == CTEXT)
			distpte(&text[index], (int)c->c_page,
							rpte);
		else if (c->c_type == CSMEM)
			distsmpte(&smem[index],
					(int)c->c_page, rpte,
					PG_NOCLRM);
#endif vax
	}
	/* If on CMAP hash lists; then remove */
	if (c->c_blkno)
	        maunhash(c);
}

#ifdef mips
int class_hits = 0, class_misses = 0, class_ends = 0, class_tries = 0;
#endif mips

/*
 * Allocate memory -
 *
 * The free list appears as a doubly linked list
 * in the core map with cmap[0] serving as a header.
 */
#ifdef	mips
memall(pte, size, p, type, v, flag)
#endif	mips
#ifdef	vax
memall(pte, size, p, type)
#endif	vax
	register struct pte *pte;
	int size;
	struct proc *p;
#ifdef mips
	unsigned int v;
	int flag;
#endif mips
{
	register struct cmap *c;
	register int i, j;
	register unsigned pf;
	register int next, curpos;
	int s;
#ifdef mips
	unsigned mask;
	int class_match;
#endif mips

#ifdef mips
	XPRINTF(XPR_VM,"enter memall",0,0,0,0);
#endif mips
	if (size % CLSIZE)
		panic("memall");
	s = splimp();
	/* Insure that enough free memory exists to make allocation */
	if (size > freemem) {
		splx(s);
		return (0);
	}
	trace(TR_MALL, size, u.u_procp->p_pid);
	/* Allocation loop: by page cluster */
	for (i = size; i > 0; i -= CLSIZE) {
#ifdef mips
		if(flag == V_CACHE) {
			if (type == CTEXT) {
			    class_match =
				(v + (u.u_procp->p_textp - text)) & icachemask;
			    mask = icachemask;
			} else {
			    class_match = (v + u.u_procp->p_pid) & dcachemask;
			    mask = dcachemask;
			}
			curpos = cmap[CMHEAD].c_next;
			/* nasty constant! */
			for (j = 0 ; j < 64 ; j++) {
				/* walk down list looking for good page */
				if (curpos == CMHEAD) { /* end of list */
					curpos = cmap[CMHEAD].c_next; /*fail*/
					class_ends++;
					goto out;
				}
				if ((cmtopg(curpos) & mask) == class_match){
					class_hits++;
					goto out;
				}
				curpos = cmap[curpos].c_next;
			}
			/* nasty constant! */
			if (j == 64) {
				curpos = cmap[CMHEAD].c_next; /* fail */
				class_misses++;
			}
		} else
#endif mips
		/* Retrieve next free entry from TOP of list */
		curpos = cmap[CMHEAD].c_next;
#ifdef mips
out:
#endif mips
		c = &cmap[curpos];
		freemem -= CLSIZE;
		next = c->c_next;
#ifdef mips
		if (flag == V_CACHE) {
		/* may have taken it not from the head */
			cmap[cmap[curpos].c_prev].c_next = next;
			cmap[next].c_prev = cmap[curpos].c_prev;
		} else {
#endif mips
		cmap[CMHEAD].c_next = next;
		cmap[next].c_prev = CMHEAD;
#ifdef mips
		}
#endif mips
		if(c->c_free == 0)
			panic("dup mem alloc");
		if (cmtopg(curpos) > maxfree)
			panic("bad mem alloc");
		/*
		 * If reclaimable, then clear encumberances
		 */
		pfclear(c);
		/*
		 * Initialize CMAP entry
		 */
		switch (type) {
		case CSYS:
			c->c_ndx = p->p_ndx;
			break;
		case CTEXT:
			c->c_page = vtotp(p, ptetov(p, pte));
			c->c_ndx = p->p_textp - &text[0];
			break;
		case CDATA:
			c->c_page = vtodp(p, ptetov(p, pte));
			c->c_ndx = p->p_ndx;
			break;
		case CSMEM: /* SHMEM */
			c->c_page = pte - ((struct smem *)p)->sm_ptaddr;
			c->c_ndx = (struct smem *)p - &smem[0];
			break;
		case CSTACK:
			c->c_page = vtosp(p, ptetov(p, pte));
			c->c_ndx = p->p_ndx;
			break;
		}

		pf = cmtopg(curpos);
		for (j = 0; j < CLSIZE; j++) {
#ifdef mips
			if (type==CTEXT)
				clean_icache(PHYS_TO_K0(ptob(pf)), NBPG);
			*(int *)pte++ = pf++ << PTE_PFNSHIFT;
#endif mips
#ifdef vax
			*(int *)pte++ = pf++;
#endif vax
		}
		c->c_free = 0;
		c->c_gone = 0;
		if (c->c_intrans || c->c_want)
			panic("memall intrans|want");
		c->c_lock = 1;
		c->c_type = type;
	}
	splx(s);
	return (size);
}

/*
 * Free memory -
 *
 * The page frames being returned are inserted
 * to the head/tail of the free list depending
 * on whether there is any possible future use of them,
 * unless "flg" indicates that the page frames should be
 * temporily stored on the "u" list until after the
 * context switch occurs.  In this case, the cmap entries
 * are deallocated as free, but place on the list {ucmap,eucmap}
 * until "vcleanu" is called to push them onto the free list.
 *
 * If the freemem count had been zero,
 * the processes sleeping for memory
 * are awakened.
 */
memfree(pte, size, flg)
	register struct pte *pte;
	register int size;
{
	register int i, j, prev, next;
	register struct cmap *c;
#ifdef mips
	struct text *xp;
#endif mips
	int s;
	void vcleanu();

#ifdef mips
	XPRINTF(XPR_VM,"enter memfree",0,0,0,0);
#endif mips
	if (size % CLSIZE)
		panic("memfree");
	if (freemem < CLSIZE * KLMAX)
		wakeup((caddr_t)&freemem);
	while (size > 0) {
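The listing breaks off here, partway through memfree() (this is only page 1 of 3 of the file). As an aside for readers unfamiliar with the structure that memall() and munlink() manipulate, the sketch below is a minimal, standalone illustration of a circular doubly linked free list threaded through an array with a sentinel header entry, which is the role cmap[0] (CMHEAD) plays in the comments above. It is not part of vm_mem.c: the entry layout, the names (centry, initfree, unlinkfree, allocfree) and the frame count are hypothetical, and real cmap entries carry many more fields. Using index 0 as a sentinel means the empty list needs no special case, since the header simply points back at itself.

/*
 * Illustrative sketch only -- NOT part of the ULTRIX source.
 * Models a sentinel-headed circular doubly linked free list whose
 * links are stored as array indexes, in the style of the cmap free list.
 */
#include <stdio.h>

#define NFRAMES	8
#define CMHEAD	0		/* index of the sentinel header entry */

struct centry {
	int c_next;		/* index of next free entry */
	int c_prev;		/* index of previous free entry */
	int c_free;		/* entry is on the free list */
};

static struct centry cm[NFRAMES + 1];

/* Link every entry onto the free list, behind the sentinel. */
static void
initfree(void)
{
	int i;

	cm[CMHEAD].c_next = cm[CMHEAD].c_prev = CMHEAD;
	for (i = 1; i <= NFRAMES; i++) {
		cm[i].c_next = CMHEAD;
		cm[i].c_prev = cm[CMHEAD].c_prev;
		cm[cm[CMHEAD].c_prev].c_next = i;
		cm[CMHEAD].c_prev = i;
		cm[i].c_free = 1;
	}
}

/* Unlink entry i from wherever it sits, as munlink() does for a reclaimed frame. */
static void
unlinkfree(int i)
{
	int next = cm[i].c_next, prev = cm[i].c_prev;

	cm[prev].c_next = next;
	cm[next].c_prev = prev;
	cm[i].c_free = 0;
}

/* Take the entry at the head of the list, as memall() does in the common case. */
static int
allocfree(void)
{
	int i = cm[CMHEAD].c_next;

	if (i == CMHEAD)
		return (-1);	/* list is empty */
	unlinkfree(i);
	return (i);
}

int
main(void)
{
	int i;

	initfree();
	unlinkfree(3);			/* reclaim frame 3 out of order */
	while ((i = allocfree()) != -1)
		printf("allocated frame %d\n", i);
	return (0);
}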
