
📄 vmLib.c

📁 The VxWorks system kernel source package. There may be something you need here.
💻 C
📖 Page 1 of 3
    /* set the state of all the global memory we just created */

    for (i = 0; i < numDescArrayElements; i++)
        {
        thisDesc = &pMemDescArray[i];

        if (vmStateSet (&sysVmContext, thisDesc->virtualAddr, thisDesc->len,
                        thisDesc->initialStateMask,
                        thisDesc->initialState) == ERROR)
            return (NULL);
        }

    currentContext = &sysVmContext;
    MMU_CURRENT_SET (sysVmContext.mmuTransTbl);

    if (enable)
        if (MMU_ENABLE (TRUE) == ERROR)
            return (NULL);

    return (&sysVmContext);
    }

/*******************************************************************************
*
* vmContextCreate - create a new virtual memory context (VxVMI Option)
*
* This routine creates a new virtual memory context.  The newly created
* context does not become the current context until explicitly installed by
* a call to vmCurrentSet().  Modifications to the context state (mappings,
* state changes, etc.) may be performed on any virtual memory context, even
* if it is not the current context.
*
* This routine should not be called from interrupt level.
*
* AVAILABILITY
* This routine is distributed as a component of the unbundled virtual memory
* support option, VxVMI.
*
* RETURNS: A pointer to a new virtual memory context, or
* NULL if the allocation or initialization fails.
*/

VM_CONTEXT_ID vmContextCreate (void)
    {
    VM_CONTEXT_ID context;

    /* call the vm class's memory allocator to get memory for the object */

    context = (VM_CONTEXT *) objAlloc ((OBJ_CLASS *) vmContextClassId);

    if (context == NULL)
        return (NULL);

    if (vmContextInit (context) == ERROR)
        {
        objFree ((OBJ_CLASS *) vmContextClassId, (char *) context);
        return (NULL);
        }

    return (context);
    }

/*******************************************************************************
*
* vmContextInit - initialize VM_CONTEXT structures
*
* This routine may be used to initialize static definitions of VM_CONTEXT
* structures, instead of dynamically creating the object with
* vmContextCreate().  Note that virtual memory contexts created in this
* manner may not be deleted.
*
* This routine should not be called from interrupt level.
*
* AVAILABILITY
* This routine is distributed as a component of the unbundled virtual memory
* support option, VxVMI.
*
* RETURNS: OK, or ERROR if the translation table cannot be created.
*
* NOMANUAL
*/

STATUS vmContextInit
    (
    VM_CONTEXT *pContext
    )
    {
    objCoreInit (&pContext->objCore, (CLASS_ID) vmContextClassId);

    semMInit (&pContext->sem, mutexOptionsVmLib);

    pContext->mmuTransTbl = MMU_TRANS_TBL_CREATE ();

    if (pContext->mmuTransTbl == NULL)
        return (ERROR);

    lstAdd (&vmContextList, &pContext->links);

    return (OK);
    }

/*******************************************************************************
*
* vmContextDelete - delete a virtual memory context (VxVMI Option)
*
* This routine deallocates the underlying translation table associated with
* a virtual memory context.  It does not free the physical memory already
* mapped into the virtual memory space.
*
* This routine should not be called from interrupt level.
*
* AVAILABILITY
* This routine is distributed as a component of the unbundled virtual memory
* support option, VxVMI.
*
* RETURNS: OK, or ERROR if <context> is not a valid context descriptor or
* if an error occurs deleting the translation table.
*/

STATUS vmContextDelete
    (
    VM_CONTEXT_ID context
    )
    {
    if (OBJ_VERIFY (context, vmContextClassId) != OK)
        return (ERROR);

    /* take the context's mutual exclusion semaphore - this is really
     * inadequate.
     */

    semTake (&context->sem, WAIT_FOREVER);

    /* invalidate the object */

    objCoreTerminate (&context->objCore);

    if (MMU_TRANS_TBL_DELETE (context->mmuTransTbl) == ERROR)
        return (ERROR);

    lstDelete (&vmContextList, &context->links);

    free (context);

    return (OK);
    }
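/* Usage sketch (added for illustration; not part of the original vmLib.c):
 * the minimal life cycle of a private context as described in the headers
 * above - create it with vmContextCreate() and release it with
 * vmContextDelete().  Assumes the declarations from vmLib.h are in scope
 * and the target is built with the VxVMI option.
 */

static STATUS vmContextExample (void)
    {
    VM_CONTEXT_ID ctx = vmContextCreate ();

    if (ctx == NULL)
        return (ERROR);               /* allocation or table creation failed */

    /* ... vmMap()/vmStateSet() calls against <ctx> would go here; the
     * context only takes effect once installed with vmCurrentSet() ...
     */

    return (vmContextDelete (ctx));   /* releases the translation table */
    }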
/*******************************************************************************
*
* vmStateSet - change the state of a block of virtual memory (VxVMI Option)
*
* This routine changes the state of a block of virtual memory.  Each page
* of virtual memory has at least three elements of state information:
* validity, writability, and cacheability.  Specific architectures may
* define additional state information; see vmLib.h for additional
* architecture-specific states.  Memory accesses to a page marked as
* invalid will result in an exception.  Pages may be invalidated to prevent
* them from being corrupted by invalid references.  Pages may be defined as
* read-only or writable, depending on the state of the writable bits.
* Memory accesses to pages marked as not-cacheable will always result in a
* memory cycle, bypassing the cache.  This is useful for multiprocessing,
* multiple bus masters, and hardware control registers.
*
* The following states are provided and may be or'ed together in the
* <state> parameter:
*
* .TS
* tab(|);
* l2 l2 l .
* VM_STATE_VALID     | VM_STATE_VALID_NOT     | valid/invalid
* VM_STATE_WRITABLE  | VM_STATE_WRITABLE_NOT  | writable/write-protected
* VM_STATE_CACHEABLE | VM_STATE_CACHEABLE_NOT | cacheable/not-cacheable
* .TE
*
* Additionally, the following masks are provided so that only specific
* states may be set.  These may be or'ed together in the <stateMask>
* parameter.
*
*  VM_STATE_MASK_VALID
*  VM_STATE_MASK_WRITABLE
*  VM_STATE_MASK_CACHEABLE
*
* If <context> is specified as NULL, the current context is used.
*
* This routine is callable from interrupt level.
*
* AVAILABILITY
* This routine is distributed as a component of the unbundled virtual memory
* support option, VxVMI.
*
* RETURNS: OK, or ERROR if the validation fails, <pVirtual> is not on a page
* boundary, <len> is not a multiple of page size, or the
* architecture-dependent state set fails for the specified virtual address.
*
* ERRNO:
* S_vmLib_NOT_PAGE_ALIGNED,
* S_vmLib_BAD_STATE_PARAM,
* S_vmLib_BAD_MASK_PARAM
*/

STATUS vmStateSet
    (
    VM_CONTEXT_ID context,      /* context - NULL == currentContext        */
    void *pVirtual,             /* virtual address to modify state of      */
    int len,                    /* len of virtual space to modify state of */
    UINT stateMask,             /* state mask                              */
    UINT state                  /* state                                   */
    )
    {
    FAST int    pageSize          = vmPageSize;
    FAST char * thisPage          = (char *) pVirtual;
    FAST UINT   numBytesProcessed = 0;
    UINT        archDepState;
    UINT        archDepStateMask;
    STATUS      retVal            = OK;

    if (!vmLibInfo.vmLibInstalled)
        return (ERROR);

    if (context == NULL)
        context = currentContext;

    if (OBJ_VERIFY (context, vmContextClassId) != OK)
        return (ERROR);

    if (NOT_PAGE_ALIGNED (thisPage))
        {
        errno = S_vmLib_NOT_PAGE_ALIGNED;
        return (ERROR);
        }

    if (NOT_PAGE_ALIGNED (len))
        {
        errno = S_vmLib_NOT_PAGE_ALIGNED;
        return (ERROR);
        }

    if (state > NUM_PAGE_STATES)
        {
        errno = S_vmLib_BAD_STATE_PARAM;
        return (ERROR);
        }

    if (stateMask > NUM_PAGE_STATES)
        {
        errno = S_vmLib_BAD_MASK_PARAM;
        return (ERROR);
        }

    archDepState = vmStateTransTbl [state];
    archDepStateMask = vmMaskTransTbl [stateMask];

    while (numBytesProcessed < len)
        {
        if (MMU_STATE_SET (context->mmuTransTbl, thisPage,
                           archDepStateMask, archDepState) == ERROR)
            {
            retVal = ERROR;
            break;
            }

        thisPage += pageSize;
        numBytesProcessed += pageSize;
        }

    return (retVal);
    }
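/* Usage sketch (added for illustration; not part of the original vmLib.c):
 * write-protect a single page in the current context.  Passing NULL selects
 * the current context, and VM_STATE_MASK_WRITABLE restricts the change to
 * the writable bit, leaving validity and cacheability untouched.  <pageAddr>
 * must be page aligned; vmPageSize is the module-level page size used above.
 */

static STATUS pageWriteProtectExample
    (
    void *pageAddr              /* page-aligned virtual address */
    )
    {
    return (vmStateSet (NULL, pageAddr, vmPageSize,
                        VM_STATE_MASK_WRITABLE, VM_STATE_WRITABLE_NOT));
    }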
/*******************************************************************************
*
* vmStateGet - get the state of a page of virtual memory (VxVMI Option)
*
* This routine extracts state bits with the following masks:
*
*  VM_STATE_MASK_VALID
*  VM_STATE_MASK_WRITABLE
*  VM_STATE_MASK_CACHEABLE
*
* Individual states may be identified with the following constants:
*
* .TS
* tab(|);
* l2 l2 l .
* VM_STATE_VALID    | VM_STATE_VALID_NOT     | valid/invalid
* VM_STATE_WRITABLE | VM_STATE_WRITABLE_NOT  | writable/write-protected
* VM_STATE_CACHEABLE| VM_STATE_CACHEABLE_NOT | cacheable/not-cacheable
* .TE
*
* For example, to see if a page is writable, the following code would be used:
*
* .CS
*     vmStateGet (vmContext, pageAddr, &state);
*     if ((state & VM_STATE_MASK_WRITABLE) & VM_STATE_WRITABLE)
*        ...
* .CE
*
* If <context> is specified as NULL, the current virtual memory context
* is used.
*
* This routine is callable from interrupt level.
*
* AVAILABILITY
* This routine is distributed as a component of the unbundled virtual memory
* support option, VxVMI.
*
* RETURNS: OK, or ERROR if <pPageAddr> is not on a page boundary, the
* validity check fails, or the architecture-dependent state get fails for
* the specified virtual address.
*
* ERRNO: S_vmLib_NOT_PAGE_ALIGNED
*/

STATUS vmStateGet
    (
    VM_CONTEXT_ID context,      /* context - NULL == currentContext */
    void *pPageAddr,            /* virtual page addr                */
    UINT *pState                /* where to return state            */
    )
    {
    UINT archDepStateGotten;
    int j;

    if (context == NULL)
        context = currentContext;

    if (OBJ_VERIFY (context, vmContextClassId) != OK)
        return (ERROR);

    if (NOT_PAGE_ALIGNED (pPageAddr))
        {
        errno = S_vmLib_NOT_PAGE_ALIGNED;
        return (ERROR);
        }

    *pState = 0;

    if (MMU_STATE_GET (context->mmuTransTbl,
                       pPageAddr, &archDepStateGotten) == ERROR)
        return (ERROR);

    /* translate from arch dependent state to arch independent state */

    for (j = 0; j < mmuStateTransArraySize; j++)
        {
        STATE_TRANS_TUPLE *thisTuple = &mmuStateTransArray[j];
        UINT archDepMask = thisTuple->archDepMask;
        UINT archDepState = thisTuple->archDepState;
        UINT archIndepState = thisTuple->archIndepState;

        if ((archDepStateGotten & archDepMask) == archDepState)
            *pState |= archIndepState;
        }

    return (OK);
    }
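/* Usage sketch (added for illustration; not part of the original vmLib.c):
 * a wrapped version of the .CS example from the header above - query the
 * current context and test the writable bit.  Failures are conservatively
 * reported as "not writable".
 */

static BOOL pageIsWritableExample
    (
    void *pageAddr              /* page-aligned virtual address */
    )
    {
    UINT state;

    if (vmStateGet (NULL, pageAddr, &state) == ERROR)
        return (FALSE);

    return (((state & VM_STATE_MASK_WRITABLE) & VM_STATE_WRITABLE) != 0);
    }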
/*******************************************************************************
*
* vmMap - map physical space into virtual space (VxVMI Option)
*
* This routine maps physical pages into a contiguous block of virtual
* memory.  <virtualAddr> and <physicalAddr> must be on page boundaries, and
* <len> must be evenly divisible by the page size.  After the call to
* vmMap(), the state of all pages in the newly mapped virtual memory is
* valid, writable, and cacheable.
*
* The vmMap() routine can fail if the specified virtual address space
* conflicts with the translation tables of the global virtual memory space.
* The global virtual address space is architecture-dependent and is
* initialized at boot time with calls to vmGlobalMap() by
* vmGlobalMapInit().  If a conflict results, `errno' is set to
* S_vmLib_ADDR_IN_GLOBAL_SPACE.  To avoid this conflict, use
* vmGlobalInfoGet() to ascertain which portions of the virtual address
* space are reserved for the global virtual address space.
*
* If <context> is specified as NULL, the current virtual memory context is
* used.
*
* This routine should not be called from interrupt level.
*
* AVAILABILITY
* This routine is distributed as a component of the unbundled virtual memory
* support option, VxVMI.
*
* RETURNS: OK, or ERROR if <virtualAddr> or <physicalAddr> are not
* on page boundaries, <len> is not a multiple of the page size,
* the validation fails, or the mapping fails.
*
* ERRNO:
* S_vmLib_NOT_PAGE_ALIGNED,
* S_vmLib_ADDR_IN_GLOBAL_SPACE
*/

STATUS vmMap
    (
    VM_CONTEXT_ID context,      /* context - NULL == currentContext   */
    void *virtualAddr,          /* virtual address                    */
    void *physicalAddr,         /* physical address                   */
    UINT len                    /* len of virtual and physical spaces */
    )
    {
    int pageSize = vmPageSize;
    char *thisVirtPage = (char *) virtualAddr;
    char *thisPhysPage = (char *) physicalAddr;
    FAST UINT numBytesProcessed = 0;
    STATUS retVal = OK;

    if (context == NULL)
        context = currentContext;

    if (OBJ_VERIFY (context, vmContextClassId) != OK)
        return (ERROR);

    if (NOT_PAGE_ALIGNED (thisVirtPage))
        {
        errno = S_vmLib_NOT_PAGE_ALIGNED;
        return (ERROR);
        }

    if ((!mmuPhysAddrShift) && (NOT_PAGE_ALIGNED (thisPhysPage)))
        {
        errno = S_vmLib_NOT_PAGE_ALIGNED;
        return (ERROR);
        }
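/* Usage sketch (added for illustration; not part of the original vmLib.c,
 * whose vmMap() body continues on the next page of this listing): identity
 * map one page of a memory-mapped device in the current context, then mark
 * it not-cacheable so register accesses bypass the cache, as the vmStateSet()
 * header recommends for hardware control registers.  DEV_REG_BASE is a
 * made-up, page-aligned physical address used only for this example.
 */

#define DEV_REG_BASE ((void *) 0x10000000)    /* hypothetical device page */

static STATUS devPageMapExample (void)
    {
    /* vmMap() leaves the new page valid, writable, and cacheable */

    if (vmMap (NULL, DEV_REG_BASE, DEV_REG_BASE, vmPageSize) == ERROR)
        return (ERROR);     /* e.g. conflict with the global address space */

    return (vmStateSet (NULL, DEV_REG_BASE, vmPageSize,
                        VM_STATE_MASK_CACHEABLE, VM_STATE_CACHEABLE_NOT));
    }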
