cippd_pmaint.c

Digital's Unix operating system VAX 4.2 source code
     * has not yet been and is just about to be crashed ).  However, the reason
     * which is cached depends upon the state of the path at the time it is
     * crashed.  A general path failure reason is cached in PBs representing
     * formative paths.  The specific path crash reason is cached for all open
     * paths.  It is later mapped into the appropriate general reason during
     * actual processing of the crash request.
     *
     * It is only necessary to cache the name of the local SYSAP( Step 5 )
     * responsible for crashing the path when the reason for crashing the path
     * == E_SYSAP.  In all other cases, no SYSAP is responsible for crashing
     * the path and no name exists to be cached.
     *
     * Panicing of the system( Step 7 ) immediately terminates all further
     * processing of the crash request.  Such panicing is OPTIONAL.  It is
     * requested only when the CI PPD configuration variable flag
     * cippd_pc_panic is appropriately set.  This flag is set only when special
     * debugging activity is required.  It may be set to trigger panicing on
     * requests to crash any path or only on requests to crash open paths.
     *
     * Following pre-processing, this routine invokes the CI PPD finite state
     * machine to ascertain whether the path has already been crashed but has
     * not yet been cleaned up.  If this is indeed the case, as indicated by
     * current path state == PS_PATH_FAILURE, then the current request is
     * dismissed.  Otherwise, actual crashing of the path by the finite state
     * machine commences with disablement of the path, including:
     *
     * 1. Mapping the specific reason for crashing the path into a more generic
     *    reason for later SYSAP( local or remote ) consumption.
     * 2. Disabling the CI PPD path in a PD specific fashion.
     * 3. Transmitting a CI PPD STOP datagram to the remote CI PPD to inform it
     *    of local failure of the specified path.
     * 4. Invoking a PD specific routine to optionally invalidate the
     *    appropriate local port translation cache.
     * 5. Incrementing the PB semaphore.
     * 6. Scheduling asynchronous PB clean up.
     * 7. Setting the path state to PS_PATH_FAILURE to prohibit additional
     *    crashings of this incarnation of the path.
     * 8. Unlocking the PB whenever it was locked INTERNALLY.
     * 9. Unlocking the PCCB whenever it was locked INTERNALLY.
     *
     * Established CI PPD paths always proceed through these nine steps.  This
     * is not always the case for formative paths, some of which do not require
     * disabling( Step 2 ) because the CI PPD has not yet enabled them.
     *
     * At the time the decision is made to crash the path, the processor on
     * which the path is crashed exists in one of two environments,
     * distinguished by whether the processor is at kernel mode or interrupt
     * level.  The existence of two possible environments does not interfere
     * with path disablement.  Unfortunately, the same is not necessarily true
     * for path clean up, a section of the CI PPD which is quite complicated in
     * its own right.  The solution to this potential problem is to decouple
     * path disablement from path clean up by always scheduling clean up of the
     * failed path to occur asynchronously( Step 6 ).  Now, when path clean up
     * eventually commences, it always proceeds in a constant environment.
     * This avoids all potential environment-related problems and allows
     * certain code-simplifying assumptions to be made.  ( An illustrative
     * sketch of the reason-caching decision follows this comment. )
     */
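/* Illustrative sketch, not part of the original source: the reason-caching
 * decision described above, shown in isolation.  The specific reason is
 * cached for open paths, a generic protocol reason for formative paths, and
 * nothing for paths already marked as failed.  The names ex_path_t, EX_PS_*
 * and EX_PF_PPDPROTOCOL are hypothetical stand-ins for the real CI PPD
 * structures and constants.
 */
enum ex_path_state { EX_PS_FORMATIVE, EX_PS_OPEN, EX_PS_PATH_FAILURE };

typedef struct {
    enum ex_path_state  state;          /* current path state                 */
    int                 reason;         /* cached path crash reason           */
} ex_path_t;

#define EX_PF_PPDPROTOCOL   1           /* generic path failure reason        */

static void
ex_cache_crash_reason( ex_path_t *pb, int specific_reason )
{
    if( pb->state == EX_PS_OPEN ) {
        /* Open path: cache the specific reason; it is mapped into a generic
         * reason later, during actual processing of the crash request.      */
        pb->reason = specific_reason;
    } else if( pb->state != EX_PS_PATH_FAILURE ) {
        /* Formative path: only a general failure reason is cached.          */
        pb->reason = EX_PF_PPDPROTOCOL;
    }
    /* Already failed paths are left untouched; the request is dismissed.    */
}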
    /* Prior to scheduling asynchronous path clean up, the PB semaphore is
     * incremented to prevent another CI PPD thread from deallocating the PB.
     * Actually, this incrementing is superfluous.  Only cleaned up paths are
     * deallocated, only crashed paths are cleaned up, and each path may only
     * be crashed once per path incarnation.  Therefore, it is not possible for
     * another CI PPD thread to be in a position to deallocate the PB, and the
     * single threaded nature of path clean up should be sufficient to
     * guarantee PB validity when scheduled clean up eventually commences.  The
     * semaphore is incremented anyway to further protect the PB and to detect
     * errors in error recovery logic.  The semaphore is always decremented
     * prior to deallocation of the PB following its clean up.
     *
     * Finally, a few additional notes on path disablement before proceeding
     * with what is involved in path clean up:
     *
     * 1. The CI PPD finite state machine never unlocks PBs while processing
     *    path failures.  This requirement is necessary to maintain path crash
     *    single threading by preventing other CI PPD threads from gaining
     *    access to the PB until the state change which prohibits sequential
     *    crashings of this path incarnation occurs( Step 7 ).
     * 2. Transmission of CI PPD STOP datagrams( Step 3 ) is currently not
     *    supported because of VMS unwillingness to compromise during SCA
     *    Review Group meetings.  The intent is to support such remote
     *    notification when and if the issue is finally resolved.
     *
     * Clean up of aborted formative paths proceeds differently from clean up
     * of failed established ones, and as a consequence different routines are
     * scheduled to perform each type of clean up.  The routine
     * cippd_clean_fpb() is scheduled whenever an aborted formative path is to
     * be cleaned up.  When it executes, it cleans up the specified path as
     * follows:
     *
     *  1. The PCCB is locked.
     *  2. The PB is locked.
     *  3. The PB semaphore is synchronized to.
     *  4. The CI PPD datagram reserved for path establishment is removed from
     *     the appropriate local port datagram free queue and deallocated.
     *  5. All emergency port specific buffers are optionally deallocated.
     *  6. The formative SB is optionally deallocated.
     *  7. The PB semaphore is decremented.
     *  8. The formative PB is removed from the appropriate local port
     *     formative PB queue, unlocked, and deallocated.
     *  9. The number of formative paths originating at the port is decremented
     *     and port initialization is scheduled through forking whenever there
     *     are no longer any paths originating at the port and port clean up is
     *     currently in progress.
     * 10. The PCCB is unlocked.
     *
     * The routine cippd_clean_pb() is scheduled whenever a failed established
     * path is to be cleaned up.  When it executes, it cleans up the specified
     * path as follows( see the sketch following this comment ):
     *
     * 1. The PB is locked.
     * 2. The PB semaphore is synchronized to.
     * 3. The PB is unlocked.
     * 4. SCS is notified of failure of the specified path.
     *
     * Once SCS is notified of path failure( Step 4 ), it assumes
     * responsibility for directing PB clean up, including the clean up of all
     * SCS connections associated with the failed path.  Clean up of the last
     * connection triggers SCS invocation of cippd_remove_pb().  It is this
     * CI PPD routine which removes the PB from all system-wide databases,
     * decrements its semaphore, deallocates it, and also schedules port
     * initialization through forking whenever there are no longer any paths
     * originating at the port and port clean up is currently in progress.
     */
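/* Illustrative sketch, not part of the original source: the four-step clean
 * up ordering of a failed established path documented above.  ex_pb_t and the
 * ex_* helpers are hypothetical stand-ins; in particular
 * ex_sync_to_semaphore() and ex_scs_notify_path_failure() are placeholders
 * marking where the real code would block on the PB semaphore and hand the PB
 * over to SCS.
 */
typedef struct {
    int     locked;                     /* simplified PB lock                 */
    int     semaphore;                  /* simplified PB semaphore            */
} ex_pb_t;

static void ex_lock_pb( ex_pb_t *pb )   { pb->locked = 1; }
static void ex_unlock_pb( ex_pb_t *pb ) { pb->locked = 0; }

static void
ex_sync_to_semaphore( ex_pb_t *pb )
{
    /* Placeholder: block until every temporary protection of the PB has been
     * released, i.e. the semaphore has returned to the value it held when
     * clean up was scheduled.                                                */
    ( void )pb;
}

static void
ex_scs_notify_path_failure( ex_pb_t *pb )
{
    /* Placeholder: SCS assumes responsibility for the remainder of PB clean
     * up, including all SCS connections associated with the failed path.    */
    ( void )pb;
}

static void
ex_clean_pb( ex_pb_t *pb )
{
    ex_lock_pb( pb );                   /* 1. lock the PB                     */
    ex_sync_to_semaphore( pb );         /* 2. synchronize to the PB semaphore */
    ex_unlock_pb( pb );                 /* 3. unlock the PB                   */
    ex_scs_notify_path_failure( pb );   /* 4. notify SCS of the path failure  */
}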
    /* SCS requires the PB address to be guaranteed at the time it is notified
     * of path failure, and the same mechanisms used to guarantee PB address
     * validity on entry to cippd_remove_pb() are employed.  These are single
     * threading of path clean up, so that no other thread can possibly delete
     * the PB, and incrementing of the PB semaphore.  The former mechanism
     * should be all that is necessary to protect the PB, but the latter is
     * utilized as further protection and to detect errors in error recovery
     * logic.
     *
     * Prominent in the clean up of both types of paths is synchronization to
     * the PB semaphore.  Such synchronization is very important in an SMP
     * environment for the following reason.  Occasions exist within the CI
     * PPD which require threads to temporarily release all locks, including PB
     * locks, while still protecting PBs from deletion.  PB semaphores are the
     * mechanism chosen to meet this requirement.  A thread wishing to
     * temporarily protect a PB increments its semaphore prior to releasing
     * all locks.  The semaphore is decremented only after all locks have been
     * re-obtained and the need for protecting the PB in this fashion has
     * passed.  In the interim, any thread desiring to clean up and deallocate
     * this PB must first synchronize to its semaphore before proceeding.  ( A
     * sketch of this protection pattern follows this comment. )
     *
     * The PCCB fork block is used to schedule port initialization whenever and
     * wherever such initialization becomes necessary.  It should always be
     * available because it is used only for port clean up and initialization,
     * these activities are single threaded, and initialization always follows
     * clean up.  Guaranteed availability of the PCCB fork block is one of the
     * benefits of single threading port clean up and initialization.
     *
     * One final note on path clean up.  Once PB removal and deallocation
     * completes, no matter who does it, the CI PPD is free to attempt
     * establishment of a new path incarnation.  Such attempts do not occur
     * until after the path is re-discovered through polling.  New path
     * incarnations are fully subject to crashing on encountering sufficiently
     * serious errors.
     *
     * It is possible for this routine to be invoked to crash a non-existent
     * path.  Typically this situation develops when some error requiring path
     * crashing for recovery purposes is encountered, but the PB representing
     * the path can NOT be retrieved, and this routine is invoked without a PB.
     * Non-existent paths are pre-processed as if they existed, and then the
     * appropriate local ports are themselves crashed with a special reason
     * code( SE_NOPATH ).  Handling non-existent paths in this fashion allows
     * recovery from the error to occur while still obtaining and saving the
     * maximum amount of information about the error for later analysis.
     */
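/* Illustrative sketch, not part of the original source: the PB semaphore
 * protection pattern described above.  A thread that must drop all locks
 * raises the semaphore first so that clean up cannot deallocate the PB
 * underneath it; clean up synchronizes to the semaphore before proceeding.
 * ex_pb2_t and the ex2_* helpers are hypothetical stand-ins, and the plain
 * integer arithmetic and spin wait merely mark where the real kernel would
 * use proper SMP-safe synchronization.
 */
typedef struct {
    int             locked;             /* simplified PB lock                 */
    volatile int    semaphore;          /* count of temporary protections     */
} ex_pb2_t;

static void ex2_lock_pb( ex_pb2_t *pb )   { pb->locked = 1; }
static void ex2_unlock_pb( ex_pb2_t *pb ) { pb->locked = 0; }

/* A thread which must release all locks while keeping the PB from deletion. */
static void
ex2_protect_across_lock_release( ex_pb2_t *pb )
{
    pb->semaphore++;                    /* protect the PB...                  */
    ex2_unlock_pb( pb );                /* ...before releasing all locks      */

    /* ...lockless work during which the PB must remain valid...             */

    ex2_lock_pb( pb );                  /* re-obtain the locks                */
    pb->semaphore--;                    /* protection no longer required      */
}

/* A clean up thread may not deallocate the PB until protections have drained. */
static void
ex2_clean_up_pb( ex_pb2_t *pb )
{
    while( pb->semaphore > 0 )          /* synchronize to the PB semaphore    */
        ;
    ex2_lock_pb( pb );
    /* ...the PB may now be cleaned up, removed, and deallocated...          */
    ex2_unlock_pb( pb );
}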
    /* Path failures may also occur or be discovered within the CI PPD finite
     * state machine itself during processing of some asynchronously occurring
     * event.  This routine is invoked by the finite state machine to crash
     * such failed paths only when the paths were established at time of
     * failure.  Otherwise, processing of these formative path failures is left
     * entirely to the finite state machine.  This processing is very similar
     * to the crashing of formative paths by the finite state machine.  There
     * are a few significant exceptions, and these are listed below:
     *
     * 1. The formative path is always crashable.  It is never in a
     *    PS_PATH_FAILURE path state, nor is it ever in the process of
     *    transitioning to such a state.
     * 2. Pre-processing of the crash request is handled internally to the
     *    finite state machine.
     *
     * Other than these few exceptions, processing of these failed formative
     * paths is identical to processing of aborted formative paths, including:
     *
     * 1. Logging of the failure provided it is the first such occurrence.
     * 2. Disabling the path if it had been enabled at time of failure.
     * 3. Transitioning the path state to PS_PATH_FAILURE to prevent other CI
     *    PPD threads from crashing the same path incarnation.
     * 4. Scheduling the identical routine to asynchronously clean up the
     *    failed formative path.
     */

    /* Lock the PCCB and the PB INTERNALLY whenever they were not already
     * locked EXTERNALLY by the caller.                                       */
    if( !Test_pccb_lock( pccb )) {
        Lock_pccb( pccb )
        unlock_pccb = 1;
    }
    if( pb ) {
        if( !Test_pb_lock( pb )) {
            Lock_pb( pb )
            unlock_pb = 1;
        }

        /* Cache the crash reason: the specific reason for open paths, a
         * generic protocol reason for formative paths.                       */
        if( pb->pinfo.state == PS_OPEN ) {
            Set_pc_event( reason )
            pb->pinfo.reason = reason;
        } else if( pb->pinfo.state != PS_PATH_FAILURE ) {
            pb->pinfo.reason = PF_PPDPROTOCOL;
        }
    } else {
        /* Non-existent path: no port number is available for error logging. */
        pccb->Elogopt.port_num = EL_UNDEF;
    }

    /* When a local SYSAP requested the crash, cache its name for error
     * logging; otherwise locate the CI PPD header of the triggering buffer.  */
    if( scsbp ) {
        if( Mask_esevmod( reason ) != E_SYSAP ) {
            cippdbp = Scs_to_ppd( scsbp );
        } else {
            cippdbp = NULL;
            Move_name(( u_char * )scsbp, pccb->Elogopt.sysapname )
        }
    } else {
        cippdbp = NULL;
    }

    /* Log the path crash, then optionally panic for debugging purposes.      */
    ( void )cippd_log_path( pccb, pb, cippdbp, reason );
    if( cippd_pc_panic > SCA_PANIC1 &&
         ( pb->pinfo.state == PS_OPEN || cippd_pc_panic > SCA_PANIC2 )) {
        ( void )panic( PPDPANIC_REQPC );
    } else if( cippdbp ) {
        /* Dispose of the triggering buffer as directed: return it to the
         * appropriate free queue or deallocate it.                           */
        if( disposal == RECEIVE_BUF ) {
            if( cippdbp->mtype == SCSMSG ) {
                ( void )( *pccb->Add_msg )( pccb, scsbp );
            } else {
                ( void )( *pccb->Add_dg )( pccb, scsbp );
            }
        } else if( disposal == DEALLOC_BUF ) {
            if( cippdbp->mtype == SCSMSG ) {
                ( void )( *pccb->Dealloc_msg )( pccb, scsbp );
            } else {
                ( void )( *pccb->Dealloc_dg )( pccb, scsbp );
            }
        }
    }

    /* Crash the path through the finite state machine, or crash the local
     * port itself when no PB exists.                                          */
    if( pb ) {
        ( void )cippd_dispatch( pccb, pb, CNFE_PATH_FAIL, NULL );
    } else {
        ( void )( *pccb->Crash_lport )( pccb, SE_NOPATH, NULL );
    }

    /* Release only those locks which were taken INTERNALLY.                  */
    if( unlock_pb ) {
        Unlock_pb( pb )
    }
    if( unlock_pccb ) {
        Unlock_pccb( pccb )
    }
}
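/* Illustrative sketch, not part of the original source: the INTERNAL/EXTERNAL
 * locking convention used by the routine above.  A structure is locked only
 * if the caller has not already locked it, and only locks taken here are
 * released here.  ex_lockable_t and the ex3_* names are hypothetical
 * stand-ins for the PCCB/PB locking macros.
 */
typedef struct {
    int     locked;                     /* simplified lock state              */
} ex_lockable_t;

static int  ex3_test_lock( ex_lockable_t *l )   { return l->locked; }
static void ex3_lock( ex_lockable_t *l )        { l->locked = 1; }
static void ex3_unlock( ex_lockable_t *l )      { l->locked = 0; }

static void
ex3_do_locked_work( ex_lockable_t *pccb )
{
    int unlock_pccb = 0;

    if( !ex3_test_lock( pccb )) {       /* not locked EXTERNALLY...           */
        ex3_lock( pccb );               /* ...so lock it INTERNALLY           */
        unlock_pccb = 1;                /* ...and remember to release it      */
    }

    /* ...work requiring the lock...                                          */

    if( unlock_pccb ) {                 /* release only INTERNAL locks        */
        ex3_unlock( pccb );
    }
}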
/*   Name:	cippd_get_pb	- Retrieve Path Block
 *
 *   Abstract:	This function retrieves a PB corresponding to a specific path.
 *		The path may either have failed, be fully established, or be in
 *		a formative path state.
 *
 *		The path may be targeted either by a buffer( message or
 *		datagram ) or explicitly by its remote port station address.
 *
 *		NOTE: SCA port numbers are 6 bytes in size; however, maximum
 *		      CI PPD port numbers only occupy 1 byte, the low-order
 *		      byte of a port station address.  Port numbers are passed
 *		      as 4-byte entities back and forth between the CI PPD and
 *		      its client port drivers.
 *
 *   Inputs:
 *
 *   IPL_SCS			- Interrupt processor level
 *   type			- NO_BUF or BUF
 *   pccb			- Port Command and Control Block pointer
 *   scsbp			- Address of SCS header( type == BUF )
 *				- Address of remote port address( type != BUF )
 *
 *   Outputs:
 *
 *   IPL_SCS			- Interrupt processor level
 *
 *   Return Values:
 *
 *   pb				- Address of PB if successful
 *   NULL			- No PB found
 *
 *   SMP:	The PCCB is locked INTERNALLY whenever the PCCB was not locked
 *		EXTERNALLY prior to function invocation.  Locking the PCCB
 *		allows exclusive access to PCCB contents, prevents potential PB
 *		deletion, and is required by PD routines which log invalid port
 *		numbers in case such logging becomes necessary.  PCCB addresses
 *		are always valid because these data structures are never
 *		deleted once their corresponding ports have been initialized.
 *
 *		EXTERNALLY held locks are responsible for preventing PB
 *		deletion once this function retrieves and returns it.
 */
PB *cippd_get_pb( pccb, scsbp, type )
    register PCCB	*pccb;
    SCSH		*scsbp;
    u_long		type;
{
    register pbq	*pb;
    register u_long	port, unlock = 0;

    /* The steps involved in retrieving the target PB are:
     *
     * 1. Lock the PCCB whenever it was not locked EXTERNALLY.
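/* Illustrative sketch, not part of the original source: the port-number
 * convention stated in the NOTE of the function header above.  The CI PPD
 * port number occupies only the low-order byte of the 6-byte SCA port station
 * address, yet is exchanged with client port drivers as a 4-byte quantity.
 * ex_station_addr and ex_station_to_port() are hypothetical, and the
 * assumption that the low-order byte is stored first is the editor's.
 */
#include <stdio.h>

typedef unsigned char	ex_station_addr[ 6 ];	/* 6-byte SCA station address */

static unsigned long
ex_station_to_port( const ex_station_addr addr )
{
    /* Only the low-order byte carries the CI PPD port number; it is widened
     * to a 4-byte quantity for exchange with the port drivers.              */
    return ( unsigned long )addr[ 0 ];
}

int
main( void )
{
    ex_station_addr addr = { 0x0e, 0x00, 0x00, 0x00, 0x00, 0x00 };

    printf( "CI PPD port number = %lu\n", ex_station_to_port( addr ));
    return 0;
}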
