readme.cv — pthread condition variable (CV) source code; you can compile it directly.
   * This sync. level supports _timedwait and cancellation.
   */
  else
    {
      result = pthread_mutex_unlock (&(cv->mtxUnblockLock));
    }

  return (result);

}                               /* ptw32_cond_unblock */

int
pthread_cond_wait (pthread_cond_t * cond,
                   pthread_mutex_t * mutex)
{
  /* The NULL abstime arg means INFINITE waiting. */
  return (ptw32_cond_timedwait (cond, mutex, NULL));
}                               /* pthread_cond_wait */

int
pthread_cond_timedwait (pthread_cond_t * cond,
                        pthread_mutex_t * mutex,
                        const struct timespec *abstime)
{
  if (abstime == NULL)
    {
      return EINVAL;
    }

  return (ptw32_cond_timedwait (cond, mutex, abstime));
}                               /* pthread_cond_timedwait */

int
pthread_cond_signal (pthread_cond_t * cond)
{
  /* The '0' (FALSE) unblockAll arg means unblock ONE waiter. */
  return (ptw32_cond_unblock (cond, 0));
}                               /* pthread_cond_signal */

int
pthread_cond_broadcast (pthread_cond_t * cond)
{
  /* The '1' (TRUE) unblockAll arg means unblock ALL waiters. */
  return (ptw32_cond_unblock (cond, 1));
}                               /* pthread_cond_broadcast */
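The wrappers above treat a NULL abstime as an infinite wait (pthread_cond_wait),
while pthread_cond_timedwait rejects a NULL abstime with EINVAL. A minimal usage
sketch of the timed-wait path follows; it is not part of the pthreads-win32
sources, and the two-second deadline and the predicate 'ready' are illustrative
assumptions only:

  #include <errno.h>
  #include <pthread.h>
  #include <time.h>

  static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t  cnd = PTHREAD_COND_INITIALIZER;
  static int ready = 0;                     /* example predicate */

  /* Wait up to two seconds for 'ready' to become nonzero. */
  static int wait_ready_with_timeout (void)
  {
    struct timespec abstime;
    int rc = 0;

    abstime.tv_sec = time (NULL) + 2;       /* absolute deadline, not a relative delay */
    abstime.tv_nsec = 0;

    pthread_mutex_lock (&mtx);
    while (!ready && rc == 0)
      {
        /* May return ETIMEDOUT; may also wake spuriously, hence the loop. */
        rc = pthread_cond_timedwait (&cnd, &mtx, &abstime);
      }
    pthread_mutex_unlock (&mtx);

    return rc;                              /* 0 on success, ETIMEDOUT on timeout */
  }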
TEREKHOV@de.ibm.com on 17.01.2001 01:00:57

Please respond to TEREKHOV@de.ibm.com

To:   pthreads-win32@sourceware.cygnus.com
cc:   schmidt@uci.edu
Subject:  win32 conditions: sem+counter+event = broadcast_deadlock +
          spur.wakeup/unfairness/incorrectness ??

Hi,

Problem 1: broadcast_deadlock

It seems that the current implementation does not provide "atomic"
broadcasts. That may lead to "nested" broadcasts... and it seems
that the nested case is not handled correctly -> producing a broadcast
DEADLOCK as a result.

Scenario:

N (>1) waiting threads W1..N are blocked (in _wait) on the condition's
semaphore.

Thread B1 calls pthread_cond_broadcast, which results in "releasing" the N
W threads via incrementing the semaphore counter by N (stored in
cv->waiters), BUT the cv->waiters counter does not change!! The caller
thread B1 remains blocked on the cv->waitersDone event (auto-reset!!), BUT
the condition is not protected from starting another broadcast (when called
on another thread) while thread B1 is still waiting for the "old" broadcast
to complete.

M (>=0, <N) W threads are fast enough to go through their _wait call and
decrement the cv->waiters counter.

L (= N-M) "late" waiter W threads are a) still blocked/not returned from
their semaphore wait call, or b) were preempted after sem_wait but before
lock( &cv->waitersLock ), or c) are blocked on cv->waitersLock.

cv->waiters is still > 0 (= L).

Another thread B2 (or some W thread from the M group) calls
pthread_cond_broadcast and gains access to the counter... neither a) nor b)
prevents thread B2 in pthread_cond_broadcast from gaining access to the
counter and starting another broadcast (for c) it depends on the
cv->waitersLock scheduling rules: FIFO=OK, PRTY=PROBLEM,...).

That call to pthread_cond_broadcast (on thread B2) will result in
incrementing the semaphore by cv->waiters (=L), which is INCORRECT (all
W1..N were in fact already released by thread B1), and in waiting on the
_auto-reset_ event cv->waitersDone, which is DEADLY WRONG (produces a
deadlock)...

All late W1..L threads now have a chance to complete their _wait call.
The last W_L thread sets the auto-reset event cv->waitersDone, which will
release either B1 or B2, leaving one of the B threads in a deadlock.

Problem 2: spur.wakeup/unfairness/incorrectness

It seems that:

a) because of the same problem with the counter, which does not reflect the
actual number of NOT RELEASED waiters, the signal call may increment the
semaphore counter without having a waiter blocked on it. That will result
in (best case) spurious wakeups - performance degradation due to
unnecessary context switches and predicate re-checks - and (in the worst
case) an unfairness/incorrectness problem - see b).

b) neither signal nor broadcast prevents other threads - "new waiters"
(and in the case of signal, the caller thread as well) - from going into
_wait and overtaking "old" waiters (already released but not yet returned
from sem_wait on the condition's semaphore). A Win32 semaphore just [API DOC]:
"Maintains a count between zero and some maximum value, limiting the number
of threads that are simultaneously accessing a shared resource." Calling
ReleaseSemaphore does not imply (at least it is not documented) that on
return from ReleaseSemaphore all waiters will in fact have been released
(returned from their Wait... call) and/or that new waiters calling Wait...
afterwards will be treated as less important. It is NOT documented to be an
atomic release of the waiters... And even if it were, there is still a
problem with a thread being preempted after the Wait on the semaphore and
before the Wait on cv->waitersLock, and with the scheduling rules for
cv->waitersLock itself (??WaitForMultipleObjects??).

That may result in the unfairness/incorrectness problem described
for the SetEvent impl. in "Strategies for Implementing POSIX Condition
Variables on Win32": http://www.cs.wustl.edu/~schmidt/win32-cv-1.html

Unfairness -- The semantics of the POSIX pthread_cond_broadcast function is
to wake up all threads currently blocked in wait calls on the condition
variable. The awakened threads then compete for the external_mutex. To
ensure fairness, all of these threads should be released from their
pthread_cond_wait calls and allowed to recheck their condition expressions
before other threads can successfully complete a wait on the condition
variable.

Unfortunately, the SetEvent implementation above does not guarantee that all
threads sleeping on the condition variable when cond_broadcast is called
will acquire the external_mutex and check their condition expressions.
Although the Pthreads specification does not mandate this degree of
fairness, the lack of fairness can cause starvation.

To illustrate the unfairness problem, imagine there are 2 threads, C1 and
C2, that are blocked in pthread_cond_wait on condition variable not_empty_
that is guarding a thread-safe message queue. Another thread, P1, then
places two messages onto the queue and calls pthread_cond_broadcast. If C1
returns from pthread_cond_wait, dequeues and processes the message, and
immediately waits again, then it and only it may end up acquiring both
messages. Thus, C2 will never get a chance to dequeue a message and run.

The following illustrates the sequence of events:

1.   Thread C1 attempts to dequeue and waits on CV not_empty_
2.   Thread C2 attempts to dequeue and waits on CV not_empty_
3.   Thread P1 enqueues 2 messages and broadcasts to CV not_empty_
4.   Thread P1 exits
5.   Thread C1 wakes up from CV not_empty_, dequeues a message and runs
6.   Thread C1 waits again on CV not_empty_, immediately dequeues the 2nd
     message and runs
7.   Thread C1 exits
8.   Thread C2 is the only thread left and blocks forever since
     not_empty_ will never be signaled

Depending on the algorithm being implemented, this lack of fairness may
yield concurrent programs that have subtle bugs. Of course, application
developers should not rely on the fairness semantics of
pthread_cond_broadcast. However, there are many cases where fair
implementations of condition variables can simplify application code.

Incorrectness -- A variation on the unfairness problem described above
occurs when a third consumer thread, C3, is allowed to slip through even
though it was not waiting on condition variable not_empty_ when a broadcast
occurred.

To illustrate this, we will use the same scenario as above: 2 threads, C1
and C2, are blocked dequeuing messages from the message queue. Another
thread, P1, then places two messages onto the queue and calls
pthread_cond_broadcast. C1 returns from pthread_cond_wait, dequeues and
processes the message. At this time, C3 acquires the external_mutex, calls
pthread_cond_wait and waits on the events in WaitForMultipleObjects. Since
C2 has not had a chance to run yet, the BROADCAST event is still signaled.
C3 then returns from WaitForMultipleObjects, and dequeues and processes the
message in the queue. Thus, C2 will never get a chance to dequeue a message
and run.

The following illustrates the sequence of events:

1.   Thread C1 attempts to dequeue and waits on CV not_empty_
2.   Thread C2 attempts to dequeue and waits on CV not_empty_
3.   Thread P1 enqueues 2 messages and broadcasts to CV not_empty_
4.   Thread P1 exits
5.   Thread C1 wakes up from CV not_empty_, dequeues a message and runs
6.   Thread C1 exits
7.   Thread C3 waits on CV not_empty_, immediately dequeues the 2nd
     message and runs
8.   Thread C3 exits
9.   Thread C2 is the only thread left and blocks forever since
     not_empty_ will never be signaled

In the above case, a thread that was not waiting on the condition variable
when a broadcast occurred was allowed to proceed. This leads to incorrect
semantics for a condition variable.
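Whatever the implementation eventually guarantees, the practical lesson of the
scenarios above is that application code must recheck its predicate in a loop
around pthread_cond_wait rather than rely on fairness or on "one wakeup per
message". A minimal sketch of the message-queue scenario follows; the queue
type and its fields are hypothetical and are not taken from pthreads-win32:

  #include <pthread.h>

  /* Hypothetical thread-safe message queue protected by external_mutex. */
  typedef struct {
    pthread_mutex_t external_mutex;
    pthread_cond_t  not_empty_;
    int             count;        /* number of queued messages */
    /* ... message storage ... */
  } msg_queue_t;

  /* Consumer: plays the role of C1/C2 in the scenario above. */
  void *consumer (void *arg)
  {
    msg_queue_t *q = arg;

    pthread_mutex_lock (&q->external_mutex);
    while (q->count == 0)
      {
        /* Recheck the predicate after every wakeup: the wakeup may be
         * spurious, or another consumer (even one that was not waiting
         * at broadcast time) may already have taken the message. */
        pthread_cond_wait (&q->not_empty_, &q->external_mutex);
      }
    q->count--;                   /* dequeue one message */
    pthread_mutex_unlock (&q->external_mutex);

    /* ... process the message outside the lock ... */
    return NULL;
  }

  /* Producer: plays the role of P1, enqueuing two messages. */
  void producer_enqueue_two (msg_queue_t *q)
  {
    pthread_mutex_lock (&q->external_mutex);
    q->count += 2;                /* enqueue two messages */
    pthread_cond_broadcast (&q->not_empty_);
    pthread_mutex_unlock (&q->external_mutex);
  }

With the predicate loop in place, a "stolen" wakeup costs only an extra pass
through the loop instead of an incorrect dequeue.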
COMMENTS???

regards,
alexander.

-----------------------------------------------------------------------------

Subject: RE: FYI/comp.programming.threads/Re: pthread_cond_*
     implementation questions
Date: Wed, 21 Feb 2001 11:54:47 +0100
From: TEREKHOV@de.ibm.com
To: lthomas@arbitrade.com
CC: rpj@ise.canberra.edu.au, Thomas Pfaff <tpfaff@gmx.net>,
     Nanbor Wang <nanbor@cs.wustl.edu>

Hi Louis,

generation number 8..

Had some time to revisit the timeouts/spurious wakeup problem..
Found some bugs (in 7.b/c/d) and something to improve
(7a - using IPC semaphores, but it should speed up the Win32
version as well).

regards,
alexander.

---------- Algorithm 8a / IMPL_SEM,UNBLOCK_STRATEGY == UNBLOCK_ALL ------

given:

semBlockLock       - bin.semaphore
semBlockQueue      - semaphore
mtxExternal        - mutex or CS
mtxUnblockLock     - mutex or CS
nWaitersGone       - int
nWaitersBlocked    - int
nWaitersToUnblock  - int

wait( timeout ) {

  [auto: register int result          ]     // error checking omitted
  [auto: register int nSignalsWasLeft ]
  [auto: register int nWaitersWasGone ]

  sem_wait( semBlockLock );
  nWaitersBlocked++;
  sem_post( semBlockLock );

  unlock( mtxExternal );
  bTimedOut = sem_wait( semBlockQueue, timeout );

  lock( mtxUnblockLock );
  if ( 0 != (nSignalsWasLeft = nWaitersToUnblock) ) {
    if ( bTimedOut ) {                      // timeout (or canceled)
      if ( 0 != nWaitersBlocked ) {
        nWaitersBlocked--;
      }
      else {
        nWaitersGone++;                     // count spurious wakeups
      }
    }
    if ( 0 == --nWaitersToUnblock ) {
      if ( 0 != nWaitersBlocked ) {
        sem_post( semBlockLock );           // open the gate
        nSignalsWasLeft = 0;                // do not open the gate below again
      }
      else if ( 0 != (nWaitersWasGone = nWaitersGone) ) {
        nWaitersGone = 0;
      }
    }
  }
  else if ( INT_MAX/2 == ++nWaitersGone ) { // timeout/canceled or spurious semaphore :-)
    sem_wait( semBlockLock );
    nWaitersBlocked -= nWaitersGone;        // something is going on here - test of timeouts? :-)
    sem_post( semBlockLock );
    nWaitersGone = 0;
  }
  unlock( mtxUnblockLock );

  if ( 1 == nSignalsWasLeft ) {
    if ( 0 != nWaitersWasGone ) {
      // sem_adjust( -nWaitersWasGone );
      while ( nWaitersWasGone-- ) {
        sem_wait( semBlockLock );           // better now than spurious later
      }
    }
    sem_post( semBlockLock );               // open the gate
  }

  lock( mtxExternal );

  return ( bTimedOut ) ? ETIMEDOUT : 0;
}

signal(bAll) {

  [auto: register int result         ]
  [auto: register int nSignalsToIssue]

  lock( mtxUnblockLock );

  if ( 0 != nWaitersToUnblock ) { // the gate is closed!!!
    if ( 0 == nWaitersBlocked ) { // NO-OP
      return unlock( mtxUnblockLock );
    }
    if (bAll) {
      nWaitersToUnblock += nSignalsToIssue = nWaitersBlocked;
      nWaitersBlocked = 0;
    }
    else {
      nSignalsToIssue = 1;
      nWaitersToUnblock++;
      nWaitersBlocked--;
    }
  }
  else if ( nWaitersBlocked > nWaitersGone ) { // HARMLESS RACE CONDITION!
    sem_wait( semBlockLock ); // close the gate
    if ( 0 != nWaitersGone ) {
      nWaitersBlocked -= nWaitersGone;
      nWaitersGone = 0;
    }
    if (bAll) {
      nSignalsToIssue = nWaitersToUnblock = nWaitersBlocked;
      nWaitersBlocked = 0;
    }
    else {
      nSignalsToIssue = nWaitersToUnblock = 1;
      nWaitersBlocked--;
    }
  }
  else { // NO-OP
    return unlock( mtxUnblockLock );
  }

  unlock( mtxUnblockLock );
  sem_post( semBlockQueue, nSignalsToIssue );
  return result;
}
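As a reading aid only, the "given:" variables of Algorithm 8a can be collected
into a structure like the one below. This is a sketch using POSIX semaphores
and mutexes, not the actual pthreads-win32 declarations; the names simply
follow the pseudocode (mtxExternal is the caller-supplied mutex passed to
wait(), so it is not part of the structure):

  #include <pthread.h>
  #include <semaphore.h>

  /* State assumed by Algorithm 8a (sketch; not the pthreads-win32 layout). */
  typedef struct {
    sem_t           semBlockLock;      /* binary semaphore: the "gate"        */
    sem_t           semBlockQueue;     /* counting semaphore: blocked waiters */
    pthread_mutex_t mtxUnblockLock;    /* protects the three counters below   */
    int             nWaitersGone;      /* timed-out / spurious waiters        */
    int             nWaitersBlocked;   /* waiters currently blocked           */
    int             nWaitersToUnblock; /* signals still to be consumed        */
  } cv_8a_t;

  static int cv_8a_init (cv_8a_t *cv)
  {
    if (sem_init (&cv->semBlockLock, 0, 1) != 0)   /* gate starts open */
      return -1;
    if (sem_init (&cv->semBlockQueue, 0, 0) != 0)  /* no waiters yet   */
      return -1;
    if (pthread_mutex_init (&cv->mtxUnblockLock, NULL) != 0)
      return -1;
    cv->nWaitersGone = 0;
    cv->nWaitersBlocked = 0;
    cv->nWaitersToUnblock = 0;
    return 0;
  }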
---------- Algorithm 8b / IMPL_SEM,UNBLOCK_STRATEGY == UNBLOCK_ONEBYONE ------

given:

semBlockLock       - bin.semaphore
semBlockQueue      - bin.semaphore
mtxExternal        - mutex or CS
mtxUnblockLock     - mutex or CS
nWaitersGone       - int
nWaitersBlocked    - int
nWaitersToUnblock  - int

wait( timeout ) {

  [auto: register int result          ]     // error checking omitted
  [auto: register int nWaitersWasGone ]
  [auto: register int nSignalsWasLeft ]

  sem_wait( semBlockLock );
  nWaitersBlocked++;
  sem_post( semBlockLock );

  unlock( mtxExternal );
  bTimedOut = sem_wait( semBlockQueue, timeout );

  lock( mtxUnblockLock );
  if ( 0 != (nSignalsWasLeft = nWaitersToUnblock) ) {
    if ( bTimedOut ) {                      // timeout (or canceled)
      if ( 0 != nWaitersBlocked ) {
        nWaitersBlocked--;
        nSignalsWasLeft = 0;                // do not unblock next waiter below (already unblocked)
      }
      else {
        nWaitersGone = 1;                   // spurious wakeup pending!!
      }
    }
    if ( 0 == --nWaitersToUnblock ) {
      if ( 0 != nWaitersBlocked ) {
        sem_post( semBlockLock );           // open the gate
        nSignalsWasLeft = 0;                // do not open the gate below again
      }
      else if ( 0 != (nWaitersWasGone = nWaitersGone) ) {
        nWaitersGone = 0;
      }
    }
  }
  else if ( INT_MAX/2 == ++nWaitersGone ) { // timeout/canceled or spurious semaphore :-)
    sem_wait( semBlockLock );
    nWaitersBlocked -= nWaitersGone;        // something is going on here - test of timeouts? :-)
    sem_post( semBlockLock );
    nWaitersGone = 0;
  }
  unlock( mtxUnblockLock );

  if ( 1 == nSignalsWasLeft ) {
    if ( 0 != nWaitersWasGone ) {
      // sem_adjust( -1 );
      sem_wait( semBlockQueue );            // better now than spurious later
    }
    sem_post( semBlockLock );               // open the gate
  }
  else if ( 0 != nSignalsWasLeft ) {
    sem_post( semBlockQueue );              // unblock next waiter
  }

  lock( mtxExternal );

  return ( bTimedOut ) ? ETIMEDOUT : 0;
}

signal(bAll) {

  [auto: register int result ]

  lock( mtxUnblockLock );

  if ( 0 != nWaitersToUnblock ) { // the gate is closed!!!
    if ( 0 == nWaitersBlocked ) { // NO-OP
      return unlock( mtxUnblockLock );
    }
    if (bAll) {
      nWaitersToUnblock += nWaitersBlocked;
      nWaitersBlocked = 0;
    }
    else {
      nWaitersToUnblock++;
      nWaitersBlocked--;
    }
    unlock( mtxUnblockLock );
  }
  else if ( nWaitersBlocked > nWaitersGone ) { // HARMLESS RACE CONDITION!
    sem_wait( semBlockLock ); // close the gate
    if ( 0 != nWaitersGone ) {
      nWaitersBlocked -= nWaitersGone;
      nWaitersGone = 0;
    }
    if (bAll) {
      nWaitersToUnblock = nWaitersBlocked;
      nWaitersBlocked = 0;
    }
    else {
      nWaitersToUnblock = 1;
      nWaitersBlocked--;
    }
    unlock( mtxUnblockLock );
    sem_post( semBlockQueue );
  }
  else { // NO-OP
    unlock( mtxUnblockLock );
  }

  return result;
}

---------- Algorithm 8c / IMPL_EVENT,UNBLOCK_STRATEGY == UNBLOCK_ONEBYONE ------

given:

hevBlockLock       - auto-reset event
hevBlockQueue      - auto-reset event
