its tasks faster than other processes running on the computer. The higher a thread's
priority, the more CPU time that thread will receive, and the less CPU time all other
processes and threads on the computer will receive.
<HR>
</BLOCKQUOTE>
<P>The next argument to the AfxBeginThread function is the stack size to be provided
for the new thread. The default value for this argument is 0, which provides the
thread the same size stack as the main application.</P>
<P>The next argument to the AfxBeginThread function is the thread-creation flag.
This flag can contain one of two values and controls how the thread is started. If
CREATE_SUSPENDED is passed as this argument, the thread is created in suspended mode.
The thread does not run until the ResumeThread function is called for the thread.
If you supply 0 as this argument, which is the default value, the thread begins executing
the moment it is created.</P>
<P>The final argument to the AfxBeginThread function is a pointer to the security
attributes for the thread. The default value for this argument is NULL, which causes
the thread to be created with the same security profile as the application. Unless
you are building applications to run on Windows NT and you need to provide a thread
with a specific security profile, you should always use the default value for this
argument.</P>
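<P>To put these arguments together, the following is a minimal sketch of starting a
worker thread with AfxBeginThread. The names ThreadFunc, StartWorker, and pMyData are
hypothetical and used only for illustration; the argument values match the defaults
and options just described.</P>
<PRE>
#include "stdafx.h"              // MFC precompiled header (pulls in afxwin.h)

// Thread controlling function; ThreadFunc and pMyData are hypothetical names.
UINT ThreadFunc(LPVOID pParam)
{
    // Cast pParam back to whatever structure was passed in and do the
    // thread's work here.
    return 0;                    // thread exit code
}

void StartWorker(LPVOID pMyData)
{
    CWinThread* pThread = AfxBeginThread(
        ThreadFunc,              // thread controlling function
        pMyData,                 // parameter handed to ThreadFunc
        THREAD_PRIORITY_NORMAL,  // thread priority
        0,                       // stack size: 0 = same as the main application
        CREATE_SUSPENDED,        // create suspended; runs when ResumeThread is called
        NULL);                   // security attributes: NULL = same as the application

    if (pThread != NULL)
        pThread->ResumeThread(); // start the suspended thread running
}
</PRE>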
<P>
<H4>Building Structures</H4>
<P>Imagine that you have an application running two threads, each parsing its own
set of variables at the same time. Imagine also that the application is using a global
object array to hold these variables. If the method of allocating and resizing the
array consisted of checking the current size and adding one position onto the end
of the array, your two threads might build an array populated something like the
one in Figure 18.1, where array positions populated by the first thread are intermingled
with those created by the second thread. This could easily confuse each thread as
it retrieves values from the array for its processing needs because each thread is
just as likely to pull a value that actually belongs to the other thread. This would
cause each thread to operate on wrong data and return the wrong results.</P>
<P><A HREF="javascript:popUp('18fig01.gif')"><B>FIGURE 18.1.</B></A><B> </B><I>Two
threads populating a common array.</I></P>
<P>If the application built these arrays as localized arrays, instead of global arrays,
it could keep access to each array limited to only the thread that builds the array.
In Figure 18.2, for example, there is no intermingling of data from multiple threads.
If you take this approach to using arrays and other memory structures, each thread
can perform its processing and return the results to the client, confident that the
results are correct because the calculations were performed on uncorrupted data.</P>
<P><A HREF="javascript:popUp('18fig02.gif')"><B>FIGURE 18.2.</B></A><B> </B><I>Two
threads populating localized arrays.</I></P>
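<P>As a rough sketch of this approach, each thread's controlling function can build
its working array as a local variable, so only that thread ever touches it. The names
below (ParseValues, nValues) are hypothetical:</P>
<PRE>
// Sketch only: the array is local to the thread's controlling function,
// so the data built by one thread never intermingles with another's.
UINT ParseValues(LPVOID pParam)
{
    int nValues[100];    // local to this thread; no other thread can reach it
    int nCount = 0;

    // ... parse this thread's own set of variables into nValues ...

    // Hand the results back through pParam or a message, confident that
    // no other thread has touched nValues.
    return 0;
}
</PRE>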
<P>
<H4>Managing Access to Shared Resources</H4>
<P>Not all variables can be localized, and you will often want to share some resources
between all the threads running in your applications. Such sharing creates an issue
with multithreaded applications. Suppose that three threads all share a single counter,
which is generating unique numbers. Because you don't know when control of the processor
is going to switch from one thread to the next, your application might generate duplicate
"unique" numbers, as shown in Figure 18.3.</P>
<P><A HREF="javascript:popUp('18fig03.gif')"><B>FIGURE 18.3.</B></A><B> </B><I>Three
threads sharing a single counter.</I></P>
<P>As you can see, this sharing doesn't work too well in a multithreaded application.
You need a way to limit access to a common resource to only one thread at a time.
In reality, there are four mechanisms for limiting access to common resources and
synchronizing processing between threads. Each works in a different way and is suited
to different circumstances. The four mechanisms are</P>
<P>
<UL>
<LI>Critical sections
<P>
<LI>Mutexes
<P>
<LI>Semaphores
<P>
<LI>Events
</UL>
<P><B>Critical Sections</B></P>
<P>A critical section is a mechanism that limits access to a certain resource to
a single thread within an application. A thread enters the critical section before
it needs to work with the specific shared resource and then exits the critical section
after it is finished accessing the resource. If another thread tries to enter the
critical section before the first thread exits the critical section, the second thread
is blocked and does not take any processor time until the first thread exits the
critical section, allowing the second to enter. You use critical sections to mark
sections of code that only one thread should execute at a time. This doesn't prevent
the processor from switching from that thread to another; it just prevents two or
more threads from entering the same section of code.</P>
<P>If you use a critical section with the counter shown in Figure 18.3, you can force
each thread to enter a critical section before checking the current value of the
counter. If each thread does not leave the critical section until after it has incremented
and updated the counter, you can guarantee that--no matter how many threads are executing
and regardless of their execution order--truly unique numbers are generated, as shown
in Figure 18.4.</P>
<P>If you need to use a critical section object in your application, create an instance
of the CCriticalSection class. This object contains two methods, Lock and Unlock,
which you can use to gain and release control of the critical section.</P>
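<P>For example, the counter from Figure 18.3 could be protected with a sketch like the
following; g_csCounter and GetUniqueNumber are hypothetical names:</P>
<PRE>
// A minimal sketch of guarding a shared counter with a critical section.
#include "afxmt.h"              // MFC synchronization classes

CCriticalSection g_csCounter;   // guards the shared counter
long g_nNextNumber = 0;         // the counter shared by all threads

long GetUniqueNumber()
{
    g_csCounter.Lock();              // block until no other thread is inside
    long lNumber = g_nNextNumber;    // read the current value
    g_nNextNumber = lNumber + 1;     // increment the counter
    g_csCounter.Unlock();            // let the next waiting thread in
    return lNumber;                  // a value no other thread will be given
}
</PRE>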
<P><B>Mutexes</B></P>
<P>Mutexes work in basically the same way as critical sections, but you use mutexes
when you want to share the resource between multiple applications. By using a mutex,
you can guarantee that no two threads running in any number of applications will
access the same resource at the same time.</P>
<P>Because of their availability across the operating system, mutexes carry much
more overhead than critical sections do. A mutex's lifetime does not end when the application
that created it shuts down; the mutex might still be in use by other applications,
so the operating system must track which applications are using a mutex and then
destroy the mutex once it is no longer needed. In contrast, critical sections have
little overhead because they do not exist outside the application that creates and
uses them. After the application ends, the critical section is gone.</P>
<P>If you need to use a mutex in your applications, you will create an instance of
the CMutex class. The constructor of the CMutex class has three available arguments.
The first argument is a boolean value that specifies whether the thread creating
the CMutex object is the initial owner of the mutex. If so, then this thread must
release the mutex before any other threads can access it.</P>
<P><A HREF="javascript:popUp('18fig04.gif')"><B>FIGURE 18.4.</B></A><B> </B><I>Three
threads using the same counter, which is protected by a critical section.</I></P>
<P>The second argument is the name for the mutex. All the applications that need
to share the mutex can identify it by this textual name. The third and final argument
to the CMutex constructor is a pointer to the security attributes for the mutex object.
If a NULL is passed for this pointer, the mutex object uses the security attributes
of the thread that created it.</P>
<P>Once you create a CMutex object, you can lock and unlock it using the Lock and
Unlock member functions. This allows you to build in the capabilities to control
access to a shared resource between multiple threads in multiple applications.</P>
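<P>A minimal sketch of this usage follows. The mutex name "MySharedResource" is
hypothetical; any application that constructs a CMutex with the same name is attached
to the same underlying operating-system mutex:</P>
<PRE>
// Sketch only: a named mutex shared between applications.
#include "afxmt.h"       // MFC synchronization classes

void UpdateSharedResource()
{
    // FALSE = do not take ownership at construction time
    CMutex mutex(FALSE, _T("MySharedResource"), NULL);

    mutex.Lock();        // wait until no thread in any application holds the mutex
    // ... read or write the resource shared between applications ...
    mutex.Unlock();      // release the mutex for the next thread
}
</PRE>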
<P><B>Semaphores</B></P>
<P>Semaphores work very differently from critical sections and mutexes. You use semaphores
with resources that should not be limited to a single thread at a time, but rather to
a fixed number of threads. A semaphore is a form of counter,
and threads can increment or decrement it. The trick to semaphores is that they cannot
go any lower than zero. Therefore, if a thread is trying to decrement a semaphore
that is at zero, that thread is blocked until another thread increments the semaphore.</P>
<P>Suppose you have a queue that is populated by multiple threads, and one thread
removes the items from the queue and performs processing on each item. If the queue
is empty, the thread that removes and processes items has nothing to do. This thread
could go into an idle loop, checking the queue every so often to see whether something
has been placed in it. The problem with this scenario is that the thread takes up
processing cycles doing absolutely nothing. These processor cycles could go to another
thread that does have something to do. If you use a semaphore to control the queue,
each thread that places items into the queue can increment the semaphore for each
item placed in the queue, and the thread that removes the items can decrement the
semaphore just before removing each item from the queue. If the queue is empty, the
semaphore is zero, and the thread removing items is blocked on the call to decrement
the queue. This thread does not take any processor cycles until one of the other
threads increments the semaphore to indicate that it has placed an item in the queue.
Then, the thread removing items is immediately unblocked, and it can remove the item
that was placed in the queue and begin processing it, as shown in Figure 18.5.</P>
<P>If you need to use a semaphore in your application, you can create an instance
of the CSemaphore class. This class has four arguments that can be passed to the
class constructor. The first argument is the starting usage count for the semaphore.
The second argument is the maximum usage count for the semaphore. You can use these
two arguments to control how many threads and processes can have access to a shared
resource at any one time. The third argument is the name for the semaphore, which
is used to identify the semaphore by all applications running on the system, just
as with the CMutex class.</P>
<P><A HREF="javascript:popUp('18fig05.gif')"><B>FIGURE 18.5.</B></A><B> </B><I>Multiple
threads placing objects into a queue.</I></P>
<P>The final argument is a pointer to the security attributes for the semaphore.</P>
<P>With the CSemaphore object, you can use the Lock and Unlock member functions to
gain or release control of the semaphore. When you call the Lock function, if the
semaphore usage count is greater than zero, the usage count is decremented and your
program is allowed to continue. If the usage count is already zero, the Lock function
waits until the usage count is incremented so that your process can gain access to
the shared resource. When you call the Unlock function, the usage count of the semaphore
is incremented.</P>
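<P>As a rough sketch of the queue scenario described above, the semaphore can start at
zero (an empty queue) and be incremented as items are added. The queue handling itself
is omitted, and the names are hypothetical:</P>
<PRE>
// Sketch only: a semaphore counting the items in a shared queue.
#include "afxmt.h"                // MFC synchronization classes

CSemaphore g_semItems(0, 1000);   // initial count 0, maximum count 1000

void AddItemToQueue()
{
    // ... place the item into the shared queue ...
    g_semItems.Unlock(1);         // increment the count: one more item available
}

void ProcessNextItem()
{
    g_semItems.Lock();            // decrement the count; blocks while the count is zero
    // ... remove the next item from the queue and process it ...
}
</PRE>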
<P><B>Events</B></P>
<P>As much as thread synchronization mechanisms are designed to control access to
limited resources, they are also intended to prevent threads from using unnecessary
processor cycles. The more threads running at one time, the slower each of those
threads performs its tasks. Therefore, if a thread does not have anything to do,
block it and let it sit idle, allowing other threads to use more processor time and
thus run faster until the conditions are met that provide the idle thread with something
to do.</P>
<P>This is why you use events--to allow threads to be idle until the conditions are
such that they have something to do. Events take their name from the events that
drive most Windows applications, only with a twist. Thread synchronization events
do not use the normal event queuing and handling mechanisms. Instead of being assigned
a number and then waiting for that number to be passed through the Windows event
handler, thread synchronization events are actual objects held in memory. Each thread
that needs to wait for an event tells the event that it is waiting for it to be triggered
and then goes to sleep. When the event is triggered, it sends wake-up calls to every
thread that told it that it was waiting to be triggered. The threads pick up their
processing at the exact point where they each told the event that they were waiting
for it.</P>
<P>If you need to use an event in your application, you can create a CEvent object.
A thread that needs to wait for the event calls the object's Lock member function,
which blocks the thread. Once Lock returns, the event has been signaled and your thread
can continue on its way.</P>
<P>The constructor for the CEvent class can take four arguments. The first argument
is a boolean flag to indicate whether the thread creating the event will own it initially.
This value should be set to TRUE if the thread creating the CEvent object is the
thread that will determine when the event has occurred.</P>
<P>The second argument to the CEvent constructor specifies whether the event is an
automatic or manual event. A manual event remains in the signaled or unsignaled state
until it is specifically set to the other state by the thread that owns the event
object. An automatic event remains in the unsignaled state most of the time. When
the event is set to the signaled state, a single waiting thread is released to continue
on its execution path, and the event then returns to the unsignaled state.</P>
<P>The third argument to the event constructor is the name for the event. This name
will be used to identify the event by all threads that need to access the event.
The fourth and final argument is a pointer to the security attributes for the event
object.</P>
<P>The CEvent class has several member functions that you can use to control the
state of the event. Table 18.2 lists these functions.</P>
<P>
<H4>TABLE 18.2. CEvent MEMBER FUNCTIONS.</H4>
<P>
<TABLE BORDER="1">
<TR ALIGN="LEFT" VALIGN="TOP">
<TD ALIGN="LEFT"><I>Function</I></TD>
<TD ALIGN="LEFT"><I>Description</I></TD>
</TR>
<TR ALIGN="LEFT" VALIGN="TOP">
<TD ALIGN="LEFT">SetEvent </TD>
<TD ALIGN="LEFT">Puts the event into the signaled state. </TD>
</TR>
<TR ALIGN="LEFT" VALIGN="TOP">
<TD ALIGN="LEFT">PulseEvent </TD>
<TD ALIGN="LEFT">Puts the event into the signaled state and then resets the event back to the unsignaled
state. </TD>
</TR>
<TR ALIGN="LEFT" VALIGN="TOP">
<TD ALIGN="LEFT">ResetEvent </TD>
<TD ALIGN="LEFT">Puts the event into the unsignaled state. </TD>
</TR>
<TR ALIGN="LEFT" VALIGN="TOP">
<TD ALIGN="LEFT">Unlock </TD>
<TD ALIGN="LEFT">Releases the event object. </TD>
</TR>
</TABLE>
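<P>For example, the following sketch shows one thread waiting on an event that another
thread signals. The object name g_eventDataReady is hypothetical; because it is
constructed with the defaults, it is an automatic, initially unsignaled event:</P>
<PRE>
// Sketch only: one thread sleeps on an event until another thread signals it.
#include "afxmt.h"            // MFC synchronization classes

CEvent g_eventDataReady;      // defaults: automatic event, initially unsignaled

UINT WaitingThread(LPVOID pParam)
{
    g_eventDataReady.Lock();  // sleeps here, using no processor cycles,
                              // until the event is signaled
    // ... the data is ready; pick up processing from this point ...
    return 0;
}

void SignalDataReady()
{
    g_eventDataReady.SetEvent();  // wake one waiting thread
}
</PRE>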
<H2><A NAME="Heading5"></A>Building a Multitasking Application</H2>
<P>To see how you can create your own multitasking applications, you'll create an
application that has four spinning color wheels, each running on its own thread.
Two of the spinners will use the OnIdle function, and the other two will run as independent
threads. This setup will enable you to see the difference between the two types of
threading, as well as learn how you can use each. Your application window will have
four check boxes to start and stop each of the threads so that you can see how much
load is put on the system as each runs alone or in combination with the others.</P>
<P>
<H3><A NAME="Heading6"></A>Creating a Framework</H3>
<P>For the application that you will build today, you'll need an SDI application
framework, with the view class inherited from the CFormView class, so that you can
use the dialog editor to lay out the few controls on the window. It will use the
document class to house the spinners and the independent threads, whereas the view
will have the check boxes and variables that control whether each thread is running
or idle.</P>
<P>To create the framework for your application, create a new project workspace using
the MFC Application Wizard. Give your application a suitable project name, such as
Tasking.</P>
<P>In the AppWizard, specify that you are creating a single document (SDI) application.
You can accept the defaults through most of the rest of the AppWizard, although you
won't need support for ActiveX controls, a docking toolbar, the initial status bar,
or printing and print preview, so you can unselect these options if you so desire.
Once you reach the final AppWizard step, specify that your view class is inherited
from the CFormView class.</P>
<P>Once you create the application framework, remove the static text from the main
application window, and add four check boxes at approximately the upper-left corner
of each quarter of the window space, as in Figure 18.6. Set the properties of the
check boxes as in Table 18.3.</P>
<P>
<H4>TABLE 18.3. CONTROL PROPERTY SETTINGS.</H4>
<P>
<TABLE BORDER="1">
<TR ALIGN="LEFT" VALIGN="TOP">
<TD ALIGN="LEFT"><I>Object</I></TD>
<TD ALIGN="LEFT"><I>Property</I></TD>
<TD ALIGN="LEFT"><I>Setting</I></TD>