Ladies and Gentlemen, Start Your Testing</FONT></A></H2>
<P>
There are a number of schools of thought on how to debug an application.
One of these is the "pound on it 'til it breaks" school,
and it works like this:
<OL>
<LI>Put the program somewhere.
<LI>Randomly, and aggressively, do anything you can think of to
it.
<LI>Fix whatever appears to be broken.
</OL>
<P>
If you have a couple hundred monkeys with keyboards and some spare
time, this can be a great testing method. Of course, you could
just as easily have the monkeys write the code itself and hope
for the best. This isn't to say that some good old fashioned boot
stomping on the application doesn't help as part of an organized
testing situation, but it ends up wasting your time as a programmer.
How do the testers know what's really a bug, and what's just a
function? Who instructs them? What if they don't know what kind
of results you're looking for?
<P>
Sure, you could test a search engine by having people type things
into it and seeing if they get a response result. But what if
all the responses point to the same place, even though the labels
for the pointers say they're different? And what's to say there
isn't some combination that's not being tested? You need to get
organized to get results.
<H3><A NAME="TheTestingProcess">The Testing Process</A></H3>
<P>
To really do some testing, you need two things: people to do the
testing and a plan of attack for how you're going to do it.
<H4>Marshal Your Forces</H4>
<P>
Testing an application by yourself is not the best possible option.
If you're testing by yourself, you normally have a pretty short
list of resources: you, some caffeine, a computer, and lots of
time. You're just one person, and you're also biased: you wrote
the code. This means that you might, even subconsciously, miss
seeing small problems, because you relate them to something else
that you were thinking of adding later, or that you didn't take
out in the first version. It also means that it's going to be
a long time before your program can be completely tested, and
that, while you're working on fixing any problems, no other testing
is taking place.
<P>
By corralling a few of your friends, co-workers, neighbors, or
relatives, you can create a team of testers. These don't have
to be programmers; they don't even have to be familiar with computers.
All you have to do is show them what to do and let them go after
it. The purpose, after all, of CGI programs is to let a wide variety
of people use them to perform a function. Sometimes the problems
you can find in an application aren't bugs, they're design flaws.
You don't necessarily have to admit them to people, but you should
certainly be willing to be flexible. After all, you're not necessarily
the one who's going to be using the program most of the time.
<P>
The number of people you need for testing your program is relative
to the importance of the finished product, as well as the anticipated
number of users. If it's an unimportant system administration
tool that you and maybe two other people will be using, then just
you and those other two people should be more than enough. If
it's something more important, like an online tax return helper,
you better start calling in favors from everyone you know.
<P>
Once you have these piles of people, there's an important thing
you need to think about: What the heck are they going to do? You
can't sit them down in front of a machine and say "OK, test
it!" You have to create an organized plan for which elements
of the program should be tested in what order, and how. Even if
you're stuck doing it by yourself, this is necessary to keep both
your sanity and your time well under control.
<H4>Elements of a Testing Plan</H4>
<P>
A testing plan is like a battle plan: you have your objectives,
you know your resources, and you analyze the best way to take
control of the situation. You have to approach it in an organized
and methodical manner to make sure you, and any people you have
helping you, don't miss something that's going to harm the program
when it's found later.
<P>
You've already completed two parts of a testing plan: reviewing
your work and testing it on a command line. Now you need to organize
your methods into more Web server-focused efforts.
<P>
First, look at the program and see what it is you're allowing
people to do. Are they searching for text? Filling out a survey?
Trying to be directed to a random link? If you're accepting input,
ask yourself the following questions:
<UL>
<LI><FONT COLOR=#000000>Are the instructions clear on what information
they should supply?</FONT>
<LI><FONT COLOR=#000000>What format am I expecting information
in?</FONT>
<LI><FONT COLOR=#000000>What if the information they submit isn't
in that form?</FONT>
<LI><FONT COLOR=#000000>What limits have I placed on the amounts
or types of input?</FONT>
</UL>
<P>
For every action that you allow the user, you need to verify the
data that corresponds to that request. If you ask them to type
in a serial number, are you checking to see if it follows a specific
convention? Are you checking to see if they enter anything at
all? One of the first things you can do is create a short list
of what kind of data you're expecting. Table 6.1 shows how this
might be laid out for our sample.<BR>
<P>
<CENTER><B>Table 6.1. Laying out data to be used in your program.</B></CENTER>
<CENTER><TABLE BORDERCOLOR=#000000 BORDER=1 WIDTH=80%>
<TR><TD><I>Data</I></TD><TD WIDTH=187><I>Expected Format</I>
</TD><TD WIDTH=168><I>Special Considerations</I></TD></TR>
<TR><TD WIDTH=121>Name</TD><TD WIDTH=187>Text, up to 40 characters
</TD><TD WIDTH=168>Generates error if left blank</TD></TR>
<TR><TD WIDTH=121>E-mail Address</TD><TD WIDTH=187>Text, up to 60 characters, containing the '@' symbol
</TD><TD WIDTH=168>Generates error if left blank or if '@' symbol not present
</TD></TR>
<TR><TD WIDTH=121>Comments</TD><TD WIDTH=187>Text, up to 500 characters
</TD><TD WIDTH=168>None</TD></TR>
</TABLE></CENTER>
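<P>
The rules in Table 6.1 translate almost directly into code. The following is a minimal sketch of such a checker; the field names (name, email, comments) are assumptions for illustration and should match whatever your own form actually uses:

```perl
#!/usr/bin/perl
# Sketch: validate incoming form data against the rules in Table 6.1.
# Field names here are hypothetical -- adjust to match your form.
use strict;

sub validate_form {
    my (%input) = @_;
    my @errors;

    # Name: required, up to 40 characters
    push @errors, "Name is blank"
        unless defined $input{'name'} && $input{'name'} ne '';
    push @errors, "Name too long"
        if defined $input{'name'} && length($input{'name'}) > 40;

    # E-mail: required, up to 60 characters, must contain '@'
    if (!defined $input{'email'} || $input{'email'} eq '') {
        push @errors, "E-mail address is blank";
    } else {
        push @errors, "E-mail address missing '\@'"
            unless $input{'email'} =~ /\@/;
        push @errors, "E-mail address too long"
            if length($input{'email'}) > 60;
    }

    # Comments: optional, up to 500 characters
    push @errors, "Comments too long"
        if defined $input{'comments'} && length($input{'comments'}) > 500;

    return @errors;
}

# A quick check: blank name plus a malformed address should fail twice.
my @errors = validate_form(name => '', email => 'foo');
print scalar(@errors), " error(s) found\n";
```

Once a routine like this exists, every row you add to the table becomes one more check in one place, instead of being scattered through the program.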
<P>
<P>
This immediately gives you something to experiment with. If you
fill out only one field, you should be getting at least one error
(preferably two). If you try it, and it merrily accepts just the
one field you entered, you know immediately that there's a problem.
You can go ahead and check any elements that require special formatting,
such as the e-mail address. If you type in "foobar.com"
(no '@' symbol), it should generate an error. If it doesn't, you've got another
problem.
<P>
This kind of testing is the first step in verifying input, and
is called Boundary Testing: you know what you're expecting to receive,
and what limits you've placed on what people should type in. You
need to verify that the program behaves as expected when accepting
the data, especially if the data does not fall in the accepted
value boundaries.
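<P>
Boundary cases are easiest to cover if you write them down as a list: one value at each limit, one just over it, and one empty. A sketch, again with hypothetical limits taken from Table 6.1:

```perl
#!/usr/bin/perl
# Sketch: boundary test cases built from the limits in Table 6.1.
# Each case records the value, its limit, and whether an error is expected.
use strict;

# Flags a value that is blank or longer than its limit.
sub out_of_bounds {
    my ($value, $limit) = @_;
    return ($value eq '' || length($value) > $limit) ? 1 : 0;
}

my @cases = (
    # [ description,        value,      limit, expect_error ]
    [ "name at limit",      'x' x 40,   40,    0 ],
    [ "name one over",      'x' x 41,   40,    1 ],
    [ "name empty",         '',         40,    1 ],
    [ "comments at limit",  'x' x 500,  500,   0 ],
    [ "comments one over",  'x' x 501,  500,   1 ],
);

for my $case (@cases) {
    my ($desc, $value, $limit, $expect) = @$case;
    my $got = out_of_bounds($value, $limit);
    print "$desc: ", ($got == $expect ? "ok" : "FAILED"), "\n";
}
```

The "one over the limit" and "empty" cases are the ones programs most often miss, which is exactly why they belong in the list.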
<P>
It's very important to keep in mind that, just because you've
somehow limited what people can type in (through a form tag or
other front end interface), you can't guarantee that data coming
in will conform to those specifications. Remember the command
line tests, where you can specify your own <TT><FONT FACE="Courier">QUERY_STRING</FONT></TT>
and other data? It's easy for someone to write a program that
does the same kind of thing, except, instead of executing a local
script, it executes a remote one such as yours. This isn't a very
common thing to encounter, but your script shouldn't rely on the
"Well, that'll never happen" theory. If you do, Murphy's
Law steps in and beats you about the head and shoulders when you
least expect it. If data is supposed to be in a particular form,
or of a certain size, make sure your program won't choke on things
that don't meet those criteria.
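<P>
To see why the form's limits protect nothing, consider a hand-built query string: anyone can send one without ever loading your form. The sketch below fakes such a request and clamps the oversized field on the server side (the field names and the simplified parsing, with no URL decoding, are for illustration only):

```perl
#!/usr/bin/perl
# Sketch: a forged query string ignores any MAXLENGTH the form declared.
# The script must enforce its own limits on whatever arrives.
use strict;

# Anyone can build a request like this by hand -- no form required:
my $forged = "name=" . ('A' x 5000) . "&serial=bogus";

# Parse it the way a CGI script would (simplified; no URL decoding).
my %input;
for my $pair (split /&/, $forged) {
    my ($key, $value) = split /=/, $pair, 2;
    $input{$key} = $value;
}

# Enforce the 40-character limit server-side, whatever the form promised.
$input{'name'} = substr($input{'name'}, 0, 40)
    if length($input{'name'}) > 40;

print "name length after clamping: ", length($input{'name'}), "\n";
```

Clamping, rejecting, or escaping are all reasonable responses; the one wrong response is assuming the data already obeys the form.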
<P>
Besides allotting time for some Boundary Testing, you should take
examination of the data to the next step: Input Verification. You
want to ensure that once data gets to your application it's being
interpreted correctly and not mangled by some other process. This
can be done with something as simple as a feedback script, which
just echoes out what was typed in, before the script continues
with the rest of its functions. Listing 6.5 shows an example of
placing Input Verification at the beginning of an application.
<HR>
<BLOCKQUOTE>
<B>Listing 6.5. An example feedback script, for examining received
data.<BR>
</B>
</BLOCKQUOTE>
<BLOCKQUOTE>
<TT><FONT FACE="Courier">#!/bin/perl<BR>
<BR>
require 'cgi-lib.pl';<BR>
<BR>
#Grab the incoming data and place it in variables<BR>
&ReadParse(*input);<BR>
<BR>
# Let's see what we've got..<BR>
print "Content-type: text/html \n\n";<BR>
print "Name received was: $input{'name'} <br>\n";
<BR>
print "Serial number received was: $input{'serial'} <br>\n";
<BR>
print "Comments received were: $input{'comments'} <br>\n";
<BR>
<BR>
#Do the rest of the program<BR>
.......</FONT></TT>
</BLOCKQUOTE>
<HR>
<P>
Once you've verified that you're actually getting the data you
think you're getting, it's time to see what the processes in your
program are doing with it. Based on the input, which you can now
verify if it's correct or not, your program should be able to
run through its processes correctly and generate the output you're
expecting.
<H4>Running Through the Processes</H4>
<P>
As mentioned earlier, you could easily just bang on the program
randomly and look to see what happens. This isn't going to get
you very far very fast. What you need is an organized testing
plan that covers not only every function, but every situation
that could be encountered. As your application gets more complex,
this becomes pretty involved.
<P>
A good testing plan is one that covers all the functions in the
application one by one, as well as en masse. Just because the
first subroutine works is no reason to celebrate. It's good, but
the whole application has to work before you can put the application
up for general access.
<P>
The first step towards this is to review your code and see what
major sections of functionality there are. If there's only one,
you can just break that out into a list of specific functions.
If there's more than one, each one of those parts should comprise
a testing category, such as Receiving User Data, Checking Serial
Number, Saving Data to Log File, Creating HTML Output, and so
on…whatever components best describe sections of work that
are done in your program.
<P>
Once you have these sections, review what each section needs in
order to do its job. If you need a valid serial number before
going through the portion of your code where it generates HTML
output, any testing sequence that is just supposed to target the
HTML generating portion will have to take that into account, through
hard-coding or some other bypass method.
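<P>
One simple bypass method is a test switch near the top of the script. The names below ($TESTING, check_serial, make_html) are hypothetical stand-ins for whatever your own sections are called:

```perl
#!/usr/bin/perl
# Sketch: a test switch that bypasses the serial-number check so the
# HTML-generating section can be exercised on its own.
use strict;

my $TESTING = 1;   # set to 0 before putting the script up for real

sub check_serial {
    my ($serial) = @_;
    return 1 if $TESTING;               # bypass: accept any serial
    return $serial =~ /^\d{4}-\d{4}$/;  # the real convention
}

sub make_html {
    my ($name) = @_;
    return "<HTML><BODY>Thanks, $name!</BODY></HTML>\n";
}

# With $TESTING on, the HTML section runs even with junk input.
if (check_serial("not-a-real-serial")) {
    print make_html("Tester");
}
```

The danger with any bypass is forgetting to remove it, so make the switch a single, loudly commented line rather than scattered hard-coded values.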
<H4>Is Automated Testing Right for You?</H4>
<P>
Automated testing is tough to set up, but it has the advantage
that once it's set up it can make testing an application very
easy. The simplest form of automated testing is a command-line
script that reads test cases from a text file, and then sends
that test data to the application and records the output to a
file to be examined later. More sophisticated options include
custom-made programs that test application speed and results against
expected output. They end up recording problems or desired test
data to the file, reducing the amount of time that anyone has
to spend sorting through the results.
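<P>
The simplest driver described above can be sketched in a few lines. The file names, the "input|expected" case format, and the check_email stand-in are all assumptions; in practice the stand-in would be a call into your real application:

```perl
#!/usr/bin/perl
# Sketch: read test cases from a text file, run each through the routine
# under test, and record PASS/FAIL to a results file for later review.
use strict;

# Create a sample case file: one "input|expected output" pair per line.
open(my $out, '>', 'cases.txt') or die "can't write cases.txt: $!";
print $out "bob\@example.com|ok\n";
print $out "no-at-sign|error\n";
close $out;

# Stand-in for the real application call.
sub check_email { return ($_[0] =~ /\@/) ? 'ok' : 'error'; }

# Run every case and log the outcome.
open(my $cases,   '<', 'cases.txt')   or die "can't read cases.txt: $!";
open(my $results, '>', 'results.txt') or die "can't write results.txt: $!";
while (my $line = <$cases>) {
    chomp $line;
    my ($input, $expected) = split /\|/, $line, 2;
    my $got = check_email($input);
    print $results (($got eq $expected) ? "PASS" : "FAIL"), ": $input\n";
}
close $cases;
close $results;
```

Because the cases live in a plain text file, non-programmers on your testing team can add new ones without touching the driver at all.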
<P>
As a general rule, the more your application is seen, the more
seriously you should consider automated testing. If it's for a
commercial service or for something that should be self-sufficient,
that adds more value to automated testing as well. You should
take into account, though, that there's a point where automated
testing efforts are more work than creating the original program.