<!-- tutorial.sgml -->
<programlisting>fail_unless(money_amount(m) == 5, NULL);</programlisting> <para>This is equivalent to the line: </para> <programlisting>fail_unless(money_amount(m) == 5, "Assertion 'money_amount (m) == 5' failed");</programlisting> <para>All fail functions also support varargs and accept printf-style format strings and arguments. This is especially useful while debugging. With printf-style formatting the message could look like this: </para> <programlisting>fail_unless(money_amount(m) == 5, "Amount was %d, instead of 5", money_amount(m));</programlisting> <para>When we try to compile and run the test suite now, we get a whole host of compilation errors. It may seem a bit strange to deliberately write code that won't compile, but notice what we are doing: in creating the unit test, we are also defining requirements for the money interface. Compilation errors are, in a way, unit test failures of their own, telling us that the implementation does not match the specification. If all we do is edit the sources so that the unit test compiles, we are actually making progress, guided by the unit tests, so that's what we will now do. </para> <para>We will add the following to our header money.h: </para> <programlisting>typedef struct Money Money;

Money *money_create(int amount, char *currency);
int money_amount(Money *m);
char *money_currency(Money *m);
void money_free(Money *m);</programlisting> <para>and our code now compiles, but fails to link, since we haven't implemented any of the functions. Let's do that now, creating stubs for all of the functions: </para> <programlisting>#include <stdlib.h>

#include "money.h"

Money *money_create(int amount, char *currency)
{
  return NULL;
}

int money_amount(Money *m)
{
  return 0;
}

char *money_currency(Money *m)
{
  return NULL;
}

void money_free(Money *m)
{
  return;
}</programlisting> <para>Now, everything compiles, and we still pass all our tests. How can that be? Of course -- we haven't run any of our tests yet....
</para> </section> <section> <title>Creating a suite </title> <para>To run unit tests with Check, we must create some test cases, aggregate them into a suite, and run them with a suite runner. That's a bit of overhead, but it is mostly one-off. Here's the code in check_money.c. Note that we include stdlib.h to get the definitions of EXIT_SUCCESS and EXIT_FAILURE, and string.h for strcmp. </para> <programlisting>#include <stdlib.h>
#include <string.h>
#include <check.h>

#include "money.h"

START_TEST (test_create)
{
  Money *m;

  m = money_create(5, "USD");
  fail_unless(money_amount(m) == 5,
              "Amount not set correctly on creation");
  fail_unless(strcmp(money_currency(m), "USD") == 0,
              "Currency not set correctly on creation");
  money_free(m);
}
END_TEST

Suite *money_suite(void)
{
  Suite *s = suite_create("Money");
  TCase *tc_core = tcase_create("Core");

  suite_add_tcase(s, tc_core);
  tcase_add_test(tc_core, test_create);
  return s;
}

int main(void)
{
  int nf;
  Suite *s = money_suite();
  SRunner *sr = srunner_create(s);

  srunner_run_all(sr, CK_NORMAL);
  nf = srunner_ntests_failed(sr);
  srunner_free(sr);
  return (nf == 0) ? EXIT_SUCCESS : EXIT_FAILURE;
}</programlisting> <para>Most of the money_suite code should be self-explanatory. We are creating a suite, creating a test case, adding the test case to the suite, and adding the unit test we created above to the test case. Why put this in a separate function, rather than inlining it in main? Because any new tests will get added in money_suite, but nothing will need to change in main for the rest of this example, so main will stay relatively clean and simple. </para> <para>Unit tests are internally defined as static functions. This means that the code to add unit tests to test cases must be in the same compilation unit as the unit tests themselves.
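To see why, it helps to picture what the macros produce; roughly speaking (this is a conceptual sketch, not Check's exact macro expansion, which varies between versions), START_TEST begins a static function definition and END_TEST closes it: <programlisting>/* Conceptual sketch only -- not Check's real macro expansion. */
static void test_create(int _i)  /* begun by START_TEST (test_create) */
{
  /* ...the test body, with its fail_unless calls... */
  (void) _i;  /* _i is used by Check when a test is run in a loop */
}             /* closed by END_TEST */</programlisting>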
This provides another reason to put the creation of the test suite in a separate function: you may later want to keep one source file per suite, and defining a uniquely named suite creation function lets you declare prototypes for all the suite creation functions in a single header file, encapsulating the details of where and how unit tests are defined behind those functions. See the test program defined for Check itself for an example of this strategy. </para> <para>The code in main bears some explanation. We are creating a suite runner object from the suite we created in money_suite. We then run the suite, using the CK_NORMAL flag to specify that we should print a summary of the run and list any failures that occurred. We capture the number of failures that occurred during the run, and use that to decide the return value. The check target created by Automake uses the return value to decide whether the tests passed or failed. </para> </section> <section> <title>SRunner output </title> <para>The function to run tests in an SRunner is defined as follows: </para> <programlisting>void srunner_run_all(SRunner *sr, enum print_output print_mode);</programlisting> <para>This function does two things: </para> <orderedlist> <listitem> <para>Runs all of the unit tests for all of the test cases defined for all of the suites in the SRunner, and collects the results in the SRunner </para> </listitem> <listitem> <para>Prints the results according to the print mode specified </para> </listitem> </orderedlist> <para>For SRunners that have already been run, there is also a separate printing function, defined as follows: </para> <programlisting>void srunner_print(SRunner *sr, enum print_output print_mode);</programlisting> <para>The enumeration values defined in Check to control print output are as follows: </para> <variablelist> <varlistentry> <term>CK_SILENT</term> <listitem><para>Specifies that no output is to be generated.
If you use this flag, you either need to examine the SRunner object programmatically, print separately, or use test logging (described below: <link linkend="TestLogging">Test Logging</link>). </para></listitem> </varlistentry> <varlistentry> <term>CK_MINIMAL</term> <listitem><para>Only a summary of the test run will be printed (number run, passed, failed, errors). </para></listitem> </varlistentry> <varlistentry> <term>CK_NORMAL</term> <listitem><para>Prints the summary of the run, and one message per failed test. </para></listitem> </varlistentry> <varlistentry> <term>CK_VERBOSE</term> <listitem><para>Prints the summary, and one message per test (passed or failed). </para></listitem> </varlistentry> <varlistentry> <term>CK_ENV</term> <listitem><para>Gets the print mode from the environment variable CK_VERBOSITY, which can have the values "silent", "minimal", "normal", or "verbose". If the variable is not found or the value is not recognized, the print mode is set to CK_NORMAL. </para></listitem> </varlistentry> </variablelist> <para>With the CK_NORMAL flag specified, let's rerun make check now. We get the following satisfying output: </para> <programlisting>Running suite(s): Money
0%: Checks: 1, Failures: 1, Errors: 0
check_money.c:9:F:Core:test_create: Amount not set correctly on creation</programlisting> <para>The first number in the summary line tells us that 0% of our tests passed, and the rest of the line tells us that there was one check, and one failure. The next line tells us exactly where that failure occurred, and what kind of failure it was (P for pass, F for failure, E for error). </para> <para>Let's implement the money_amount function, so that it will pass its tests.
We first have to create a Money structure to hold the amount: </para> <programlisting>struct Money
{
  int amount;
};</programlisting> <para>Then we will implement the money_amount function to return the correct amount: </para> <programlisting>int money_amount(Money *m)
{
  return m->amount;
}</programlisting> <para>We will now rerun make check and... What's this? The output is now as follows: </para> <programlisting>Running suite(s): Money
0%: Checks: 1, Failures: 0, Errors: 1
check_money.c:5:E:Core:test_create: (after this point) Received signal 11</programlisting> <para>What does this mean? Note that we now have an error, rather than a failure. This means that our unit test either exited early, or was signaled. Next, note that the failure message says “after this point”. This means that somewhere after the point noted (check_money.c, line 5) there was a problem: signal 11 (AKA segmentation fault). The last point reached is set on entry to the unit test, and after every call to fail_unless, fail, or the special function mark_point. E.g., if we wrote some test code as follows: </para> <programlisting>stuff_that_works();
mark_point();
stuff_that_dies();</programlisting> <para>then the point reported will be the one marked by mark_point. </para> <para>The reason our test failed so horribly is that we haven't implemented money_create to create any money. Go ahead and implement that, and money_currency, to make the unit tests pass. </para> </section> </chapter> <chapter> <title>Advanced Features </title> <section> <title>Running multiple cases </title> <para>What happens if we pass -1 as the amount in money_create? What should happen? Let's write a unit test. Since we are testing limits, we should also test what happens when we create money with amount 0: </para>
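<para>A sketch of what those tests might look like (the names test_neg_create and test_zero_create, and the policy that money_create returns NULL for a negative amount, are illustrative assumptions -- we haven't yet decided what the behavior should be): </para> <programlisting>START_TEST (test_neg_create)
{
  Money *m = money_create(-1, "USD");
  fail_unless(m == NULL,
              "NULL should be returned on attempt to create with "
              "a negative amount");
}
END_TEST

START_TEST (test_zero_create)
{
  Money *m = money_create(0, "USD");
  fail_unless(money_amount(m) == 0, "Zero is a valid amount of money");
  money_free(m);
}
END_TEST</programlisting> <para>These would be registered with tcase_add_test in money_suite, just like test_create. </para>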