<!-- $Header: /cvsroot/pgsql/doc/src/sgml/regress.sgml,v 1.35.2.1 2003/11/04 09:45:29 petere Exp $ -->

<chapter id="regress">
 <title id="regress-title">Regression Tests</title>

 <indexterm zone="regress">
  <primary>regression tests</primary>
 </indexterm>

 <indexterm zone="regress">
  <primary>test</primary>
 </indexterm>

 <para>
  The regression tests are a comprehensive set of tests for the SQL
  implementation in <productname>PostgreSQL</productname>.  They test
  standard SQL operations as well as the extended capabilities of
  <productname>PostgreSQL</productname>.  From
  <productname>PostgreSQL</productname> 6.1 onward, the regression
  tests are current for every official release.
 </para>

 <sect1 id="regress-run">
  <title>Running the Tests</title>

  <para>
   The regression tests can be run against an already installed and
   running server, or using a temporary installation within the build
   tree.  Furthermore, there is a <quote>parallel</quote> and a
   <quote>sequential</quote> mode for running the tests.  The
   sequential method runs each test script in turn, whereas the
   parallel method starts up multiple server processes to run groups
   of tests in parallel.  Parallel testing gives confidence that
   interprocess communication and locking are working correctly.  For
   historical reasons, the sequential test is usually run against an
   existing installation and the parallel method against a temporary
   installation, but there are no technical reasons for this.
  </para>

  <para>
   To run the regression tests after building but before installation,
   type
<screen>
gmake check
</screen>
   in the top-level directory.  (Or you can change to
   <filename>src/test/regress</filename> and run the command there.)
   This will first build several auxiliary files, such as some sample
   user-defined trigger functions, and then run the test driver
   script.  At the end you should see something like
<screen>
<computeroutput>
======================
 All 93 tests passed.
======================
</computeroutput>
</screen>
   or otherwise a note about which tests failed.  See <xref
   linkend="regress-evaluation"> below for more.
  </para>

  <para>
   Because this test method runs a temporary server, it will not work
   when you are the root user (since the server will not start as
   root).  If you already did the build as root, you do not have to
   start all over.  Instead, make the regression test directory
   writable by some other user, log in as that user, and restart the
   tests.  For example
<screen>
<prompt>root# </prompt><userinput>chmod -R a+w src/test/regress</userinput>
<prompt>root# </prompt><userinput>chmod -R a+w contrib/spi</userinput>
<prompt>root# </prompt><userinput>su - joeuser</userinput>
<prompt>joeuser$ </prompt><userinput>cd <replaceable>top-level build directory</></userinput>
<prompt>joeuser$ </prompt><userinput>gmake check</userinput>
</screen>
   (The only possible <quote>security risk</quote> here is that other
   users might be able to alter the regression test results behind
   your back.  Use common sense when managing user permissions.)
  </para>

  <para>
   Alternatively, run the tests after installation.
  </para>

  <para>
   The parallel regression test starts quite a few processes under
   your user ID.  Presently, the maximum concurrency is twenty
   parallel test scripts, which means sixty processes: there's a
   server process, a <application>psql</>, and usually a shell parent
   process for the <application>psql</> for each test script.  So if
   your system enforces a per-user limit on the number of processes,
   make sure this limit is at least seventy-five or so, else you may
   get random-seeming failures in the parallel test.  If you are not
   in a position to raise the limit, you can cut down the degree of
   parallelism by setting the <literal>MAX_CONNECTIONS</> parameter.
   For example,
<screen>
gmake MAX_CONNECTIONS=10 check
</screen>
   runs no more than ten tests concurrently.
  </para>
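  <para>
   As an illustration only (the option letter and its availability
   vary between shells and platforms), many shells let you inspect and,
   up to the hard limit, raise the per-user process limit with the
   <command>ulimit</command> builtin in the same shell session that
   runs the tests.  The value shown here is just a placeholder:
<screen>
ulimit -u              # show the current per-user process limit
ulimit -u 200          # raise it, if the hard limit allows
gmake check
</screen>
  </para>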
  <para>
   On some systems, the default Bourne-compatible shell
   (<filename>/bin/sh</filename>) gets confused when it has to manage
   too many child processes in parallel.  This may cause the parallel
   test run to lock up or fail.  In such cases, specify a different
   Bourne-compatible shell on the command line, for example:
<screen>
gmake SHELL=/bin/ksh check
</screen>
   If no non-broken shell is available, you may be able to work around
   the problem by limiting the number of connections, as shown above.
  </para>

  <para>
   To run the tests after installation<![%standalone-ignore;[ (see
   <xref linkend="installation">)]]>, initialize a data area and start
   the server, <![%standalone-ignore;[as explained in <xref linkend="runtime">, ]]>
   then type
<screen>
gmake installcheck
</screen>
   The tests will expect to contact the server at the local host and
   the default port number, unless directed otherwise by
   <envar>PGHOST</envar> and <envar>PGPORT</envar> environment
   variables.
  </para>
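  <para>
   For instance, to run the installed-server tests against a server
   listening on a non-default port (the port number below is only a
   placeholder), set the environment variable on the command line:
<screen>
PGPORT=5433 gmake installcheck
</screen>
  </para>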
 </sect1>

 <sect1 id="regress-evaluation">
  <title>Test Evaluation</title>

  <para>
   Some properly installed and fully functional
   <productname>PostgreSQL</productname> installations can
   <quote>fail</quote> some of these regression tests due to
   platform-specific artifacts such as varying floating-point
   representation and time zone support.  The tests are currently
   evaluated using a simple <command>diff</command> comparison against
   the outputs generated on a reference system, so the results are
   sensitive to small system differences.  When a test is reported as
   <quote>failed</quote>, always examine the differences between
   expected and actual results; you may well find that the differences
   are not significant.  Nonetheless, we still strive to maintain
   accurate reference files across all supported platforms, so it can
   be expected that all tests pass.
  </para>

  <para>
   The actual outputs of the regression tests are in files in the
   <filename>src/test/regress/results</filename> directory.  The test
   script uses <command>diff</command> to compare each output file
   against the reference outputs stored in the
   <filename>src/test/regress/expected</filename> directory.  Any
   differences are saved for your inspection in
   <filename>src/test/regress/regression.diffs</filename>.  (Or you
   can run <command>diff</command> yourself, if you prefer.)
  </para>

  <sect2>
   <title>Error message differences</title>

   <para>
    Some of the regression tests involve intentional invalid input
    values.  Error messages can come from either the
    <productname>PostgreSQL</productname> code or from the host
    platform system routines.  In the latter case, the messages may
    vary between platforms, but should reflect similar information.
    These differences in messages will result in a
    <quote>failed</quote> regression test that can be validated by
    inspection.
   </para>
  </sect2>

  <sect2>
   <title>Locale differences</title>

   <para>
    If you run the tests against an already-installed server that was
    initialized with a collation-order locale other than C, then there
    may be differences due to sort order and follow-up failures.  The
    regression test suite is set up to handle this problem by
    providing alternative result files that together are known to
    handle a large number of locales.  For example, for the
    <literal>char</literal> test, the expected file
    <filename>char.out</filename> handles the <literal>C</> and
    <literal>POSIX</> locales, and the file
    <filename>char_1.out</filename> handles many other locales.  The
    regression test driver will automatically pick the best file to
    match against when checking for success and for computing failure
    differences.  (This means that the regression tests cannot detect
    whether the results are appropriate for the configured locale.
    The tests will simply pick the one result file that works best.)
   </para>

   <para>
    If for some reason the existing expected files do not cover some
    locale, you can add a new file.  The naming scheme is
    <literal><replaceable>testname</>_<replaceable>digit</>.out</>.
    The actual digit is not significant.  Remember that the regression
    test driver will consider all such files to be equally valid test
    results.  If the test results are platform-specific, the technique
    described in <xref linkend="regress-platform"> should be used
    instead.
   </para>
  </sect2>

  <sect2>
   <title>Date and time differences</title>

   <para>
    A few of the queries in the <filename>horology</filename> test
    will fail if you run the test on the day of a daylight-saving time
    changeover, or the day after one.  These queries expect that the
    intervals between midnight yesterday, midnight today and midnight
    tomorrow are exactly twenty-four hours, which is wrong if
    daylight-saving time went into or out of effect meanwhile.
   </para>
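   <para>
    If <filename>horology</filename> is the only test reported as
    failed on such a day, inspecting the differences by hand will
    usually show nothing beyond such interval-arithmetic
    discrepancies.  For example, from the top-level build directory:
<screen>
diff src/test/regress/expected/horology.out src/test/regress/results/horology.out
</screen>
    (Depending on your platform, the driver may have compared against
    a variant expected file, so
    <filename>src/test/regress/regression.diffs</filename> is the
    surer place to see exactly what differed.)
   </para>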