<!-- $PostgreSQL: pgsql/doc/src/sgml/regress.sgml,v 1.49 2005/10/18 21:43:33 tgl Exp $ -->

<chapter id="regress">
 <title id="regress-title">Regression Tests</title>

 <indexterm zone="regress">
  <primary>regression tests</primary>
 </indexterm>

 <indexterm zone="regress">
  <primary>test</primary>
 </indexterm>

 <para>
  The regression tests are a comprehensive set of tests for the SQL
  implementation in <productname>PostgreSQL</productname>.  They test
  standard SQL operations as well as the extended capabilities of
  <productname>PostgreSQL</productname>.
 </para>

 <sect1 id="regress-run">
  <title>Running the Tests</title>

  <para>
   The regression tests can be run against an already installed and
   running server, or using a temporary installation within the build
   tree.  Furthermore, there is a <quote>parallel</quote> and a
   <quote>sequential</quote> mode for running the tests.  The
   sequential method runs each test script in turn, whereas the
   parallel method starts up multiple server processes to run groups
   of tests in parallel.  Parallel testing gives confidence that
   interprocess communication and locking are working correctly.  For
   historical reasons, the sequential test is usually run against an
   existing installation and the parallel method against a temporary
   installation, but there are no technical reasons for this.
  </para>

  <para>
   To run the regression tests after building but before installation,
   type
<screen>
gmake check
</screen>
   in the top-level directory.  (Or you can change to
   <filename>src/test/regress</filename> and run the command there.)
   This will first build several auxiliary files, such as some sample
   user-defined trigger functions, and then run the test driver
   script.  At the end you should see something like
<screen>
<computeroutput>
======================
 All 98 tests passed.
======================
</computeroutput>
</screen>
   or otherwise a note about which tests failed.
   See <xref linkend="regress-evaluation"> below before assuming that a
   <quote>failure</> represents a serious problem.
  </para>

  <para>
   Because this test method runs a temporary server, it will not work
   when you are the root user (since the server will not start as
   root).  If you already did the build as root, you do not have to
   start all over.  Instead, make the regression test directory
   writable by some other user, log in as that user, and restart the
   tests.  For example
<screen>
<prompt>root# </prompt><userinput>chmod -R a+w src/test/regress</userinput>
<prompt>root# </prompt><userinput>chmod -R a+w contrib/spi</userinput>
<prompt>root# </prompt><userinput>su - joeuser</userinput>
<prompt>joeuser$ </prompt><userinput>cd <replaceable>top-level build directory</></userinput>
<prompt>joeuser$ </prompt><userinput>gmake check</userinput>
</screen>
   (The only possible <quote>security risk</quote> here is that other
   users might be able to alter the regression test results behind
   your back.  Use common sense when managing user permissions.)
  </para>

  <para>
   Alternatively, run the tests after installation.
  </para>

  <para>
   If you have configured <productname>PostgreSQL</productname> to
   install into a location where an older
   <productname>PostgreSQL</productname> installation already exists,
   and you perform <literal>gmake check</> before installing the new
   version, you may find that the tests fail because the new programs
   try to use the already-installed shared libraries.  (Typical
   symptoms are complaints about undefined symbols.)  If you wish to
   run the tests before overwriting the old installation, you'll need
   to build with <literal>configure --disable-rpath</>.  It is not
   recommended that you use this option for the final installation,
   however.
  </para>

  <para>
   The parallel regression test starts quite a few processes under
   your user ID.
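   (If your shell supports it, you can check the current per-user
   process limit with the <command>ulimit</command> builtin:
<screen>
ulimit -u
</screen>
   The number printed is the maximum number of processes your user may
   run at once; on some shells this option is unavailable.)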
   Presently, the maximum concurrency is twenty parallel test scripts,
   which means sixty processes: there's a server process, a
   <application>psql</>, and usually a shell parent process for the
   <application>psql</> for each test script.  So if your system
   enforces a per-user limit on the number of processes, make sure
   this limit is at least seventy-five or so, else you may get
   random-seeming failures in the parallel test.  If you are not in a
   position to raise the limit, you can cut down the degree of
   parallelism by setting the <literal>MAX_CONNECTIONS</> parameter.
   For example,
<screen>
gmake MAX_CONNECTIONS=10 check
</screen>
   runs no more than ten tests concurrently.
  </para>

  <para>
   On some systems, the default Bourne-compatible shell
   (<filename>/bin/sh</filename>) gets confused when it has to manage
   too many child processes in parallel.  This may cause the parallel
   test run to lock up or fail.  In such cases, specify a different
   Bourne-compatible shell on the command line, for example:
<screen>
gmake SHELL=/bin/ksh check
</screen>
   If no non-broken shell is available, you may be able to work around
   the problem by limiting the number of connections, as shown above.
  </para>

  <para>
   To run the tests after installation<![%standalone-ignore;[ (see
   <xref linkend="installation">)]]>, initialize a data area and start
   the server, <![%standalone-ignore;[as explained in <xref
   linkend="runtime">, ]]> then type
<screen>
gmake installcheck
</screen>
   or for a parallel test
<screen>
gmake installcheck-parallel
</screen>
   The tests will expect to contact the server at the local host and
   the default port number, unless directed otherwise by
   <envar>PGHOST</envar> and <envar>PGPORT</envar> environment
   variables.
  </para>

  <para>
   The source distribution also contains regression tests for the
   optional procedural languages and for some of the
   <filename>contrib</> modules.  At present, these tests can be used
   only against an already-installed server.
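   These tests likewise honor the <envar>PGHOST</envar> and
   <envar>PGPORT</envar> environment variables, so you can direct them
   at a server that does not listen at the default location, for
   example (the host name and port number shown are only
   placeholders):
<screen>
PGHOST=testhost PGPORT=5433 gmake installcheck
</screen>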
   To run the tests for all procedural languages that have been built
   and installed, change to the <filename>src/pl</> directory of the
   build tree and type
<screen>
gmake installcheck
</screen>
   You can also do this in any of the subdirectories of
   <filename>src/pl</> to run tests for just one procedural language.
   To run the tests for all <filename>contrib</> modules that have
   them, change to the <filename>contrib</> directory of the build
   tree and type
<screen>
gmake installcheck
</screen>
   The <filename>contrib</> modules must have been built and installed
   first.  You can also do this in a subdirectory of
   <filename>contrib</> to run the tests for just one module.
  </para>
 </sect1>

 <sect1 id="regress-evaluation">
  <title>Test Evaluation</title>

  <para>
   Some properly installed and fully functional
   <productname>PostgreSQL</productname> installations can
   <quote>fail</quote> some of these regression tests due to
   platform-specific artifacts such as varying floating-point
   representation and message wording.  The tests are currently
   evaluated using a simple <command>diff</command> comparison against
   the outputs generated on a reference system, so the results are
   sensitive to small system differences.  When a test is reported as
   <quote>failed</quote>, always examine the differences between
   expected and actual results; you may well find that the differences
   are not significant.  Nonetheless, we still strive to maintain
   accurate reference files across all supported platforms, so it can
   be expected that all tests pass.
  </para>

  <para>
   The actual outputs of the regression tests are in files in the
   <filename>src/test/regress/results</filename> directory.  The test
   script uses <command>diff</command> to compare each output file
   against the reference outputs stored in the
   <filename>src/test/regress/expected</filename> directory.  Any
   differences are saved for your inspection in
   <filename>src/test/regress/regression.diffs</filename>.  (Or you
   can run <command>diff</command> yourself, if you prefer.)
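   For instance, to inspect one test's output by hand you can compare
   its result file directly against the corresponding reference file;
   the test name <literal>float8</> here is just an example:
<screen>
diff src/test/regress/expected/float8.out src/test/regress/results/float8.out
</screen>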
  </para>

  <para>
   If for some reason a particular platform generates a
   <quote>failure</> for a given test, but inspection of the output
   convinces you that the result is valid, you can add a new
   comparison file to silence the failure report in future test runs.
   See <xref linkend="regress-variant"> for details.
  </para>

  <sect2>
   <title>Error message differences</title>

   <para>
    Some of the regression tests involve intentional invalid input
    values.  Error messages can come from either the
    <productname>PostgreSQL</productname> code or from the host
    platform system routines.  In the latter case, the messages may
    vary between platforms, but should reflect similar information.
    These differences in messages will result in a <quote>failed</quote>
    regression test that can be validated by inspection.
   </para>
  </sect2>

  <sect2>
   <title>Locale differences</title>

   <para>
    If you run the tests against an already-installed server that was
    initialized with a collation-order locale other than C, then there
    may be differences due to sort order and follow-up failures.  The
    regression test suite is set up to handle this problem by
    providing alternative result files that together are known to
    handle a large number of locales.
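    One way to avoid locale-related differences entirely is to test
    against a server whose data area was initialized in the
    <literal>C</> locale, for example (the data directory path is only
    an illustration):
<screen>
initdb -D /usr/local/pgsql/data --no-locale
</screen>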