
readme (PostgreSQL 7.4.6 for Linux)
Regression Tests

The regression tests are a comprehensive set of tests for the SQL implementation in PostgreSQL. They test standard SQL operations as well as the extended capabilities of PostgreSQL. From PostgreSQL 6.1 onward, the regression tests are current for every official release.

-------------------------------------------------------------------------------

Running the Tests

The regression tests can be run against an already installed and running server, or using a temporary installation within the build tree. Furthermore, there is a "parallel" and a "sequential" mode for running the tests. The sequential method runs each test script in turn, whereas the parallel method starts up multiple server processes to run groups of tests in parallel. Parallel testing gives confidence that interprocess communication and locking are working correctly. For historical reasons, the sequential test is usually run against an existing installation and the parallel method against a temporary installation, but there are no technical reasons for this.

To run the regression tests after building but before installation, type

  gmake check

in the top-level directory. (Or you can change to "src/test/regress" and run the command there.) This will first build several auxiliary files, such as some sample user-defined trigger functions, and then run the test driver script. At the end you should see something like

  ======================
   All 93 tests passed.
  ======================

or otherwise a note about which tests failed. See the Section called Test Evaluation below for more.

Because this test method runs a temporary server, it will not work when you are the root user (since the server will not start as root). If you already did the build as root, you do not have to start all over. Instead, make the regression test directory writable by some other user, log in as that user, and restart the tests.
For example

  root# chmod -R a+w src/test/regress
  root# chmod -R a+w contrib/spi
  root# su - joeuser
  joeuser$ cd top-level build directory
  joeuser$ gmake check

(The only possible "security risk" here is that other users might be able to alter the regression test results behind your back. Use common sense when managing user permissions.)

Alternatively, run the tests after installation.

The parallel regression test starts quite a few processes under your user ID. Presently, the maximum concurrency is twenty parallel test scripts, which means sixty processes: there's a server process, a psql, and usually a shell parent process for the psql for each test script. So if your system enforces a per-user limit on the number of processes, make sure this limit is at least seventy-five or so, else you may get random-seeming failures in the parallel test. If you are not in a position to raise the limit, you can cut down the degree of parallelism by setting the MAX_CONNECTIONS parameter. For example,

  gmake MAX_CONNECTIONS=10 check

runs no more than ten tests concurrently.

On some systems, the default Bourne-compatible shell ("/bin/sh") gets confused when it has to manage too many child processes in parallel. This may cause the parallel test run to lock up or fail.
In such cases, specify a different Bourne-compatible shell on the command line, for example:

  gmake SHELL=/bin/ksh check

If no non-broken shell is available, you may be able to work around the problem by limiting the number of connections, as shown above.

To run the tests after installation, initialize a data area and start the server, then type

  gmake installcheck

The tests will expect to contact the server at the local host and the default port number, unless directed otherwise by the PGHOST and PGPORT environment variables.

-------------------------------------------------------------------------------

Test Evaluation

Some properly installed and fully functional PostgreSQL installations can "fail" some of these regression tests due to platform-specific artifacts such as varying floating-point representation and time zone support. The tests are currently evaluated using a simple "diff" comparison against the outputs generated on a reference system, so the results are sensitive to small system differences. When a test is reported as "failed", always examine the differences between expected and actual results; you may well find that the differences are not significant. Nonetheless, we still strive to maintain accurate reference files across all supported platforms, so it can be expected that all tests pass.

The actual outputs of the regression tests are in files in the "src/test/regress/results" directory. The test script uses "diff" to compare each output file against the reference outputs stored in the "src/test/regress/expected" directory. Any differences are saved for your inspection in "src/test/regress/regression.diffs". (Or you can run "diff" yourself, if you prefer.)

-------------------------------------------------------------------------------

Error message differences

Some of the regression tests involve intentional invalid input values. Error messages can come from either the PostgreSQL code or from the host platform system routines.
In the latter case, the messages may vary between platforms, but should reflect similar information. These differences in messages will result in a "failed" regression test that can be validated by inspection.

-------------------------------------------------------------------------------

Locale differences

If you run the tests against an already-installed server that was initialized with a collation-order locale other than C, then there may be differences due to sort order and follow-up failures. The regression test suite is set up to handle this problem by providing alternative result files that together are known to handle a large number of locales. For example, for the "char" test, the expected file "char.out" handles the C and POSIX locales, and the file "char_1.out" handles many other locales. The regression test driver will automatically pick the best file to match against when checking for success and for computing failure differences. (This means that the regression tests cannot detect whether the results are appropriate for the configured locale. The tests will simply pick the one result file that works best.)

If for some reason the existing expected files do not cover some locale, you can add a new file. The naming scheme is "testname_digit.out". The actual digit is not significant. Remember that the regression test driver will consider all such files to be equally valid test results. If the test results are platform-specific, the technique described in the Section called Platform-specific comparison files should be used instead.

-------------------------------------------------------------------------------

Date and time differences

A few of the queries in the "horology" test will fail if you run the test on the day of a daylight-saving time changeover, or the day after one.
These queries expect that the intervals between midnight yesterday, midnight today and midnight tomorrow are exactly twenty-four hours --- which is wrong if daylight-saving time went into or out of effect meanwhile.

     Note: Because USA daylight-saving time rules are used, this problem
     always occurs on the first Sunday of April, the last Sunday of
     October, and their following Mondays, regardless of when
     daylight-saving time is in effect where you live. Also note that
     the problem appears or disappears at midnight Pacific time (UTC-7
     or UTC-8), not midnight your local time. Thus the failure may
     appear late on Saturday or persist through much of Tuesday,
     depending on where you live.

Most of the date and time results are dependent on the time zone environment. The reference files are generated for time zone PST8PDT (Berkeley, California), and there will be apparent failures if the tests are not run with that time zone setting. The regression test driver sets the environment variable PGTZ to PST8PDT, which normally ensures proper results. However, your operating system must provide support for the PST8PDT time zone, or the time zone-dependent tests will fail. To verify that your machine does have this support, type the following:

  env TZ=PST8PDT date

The command above should have returned the current system time in the PST8PDT time zone. If the PST8PDT time zone is not available, then your system may have returned the time in UTC. If the PST8PDT time zone is missing, you can set the time zone rules explicitly:

  PGTZ='PST8PDT7,M04.01.0,M10.05.03'; export PGTZ

There appear to be some systems that do not accept the recommended syntax for explicitly setting the local time zone rules; you may need to use a different PGTZ setting on such machines.

Some systems using older time-zone libraries fail to apply daylight-saving corrections to dates before 1970, causing pre-1970 PDT times to be displayed in PST instead.
This will result in localized differences in the test results.

-------------------------------------------------------------------------------

Floating-point differences

Some of the tests involve computing 64-bit floating-point numbers (double precision) from table columns. Differences in results involving mathematical functions of double precision columns have been observed. The "float8" and "geometry" tests are particularly prone to small differences across platforms, or even with different compiler optimization options. Human eyeball comparison is needed to determine the real significance of these differences, which are usually 10 places to the right of the decimal point.

Some systems display minus zero as -0, while others just show 0.

Some systems signal errors from pow() and exp() differently from the mechanism expected by the current PostgreSQL code.

-------------------------------------------------------------------------------

Row ordering differences

You might see differences in which the same rows are output in a different order than what appears in the expected file. In most cases this is not, strictly speaking, a bug. Most of the regression test scripts are not so pedantic as to use an ORDER BY for every single SELECT, and so their result row orderings are not well-defined according to the letter of the SQL specification. In practice, since we are looking at the same queries being executed on the same data by the same software, we usually get the same result ordering on all platforms, and so the lack of ORDER BY isn't a problem. Some queries do exhibit cross-platform ordering differences, however.
(Ordering differences can also be triggered by non-C locale settings.)

Therefore, if you see an ordering difference, it's not something to worry about, unless the query does have an ORDER BY that your result is violating. But please report it anyway, so that we can add an ORDER BY to that particular query and thereby eliminate the bogus "failure" in future releases.

You might wonder why we don't order all the regression test queries explicitly to get rid of this issue once and for all. The reason is that that would make the regression tests less useful, not more, since they'd tend to exercise query plan types that produce ordered results to the exclusion of those that don't.

-------------------------------------------------------------------------------

The "random" test

There is at least one case in the "random" test script that is intended to produce random results. This causes "random" to fail the regression test once in a while (perhaps once in every five to ten trials). Typing

  diff results/random.out expected/random.out

should produce only one or a few lines of differences. You need not worry unless the random test always fails in repeated attempts. (On the other hand, if the random test is *never* reported to fail even in many trials of the regression tests, you probably *should* worry.)

-------------------------------------------------------------------------------

Platform-specific comparison files

Since some of the tests inherently produce platform-specific results, we have provided a way to supply platform-specific result comparison files. Frequently, the same variation applies to multiple platforms; rather than supplying a separate comparison file for every platform, there is a mapping file that defines which comparison file to use.
So, to eliminate bogus test "failures" for a particular platform, you must choose or make a variant result file, and then add a line to the mapping file, which is "src/test/regress/resultmap".

Each line in the mapping file is of the form

  testname/platformpattern=comparisonfilename

The test name is just the name of the particular regression test module. The platform pattern is a pattern in the style of the Unix tool "expr" (that is, a regular expression with an implicit ^ anchor at the start). It is matched against the platform name as printed by "config.guess" followed by ":gcc" or ":cc", depending on whether you use the GNU compiler or the system's native compiler (on systems where there is a difference). The comparison file name is the name of the substitute result comparison file.

For example: some systems using older time zone libraries fail to apply daylight-saving corrections to dates before 1970, causing pre-1970 PDT times to be displayed in PST instead. This causes a few differences in the "horology" regression test. Therefore, we provide a variant comparison file, "horology-no-DST-before-1970.out", which includes the results to be expected on these systems. To silence the bogus "failure" message on HPUX platforms, "resultmap" includes

  horology/.*-hpux=horology-no-DST-before-1970

which will trigger on any machine for which the output of "config.guess" includes "-hpux". Other lines in "resultmap" select the variant comparison file for other platforms where it's appropriate.
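As an illustration of the mapping-file format described above, the following sketch splits a resultmap line into its three fields using plain POSIX shell parameter expansion. The line itself is the HPUX example from this README; the variable names are our own, and this is only an informal illustration, not code taken from the test driver:

```shell
# Split a resultmap line "testname/platformpattern=comparisonfilename"
# into its three parts with POSIX shell parameter expansion.
line='horology/.*-hpux=horology-no-DST-before-1970'
testname=${line%%/*}    # text before the first "/" -> test module name
rest=${line#*/}         # text after the first "/"
pattern=${rest%%=*}     # text before the "=" -> expr-style platform regex
compfile=${rest#*=}     # text after the "=" -> variant comparison file
echo "$testname | $pattern | $compfile"
# prints: horology | .*-hpux | horology-no-DST-before-1970
```

The ".*-hpux" field is what gets matched (expr-style, anchored at the start) against the "config.guess" output plus the ":gcc" or ":cc" suffix.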
