<!--$Header: /cvsroot/pgsql/doc/src/sgml/perform.sgml,v 1.37 2003/11/01 01:56:29 petere Exp $-->

 <chapter id="performance-tips">
  <title>Performance Tips</title>

  <para>
   Query performance can be affected by many things.  Some of these can
   be manipulated by the user, while others are fundamental to the
   underlying design of the system.  This chapter provides some hints
   about understanding and tuning <productname>PostgreSQL</productname>
   performance.
  </para>

  <sect1 id="using-explain">
   <title>Using <command>EXPLAIN</command></title>

   <indexterm zone="using-explain">
    <primary>EXPLAIN</primary>
   </indexterm>

   <indexterm zone="using-explain">
    <primary>query plan</primary>
   </indexterm>

   <para>
    <productname>PostgreSQL</productname> devises a <firstterm>query
    plan</firstterm> for each query it is given.  Choosing the right
    plan to match the query structure and the properties of the data
    is absolutely critical for good performance.  You can use the
    <command>EXPLAIN</command> command to see what query plan the system
    creates for any query.  Plan-reading is an art that deserves an
    extensive tutorial, which this is not; but here is some basic
    information.
   </para>

   <para>
    The numbers that are currently quoted by <command>EXPLAIN</command>
    are:

    <itemizedlist>
     <listitem>
      <para>
       Estimated start-up cost (time expended before the output scan can
       start, e.g., time to do the sorting in a sort node)
      </para>
     </listitem>

     <listitem>
      <para>
       Estimated total cost (if all rows were to be retrieved, which they
       may not be: a query with a <literal>LIMIT</> clause will stop short
       of paying the total cost, for example)
      </para>
     </listitem>

     <listitem>
      <para>
       Estimated number of rows output by this plan node (again, only if
       executed to completion)
      </para>
     </listitem>

     <listitem>
      <para>
       Estimated average width (in bytes) of rows output by this plan node
      </para>
     </listitem>
    </itemizedlist>
   </para>

   <para>
    The costs are measured in units of disk page fetches.  (CPU effort
    estimates are converted into disk-page units using some fairly
    arbitrary fudge factors.  If you want to experiment with these
    factors, see the list of run-time configuration parameters in
    <xref linkend="runtime-config-resource">.)
   </para>

   <para>
    It's important to note that the cost of an upper-level node includes
    the cost of all its child nodes.  It's also important to realize that
    the cost only reflects things that the planner/optimizer cares about.
    In particular, the cost does not consider the time spent transmitting
    result rows to the frontend, which could be a pretty dominant factor
    in the true elapsed time; but the planner ignores it because it cannot
    change it by altering the plan.  (Every correct plan will output the
    same row set, we trust.)
   </para>

   <para>
    Rows output is a little tricky because it is <emphasis>not</emphasis>
    the number of rows processed or scanned by the query; it is usually
    less, reflecting the estimated selectivity of any
    <literal>WHERE</>-clause conditions that are being applied at this
    node.  Ideally the top-level rows estimate will approximate the number
    of rows actually returned, updated, or deleted by the query.
   </para>

   <para>
    Here are some examples (using the regression test database after a
    <literal>VACUUM ANALYZE</>, and 7.3 development sources):

<programlisting>
EXPLAIN SELECT * FROM tenk1;

                         QUERY PLAN
-------------------------------------------------------------
 Seq Scan on tenk1  (cost=0.00..333.00 rows=10000 width=148)
</programlisting>
   </para>

   <para>
    This is about as straightforward as it gets.
   </para>
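   <para>
    The fudge factors mentioned above are ordinary run-time parameters,
    so you can check what your own installation is using with
    <command>SHOW</command>; for example (the parameter names below are
    the standard planner cost settings, but the values reported depend
    on your configuration, so they may not match the defaults assumed in
    the examples that follow):

<programlisting>
SHOW cpu_tuple_cost;        -- CPU cost charged per row processed
SHOW cpu_index_tuple_cost;  -- CPU cost charged per index entry examined
SHOW cpu_operator_cost;     -- CPU cost charged per operator evaluated
                            -- (e.g., in a WHERE clause)
SHOW random_page_cost;      -- cost of a nonsequential page fetch,
                            -- relative to 1.0 for a sequential fetch
</programlisting>
   </para>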
   <para>
    If you do

<programlisting>
SELECT * FROM pg_class WHERE relname = 'tenk1';
</programlisting>

    you will find out that <classname>tenk1</classname> has 233 disk
    pages and 10000 rows.  So the cost is estimated at 233 page reads,
    defined as costing 1.0 apiece, plus 10000 *
    <varname>cpu_tuple_cost</varname>, which is currently 0.01 (try
    <command>SHOW cpu_tuple_cost</command>).
   </para>

   <para>
    Now let's modify the query to add a <literal>WHERE</> condition:

<programlisting>
EXPLAIN SELECT * FROM tenk1 WHERE unique1 &lt; 1000;

                         QUERY PLAN
------------------------------------------------------------
 Seq Scan on tenk1  (cost=0.00..358.00 rows=1033 width=148)
   Filter: (unique1 &lt; 1000)
</programlisting>

    The estimate of output rows has gone down because of the
    <literal>WHERE</> clause.  However, the scan will still have to visit
    all 10000 rows, so the cost hasn't decreased; in fact it has gone up
    a bit to reflect the extra CPU time spent checking the
    <literal>WHERE</> condition.
   </para>

   <para>
    The actual number of rows this query would select is 1000, but the
    estimate is only approximate.  If you try to duplicate this
    experiment, you will probably get a slightly different estimate;
    moreover, it will change after each <command>ANALYZE</command>
    command, because the statistics produced by <command>ANALYZE</command>
    are taken from a randomized sample of the table.
   </para>

   <para>
    Modify the query to restrict the condition even more:

<programlisting>
EXPLAIN SELECT * FROM tenk1 WHERE unique1 &lt; 50;

                                   QUERY PLAN
-------------------------------------------------------------------------------
 Index Scan using tenk1_unique1 on tenk1  (cost=0.00..179.33 rows=49 width=148)
   Index Cond: (unique1 &lt; 50)
</programlisting>

    and you will see that if we make the <literal>WHERE</> condition
    selective enough, the planner will eventually decide that an index
    scan is cheaper than a sequential scan.  This plan will only have to
    visit 50 rows because of the index, so it wins despite the fact that
    each individual fetch is more expensive than reading a whole disk
    page sequentially.
   </para>

   <para>
    Add another condition to the <literal>WHERE</> clause:

<programlisting>
EXPLAIN SELECT * FROM tenk1 WHERE unique1 &lt; 50 AND stringu1 = 'xxx';

                                   QUERY PLAN
-------------------------------------------------------------------------------
 Index Scan using tenk1_unique1 on tenk1  (cost=0.00..179.45 rows=1 width=148)
   Index Cond: (unique1 &lt; 50)
   Filter: (stringu1 = 'xxx'::name)
</programlisting>

    The added condition <literal>stringu1 = 'xxx'</literal> reduces the
    output-rows estimate, but not the cost, because we still have to visit
    the same set of rows.  Notice that the <literal>stringu1</> clause
    cannot be applied as an index condition (since this index is only on
    the <literal>unique1</> column).  Instead it is applied as a filter on
    the rows retrieved by the index.  Thus the cost has actually gone up
    a little bit to reflect this extra checking.
   </para>
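   <para>
    As a rough sanity check on the arithmetic, the sequential-scan
    estimates above can be reproduced by hand.  (This sketch assumes the
    default <varname>cpu_operator_cost</varname> of 0.0025, which is what
    evaluating the <literal>WHERE</> condition on each row is charged at;
    if your settings differ, so will the numbers.)

<programlisting>
  233 * 1.0        -- 233 pages read sequentially at 1.0 apiece
+ 10000 * 0.01     -- cpu_tuple_cost for each of the 10000 rows
= 333.00           -- the plain sequential scan

  333.00
+ 10000 * 0.0025   -- cpu_operator_cost for checking unique1 &lt; 1000
                   -- on every row
= 358.00           -- the sequential scan with the filter
</programlisting>

    Similarly, the small increase from 179.33 to 179.45 in the last plan
    is, to within rounding, the same per-row operator charge applied to
    the 49 rows the index scan is expected to return
    (49 * 0.0025 &asymp; 0.12).
   </para>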
   <para>
    Let's try joining two tables, using the columns we have been
    discussing:

<programlisting>
EXPLAIN SELECT * FROM tenk1 t1, tenk2 t2
    WHERE t1.unique1 &lt; 50 AND t1.unique2 = t2.unique2;

                               QUERY PLAN
----------------------------------------------------------------------------
 Nested Loop  (cost=0.00..327.02 rows=49 width=296)
   -&gt;  Index Scan using tenk1_unique1 on tenk1 t1
                                      (cost=0.00..179.33 rows=49 width=148)
         Index Cond: (unique1 &lt; 50)
   -&gt;  Index Scan using tenk2_unique2 on tenk2 t2
                                      (cost=0.00..3.01 rows=1 width=148)
         Index Cond: ("outer".unique2 = t2.unique2)
</programlisting>
   </para>

   <para>
    In this nested-loop join, the outer scan is the same index scan we had
    in the example before last, and so its cost and row count are the same
    because we are applying the <literal>WHERE</> clause
    <literal>unique1 &lt; 50</literal> at that node.  The
    <literal>t1.unique2 = t2.unique2</literal> clause is not relevant yet,
    so it doesn't affect the row count of the outer scan.  For the inner
    scan, the <literal>unique2</> value of the current outer-scan row is
    plugged into the inner index scan to produce an index condition like
    <literal>t2.unique2 = <replaceable>constant</replaceable></literal>.
    So we get the same inner-scan plan and costs that we'd get from, say,
    <literal>EXPLAIN SELECT * FROM tenk2 WHERE unique2 = 42</literal>.
    The costs of the loop node are then set on the basis of the cost of
    the outer scan, plus one repetition of the inner scan for each outer
    row (49 * 3.01, here), plus a little CPU time for join processing.
   </para>

   <para>
    In this example the join's output row count is the same as the product
    of the two scans' row counts, but that's not true in general, because
    in general you can have <literal>WHERE</> clauses that mention both
    tables and so can only be applied at the join point, not to either
    input scan.  For example, if we added
    <literal>WHERE ... AND t1.hundred &lt; t2.hundred</literal>, that
    would decrease the output row count of the join node, but not change
    either input scan.
   </para>

   <para>
    One way to look at variant plans is to force the planner to disregard
    whatever strategy it thought was the winner, using the enable/disable
    flags for each plan type.  (This is a crude tool, but useful.  See
    also <xref linkend="explicit-joins">.)
   </para>
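   <para>
    For instance, <varname>enable_nestloop</varname> is the flag that
    controls nested-loop joins, so you could discourage the plan chosen
    above and re-run <command>EXPLAIN</command> to see the planner's next
    choice, remembering to turn the flag back on afterwards.  (The
    alternative plan and its costs will depend on your installation, so
    no output is reproduced here.)

<programlisting>
SET enable_nestloop = off;    -- discourage nested-loop joins

EXPLAIN SELECT * FROM tenk1 t1, tenk2 t2
    WHERE t1.unique1 &lt; 50 AND t1.unique2 = t2.unique2;

SET enable_nestloop = on;     -- restore the default behavior
</programlisting>
   </para>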