<!--$PostgreSQL: pgsql/doc/src/sgml/backup.sgml,v 2.75.2.1 2006/02/24 14:03:11 momjian Exp $-->

<chapter id="backup">
 <title>Backup and Restore</title>

 <indexterm zone="backup"><primary>backup</></>

 <para>
  As with everything that contains valuable data, <productname>PostgreSQL</>
  databases should be backed up regularly. While the procedure is
  essentially simple, it is important to have a basic understanding of
  the underlying techniques and assumptions.
 </para>

 <para>
  There are three fundamentally different approaches to backing up
  <productname>PostgreSQL</> data:
  <itemizedlist>
   <listitem><para><acronym>SQL</> dump</para></listitem>
   <listitem><para>File system level backup</para></listitem>
   <listitem><para>On-line backup</para></listitem>
  </itemizedlist>
  Each has its own strengths and weaknesses.
 </para>

 <sect1 id="backup-dump">
  <title><acronym>SQL</> Dump</title>

  <para>
   The idea behind the SQL-dump method is to generate a text file with SQL
   commands that, when fed back to the server, will recreate the
   database in the same state as it was at the time of the dump.
   <productname>PostgreSQL</> provides the utility program
   <xref linkend="app-pgdump"> for this purpose. The basic usage of this
   command is:
<synopsis>
pg_dump <replaceable class="parameter">dbname</replaceable> &gt; <replaceable class="parameter">outfile</replaceable>
</synopsis>
   As you see, <application>pg_dump</> writes its results to the
   standard output. We will see below how this can be useful.
  </para>

  <para>
   <application>pg_dump</> is a regular <productname>PostgreSQL</>
   client application (albeit a particularly clever one). This means
   that you can perform this backup procedure from any remote host that has
   access to the database. But remember that <application>pg_dump</>
   does not operate with special permissions. In particular, it must
   have read access to all tables that you want to back up, so in
   practice you almost always have to run it as a database superuser.
  </para>

  <para>
   To specify which database server <application>pg_dump</> should
   contact, use the command line options <option>-h
   <replaceable>host</></> and <option>-p <replaceable>port</></>. The
   default host is the local host or whatever your
   <envar>PGHOST</envar> environment variable specifies. Similarly,
   the default port is indicated by the <envar>PGPORT</envar>
   environment variable or, failing that, by the compiled-in default.
   (Conveniently, the server will normally have the same compiled-in
   default.)
  </para>

  <para>
   Like any other <productname>PostgreSQL</> client application,
   <application>pg_dump</> will by default connect with the database
   user name that is equal to the current operating system user name.
   To override this, either specify the <option>-U</option> option or
   set the environment variable <envar>PGUSER</envar>. Remember that
   <application>pg_dump</> connections are subject to the normal
   client authentication mechanisms (which are described in <xref
   linkend="client-authentication">).
  </para>

  <para>
   Dumps created by <application>pg_dump</> are internally consistent,
   that is, updates made to the database while <application>pg_dump</> is
   running will not be in the dump. <application>pg_dump</> does not
   block other operations on the database while it is working.
   (Exceptions are those operations that need to operate with an
   exclusive lock, such as <command>VACUUM FULL</command>.)
  </para>
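  <para>
   To illustrate, the connection and user options described above can
   be combined in a single invocation. The following is only a sketch;
   every value shown is a placeholder:
<programlisting>
# illustrative sketch only; host, port, user, and file names are placeholders
pg_dump -h <replaceable>host</> -p <replaceable>port</> -U <replaceable>username</> <replaceable>dbname</> &gt; <replaceable>outfile</>
</programlisting>
  </para>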
  <important>
   <para>
    When your database schema relies on OIDs (for instance as foreign
    keys) you must instruct <application>pg_dump</> to dump the OIDs
    as well. To do this, use the <option>-o</option> command line
    option.
   </para>
  </important>

  <sect2 id="backup-dump-restore">
   <title>Restoring the dump</title>

   <para>
    The text files created by <application>pg_dump</> are intended to
    be read in by the <application>psql</application> program. The
    general command form to restore a dump is
<synopsis>
psql <replaceable class="parameter">dbname</replaceable> &lt; <replaceable class="parameter">infile</replaceable>
</synopsis>
    where <replaceable class="parameter">infile</replaceable> is what
    you used as <replaceable class="parameter">outfile</replaceable>
    for the <application>pg_dump</> command. The database <replaceable
    class="parameter">dbname</replaceable> will not be created by this
    command; you must create it yourself from <literal>template0</>
    before executing <application>psql</> (e.g., with
    <literal>createdb -T template0
    <replaceable class="parameter">dbname</></literal>).
    <application>psql</> supports options similar to <application>pg_dump</>'s
    for controlling the database server location and the user name. See
    <xref linkend="app-psql">'s reference page for more information.
   </para>

   <para>
    Not only must the target database already exist before starting to
    run the restore, but so must all the users who own objects in the
    dumped database or were granted permissions on the objects. If they
    do not, then the restore will fail to recreate the objects with the
    original ownership and/or permissions. (Sometimes this is what you
    want, but usually it is not.)
   </para>

   <para>
    Once restored, it is wise to run <xref linkend="sql-analyze"
    endterm="sql-analyze-title"> on each database so the optimizer has
    useful statistics. An easy way to do this is to run
    <command>vacuumdb -a -z</> to
    <command>VACUUM ANALYZE</> all databases; this is equivalent to
    running <command>VACUUM ANALYZE</command> manually.
   </para>

   <para>
    The ability of <application>pg_dump</> and <application>psql</> to
    write to or read from pipes makes it possible to dump a database
    directly from one server to another; for example:
<programlisting>
pg_dump -h <replaceable>host1</> <replaceable>dbname</> | psql -h <replaceable>host2</> <replaceable>dbname</>
</programlisting>
   </para>

   <important>
    <para>
     The dumps produced by <application>pg_dump</> are relative to
     <literal>template0</>. This means that any languages, procedures,
     etc. added to <literal>template1</> will also be dumped by
     <application>pg_dump</>. As a result, when restoring, if you are
     using a customized <literal>template1</>, you must create the
     empty database from <literal>template0</>, as in the example
     above.
    </para>
   </important>

   <para>
    For advice on how to load large amounts of data into
    <productname>PostgreSQL</productname> efficiently, refer to <xref
    linkend="populate">.
   </para>
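   <para>
    Putting the steps above together, a complete restore might look
    like the following sketch (all names are placeholders):
<programlisting>
# sketch only; dbname and infile are placeholders
createdb -T template0 <replaceable>dbname</>
psql <replaceable>dbname</> &lt; <replaceable>infile</>
vacuumdb -z <replaceable>dbname</>
</programlisting>
   </para>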
  </sect2>

  <sect2 id="backup-dump-all">
   <title>Using <application>pg_dumpall</></title>

   <para>
    The above mechanism is cumbersome and inappropriate when backing
    up an entire database cluster. For this reason the <xref
    linkend="app-pg-dumpall"> program is provided.
    <application>pg_dumpall</> backs up each database in a given
    cluster, and also preserves cluster-wide data such as users and
    groups. The basic usage of this command is:
<synopsis>
pg_dumpall &gt; <replaceable>outfile</>
</synopsis>
    The resulting dump can be restored with <application>psql</>:
<synopsis>
psql -f <replaceable class="parameter">infile</replaceable> postgres
</synopsis>
    (Actually, you can specify any existing database name to start from,
    but if you are reloading in an empty cluster then <literal>postgres</>
    should generally be used.) It is always necessary to have
    database superuser access when restoring a <application>pg_dumpall</>
    dump, as that is required to restore the user and group information.
   </para>
  </sect2>

  <sect2 id="backup-dump-large">
   <title>Handling large databases</title>

   <para>
    Since <productname>PostgreSQL</productname> allows tables larger
    than the maximum file size on your system, it can be problematic
    to dump such a table to a file, since the resulting file will likely
    be larger than the maximum size allowed by your system. Since
    <application>pg_dump</> can write to the standard output, you can
    just use standard Unix tools to work around this possible problem.
   </para>

   <formalpara>
    <title>Use compressed dumps.</title>
    <para>
     You can use your favorite compression program, for example
     <application>gzip</application>:
<programlisting>
pg_dump <replaceable class="parameter">dbname</replaceable> | gzip &gt; <replaceable class="parameter">filename</replaceable>.gz
</programlisting>
     Reload with:
<programlisting>
createdb <replaceable class="parameter">dbname</replaceable>
gunzip -c <replaceable class="parameter">filename</replaceable>.gz | psql <replaceable class="parameter">dbname</replaceable>
</programlisting>
     or:
<programlisting>
cat <replaceable class="parameter">filename</replaceable>.gz | gunzip | psql <replaceable class="parameter">dbname</replaceable>
</programlisting>
    </para>
   </formalpara>

   <formalpara>
    <title>Use <command>split</>.</title>
    <para>
     The <command>split</command> command
     allows you to split the output into pieces that are
     acceptable in size to the underlying file system. For example, to
     make chunks of 1 megabyte:
<programlisting>
pg_dump <replaceable class="parameter">dbname</replaceable> | split -b 1m - <replaceable class="parameter">filename</replaceable>
</programlisting>
     Reload with:
<programlisting>
createdb <replaceable class="parameter">dbname</replaceable>
cat <replaceable class="parameter">filename</replaceable>* | psql <replaceable class="parameter">dbname</replaceable>
</programlisting>
    </para>
   </formalpara>

   <formalpara>
    <title>Use the custom dump format.</title>
    <para>
     If <productname>PostgreSQL</productname> was built on a system with the
     <application>zlib</> compression library installed, the custom dump
     format will compress data as it writes it to the output file. This will
     produce dump file sizes similar to using <command>gzip</command>, but it
     has the added advantage that tables can be restored selectively. The
     following command dumps a database using the custom dump format:
<programlisting>
pg_dump -Fc <replaceable class="parameter">dbname</replaceable> &gt; <replaceable class="parameter">filename</replaceable>
</programlisting>
     A custom-format dump is not a script for <application>psql</>, but
     instead must be restored with <application>pg_restore</>.
     See the <xref linkend="app-pgdump"> and <xref
     linkend="app-pgrestore"> reference pages for details.
    </para>
   </formalpara>
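   <para>
    For instance, a single table can be restored from a custom-format
    dump with <application>pg_restore</>'s <option>-t</> option. The
    following is only a sketch; the database, table, and file names
    are placeholders:
<programlisting>
# sketch only; dbname, tablename, and filename are placeholders
pg_restore -d <replaceable>dbname</> -t <replaceable>tablename</> <replaceable>filename</>
</programlisting>
   </para>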
  </sect2>
 </sect1>

 <sect1 id="backup-file">
  <title>File system level backup</title>

  <para>
   An alternative backup strategy is to directly copy the files that
   <productname>PostgreSQL</> uses to store the data in the database. In
   <xref linkend="creating-cluster"> it is explained where these files
   are located, but you have probably found them already if you are
   interested in this method. You can use whatever method you prefer
   for doing usual file system backups; for example:
<programlisting>
tar -cf backup.tar /usr/local/pgsql/data
</programlisting>
  </para>

  <para>
   There are two restrictions, however, which make this method
   impractical, or at least inferior to the <application>pg_dump</>
   method:

   <orderedlist>
    <listitem>
     <para>
      The database server <emphasis>must</> be shut down in order to
      get a usable backup. Half-way measures such as disallowing all
      connections will <emphasis>not</emphasis> work
      (mainly because <command>tar</command> and similar tools do not take an
      atomic snapshot of the state of the file system at a point in
      time). Information about stopping the server can be found in
      <xref linkend="postmaster-shutdown">. Needless to say, you
      also need to shut down the server before restoring the data.
     </para>
    </listitem>

    <listitem>
     <para>
      If you have dug into the details of the file system layout of the
      database, you may be tempted to try to back up or restore only certain
      individual tables or databases from their respective files or
      directories. This will <emphasis>not</> work because the
      information contained in these files is only half the
      truth. The other half is in the commit log files
      <filename>pg_clog/*</filename>, which contain the commit status of
      all transactions. A table file is only usable with this
      information. Of course it is also impossible to restore only a
      table and the associated <filename>pg_clog</filename> data
      because that would render all other tables in the database
      cluster useless. So file system backups only work for complete
      restoration of an entire database cluster.
     </para>
    </listitem>
   </orderedlist>
  </para>

  <para>
   An alternative file-system backup approach is to make a
   <quote>consistent snapshot</quote> of the data directory, if the
   file system supports that functionality (and you are willing to
   trust that it is implemented correctly). The typical procedure is
