/*

   Derby - Class org.apache.derby.iapi.db.OnlineCompress

   Copyright 2005 The Apache Software Foundation or its licensors, as applicable.

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.

 */

package org.apache.derby.iapi.db;

import org.apache.derby.iapi.error.StandardException;
import org.apache.derby.iapi.error.PublicAPI;
import org.apache.derby.iapi.sql.dictionary.DataDictionaryContext;
import org.apache.derby.iapi.sql.dictionary.DataDictionary;
import org.apache.derby.iapi.sql.dictionary.SchemaDescriptor;
import org.apache.derby.iapi.sql.dictionary.TableDescriptor;
import org.apache.derby.iapi.sql.dictionary.ColumnDescriptor;
import org.apache.derby.iapi.sql.dictionary.ColumnDescriptorList;
import org.apache.derby.iapi.sql.dictionary.ConstraintDescriptor;
import org.apache.derby.iapi.sql.dictionary.ConstraintDescriptorList;
import org.apache.derby.iapi.sql.dictionary.ConglomerateDescriptor;
import org.apache.derby.iapi.sql.depend.DependencyManager;
import org.apache.derby.iapi.sql.execute.ExecRow;
import org.apache.derby.iapi.sql.execute.ExecutionContext;
import org.apache.derby.iapi.types.DataValueDescriptor;
import org.apache.derby.iapi.types.DataValueFactory;
import org.apache.derby.iapi.sql.conn.LanguageConnectionContext;
import org.apache.derby.iapi.sql.conn.ConnectionUtil;
import org.apache.derby.iapi.store.access.TransactionController;
import org.apache.derby.iapi.types.RowLocation;
import org.apache.derby.iapi.store.access.ScanController;
import org.apache.derby.iapi.store.access.ConglomerateController;
import org.apache.derby.iapi.store.access.GroupFetchScanController;
import org.apache.derby.iapi.store.access.RowUtil;
import org.apache.derby.iapi.store.access.Qualifier;
import org.apache.derby.iapi.services.sanity.SanityManager;
import org.apache.derby.iapi.reference.SQLState;
import org.apache.derby.iapi.services.io.FormatableBitSet;

import java.sql.SQLException;

/**

Implementation of SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE().
<p>
Code which implements the following system procedure:

void SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE(
    IN SCHEMANAME        VARCHAR(128),
    IN TABLENAME         VARCHAR(128),
    IN PURGE_ROWS        SMALLINT,
    IN DEFRAGMENT_ROWS   SMALLINT,
    IN TRUNCATE_END      SMALLINT)
<p>
Use the SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE system procedure to reclaim
unused, allocated space in a table and its indexes. Typically, unused
allocated space exists when a large amount of data is deleted from a table
and there have not been subsequent inserts to use the space freed by the
deletes. By default, Derby does not return unused space to the operating
system. For example, once a page has been allocated to a table or index,
it is not automatically returned to the operating system until the table or
index is destroyed. SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE allows you to
return unused space to the operating system.
<p>
This system procedure can be used to force 3 levels of in-place compression
of a SQL table: PURGE_ROWS, DEFRAGMENT_ROWS, TRUNCATE_END.
Unlike SYSCS_UTIL.SYSCS_COMPRESS_TABLE(), all work is done in place in the
existing table/index.
<p>
Syntax:

SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE(
    IN SCHEMANAME        VARCHAR(128),
    IN TABLENAME         VARCHAR(128),
    IN PURGE_ROWS        SMALLINT,
    IN DEFRAGMENT_ROWS   SMALLINT,
    IN TRUNCATE_END      SMALLINT)
<p>
SCHEMANAME:
An input argument of type VARCHAR(128) that specifies the schema of the
table. Passing a null will result in an error.
<p>
TABLENAME:
An input argument of type VARCHAR(128) that specifies the table name of
the table. The string must exactly match the case of the table name; an
argument of "Fred" will be passed to SQL as the delimited identifier 'Fred'.
Passing a null will result in an error.
<p>
PURGE_ROWS:
If PURGE_ROWS is set to a non-zero value, a single pass is made through the
table which purges committed deleted rows from the table. This space is
then available for future inserted rows, but remains allocated to the
table. Because this option scans every page of the table, its performance
is linearly related to the size of the table.
<p>
DEFRAGMENT_ROWS:
If DEFRAGMENT_ROWS is set to a non-zero value, a single defragment pass is
made which moves existing rows from the end of the table toward the front.
The goal of the defragment run is to empty a set of pages at the end of the
table which can then be returned to the OS by the TRUNCATE_END option. It
is recommended to run DEFRAGMENT_ROWS only if the TRUNCATE_END option is
also specified. This option scans the whole table and must update the index
entries for every base table row that moves, so execution time is linearly
related to the size of the table.
<p>
TRUNCATE_END:
If TRUNCATE_END is set to a non-zero value, all contiguous empty pages at
the end of the table are returned to the OS. Running the PURGE_ROWS and/or
DEFRAGMENT_ROWS passes first may increase the number of pages affected.
This option itself does no scans of the table, so it performs on the order
of a few system calls.
<p>
SQL example:
To compress a table called CUSTOMER in a schema called US, using all
available compress options:

call SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('US', 'CUSTOMER', 1, 1, 1);

To quickly return just the empty free space at the end of the same table;
this runs much more quickly than running all phases, but will likely return
much less space:

call SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('US', 'CUSTOMER', 0, 0, 1);

Java example:
To compress a table called CUSTOMER in a schema called US, using all
available compress options:

CallableStatement cs = conn.prepareCall(
    "CALL SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE(?, ?, ?, ?, ?)");
cs.setString(1, "US");
cs.setString(2, "CUSTOMER");
cs.setShort(3, (short) 1);
cs.setShort(4, (short) 1);
cs.setShort(5, (short) 1);
cs.execute();

To quickly return just the empty free space at the end of the same table:

CallableStatement cs = conn.prepareCall(
    "CALL SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE(?, ?, ?, ?, ?)");
cs.setString(1, "US");
cs.setString(2, "CUSTOMER");
cs.setShort(3, (short) 0);
cs.setShort(4, (short) 0);
cs.setShort(5, (short) 1);
cs.execute();
<p>
It is recommended that the SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE
procedure be issued in auto-commit mode.

Note: This procedure acquires an exclusive table lock on the table being
compressed. All statement plans dependent on the table or its indexes are
invalidated. For information on identifying unused space, see the Derby
Server and Administration Guide.
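<p>
To gauge how much space a compress pass could reclaim, the space
diagnostics can be queried first. The sketch below is illustrative only:
the table name 'CUSTOMER' is an example, and the exact syntax depends on
the Derby release (older releases expose the diagnostic as the virtual
table org.apache.derby.diag.SpaceTable, while later releases wrap it as
the SYSCS_DIAG.SPACE_TABLE table function):

select conglomeratename, isindex, numfreepages, estimspacesaving
    from new org.apache.derby.diag.SpaceTable('CUSTOMER') t;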
<p>
TODO LIST:
o defragment requires a table level lock in the nested user transaction,
  which will conflict with a user lock on the same table in the user
  transaction.

**/
public class OnlineCompress
{

    /** no requirement for a constructor */
    private OnlineCompress() {
    }

    /**
     * Implementation of SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE().
     * <p>
     * Top level implementation of the system procedure. All the real work
     * is found in the other routines in this file implementing the 3
     * phases of in-place compression: purge, defragment, and truncate.
     * <p>
     * @param schemaName     schema name of the table, required
     * @param tableName      table name to be compressed
     * @param purgeRows      if true, do a purge pass on the table
     * @param defragmentRows if true, do a defragment pass on the table
     * @param truncateEnd    if true, return empty pages at the end to
     *                       the OS.
     *
     * @exception SQLException Errors returned by throwing SQLException.
     **/
    public static void compressTable(
    String  schemaName,
    String  tableName,
    boolean purgeRows,
    boolean defragmentRows,
    boolean truncateEnd)
        throws SQLException
    {
        LanguageConnectionContext lcc = ConnectionUtil.getCurrentLCC();
        TransactionController     tc  = lcc.getTransactionExecute();

        try
        {
            DataDictionary data_dictionary = lcc.getDataDictionary();

            // Each of the following may give up locks allowing ddl on the
            // table, so each phase needs to do the data dictionary lookup.
            // The order is important as it makes sense to first purge
            // deleted rows, then defragment existing non-deleted rows, and
            // finally to truncate the end of the file which may have been
            // made larger by the previous purge/defragment pass.

            if (purgeRows)
                purgeRows(schemaName, tableName, data_dictionary, tc);

            if (defragmentRows)
                defragmentRows(schemaName, tableName, data_dictionary, tc);

            if (truncateEnd)
                truncateEnd(schemaName, tableName, data_dictionary, tc);
        }
        catch (StandardException se)
        {
            throw PublicAPI.wrapStandardException(se);
        }
    }
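    /*
     * A minimal sketch of driving compressTable() directly, rather than
     * through the system procedure. It assumes the caller already runs
     * inside the embedded engine, so that a LanguageConnectionContext is
     * on the current context stack; application code should instead use
     * the CALL syntax shown in the class javadoc.
     *
     *     OnlineCompress.compressTable(
     *         "US", "CUSTOMER",   // example schema and table names
     *         true,               // purge committed deleted rows
     *         true,               // defragment rows toward the front
     *         true);              // return trailing empty pages to the OS
     */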
    /**
     * Defragment rows in the given table.
     * <p>
     * Scans the rows at the end of a table and moves them to free spots
     * towards the beginning of the table. In the same transaction all
     * associated indexes are updated to reflect the new location of the
     * base table row.
     * <p>
     * After a defragment pass, where possible, there will be a set of
     * empty pages at the end of the table which can be returned to the
     * operating system by calling truncateEnd(). The allocation bit maps
     * will be set so that new inserts will tend to go to empty and half
     * filled pages starting from the front of the conglomerate.
     *
     * @param schemaName      schema of the table to defragment
     * @param tableName       name of the table to defragment
     * @param data_dictionary An open data dictionary to look up the
     *                        table in.
     * @param tc              transaction controller to use to do updates.
     *
     **/
    private static void defragmentRows(
    String                schemaName,
    String                tableName,
    DataDictionary        data_dictionary,
    TransactionController tc)
        throws SQLException
    {
        GroupFetchScanController base_group_fetch_cc = null;
        int                      num_indexes         = 0;

        int[][]                  index_col_map       = null;
        ScanController[]         index_scan          = null;
        ConglomerateController[] index_cc            = null;
        DataValueDescriptor[][]  index_row           = null;

        LanguageConnectionContext lcc       = ConnectionUtil.getCurrentLCC();
        TransactionController     nested_tc = null;

        try
        {
            SchemaDescriptor sd =
                data_dictionary.getSchemaDescriptor(
                    schemaName, nested_tc, true);
            TableDescriptor td =
                data_dictionary.getTableDescriptor(tableName, sd);
            nested_tc =
                tc.startNestedUserTransaction(false);

            if (td == null)
            {
                throw StandardException.newException(
                    SQLState.LANG_TABLE_NOT_FOUND,
                    schemaName + "." + tableName);
            }

            switch (td.getTableType())
            {
            /* Skip views and vti tables */
            case TableDescriptor.VIEW_TYPE:
                return;
            // other types give various errors here
            // DERBY-719,DERBY-720
            default:
                break;
            }

            ConglomerateDescriptor heapCD =
                td.getConglomerateDescriptor(td.getHeapConglomerateId());

            /* Get a row template for the base table */
            ExecRow baseRow =
                lcc.getExecutionContext().getExecutionFactory().getValueRow(
                    td.getNumberOfColumns());

            /* Fill the row with nulls of the correct type */
            ColumnDescriptorList cdl     = td.getColumnDescriptorList();
            int                  cdlSize = cdl.size();

            for (int index = 0; index < cdlSize; index++)
            {
                ColumnDescriptor cd = (ColumnDescriptor) cdl.elementAt(index);
                baseRow.setColumn(cd.getPosition(), cd.getType().getNull());
            }

            DataValueDescriptor[][] row_array = new DataValueDescriptor[100][];
            row_array[0] = baseRow.getRowArray();
            RowLocation[] old_row_location_array = new RowLocation[100];
            RowLocation[] new_row_location_array = new RowLocation[100];

            // Create the following 3 arrays which will be used to update
            // each index as the scan moves rows about the heap as part of
            // the compress:
            //     index_col_map - map location of index cols in the base
            //                     row, ie. index_col_map[0] is the column
            //                     offset of the 1st key column in the base
            //                     row. All offsets are 0 based.
            //     index_scan    - open ScanController used to delete the
            //                     old index row
            //     index_cc      - open ConglomerateController used to
            //                     insert the new row

            ConglomerateDescriptor[] conglom_descriptors =
                td.getConglomerateDescriptors();

            // conglom_descriptors has an entry for the heap conglomerate
            // and one for each of its indexes.
            num_indexes = conglom_descriptors.length - 1;

            // if indexes exist, set up data structures to update them
            if (num_indexes > 0)
            {
                // allocate arrays
                index_col_map = new int[num_indexes][];
                index_scan    = new ScanController[num_indexes];
                index_cc      = new ConglomerateController[num_indexes];
                index_row     = new DataValueDescriptor[num_indexes][];

                setup_indexes(
                    nested_tc,
                    td,
                    index_col_map,
                    index_scan,
                    index_cc,
                    index_row);
            }

            /* Open the heap for reading */
            base_group_fetch_cc =
                nested_tc.defragmentConglomerate(
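            // A sketch of the group-fetch loop that consumes a defragment
            // scan like the one being opened above. It illustrates the
            // pattern described in the comments (old and new row locations
            // reported per group, each index repointed per moved row); it
            // is not verbatim Derby code, and fix_index_entry_for_row is a
            // hypothetical helper named only for illustration.
            //
            //     int rows;
            //     while ((rows = base_group_fetch_cc.fetchNextGroup(
            //                 row_array,
            //                 old_row_location_array,
            //                 new_row_location_array)) > 0)
            //     {
            //         for (int r = 0; r < rows; r++)
            //             for (int i = 0; i < num_indexes; i++)
            //                 // delete the index entry keyed by
            //                 // old_row_location_array[r], then insert one
            //                 // for new_row_location_array[r], building
            //                 // the key from the base row via
            //                 // index_col_map[i]
            //                 fix_index_entry_for_row(r, i);
            //     }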