
TransactionController.java

Derby database source code.
Page 1 of 5
     * parent transaction be the transaction used to make the
     * startNestedUserTransaction() call, and let the child transaction be the
     * transaction returned by the startNestedUserTransaction() call.
     * <p>
     * Only 1 non-readOnly nested user transaction can exist.  If a subsequent
     * non-readOnly transaction creation is attempted prior to destroying an
     * existing write nested user transaction, an exception will be thrown.
     * <p>
     * The nesting is limited to one level deep.  An exception will be thrown
     * if a subsequent getNestedUserTransaction() is called on the child
     * transaction.
     * <p>
     * The locks in the child transaction of a readOnly nested user transaction
     * will be compatible with the locks of the parent transaction.  The
     * locks in the child transaction of a non-readOnly nested user transaction
     * will NOT be compatible with those of the parent transaction - this is
     * necessary for correct recovery behavior.
     * <p>
     * A commit in the child transaction will release locks associated with
     * the child transaction only; work can continue in the parent transaction
     * at this point.
     * <p>
     * Any abort of the child transaction will result in an abort of both
     * the child transaction and the parent transaction, whether initiated by
     * an explicit abort() call or by an exception that results in an abort.
     * <p>
     * A TransactionController.destroy() call should be made on the child
     * transaction once all child work is done and the caller wishes to
     * continue work in the parent transaction.
     * <p>
     * AccessFactory.getTransaction() will always return the "parent"
     * transaction, never the child transaction.  Thus clients using
     * nested user transactions must keep track of the child transaction, as
     * there is no interface to query the storage system for the current
     * child transaction.
     * The idea is that a nested user transaction should be used for a
     * limited amount of work, committed, and then work continues in the
     * parent transaction.
     * <p>
     * Nested user transactions are meant to be used to implement system work
     * that must commit as part of implementing a user's request, but where
     * holding locks for the duration of the user transaction is not
     * acceptable.  Two examples of this are system catalog read locks
     * accumulated while compiling a plan, and auto-increment.
     * <p>
     * Once the first write of a non-readOnly nested transaction is done,
     * the nested user transaction must be committed or aborted before
     * any write operation is attempted in the parent transaction.
     *
     * @param readOnly  Is the transaction read-only?  Only 1 non-readOnly
     *                  nested transaction is allowed per transaction.
     *
     * @return The new nested user transaction.
     *
     * @exception  StandardException  Standard exception policy.
     **/
    public TransactionController startNestedUserTransaction(boolean readOnly)
        throws StandardException;

    /**
     * A superset of properties that "users" can specify.
     * <p>
     * A superset of properties that "users" (ie. from SQL) can specify.  Store
     * may implement other properties which should not be specified by users.
     * Layers above access may implement properties which are not known at
     * all to Access.
     * <p>
     * This list is a superset, as some properties may not be implemented by
     * certain types of conglomerates.  For instance, an in-memory store may
     * not implement a pageSize property, or some conglomerates may not
     * support pre-allocation.
     * <p>
     * This interface is meant to be used by the SQL parser to validate
     * properties passed to the create table statement, and also by the
     * various user interfaces which present table information back to the
     * user.
     * <p>
     * Currently this routine returns the following list:
     *      derby.storage.initialPages
     *      derby.storage.minimumRecordSize
     *      derby.storage.pageReservedSpace
     *      derby.storage.pageSize
     *
     * @return The superset of properties that "users" can specify.
     **/
    Properties getUserCreateConglomPropList();

    /**
     * Open a conglomerate for use.
     * <p>
     * The lock level indicates the minimum lock level to get locks at; the
     * underlying conglomerate implementation may actually lock at a higher
     * level (ie. the caller may request MODE_RECORD, but the table may be
     * locked at MODE_TABLE instead).
     * <p>
     * The close method is on the ConglomerateController interface.
     *
     * @param conglomId         The identifier of the conglomerate to open.
     *
     * @param hold              If true, will be maintained open over commits.
     *
     * @param open_mode         Specify flags to control opening of the table.
     *                          OPENMODE_FORUPDATE - if set, open the table
     *                          for update; otherwise open the table shared.
     *
     * @param lock_level        One of (MODE_TABLE, MODE_RECORD).
     *
     * @param isolation_level   The isolation level to lock the conglomerate
     *                          at.  One of (ISOLATION_READ_COMMITTED,
     *                          ISOLATION_REPEATABLE_READ or
     *                          ISOLATION_SERIALIZABLE).
     *
     * @return a ConglomerateController to manipulate the conglomerate.
     *
     * @exception  StandardException  if the conglomerate could not be opened
     *                                for some reason.  Throws
     *                                SQLState.STORE_CONGLOMERATE_DOES_NOT_EXIST
     *                                if the conglomId being requested does not
     *                                exist for some reason (ie. someone has
     *                                dropped it).
     **/
    ConglomerateController openConglomerate(
    long                            conglomId,
    boolean                         hold,
    int                             open_mode,
    int                             lock_level,
    int                             isolation_level)
        throws StandardException;

    /**
     * Open a conglomerate for use, optionally including "compiled" info.
     * <p>
     * Same as openConglomerate(), except that one can optionally provide
     * "compiled" static_info and/or dynamic_info.  This compiled information
     * must have been gotten from getDynamicCompiledConglomInfo() and/or
     * getStaticCompiledConglomInfo() calls on the same conglomId being
     * opened.  It is up to the caller to ensure that the "compiled"
     * information is still valid and is appropriately protected for
     * multi-threaded use.
     *
     * @see TransactionController#openConglomerate
     * @see TransactionController#getDynamicCompiledConglomInfo
     * @see TransactionController#getStaticCompiledConglomInfo
     * @see DynamicCompiledOpenConglomInfo
     * @see StaticCompiledOpenConglomInfo
     *
     * @param hold              If true, will be maintained open over commits.
     * @param open_mode         Specify flags to control opening of the table.
     * @param lock_level        One of (MODE_TABLE, MODE_RECORD).
     * @param isolation_level   The isolation level to lock the conglomerate
     *                          at.  One of (ISOLATION_READ_COMMITTED,
     *                          ISOLATION_REPEATABLE_READ or
     *                          ISOLATION_SERIALIZABLE).
     * @param static_info       object returned from
     *                          getStaticCompiledConglomInfo() call on this id.
     * @param dynamic_info      object returned from
     *                          getDynamicCompiledConglomInfo() call on this id.
     *
     * @return a ConglomerateController to manipulate the conglomerate.
     *
     * @exception  StandardException  Standard exception policy.
     **/
    ConglomerateController openCompiledConglomerate(
    boolean                         hold,
    int                             open_mode,
    int                             lock_level,
    int                             isolation_level,
    StaticCompiledOpenConglomInfo   static_info,
    DynamicCompiledOpenConglomInfo  dynamic_info)
        throws StandardException;

    /**
     * Create a BackingStoreHashtable which contains all rows that qualify
     * for the described scan.
     * <p>
     * All parameters shared between openScan() and this routine are
     * interpreted exactly the same.  Logically this routine calls openScan()
     * with the passed-in set of parameters and then places all returned rows
     * into a newly created hash table; actual implementations will likely
     * perform better than literally calling openScan() and doing this.  For
     * documentation of the shared parameters see openScan().
     *
     * @return the BackingStoreHashtable which was created.
     *
     * @param conglomId             see openScan()
     * @param open_mode             see openScan()
     * @param lock_level            see openScan()
     * @param isolation_level       see openScan()
     * @param scanColumnList        see openScan()
     * @param startKeyValue         see openScan()
     * @param startSearchOperator   see openScan()
     * @param qualifier             see openScan()
     * @param stopKeyValue          see openScan()
     * @param stopSearchOperator    see openScan()
     *
     * @param max_rowcnt            The maximum number of rows to insert into
     *                              the hash table.  Pass in -1 if there is no
     *                              maximum.
     * @param key_column_numbers    The column numbers of the columns in the
     *                              scan result row to be the key to the
     *                              hash table.
     *                              "0" is the first column in the scan result
     *                              row (which may be different from the first
     *                              column in the table being scanned).
     * @param remove_duplicates     Should the hash table automatically remove
     *                              duplicates, or should it create a Vector
     *                              of the duplicates?
     * @param estimated_rowcnt      The number of rows that the caller
     *                              estimates will be inserted into the sort.
     *                              -1 indicates that the caller has no idea.
     *                              Used by the sort to make good choices
     *                              about in-memory vs. external sorting, and
     *                              to size merge runs.
     * @param max_inmemory_rowcnt   The number of rows at which the underlying
     *                              hash table implementation should cut over
     *                              from an in-memory hash to a disk-based
     *                              access method.
     * @param initialCapacity       If not -1, used to initialize the java
     *                              Hashtable.
     * @param loadFactor            If not -1, used to initialize the java
     *                              Hashtable.
     * @param collect_runtimestats  If true, will collect runtime stats during
     *                              scan processing for retrieval by
     *                              BackingStoreHashtable.getRuntimeStats().
     * @param skipNullKeyColumns    Whether or not to skip rows with 1 or
     *                              more null key columns.
     *
     * @see BackingStoreHashtable
     * @see TransactionController#openScan
     *
     * @exception  StandardException  Standard exception policy.
     **/
    BackingStoreHashtable createBackingStoreHashtableFromScan(
    long                    conglomId,
    int                     open_mode,
    int                     lock_level,
    int                     isolation_level,
    FormatableBitSet        scanColumnList,
    DataValueDescriptor[]   startKeyValue,
    int                     startSearchOperator,
    Qualifier               qualifier[][],
    DataValueDescriptor[]   stopKeyValue,
    int                     stopSearchOperator,
    long                    max_rowcnt,
    int[]                   key_column_numbers,
    boolean                 remove_duplicates,
    long                    estimated_rowcnt,
    long                    max_inmemory_rowcnt,
    int                     initialCapacity,
    float                   loadFactor,
    boolean                 collect_runtimestats,
    boolean                 skipNullKeyColumns)
        throws StandardException;

	/**
	Open a scan on a conglomerate.  The scan will return all
	rows in the conglomerate which are between the
	positions defined by {startKeyValue, startSearchOperator} and
	{stopKeyValue, stopSearchOperator}, and which also match the qualifier.
	<P>
	The way that starting and stopping keys and operators are used
	may best be described by example.  Say there's an ordered conglomerate
	with two columns, where the 0-th column is named 'x' and the 1st
	column is named 'y'.  The values of the columns are as follows:
	<blockquote><pre>
	  x: 1 3 4 4 4 5 5 5 6 7 9
	  y: 1 1 2 4 6 2 4 6 1 1 1
	</pre></blockquote>
	<P>
	A {start key, search op} pair of {{5,2}, GE} would position on
	{x=5, y=2}, whereas the pair {{5}, GT} would position on {x=6, y=1}.
	<P>
	Partial keys are used to implement partial key scans in SQL.
	For example, the SQL "select * from t where x = 5" would
	open a scan on the conglomerate (or a useful index) of t
	using a starting position partial key of {{5}, GE} and
	a stopping position partial key of {{5}, GT}.
	<P>
	Some more examples:
	<blockquote><pre>
	+-------------------+------------+-----------+--------------+--------------+
	| predicate         | start key  | stop key  | rows         | rows locked  |
	|                   | value | op | value |op | returned     |serialization |
	+-------------------+-------+----+-------+---+--------------+--------------+
	| x = 5             | {5}   | GE | {5}   |GT |{5,2} .. {5,6}|{4,6} .. {5,6}|
	| x > 5             | {5}   | GT | null  |   |{6,1} .. {9,1}|{5,6} .. {9,1}|
	| x >= 5            | {5}   | GE | null  |   |{5,2} .. {9,1}|{4,6} .. {9,1}|
	| x <= 5            | null  |    | {5}   |GT |{1,1} .. {5,6}|first .. {5,6}|
	| x < 5             | null  |    | {5}   |GE |{1,1} .. {4,6}|first .. {4,6}|
	| x >= 5 and x <= 7 | {5}   | GE | {7}   |GT |{5,2} .. {7,1}|{4,6} .. {7,1}|
	| x = 5  and y > 2  | {5,2} | GT | {5}   |GT |{5,4} .. {5,6}|{5,2} .. {5,6}|
	| x = 5  and y >= 2 | {5,2} | GE | {5}   |GT |{5,2} .. {5,6}|{4,6} .. {5,6}|
	| x = 5  and y < 5  | {5}   | GE | {5,5} |GE |{5,2} .. {5,4}|{4,6} .. {5,4}|
	| x = 2             | {2}   | GE | {2}   |GT | none         |{1,1} .. {1,1}|
	+-------------------+-------+----+-------+---+--------------+--------------+
	</pre></blockquote>
	<P>
	As the above table implies, the underlying scan may lock
	more rows than it returns in order to guarantee serialization.
	<P>
	For each row which meets the start and stop position, as described above,
	the row is "qualified" to see whether it should be returned.  The
	qualification is a two-dimensional array of Qualifiers
	(see Qualifier), which represents
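The partial-key positioning rules above can be sketched with a small, self-contained Java model over the sample data from the javadoc. This is not Derby code: keys are plain `int[]` arrays rather than `DataValueDescriptor[]`, the operators are reduced to a GE/GT boolean, and qualifiers are omitted, but the start/stop semantics follow the table (start GE = first row >= key on the key's leading columns; stop GT = stop before the first row > key).

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PartialKeyScanDemo {

    // Sample conglomerate rows from the javadoc, ordered on (x, y).
    static final int[][] ROWS = {
        {1,1},{3,1},{4,2},{4,4},{4,6},{5,2},{5,4},{5,6},{6,1},{7,1},{9,1}
    };

    // Compare a row against a partial key on the key's leading columns only.
    // Returns <0, 0, >0 as the row is below, equal to, or above the key.
    static int comparePartial(int[] row, int[] key) {
        for (int i = 0; i < key.length; i++) {
            if (row[i] != key[i]) return row[i] < key[i] ? -1 : 1;
        }
        return 0;  // row matches the partial key on all key columns
    }

    // Scan rows between {startKey, GE/GT} and {stopKey, GE/GT}.
    // A null key means "no bound", as in the predicate table above.
    static List<int[]> scan(int[] startKey, boolean startGE,
                            int[] stopKey,  boolean stopGE) {
        List<int[]> out = new ArrayList<>();
        for (int[] row : ROWS) {
            if (startKey != null) {
                int c = comparePartial(row, startKey);
                // GE start: skip rows below key; GT start: also skip equals.
                if (startGE ? c < 0 : c <= 0) continue;
            }
            if (stopKey != null) {
                int c = comparePartial(row, stopKey);
                // GE stop: stop at first row >= key; GT stop: at first row > key.
                if (stopGE ? c >= 0 : c > 0) break;
            }
            out.add(row);
        }
        return out;
    }

    public static void main(String[] args) {
        // "x = 5": start {{5}, GE}, stop {{5}, GT} -> {5,2} .. {5,6}
        List<int[]> eq5 = scan(new int[]{5}, true, new int[]{5}, false);
        System.out.println(Arrays.deepToString(eq5.toArray()));
        // prints [[5, 2], [5, 4], [5, 6]]

        // A start key of {{5}, GT} positions on {6,1}, as described above.
        List<int[]> gt5 = scan(new int[]{5}, false, null, false);
        System.out.println(Arrays.toString(gt5.get(0)));
        // prints [6, 1]
    }
}
```

Each "rows returned" range in the predicate table can be reproduced by picking the corresponding start/stop pair; for instance "x = 5 and y >= 2" is `scan(new int[]{5,2}, true, new int[]{5}, false)`.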
