dblqh.hpp (from the mysql-5.0.22.tar.gz source package; listing page 1 of 5)

 * (that manages the tuples).
 *
 * Dblqh also keeps track of the participants and acts as a coordinator of
 * 2-phase commits.  Logical redo logging is also handled by the Dblqh
 * block.
 *
 * @section secModules Modules
 *
 * The code is partitioned into the following modules:
 * - START / RESTART
 *   - Start phase 1: Load our block reference and our processor id.
 *   - Start phase 2: Initiate all records within the block.
 *                    Connect LQH with ACC and TUP.
 *   - Start phase 4: Connect LQH with LQH.  Connect every LQH with
 *                    every LQH in the database system.
 *                    If initial start, then create the fragment log files.
 *                    If system restart or node restart, then open the
 *                    fragment log files and find the end of the log files.
 * - ADD / DELETE FRAGMENT<br>
 *   Used by the dictionary to create new fragments and delete old fragments.
 * - EXECUTION<br>
 *   Handles the reception of lqhkeyreq and all processing of operations
 *   on behalf of this request.  This also involves reception of various
 *   types of attrinfo and keyinfo, as well as communication with ACC
 *   and TUP.
 * - LOG<br>
 *   The log module handles the reading and writing of the log.  It is
 *   also responsible for handling system restart, and it controls the
 *   system restart in TUP and ACC as well.
 * - TRANSACTION<br>
 *   This module handles the commit and the complete phases.
 * - MODULE TO HANDLE TC FAILURE<br>
 * - SCAN<br>
 *   This module contains the code that handles a scan of a particular
 *   fragment.  It operates under the control of TC and orders ACC to
 *   perform a scan of all tuples in the fragment.  TUP evaluates the
 *   necessary search conditions to ensure that only valid tuples are
 *   returned to the application.
 * - NODE RECOVERY<br>
 *   Used when a node has failed.  It copies a fragment to a new replica
 *   of the fragment and also shuts down all connections to the failed
 *   node.
 * - LOCAL CHECKPOINT<br>
 *   Handles execution and control of LCPs.  It controls the LCPs in TUP
 *   and ACC and also interacts with DIH to control which GCPs are
 *   recoverable.
 * - GLOBAL CHECKPOINT<br>
 *   Helps DIH in discovering when GCPs are recoverable.  It handles the
 *   request gcp_savereq, which asks LQH to save a particular GCP to disk
 *   and respond when completed.
 * - FILE HANDLING<br>
 *   With submodules:
 *   - SIGNAL RECEPTION
 *   - NORMAL OPERATION
 *   - FILE CHANGE
 *   - INITIAL START
 *   - SYSTEM RESTART PHASE ONE
 *   - SYSTEM RESTART PHASE TWO
 *   - SYSTEM RESTART PHASE THREE
 *   - SYSTEM RESTART PHASE FOUR
 * - ERROR
 * - TEST
 * - LOG
 */
class Dblqh: public SimulatedBlock {
public:
  enum LcpCloseState {
    LCP_IDLE = 0,
    LCP_RUNNING = 1,       // LCP is running
    LCP_CLOSE_STARTED = 2, // Completion (closing of files) has started
    ACC_LCP_CLOSE_COMPLETED = 3,
    TUP_LCP_CLOSE_COMPLETED = 4
  };

  enum ExecUndoLogState {
    EULS_IDLE = 0,
    EULS_STARTED = 1,
    EULS_COMPLETED = 2,
    EULS_ACC_COMPLETED = 3,
    EULS_TUP_COMPLETED = 4
  };

  struct AddFragRecord {
    enum AddFragStatus {
      FREE = 0,
      ACC_ADDFRAG = 1,
      WAIT_TWO_TUP = 2,
      WAIT_ONE_TUP = 3,
      WAIT_TWO_TUX = 4,
      WAIT_ONE_TUX = 5,
      WAIT_ADD_ATTR = 6,
      TUP_ATTR_WAIT1 = 7,
      TUP_ATTR_WAIT2 = 8,
      TUX_ATTR_WAIT1 = 9,
      TUX_ATTR_WAIT2 = 10
    };
    LqhAddAttrReq::Entry attributes[LqhAddAttrReq::MAX_ATTRIBUTES];
    UintR accConnectptr;
    AddFragStatus addfragStatus;
    UintR dictConnectptr;
    UintR fragmentPtr;
    UintR nextAddfragrec;
    UintR noOfAllocPages;
    UintR schemaVer;
    UintR tup1Connectptr;
    UintR tup2Connectptr;
    UintR tux1Connectptr;
    UintR tux2Connectptr;
    UintR checksumIndicator;
    UintR GCPIndicator;
    BlockReference dictBlockref;
    Uint32 m_senderAttrPtr;
    Uint16 addfragErrorCode;
    Uint16 attrSentToTup;
    Uint16 attrReceived;
    Uint16 addFragid;
    Uint16 fragid1;
    Uint16 fragid2;
    Uint16 noOfAttr;
    Uint16 noOfNull;
    Uint16 tabId;
    Uint16 totalAttrReceived;
    Uint16 fragCopyCreation;
    Uint16 noOfKeyAttr;
    Uint32 noOfNewAttr; // noOfCharsets in upper half
    Uint16 noOfAttributeGroups;
    Uint16 lh3DistrBits;
    Uint16 tableType;
    Uint16 primaryTableId;
  }; // Size 108 bytes
  typedef Ptr<AddFragRecord> AddFragRecordPtr;

  /* $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ */
  /* $$$$$$$               ATTRIBUTE INFORMATION RECORD              $$$$$$$ */
  /* $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ */
  /**
   *       Can contain one (1) attrinfo signal.
   *       One signal contains 24 attrinfo words.
   *       But 32 elements are used to make plex happy.
   *       Some of the elements are used for the following things:
   *       - The data length in this record is stored in the
   *         element indexed by ZINBUF_DATA_LEN.
   *       - The next attrinbuf is pointed to by the element
   *         indexed by ZINBUF_NEXT.
   */
  struct Attrbuf {
    UintR attrbuf[32];
  }; // Size 128 bytes
  typedef Ptr<Attrbuf> AttrbufPtr;
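  /**
   * A minimal traversal sketch of the chained layout described above.
   * Assumptions: the chain is terminated by RNIL, ZINBUF_DATA_LEN and
   * ZINBUF_NEXT are the element indices mentioned in the comment (their
   * values are defined elsewhere in the block), and "pool" stands in for
   * however the Attrbuf records are actually allocated:
   *
   * @code
   *   // Sum the attrinfo words stored in a chain of Attrbuf records.
   *   Uint32 sumAttrinfoWords(const Attrbuf* pool, Uint32 firstBufI)
   *   {
   *     Uint32 total = 0;
   *     for (Uint32 i = firstBufI; i != RNIL; ) {
   *       const Attrbuf& buf = pool[i];
   *       total += buf.attrbuf[ZINBUF_DATA_LEN]; // words used in this buffer
   *       i = buf.attrbuf[ZINBUF_NEXT];          // index of the next attrinbuf
   *     }
   *     return total;
   *   }
   * @endcode
   */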
  /* $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ */
  /* $$$$$$$                         DATA BUFFER                     $$$$$$$ */
  /* $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ */
  /**
   *       This buffer is used as general data storage.
   */
  struct Databuf {
    UintR data[4];
    UintR nextDatabuf;
  }; // Size 20 bytes
  typedef Ptr<Databuf> DatabufPtr;

  struct ScanRecord {
    enum ScanState {
      SCAN_FREE = 0,
      WAIT_STORED_PROC_COPY = 1,
      WAIT_STORED_PROC_SCAN = 2,
      WAIT_NEXT_SCAN_COPY = 3,
      WAIT_NEXT_SCAN = 4,
      WAIT_DELETE_STORED_PROC_ID_SCAN = 5,
      WAIT_DELETE_STORED_PROC_ID_COPY = 6,
      WAIT_ACC_COPY = 7,
      WAIT_ACC_SCAN = 8,
      WAIT_SCAN_NEXTREQ = 10,
      WAIT_CLOSE_SCAN = 12,
      WAIT_CLOSE_COPY = 13,
      WAIT_RELEASE_LOCK = 14,
      WAIT_TUPKEY_COPY = 15,
      WAIT_LQHKEY_COPY = 16,
      IN_QUEUE = 17
    };
    enum ScanType {
      ST_IDLE = 0,
      SCAN = 1,
      COPY = 2
    };

    UintR scan_acc_op_ptr[32];
    Uint32 scan_acc_index;
    Uint32 scan_acc_attr_recs;
    UintR scanApiOpPtr;
    UintR scanLocalref[2];

    Uint32 m_max_batch_size_rows;
    Uint32 m_max_batch_size_bytes;
    Uint32 m_curr_batch_size_rows;
    Uint32 m_curr_batch_size_bytes;
    bool check_scan_batch_completed() const;

    UintR copyPtr;
    union {
      Uint32 nextPool;
      Uint32 nextList;
    };
    Uint32 prevList;
    Uint32 nextHash;
    Uint32 prevHash;
    bool equal(const ScanRecord & key) const {
      return scanNumber == key.scanNumber && fragPtrI == key.fragPtrI;
    }
    Uint32 hashValue() const {
      return fragPtrI ^ scanNumber;
    }

    UintR scanAccPtr;
    UintR scanAiLength;
    UintR scanErrorCounter;
    UintR scanLocalFragid;
    UintR scanSchemaVersion;

    /**
     * This is _always_ the main table, even in a range scan,
     * in which case scanTcrec->fragmentptr is different.
     */
    Uint32 fragPtrI;
    UintR scanStoredProcId;
    ScanState scanState;
    UintR scanTcrec;
    ScanType scanType;
    BlockReference scanApiBlockref;
    NodeId scanNodeId;
    Uint16 scanReleaseCounter;
    Uint16 scanNumber;

    // Scan source block: ACC, TUX or TUP
    BlockReference scanBlockref;

    Uint8 scanCompletedStatus;
    Uint8 scanFlag;
    Uint8 scanLockHold;
    Uint8 scanLockMode;
    Uint8 readCommitted;
    Uint8 rangeScan;
    Uint8 descending;
    Uint8 tupScan;
    Uint8 scanTcWaiting;
    Uint8 scanKeyinfoFlag;
    Uint8 m_last_row;
  }; // Size 272 bytes
  typedef Ptr<ScanRecord> ScanRecordPtr;
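  /**
   * Notes on ScanRecord: equal() and hashValue() above key the record on
   * (fragPtrI, scanNumber), so scan records can be linked into a hash table
   * through nextHash/prevHash.  The batch-size members track the current
   * batch against its configured limits; the definition of
   * check_scan_batch_completed() lies outside this excerpt, but a sketch
   * consistent with those member names (assuming a limit of 0 means
   * "no limit") would be:
   *
   * @code
   *   // Sketch only: a batch is treated as complete once either the row
   *   // or the byte budget has been reached.
   *   bool Dblqh::ScanRecord::check_scan_batch_completed() const
   *   {
   *     return (m_max_batch_size_rows > 0 &&
   *             m_curr_batch_size_rows >= m_max_batch_size_rows) ||
   *            (m_max_batch_size_bytes > 0 &&
   *             m_curr_batch_size_bytes >= m_max_batch_size_bytes);
   *   }
   * @endcode
   */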
  struct Fragrecord {
    enum ExecSrStatus {
      IDLE = 0,
      ACTIVE_REMOVE_AFTER = 1,
      ACTIVE = 2
    };
    /**
     * Possible state transitions are:
     * - FREE -> DEFINED                 Fragment record is allocated.
     * - DEFINED -> ACTIVE               Add fragment is completed and the
     *                                   fragment is ready to receive
     *                                   operations.
     * - DEFINED -> ACTIVE_CREATION      Add fragment is completed and the
     *                                   fragment is ready to receive
     *                                   operations in parallel with a copy
     *                                   fragment which is performed from
     *                                   the primary replica.
     * - DEFINED -> CRASH_RECOVERING     A fragment is ready to be recovered
     *                                   from a local checkpoint on disk.
     * - ACTIVE -> BLOCKED               A local checkpoint is to be started.
     *                                   No more operations are allowed to
     *                                   be started until the local
     *                                   checkpoint has been started.
     * - ACTIVE -> REMOVING              The fragment is removed from the
     *                                   node.
     * - BLOCKED -> ACTIVE               Operations are allowed again in
     *                                   the fragment.
     * - CRASH_RECOVERING -> ACTIVE      The fragment has been recovered
     *                                   and is now ready for operations
     *                                   again.
     * - CRASH_RECOVERING -> REMOVING    Fragment recovery failed or was
     *                                   cancelled.
     * - ACTIVE_CREATION -> ACTIVE       The fragment has now been copied
     *                                   and is now a normal fragment.
     * - ACTIVE_CREATION -> REMOVING     Copying of the fragment failed.
     * - REMOVING -> FREE                Removal of the fragment is
     *                                   completed and the fragment is now
     *                                   free again.
     */
    enum FragStatus {
      FREE = 0,               ///< Fragment record is currently not in use
      FSACTIVE = 1,           ///< Fragment is defined and usable for operations
      DEFINED = 2,            ///< Fragment is defined but not yet usable by
                              ///< operations
      BLOCKED = 3,            ///< LQH is waiting for all active operations to
                              ///< complete the current phase so that the
                              ///< local checkpoint can be started.
      ACTIVE_CREATION = 4,    ///< Fragment is defined and active but is under
                              ///< creation by the primary LQH.
      CRASH_RECOVERING = 5,   ///< Fragment is recovering after a crash by
                              ///< executing the fragment log and so forth.
                              ///< Will need further breakdown.
      REMOVING = 6            ///< The fragment is currently being removed.
                              ///< Operations are not allowed.
    };
    enum LogFlag {
      STATE_TRUE = 0,
      STATE_FALSE = 1
    };
    enum SrStatus {
      SS_IDLE = 0,
      SS_STARTED = 1,
      SS_COMPLETED = 2
    };
    enum LcpFlag {
      LCP_STATE_TRUE = 0,
      LCP_STATE_FALSE = 1
    };
    /**
     *       Last GCI for executing the fragment log in this phase.
     */
    UintR execSrLastGci[4];
    /**
     *       Start GCI for executing the fragment log in this phase.
     */
    UintR execSrStartGci[4];
    /**
     *       Requesting user pointer for executing the fragment log in
     *       this phase.
     */
    UintR execSrUserptr[4];
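    /**
     * The FragStatus transitions documented above form a small state
     * machine (ACTIVE in that list corresponds to the FSACTIVE enum
     * value).  A hypothetical standalone helper, not part of the block,
     * that accepts exactly the documented transitions could look like:
     *
     * @code
     *   // Sketch only: validity check over the documented FragStatus
     *   // transitions.
     *   bool isValidFragStatusTransition(Dblqh::Fragrecord::FragStatus from,
     *                                    Dblqh::Fragrecord::FragStatus to)
     *   {
     *     typedef Dblqh::Fragrecord F;
     *     switch (from) {
     *     case F::FREE:             return to == F::DEFINED;
     *     case F::DEFINED:          return to == F::FSACTIVE ||
     *                                      to == F::ACTIVE_CREATION ||
     *                                      to == F::CRASH_RECOVERING;
     *     case F::FSACTIVE:         return to == F::BLOCKED ||
     *                                      to == F::REMOVING;
     *     case F::BLOCKED:          return to == F::FSACTIVE;
     *     case F::CRASH_RECOVERING: return to == F::FSACTIVE ||
     *                                      to == F::REMOVING;
     *     case F::ACTIVE_CREATION:  return to == F::FSACTIVE ||
     *                                      to == F::REMOVING;
     *     case F::REMOVING:         return to == F::FREE;
     *     }
     *     return false;
     *   }
     * @endcode
     */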
