
📄 umc_h264_enc_cpb.h

📁 IPP-library-based sample code for PCA. After downloading the Intel IPP library online and setting the relevant build parameters, this code can be compiled.
💻 H
📖 Page 1 of 2
//
//               INTEL CORPORATION PROPRIETARY INFORMATION
//  This software is supplied under the terms of a license agreement or
//  nondisclosure agreement with Intel Corporation and may not be copied
//  or disclosed except in accordance with the terms of that agreement.
//        Copyright (c) 2004 - 2005 Intel Corporation. All Rights Reserved.
//

#ifndef __UMC_H264_ENC_CPB_H__
#define __UMC_H264_ENC_CPB_H__

#include "umc_h264_pub.h"
#include "vm_debug.h"

namespace UMC
{

// To be fully general, this class should allow the amount of padding to be
// specified via a constructor parameter.  However, we don't need such
// generality currently.  To simplify the class, we will use the hard coded
// padding amounts defined below.

class H264EncYUVBufferPadded
{
    DYNAMIC_CAST_DECL_BASE(H264EncYUVBufferPadded)

public:
    Ipp8u                 *m_pAllocatedBuffer;
    // m_pAllocatedBuffer contains the pointer returned when
    // we allocated space for the data.

    Ipp32u                 m_allocatedSize;
    // This is the size with which m_pAllocatedBuffer was allocated.

    Ipp8u                 *m_pBuffer;
    // m_pBuffer is a "YUV_ALIGNMENT"-byte aligned address, pointing
    // to the beginning of the padded YUV data within
    // m_pAllocatedBuffer.

    sDimensions                m_lumaSize;
    // m_lumaSize specifies the dimensions of the Y plane, as
    // specified when allocate() was most recently called.
    //
    // For clarity, it should be noted that in the context of our
    // codec, these dimensions have typically already been rounded
    // up to a multiple of 16.  However, such rounding is performed
    // outside of this class.  We use whatever dimensions come into
    // allocate().

    Ipp32u                    m_pitch;
    // m_pitch is 0 if the buffer hasn't been allocated.
    // Otherwise it is the current pitch in effect (typically
    // 2*Y_PADDING + m_lumaSize.width).

public:
    Ipp8u                *m_pYPlane;
    Ipp8u                *m_pUPlane;
    Ipp8u                *m_pVPlane;

    H264EncYUVBufferPadded();
    virtual            ~H264EncYUVBufferPadded();

    void    clear()
    {
        m_pYPlane = m_pUPlane = m_pVPlane = 0;
    }

    Status                    allocate(const sDimensions &lumaSize, Ipp32s);
    Status                    ExchangePointers(H264EncYUVBufferPadded *src);
    // Reallocate the buffer, if necessary, so that it is large enough
    // to hold a YUV image of the given dimensions, and set the plane
    // pointers and pitch appropriately.  If the existing buffer is
    // already big enough, then we reuse it.  If this behavior is not
    // desired, then call deallocate() prior to calling allocate().

    virtual
    void                    deallocate();
    // Deallocate the buffer and clear all state information.

    virtual
    void                    conditionalDeallocate(const sDimensions&);
    // Deallocate the buffer only if its current luma dimensions
    // differ from those specified in the parameter.

    const sDimensions&        lumaSize() { return m_lumaSize; }
    Ipp32u                    pitch()    { return m_pitch; }
};

// The H264EncYUVWorkSpace class represents a padded YUV image which can have
// various processing performed on it, such as replicating the pixels
// around its edges, and applying postfilters.

class H264EncYUVWorkSpace : public H264EncYUVBufferPadded
{
private:
    bool            m_isExpanded;
    // This indicates whether pixels around the edges of the planes
    // have been replicated into the surrounding padding area.
    // The expand() method performs this operation.

    sDimensions        m_macroBlockSize;
    //sDimensions        m_subBlockSize;
    // The image's width and height, in units of macroblocks and
    // 4x4 subblocks.  For example, a QCIF image is 11 MBs wide and
    // 9 MBs high; or 44 subblocks wide and 36 subblocks high.

private:
    void        clearState()
    {
        // Reset our state to indicate no processing has been performed.
        // This is called when our buffer is allocated.
        m_isExpanded = false;
        m_macroBlockSize /*= m_subBlockSize*/ = sDimensions(0,0);
    }

public:
    H264EncYUVWorkSpace() { clearState(); }
    virtual            ~H264EncYUVWorkSpace() { }

    virtual Status                allocate(const sDimensions &lumaSize, Ipp32s);
    // Reallocate the buffer, if necessary, so that it is large enough
    // to hold a YUV image of the given dimensions, rounded up to
    // the next macroblock boundary.  If the existing buffer is
    // already big enough, then we reuse it.  If this behavior is not
    // desired, then call deallocate() prior to calling allocate().
    // Regardless of whether the buffer is reallocated, our state
    // is cleared to indicate that no processing has been performed
    // on this buffer.

    virtual void                conditionalDeallocate(const sDimensions &);
    // Deallocate the buffer only if its current dimensions
    // differ from those specified in the parameter, after rounding
    // up to the next macroblock boundary.

    void                expand(bool is_field_flag, Ipp8u is_bottom_field);
    bool                isExpanded()     { return m_isExpanded; }
    void                setExpanded()    { m_isExpanded = true; }

    const sDimensions&    macroBlockSize() { return m_macroBlockSize; }
    //const sDimensions&    subBlockSize()   { return m_subBlockSize; }

    void                copyState(H264EncYUVWorkSpace &/*src*/)
    {
        m_isExpanded  = false;
    }
};

class H264EncoderFrame : public H264EncYUVWorkSpace
{
    // These point to the previous and future reference frames
    // used to decode this frame.
    // These are also used to maintain a doubly-linked list of
    // reference frames in the H264EncoderFrameList class.  So, these
    // may be non-NULL even if the current frame is an I frame,
    // for example.  m_pPreviousFrame will always be valid when
    // decoding a P or B frame, and m_pFutureFrame will always
    // be valid when decoding a B frame.

public:
    // L0 and L1 refer to RefList 0 and RefList 1 in JVT spec.
    // In this implementation, L0 is Forward MVs in B slices, and all MVs in P slices.
    // L1 is Backward MVs in B slices and unused in P slices.
    H264GlobalMacroblocksDescriptor * m_mbinfo;

    T_ECORE_MV  *pMVL0;      // current MV
    T_ECORE_MV  *pMVL1;      // current backward MV
    T_ECORE_BIGMV   *pDMVL0;     // current DMV
    T_ECORE_BIGMV   *pDMVL1;     // current backward DMV
    T_RefIdx    *pRefIdxL0;  // current RefIdx
    T_RefIdx    *pRefIdxL1;  // current backward RefIdx

    Ipp8u*          pPred4DirectB;      // the 16x16 MB prediction for direct B mode
    Ipp8u*          pPred4BiPred;       // the 16x16 MB prediction for BiPredicted B Mode
    Ipp8u*          pTempBuff4DirectB;  // 16x16 working buffer for direct B

    T_AIMode    *pAIMode;           // current AI mode into here
    T_NumCoeffs *pYNumCoeffs;       // Number of nonzero coeffs per 4x4 block
    T_NumCoeffs *pUNumCoeffs;       // Number of nonzero coeffs per 4x4 block
    T_NumCoeffs *pVNumCoeffs;       // Number of nonzero coeffs per 4x4 block

    T_RLE_Data Block_RLE[27];           // [0-15] Luma, [16-23] Chroma, [24-25] Chroma DC, [26] Luma DC
    T_Block_CABAC_Data Block_CABAC[27]; // [0-15] Luma, [16-23] Chroma, [24-25] Chroma DC, [26] Luma DC

    T_EncodeMBData  *pMBData;    // current into here
    T_EncodeMBOffsets   *pMBOffsets;

    SliceData *pSliceDataBase;      // Points to first element of the per slice data struct

    Ipp32u  uWidth;
    Ipp32u  uHeight;
    Ipp32u  uPitch;                 // pitch in frame picture structure.
    Ipp32u  fPitch;                 // pitch in the current picture structure (frame or field)
    Ipp32u  y_line_shift;           // line shift for the first line of the picture. It is needed for bottom fields.
    Ipp32u  uv_line_shift;          // line shift for the first line of the picture. It is needed for bottom fields.
//    Ipp32u  uWidthInMBs;
//    Ipp32u  uHeightInMBs;
//    Ipp32u  uWidthIn4x4Blocks;
//    Ipp32u  uHeightIn4x4Blocks;

    // time reference for direct B
    //Ipp32s m_DistScaleFactor;   // Used for Direct Mode and BiPred Scaling

    bool use_implicit_weighted_bipred;

    Ipp8u*  pMBEncodeBuffer;            // temp work buffer

    Ipp16u uNumSlices;                  // Number of Slices in the Frame
    Ipp32u uSliceLength;                // Number of Macroblocks in each slice
                                        // Except the last slice which may have
                                        // more macroblocks.
    Ipp32u uSliceRemainder;             // Number of extra macroblocks in last slice.
                                        // Equals 0 if the last slice has exactly
                                        // uSliceLength blocks.

    Ipp32u uMBxpos;                 // current MB luma x offset
    Ipp32u uMBypos;                 // current MB luma y offset

    Ipp32u block_positions[16];

#ifdef TIMING_DETAIL
    Ipp32u uIntegerSearchCounter;       // these are incremented by MEOneMB
    Ipp32u uSubpelSearchCounter;
    Ipp32u uDirectBCounter;
#endif

    H264EncoderFrame       *m_pPreviousFrame;
    H264EncoderFrame       *m_pFutureFrame;

    //Ipp64f                 m_cumulativeTR;
    // This is the TR of the picture, relative to the very first
    // frame decoded after a call to Start_Sequence.  The Ipp64f type
    // is used to avoid wrap around problems.

    bool                m_wasEncoded;
    // When true indicates this decoded frame contains a decoded
    // frame ready for output.  The frame should not be overwritten
    // until wasOutputted is also true.

private:
    Ipp32u      m_sequence_number;
    Ipp32u      m_heightInMBs;
    Ipp32u      m_widthInMBs;

public:
    Ipp32u      GetSequenceNumber() const { return m_sequence_number; }

public:
    double           m_dFrameTime;
    Ipp8u            m_PictureStructureForRef;
    Ipp8u            m_PictureStructureForDec;
    EnumSliceType    m_SlicesType;
    EnumPicCodType   m_PicCodType;
    Ipp32s           totalMBs;

    // For type 1 calculation of m_PicOrderCnt. m_FrameNum is needed to
    // be used as previous frame num.
    Ipp32s                    m_PicNum[2];
    Ipp32s                    m_LongTermPicNum[2];
    Ipp32s                    m_FrameNum;
    Ipp32s                    m_FrameNumWrap;
    Ipp32s                    m_LongTermFrameIdx;
    Ipp32s                    m_RefPicListResetCount[2];
    Ipp32s                    m_PicOrderCnt[2];
    // Display order picture count mod MAX_PIC_ORDER_CNT.
    Ipp32s                    m_crop_left;
    Ipp32s                    m_crop_right;
    Ipp32s                    m_crop_top;
    Ipp32s                    m_crop_bottom;
    Ipp8s                     m_crop_flag;

    bool                      m_isShortTermRef[2];
    bool                      m_isLongTermRef[2];

    // MB work buffer, allocated buffer pointer for freeing
    Ipp8u*  m_pAllocatedMBEncodeBuffer;
    // motion vector buffer, allocated pointer for freeing
    Ipp8u* m_pAllocatedMVBuffer;
    // reference index buffer, allocated pointer for freeing
    Ipp8u* m_pAllocatedRefIdxBuffer;
    // advanced intra mode buffer, allocated pointer for freeing
    Ipp8u* m_pAllocatedAIBuffer;
    // NumCoeffs buffer, allocated pointer for freeing
    Ipp8u* m_pAllocatedNCBuffer;
    // MB data info buffer, allocated pointer for freeing
    Ipp8u* m_pAllocatedMBDataBuffer;
    // Per Slice database, allocated pointer for freeing
    Ipp8u* m_pAllocatedSliceDataBase;

    //H264DecoderGlobalMacroblocksDescriptor m_mbinfo; //Global MB Data
    //Ipp32u              m_FrameNum;            // Decode order frame label, from slice header

    Ipp8u               m_PQUANT;              // Picture QP (from first slice header)
    Ipp8u               m_PQUANT_S;            // Picture QS (from first slice header)
    sDimensions         m_dimensions;
    Ipp8u               m_bottom_field_flag[2];

    // The above variables are used for management of reference frames
    // on reference picture lists maintained in m_RefPicList. They are
    // updated as reference picture management information is decoded
    // from the bitstream. The picture and frame number variables determine
    // reference picture ordering on the lists.

    H264EncoderFrame();
    H264EncoderFrame(
        Ipp32u              pageoffset,
        Ipp32u              width,
        Ipp32u              height,
        Ipp32u              wrap_around,
        Ipp32s              num_slices,
        Status&             ctor_status);
    virtual            ~H264EncoderFrame();

    void    SetPicCodType(EnumPicCodType pic_type) {m_PicCodType = pic_type;}

    virtual Status  allocate(const sDimensions &lumaSize, Ipp32s);
    // This allocate method first clears our state, and then
    // calls H264EncYUVWorkSpace::allocate.
    // An existing buffer, if any, is not reallocated if it
    // is already large enough.

    // The following methods provide access to the H264Decoder's doubly
    // linked list of H264EncoderFrames.  Note that m_pPreviousFrame can
    // be non-NULL even for an I frame.
    H264EncoderFrame       *previous() { return m_pPreviousFrame; }
    H264EncoderFrame       *future()   { return m_pFutureFrame; }

    void                setPrevious(H264EncoderFrame *pPrev)
    {
        m_pPreviousFrame = pPrev;
    }

    void                setFuture(H264EncoderFrame *pFut)
    {
        m_pFutureFrame = pFut;
    }

    bool        wasEncoded()      { return m_wasEncoded; }
    void        setWasEncoded()   { m_wasEncoded = true; }
    void        unsetWasEncoded() { m_wasEncoded = false; }

    bool        isDisposable()
    {
        return (!m_isShortTermRef[0] &&
                !m_isShortTermRef[1] &&
                !m_isLongTermRef[0] &&
                !m_isLongTermRef[1] &&
                m_wasEncoded );
    }
    // A decoded frame can be "disposed" if it is not an active reference
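
The excerpt above (page 1 of the header) mainly documents how the padded YUV buffer classes are meant to be used: allocate() reuses an existing buffer when it is large enough, plane pointers and pitch() are valid only after allocation, expand() replicates edge pixels into the padding, and conditionalDeallocate() frees the buffer only when the dimensions change. The following is a minimal usage sketch, not part of the original package. It assumes that sDimensions is constructible from a (width, height) pair, as suggested by the sDimensions(0,0) initializer in clearState(), that UMC_OK is the success value of Status in the UMC framework, and that the unnamed second Ipp32s argument of allocate() is the slice count.

// Hypothetical usage sketch for the classes declared in umc_h264_enc_cpb.h.
#include "umc_h264_enc_cpb.h"

using namespace UMC;

void ExampleUseOfWorkSpace()
{
    H264EncYUVWorkSpace work;

    // allocate() rounds the luma dimensions up to whole macroblocks and
    // reuses an existing buffer when it is already large enough.
    sDimensions qcif(176, 144);                  // assumed (width, height) constructor
    if (work.allocate(qcif, 1 /* assumed: number of slices */) != UMC_OK)
        return;

    // Plane pointers and pitch are only meaningful after allocate().
    Ipp8u  *pY    = work.m_pYPlane;
    Ipp32u  pitch = work.pitch();
    (void)pY; (void)pitch;                       // ...fill the planes with source pixels here...

    // Once the planes hold picture data, replicate the edge pixels into the
    // surrounding padding; isExpanded() records that this has been done.
    if (!work.isExpanded())
        work.expand(false /* frame, not field */, 0 /* not a bottom field */);

    // conditionalDeallocate() frees the buffer only if the requested
    // dimensions differ from the current ones, so passing the same size
    // keeps the allocation for the next frame.
    work.conditionalDeallocate(qcif);
}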
