The libMPcodecs API details, hints  -  by A'rpi
===============================================

See also: colorspaces.txt, codec-devel.txt, dr-methods.txt, codecs.conf.txt

The VIDEO path:
===============

 [MPlayer core]
       |
       | (1)
  _____V______   (2)   /~~~~~~~~~~\  (3,4)   |~~~~~~|
 |            | -----> | vd_XXX.c | -------> | vd.c |
 | decvideo   |        \__________/ <-(3a)-- |______|
 |            | -----,      ,.............(3a,4a)...:
  ~~~~~~~~~~~~   (6) V      V
                   /~~~~~~~~\     /~~~~~~~~\  (8)
                   | vf_X.c | --> | vf_Y.c | ----> vf_vo.c / ve_XXX.c
                   \________/     \________/
                       |  ^ (7)
                       |  |~~~~~~|   :
                  (7a) `->| vf.c |...:
                          |______|

Short description of the video path:

1. The MPlayer/MEncoder core requests the decoding of a compressed video
   frame: it calls decvideo.c::decode_video().

2. decode_video() calls the previously (in init_video()) selected video
   codec (the vd_XXX.c file, where XXX == vfm name, see the 'driver' line
   of codecs.conf).

3. The codec should initialize the output device before decoding the first
   frame; this may happen in init() or in the middle of the first decode(),
   see 3.a. It means calling vd.c::mpcodecs_config_vo() with the image
   dimensions and the _preferred_ (meaning: internal, native, best)
   colorspace.
   NOTE: this colorspace may not be equal to the actually used colorspace,
   it is just a _hint_ for the csp matching algorithm, and is mainly used
   _only_ when csp conversion is required, as the input format of the
   converter.

3.a. Selecting the best output colorspace:
   The vd.c::mpcodecs_config_vo() function goes through the outfmt list
   defined by codecs.conf's 'out' lines, and queries both the vd (codec)
   and the vo (output device/filter/encoder) whether each format is
   supported or not.
   For the vo, it calls the query_format() function of vf_XXX.c or
   ve_XXX.c. It should return a set of feature flags; the most important
   ones for this stage are: VFCAP_CSP_SUPPORTED (csp supported directly or
   by conversion) and VFCAP_CSP_SUPPORTED_BY_HW (csp supported WITHOUT any
   conversion).
   For the vd (codec), control() with VDCTRL_QUERY_FORMAT will be called.
   If it doesn't implement VDCTRL_QUERY_FORMAT (i.e. answers CONTROL_UNKNOWN
   or CONTROL_NA), the answer is assumed to be CONTROL_TRUE (csp supported)!
   So, by default, if the list of supported colorspaces is constant and
   doesn't depend on the actual file's/stream's header, it's enough to list
   them in codecs.conf (the 'out' field) and not implement
   VDCTRL_QUERY_FORMAT. This is the case for most codecs.
   If the supported csp list depends on the file being decoded, list the
   possible out formats (colorspaces) in codecs.conf, and implement
   VDCTRL_QUERY_FORMAT to test the availability of the given csp for the
   given video file/stream.
   The vd.c core will find the best matching colorspace, depending on the
   VFCAP_CSP_SUPPORTED_BY_HW flag (see vfcap.h). If there is no match at
   all, it will try again with the 'scale' filter inserted between vd and
   vo. If there is still no match, it will fail :(

4. Requesting a buffer for the decoded frame:
   The codec has to call mpcodecs_get_image() with the proper imgtype &
   imgflags. It will find the optimal buffering setup (preferred stride,
   alignment etc.) and return a pointer to the allocated and filled-in mpi
   (mp_image_t*) struct.
   The 'imgtype' controls the buffering setup, i.e. STATIC (just one
   buffer, it 'remembers' its contents between frames), TEMP (write-only,
   full update), EXPORT (memory allocation is done by the codec, not
   recommended) and so on.
   The 'imgflags' set up the limits for the buffer, i.e. stride
   limitations, readability, remembering content etc. See mp_image.h for a
   short description, and dr-methods.txt for the explanation of buffer
   importing and the mpi imgtypes.
   Always try to implement stride support! (stride == bytes per line)
   Without stride support, stride == bytes_per_pixel*image_width. If you
   have stride support in your decoder, use the mpi->stride[] value as the
   bytes-per-line of each plane.
   Also take care of the other imgflags, like MP_IMGFLAG_PRESERVE,
   MP_IMGFLAG_READABLE, MP_IMGFLAG_COMMON_STRIDE and MP_IMGFLAG_COMMON_PLANE!
   The file mp_image.h contains the flag descriptions in comments, read it!
   Ask for help on dev-eng, describing the behaviour of your codec, if
   unsure.

4.a. Buffer allocation, vd.c::mpcodecs_get_image():
   If the requested buffer imgtype != EXPORT, then vd.c will try to do
   direct rendering, i.e. it asks the next filter/vo for the buffer
   allocation. This is done by calling get_image() of the vf_XXX.c file.
   If it was successful, the imgflag MP_IMGFLAG_DIRECT will be set, and one
   memcpy() is saved when passing the data from vd to the next filter/vo.
   See dr-methods.txt for details and examples.

5. Decode the frame into the mpi structure requested in 4., then return
   the mpi to decvideo.c. Return NULL if decoding failed or the frame was
   skipped. (A minimal sketch of steps 3-5 follows this list.)

6. decvideo.c::decode_video() now passes the 'mpi' to the next filter
   (vf_X).

7. The filter's (vf_X) put_image() then requests a new mpi buffer by
   calling vf.c::vf_get_image().

7.a. vf.c::vf_get_image() tries to get direct rendering by asking the next
   filter to do the buffer allocation (it calls vf_Y's get_image()). If
   that fails, it falls back on normal system memory allocation.

8. Once we are past the whole filter chain (multiple filters can be
   connected, even the same filter multiple times), the last, 'leaf'
   filters are called. The only difference between leaf and non-leaf
   filters is that leaf filters have to implement the whole filter API.
   Currently the leaf filters are: vf_vo.c (wrapper over libvo) and
   ve_XXX.c (the video encoders used by MEncoder).
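To make steps 3-5 concrete, here is a rough sketch (not part of the
original text) of how a hypothetical vd_foo.c could drive the calls above.
foo_ctx_t and foo_decode() are invented placeholders for the real codec's
context and decoding routine, YV12 output is assumed, and the usual vd
module boilerplate (the vd_info_t block, control(), LIBVD_EXTERN) is
omitted:

#include <stdlib.h>
#include "config.h"
#include "mp_msg.h"
#include "vd_internal.h"   // pulls in vd.h, mp_image.h, img_format.h, stheader.h

// hypothetical per-instance state of our imaginary 'foo' codec:
typedef struct {
    int configured;
} foo_ctx_t;

static int init(sh_video_t *sh){
    sh->context = calloc(1, sizeof(foo_ctx_t));
    return 1;                       // a real codec would set up its decoder here
}

static void uninit(sh_video_t *sh){
    free(sh->context);
}

static mp_image_t* decode(sh_video_t *sh, void* data, int len, int flags){
    foo_ctx_t *ctx = sh->context;
    mp_image_t *mpi;

    if(len <= 0) return NULL;       // skipped frame

    // (3)/(3.a): configure the filter/vo chain once the real dimensions are
    // known, passing the codec's preferred (native) colorspace as a hint:
    if(!ctx->configured){
        if(!mpcodecs_config_vo(sh, sh->disp_w, sh->disp_h, IMGFMT_YV12))
            return NULL;
        ctx->configured = 1;
    }

    // (4): ask vd.c for a destination buffer (may be direct-rendered, see 4.a):
    mpi = mpcodecs_get_image(sh, MP_IMGTYPE_TEMP, MP_IMGFLAG_ACCEPT_STRIDE,
                             sh->disp_w, sh->disp_h);
    if(!mpi) return NULL;

    // (5): decode into mpi->planes[], honouring mpi->stride[] as bytes per
    // line, then hand the image back to decvideo.c:
    // foo_decode(data, len, mpi->planes, mpi->stride);   // codec-specific call
    return mpi;
}

A real vd module would also provide the vd_info_t block and a control()
handler (at least for VDCTRL_QUERY_FORMAT, if its output formats depend on
the stream, as described in 3.a).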
Video Filters
=============

Video filters are plugin-like code modules implementing the interface
defined in vf.h.

Basically it means video output manipulation, i.e. these plugins can
modify the image and the image properties (size, colorspace, etc.) between
the video decoders (vd.h) and the output layer (libvo or video encoders).
The actual API is a mixture of the video decoder (vd.h) and libvo
(video_out.h) APIs.

Main differences:
- vf plugins may be "loaded" multiple times, with different parameters
  and context - it's new in MPlayer, old APIs weren't reentrant.
- vf plugins don't have to implement all functions - all functions have a
  'fallback' version, so the plugins only override these if wanted.
- Each vf plugin has its own get_image context, and they can interchange
  images/buffers using these get_image/put_image calls.
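As a quick illustration of that last point (a sketch, not part of the
original text): a pass-through filter's put_image() asks the next filter
for a destination buffer with vf_get_image() and then hands the result on
with vf_next_put_image(). The two-argument put_image()/vf_next_put_image()
form described here is assumed (newer MPlayer trees add a pts argument),
and YV12 input is assumed for the plane sizes:

#include <string.h>   // memcpy(); a real filter also includes vf.h, mp_image.h etc.

static int put_image(struct vf_instance_s* vf, mp_image_t *mpi){
    mp_image_t *dmpi;
    int p, y;

    // ask the next filter (or vf_vo.c / ve_XXX.c) for a destination buffer:
    dmpi = vf_get_image(vf->next, mpi->imgfmt,
                        MP_IMGTYPE_TEMP, MP_IMGFLAG_ACCEPT_STRIDE,
                        mpi->w, mpi->h);

    // "process" the image: here just a stride-aware copy of the 3 planes
    // (YV12 assumed: chroma planes are half width / half height):
    for(p = 0; p < 3; p++){
        int w = (p == 0) ? mpi->w : mpi->w >> 1;
        int h = (p == 0) ? mpi->h : mpi->h >> 1;
        for(y = 0; y < h; y++)
            memcpy(dmpi->planes[p] + y * dmpi->stride[p],
                    mpi->planes[p] + y *  mpi->stride[p], w);
    }

    // pass the result on to the next filter in the chain:
    return vf_next_put_image(vf, dmpi);
}

A real filter would do its actual processing between the two calls; the
plain memcpy() loop only demonstrates the stride-aware access pattern
stressed in step 4 above.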
The VIDEO FILTER API:
=====================

filename: vf_FILTERNAME.c

vf_info_t* info;
    pointer to the filter description structure:
      const char *info;     // description of the filter
      const char *name;     // short name of the filter, must be FILTERNAME
      const char *author;   // name and email/url of the author(s)
      const char *comment;  // comment, url to papers describing the algorithm etc.
      int (*open)(struct vf_instance_s* vf, char* args);
                            // pointer to the open() function

Sample:

vf_info_t vf_info_foobar = {
    "Universal Foo and Bar filter",
    "foobar",
    "Ms. Foo Bar",
    "based on the algorithm described at http://www.foo-bar.org",
    open
};

The open() function:
open() is called when the filter is appended/inserted into the filter
chain. It receives the handler (vf) and the optional filter parameters as
a char* string. Note that the encoders (ve_*) and the vo wrapper (vf_vo.c)
get a non-string argument, but that case is handled specially by
MPlayer/MEncoder.
The open() function should fill the vf_instance_t structure with the
pointers of the implemented functions (see below). It can optionally
allocate memory for its internal data (vf_priv_t) and store the pointer in
vf->priv. open() should parse (or at least check the syntax of) the
parameters, and fail (return 0) on error.

Sample:

static int open(vf_instance_t *vf, char* args){
    vf->query_format=query_format;
    vf->config=config;
    vf->put_image=put_image;
    // allocate local storage:
    vf->priv=malloc(sizeof(struct vf_priv_s));
    vf->priv->w=
    vf->priv->h=-1;
    if(args) // parse args:
        if(sscanf(args, "%d:%d", &vf->priv->w, &vf->priv->h)!=2) return 0;
    return 1;
}
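The open() sample above registers query_format(), config() and put_image().
As a rough sketch (not from the original text, assuming a filter that only
handles planar YUV 4:2:0 and changes neither size nor colorspace), the
query/config pair can simply forward the negotiation to the next filter via
vf_next_query_format() and vf_next_config():

static int query_format(struct vf_instance_s* vf, unsigned int fmt){
    switch(fmt){
    case IMGFMT_YV12:
    case IMGFMT_I420:
    case IMGFMT_IYUV:
        // supported only if the rest of the chain accepts it too;
        // returning 0 for other formats lets vf.c insert 'scale' (see 3.a):
        return vf_next_query_format(vf, fmt);
    }
    return 0;
}

static int config(struct vf_instance_s* vf,
                  int width, int height, int d_width, int d_height,
                  unsigned int flags, unsigned int outfmt){
    // a filter that keeps size and colorspace unchanged just propagates
    // the negotiated parameters down the chain:
    return vf_next_config(vf, width, height, d_width, d_height, flags, outfmt);
}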