```c
typedef struct VideoPicture {
  SDL_Overlay *bmp;
  int width, height; /* source height & width */
  int allocated;
} VideoPicture;
```

Our big struct has a buffer of these in it where we can store them. However, we need to allocate the SDL_Overlay ourselves (notice the `allocated` flag that will indicate whether we have done so or not).

To use this queue, we have two pointers - the writing index and the reading index. We also keep track of how many actual pictures are in the buffer. To write to the queue, we're going to first wait for our buffer to clear out so we have space to store our VideoPicture. Then we check and see if we have already allocated the overlay at our writing index. If not, we'll have to allocate some space. We also have to reallocate the buffer if the size of the window has changed! However, instead of allocating the overlay here, we ask the main thread to do it for us, to avoid locking issues. (I'm still not quite sure why; I believe it's to avoid calling the SDL overlay functions in different threads.)

```c
int queue_picture(VideoState *is, AVFrame *pFrame) {

  VideoPicture *vp;
  int dst_pix_fmt;
  AVPicture pict;

  /* wait until we have space for a new pic */
  SDL_LockMutex(is->pictq_mutex);
  while(is->pictq_size >= VIDEO_PICTURE_QUEUE_SIZE &&
        !is->quit) {
    SDL_CondWait(is->pictq_cond, is->pictq_mutex);
  }
  SDL_UnlockMutex(is->pictq_mutex);

  if(is->quit)
    return -1;

  // windex is set to 0 initially
  vp = &is->pictq[is->pictq_windex];

  /* allocate or resize the buffer! */
  if(!vp->bmp ||
     vp->width != is->video_st->codec->width ||
     vp->height != is->video_st->codec->height) {
    SDL_Event event;

    vp->allocated = 0;
    /* we have to do it in the main thread */
    event.type = FF_ALLOC_EVENT;
    event.user.data1 = is;
    SDL_PushEvent(&event);

    /* wait until we have a picture allocated */
    SDL_LockMutex(is->pictq_mutex);
    while(!vp->allocated && !is->quit) {
      SDL_CondWait(is->pictq_cond, is->pictq_mutex);
    }
    SDL_UnlockMutex(is->pictq_mutex);
    if(is->quit) {
      return -1;
    }
  }
```

The event mechanism here is the same one we saw earlier when we wanted to quit. We've defined FF_ALLOC_EVENT as SDL_USEREVENT. We push the event and then wait on the condition variable for the allocation function to run.

Let's look at how we change our event loop:

```c
for(;;) {
  SDL_WaitEvent(&event);
  switch(event.type) {
  /* ... */
  case FF_ALLOC_EVENT:
    alloc_picture(event.user.data1);
    break;
```

Remember that event.user.data1 is our big struct. That was simple enough. Let's look at the alloc_picture() function:

```c
void alloc_picture(void *userdata) {

  VideoState *is = (VideoState *)userdata;
  VideoPicture *vp;

  vp = &is->pictq[is->pictq_windex];
  if(vp->bmp) {
    // we already have one, make another, bigger/smaller
    SDL_FreeYUVOverlay(vp->bmp);
  }
  // Allocate a place to put our YUV image on that screen
  vp->bmp = SDL_CreateYUVOverlay(is->video_st->codec->width,
                                 is->video_st->codec->height,
                                 SDL_YV12_OVERLAY,
                                 screen);
  vp->width = is->video_st->codec->width;
  vp->height = is->video_st->codec->height;

  SDL_LockMutex(is->pictq_mutex);
  vp->allocated = 1;
  SDL_CondSignal(is->pictq_cond);
  SDL_UnlockMutex(is->pictq_mutex);
}
```

You should recognize the SDL_CreateYUVOverlay function that we've moved from our main loop to this section. This code should be fairly self-explanatory by now. Remember that we save the width and height in the VideoPicture structure because we need to make sure that our video size doesn't change for some reason.
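For reference, here is a minimal sketch of the queue-related pieces this code assumes live in our big VideoState struct, plus the user-event defines mentioned above. The field names match the ones used in the code; the queue size shown is only illustrative:

```c
#define VIDEO_PICTURE_QUEUE_SIZE 1            /* illustrative value; a small ring buffer */
#define FF_ALLOC_EVENT   (SDL_USEREVENT)      /* as described above */
#define FF_REFRESH_EVENT (SDL_USEREVENT + 1)  /* used in the next section */

typedef struct VideoState {
  /* ... format context, streams, audio queue, thread handles, etc. ... */
  VideoPicture pictq[VIDEO_PICTURE_QUEUE_SIZE]; /* the picture ring buffer */
  int          pictq_size;                      /* how many pictures are queued right now */
  int          pictq_rindex;                    /* reading index */
  int          pictq_windex;                    /* writing index */
  SDL_mutex    *pictq_mutex;                    /* guards pictq_size and the allocated flags */
  SDL_cond     *pictq_cond;                     /* signaled when space or data becomes available */
  int          quit;
} VideoState;
```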
Okay, we're all settled and we have our YUV overlay allocated and ready to receive a picture. Let's go back to queue_picture and look at the code to copy the frame into the overlay. You should recognize that part of it:

```c
int queue_picture(VideoState *is, AVFrame *pFrame) {

  /* Allocate a frame if we need it... */
  /* ... */
  /* We have a place to put our picture on the queue */

  if(vp->bmp) {

    SDL_LockYUVOverlay(vp->bmp);

    dst_pix_fmt = PIX_FMT_YUV420P;
    /* point pict at the queue */

    pict.data[0] = vp->bmp->pixels[0];
    pict.data[1] = vp->bmp->pixels[2];
    pict.data[2] = vp->bmp->pixels[1];

    pict.linesize[0] = vp->bmp->pitches[0];
    pict.linesize[1] = vp->bmp->pitches[2];
    pict.linesize[2] = vp->bmp->pitches[1];

    // Convert the image into YUV format that SDL uses
    img_convert(&pict, dst_pix_fmt,
                (AVPicture *)pFrame, is->video_st->codec->pix_fmt,
                is->video_st->codec->width, is->video_st->codec->height);

    SDL_UnlockYUVOverlay(vp->bmp);
    /* now we inform our display thread that we have a pic ready */
    if(++is->pictq_windex == VIDEO_PICTURE_QUEUE_SIZE) {
      is->pictq_windex = 0;
    }
    SDL_LockMutex(is->pictq_mutex);
    is->pictq_size++;
    SDL_UnlockMutex(is->pictq_mutex);
  }
  return 0;
}
```

The majority of this part is simply the code we used earlier to fill the YUV overlay with our frame. The last bit is simply "adding" our value onto the queue. The queue works by adding onto it until it is full, and reading from it as long as there is something on it. Therefore everything depends upon the is->pictq_size value, requiring us to lock it. So what we do here is increment the write pointer (rolling it over if necessary), then lock the queue and increase its size. Now our reader will know there is more information on the queue, and if this makes our queue full, our writer will know about it.
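A side note for anyone building against a newer FFmpeg: img_convert() was removed from the library long ago, and the same conversion is now done with sws_scale() from libswscale. Below is a rough sketch of the replacement, assuming you create the SwsContext once (for example next to the overlay allocation, as a hypothetical extra field on VideoState) and reuse it for every frame:

```c
#include <libswscale/swscale.h>

/* Created once, e.g. when the overlay is (re)allocated. */
struct SwsContext *sws_ctx =
    sws_getContext(is->video_st->codec->width, is->video_st->codec->height,
                   is->video_st->codec->pix_fmt,
                   is->video_st->codec->width, is->video_st->codec->height,
                   PIX_FMT_YUV420P,
                   SWS_BILINEAR, NULL, NULL, NULL);

/* Then, in queue_picture(), in place of the img_convert() call: */
sws_scale(sws_ctx,
          (const uint8_t * const *)pFrame->data, pFrame->linesize,
          0, is->video_st->codec->height,
          pict.data, pict.linesize);
```

This is only a sketch; error checking and releasing the context with sws_freeContext() are left out.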
### Displaying the Video

That's it for our video thread! Now we've wrapped up all the loose threads except for one -- remember that we called the schedule_refresh() function way back? Let's see what that actually did:

```c
/* schedule a video refresh in 'delay' ms */
static void schedule_refresh(VideoState *is, int delay) {
  SDL_AddTimer(delay, sdl_refresh_timer_cb, is);
}
```

SDL_AddTimer() is an SDL function that simply makes a callback to the user-specified function after a certain number of milliseconds (and optionally carries some user data along). We're going to use this function to schedule video updates - every time we call this function, it will set the timer, which will trigger an event, which will have our main() function in turn call a function that pulls a frame from our picture queue and displays it! Phew!

But first things first. Let's trigger that event. That sends us over to:

```c
static Uint32 sdl_refresh_timer_cb(Uint32 interval, void *opaque) {
  SDL_Event event;
  event.type = FF_REFRESH_EVENT;
  event.user.data1 = opaque;
  SDL_PushEvent(&event);
  return 0; /* 0 means stop timer */
}
```

Here is the now-familiar event push. FF_REFRESH_EVENT is defined here as SDL_USEREVENT + 1. One thing to notice is that when we return 0, SDL stops the timer so the callback is not made again.

Now that we've pushed an FF_REFRESH_EVENT, we need to handle it in our event loop:

```c
for(;;) {
  SDL_WaitEvent(&event);
  switch(event.type) {
  /* ... */
  case FF_REFRESH_EVENT:
    video_refresh_timer(event.user.data1);
    break;
```

and _that_ sends us to this function, which will actually pull the data from our picture queue:

```c
void video_refresh_timer(void *userdata) {

  VideoState *is = (VideoState *)userdata;
  VideoPicture *vp;

  if(is->video_st) {
    if(is->pictq_size == 0) {
      schedule_refresh(is, 1);
    } else {
      vp = &is->pictq[is->pictq_rindex];
      /* Timing code goes here */

      schedule_refresh(is, 80);

      /* show the picture! */
      video_display(is);

      /* update queue for next picture! */
      if(++is->pictq_rindex == VIDEO_PICTURE_QUEUE_SIZE) {
        is->pictq_rindex = 0;
      }
      SDL_LockMutex(is->pictq_mutex);
      is->pictq_size--;
      SDL_CondSignal(is->pictq_cond);
      SDL_UnlockMutex(is->pictq_mutex);
    }
  } else {
    schedule_refresh(is, 100);
  }
}
```

For now, this is a pretty simple function: it pulls from the queue when we have something, sets our timer for when the next video frame should be shown, calls video_display to actually show the video on the screen, then advances the read index on the queue and decreases its size. You may notice that we don't actually do anything with vp in this function, and here's why: we will. Later. We're going to use it to access timing information when we start syncing the video to the audio. See where it says "Timing code goes here"? In that section, we're going to figure out how soon we should show the next video frame, and then input that value into the schedule_refresh() function. For now we're just putting in a dummy value of 80. Technically, you could guess and check this value and recompile it for every movie you watch, but 1) it would drift after a while and 2) it's quite silly. We'll come back to it later, though.

We're almost done; we just have one last thing to do: display the video! Here's that video_display function:

```c
void video_display(VideoState *is) {

  SDL_Rect rect;
  VideoPicture *vp;
  AVPicture pict;
  float aspect_ratio;
  int w, h, x, y;
  int i;

  vp = &is->pictq[is->pictq_rindex];
  if(vp->bmp) {
    if(is->video_st->codec->sample_aspect_ratio.num == 0) {
      aspect_ratio = 0;
    } else {
      aspect_ratio = av_q2d(is->video_st->codec->sample_aspect_ratio) *
        is->video_st->codec->width / is->video_st->codec->height;
    }
    if(aspect_ratio <= 0.0) {
      aspect_ratio = (float)is->video_st->codec->width /
        (float)is->video_st->codec->height;
    }
    h = screen->h;
    w = ((int)rint(h * aspect_ratio)) & -3;
    if(w > screen->w) {
      w = screen->w;
      h = ((int)rint(w / aspect_ratio)) & -3;
    }
    x = (screen->w - w) / 2;
    y = (screen->h - h) / 2;

    rect.x = x;
    rect.y = y;
    rect.w = w;
    rect.h = h;
    SDL_DisplayYUVOverlay(vp->bmp, &rect);
  }
}
```

Since our screen can be of any size (we set ours to 640x480 and there are ways to set it so it is resizable by the user), we need to dynamically figure out how big we want our movie rectangle to be. So first we need to figure out our movie's **aspect ratio**, which is just the width divided by the height. Some codecs will have an odd **sample aspect ratio**, which is simply the width/height ratio of a single pixel, or sample. Since the height and width values in our codec context are measured in pixels, the actual display aspect ratio is equal to the width/height ratio times the sample aspect ratio. Some codecs will report a sample aspect ratio of 0, and this indicates that each pixel is simply of size 1x1. Then we scale the movie to fit our screen as large as we can. The `& -3` bit-twiddling is there to round the value down toward a multiple of 4 (strictly speaking, `& -4` is what rounds down to a multiple of 4; `& -3` only clears bit 1). Then we center the movie and call SDL_DisplayYUVOverlay().

So is that it? Are we done? Well, we still have to rewrite the audio code to use the new VideoState struct, but those are trivial changes, and you can look at those in the sample code. The last thing we have to do is to change our callback for ffmpeg's internal "quit" callback function:

```c
VideoState *global_video_state;

int decode_interrupt_cb(void) {
  return (global_video_state && global_video_state->quit);
}
```

We set global_video_state to the big struct in main().
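For context, main() also has to hand this callback to FFmpeg. With the FFmpeg of this tutorial's era that registration looked roughly like the sketch below (the url_set_interrupt_cb() call is an assumption based on the old libavformat API; modern FFmpeg instead uses the interrupt_callback field of AVFormatContext):

```c
/* somewhere early in main(), after the big struct is created */
global_video_state = is;

/* old-style global interrupt callback; replaced in modern FFmpeg
   by AVFormatContext.interrupt_callback */
url_set_interrupt_cb(decode_interrupt_cb);
```

This way, a blocking read inside FFmpeg can bail out as soon as we set is->quit.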
So that's it! Go ahead and compile it:

```
gcc -o tutorial04 tutorial04.c -lavutil -lavformat -lavcodec -lz -lm \
`sdl-config --cflags --libs`
```

and enjoy your unsynced movie! Next time we'll finally build a video player that actually works!

_**>>** Syncing Video_

* * *

This work is licensed under the Creative Commons Attribution-Share Alike 2.5 License. To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/2.5/ or send a letter to Creative Commons, 543 Howard Street, 5th Floor, San Francisco, California, 94105, USA. Code examples are based off of FFplay, Copyright (c) 2003 Fabrice Bellard, and a tutorial by Martin Bohme.