## An ffmpeg and SDL Tutorial

## Tutorial 04: Spawning Threads

Code: tutorial04.c

### Overview

Last time we added audio support by taking advantage of SDL's audio functions. SDL started a thread that made callbacks to a function we defined every time it needed audio. Now we're going to do the same sort of thing with the video display. This makes the code more modular and easier to work with, especially when we want to add syncing.

So where do we start? First we notice that our main function is handling an awful lot: it's running through the event loop, reading in packets, and decoding the video. So what we're going to do is split all those apart: we're going to have a thread that will be responsible for decoding the packets; these packets will then be added to the queue and read by the corresponding audio and video threads. The audio thread we have already set up the way we want it; the video thread will be a little more complicated, since we have to display the video ourselves. We will add the actual display code to the main loop. But instead of just displaying video every time we loop, we will integrate the video display into the event loop. The idea is to decode the video, save the resulting frame in _another_ queue, then create a custom event (FF_REFRESH_EVENT) that we add to the event system; when our event loop sees this event, it will display the next frame in the queue. Here's a handy ASCII art illustration of what is going on:

```
 ________ audio  _______      _____
|        | pkts |       |    |     | to spkr
| DECODE |----->| AUDIO |--->| SDL |-->
|________|      |_______|    |_____|
    |  video     _______
    |   pkts    |       |
    +---------->| VIDEO |
 ________       |_______|   _______
|       |          |       |       |
| EVENT |          +------>| VIDEO | to mon.
| LOOP  |----------------->| DISP. |-->
|_______|<---FF_REFRESH----|_______|
```

The main reason for controlling the video display via the event loop is that, using an SDL_Delay thread, we can control exactly when the next video frame shows up on the screen. When we finally sync the video in the next tutorial, it will be a simple matter to add the code that schedules the next video refresh, so the right picture is shown on the screen at the right time.

### Simplifying Code

We're also going to clean up the code a bit. We have all this audio and video codec information, and we're going to be adding queues and buffers and who knows what else. All this stuff is for one logical unit, _viz._ the movie. So we're going to make a large struct, called VideoState, that will hold all that information:

```c
typedef struct VideoState {

  AVFormatContext *pFormatCtx;
  int             videoStream, audioStream;
  AVStream        *audio_st;
  PacketQueue     audioq;
  uint8_t         audio_buf[(AVCODEC_MAX_AUDIO_FRAME_SIZE * 3) / 2];
  unsigned int    audio_buf_size;
  unsigned int    audio_buf_index;
  AVPacket        audio_pkt;
  uint8_t         *audio_pkt_data;
  int             audio_pkt_size;
  AVStream        *video_st;
  PacketQueue     videoq;

  VideoPicture    pictq[VIDEO_PICTURE_QUEUE_SIZE];
  int             pictq_size, pictq_rindex, pictq_windex;
  SDL_mutex       *pictq_mutex;
  SDL_cond        *pictq_cond;

  SDL_Thread      *parse_tid;
  SDL_Thread      *video_tid;

  char            filename[1024];
  int             quit;
} VideoState;
```

Here we see a glimpse of what we're going to get to. First we see the basic information: the format context, the indices of the audio and video streams, and the corresponding AVStream objects. Then we can see that we've moved some of those audio buffers into this structure. These (audio_buf, audio_buf_size, etc.)
were all for information about audio that was still lying around (or the lack thereof). We've added another queue for the video, and a buffer (which will be used as a queue; we don't need any fancy queueing stuff for this) for the decoded frames (saved as an overlay). The VideoPicture struct is of our own creation (we'll see what's in it when we come to it). We also notice that we've allocated pointers for the two extra threads we will create, plus the quit flag and the filename of the movie.

So now we take it all the way back to the main function to see how this changes our program. Let's set up our VideoState struct:

```c
int main(int argc, char *argv[]) {

  SDL_Event       event;

  VideoState      *is;

  is = av_mallocz(sizeof(VideoState));
```

av_mallocz() is a nice function that will allocate memory for us and zero it out.

Then we'll initialize our locks for the display buffer (pictq). The event loop calls our display function, which will be pulling pre-decoded frames from pictq, while at the same time our video decoder is putting information into it; we don't know who will get there first. Hopefully you recognize that this is a classic **race condition**. So we allocate the locks now, before we start any threads. Let's also copy the filename of our movie into our VideoState:

```c
pstrcpy(is->filename, sizeof(is->filename), argv[1]);

is->pictq_mutex = SDL_CreateMutex();
is->pictq_cond = SDL_CreateCond();
```

pstrcpy is a function from ffmpeg that does some extra bounds checking beyond strncpy.

### Our First Thread

Now let's finally launch our threads and get the real work done:

```c
schedule_refresh(is, 40);

is->parse_tid = SDL_CreateThread(decode_thread, is);
if(!is->parse_tid) {
  av_free(is);
  return -1;
}
```

schedule_refresh is a function we will define later. What it basically does is tell the system to push an FF_REFRESH_EVENT after the specified number of milliseconds.
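Although schedule_refresh isn't defined until later, its contract (fire a refresh event after N milliseconds) can be modeled without SDL. Here is a self-contained sketch in plain C with POSIX threads: every name in it is a stand-in for illustration, not SDL's API, and the one-slot event queue exists only to show the handshake.

```c
#include <pthread.h>
#include <unistd.h>

/* Stand-in event number; must be nonzero since 0 means "no event" below. */
#define FF_REFRESH_EVENT 1

/* A one-slot "event queue": just enough to demonstrate the handshake. */
static pthread_mutex_t evq_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  evq_cond  = PTHREAD_COND_INITIALIZER;
static int pending_event = 0;

static void push_event(int type) {
  pthread_mutex_lock(&evq_mutex);
  pending_event = type;
  pthread_cond_signal(&evq_cond);
  pthread_mutex_unlock(&evq_mutex);
}

/* The "event loop" side: block until some event has been pushed. */
static int wait_event(void) {
  pthread_mutex_lock(&evq_mutex);
  while (pending_event == 0)
    pthread_cond_wait(&evq_cond, &evq_mutex);
  int type = pending_event;
  pending_event = 0;
  pthread_mutex_unlock(&evq_mutex);
  return type;
}

/* Model of schedule_refresh(): sleep, then push the refresh event.
   The argument points at the delay in milliseconds. */
static void *refresh_timer(void *arg) {
  int delay_ms = *(int *)arg;
  usleep(delay_ms * 1000);
  push_event(FF_REFRESH_EVENT);
  return NULL;
}
```

Spawning `refresh_timer` with pthread_create and then calling `wait_event()` in a loop gives the same shape as the tutorial's design: the loop sleeps until the timer decides it is time to draw.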
This will in turn call the video refresh function when we see it in the event queue. But for now, let's look at SDL_CreateThread().

SDL_CreateThread() does just that: it spawns a new thread that has complete access to all the memory of the original process, and starts the thread running on the function we give it. It will also pass that function user-defined data. In this case, we're calling decode_thread() with our VideoState struct attached. The first half of the function has nothing new; it simply does the work of opening the file and finding the index of the audio and video streams. The only thing we do differently is save the format context in our big struct. After we've found our stream indices, we call another function that we will define, stream_component_open(). This is a pretty natural way to split things up, and since we do a lot of similar things to set up the video and audio codec, we reuse some code by making this a function.

The stream_component_open() function is where we will find our codec decoder, set up our audio options, save important information to our big struct, and launch our audio and video threads. This is also where we would insert other options, such as forcing the codec instead of autodetecting it, and so forth. Here it is:

```c
int stream_component_open(VideoState *is, int stream_index) {

  AVFormatContext *pFormatCtx = is->pFormatCtx;
  AVCodecContext *codecCtx;
  AVCodec *codec;
  SDL_AudioSpec wanted_spec, spec;

  if(stream_index < 0 || stream_index >= pFormatCtx->nb_streams) {
    return -1;
  }

  // Get a pointer to the codec context for the video stream
  codecCtx = pFormatCtx->streams[stream_index]->codec;

  if(codecCtx->codec_type == CODEC_TYPE_AUDIO) {
    // Set audio settings from codec info
    wanted_spec.freq = codecCtx->sample_rate;
    /* .... */
    wanted_spec.callback = audio_callback;
    wanted_spec.userdata = is;

    if(SDL_OpenAudio(&wanted_spec, &spec) < 0) {
      fprintf(stderr, "SDL_OpenAudio: %s\n", SDL_GetError());
      return -1;
    }
  }
  codec = avcodec_find_decoder(codecCtx->codec_id);
  if(!codec || (avcodec_open(codecCtx, codec) < 0)) {
    fprintf(stderr, "Unsupported codec!\n");
    return -1;
  }

  switch(codecCtx->codec_type) {
  case CODEC_TYPE_AUDIO:
    is->audioStream = stream_index;
    is->audio_st = pFormatCtx->streams[stream_index];
    is->audio_buf_size = 0;
    is->audio_buf_index = 0;
    memset(&is->audio_pkt, 0, sizeof(is->audio_pkt));
    packet_queue_init(&is->audioq);
    SDL_PauseAudio(0);
    break;
  case CODEC_TYPE_VIDEO:
    is->videoStream = stream_index;
    is->video_st = pFormatCtx->streams[stream_index];

    packet_queue_init(&is->videoq);
    is->video_tid = SDL_CreateThread(video_thread, is);
    break;
  default:
    break;
  }
  return 0;
}
```

This is pretty much the same as the code we had before, except now it's generalized for audio and video. Notice that instead of aCodecCtx, we've set up our big struct as the userdata for our audio callback. We've also saved the streams themselves as audio_st and video_st, and we've added our video queue and set it up the same way we set up our audio queue. Most of the point is to launch the video and audio threads. These bits do it:

```c
    SDL_PauseAudio(0);
    break;

/* ...... */

    is->video_tid = SDL_CreateThread(video_thread, is);
```

We remember SDL_PauseAudio() from last time, and SDL_CreateThread() is used in exactly the same way as before. We'll get back to our video_thread() function.
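SDL_CreateThread() hands its opaque userdata pointer straight to the thread function, which is how decode_thread() and video_thread() recover the big struct. The same pattern in self-contained, portable C with POSIX threads (the trimmed struct and the filename here are stand-ins for illustration, not the tutorial's actual code):

```c
#include <pthread.h>
#include <string.h>

/* Trimmed stand-in for the tutorial's big struct. */
typedef struct VideoState {
  char filename[1024];
  int  quit;
} VideoState;

/* Thread entry point: like the tutorial's decode_thread(), it recovers
   the shared struct from the opaque pointer it was launched with. */
static void *decode_thread(void *arg) {
  VideoState *is = (VideoState *)arg;
  /* ... open is->filename, read packets, fill queues ... */
  is->quit = 1; /* prove we touched the shared state */
  return NULL;
}
```

Launching it looks just like the SDL call, only with pthread_create: `pthread_create(&tid, NULL, decode_thread, &is)` plays the role of `SDL_CreateThread(decode_thread, is)`.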
Before that, let's go back to the second half of our decode_thread() function. It's basically just a for loop that will read in a packet and put it on the right queue:

```c
  for(;;) {
    if(is->quit) {
      break;
    }
    // seek stuff goes here
    if(is->audioq.size > MAX_AUDIOQ_SIZE ||
       is->videoq.size > MAX_VIDEOQ_SIZE) {
      SDL_Delay(10);
      continue;
    }
    if(av_read_frame(is->pFormatCtx, packet) < 0) {
      if(url_ferror(&pFormatCtx->pb) == 0) {
        SDL_Delay(100); /* no error; wait for user input */
        continue;
      } else {
        break;
      }
    }
    // Is this a packet from the video stream?
    if(packet->stream_index == is->videoStream) {
      packet_queue_put(&is->videoq, packet);
    } else if(packet->stream_index == is->audioStream) {
      packet_queue_put(&is->audioq, packet);
    } else {
      av_free_packet(packet);
    }
  }
```

Nothing really new here, except that we now have a max size for our audio and video queues, and we've added a check for read errors. The format context has a ByteIOContext struct inside it called pb. ByteIOContext is the structure that basically keeps all the low-level file information in it. url_ferror checks that structure to see if there was some kind of error reading from our file.

After our for loop, we have all the code for waiting for the rest of the program to end, or informing it that we've ended. This code is instructive because it shows us how we push events, something we'll need later to display the video:

```c
  while(!is->quit) {
    SDL_Delay(100);
  }

 fail:
  if(1){
    SDL_Event event;
    event.type = FF_QUIT_EVENT;
    event.user.data1 = is;
    SDL_PushEvent(&event);
  }
  return 0;
```

We get values for user events by using the SDL constant SDL_USEREVENT.
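To make the event numbering concrete, here is a minimal, self-contained sketch. The real SDL_USEREVENT value comes from SDL.h (the 24 below is only a stand-in so the snippet compiles alone), and the FF_REFRESH_EVENT slot is an assumption; the tutorial only fixes FF_QUIT_EVENT at SDL_USEREVENT + 2:

```c
#include <string.h>

#define SDL_USEREVENT 24                      /* stand-in; real value comes from SDL.h */
#define FF_REFRESH_EVENT (SDL_USEREVENT + 1)  /* assumed slot, for illustration */
#define FF_QUIT_EVENT    (SDL_USEREVENT + 2)  /* as defined in the tutorial */

/* Dispatch the way the event loop's switch statement does. */
static const char *dispatch(int type) {
  switch (type) {
  case FF_REFRESH_EVENT: return "refresh";
  case FF_QUIT_EVENT:    return "quit";
  default:               return "other";
  }
}
```

The point is only that each user-defined event gets its own integer above SDL_USEREVENT, so the event loop can tell them apart in one switch.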
The first user event should be assigned the value SDL_USEREVENT, the next SDL_USEREVENT + 1, and so on. FF_QUIT_EVENT is defined in our program as SDL_USEREVENT + 2. We can also pass user data if we like, and here we pass our pointer to the big struct. Finally we call SDL_PushEvent(). In our event loop switch, we just put this next to the SDL_QUIT_EVENT section we had before. We'll see our event loop in more detail later; for now, just be assured that when we push the FF_QUIT_EVENT, we'll catch it later and raise our quit flag.

### Getting the Frame: video_thread

After we have our codec prepared, we start our video thread. This thread reads in packets from the video queue, decodes the video into frames, and then calls a queue_picture function to put the processed frame onto a picture queue:

```c
int video_thread(void *arg) {
  VideoState *is = (VideoState *)arg;
  AVPacket pkt1, *packet = &pkt1;
  int len1, frameFinished;
  AVFrame *pFrame;

  pFrame = avcodec_alloc_frame();

  for(;;) {
    if(packet_queue_get(&is->videoq, packet, 1) < 0) {
      // means we quit getting packets
      break;
    }
    // Decode video frame
    len1 = avcodec_decode_video(is->video_st->codec, pFrame, &frameFinished,
                                packet->data, packet->size);

    // Did we get a video frame?
    if(frameFinished) {
      if(queue_picture(is, pFrame) < 0) {
        break;
      }
    }
    av_free_packet(packet);
  }
  av_free(pFrame);
  return 0;
}
```

Most of this function should be familiar by this point. We've moved our avcodec_decode_video call here and just replaced some of the arguments; for example, we have the AVStream stored in our big struct, so we get our codec context from there. We just keep getting packets from our video queue until someone tells us to quit or we encounter an error.

### Queueing the Frame

Let's look at the function that stores our decoded frame, pFrame, in our picture queue.
Since our picture queue is an SDL overlay (presumably so the video display function has as little calculation to do as possible), we need to convert our frame into that format. The data we store in the picture queue is a struct of our own making.
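Before seeing that struct, note that the pictq fields declared in VideoState (pictq_windex, pictq_rindex, pictq_size, guarded by pictq_mutex and pictq_cond) implement a small bounded ring buffer between the video thread and the display function. As a self-contained sketch of that pattern using POSIX primitives and stand-in types (not the tutorial's actual code; the queue size is also just an illustrative choice):

```c
#include <pthread.h>

#define VIDEO_PICTURE_QUEUE_SIZE 1  /* kept tiny here for illustration */

typedef struct VideoPicture { int frame_no; } VideoPicture; /* stand-in payload */

typedef struct {
  VideoPicture    pictq[VIDEO_PICTURE_QUEUE_SIZE];
  int             pictq_size, pictq_rindex, pictq_windex;
  pthread_mutex_t pictq_mutex;
  pthread_cond_t  pictq_cond;
  int             quit;
} PictQ;

/* Writer side (the video thread): wait until there is room, then append. */
static void pictq_put(PictQ *q, VideoPicture pic) {
  pthread_mutex_lock(&q->pictq_mutex);
  while (q->pictq_size >= VIDEO_PICTURE_QUEUE_SIZE && !q->quit)
    pthread_cond_wait(&q->pictq_cond, &q->pictq_mutex);
  q->pictq[q->pictq_windex] = pic;
  q->pictq_windex = (q->pictq_windex + 1) % VIDEO_PICTURE_QUEUE_SIZE;
  q->pictq_size++;
  pthread_mutex_unlock(&q->pictq_mutex);
}

/* Reader side (the refresh handler): take one picture, then wake the writer.
   The refresh event only fires when a frame is queued, so no empty-wait here. */
static VideoPicture pictq_get(PictQ *q) {
  pthread_mutex_lock(&q->pictq_mutex);
  VideoPicture pic = q->pictq[q->pictq_rindex];
  q->pictq_rindex = (q->pictq_rindex + 1) % VIDEO_PICTURE_QUEUE_SIZE;
  q->pictq_size--;
  pthread_cond_signal(&q->pictq_cond);
  pthread_mutex_unlock(&q->pictq_mutex);
  return pic;
}
```

This is exactly why the mutex and condition variable were created in main() before any threads were spawned: both sides touch pictq_size, and the cond lets the decoder sleep whenever the display hasn't consumed the previous frame yet.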
