
📄 ffmpegparser.java

📁 FMJ (Freedom for Media in Java) is a new option for Java video development
💻 JAVA
	}

	public static AudioFormat convertCodecAudioFormat(AVCodecContext codecCtx)
	{
		// ffmpeg appears to always decode audio into 16 bit samples, regardless of the source.
		return new AudioFormat(AudioFormat.LINEAR, codecCtx.sample_rate, 16, codecCtx.channels);	// TODO: endian, signed?
	}

	private abstract class PullSourceStreamTrack extends AbstractTrack
	{
		public abstract void deallocate();
	}

	/**
	 * @param streamIndex the track/stream index
	 * @return null on EOM
	 */
	private AVPacket nextPacket(int streamIndex)
	{
		// Because ffmpeg has a single function that gets the next packet, without regard to
		// track/stream, we have to queue up packets that are for a different track/stream.
		synchronized (AV_SYNC_OBJ)
		{
			if (!packetQueues[streamIndex].isEmpty())
				return (AVPacket) packetQueues[streamIndex].dequeue();	// we already have one in the queue for this stream

			while (true)
			{
				final AVPacket packet = new AVPacket();
				if (AVFORMAT.av_read_frame(formatCtx, packet) < 0)
					break;	// TODO: distinguish between EOM and error?

				// Is this a packet from the desired stream?
				if (packet.stream_index == streamIndex)
				{
					return packet;
				}
				else
				{
					// TODO: This has been observed in other code that uses ffmpeg, not sure if it is needed.
					//if (AVFORMAT.av_dup_packet(packet) < 0)
					//	throw new RuntimeException("av_dup_packet failed");
					packetQueues[packet.stream_index].enqueue(packet);
				}
			}

			return null;
		}
	}
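	/*
	 * Added note (not in the original source): the queueing above is what lets
	 * multiple tracks pull from one interleaved file. For example, with video on
	 * stream 0 and audio on stream 1, if av_read_frame returns an audio packet
	 * while nextPacket(0) is looking for video, the packet is parked on
	 * packetQueues[1]; a later nextPacket(1) call then returns it without
	 * touching the file again.
	 */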
	private class VideoTrack extends PullSourceStreamTrack
	{
		// TODO: track listener

		private final int videoStreamIndex;
		private AVStream stream;
		private AVCodecContext codecCtx;
		private AVCodec codec;
		private AVFrame frame;
		private AVFrame frameRGB;
		private final VideoFormat format;
		private Pointer buffer;

		/**
		 * We have to keep track of the frame number ourselves.
		 * frame.display_picture_number seems to always be zero.
		 * See: http://lists.mplayerhq.hu/pipermail/ffmpeg-user/2005-September/001244.html
		 */
		private long frameNo;

		public VideoTrack(int videoStreamIndex, AVStream stream, AVCodecContext codecCtx) throws ResourceUnavailableException
		{
			super();

			this.videoStreamIndex = videoStreamIndex;
			this.stream = stream;
			this.codecCtx = codecCtx;

			synchronized (AV_SYNC_OBJ)
			{
				// Find the decoder for the video stream
				this.codec = AVCODEC.avcodec_find_decoder(codecCtx.codec_id);
				if (codec == null)
					throw new ResourceUnavailableException("Codec not found for codec_id " + codecCtx.codec_id + " (0x" + Integer.toHexString(codecCtx.codec_id) + ")");	// see AVCodecLibrary.CODEC_ID constants

				// Open codec
				if (AVCODEC.avcodec_open(codecCtx, codec) < 0)
					throw new ResourceUnavailableException("Could not open codec");

				// Allocate video frame
				frame = AVCODEC.avcodec_alloc_frame();
				if (frame == null)
					throw new ResourceUnavailableException("Could not allocate frame");

				// Allocate an AVFrame structure for the RGB-converted image
				frameRGB = AVCODEC.avcodec_alloc_frame();
				if (frameRGB == null)
					throw new ResourceUnavailableException("Could not allocate frame");

				// Determine required buffer size and allocate buffer
				final int numBytes = AVCODEC.avpicture_get_size(AVCodecLibrary.PIX_FMT_RGB24, codecCtx.width, codecCtx.height);
				buffer = AVUTIL.av_malloc(numBytes);

				// Assign appropriate parts of buffer to image planes in frameRGB
				AVCODEC.avpicture_fill(frameRGB, buffer, AVCodecLibrary.PIX_FMT_RGB24, codecCtx.width, codecCtx.height);

				// set format
				format = convertCodecPixelFormat(AVCodecLibrary.PIX_FMT_RGB24, codecCtx.width, codecCtx.height, getFPS(stream, codecCtx));
			}
		}
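		/*
		 * Added note (not in the original source): for PIX_FMT_RGB24,
		 * avpicture_get_size is effectively width * height * 3 (one byte per
		 * channel, single packed plane), so a 320x240 stream allocates
		 * 230400 bytes for the converted frame.
		 */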
		//@Override
		public void deallocate()
		{
			synchronized (AV_SYNC_OBJ)
			{
				// Close the codec
				if (codecCtx != null)
				{
					AVCODEC.avcodec_close(codecCtx);
					codecCtx = null;
				}

				// Free the RGB image
				if (frameRGB != null)
				{
					AVUTIL.av_free(frameRGB.getPointer());
					frameRGB = null;
				}

				// Free the YUV frame
				if (frame != null)
				{
					AVUTIL.av_free(frame.getPointer());
					frame = null;
				}

				if (buffer != null)
				{
					AVUTIL.av_free(buffer);
					buffer = null;
				}
			}
		}

		// TODO: implement seeking using av_seek_frame

		/**
		 * @return nanos skipped, 0 if unable to skip.
		 * @throws IOException
		 */
		public long skipNanos(long nanos) throws IOException
		{
			return 0;
		}

		public boolean canSkipNanos()
		{
			return false;
		}

		public Format getFormat()
		{
			return format;
		}

		// TODO: from JAVADOC:
		// This method might block if the data for a complete frame is not available. It might
		// also block if the stream contains intervening data for a different interleaved Track.
		// Once the other Track is read by a readFrame call from a different thread, this method
		// can read the frame. If the intervening Track has been disabled, data for that Track is
		// read and discarded.
		//
		// Note: This scenario is necessary only if a PullDataSource Demultiplexer implementation
		// wants to avoid buffering data locally and copying the data to the Buffer passed in as
		// a parameter. Implementations might decide to buffer data and not block (if possible)
		// and incur data copy overhead.
		public void readFrame(Buffer buffer)
		{
			// will be set to the minimum dts of all packets that make up a frame.
			// TODO: this is not correct in all cases, see comments in getTimestamp.
			long dts = -1;

			final AVPacket packet = nextPacket(videoStreamIndex);
			if (packet != null)
			{
				synchronized (AV_SYNC_OBJ)
				{
					final IntByReference frameFinished = new IntByReference();

					// Decode video frame
					AVCODEC.avcodec_decode_video(codecCtx, frame, frameFinished, packet.data, packet.size);
					if (dts == -1 || packet.dts < dts)
						dts = packet.dts;

					// Did we get a video frame?
					if (frameFinished.getValue() != 0)
					{
						// Convert the image from its native format to RGB
						AVCODEC.img_convert(frameRGB, AVCodecLibrary.PIX_FMT_RGB24, frame, codecCtx.pix_fmt, codecCtx.width, codecCtx.height);

						final byte[] data = frameRGB.data0.getByteArray(0, codecCtx.height * frameRGB.linesize[0]);
						buffer.setData(data);
						buffer.setLength(data.length);
						buffer.setOffset(0);
						buffer.setEOM(false);
						buffer.setDiscard(false);
						buffer.setTimeStamp(getTimestamp(frame, stream, codecCtx, frameNo++, dts));
						//System.out.println("frameNo=" + frameNo + " dts=" + dts + " timestamp=" + buffer.getTimeStamp());
						dts = -1;
					}
					else
					{
						buffer.setLength(0);
						buffer.setDiscard(true);
					}

					// Free the packet that was allocated by av_read_frame.
					// AVFORMAT.av_free_packet(packet.getPointer()) cannot be called because it is
					// an inlined function, so we do the JNA equivalent of the inline:
					if (packet.destruct != null)
						packet.destruct.callback(packet);
				}
			}
			else
			{
				// TODO: error? EOM?
				buffer.setLength(0);
				buffer.setEOM(true);
				return;
			}
		}

		public Time mapFrameToTime(int frameNumber)
		{
			return TIME_UNKNOWN;
		}

		public int mapTimeToFrame(Time t)
		{
			return FRAME_UNKNOWN;
		}

		public Time getDuration()
		{
			if (formatCtx.duration <= 0)
				return Duration.DURATION_UNKNOWN;	// not sure what formatCtx.duration is set to for unknown/unspecified lengths, but this seems like a reasonable check.

			// formatCtx.duration is in AV_TIME_BASE units (1/1000000 sec); multiply by 1000 to get nanoseconds.
			return new Time(formatCtx.duration * 1000L);
		}
	}
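	/*
	 * Added note (not in the original source): AV_TIME_BASE in ffmpeg is 1000000,
	 * so formatCtx.duration above is a count of microseconds. As a worked example,
	 * a 90-second clip has duration == 90000000, and 90000000 * 1000L ==
	 * 90000000000 nanoseconds, which is what the javax.media.Time long
	 * constructor expects.
	 */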
	private class AudioTrack extends PullSourceStreamTrack
	{
		// TODO: track listener

		private final int audioStreamIndex;
		AVStream stream;
		private AVCodecContext codecCtx;
		private final AVCodec codec;
		private Pointer buffer;
		private int bufferSize;
		private final AudioFormat format;

		/**
		 * We have to keep track of the frame number ourselves.
		 * frame.display_picture_number seems to always be zero.
		 * See: http://lists.mplayerhq.hu/pipermail/ffmpeg-user/2005-September/001244.html
		 */
		private long frameNo;

		public AudioTrack(int audioStreamIndex, AVStream stream, AVCodecContext codecCtx) throws ResourceUnavailableException
		{
			super();

			this.audioStreamIndex = audioStreamIndex;
			this.stream = stream;
			this.codecCtx = codecCtx;

			synchronized (AV_SYNC_OBJ)
			{
				// Find the decoder for the audio stream
				this.codec = AVCODEC.avcodec_find_decoder(codecCtx.codec_id);
				if (codec == null)
					throw new ResourceUnavailableException("Codec not found for codec_id " + codecCtx.codec_id + " (0x" + Integer.toHexString(codecCtx.codec_id) + ")");	// see AVCodecLibrary.CODEC_ID constants

				// Open codec
				if (AVCODEC.avcodec_open(codecCtx, codec) < 0)
					throw new ResourceUnavailableException("Could not open codec");

				// actually appears to be used as a short array.
				bufferSize = AVCodecLibrary.AVCODEC_MAX_AUDIO_FRAME_SIZE;
				buffer = AVUTIL.av_malloc(bufferSize);

				format = convertCodecAudioFormat(codecCtx);
			}
		}

		//@Override
		public void deallocate()
		{
			synchronized (AV_SYNC_OBJ)
			{
				// Close the codec
				if (codecCtx != null)
				{
					AVCODEC.avcodec_close(codecCtx);
					codecCtx = null;
				}

				if (buffer != null)
				{
					AVUTIL.av_free(buffer);
					buffer = null;
				}
			}
		}

		// TODO: implement seeking using av_seek_frame

		/**
		 * @return nanos skipped, 0 if unable to skip.
		 * @throws IOException
		 */
		public long skipNanos(long nanos) throws IOException
		{
			return 0;
		}

		public boolean canSkipNanos()
		{
			return false;
		}

		public Format getFormat()
		{
			return format;
		}

		// TODO: from JAVADOC: see the blocking notes on VideoTrack.readFrame above.
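		/*
		 * Added note (not in the original source): avcodec_decode_audio2 reports
		 * the decoded payload size in bytes through frameSize. As a worked
		 * example, one MP3 frame decodes to 1152 samples per channel; in 16-bit
		 * stereo that is 1152 * 2 * 2 = 4608 bytes, well under
		 * AVCODEC_MAX_AUDIO_FRAME_SIZE.
		 */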
		public void readFrame(Buffer buffer)
		{
			// TODO: the reading of packets needs to be done centrally for all tracks
			final AVPacket packet = nextPacket(audioStreamIndex);
			if (packet != null)
			{
				synchronized (AV_SYNC_OBJ)
				{
					final IntByReference frameSize = new IntByReference();
					// It is not very clear from the documentation, but it appears that we set the
					// initial frame size to the size of this.buffer in bytes, not in "shorts".
					frameSize.setValue(this.bufferSize);

					// Decode
					AVCODEC.avcodec_decode_audio2(codecCtx, this.buffer, frameSize, packet.data, packet.size);

					// Did we get audio data?
					if (frameSize.getValue() < 0)
					{
						throw new RuntimeException("Failed to read audio frame");	// TODO: how to handle this error?
					}
					else if (frameSize.getValue() > 0)
					{
						if (frameSize.getValue() > this.bufferSize)
						{
							// realloc buffer to make room; we already allocate the maximum size,
							// so this should never happen.
							AVUTIL.av_free(this.buffer);
							this.bufferSize = frameSize.getValue();
							this.buffer = AVUTIL.av_malloc(this.bufferSize);
						}

						final byte[] data = this.buffer.getByteArray(0, frameSize.getValue());
						buffer.setData(data);
						buffer.setLength(data.length);
						buffer.setOffset(0);
						buffer.setEOM(false);
						buffer.setDiscard(false);
						buffer.setTimeStamp(System.currentTimeMillis());	// TODO
					}
					else
					{
						buffer.setLength(0);
						buffer.setDiscard(true);
					}

					// Free the packet that was allocated by av_read_frame.
					// AVFORMAT.av_free_packet(packet.getPointer()) cannot be called because it is
					// an inlined function, so we do the JNA equivalent of the inline:
					if (packet.destruct != null)
						packet.destruct.callback(packet);
				}
			}
			else
			{
				// TODO: error? EOM?
				buffer.setLength(0);
				buffer.setEOM(true);
				return;
			}
		}

		public Time mapFrameToTime(int frameNumber)
		{
			return TIME_UNKNOWN;
		}

		public int mapTimeToFrame(Time t)
		{
			return FRAME_UNKNOWN;
		}

		public Time getDuration()
		{
			if (formatCtx.duration <= 0)
				return Duration.DURATION_UNKNOWN;	// not sure what formatCtx.duration is set to for unknown/unspecified lengths, but this seems like a reasonable check.

			// formatCtx.duration is in AV_TIME_BASE units (microseconds, not milliseconds);
			// multiply by 1000 to get nanoseconds.
			return new Time(formatCtx.duration * 1000L);
		}
	}
}
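For context, here is a minimal sketch (not part of the listing) of how a JMF/FMJ client might drive a Demultiplexer like this one through the standard javax.media plugin interfaces. The concrete class name FFMpegParser and the file-URL argument handling are assumptions for illustration; adjust them to match your FMJ build, where the parser is normally selected automatically via the plugin registry rather than instantiated directly.

import javax.media.Buffer;
import javax.media.Manager;
import javax.media.MediaLocator;
import javax.media.Track;
import javax.media.protocol.DataSource;

public class FFMpegParserDemo
{
	public static void main(String[] args) throws Exception
	{
		// Build a DataSource for a local media file; JMF resolves the protocol handler.
		final DataSource source = Manager.createDataSource(new MediaLocator("file://" + args[0]));

		// Hypothetical direct instantiation of the demultiplexer in this listing;
		// the actual package/class name in your FMJ build may differ.
		final FFMpegParser parser = new FFMpegParser();
		parser.setSource(source);
		parser.open();
		parser.start();

		// Pull one decoded frame from each track and report its format and size.
		for (Track track : parser.getTracks())
		{
			final Buffer buffer = new Buffer();
			track.readFrame(buffer);
			System.out.println(track.getFormat() + ": " + buffer.getLength() + " bytes");
		}

		parser.close();
	}
}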
