Network Working Group L. Berc
Request for Comments: 2035 Digital Equipment Corporation
Category: Standards Track W. Fenner
Xerox PARC
R. Frederick
Xerox PARC
S. McCanne
Lawrence Berkeley Laboratory
October 1996
RTP Payload Format for JPEG-compressed Video
Status of this Memo
This document specifies an Internet standards track protocol for the
Internet community, and requests discussion and suggestions for
improvements. Please refer to the current edition of the "Internet
Official Protocol Standards" (STD 1) for the standardization state
and status of this protocol. Distribution of this memo is unlimited.
Abstract
This memo describes the RTP payload format for JPEG video streams.
The packet format is optimized for real-time video streams where
codec parameters change rarely from frame to frame.
This document is a product of the Audio-Video Transport working group
within the Internet Engineering Task Force. Comments are solicited
and should be addressed to the working group's mailing list at
rem-conf@es.net and/or the author(s).
1. Introduction
The Joint Photographic Experts Group (JPEG) standard [1,2,3] defines
a family of compression algorithms for continuous-tone, still images.
This still image compression standard can be applied to video by
compressing each frame of video as an independent still image and
transmitting them in series. Video coded in this fashion is often
called Motion-JPEG.
We first give an overview of JPEG and then describe the specific
subset of JPEG that is supported in RTP and the mechanism by which
JPEG frames are carried as RTP payloads.
The JPEG standard defines four modes of operation: the sequential DCT
mode, the progressive DCT mode, the lossless mode, and the
hierarchical mode. Depending on the mode, the image is represented
Berc, et. al. Standards Track [Page 1]
RFC 2035 RTP Payload Format for JPEG Video October 1996
in one or more passes. Each pass (called a frame in the JPEG
standard) is further broken down into one or more scans. Within each
scan, there are one to four components, which represent the three
components of a color signal (e.g., "red, green, and blue", or a
luminance signal and two chrominance signals). These components can
be encoded as separate scans or interleaved into a single scan.
Each frame and scan is preceded with a header containing optional
definitions for compression parameters like quantization tables and
Huffman coding tables. The headers and optional parameters are
identified with "markers" and comprise a marker segment; each scan
appears as an entropy-coded bit stream within two marker segments.
Markers are aligned to byte boundaries and (in general) cannot appear
in the entropy-coded segment, allowing scan boundaries to be
determined without parsing the bit stream.
Compressed data is represented in one of three formats: the
interchange format, the abbreviated format, or the table-
specification format. The interchange format contains definitions
for all the tables used by the entropy-coded segments, while
the abbreviated format might omit some assuming they were defined
out-of-band or by a "previous" image.
The JPEG standard does not define the meaning or format of the
components that comprise the image. Attributes like the color space
and pixel aspect ratio must be specified out-of-band with respect to
the JPEG bit stream. The JPEG File Interchange Format (JFIF) [4] is
a de facto standard that provides this extra information using an
application marker segment (APP0). Note that a JFIF file is simply a
JPEG interchange format image along with the APP0 segment. In the
case of video, additional parameters must be defined out-of-band
(e.g., frame rate, interlaced vs. non-interlaced, etc.).
While the JPEG standard provides a rich set of algorithms for
flexible compression, cost-effective hardware implementations of the
full standard have not appeared. Instead, most hardware JPEG video
codecs implement only a subset of the sequential DCT mode of
operation. Typically, marker segments are interpreted in software
(which "re-programs" the hardware) and the hardware is presented with
a single, interleaved entropy-coded scan represented in the YUV color
space.
2. JPEG Over RTP
To maximize interoperability among hardware-based codecs, we assume
the sequential DCT operating mode [1,Annex F] and restrict the set of
predefined RTP/JPEG "type codes" (defined below) to single-scan,
interleaved images. While this is more restrictive than even
baseline JPEG, many hardware implementations fall short of the
baseline specification (e.g., most hardware cannot decode non-
interleaved scans).
In practice, most of the table-specification data rarely changes from
frame to frame within a single video stream. Therefore, RTP/JPEG
data is represented in abbreviated format, with all of the tables
omitted from the bit stream. Each image begins immediately with the
(single) entropy-coded scan. The information that would otherwise be
in both the frame and scan headers is represented entirely within a
64-bit RTP/JPEG header (defined below) that lies between the RTP
header and the JPEG scan and is present in every packet.
While parameters like Huffman tables and color space are likely to
remain fixed for the lifetime of the video stream, other parameters
should be allowed to vary, notably the quantization tables and image
size (e.g., to implement rate-adaptive transmission or allow a user
to adjust the "quality level" or resolution manually). Thus explicit
fields in the RTP/JPEG header are allocated to represent this
information. Since only a small set of quantization tables are
typically used, we encode the entire set of quantization tables in a
small integer field. The image width and height are encoded
explicitly.
Because JPEG frames are typically larger than the underlying
network's maximum packet size, frames must often be fragmented into
several packets. One approach is to allow the network layer below
RTP (e.g., IP) to perform the fragmentation. However, this precludes
rate-controlling the resulting packet stream or partial delivery in
the presence of loss. For example, IP will not deliver a fragmented
datagram to the application if one or more fragments is lost, or IP
might fragment an 8000 byte frame into a burst of 8 back-to-back
packets. Instead, RTP/JPEG defines a simple fragmentation and
reassembly scheme at the RTP level.
3. RTP/JPEG Packet Format
The RTP timestamp is in units of 90000 Hz. The same timestamp must
appear across all fragments of a single frame. The RTP marker bit is
set in the last packet of a frame.
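The timestamp and marker-bit rules above can be sketched as follows. This is an illustrative fragment, not part of the RFC; the function name and the 1400-byte payload size are assumptions chosen for the example.

```python
# Sketch (not from the RFC): splitting one JPEG frame across RTP
# packets. All fragments of a frame share one 90 kHz timestamp; the
# RTP marker bit is set only on the last fragment of the frame.

def fragment_frame(scan_bytes, frame_time_sec, max_payload=1400):
    """Yield (timestamp, marker, fragment_offset, payload) tuples."""
    timestamp = int(frame_time_sec * 90000) & 0xFFFFFFFF
    for off in range(0, len(scan_bytes), max_payload):
        chunk = scan_bytes[off:off + max_payload]
        marker = (off + len(chunk) == len(scan_bytes))
        yield (timestamp, marker, off, chunk)

pkts = list(fragment_frame(b"\x00" * 3000, frame_time_sec=1.0))
# three packets: fragment offsets 0, 1400, 2800; marker only on the last
```

The fragment offset produced here is the value carried in the RTP/JPEG header defined in Section 3.1, so a receiver can reassemble the scan even if packets arrive out of order.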
3.1. JPEG header
A special header is added to each packet that immediately follows the
RTP header:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Type specific | Fragment Offset |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Type | Q | Width | Height |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
3.1.1. Type specific: 8 bits
Interpretation depends on the value of the type field.
3.1.2. Fragment Offset: 24 bits
The Fragment Offset is the data offset in bytes of the current packet
in the JPEG scan.
3.1.3. Type: 8 bits
The type field specifies the information that would otherwise be
present in a JPEG abbreviated table-specification as well as the
additional JFIF-style parameters not defined by JPEG. Types 0-127
are reserved as fixed, well-known mappings to be defined by this
document and future revisions of this document. Types 128-255 are
free to be dynamically defined by a session setup protocol (which is
beyond the scope of this document).
3.1.4. Q: 8 bits
The Q field defines the quantization tables for this frame using an
algorithm determined by the Type field (see below).
3.1.5. Width: 8 bits
This field encodes the width of the image in 8-pixel multiples (e.g.,
a width of 40 denotes an image 320 pixels wide).
3.1.6. Height: 8 bits
This field encodes the height of the image in 8-pixel multiples
(e.g., a height of 30 denotes an image 240 pixels tall).
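A minimal sketch of packing and parsing the 8-byte RTP/JPEG header described above, assuming network byte order as elsewhere in RTP. The function names are illustrative, not defined by this document.

```python
import struct

# Sketch of the RTP/JPEG header from Section 3.1:
#   Type specific (8) | Fragment Offset (24)
#   Type (8) | Q (8) | Width (8) | Height (8)
# Width and Height are carried in 8-pixel multiples.

def pack_jpeg_header(type_specific, frag_offset, jtype, q,
                     width_px, height_px):
    if frag_offset >= 1 << 24:
        raise ValueError("fragment offset must fit in 24 bits")
    if width_px % 8 or height_px % 8:
        raise ValueError("dimensions are carried in 8-pixel multiples")
    word1 = (type_specific << 24) | frag_offset
    return struct.pack("!IBBBB", word1, jtype, q,
                       width_px // 8, height_px // 8)

def unpack_jpeg_header(data):
    word1, jtype, q, w, h = struct.unpack("!IBBBB", data[:8])
    return {
        "type_specific": word1 >> 24,
        "fragment_offset": word1 & 0xFFFFFF,
        "type": jtype,
        "q": q,
        "width": w * 8,    # back to pixels
        "height": h * 8,
    }

hdr = pack_jpeg_header(0, 0, jtype=1, q=64, width_px=320, height_px=240)
fields = unpack_jpeg_header(hdr)
# fields["width"] == 320 and fields["height"] == 240
```

Note that the 8-bit Width and Height fields cap the image at 2040x2040 pixels, a direct consequence of the 8-pixel-multiple encoding.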
3.1.7. Data
The data following the RTP/JPEG header is an entropy-coded segment
consisting of a single scan. The scan header is not present and is
inferred from the RTP/JPEG header. The scan is terminated either
implicitly (i.e., the point at which the image is fully parsed), or
explicitly with an EOI marker. The scan may be padded to arbitrary
length with undefined bytes. (Existing hardware codecs generate
extra lines at the bottom of a video frame and removal of these lines
would require a Huffman-decoding pass over the data.)
As defined by JPEG, restart markers are the only type of marker that
may appear embedded in the entropy-coded segment. The "type code"
determines whether a restart interval is defined, and therefore
whether restart markers may be present. It also determines if the
restart intervals will be aligned with RTP packets, allowing for
partial decode of frames, thus increasing resilience to packet drop.
If restart markers are present, the 6-byte DRI (define restart
interval) marker segment [1, Sec. B.2.4.4] precedes the scan.
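For concreteness, a sketch of the 6-byte DRI marker segment per the JPEG layout: the 0xFFDD marker, a 2-byte segment length of 4, then the 2-byte restart interval. The function name is illustrative.

```python
import struct

# Sketch: the 6-byte DRI (Define Restart Interval) marker segment
# that precedes the scan when restart markers are in use.
# Layout [1, Sec. B.2.4.4]: marker 0xFFDD, 2-byte length (always 4),
# 2-byte restart interval (in MCUs), all big-endian.

def make_dri_segment(restart_interval_mcus):
    return struct.pack("!HHH", 0xFFDD, 4, restart_interval_mcus)

seg = make_dri_segment(64)
# seg == b"\xff\xdd\x00\x04\x00\x40"
```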
JPEG markers appear explicitly on byte aligned boundaries beginning
with an 0xFF. A "stuffed" 0x00 byte follows any 0xFF byte generated
by the entropy coder [1, Sec. B.1.1.5].
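The byte-stuffing rule means a receiver can locate real markers without entropy-decoding the scan: any 0xFF followed by a nonzero byte is a marker. A sketch (the function name is an assumption):

```python
# Sketch: scanning an entropy-coded segment for actual markers.
# A 0xFF produced by the entropy coder is always followed by a
# stuffed 0x00, so 0xFF followed by anything nonzero is a marker.

def find_markers(entropy_data):
    markers = []
    i = 0
    while i < len(entropy_data) - 1:
        if entropy_data[i] == 0xFF and entropy_data[i + 1] != 0x00:
            markers.append((i, entropy_data[i + 1]))
            i += 2
        else:
            i += 1
    return markers

data = b"\x12\xff\x00\x34\xff\xd0\x56"  # a stuffed 0xFF, then RST0
# find_markers(data) == [(4, 0xD0)]
```

This is exactly the property that lets an RTP sender align fragments on restart-marker boundaries without a Huffman-decoding pass.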
4. Discussion
4.1. The Type Field
The Type field defines the abbreviated table-specification and
additional JFIF-style parameters not defined by JPEG, since they are
not present in the body of the transmitted JPEG data. The Type field
must remain constant for the duration of a session.
Six type codes are currently defined. They correspond to an
abbreviated table-specification indicating the "Baseline DCT
sequential" mode, 8-bit samples, square pixels, three components in
the YUV color space, standard Huffman tables as defined in [1, Annex
K.3], and a single interleaved scan with a scan component selector
indicating components 0, 1, and 2 in that order. The Y, U, and V
color planes correspond to component numbers 0, 1, and 2,
respectively. Component 0 (i.e., the luminance plane) uses Huffman
table number 0 and quantization table number 0 (defined below) and
components 1 and 2 (i.e., the chrominance planes) use Huffman table
number 1 and quantization table number 1 (defined below).
Additionally, video is non-interlaced and unscaled (i.e., the aspect
ratio is determined by the image width and height). The frame rate
is variable and explicit via the RTP timestamp.
Six RTP/JPEG types are currently defined that assume all of the
above. The odd types have different JPEG sampling factors from the
even ones:
horizontal vertical
types comp samp. fact. samp. fact.
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 0/2/4 | 0 | 2 | 1 |
| 0/2/4 | 1 | 1 | 1 |
| 0/2/4 | 2 | 1 | 1 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 1/3/5 | 0 | 2 | 2 |
| 1/3/5 | 1 | 1 | 1 |
| 1/3/5 | 2 | 1 | 1 |
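The table above can be captured as a small lookup, shown here as an illustrative sketch (the table name and the 4:2:2/4:2:0 shorthand are not from this document): even types subsample chroma horizontally only, odd types both horizontally and vertically.

```python
# Sketch: (horizontal, vertical) sampling factors per component for
# the six predefined type codes, as listed in the table above.
# Components are indexed 0 (Y), 1 (U), 2 (V).

SAMPLING_FACTORS = {
    **{t: [(2, 1), (1, 1), (1, 1)] for t in (0, 2, 4)},  # even types
    **{t: [(2, 2), (1, 1), (1, 1)] for t in (1, 3, 5)},  # odd types
}

# SAMPLING_FACTORS[1][0] == (2, 2)  # luma plane of an odd type
```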