Typically, when video is digitized, it is fragmented into individual ‘pixels’, which are
transmitted sequentially in either serial or parallel streams. Color pixels are often sent in
24-bit format (8 bits each for red, green, and blue) or in 32-bit format, where an extra 8 bits
are added to these 24 bits for ‘transparency’; this extra value is called the ‘alpha’ value.
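For illustration, packing and unpacking such a 32-bit RGBA pixel can be sketched as follows (a minimal example; the channel byte order shown is an assumed convention, since real pixel formats vary, e.g. RGBA vs. BGRA):

```python
# Pack four 8-bit channels into one 32-bit pixel word (R in the high byte,
# alpha in the low byte -- an assumed layout for illustration only).
def pack_rgba(r, g, b, a):
    return (r << 24) | (g << 16) | (b << 8) | a

def unpack_rgba(pixel):
    """Recover the four 8-bit channel values from a 32-bit pixel word."""
    return ((pixel >> 24) & 0xFF, (pixel >> 16) & 0xFF,
            (pixel >> 8) & 0xFF, pixel & 0xFF)

p = pack_rgba(0x12, 0x34, 0x56, 0xFF)
print(hex(p))          # 0x123456ff
print(unpack_rgba(p))  # (18, 52, 86, 255)
```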
As with analog video, pixels are sent sequentially, starting from the top left of a frame and
proceeding left to right, line by line. For digital versions of the interlaced standards, lines are
again sent in interlaced fashion. ITU-R BT.656 (often abbreviated ITU656) is a standard
method of sending NTSC/RS-170A signals digitally within systems, in an 8-bit parallel format.
Digital Line Sampling
When lines are digitized, the number of digital samples per line can vary. For example, for
high-quality/DVD-quality television, 720 digital samples per line are often taken.
However, this does not result in ‘square pixels’. For applications requiring square pixels, 640
samples per line are taken instead. Because there are 480 active video lines in a standard TV
picture, the 640 × 480 format results in square pixels on the standard TV screen, with its 4:3
aspect ratio (width-to-height ratio).
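The aspect-ratio arithmetic above can be checked directly. The sketch below (with illustrative names of our own choosing) computes the pixel aspect ratio implied by each sampling choice:

```python
from fractions import Fraction

# Standard TV display aspect ratio (width:height) and active line count,
# both taken from the figures in the text above.
DISPLAY_ASPECT = Fraction(4, 3)
ACTIVE_LINES = 480

def pixel_aspect(samples_per_line):
    """Pixel width/height ratio: display aspect divided by the sample grid aspect."""
    return DISPLAY_ASPECT / Fraction(samples_per_line, ACTIVE_LINES)

print(pixel_aspect(640))  # Fraction(1, 1)  -> square pixels
print(pixel_aspect(720))  # Fraction(8, 9)  -> slightly narrow pixels
```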
Broadcast Digital Video: SDI
In the broadcast studio world, video transmission has now largely migrated from analog to
digital, using SDI – the Serial Digital Interface.
© 2005 SBS Technologies, Inc., All rights reserved.
For standard-definition (SD) video as used in normal television broadcasts, SDI is a serial
data stream of 270 megabits per second, defined by the SMPTE 259M standard. (SMPTE
stands for the Society of Motion Picture and Television Engineers.)
For high-definition (HD) television, the SDI stream (called HD-SDI) is sent at a rate of 1.485
gigabits per second, and the standard is SMPTE 292M.
In each case, all the data (video, audio, synchronization, and ancillary data such as closed-
captioning) is sent on a single 75-ohm coaxial cable, in the form of 10-bit words. This
method is now in wide use in broadcast studios and transmission facilities worldwide.
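The 270 Mb/s figure can be derived from the standard ITU-R BT.601 sampling parameters (the specific sample counts below are assumptions drawn from that standard, not stated in this paper):

```python
# SD-SDI bit-rate sketch, assuming ITU-R BT.601 4:2:2 sampling for NTSC.
samples_per_line = 858            # total (active + blanking) luma samples per line
lines_per_frame = 525
frames_per_second = 30000 / 1001  # ~29.97 Hz NTSC frame rate

luma_rate = samples_per_line * lines_per_frame * frames_per_second  # 13.5 MHz
# 4:2:2 interleaves one chroma (Cb or Cr) word with every luma (Y) word,
# doubling the word clock to 27 MHz; each word is 10 bits wide.
word_rate = 2 * luma_rate
bit_rate = word_rate * 10

print(bit_rate)  # 270000000.0 -> the 270 Mb/s SMPTE 259M rate
```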
Display Video: DVI
Another current development is that displays and television monitors are converting from
analog (composite or component RGB) to digital interfaces, using the DVI (Digital Visual
Interface) standard. Because this standard is digital, it ensures no picture degradation
between the video output and the display input.
The video is sent on four shielded differential wire pairs, at the rate dictated by the
video/graphics frame rate and resolution. This rate is often well over 1 gigabit per second on
each pair, and the cabling has length limitations that make it suitable only for short-distance
connections.
Digital Video
Video Compression Standards Overview
One of the big advantages of digital video is that it can be compressed for reduced-bandwidth
applications, such as transmission over satellite, cable TV, and Internet-based networks.
Compressed video is also particularly useful for reducing storage requirements in digital
video recorder (DVR) applications.
There are "lossless" and "lossy" forms of data compression. Lossless data compression is
used when the data must be restored exactly as it was before compression. Since losing a
single character can make restored numbers or text misleading, number and text files are
stored using lossless techniques such as Huffman coding and Lempel-Ziv-Welch (LZW). A
lossless compression technique for images, Portable Network Graphics (PNG), is a
recommendation of the World Wide Web Consortium and now an ISO standard, and is
especially useful for images displayed and stored on Web sites.
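As an illustration of one lossless technique mentioned above, here is a minimal Huffman coding sketch (illustrative only, not a production codec; it merges the two least-frequent partial trees with a heap until a single prefix-free code table remains):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a prefix-free code table mapping each symbol to a bit string."""
    freq = Counter(data)
    if len(freq) == 1:  # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Heap entries: (total frequency, unique tie-breaker, {symbol: code-so-far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)  # two least-frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

text = "this is an example of huffman coding"
codes = huffman_codes(text)
encoded = "".join(codes[ch] for ch in text)
print(len(text) * 8, "bits raw vs", len(encoded), "bits encoded")
```

Frequent symbols (here the space character) receive the shortest codes, which is where the compression comes from.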
There are limits, though, to the amount of compression that can be obtained with lossless
compression techniques. Lossless compression ratios are generally in the range of 2:1 to 8:1.
Lossy compression, on the other hand, works on the assumption that the data doesn't have
to be restored perfectly. A good deal of redundant information can simply be thrown away
from images, video data, and audio data, and when decompressed such data will still be of
acceptable quality. Compression ratios can be an order of magnitude greater than those
available from lossless methods.
Over the years, there have been two main standards bodies doing parallel development of
video compression standards. The first widely-used standards for video compression were
developed by the Moving Picture Experts Group (MPEG). Another standards group, the
International Telecommunication Union (ITU), has developed the H series of compression
standards, mainly for the telecommunications industry.
As standards organizations, the MPEG and ITU committees do not specify end-user products
or equipment. MPEG does, however, standardize Profiles. A Profile is a selection of tools that
a group of participating companies within the standards organizations have selected as a
basis for deploying products to meet specified application areas. For example, MPEG-4
Simple Profile (SP) and Advanced Simple Profile (ASP) were developed for streaming video
over Internet connections.
To become standardized, Profiles pass through a requirements process where the tools and
applications are reviewed and voted on as being an interoperable profile for the industry.
Within each Profile there can be one or more Levels. Levels allow for increasing complexity of
the tools, providing some diversity within a Profile to address devices of varying
performance. Levels may thus restrict bit rates, sizes, the number of nodes, etc.
MPEG-1
The first lossy compression scheme developed by the MPEG committee, MPEG-1, is still in
use today for CD-ROM video compression and as part of early Microsoft® Windows® Media
players. The MPEG-1 algorithm uses a combination of techniques to achieve compression,
including use of the Discrete Cosine Transform (DCT) algorithm to first convert each image
into the frequency domain, and then process the frequency coefficients to optimally reduce a
video stream to the required bandwidth.
The DCT algorithm is well known and widely used for data compression. Similar to Fast
Fourier Transform, DCT converts data, such as the pixels in an image, into sets of
frequencies. To compress data, the least meaningful frequencies are stripped away based on
allowable resolution loss—generally user defined. This loss of resolution results in a lossy
compressed image. Rather than fully encoding and compressing every video frame, MPEG-1
compression processes a ‘Group of Pictures’ where it:
· Fully encodes an ‘I’ (intra-coded) frame,
· Encodes only the differences in subsequent ‘P’ (predicted) frames, and,
· In the most complex case, also provides ‘B’ (bi-directional) frames, which
look both ahead and back in time to determine how best to compress the
signal.
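The DCT and coefficient-stripping steps described above can be sketched on a single 8-sample block (a deliberate simplification: MPEG-1 actually applies a two-dimensional 8×8 DCT with standardized quantization tables, whereas the threshold used here is arbitrary):

```python
import math

def dct(x):
    """Orthonormal 1-D DCT-II: converts samples into frequency coefficients."""
    N = len(x)
    out = []
    for k in range(N):
        s = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(s * sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N)
                           for n in range(N)))
    return out

def idct(X):
    """Inverse transform (DCT-III) matching the orthonormal DCT-II above."""
    N = len(X)
    return [sum((math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N))
                * X[k] * math.cos(math.pi * (n + 0.5) * k / N)
                for k in range(N))
            for n in range(N)]

block = [52, 55, 61, 66, 70, 61, 64, 73]        # one row of pixel values
coeffs = dct(block)
# The 'lossy' step: strip the least meaningful (small) frequency coefficients.
kept = [c if abs(c) > 5 else 0.0 for c in coeffs]
approx = idct(kept)
print([round(v) for v in approx])               # close to the original block
```

Most of the signal energy concentrates in a few low-frequency coefficients, so zeroing the small ones loses little visible detail while making the data far more compressible.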
In slow-moving scenes, the image differences between successive frames are small, allowing
high compression ratios without great loss of detail and achieving substantial bandwidth
savings. With fast-moving sequences, image differences are greater and bandwidth savings
are far smaller; this is why, for example, sports channels on satellite television consume
more bandwidth than talk shows.
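The frame-difference idea behind P frames can be illustrated with a toy encoder (a deliberate simplification: real MPEG encoders use motion-compensated prediction on macroblocks, not raw per-pixel subtraction):

```python
def encode_gop(frames):
    """Encode a group of pictures: first frame in full (I), the rest as diffs (P)."""
    i_frame = frames[0]
    p_frames = []
    for prev, cur in zip(frames, frames[1:]):
        # Store only the pixels that changed, as (index, new_value) pairs.
        diff = [(i, c) for i, (p, c) in enumerate(zip(prev, cur)) if p != c]
        p_frames.append(diff)
    return i_frame, p_frames

def decode_gop(i_frame, p_frames):
    """Rebuild every frame by applying each diff to the previous frame."""
    frames = [list(i_frame)]
    for diff in p_frames:
        frame = list(frames[-1])
        for i, v in diff:
            frame[i] = v
        frames.append(frame)
    return frames

# Slow-moving 'scene': only one pixel changes per frame -> tiny P frames.
frames = [[10] * 16, [10] * 15 + [99], [10] * 14 + [99, 99]]
i_frame, p_frames = encode_gop(frames)
print([len(d) for d in p_frames])  # [1, 1] -> one changed pixel per P frame
```

A fast-moving scene would produce large diffs, mirroring the bandwidth behavior described above.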
MPEG-1 found use in CD-ROM video, in early versions of the Microsoft® Windows® Media
player, and in other PC applications, but it does not support higher-quality video such as
today’s DVD standards. Interestingly, the currently popular MP3 (MPEG-1, Layer III) audio
standard is actually the audio compression portion of the MPEG-1 standard and provides
about 10:1 compression of audio files at reasonable quality.
MPEG-2
The MPEG-2 compression standard evolved to meet the needs of compressing higher-quality
video. MPEG-2 is used in today’s video DVDs and in digital broadcasts via satellite and cable,
typically at bit rates ranging from 5 to 8 Mbps, although MPEG-2 is not really limited to a
particular bit-rate range. MPEG-2's basic compression techniques are very similar to
MPEG-1's, using DCT transforms and I and P frames, but MPEG-2 also provides support for
interlaced video (the format used by broadcast TV systems).
MPEG-2 video is not optimized for low bit-rates (less than 1 Mbit/s), but outperforms MPEG-1
at 3 Mbit/s and above. MPEG-2 also introduces and defines Transport Streams, which are
designed to carry digital video and audio over unreliable media, and are used in broadcast
applications. With some enhancements, MPEG-2 is also the current standard for High
Definition Television (HDTV) transmission. MPEG-2 also includes additional color
subsampling, improved compression, error correction and multichannel extensions for
surround sound.
Although MPEG-2 excels at full broadcast television, and can be used to retrieve and control
streams from a server just as MPEG-1 compression can, MPEG-2 audio and video
compression is still essentially linear, and interactivity is limited to operations such as slow
motion, frame-by-frame playback, or fast forward.
MPEG-3
MPEG-3 is the compression standard that never was. While it was originally intended by the
MPEG committee that an MPEG-3 standard would evolve to support HDTV, it turned out that
this could be done with minor changes to MPEG-2. So MPEG-3 never happened and there
are now ‘profiles’ of MPEG-2 that support HDTV as well as Standard Definition television.
MPEG-4 (related to ITU-T H.263)
Although a full explanation of the MPEG-4 standard is well beyond the scope of this paper,
MPEG-4 has emerged as much more than a video and audio compression and
decompression standard. The MPEG committee designed MPEG-4 to be a single standard
covering the entire digital media workflow from capture, authoring, and editing to encoding,
distribution, playback, and archiving. It is a container for all types of items—called "media
objects"—beyond audio and video. Media objects can be text, still images, graphic animation,
buttons, web links, and so on. These media objects can be combined to create polished
interactive presentations.
The MPEG-4 file format, based on Apple Computer's QuickTime technology, was developed
by the MPEG committee as a standard designed to deliver interactive multimedia and
graphics applications over networks and to guarantee seamless delivery of high-quality audio
and video over IP-based networks and the Internet.
A major goal of the MPEG-4 standard was to try to solve two video transport problems:
1. Sending video over low-bandwidth channels such as the Internet and video cell
phones.
2. Achieving better compression than MPEG-2 for broadcast signals.
MPEG-4 performs well in terms of compression, and it is used across a wide range of bit
rates, from 64 Kbps to 1,800 Mbps. However, it has had limited success in achieving
dramatically better compression than MPEG-2 for broadcast signals, and although it is in the
range of 15% better
