CHANGES IN VM5.0 FROM VM4.2
=============================
______________________________________________________________________________
Basic Code Clean Up
______________________________________________________________________________
* Removed code and makefile logic for the test_er and test_xform programs.
______________________________________________________________________________
Changes in Entropy Coder
______________________________________________________________________________
* There are no longer any explicit sub-blocks and there is no
sub-block significance coding front end. The significance of the
entire code-block, however, is explicitly coded in the T2 coding engine
as it was before.
* The coding primitives are unchanged except that context formation
for the zero coding primitive no longer includes a state which
depends on "FAR" neighbours (i.e. those which are more than one
sample away from the sample being coded).
* The scan pattern used by the coder has now been fixed to be column-by-column
(rather than row-by-row) within stripes (or sub-blocks) of height 4.
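As an illustrative sketch (names are not taken from the VM source), the position of a sample in this stripe-oriented scan can be computed as:

```c
#include <assert.h>

/* Illustrative sketch of the stripe-oriented scan: within each
   horizontal stripe of height 4, samples are visited column by
   column.  Returns the scan position of sample (row, col) in a
   code-block `width' samples wide (height assumed to be a
   multiple of 4 for simplicity). */
static int scan_index(int row, int col, int width)
{
    int stripe = row / 4;        /* index of the 4-row stripe      */
    int within = row % 4;        /* row within that stripe         */
    return stripe * 4 * width    /* samples in preceding stripes   */
         + col * 4               /* full columns already visited   */
         + within;               /* position within current column */
}
```

For example, in a 4x2 code-block the scan visits the whole first column (rows 0..3) before moving to the second column.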
* The HL band's code-blocks are no longer transposed and, to compensate, the
relevant zero coding LUT's entries have been transposed.
* There are now three coding passes instead of four. The forward and
backward significance propagation passes have been combined into a
single forward significance propagation pass.
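A sketch of the resulting pass count per code-block, assuming (as in the final JPEG2000 standard, not stated explicitly here) that the most significant bit-plane is coded by the cleanup pass alone and every subsequent plane by all three passes:

```c
#include <assert.h>

/* Illustrative: total number of coding passes for `nplanes'
   bit-planes, assuming the first (most significant) plane carries
   only the cleanup pass and each later plane carries significance
   propagation, magnitude refinement and cleanup passes. */
static int total_passes(int nplanes)
{
    return (nplanes <= 0) ? 0 : 1 + 3 * (nplanes - 1);
}
```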
* Context models are no longer initialized to the zero state (uniform
probability with no assumed knowledge); instead, the states corresponding
to the run-length mode and the zero coding mode with all zero neighbours
are initialized to fixed, highly skewed states on the learning curve of
the probability estimation state machine.
* In the reversible mode, the encoder now follows the new policy agreed
  in Vancouver: all coding passes are always included in a completely
  lossless bit-stream, regardless of whether they decrease the assumed
  distortion metric, so as to allow the dequantizer to use rounding
  policies other than mid-point rounding and still achieve lossless
  reconstruction. There is one exception to the above: when all
  bit-planes in the code-block are insignificant, nothing is actually
  included in the bit-stream.
OPTIONS:
* The coder includes an option to use only vertically causal neighbours
  within each stripe/sub-block when forming contexts. This affects only
  those samples lying on the fourth row of each stripe.
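The restriction can be sketched as follows (illustrative names; with stripes of height 4, only a sample on the last row of a stripe has a neighbour in the next stripe):

```c
#include <assert.h>

/* Illustrative sketch of the vertically stripe-causal restriction: a
   neighbour may contribute to context formation only if it does not
   lie in a stripe below the stripe of the sample being coded. */
static int neighbour_usable(int sample_row, int neighbour_row)
{
    return (neighbour_row / 4) <= (sample_row / 4);
}
```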
* The coder includes an option to reset the probability models (to the
skewed states mentioned above) at the end of each sub-bitplane coding
pass.
* The coder includes an option to terminate the MQ codeword at the end
of each coding pass and include an indication of the termination points
in the packet headers through the T2 coding engine. The T2 signaling
method for this extra length information is identical to that used
previously to signal the length of the layer contributions from each
code-block, except that the number of lengths signaled differentially
in this manner now depends upon whether or not extra termination points
are inserted. The extra termination points are inserted to enable
parallel encoding and/or decoding. The encoder can use any desired
algorithm to flush the coder and compute lengths, which may vary in
their efficiency, subject to the assumption that the decoder uses the
standard MQ decoder described in the JBIG2 documentation. This is the
same policy which was used in the original EBCOT coder. Two encoder
flushing algorithms are provided which vary slightly in their efficiency
(from essentially optimal [the algorithm which appeared in VM4.1/4.2] to
about 1 bit per termination less efficient) and also in their complexity
and ease of understanding. The coder also includes a third algorithm
with the same length computation as the less complex algorithm mentioned
above which embeds error resilience information in the spare code space
(see below).
* The coder includes a lazy coding option in which the MQ coder is completely
bypassed in the significance propagation and magnitude refinement coding
  passes after the 3rd bit-plane from the point at which the code-block
first becomes significant.
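A sketch of the lazy-mode decision (illustrative names; `plane' counts from 0 at the bit-plane where the code-block first becomes significant, so per the text the bypass applies from plane index 3 onward):

```c
#include <assert.h>

/* Illustrative: in lazy mode, the significance propagation and
   magnitude refinement passes bypass the MQ coder once more than
   three bit-planes have elapsed since the code-block first became
   significant; the cleanup pass is always MQ coded. */
typedef enum { SIG_PROP, MAG_REF, CLEANUP } pass_t;

static int pass_bypasses_mq(int plane, pass_t pass)
{
    return plane >= 3 && (pass == SIG_PROP || pass == MAG_REF);
}
```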
IMPLEMENTATION:
* Substantial effort has been spent on further enhancing the efficiency
of the entropy coder, with an emphasis on the decoding operations. The
MQ coder has been partially reimplemented for efficiency and the
coding pass functions have been carefully reworked. The decoder now
  provides two sets of coding pass functions. One set is more didactic
  and slightly less efficient (though not on all platforms); it is
  accessed through the `-Cno_speedup' switch. The other, "speedup"
  set, used by default, employs a variety of tricks to reduce the
  number of comparison and load operations, which is usually
  beneficial (the anticipated benefits are largest on heavily
  pipelined architectures with little branch prediction capability,
  such as DSPs).
______________________________________________________________________________
Changes in Error Resilience
______________________________________________________________________________
* As mentioned above, there is an option to insert error resilience information
into the spare code space left at the end of each termination point. This
arises because the termination is required to be byte aligned so there is
an average of 3.5 bits per termination point which can be exploited for
error checking. The `-Cer_term' option causes this behaviour to occur.
  For this to be useful, error resilient decoders need to know which
  algorithm was used to terminate the codeword segments (non error
  resilient decoders don't care, so long as all symbols are decoded
  correctly) and that the spare code space has been used in this way.
  Consequently, the
global header contains a flag identifying the use of the "standard"
  termination algorithm. It should be noted, however, that the
  embedding does not alter the compression efficiency in any way
  (identical results are obtained with `-Ceasy_term'). On the other
  hand, in order to be useful, the codestream
must be regularly terminated. This occurs automatically with the parallel
mode (`-Cparallel') and the codestream is also terminated sufficiently
often (though less often, unless -Cparallel is also supplied) in lazy
coding mode. If neither of these options is in force, the `-Cer_term'
option will also force termination at the end of each coding pass, but
will not (unless otherwise requested) introduce the other components
required for parallelism (model reset and causal contexts).
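As a sketch of where the 3.5-bit figure comes from (illustrative function name): byte alignment at a termination point leaves between 0 and 7 unused bits in the final byte, which averages (0+1+...+7)/8 = 3.5 bits per termination point.

```c
#include <assert.h>

/* Illustrative: number of spare (padding) bits left in the final
   byte when a codeword segment is terminated on a byte boundary,
   given how many bits of the last byte are actually occupied. */
static int spare_bits(int bits_in_last_byte)
{
    return (8 - bits_in_last_byte % 8) % 8;
}
```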
* The operation of the `-Csegmark' option has been restored. It introduces
4 bits of error resilience information at the end of the normalization
pass (P4 in original EBCOT algorithm; now P3) for each bit-plane. The
operation has been slightly modified to complement the `-Cer_term'
implementation and to ensure that it will work well even with
parallel implementations.
* The `-Cer' option in the decoder has been fixed and now takes a parameter
indicating how conservative the decoder should be in discarding coding
passes which may contain an error in the event that one is detected.
A value of 2 or 3 appears to give the most satisfactory results.
* When a corrupted packet header is detected inside "ebcot_receive_bits.c",
the packet is no longer discarded, but the problem of detecting which
code-blocks are corrupt is left to the entropy decoder, relying on the
`-Cer' implementation. This probably fixes the "really bad worst case"
problem encountered during the error resilience testing experiments
reported in Seoul.
* Modified the `-Bresync' marker to use the two bytes 0xFFD0 instead of
just 0xFF. This avoids conflicts with the start of tile syntax marker
0xFF50 when resync marker 50 is output to the bit-stream.