/* line_block_ifc.h */
/*****************************************************************************/
/* Authors: David Taubman and Jim Spring */
/* Version: V2.0 */
/* Last Revised: 9/22/98 */
/*****************************************************************************/
#ifndef LINE_BLOCK_IFC_H
#define LINE_BLOCK_IFC_H
#include <stdio.h>
#include <limits.h>
/* ========================================================================= */
/* ------------------------ Introductory Comments -------------------------- */
/* ========================================================================= */
/**** PURPOSE ****
This header file contains interface definitions which clearly delineate the
roles which should be played by the transform, quantizer, and coder in
memory constrained applications. Memory constrained applications are those
applications in which the image which is to be compressed is too large to
store in memory. During compression, this means that the image will only
be seen once and that a suitable bit-stream must be generated in one pass,
using only modest intermediate buffering. Although the VM does not currently
offer direct support for this mode of operation, it is highly desirable to be
able to verify the performance of different algorithms under these conditions.
For this reason, the JPEG2000 coding efficiency subgroup decided in Copenhagen
to introduce line- and block-based wavelet transform techniques into the next
release of the VM and to construct a software framework to support line- and
block-based coding algorithms, so that these could readily be incorporated
into the VM after the next international meeting in Los Angeles, should that
be found to be appropriate. */
/**** OBJECTS in ANSI C ****
The definitions in this file constitute the above-mentioned framework for
line- and block-based coding. Because compression and decompression must
work incrementally, rather than buffering up the entire image between each
stage, local state information must be maintained by the transform, quantizer
and coding stages. This immediately suggests an object-oriented solution.
Rather than introducing C++ concepts with which some readers may not be
familiar, we stick entirely to the C programming language, but we adopt a
number of conventions which should be extremely helpful to all concerned.
Conceptually, the transform, the quantizer and the coder are each objects,
which simply means that they are represented by structures which hold their
state and that data flows between them via well-defined function calls and
not by direct manipulation of the individual fields in the structures. In
fact, the contents of the structures should remain entirely private to the
individual implementations of the different stages in the compression
algorithm. Consequently, in this header file we define only the form of
the function calls supported by each "object" and the type definitions for
the relevant structures. We refer to the set of functions defined for each
object as its "interface" since all interaction takes place via these
functions.
We illustrate the conventions with an example, before providing the actual
interface definitions for the suggested objects. Consider the wavelet
transform analysis stage. The analysis task is encapsulated within a
single object, which we will call "analysis_obj". This is just a structure
whose definition is as follows:
typedef
struct analysis_obj {
    initialize_func_defn initialize;
    push_func_defn       push_line;
    terminate_func_defn  terminate;
  } analysis_obj, *analysis_ref;
When the compression program is started, it first calls a function provided
by the implementor of the wavelet analysis stage which creates a structure
of this type and returns a reference (pointer) to the structure, i.e. something
of type `analysis_ref'. In practice, this function allocates a structure which
is larger than that defined above so as to be able to store all relevant
private state information. That is to say, a separate source file which wraps
up the wavelet analysis implementation might include a private definition of
a larger structure of the form,
typedef
struct my_analysis_obj {
    analysis_obj base;
    ... // Private data fields
    ... // Private data fields
  } my_analysis_obj;
The externally visible portion of this structure contains only a set of
function pointers. In this case, the `initialize', `push_line' and `terminate'
functions constitute the entire interface. The definitions of these functions
are provided through separate typedef statements, together with comments
which carefully describe the roles played by each interface function. Each
interface function takes a reference to the object (structure) itself as its
first argument so that the state information can be readily retrieved. Thus,
for example, the `terminate' function's definition would look like this:
typedef void (*terminate_func_defn)(analysis_ref self);
We apologize to those of you who are not used to working with function
pointers in C. The C++ language was created in part to overcome some of this
ugliness, but that is of no help to us here.
It is intended that ALL implementations of ALL "objects" should provide
support for ALL interface functions. Thus, for example, a line-based
coding algorithm must support the `push_block' interface function, as
well as the `push_line' interface function. Similarly, a block-based coder
cannot simply support `push_block', but must also support `push_line'. There
are a variety of reasons for imposing this requirement, apart from the fact
that this is the normal interpretation of a software interface. Firstly,
this is the only way to support interchangeability between the different
components of the compression and/or decompression system. Secondly, this
is the only way that we can design test conditions which all algorithms
must follow in reporting results. It is easy to see how problems would
arise if one coding algorithm could only report results when working with
a particular transform and/or quantizer implementation, which was not
supported by a different coding algorithm.
The interface definitions have been carefully designed to accommodate the
needs of all algorithms (as far as we are aware), while minimizing the
burden on individual algorithm proposers to implement these interfaces.
The needs of line-, block- and tree-based coding and wavelet transform
algorithms, as well as both scalar and TCQ quantization, were all considered
during the interface design. Of course, the interface definitions may still
need to migrate as new ideas come to light, but the definitions should be
allowed to change only as a course of last resort; the interface should
not be changed just to suit the quirks of one particular implementation
(hackers beware)!! */
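/* To make the conventions above concrete, the following is a minimal,
self-contained sketch of how an implementation source file might allocate the
larger private structure and populate the interface function pointers. The
names `analysis_create', `my_terminate' and the `lines_pushed' field are
purely illustrative and not part of the interface; the interface typedefs are
simplified here to a single function for brevity. */

```c
#include <stdio.h>
#include <stdlib.h>

/* Simplified public interface, as described above. */
struct analysis_obj;
typedef struct analysis_obj *analysis_ref;
typedef void (*terminate_func_defn)(analysis_ref self);

typedef struct analysis_obj {
    terminate_func_defn terminate;
  } analysis_obj;

/* Private, implementation-specific extension of the public structure. */
typedef struct my_analysis_obj {
    analysis_obj base;  /* Must come first so the pointer casts are safe. */
    int lines_pushed;   /* Illustrative private state field. */
  } my_analysis_obj;

/* Each interface function recovers the private state by casting `self'. */
static void my_terminate(analysis_ref self)
{
  my_analysis_obj *state = (my_analysis_obj *) self;
  printf("terminating after %d lines\n", state->lines_pushed);
  free(state);
}

/* Constructor: allocates the larger private structure, initializes the
   function pointers, and returns a reference to the public portion. */
analysis_ref analysis_create(void)
{
  my_analysis_obj *obj = malloc(sizeof(my_analysis_obj));
  if (obj == NULL)
    return NULL;
  obj->base.terminate = my_terminate;
  obj->lines_pushed = 0;
  return &obj->base;
}
```

/* A caller then interacts with the object only through its function
pointers, e.g. `analysis_ref a = analysis_create(); ... a->terminate(a);'. */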
/**** PUSH AND PULL MODELS ****
The interface definitions laid out in this header file follow a common and
eminently justifiable paradigm for compression software. The various stages of
the compression process uniformly adhere to a so-called "push" model: image
samples are pushed into the wavelet analysis stage one line at a time; the
wavelet analysis stage pushes subband samples to the quantization stage as
soon as they can possibly be made available (of course, it doesn't wait to
transform the entire image); likewise, the quantization stage pushes quantized
symbol indices to the encoding stage, as soon as they become available, which
ultimately pushes compressed bits to a `bitstream_sink' object. In practice,
this process of pushing information downstream as soon as it is available is
accomplished by calling a `push' function in the relevant object's interface.
Part of the goal here is to correctly apportion responsibility for memory
consumption amongst the different stages in the overall compression system.
For example, the order in which data is pushed to the encoder is always that
which minimizes the resource consumption in the wavelet analysis stage; this
is the sequence which hands subband samples off as soon as they can possibly
be made available. If this sequence does not agree with the order in which
the coder would like to consume subband samples then it must incur the
relevant buffering overhead. Of course, in individual implementations there
is no reason why an algorithm cannot wait for all samples in the entire image
to be pushed in via its `push' call, buffering them up to be processed all at
once internally. In this way, existing implementations which do not follow
the push paradigm can be readily incorporated into the framework. Ultimately,
however, the implementation should be modified to take advantage of whatever
incremental processing capabilities it advertises so that resource
consumption can be fairly compared.
In the decompressor, the dual of the compressor's push paradigm is a
pull model. Image samples are pulled out of the wavelet synthesis stage one
line at a time by means of a call to the `synthesis' object's `pull_line'
function; in order to satisfy this request, the `synthesis' stage will need
to pull subband samples from the `dequantizer', which in turn pulls samples
out of the `decoder'; ultimately, the `decoder' pulls the bit-stream from a
`bitstream_source' object, which will usually be equivalent to reading from
the compressed file. */
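/* The compressor's push paradigm can be sketched as follows. This is a
hypothetical outer loop, not part of the interface: the `push_line' and
`terminate' members follow the conventions above, `compress_image' and the
dummy implementation are illustrative only, and the state field is kept
public here purely for the sake of a short example. */

```c
#include <stdio.h>

typedef struct analysis_obj analysis_obj, *analysis_ref;
typedef void (*push_func_defn)(analysis_ref self,
                               const short *line, int width);
typedef void (*terminate_func_defn)(analysis_ref self);
struct analysis_obj {
    push_func_defn      push_line;
    terminate_func_defn terminate;
    int lines_seen;  /* would normally be private state */
  };

/* Dummy stage: a real implementation would run the wavelet analysis and
   push subband samples downstream as soon as they become available. */
static void dummy_push(analysis_ref self, const short *line, int width)
{ (void) line; (void) width; self->lines_seen++; }

static void dummy_terminate(analysis_ref self)
{ printf("flushed after %d lines\n", self->lines_seen); }

/* The compressor's outer loop: image samples are pushed into the analysis
   stage one line at a time; `terminate' flushes any buffered data. */
void compress_image(analysis_ref analysis, const short *samples,
                    int width, int height)
{
  int row;
  for (row = 0; row < height; row++)
    analysis->push_line(analysis, samples + row * width, width);
  analysis->terminate(analysis);
}
```

/* The decompressor's pull model is the exact dual: the application calls
`pull_line' on the synthesis object once per output line, and each stage
pulls from the stage upstream of it only as needed. */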
/**** CONCLUDING REMARKS ****
We hope that the comments provided in the remainder of this header file will
clearly identify the role played by each object and its interface functions.
For further clarification, example implementations of `main' are
provided, quite independently of the existing VM, so as to simplify
understanding of the framework. Also, both example and realistic
implementations of all interface functions are being made available
separately, both to demonstrate the usefulness of the framework and to
provide a reference point for proposers of different algorithms. */
/* ========================================================================= */
/* -------------------- IMPLEMENTATION PRECISION --------------------------- */
/* ========================================================================= */
#define IMPLEMENTATION_PRECISION 16 /* Must be one of 16 or 32 currently. */
#if (IMPLEMENTATION_PRECISION == 16)
typedef short int ifc_int;
#elif (IMPLEMENTATION_PRECISION == 32)
# if (INT_MAX == 2147483647)
typedef int ifc_int;
# elif (LONG_MAX == 2147483647)
typedef long int ifc_int;
# else
# error "Platform does not support 32 bit integers!"
# endif
#else
# error "IMPLEMENTATION_PRECISION must be one of 16 or 32!"
#endif
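/* An implementation may wish to verify at compile time that `ifc_int'
really delivers the configured precision. The following is one possible
sketch (not part of the interface; the macro and typedef are repeated here
only to keep the example self-contained): a negative array size forces a
compilation error if `ifc_int' is too narrow. */

```c
#include <limits.h>

#define IMPLEMENTATION_PRECISION 16
typedef short int ifc_int;  /* as selected above for 16-bit precision */

/* Compile-time check: array size is -1 (an error) if `ifc_int' cannot
   hold IMPLEMENTATION_PRECISION bits. */
typedef char ifc_int_precision_check
  [(sizeof(ifc_int) * CHAR_BIT >= IMPLEMENTATION_PRECISION) ? 1 : -1];
```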
/* Together, the IMPLEMENTATION_PRECISION constant and the `ifc_int'
data type define the number of bits per sample which are used to
exchange image sample values, Wavelet transform coefficients and
quantization indices. 16-bit precision is generally sufficient for
8-bit images and probably 12-bit images; however, 32-bit precision
is required to fully support imagery with 16 or more bits per sample.
The interface definitions are constructed in such a way as to ensure
that lower and higher precision implementations should generate and
consume the same bit-stream, except that a bit-stream which was generated
with a higher implementation precision may not be fully decodable within
an implementation which uses only 16-bit precision. */
#endif /* LINE_BLOCK_IFC_H */