usage_examples.txt
a) There is only one multi-component transform stage, whose instance
index is 25 (this is the I25 suffix found on the descriptive
attributes for this stage). The value 25 is entirely arbitrary. I
picked it to make things interesting. There can, in general, be
any number of transform stages.
b) The single transform stage consists of only one transform block,
defined by the `Mstage_xforms:I25' attribute -- there can be
any number of transform blocks, in general.
c) This block takes 35 input components and produces 35 output
components, as indicated by the `Mstage_collections:I25' attribute.
d) The stage inputs and stage outputs are not permuted in this example;
they are enumerated as 0-34 in each case, as given by the
`Mstage_inputs:I25' and `Mstage_outputs:I25' attributes.
e) The transform block itself is implemented using a DWT, whose kernel
ID is 1 (this is the Part-1 5/3 reversible DWT kernel). Block
outputs are added to the offset vector whose instance index is 4
(as given by `Mvector_size:I4' and `Mvector_coeffs:I4') and the
DWT has 3 levels. The final field in the `Mstage_xforms' record
is set to 0, meaning that the canvas origin for the multi-component
DWT is to be taken as 0.
f) Since a multi-component transform is being used, the precision
and signed/unsigned properties of the final decompressed (or
original compressed) image components are given by `Mprecision'
and `Msigned', while their number is given by `Mcomponents'.
g) The `Sprecision' and `Ssigned' attributes record the precision
and signed/unsigned characteristics of what we call the codestream
components -- i.e., the components which are obtained by block
decoding and spatial inverse wavelet transformation. In this
case, the first 5 are low-pass subband components, at the bottom
of the DWT tree; the next 4 are high-pass subband components
from level 3; then come 9 high-pass components from level 2 of
the DWT; and finally the 17 high-pass components belonging to
the first DWT level. DWT normalization conventions for both
reversible and irreversible multi-component transforms dictate
that all high-pass subbands have a passband gain of 2, while
low-pass subbands have a passband gain of 1. This is why all
but the first 5 `Sprecision' values have an extra bit -- remember
that missing entries in the `Sprecision' and `Ssigned' arrays
are obtained by replicating the last supplied value.
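-- The subband counts and precisions quoted in g) above can be checked with
   a few lines of Python. The sketch below is purely illustrative (it is not
   part of any Kakadu tool); it just reproduces the dyadic splitting
   arithmetic and the one-extra-bit rule for high-pass subbands.

     def component_split(n, levels):
         # Dyadic 1-D split: at each level the low-pass band keeps the
         # ceiling half and is split again; the high-pass band keeps the floor.
         low, highs = n, {}
         for level in range(1, levels + 1):
             highs[level] = low // 2
             low -= highs[level]
         return low, highs

     low, highs = component_split(35, 3)
     print(low, highs)            # 5 {1: 17, 2: 9, 3: 4}

     # High-pass subbands have passband gain 2, hence one extra bit:
     sprecision = [12] * low + [13] * (35 - low)
     print(sprecision[:7])        # [12, 12, 12, 12, 12, 13, 13]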
Aj) kdu_compress -i catscan.rawl*35@524288 -o catscan.jpx -jpx_layers *
-jpx_space sLUM Sdims={512,512} Clayers=14 -rate 70
Mcomponents=35 Msigned=no Mprecision=12
Sprecision=12,12,12,12,12,13 Ssigned=no,no,no,no,no,yes
Kextension:I2=CON Kreversible:I2=no
Ksteps:I2={1,0,0,0},{1,0,0,0} Kcoeffs:I2=-1.0,0.5
Mvector_size:I4=35 Mvector_coeffs:I4=2048
Mstage_inputs:I25={0,34} Mstage_outputs:I25={0,34}
Mstage_collections:I25={35,35}
Mstage_xforms:I25={DWT,2,4,3,0}
Mnum_stages=1 Mstages=25
-- Same as example Ai), except that in this case the compression process
is irreversible and a custom DWT transform kernel is used, described
by the `Kextension', `Kreversible', `Ksteps' and `Kcoeffs' parameter
attributes, all having instance index 2 (i.e., ":I2"). The DWT kernel
used here is the Haar, which has 2-tap low- and high-pass filters (see
the sketch after these notes).
-- Note that "kdu_compress" consistently expresses bit-rate in terms
of bits-per-pixel. In this case, each pixel is associated with 35
image planes, so "-rate 70" sets the maximum bit-rate to 2 bits
per sample.
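-- The lifting structure that `Ksteps:I2' and `Kcoeffs:I2' describe can be
   made concrete with a small Python sketch. The predict/update pairing
   below follows the usual lifting convention and is only meant to show
   what the coefficients -1.0 and 0.5 do; it is not a transcript of
   Kakadu's internal implementation.

     def haar_analysis(x):
         # Step 1 (coefficient -1.0): high-pass = odd sample minus even sample.
         # Step 2 (coefficient  0.5): low-pass  = even sample plus half the high-pass.
         even, odd = x[0::2], x[1::2]
         high = [o - 1.0 * e for o, e in zip(odd, even)]
         low = [e + 0.5 * h for e, h in zip(even, high)]
         return low, high          # passband gains 1 and 2, as in note g)

     def haar_synthesis(low, high):
         # Undo the lifting steps in reverse order and re-interleave.
         even = [l - 0.5 * h for l, h in zip(low, high)]
         odd = [h + 1.0 * e for h, e in zip(high, even)]
         return [s for pair in zip(even, odd) for s in pair]

     x = [10.0, 14.0, 7.0, 3.0]
     low, high = haar_analysis(x)
     print(low, high)                         # [12.0, 5.0] [4.0, -4.0]
     assert haar_synthesis(low, high) == x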
Ak) kdu_compress -i confocal.ppm*12@786597 -o confocal.jpx -jpx_layers *
-jpx_space sRGB Cblk={32,32} Cprecincts={64,64}
ORGgen_plt=yes Corder=RPCL Clayers=12 -rate 24
Mcomponents=36 Sprecision=8,8,8,9,9,9,9,9,9,9,9,9,8
Ssigned=no,no,no,yes
Kextension:I2=CON Kreversible:I2=no
Ksteps:I2={1,0,0,0},{1,0,0,0} Kcoeffs:I2=-1.0,0.5
Mmatrix_size:I7=9
Mmatrix_coeffs:I7=1,0,1.402,1,-0.344136,-0.714136,1,1.772,0
Mvector_size:I7=3 Mvector_coeffs:I7=128,128,128
Mstage_inputs:I25={0,35} Mstage_outputs:I25={0,35}
Mstage_collections:I25={12,12},{24,24}
Mstage_xforms:I25={DWT,2,0,2,0},{MAT,0,0,0,0}
Mstage_inputs:I26={0,0},{12,13},{1,1},{14,15},{2,2},{16,17},
{3,3},{18,19},{4,4},{20,21},{5,5},{22,23},
{6,6},{24,25},{7,7},{26,27},{8,8},{28,29},
{9,9},{30,31},{10,10},{32,33},{11,11},{34,35}
Mstage_outputs:I26={0,35}
Mstage_collections:I26={3,3},{3,3},{3,3},{3,3},{3,3},{3,3},
{3,3},{3,3},{3,3},{3,3},{3,3},{3,3}
Mstage_xforms:I26={MATRIX,7,7,0,0},{MATRIX,7,7,0,0},
{MATRIX,7,7,0,0},{MATRIX,7,7,0,0},
{MATRIX,7,7,0,0},{MATRIX,7,7,0,0},
{MATRIX,7,7,0,0},{MATRIX,7,7,0,0},
{MATRIX,7,7,0,0},{MATRIX,7,7,0,0},
{MATRIX,7,7,0,0},{MATRIX,7,7,0,0}
Mnum_stages=2 Mstages=25,26
-- This real doozy of an example can be used to compress a sequence
of 12 related colour images; these might be colour scans from a
confocal microscope at consecutive focal depths, for example. The
original 12 colour images are found in a single file, "confocal.ppm",
which is actually a concatenation of 12 PPM files, each of size
786597 bytes. 12 JPX compositing layers will be created, each
having the sRGB colour space. In the example, two multi-component
transform stages are used. These stages are most easily understood
by working backwards from the second stage.
* The second stage has 12 transform blocks, each of which implements
the conventional YCbCr to RGB transform, producing 12 RGB triplets
(with appropriate offsets to make unsigned data) from the 36 input
components to the stage. The luminance inputs to these 12
transform blocks are derived from outputs 0 through 11 from the
first stage. The chrominance inputs are derived from outputs
12 through 35 (in pairs) from the first stage.
* The first stage has 2 transform blocks. The first is a DWT block
with 2 levels, which implements the irreversible Haar (2x2)
transform. It synthesizes the 12 luminance components from its
12 subband inputs, the first 3 of which are low-pass luminance
subbands, followed by 3 high-pass luminance subbands from the
second (deepest) DWT level and then 6 high-pass luminance subbands
from the first DWT level. The chrominance components are passed
straight through the first stage by its NULL transform block.
-- All in all, then, this example employs the conventional YCbCr
transform to exploit correlation amongst the colour channels in
each image, while it uses a 2 level Haar wavelet transform to
exploit correlation amongst the luminance channels of successive
images.
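-- The arithmetic performed by each `{MATRIX,7,7,0,0}' block can be sketched
   in a few lines of Python (with NumPy): multiply a Y, Cb, Cr triplet by the
   3x3 matrix from `Mmatrix_coeffs:I7', then add the offsets from
   `Mvector_coeffs:I7'. The sample triplet and the signed level-shift
   convention used here are illustrative assumptions, not values taken from
   the file.

     import numpy as np

     M = np.array([[1.0,  0.0,       1.402],        # Mmatrix_coeffs:I7,
                   [1.0, -0.344136, -0.714136],     # row-major order
                   [1.0,  1.772,     0.0]])
     offset = np.array([128.0, 128.0, 128.0])       # Mvector_coeffs:I7

     ycbcr = np.array([-38.0, -20.0, 35.0])         # hypothetical signed Y, Cb, Cr
     rgb = M @ ycbcr + offset                       # matrix first, then offsets
     print(np.round(rgb, 2))                        # [139.07  71.89  54.56]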
-- Try creating an image like this and viewing it with "kdu_show". You
will also find you can serve it up beautifully using "kdu_server" for
a terrific remote browsing experience.
kdu_maketlm
-----------
a) kdu_maketlm input.j2c output.j2c
b) kdu_maketlm input.jp2 output.jp2
-- You can add TLM marker segments to an existing raw code-stream file
or wrapped JP2 file. This can be useful for random access into large
compressed images which have been tiled; it is of marginal value when
an untiled image has multiple tile-parts.
-- Starting from v4.3, TLM information can be included directly by the
codestream generation machinery, which saves resource-hungry file
reading and re-writing operations. Note, however, that the
"kdu_maketlm" facility can often provide a more efficient TLM
representation, or find a legal TLM representation where none can
be determined ahead of time by the codestream generation machinery.
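-- To see why TLM marker segments help, the following Python sketch walks the
   main header of a raw codestream and prints the tile-part lengths recorded
   in any TLM segments it finds; a reader can use these lengths to seek
   directly to a tile of interest. The marker layout follows JPEG 2000
   Part 1 (Annex A), but treat this as a simplified illustration rather
   than a conformance-grade parser.

     import struct, sys

     def print_tlm_entries(path):
         with open(path, "rb") as f:
             assert f.read(2) == b"\xff\x4f"                   # SOC marker
             while True:
                 marker = f.read(2)
                 if len(marker) < 2 or marker == b"\xff\x90":  # SOT: main header done
                     break
                 seg_len, = struct.unpack(">H", f.read(2))
                 body = f.read(seg_len - 2)
                 if marker != b"\xff\x55":                     # not a TLM segment
                     continue
                 ztlm, stlm = body[0], body[1]
                 st = (stlm >> 4) & 0x3                        # bytes of tile index (0, 1, 2)
                 sp = 4 if (stlm & 0x40) else 2                # bytes per tile-part length
                 pos = 2
                 while pos < len(body):
                     tile = int.from_bytes(body[pos:pos+st], "big") if st else None
                     part_len = int.from_bytes(body[pos+st:pos+st+sp], "big")
                     pos += st + sp
                     print(f"TLM {ztlm}: tile {tile}, tile-part length {part_len}")

     if __name__ == "__main__":
         print_tlm_entries(sys.argv[1])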
kdu_v_compress
--------------
Accepts similar arguments to `kdu_compress', but the input format
must be a "vix" file (read the usage statement for a detailed description
of this trivial raw video file format -- you can build a vix file by
concatenating raw video frames together with a simple text header). The
output format must be "*.mj2" or "*.mjc", where the latter is a simple
compressed video format, developed for illustration purposes, while the
former is the Motion JPEG2000 file format described by ISO/IEC 15444-3.
a) kdu_v_compress -i in.vix -o out.mj2 -rate 1 -cpu
-- Compresses to a Motion JPEG2000 file, with a bit-rate of 1 bit per pixel
enforced over each individual frame (not including file format wrappers),
and reports the per-frame CPU processing time. For meaningful CPU
times, make sure the input contains a decent number of frames (e.g.,
10 or more).
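-- As a rough guide to what "-rate 1" means for each frame, the compressed
   size budget is just the bits-per-pixel figure times the frame area. The
   720x576 frame size in this little Python sketch is an assumed example;
   the real dimensions come from the vix header.

     rate_bpp = 1.0                      # value passed to -rate
     width, height = 720, 576            # assumed frame dimensions
     frame_budget_bytes = rate_bpp * width * height / 8
     print(frame_budget_bytes)           # 51840.0 bytes per compressed frame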
b) kdu_v_compress -i in.vix -o out.mj2 -rate 1,0.5 -cpu -no_slope_prediction
-- See the effect of slope prediction on compression processing time.
kdu_merge
---------
a) kdu_merge -i im1.jp2,im2.jp2 -o merge.jpx
-- Probably the simplest example of this useful tool. Creates a
single JPX file with two compositing layers, corresponding to the
two input images. Try opening `merge.jpx' in "kdu_show" and using
the "enter" and "backspace" keys to step through the compositing
layers.
b) kdu_merge -i video.mj2 -o video.jpx
-- Assigns each codestream of the input MJ2 file to a separate compositing
layer in the output JPX file. Try stepping through the video frames
in "kdu_show".
c) kdu_merge -i video.mj2 -o video.jpx -composit 300@24.0*0+1
-- Same as above, but adds a composition box, containing instructions to
play through the first 300 images (or as many as there are) at a
rate of 24 frames per second.
-- The expression "0+1" means that the first frame corresponds to
compositing layer 0 (the first one) and that each successive frame
is obtained by incrementing the compositing layer index by 1.
d) kdu_merge -i background.jp2,video.mj2 -o out.jpx
-composit 0@0*0 150@24*1+2@(0.5,0.5,2),2+2@(2.3,3.2,1)
-- Demonstrates a persistent background (0 for the iteration count makes
it persistent), on top of which we write 150 frames (to be played at
24 frames per second), each consisting of 2 compositing layers,
overlaid at different positions and scales. The first frame
overlays compositing layers 1 and 2 (0 is the background), after
which each new frame is obtained by adding 2 to the compositing
layer indices used in the previous frame. The odd-indexed
compositing layers are scaled by 2 and positioned half their scaled
width to the right and half their scaled height below the origin
of the compositing canvas. The others are scaled by 1 and positioned
2.3 times their width to the right and 3.2 times their height below
the origin. The sketch after these notes enumerates the resulting
layer indices frame by frame.
-- The kdu_merge utility also supports cropping of layers prior to
composition and scaling.
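-- The following Python sketch (purely illustrative -- it does not parse the
   actual `-composit' syntax) enumerates the frame-to-layer mapping described
   in example d): a persistent background layer 0, plus two overlaid layers
   per frame whose indices start at 1 and 2 and advance by 2 each frame.

     background_layer = 0
     frames = []
     for frame in range(150):
         scaled_by_2 = 1 + 2 * frame       # "1+2": start at layer 1, step by 2
         scaled_by_1 = 2 + 2 * frame       # "2+2": start at layer 2, step by 2
         frames.append((background_layer, scaled_by_2, scaled_by_1))

     print(frames[0])      # (0, 1, 2)     -- first frame
     print(frames[1])      # (0, 3, 4)     -- second frame
     print(frames[-1])     # (0, 299, 300) -- 150th and last frame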
e) kdu_merge -i im1.jp2,im2.jp2,alpha.jp2 -o out.jpx
-jpx_layers 2:0 sRGB,alpha,1:0/0,1:0/1,1:0/2,3:0/3
sRGB,alpha,1:0/0,1:0/1,1:0/2,3:0/0
-composit 0@(0,0,2),1@(0.5,0.5,1),2:(0.3,0.3,0.4,0.4)@(1.2,1.2,1)
-- This demonstrates the creation of a single complex image from 3
original images. im1.jp2 and im2.jp2 contain the colour imagery,
while alpha.jp2 is an image with 4 components, which we selectively
associate with the other images as alpha blending channels.
* Three custom compositing layers are created using the `-jpx_layers'