          of the DWT tree; the next 4 are high-pass subband components
          from level 3; then come 9 high-pass components from level 2 of
          the DWT; and finally the 17 high-pass components belonging to
          the first DWT level.  DWT normalization conventions for both
          reversible and irreversible multi-component transforms dictate
          that all high-pass subbands have a passband gain of 2, while
          low-pass subbands have a passband gain of 1.  This is why all
          but the first 5 `Sprecision' values have an extra bit -- remember
          that missing entries in the `Sprecision' and `Ssigned' arrays
          are obtained by replicating the last supplied value.
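    -- As a small illustration of the replication rule (a C++ sketch only,
       not Kakadu code), the program below expands the six supplied
       `Sprecision' values from example Aj) below to cover all 35 codestream
       components; components 0-4 (the low-pass subbands, passband gain 1)
       keep 12 bits, while components 5-34 (the high-pass subbands, passband
       gain 2) all receive the replicated 13-bit value:

        #include <cstdio>
        #include <vector>

        // Expand a supplied attribute list by replicating its last value.
        static std::vector<int> expand(std::vector<int> supplied, int num_components)
        {
            while ((int) supplied.size() < num_components)
                supplied.push_back(supplied.back());
            return supplied;
        }

        int main()
        {
            std::vector<int> sprecision = expand({12,12,12,12,12,13}, 35);
            for (int c = 0; c < (int) sprecision.size(); c++)
                std::printf("component %2d: %d bits\n", c, sprecision[c]);
            return 0;
        }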

Aj) kdu_compress -i catscan.rawl*35@524288 -o catscan.jpx -jpx_layers *
                 -jpx_space sLUM Sdims={512,512} Clayers=14 -rate 70
                 Mcomponents=35  Msigned=no  Mprecision=12
                 Sprecision=12,12,12,12,12,13  Ssigned=no,no,no,no,no,yes
                 Kextension:I2=CON  Kreversible:I2=no
                 Ksteps:I2={1,0,0,0},{1,0,0,0}  Kcoeffs:I2=-1.0,0.5
                 Mvector_size:I4=35 Mvector_coeffs:I4=2048
                 Mstage_inputs:I25={0,34}  Mstage_outputs:I25={0,34}
                 Mstage_collections:I25={35,35}
                 Mstage_xforms:I25={DWT,2,4,3,0}
                 Mnum_stages=1  Mstages=25
    -- Same as example Ai), except in this case the compression process is
       irreversible, and a custom DWT transform kernel is used, described
       by the `Kextension', `Kreversible', `Ksteps' and `Kcoeffs' parameter
       attributes, having instance index 2 (i.e., ":I2").  The DWT kernel
       used here is the Haar, having 2-tap low- and high-pass filters (a
       lifting sketch of this kernel appears at the end of this example).
    -- Note that "kdu_compress" consistently expresses bit-rate in terms
       of bits-per-pixel.  In this case, each pixel is associated with 35
       image planes, so "-rate 70" sets the maximum bit-rate to 2 bits
       per sample.
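    -- The two one-tap lifting steps declared by `Ksteps:I2' and `Kcoeffs:I2'
       can be read as follows (a plausible interpretation, shown as a
       stand-alone C++ sketch rather than Kakadu code): the first step forms
       the high-pass sample as a difference and the second forms the low-pass
       sample as an average, which is exactly the Haar kernel with the
       passband gains of 2 and 1 discussed at the top of this page.

        #include <cstdio>

        // Analysis of one even/odd input pair (x0, x1) using the lifting
        // coefficients -1.0 and 0.5 from Kcoeffs:I2.
        static void haar_analysis(double x0, double x1,
                                  double &low, double &high)
        {
            high = x1 + (-1.0) * x0;   // first lifting step:  high = x1 - x0
            low  = x0 + 0.5 * high;    // second lifting step: low = (x0 + x1)/2
        }

        int main()
        {
            double low, high;
            haar_analysis(100.0, 104.0, low, high);          // invented samples
            std::printf("low = %g, high = %g\n", low, high); // low=102, high=4
            return 0;
        }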

Ak) kdu_compress -i confocal.ppm*12@786597 -o confocal.jpx -jpx_layers *
                 -jpx_space sRGB Cblk={32,32} Cprecincts={64,64}
                 ORGgen_plt=yes Corder=RPCL Clayers=12 -rate 24
                 Mcomponents=36 Sprecision=8,8,8,9,9,9,9,9,9,9,9,9,8
                 Ssigned=no,no,no,yes
                 Kextension:I2=CON  Kreversible:I2=no
                 Ksteps:I2={1,0,0,0},{1,0,0,0}  Kcoeffs:I2=-1.0,0.5
                 Mmatrix_size:I7=9
                 Mmatrix_coeffs:I7=1,0,1.402,1,-0.344136,-0.714136,1,1.772,0
                 Mvector_size:I7=3  Mvector_coeffs:I7=128,128,128
                 Mstage_inputs:I25={0,35}  Mstage_outputs:I25={0,35}
                 Mstage_collections:I25={12,12},{24,24}
                 Mstage_xforms:I25={DWT,2,0,2,0},{MAT,0,0,0,0}
                 Mstage_inputs:I26={0,0},{12,13},{1,1},{14,15},{2,2},{16,17},
                                  {3,3},{18,19},{4,4},{20,21},{5,5},{22,23},
                                  {6,6},{24,25},{7,7},{26,27},{8,8},{28,29},
                                  {9,9},{30,31},{10,10},{32,33},{11,11},{34,35}
                 Mstage_outputs:I26={0,35}
                 Mstage_collections:I26={3,3},{3,3},{3,3},{3,3},{3,3},{3,3},
                                        {3,3},{3,3},{3,3},{3,3},{3,3},{3,3}
                 Mstage_xforms:I26={MAT,7,7,0,0},{MAT,7,7,0,0},{MAT,7,7,0,0},
                                   {MAT,7,7,0,0},{MAT,7,7,0,0},{MAT,7,7,0,0},
                                   {MAT,7,7,0,0},{MAT,7,7,0,0},{MAT,7,7,0,0},
                                   {MAT,7,7,0,0},{MAT,7,7,0,0},{MAT,7,7,0,0}
                 Mnum_stages=2  Mstages=25,26
    -- This real doozy of an example can be used to compress a sequence
       of 12 related colour images; these might be colour scans from a
       confocal microscope at consecutive focal depths, for example.  The
       original 12 colour images are found in a single file, "confocal.ppm",
       which is actually a concatenation of 12 PPM files, each of size
       786597 bytes.  12 JPX compositing layers will be created, each
       having the sRGB colour space.  In the example, two multi-component
       transform stages are used.  These stages are most easily understood
       by working backwards from the second stage.
        * The second stage has 12 transform blocks, each of which implements
          the conventional YCbCr to RGB transform, producing 12 RGB triplets
          (with appropriate offsets to make unsigned data) from the 36 input
          components to the stage.  The luminance inputs to these 12
          transform blocks are derived from outputs 0 through 11 from the
          first stage.  The chrominance inputs are derived from outputs
          12 through 35 (in pairs) from the first stage.
        * The first stage has 2 transform blocks.  The first is a DWT block
          with 2 levels, which implements the irreversible Haar (2x2)
          transform.  It synthesizes the 12 luminance components from its
          12 subband inputs, the first 3 of which are low-pass luminance
          subbands, followed by 3 high-pass luminance subbands from the
          lowest DWT level and then 6 high-pass luminance subbands from
           the first DWT level.  The chrominance components are passed
           straight through the first stage by its NULL transform block.
    -- All in all, then, this example employs the conventional YCbCr
       transform (sketched below) to exploit correlation amongst the colour
       channels in each image, while it uses a 2-level Haar wavelet transform
       to exploit correlation amongst the luminance channels of successive
       images.
    -- Try creating an image like this and viewing it with "kdu_show".  You
       will also find you can serve it up beautifully using "kdu_server" for
       a terrific remote browsing experience.
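    -- The second-stage matrix blocks are easy to check by hand.  Reading the
       nine `Mmatrix_coeffs:I7' values in row-major order (one natural
       interpretation) and adding the `Mvector_coeffs:I7' offsets of 128
       gives the conventional YCbCr to RGB transform.  The C++ sketch below
       is an illustration only (it is not Kakadu code, and the sample triplet
       is invented):

        #include <cstdio>

        int main()
        {
            // 3x3 matrix from Mmatrix_coeffs:I7, read in row-major order.
            const double M[3][3] = { { 1.0,  0.0,       1.402    },
                                     { 1.0, -0.344136, -0.714136 },
                                     { 1.0,  1.772,     0.0      } };
            const double offset[3] = { 128.0, 128.0, 128.0 }; // Mvector_coeffs:I7

            double ycc[3] = { 50.0, -10.0, 20.0 }; // invented signed Y, Cb, Cr
            double rgb[3];
            for (int r = 0; r < 3; r++)
            {
                rgb[r] = offset[r];                // offsets make the data unsigned
                for (int c = 0; c < 3; c++)
                    rgb[r] += M[r][c] * ycc[c];    // e.g. R = Y + 1.402*Cr + 128
            }
            std::printf("R=%.3f  G=%.3f  B=%.3f\n", rgb[0], rgb[1], rgb[2]);
            return 0;
        }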

kdu_maketlm
-----------
 a) kdu_maketlm -i input.j2c -o output.j2c
 b) kdu_maketlm -i input.jp2 -o output.jp2
    -- You can add TLM marker segments to an existing raw code-stream file
       or wrapped JP2 file.  This can be useful for random access into large
       compressed images which have been tiled; it is of marginal value when
       an untiled image has multiple tile-parts.
    -- Starting from v4.3, TLM information can be included directly by the
       codestream generation machinery, which saves resource-hungry file
       reading and re-writing operations.  Note, however, that the
       "kdu_maketlm" facility can often provide a more efficient TLM
       representation, or find a legal TLM representation where none can
       be determined ahead of time by the codestream generation machinery.
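    -- To see why TLM information enables random access, note that each TLM
       length (the Ptlm value) covers one complete tile-part, including its
       SOT marker segment, so a reader can locate any tile-part with a running
       sum instead of scanning the whole code-stream.  The C++ sketch below
       illustrates this with invented numbers (it is not Kakadu code, and it
       assumes the lengths have already been parsed from the main header):

        #include <cstdio>
        #include <vector>

        int main()
        {
            long main_header_end = 247;   // invented: offset of the first SOT marker
            std::vector<long> tile_part_lengths = { 41872, 39004, 40210, 38861 };
            long pos = main_header_end;
            for (size_t i = 0; i < tile_part_lengths.size(); i++)
            {
                std::printf("tile-part %lu starts at byte %ld\n",
                            (unsigned long) i, pos);
                pos += tile_part_lengths[i];   // advance by the TLM-supplied length
            }
            return 0;
        }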

kdu_v_compress
--------------
    Accepts similar arguments to `kdu_compress', but the input format
    must be a "vix" file (read the usage statement for a detailed description
    of this trivial raw video file format -- you can build a vix file by
    concatenating raw video frames with a simple text header).  The output
    format must be one of "*.mj2" or "*.mjc", where the latter is a simple
    compressed video format, developed for illustration purposes, while
    the former is the Motion JPEG2000 file format described by ISO/IEC 15444-3.

 a) kdu_v_compress -i in.vix -o out.mj2 -rate 1 -cpu
    -- Compresses to a Motion JPEG2000 file, with a bit-rate of 1 bit per pixel
       enforced over each individual frame (not including file format wrappers),
       and reports the per-frame CPU processing time.  For meaningful CPU
       times, make sure the input contains a decent number of frames (e.g.,
       10 or more).
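    -- The "-rate" budget is applied frame by frame, so the compressed size of
       each code-stream is roughly width x height x bpp / 8 bytes.  The C++
       sketch below shows the arithmetic with an assumed frame size
       (kdu_v_compress itself takes the dimensions from the vix header):

        #include <cstdio>

        int main()
        {
            const int width = 720, height = 576;   // assumed frame dimensions
            const double bits_per_pixel = 1.0;     // "-rate 1"
            double bytes_per_frame = width * height * bits_per_pixel / 8.0;
            std::printf("each frame's code-stream is limited to about %.0f bytes\n",
                        bytes_per_frame);
            return 0;
        }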
 b) kdu_v_compress -i in.vix -o out.mj2 -rate 1,0.5 -cpu -no_slope_prediction
    -- See the effect of slope prediction on compression processing time by
       comparing the CPU times reported with and without "-no_slope_prediction".

kdu_merge
---------
 a) kdu_merge -i im1.jp2,im2.jp2 -o merge.jpx
    -- Probably the simplest example of this useful tool.  Creates a
       single JPX file with two compositing layers, corresponding to the
       two input images.  Try opening `merge.jpx' in "kdu_show" and using
       the "enter" and "backspace" keys to step through the compositing
       layers.
 b) kdu_merge -i video.mj2 -o video.jpx
    -- Assigns each codestream of the input MJ2 file to a separate compositing
       layer in the output JPX file.  Try stepping through the video frames
       in "kdu_show".
 c) kdu_merge -i video.mj2 -o video.jpx -composit 300@24.0*0+1
    -- Same as above, but adds a composition box, containing instructions to
       play through the first 300 images (or as many as there are) at a
       rate of 24 frames per second.
    -- The expression "0+1" means that the first frame corresponds to
       compositing layer 0 (the first one) and that each successive frame
       is obtained by incrementing the compositing layer index by 1.
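    -- The schedule implied by "300@24.0*0+1" can be expanded mechanically;
       the C++ sketch below (an illustration only, not Kakadu code) prints
       the compositing layer and nominal start time of each frame:

        #include <cstdio>

        int main()
        {
            const int num_frames = 300;   // play through the first 300 images
            const double fps = 24.0;      // at 24 frames per second
            int layer = 0;                // "0"  -> first frame uses layer 0
            const int increment = 1;      // "+1" -> advance one layer per frame
            for (int f = 0; f < num_frames; f++, layer += increment)
                std::printf("frame %3d: layer %3d, starts at %7.4f s\n",
                            f, layer, f / fps);
            return 0;
        }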
 d) kdu_merge -i background.jp2,video.mj2 -o out.jpx
              -composit 0@0*0 150@24*1+2@(0.5,0.5,2),2+2@(2.3,3.2,1)
    -- Demonstrates a persistent background (0 for the iteration count makes
       it persistent), on top of which we write 150 frames (to be played at
       24 frames per second), each consisting of 2 compositing layers,
       overlaid at different positions and scales.  The first frame
       overlays compositing layers 1 and 2 (0 is the background), after
       which each new frame is obtained by adding 2 to the compositing
       layer indices used in the previous frame.  The odd-indexed
       compositing layers are scaled by 2 and positioned half their scaled
       width to the right and half their scaled height below the origin
       of the compositing canvas.  The others are scaled by 1 and positioned
       2.3 times their width to the right and 3.2 times their height below
       the origin (see the sketch at the end of this example).
    -- The kdu_merge utility also supports cropping of layers prior to
       composition and scaling.
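    -- The placement implied by an "@(x,y,s)" term, as described above, can
       be worked out directly: the layer is scaled by s, then shifted right
       by x times its scaled width and down by y times its scaled height.
       The C++ sketch below (not Kakadu code; the 640x480 source size is
       invented) evaluates the two placements from this example:

        #include <cstdio>

        struct Placement { double left, top, width, height; };

        static Placement place(double w, double h, double x, double y, double s)
        {
            Placement p;
            p.width  = s * w;
            p.height = s * h;
            p.left   = x * p.width;    // offset to the right of the canvas origin
            p.top    = y * p.height;   // offset below the canvas origin
            return p;
        }

        int main()
        {
            Placement a = place(640, 480, 0.5, 0.5, 2.0);  // "@(0.5,0.5,2)"
            Placement b = place(640, 480, 2.3, 3.2, 1.0);  // "@(2.3,3.2,1)"
            std::printf("odd layers : %gx%g at (%g,%g)\n",
                        a.width, a.height, a.left, a.top);
            std::printf("even layers: %gx%g at (%g,%g)\n",
                        b.width, b.height, b.left, b.top);
            return 0;
        }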
 e) kdu_merge -i im1.jp2,im2.jp2,alpha.jp2 -o out.jpx
              -jpx_layers 2:0 sRGB,alpha,1:0/0,1:0/1,1:0/2,3:0/3
                              sRGB,alpha,1:0/0,1:0/1,1:0/2,3:0/0
              -composit 0@(0,0,2),1@(0.5,0.5,1),2:(0.3,0.3,0.4,0.4)@(1.2,1.2,1)
    -- This demonstrates the creation of a single complex image from 3
       original images.  im1.jp2 and im2.jp2 contain the colour imagery,
       while alpha.jp2 is an image with 4 components, which we selectively
       associate with the other images as alpha blending channels.
       * Three custom compositing layers are created using the `-jpx_layers'
         command.  The first just consists of the first compositing layer
         from the second image file (note that file numbers all start from 1
          while everything else starts from 0) -- of course, JP2 files have
         only one compositing layer.  The second custom compositing layer
         has four channels (3 sRGB channels and 1 alpha channel), extracted
         from image components 0-2 of codestream 0 in file 1 and image
         component 3 (the 4'th one) of codestream 0 in file 3 (the alpha
         image).  The relevant codestream colour transforms are applied
         automatically during the rendering process, so that even though the
         components have been compressed using the codestream ICT, they may
         be treated as RGB components.  The third compositing layer is
         similar to the second, but it uses the second component of
         the alpha image for its alpha blending.
       * One composited image is created by combining the 3 layers.  The
         first layer is scaled by 2 and placed at the origin of the
         composition canvas.  The second layer is placed over this, scaled
         by 1 and shifted by half its height and width, below and to the
         right of the composition canvas.  The third layer is placed on top
          after first cropping it (removing 30% of its width from the left and
          30% of its height from the top, and preserving 40% of its original
          width and height) and then shifting it by 1.2 times its cropped
          height and width.
    -- It is worth noting that the final image does not contain multiple
       copies of any of the original imagery; each original image codestream
       is copied once into the merged image and then referenced from
       custom compositing layer header boxes, which are in turn referenced
       from the composition box.  This avoids inefficiencies in the file
       representation and also avoids computational inefficiencies during
       rendering.  Each codestream is opened only once within "kdu_show"
       (actually inside `kdu_region_compositor') but may be used by
       multiple rendering contexts.  One interesting side effect of this is
       that if you attach a metadata label to one of the codestreams in
       the merged file it will appear in all elements of the composited
       result which use that codestream.  You can attach such metadata
