       (otherwise, it is hard to efficiently represent or use the PLT
       marker information); and 3) the use of relatively small precincts.
       The additional "ORGtparts=R" attribute introduces tile-part headers
       immediately before each resolution level and locates the packet length
       information with the header of the tile-part to which the packets
       belong.  This has the effect of delaying the loading and parsing of
       packet length identifiers (hundreds of thousands of packets were
       generated in the 500 MByte image example) until an interactive
       viewer or browser requests the relevant resolution.
s) kdu_compress -i small.pgm -o small.jp2 -rate 1 Clayers=5 -no_info
   -- The `-no_info' option prevents Kakadu from including a comment (COM)
      marker segment in the code-stream to identify the rate-distortion slope
      and size associated with each quality layer.  This information is
      generated by default, starting from v3.3, since it allows rendering
      and serving applications to customize their behaviour to the properties
      of the image.  The only reason to turn off this feature is if you
      are processing very small images and are interested in minimizing the
      size of the code-stream.
t) kdu_compress -i massive.ppm -o massive.jp2 -rate -,0.001 Clayers=28
                Creversible=yes Clevels=8 Corder=PCRL ORGgen_plt=yes
                Cprecincts={256,256},{256,256},{128,128},{64,128},{32,128},
                           {16,128},{8,128},{4,128},{2,128} -flush_period 1024
   -- You might use this type of command to compress a really massive image,
      e.g. 64Kx64K or larger, without requiring the use of tiles.  The
      code-stream is incrementally flushed out using the `-flush_period'
      argument to indicate that an attempt should be made to apply incremental
      rate control procedures and flush as much of the generated data to the
      output file as possible, roughly every 1024 lines.  The result is that
      you will only need about 1000*L bytes of memory to perform all
      relevant processing and code-stream management, where L is the image
      width.  It follows that a computer with 256MBytes of RAM could
      losslessly compress an image measuring as much as 256Kx256K without
      resorting to vertical tiling.  The resulting code-stream can be
      efficiently served up to a remote client using `kdu_server'.
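   -- As a rough illustration of the 1000*L rule of thumb quoted above, the
      following standalone C++ sketch (an illustration only, not part of
      Kakadu; the constant is just the ballpark figure from this example)
      estimates the working memory for incrementally flushed compression:

          #include <cstdio>

          // Rough working-memory estimate for incrementally flushed
          // compression, using the ~1000 bytes per column of image width
          // quoted above.  The constant is only a ballpark; actual usage
          // depends on precinct sizes, code-block sizes and components.
          static double estimate_memory_mbytes(double image_width)
          {
            const double bytes_per_column = 1000.0;  // rule of thumb
            return (bytes_per_column * image_width) / (1024.0 * 1024.0);
          }

          int main()
          {
            // The 64K-wide and 256K-wide images discussed in example (t)
            printf("64K-wide image : ~%.0f MB\n",
                   estimate_memory_mbytes(65536.0));
            printf("256K-wide image: ~%.0f MB\n",
                   estimate_memory_mbytes(262144.0));
            return 0;  // prints roughly 62 MB and 250 MB respectively
          }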
u) kdu_compress -i im32.bmp -o im32.jp2 -jp2_alpha -jp2_box xml.box
   -- Demonstrates the fact that "kdu_compress" can read 32-bit BMP files
      and that you can tell it to regard the fourth component as an alpha
      channel, to be marked as such in the JP2 header.  The "kdu_show"
      application ignores alpha channels only because alpha blending is
      not uniformly supported across the various WIN32 platforms.  The
      Java demo application "KduRender.java" will use an image's alpha
      channel, if any, to customize the display.
   -- The example also demonstrates the inclusion of additional meta-data
      within the file.  Consult the usage statement for more on the structure
      of the files supplied with the `-jp2_box' argument.  To reveal the
      meta-data structure of a JP2 file, use "kdu_show"'s new "meta-show"
      capability, accessed via the `m' accelerator or the view menu.
v) kdu_compress -i im.ppm -o im.jpx -jpx_space ROMMRGB
   -- demonstrates the generation of a true JPX file.
   -- demonstrates the fact that any of the JPX enumerated colour space
      descriptions can now be used; this assumes, of course, that the input
      image really does have the nominated colour representation (ROMM RGB
      in this case).
   -- you can actually provide multiple colour spaces now, using `-jp2_space'
      and/or `-jpx_space', with the latter allowing you to provide
      precedence information to indicate preferences for readers which are
      able to interpret more than one of the representations.
w) kdu_compress -i frag1.pgm -o massive.jp2 Creversible=yes
                 Clevels=12 Stiles={32768,32768} Clayers=30
                 -rate -,0.0000001 Cprecincts={256,256},{256,256},{128,128}
                 Corder=RPCL ORGgen_plt=yes ORGtparts=R Cblk={32,32}
                 ORGgen_tlm=13 -frag 0,0,1,1 Sdims={1500000,2300000}
   kdu_compress -i frag2.pgm -o massive.jp2 Creversible=yes
                 Clevels=12 Stiles={32768,32768} Clayers=30
                 -rate -,0.0000001 Cprecincts={256,256},{256,256},{128,128}
                 Corder=RPCL ORGgen_plt=yes ORGtparts=R Cblk={32,32}
                 ORGgen_tlm=13 -frag 0,1,1,1
   kdu_compress -i frag3.pgm -o massive.jp2 Creversible=yes
                 Clevels=12 Stiles={32768,32768} Clayers=30
                 -rate -,0.0000001 Cprecincts={256,256},{256,256},{128,128}
                 Corder=RPCL ORGgen_plt=yes ORGtparts=R Cblk={32,32}
                 ORGgen_tlm=13 -frag 0,2,1,1
   ...
   -- demonstrates the compression of a massive image (about 3.5 Tera-pixels
      in this case) in fragments.  Each fragment represents a whole number of
      tiles (in this case only one tile, each of which contains 1 Giga-pixel)
      from the entire canvas.  The canvas dimensions must be explicitly
      given so that the fragmented generation process can work correctly
      (a sketch that enumerates the fragment specifications appears after
      this example).
   -- To view the codestream produced at any intermediate step, after
      compressing some initial number of fragments, you can use
      "kdu_expand" or "kdu_show".  Note, however, that while this will work
      with Kakadu, you might not be able to view a partial codestream using
      other manufacturers' tools, since the codestream will not generally
      be legal until all fragments have been compressed.
   -- To understand more about fragmented compression, see the usage statement
      for the `-frag' argument in "kdu_compress" or, for a thorough
      picture, you can check out the definition of `kdu_codestream::create'.
   -- In this example, the codestream generation machinery itself produces
      TLM (tile-part-length) marker segments.  This is done by selectively
      overwriting space initially reserved for TLM marker segments in the
      main header.  TLM information makes it easier to efficiently access
      selected regions of a tiled image.
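   -- The fragment grid itself is easy to enumerate.  The following
      standalone C++ sketch (an illustration only; it assumes the four
      `-frag' values are <first tile row>,<first tile col>,<tile rows>,
      <tile cols>, so consult "kdu_compress -usage" for the authoritative
      definition) prints one single-tile fragment specification for each
      32768x32768 tile of the 1500000x2300000 canvas:

          #include <cstdio>

          int main()
          {
            // Canvas and tile dimensions from example (w), noting that
            // Sdims/Stiles are expressed as {height,width}.
            const long canvas_height = 1500000, canvas_width = 2300000;
            const long tile_size = 32768;
            long tile_rows = (canvas_height + tile_size - 1) / tile_size;
            long tile_cols = (canvas_width  + tile_size - 1) / tile_size;

            // One fragment per tile, scanned in raster order; only the
            // first fragment needs the explicit canvas dimensions.
            long frag = 1;
            for (long r = 0; r < tile_rows; r++)
              for (long c = 0; c < tile_cols; c++, frag++)
                printf("frag%ld: -frag %ld,%ld,1,1%s\n", frag, r, c,
                       (frag == 1) ? "  Sdims={1500000,2300000}" : "");
            printf("total fragments = %ld\n", tile_rows * tile_cols);
            return 0;  // 46 x 71 tiles = 3266 fragments in this example
          }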

x) kdu_compress -i volume.rawl*100@524288 -o volume.jpx -jp2_space sLUM
                -jpx_layers * Clayers=16 Creversible=yes Sdims={512,512}
                Sprecision=12 Ssigned=no Cycc=no
   -- Compresses an image volume consisting of 100 slices, all of which are
      packed into a single raw file, containing 12-bit samples, in the
      least-significant bits of each 2-byte word with little-endian byte order
      (note the ".rawl" suffix means little-endian, while ".raw" means
      big-endian).
   -- The expression "*100@524288" means that the single file "volume.rawl"
      should be unpacked into 100 consecutive images, each separated by
      524288 bytes (this happens to be 512x512x2 bytes).  Of course, we
      could always provide 100 separate input files on the command-line but
      this is pretty tedious (the offset arithmetic and bit unpacking are
      sketched after this example).
   -- The "-jpx_layers *" command instructs the compressor to create one
      JPX compositing layer for each image component (each slice of the
      volume).  This will prove particularly interesting when multi-component
      transforms are added (see examples Ai to Ak below).  Take a look at
      the usage statement for other ways to use the new "-jpx_layers" switch.
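   -- The offset arithmetic behind "*100@524288" is easy to reproduce.  The
      standalone C++ sketch below (an illustration, not Kakadu code) seeks
      to slice k and unpacks its 12-bit samples from the low-order bits of
      each little-endian 16-bit word, as the ".rawl" convention implies:

          #include <cstdio>
          #include <cstdint>
          #include <vector>

          int main()
          {
            // Geometry from example (x): 100 slices of 512x512 samples,
            // each held in the 12 LSBs of a little-endian 2-byte word.
            const long width = 512, height = 512, num_slices = 100;
            const long slice_bytes = width * height * 2;  // 524288

            int k = 42;                       // slice index, 0..99
            if ((k < 0) || (k >= num_slices))
              return 1;
            FILE *fp = fopen("volume.rawl", "rb");
            if (fp == NULL)
              { perror("volume.rawl"); return 1; }
            std::vector<uint8_t> raw((size_t) slice_bytes);
            std::vector<uint16_t> samples((size_t)(width * height));
            fseek(fp, k * slice_bytes, SEEK_SET);   // offset of slice k
            if (fread(raw.data(), 1, (size_t) slice_bytes, fp) !=
                (size_t) slice_bytes)
              { fprintf(stderr, "short read\n"); fclose(fp); return 1; }
            fclose(fp);
            for (long n = 0; n < width * height; n++)  // keep low 12 bits
              samples[(size_t) n] =
                (uint16_t)((raw[2*n] | (raw[2*n+1] << 8)) & 0x0FFF);
            printf("slice %d, first sample = %u\n", k,
                   (unsigned) samples[0]);
            return 0;
          }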

y) kdu_compress -i geo.tif -o geo.jp2 Creversible=yes Clayers=16 -num_threads 2
   -- Compress a GeoTIFF image, recording the geographical information tags
      in a GeoJP2 box within the resulting JP2 file.  Kakadu can natively
      read a wide range of exotic TIFF files, but not ones which contain
      compressed imagery.  For these, you need to compile against the public
      domain LIBTIFF library (see "Compilation_Instructions.txt").
   -- From version 5.1, Kakadu provides extensive support for multi-threaded
      processing, to leverage parallel processing resources (multiple
      CPU's, multi-core CPU's and/or hyperthreading CPU's).  In this example,
      the `-num_threads' argument is explicitly used to control threading.
      The application selects the number of threads to match the number of
      available CPU's by default, but it is not always possible to detect
      the number of CPU's on all platforms.  To force use of the single
      threaded processing model from previous versions of Kakadu, specify
      "-num_threads 0".  To use the multi-threading framework of v5.1 but
      populate the environment with only 1 thread, specify "-num_threads 1";
      in this latter case, there is still only one thread of execution in
      the program, but the order in which processing steps are performed
      is driven by Kakadu's thread scheduler, rather than the rigid order
      associated with function invocation.
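   -- The default behaviour described above (match the thread count to the
      number of detected CPU's, with no guarantee that detection works on
      every platform) can be mimicked in portable C++.  The sketch below is
      a generic illustration and does not use Kakadu's own threading classes:

          #include <cstdio>
          #include <thread>

          int main()
          {
            // hardware_concurrency() returns 0 when the platform cannot
            // report its CPU count -- the same caveat noted in the text --
            // in which case we fall back to a single processing thread.
            unsigned detected = std::thread::hardware_concurrency();
            unsigned num_threads = (detected > 0) ? detected : 1;
            printf("detected CPU threads: %u\n", detected);
            printf("threads to use      : %u\n", num_threads);
            // This only illustrates detection; in kdu_compress itself,
            // "-num_threads 0" and "-num_threads 1" keep the special
            // meanings described in the example above.
            return 0;
          }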

kdu_compress advanced Part-2 Features
-------------------------------------
    These additional examples look a lot more complex than the ones above,
    because they exercise rich features from Part-2 of the JPEG2000 standard.
    The signalling syntax becomes complex and may be difficult to fully
    understand without carefully reading the usage statements printed by
    "kdu_compress -usage", possibly in conjunction with IS15444-2 itself.
    In the specific applications which require these options, you would
    probably configure the relevant codestream parameter attributes directly
    from the application using the binary set/get methods offered by
    `kdu_params', rather than parsing complex text expressions from the
    command-line, as given here.  Nevertheless, everything can be
    prototyped using command-line arguments.
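
    As a rough sketch of what such programmatic configuration looks like,
    the fragment below follows the general pattern of Kakadu's demo
    applications.  Every identifier in it (the header names, `access_siz',
    `parse_string', `access_cluster', `set', `finalize_all' and the
    attribute macros) is quoted from memory and should be checked against
    the headers of your Kakadu version; treat it as an illustration of the
    idea rather than verified code.

        // Configure codestream attributes directly through `kdu_params',
        // instead of parsing command-line text (assumed API, see above).
        #include "kdu_compressed.h"
        #include "kdu_params.h"

        void configure_example(kdu_codestream codestream)
        {
          kdu_params *root = codestream.access_siz();  // parameter tree root
          // Option 1: feed it the same text expressions used on the
          // command line.
          root->parse_string("Clayers=16");
          root->parse_string("Cdecomp=B(V--:H--:-),B(V--:H--:-),B(-:-:-)");
          // Option 2: use the binary set/get methods, addressing an
          // attribute by name, record index and field index.
          kdu_params *cod = root->access_cluster(COD_params);
          if (cod != NULL)
            cod->set(Clevels, 0, 0, 6);
          root->finalize_all();   // finalize before generating data
        }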

Aa) kdu_compress -i image.pgm -o image.jpx
                 Cdecomp=B(V--:H--:-),B(V--:H--:-),B(-:-:-)
    -- Uses Part-2 arbitrary decomposition styles (ADS) features to describe
       a packet wavelet transform structure, in which the highest two
       resolution levels of HL (horizontally high-pass) and LH (vertically
       high-pass) subbands are further subdivided vertically (HL) and
       horizontally (LH) respectively.  Subsequent DWT levels use the
       regular Mallat decomposition structure of Part-1.
    -- The decomposition structure given here is usually a little more
       efficient than the standard Mallat structure from Part-1.  This
       structure is also compatible with compressed-domain flipping
       functionalities which Kakadu uses to implement efficient rotation
       (for transcoding or rendering).
    -- Much richer splitting structures can be described using the `Cdecomp'
       syntax, but compressed domain flipping becomes fundamentally impossible
       if any final subband involves more than one high-pass filtering
       step in either direction.

Ab) kdu_compress -i image.ppm -o image.jpx
                 Cdecomp=B(BBBBB:BBBBB:B----),B(B----:B----:B----),B(-:-:-)
    -- Similar to example Aa), except that the primary (HL, LH and HH)
       subbands produced by the first two DWT levels are each subjected to
       a variety of further splitting operations.  In this case, the highest
       frequency primary HL and LH subbands are each split horizontally and
       vertically into 4 secondary subbands, and these are each split again
       into 4 tertiary subbands.  The highest frequency primary HH subband
       is split into just 4 secondary subbands, leaving a total of 36
       subbands in the highest resolution level.  In the second DWT level,
       the primary HL, LH and HH subbands are each split horizontally and
       vertically, for a total of 12 subbands.  All subsequent DWT levels
       follow the usual Mallat decomposition structure.

Ac) kdu_compress -i y.pgm,cb.pgm,cr.pgm -o image.jpx
                 Cdecomp:C1=V(-),B(-:-:-) Cdecomp:C2=V(-),B(-:-:-)
    -- Uses Part-2 downsampling factor styles (DFS) features to describe
       a transform in which the first DWT level splits the Cb and Cr image
       components (2'nd and 3'rd components, as supplied by "cb.pgm" and
       "cr.pgm") only in the vertical direction.  Subsequence DWT levels
       use full horizontal and vertical splitting (a la Part-1) for all
       image components.
    -- This sort of thing can be useful for applications in which the
       chrominance components have previously been subsampled horizontally
       (e.g., a 4:2:2 video frame).  In particular, it ensures that whenever
       the image is reconstructed at reduced resolutions (e.g., at half or
       quarter resolution for the luminance), the chrominance components
       can be reconstructed at exactly the same size as the luminance
       component.
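    -- The size alignment is easy to check numerically.  The standalone C++
       sketch below (an illustration only; the 1920x1080 frame size is just
       a hypothetical example) prints the luminance and chrominance
       dimensions obtained after discarding r resolution levels, assuming
       the vertical-only first chrominance DWT level described above:

           #include <cstdio>

           // Ceiling division, as used for canvas size calculations.
           static long cdiv(long a, long b) { return (a + b - 1) / b; }

           int main()
           {
             // 4:2:2 frame: full-size luminance, half-width chrominance.
             const long W = 1920, H = 1080;   // hypothetical frame size
             for (int r = 0; r <= 3; r++)     // r = discarded DWT levels
             {
               // Luminance: normal Mallat structure, both dimensions
               // halve at every level.
               long lum_w = cdiv(W, 1L << r), lum_h = cdiv(H, 1L << r);
               // Chrominance: starts at half width; its first DWT level
               // splits only vertically, later levels split both ways.
               long chr_w = (r == 0) ? cdiv(W, 2)
                                     : cdiv(cdiv(W, 2), 1L << (r - 1));
               long chr_h = cdiv(H, 1L << r);
               printf("r=%d  luma %ldx%ld   chroma %ldx%ld\n",
                      r, lum_w, lum_h, chr_w, chr_h);
             }
             return 0;  // for r >= 1 the two sets of dimensions coincide
           }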

Ad) kdu_compress -i image.pgm -o image.jpx  Catk=2
                 Kextension:I2=SYM Kreversible:I2=no
                 Ksteps:I2={2,0,0,0},{2,-1,0,0}
                 Kcoeffs:I2=-0.5,-0.5,0.25,0.25
    -- Uses Part-2 arbitrary transform kernel (ATK) features to describe
       an irreversible version of the spline 5/3 DWT kernel -- Part-1
       uses the reversible version of this kernel for its reversible
       compression path, but does not provide an irreversible version.
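    -- To make the Ksteps/Kcoeffs description concrete, the standalone C++
       sketch below applies the two lifting steps it specifies (a predict
       step with taps -0.5,-0.5 starting at offset 0, then an update step
       with taps 0.25,0.25 starting at offset -1) to a short 1-D signal.
       It illustrates irreversible 5/3 analysis lifting with symmetric
       boundary extension and is not an extract from Kakadu:

           #include <cstdio>
           #include <vector>

           int main()
           {
             // One level of irreversible 5/3 analysis lifting:
             //   predict: d[n] = x[2n+1] - 0.5*(x[2n] + x[2n+2])
             //   update : s[n] = x[2n]   + 0.25*(d[n-1] + d[n])
             // No rounding is applied, hence the transform is
             // irreversible; Part-1's reversible 5/3 adds rounding.
             double x[] = {10, 12, 14, 13, 11, 9, 8, 10};
             const int N = (int)(sizeof(x) / sizeof(x[0]));  // even length
             const int half = N / 2;
             std::vector<double> d(half), s(half);

             for (int n = 0; n < half; n++)     // predict (high-pass)
             {
               // whole-sample symmetric extension: x[N] mirrors to x[N-2]
               double right = (2*n + 2 < N) ? x[2*n + 2] : x[N - 2];
               d[n] = x[2*n + 1] - 0.5 * (x[2*n] + right);
             }
             for (int n = 0; n < half; n++)     // update (low-pass)
             {
               // the same extension implies d[-1] = d[0] at the left edge
               double left = (n > 0) ? d[n - 1] : d[0];
               s[n] = x[2*n] + 0.25 * (left + d[n]);
             }
             for (int n = 0; n < half; n++)
               printf("s[%d]=%6.2f   d[%d]=%6.2f\n", n, s[n], n, d[n]);
             return 0;
           }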
