usage_examples.txt -- Kakadu V6.1 for Win32 (Kakadu is one implementation of the JPEG2000 codec)

         command.  The first just consists of the first compositing layer
         from the second image file (note that file numbers all start from 1
          while everything else starts from 0) -- of course, JP2 files have
         only one compositing layer.  The second custom compositing layer
         has four channels (3 sRGB channels and 1 alpha channel), extracted
         from image components 0-2 of codestream 0 in file 1 and image
         component 3 (the 4'th one) of codestream 0 in file 3 (the alpha
         image).  The relevant codestream colour transforms are applied
         automatically during the rendering process, so that even though the
         components have been compressed using the codestream ICT, they may
         be treated as RGB components.  The third compositing layer is
         similar to the second, but it uses the second component of
         the alpha image for its alpha blending.
       * One composited image is created by combining the 3 layers.  The
         first layer is scaled by 2 and placed at the origin of the
         composition canvas.  The second layer is placed over this, scaled
         by 1 and shifted by half its height and width, below and to the
         right of the composition canvas.  The third layer is placed on top
          after first cropping it (removing 30% of its width and height from
          the left, and preserving 40% of its original width and height) and
          then shifting it by 1.2 times its cropped height and width.  A
          worked illustration of this geometry, with hypothetical dimensions,
          appears at the end of this example.
    -- It is worth noting that the final image does not contain multiple
       copies of any of the original imagery; each original image codestream
       is copied once into the merged image and then referenced from
       custom compositing layer header boxes, which are in turn referenced
       from the composition box.  This avoids inefficiencies in the file
       representation and also avoids computational inefficiencies during
       rendering.  Each codestream is opened only once within "kdu_show"
       (actually inside `kdu_region_compositor') but may be used by
       multiple rendering contexts.  One interesting side effect of this is
       that if you attach a metadata label to one of the codestreams in
       the merged file it will appear in all elements of the composited
       result which use that codestream.  You can attach such metadata
       labels using the metadata editing facilities of "kdu_show".
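     -- Worked illustration of the composition geometry above (purely
        hypothetical dimensions, assuming each compositing layer is
        1000 x 800 pixels, height x width).  The first layer, scaled by 2,
        occupies rows 0-1999 and columns 0-1599 of the canvas.  The second
        layer, scaled by 1 and shifted by half its height and width,
        occupies rows 500-1499 and columns 400-1199.  The third layer is
        first cropped to the region running from 30% to 70% of its extent
        (rows 300-699, columns 240-559, i.e. 40% of the original
        dimensions) and then shifted by 1.2 times its cropped height
        (1.2 x 400 = 480) and width (1.2 x 320 = 384), so it occupies
        rows 480-879 and columns 384-767 of the canvas.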
 f) kdu_merge -i im1.jpx,im2.jpx,im3.jpx -o album.jpx -album
    -- Make a "photo album" containing the supplied input images (keeps all
       their individual metadata, correctly cross-referenced to the images
       from which it came).  The album is an animation, whose first frame
       contains all images, arranged in tiles, with borders, scaled to
       similar sizes.  This is followed by one frame for each image.  This
       is a great way to create albums of photos to be served up for remote
       interactive access via JPIP.
 g) kdu_merge -i im1.jpx,im2.jpx,im3.jpx -o album.jpx -album 10 -links
    -- As in (f), but the period between frames (during animated playback)
       is set to 10 seconds, and individual photos are not copied into the
       album.  Instead they are simply referenced by fragment table boxes
       (ftbl) in the merged JPX file.  This allows you to present imagery in
       lots of different ways without actually copying it into each
       presentation.  Linked codestreams are properly supported by all Kakadu
       objects and demo apps, including client-server communications using
       "kdu_server".
 h) kdu_merge -i im1.jp2,im2.jp2,im3.jp2 -o video.mj2 -mj2_tracks P:0-2@30
    -- Merges three still images into a single Motion JPEG2000 video track,
       with a nominal play-back frame rate of 30 frames/second.
 i) kdu_merge -i im1.jpx,im2.jpx,... -o video.mj2 -mj2_tracks P:0-@30,1-1@0.5
    -- As above, but merges the compositing layers from all of the input
       files, with a final frame (having 2 seconds duration -- 0.5 frames/s)
       repeating the second actual compositing layer in the input
       collection.
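     -- From the two examples above, the "-mj2_tracks" specification can be
        read as "P:<first>-<last>@<fps>", where P denotes a progressive
        track, <first> and <last> give a range of compositing layers (an
        empty <last> meaning "through to the end") and <fps> is the nominal
        frame rate, with multiple ranges separated by commas.  As a purely
        illustrative variation under that reading, the following would build
        a 25 frames/second progressive track from the first six compositing
        layers:
          kdu_merge -i im1.jpx,im2.jpx -o video.mj2 -mj2_tracks P:0-5@25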
 j) kdu_merge -i vid1.mj2:1,vid1.mj2:0,vid2.mj2 -o out.mj2
    -- Merges the second video track encountered in "vid1.mj2" with
       the first video track encountered in "vid1.mj2" and the first
       video track encountered in "vid2.mj2".  In this case, there is no
       need to explicitly include a -mj2_tracks argument, since timing
        information can be taken from the input video sources.  The
        tracks must all be either progressive or interlaced.

kdu_expand
----------
 a) kdu_expand -i in.j2c -o out.pgm
    -- decompress input code-stream (or first image component thereof).
 b) kdu_expand -i in.j2c -o out.pgm -rate 0.7
    -- read only the initial portion of the code-stream, corresponding to
        an overall bit-rate of 0.7 bits/sample.  It is generally preferable
       to use the transcoder to generate a reduced rate code-stream first,
       but direct truncation works very well so long as the code-stream has
       a layer-progressive organization with only one tile (unless
       interleaved tile-parts are used).
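     -- As a rough worked example (with a purely hypothetical image size):
        for a 2048 x 1536 monochrome image, "-rate 0.7" corresponds to
        reading roughly 2048 x 1536 x 0.7 / 8 = 275,251 bytes (about 269 KB)
        of the code-stream before stopping.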
 c) kdu_expand -i in.j2c -o out.pgm -region {0.3,0.2},{0.6,0.4} -rotate 90
    -- decompress a limited region of the original image (starts 30% down
       and 20% in from left, extends for 60% of the original height and
       40% of the original width).  Concurrently rotates decompressed
       image by 90 degrees clockwise (no extra memory or computational
       resources required for rotation).
     -- Note that the whole code-stream is often not loaded when a region
       of interest is specified, as may be determined by observing the
       reported bit-rate.  This is particularly true of code-streams with
       multiple tiles or spatially progressive packet sequencing.
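     -- As a worked example (hypothetical dimensions): for an original image
        of 10000 rows by 20000 columns, "-region {0.3,0.2},{0.6,0.4}"
        selects the region starting at row 3000, column 4000, extending over
        6000 rows and 8000 columns.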
 d) kdu_expand -i in.j2c -o out.pgm -fussy
     -- performs the most careful checking for conformance with the standard,
        looking for the appearance of marker codes in the wrong places, and
        so forth.
 e) kdu_expand -i in.j2c -o out.pgm -resilient
    -- similar to fussy, but should not fail if a problem is encountered
        (except when the problem concerns main or tile headers -- these can all
       be put up front) -- recovers from and/or conceals errors to the
       best of its ability.
 f) kdu_expand -i in.j2c -o out.pgm -reduce 2
    -- discard 2 resolution levels to generate an image whose dimensions
       are each divided by 4.
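     -- Each discarded resolution level halves both image dimensions, so
        "-reduce <levels>" divides the dimensions by 2 raised to that power;
        for example, "-reduce 3" applied to a hypothetical 4096 x 4096
        original would yield a 512 x 512 result.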
 g) kdu_expand -i in.j2c -o out.pgm -record log.txt
    -- generate a log file containing all parameter attributes associated
       with the compressed code-stream.  Any or all of these may be
       supplied to "kdu_compress" (often via a switch file).
    -- note that the log file may be incomplete if you instruct
       the decompressor to decompress only a limited region of interest
       so that one or more tiles may never be parsed.
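     -- For instance, after trimming "log.txt" down to the attributes you
        want to reuse, you might feed them back to the compressor through a
        switch file (this assumes the `-s <switch file>' argument offered by
        "kdu_compress"; the file names here are hypothetical):
          kdu_compress -i in.pgm -o new.j2c -s log.txt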
 h) kdu_expand -i in.j2c -cpu 0
    -- measure end-to-end processing time, excluding only the writing of
       the decompressed file (specifying an output file will cause the
       measurement to be excessively influenced by the I/O associated
        with file writing).
 i) kdu_expand -i in.j2c -o out.pgm -precise
    -- force the use of higher precision numerics than are probably
       required (the implementation makes its own decisions based on
        the output bit-depth).  The same argument, supplied to the compressor,
        can also have some minor beneficial effect.  Use the `-precise'
       argument during compression and decompression to get reference
       compression performance figures.
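     -- For reference performance figures, then, one might run both steps
        with `-precise' (hypothetical file names):
          kdu_compress -i in.ppm -o ref.j2c -precise
          kdu_expand -i ref.j2c -o ref_out.ppm -precise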
 j) kdu_expand -i in.jp2 -o out.ppm
    -- decompress a colour image wrapped up inside a JP2 file.  Note that
       sub-sampled colour components will not be interpolated nor will
       any colour appearance transform be applied to the data.  However,
       palette indices will be de-palettized.  This is probably the most
       appropriate behaviour for an application which decompresses to a
        file output.  Renderers, such as "kdu_show", should do much more.
 k) kdu_expand -i huge.jp2 -o out.ppm -region {0.5,0.3},{0.1,0.15}
               -no_seek -cpu 0
    -- You could try applying this to a huge compressed image, generated in
       a manner similar to that of "kdu_compress" Example (r).  By default,
       the decompressor will efficiently seek over all the elements of
       the code-stream which are not required to reconstruct the small
       subset of the entire image being requested here.  Specifying `-no_seek'
       enables you to disable seekability for the compressed data source,
       forcing linear parsing of the code-stream until all required
       data has been collected.  You might like to use this to compare the
       time taken to decompress an image region with and without parsing.
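     -- A simple comparison along these lines decompresses the same region
        with and without seeking, using `-cpu 0' so that file writing does
        not dominate the measurement:
          kdu_expand -i huge.jp2 -region {0.5,0.3},{0.1,0.15} -cpu 0
          kdu_expand -i huge.jp2 -region {0.5,0.3},{0.1,0.15} -cpu 0 -no_seek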
 l) kdu_expand -i video.jpx -o frame.ppm -jpx_layer 2
    -- Decompresses the first codestream (in many cases, there will be only
       one) used by compositing layer 2 (the 3'rd compositing layer).
 m) kdu_expand -i video.jpx -o out.pgm -raw_components 5 -skip_components 2
    -- Decompresses the 3'rd component of the 6'th codestream in the file.
    -- If any colour transforms (or other multi-component transforms) are
       involved, this may result in the decompression of a larger number of
       raw codestream components, so that the colour/multi-component transform
       can be inverted to recover the required component.  If, instead, you
       want the raw codestream component prior to any colour/multi-component
       transform inversion, you should also specify the
       `-codestream_components' command-line argument.
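     -- For instance, to obtain the raw (untransformed) codestream component,
        the same command would become (assuming `-codestream_components' is a
        simple flag with no parameters of its own):
          kdu_expand -i video.jpx -o out.pgm -raw_components 5
                     -skip_components 2 -codestream_components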
 n) kdu_expand -i geo.jp2 -o geo.tif -num_threads 2
    -- Decompresses a JP2 file, writing the result in the TIFF format, while
       attempting to record useful JP2 boxes in TIFF tags.  This is only a
       demonstration, rather than a comprehensive attempt to convert all
       possible boxes to tags.  However, one useful box which is converted
       (if present) is the GeoJP2 box, which may be used to store geographical
       information.
    -- See "kdu_compress" example (y) for a discussion of the "-num_threads"
       argument.
 o) kdu_expand -i in.jp2 -o out.tif -stats -reduce 2
    -- The `-stats' option causes the application to report statistics on
       the amount of compressed data which has been parsed, in each successive
        quality layer, at the resolution of interest (in this case, one quarter
       the resolution of the original image, due to the "-reduce 2" option).
       The application also reports the number of additional bytes which were
       parsed from each higher resolution than that required for decompression
       (in this case, there are two higher resolution levels, due to the
       "-reduce 2" option).  This depends upon codestream organization and
       whether or not the compressed data in the codestream was randomly
       accessible.

kdu_v_expand
------------
 a) kdu_v_expand -i in.mj2 -o out.vix
    -- Decompress Motion JPEG2000 file to a raw video output file.  For
        details of the trivial VIX file format, consult the usage statement
       printed by `kdu_v_compress' with the `-usage' argument.
 b) kdu_v_expand -i in.mj2 -cpu -quiet
    -- Use this to measure the speed associated with decompression of
       the video, avoiding I/O delays which would be incurred if
       the decompressed video frames had to be written to a file.
 c) timer kdu_v_expand -i in.mj2 -quiet -overlapped_frames -num_threads 2
    -- In this example, multi-threaded processing is used to process each
       frame (actually, the above examples will also do this automatically
       if there are multiple CPU's in your system).  The "-overlapped_frames"
       option allows a second frame to be opened while the first is still
       being processed.  As soon as the number of available jobs on the
       first frame drops permanently below the number of available working
       threads (2 in this case), jobs on the second frame become available to
       Kakadu's scheduler.  This ensures that processing of the first (active)
       frame is given absolute priority, to be completed as fast as possible
        by as many processing resources as are available, while at the same time
       providing work to threads which would normally become idle when the
       processing of a frame is nearly complete (near the end of a frame,
       only DWT processing often remains to be done).  For an explanation
       of the term "permanently" in the above description, you should consult
       the discussion of "dormant queue banks" in the description of
       the core Kakadu system function, `kdu_thread_entity:
