during decompression.
Compression and decompression requirements, in bytes, can be estimated
as:
@example
Compression:   400k + ( 8 x block size )
Decompression: 100k + ( 4 x block size ), or
               100k + ( 2.5 x block size )
@end example
Larger block sizes give rapidly diminishing marginal returns. Most of
the compression comes from the first two or three hundred k of block
size, a fact worth bearing in mind when using @code{bzip2} on small machines.
It is also important to appreciate that the decompression memory
requirement is set at compression time by the choice of block size.
For files compressed with the default 900k block size, @code{bunzip2}
will require about 3700 kbytes to decompress. To support decompression
of any file on a 4 megabyte machine, @code{bunzip2} has an option to
decompress using approximately half this amount of memory, about 2300
kbytes. Decompression speed is also halved, so you should use this
option only where necessary. The relevant flag is @code{-s}.
In general, try to use the largest block size memory constraints allow,
since that maximises the compression achieved. Compression and
decompression speed are virtually unaffected by block size.
Another significant point applies to files which fit in a single block
-- that means most files you'd encounter using a large block size. The
amount of real memory touched is proportional to the size of the file,
since the file is smaller than a block. For example, compressing a file
20,000 bytes long with the flag @code{-9} will cause the compressor to
allocate around 7600k of memory, but only touch 400k + 20000 * 8 = 560
kbytes of it. Similarly, the decompressor will allocate 3700k but only
touch 100k + 20000 * 4 = 180 kbytes.
Here is a table which summarises the maximum memory usage for different
block sizes. Also recorded is the total compressed size for 14 files of
the Calgary Text Compression Corpus totalling 3,141,622 bytes. This
column gives some feel for how compression varies with block size.
These figures tend to understate the advantage of larger block sizes for
larger files, since the Corpus is dominated by smaller files.
@example
          Compress   Decompress   Decompress   Corpus
   Flag     usage       usage      -s usage     Size

    -1      1200k        500k        350k      914704
    -2      2000k        900k        600k      877703
    -3      2800k       1300k        850k      860338
    -4      3600k       1700k       1100k      846899
    -5      4400k       2100k       1350k      845160
    -6      5200k       2500k       1600k      838626
    -7      6100k       2900k       1850k      834096
    -8      6800k       3300k       2100k      828642
    -9      7600k       3700k       2350k      828642
@end example
@unnumberedsubsubsec RECOVERING DATA FROM DAMAGED FILES
@code{bzip2} compresses files in blocks, usually 900 kbytes long. Each
block is handled independently. If a media or transmission error causes
a multi-block @code{.bz2} file to become damaged, it may be possible to
recover data from the undamaged blocks in the file.
The compressed representation of each block is delimited by a 48-bit
pattern, which makes it possible to find the block boundaries with
reasonable certainty. Each block also carries its own 32-bit CRC, so
damaged blocks can be distinguished from undamaged ones.
@code{bzip2recover} is a simple program whose purpose is to search for
blocks in @code{.bz2} files, and write each block out into its own
@code{.bz2} file. You can then use @code{bzip2 -t} to test the
integrity of the resulting files, and decompress those which are
undamaged.
@code{bzip2recover}
takes a single argument, the name of the damaged file,
and writes a number of files @code{rec0001file.bz2},
@code{rec0002file.bz2}, etc., containing the extracted blocks.
The output filenames are designed so that the use of
wildcards in subsequent processing -- for example,
@code{bzip2 -dc rec*file.bz2 > recovered_data} -- lists the files in
the correct order.
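Putting these steps together, a typical recovery session for a damaged
file named @code{file.bz2} might look like this:
@example
bzip2recover file.bz2
bzip2 -t rec*file.bz2
bzip2 -dc rec*file.bz2 > recovered_data
@end example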
@code{bzip2recover} should be of most use dealing with large @code{.bz2}
files, as these will contain many blocks. It is clearly
futile to use it on damaged single-block files, since a
damaged block cannot be recovered. If you wish to minimise
any potential data loss through media or transmission errors,
you might consider compressing with a smaller
block size.
@unnumberedsubsubsec PERFORMANCE NOTES
The sorting phase of compression gathers together similar strings in the
file. Because of this, files containing very long runs of repeated
symbols, like "aabaabaabaab ..." (repeated several hundred times) may
compress more slowly than normal. Versions 0.9.5 and above fare much
better than previous versions in this respect. The ratio between
worst-case and average-case compression time is in the region of 10:1.
For previous versions, this figure was more like 100:1. You can use the
@code{-vvvv} option to monitor progress in great detail, if you want.
Decompression speed is unaffected by these phenomena.
@code{bzip2} usually allocates several megabytes of memory to operate
in, and then charges all over it in a fairly random fashion. This means
that performance, both for compressing and decompressing, is largely
determined by the speed at which your machine can service cache misses.
Because of this, small changes to the code to reduce the miss rate have
been observed to give disproportionately large performance improvements.
I imagine @code{bzip2} will perform best on machines with very large
caches.
@unnumberedsubsubsec CAVEATS
I/O error messages are not as helpful as they could be. @code{bzip2}
tries hard to detect I/O errors and exit cleanly, but its description
of the underlying problem can sometimes be misleading.
This manual page pertains to version 1.0 of @code{bzip2}. Compressed
data created by this version is entirely forwards and backwards
compatible with the previous public releases, versions 0.1pl2, 0.9.0 and
0.9.5, but with the following exception: 0.9.0 and above can correctly
decompress multiple concatenated compressed files. 0.1pl2 cannot do
this; it will stop after decompressing just the first file in the
stream.
@code{bzip2recover} uses 32-bit integers to represent bit positions in
compressed files, so it cannot handle compressed files more than 512
megabytes long. This could easily be fixed.
@unnumberedsubsubsec AUTHOR
Julian Seward, @code{jseward@@acm.org}.
The ideas embodied in @code{bzip2} are due to (at least) the following
people: Michael Burrows and David Wheeler (for the block sorting
transformation), David Wheeler (again, for the Huffman coder), Peter
Fenwick (for the structured coding model in the original @code{bzip},
and many refinements), and Alistair Moffat, Radford Neal and Ian Witten
(for the arithmetic coder in the original @code{bzip}). I am much
indebted for their help, support and advice. See the manual in the
source distribution for pointers to sources of documentation. Christian
von Roques encouraged me to look for faster sorting algorithms, so as to
speed up compression. Bela Lubkin encouraged me to improve the
worst-case compression performance. Many people sent patches, helped
with portability problems, lent machines, gave advice and were generally
helpful.
@end quotation
@chapter Programming with @code{libbzip2}
This chapter describes the programming interface to @code{libbzip2}.
For general background information, particularly about memory
use and performance aspects, you'd be well advised to read Chapter 2
as well.
@section Top-level structure
@code{libbzip2} is a flexible library for compressing and decompressing
data in the @code{bzip2} data format. Although packaged as a single
entity, it helps to regard the library as three separate parts: the
low-level interface, the high-level interface, and some utility
functions.
The structure of @code{libbzip2}'s interfaces is similar to
that of Jean-loup Gailly's and Mark Adler's excellent @code{zlib}
library.
All externally visible symbols have names beginning @code{BZ2_}.
This is new in version 1.0. The intention is to minimise pollution
of the namespaces of library clients.
@subsection Low-level summary
This interface provides services for compressing and decompressing
data in memory. There's no provision for dealing with files, streams
or any other I/O mechanisms, just straight memory-to-memory work.
In fact, this part of the library can be compiled without inclusion
of @code{stdio.h}, which may be helpful for embedded applications.
The low-level part of the library has no global variables and
is therefore thread-safe.
Six routines make up the low-level interface:
@code{BZ2_bzCompressInit}, @code{BZ2_bzCompress}, and @* @code{BZ2_bzCompressEnd}
for compression,
and a corresponding trio @code{BZ2_bzDecompressInit}, @* @code{BZ2_bzDecompress}
and @code{BZ2_bzDecompressEnd} for decompression.
The @code{*Init} functions allocate
memory for compression/decompression and do other
initialisations, whilst the @code{*End} functions close down operations
and release memory.
The real work is done by @code{BZ2_bzCompress} and @code{BZ2_bzDecompress}.
These compress and decompress data from a user-supplied input buffer
to a user-supplied output buffer. These buffers can be any size;
arbitrary quantities of data are handled by making repeated calls
to these functions. This is a flexible mechanism allowing a
consumer-pull style of activity, or producer-push, or a mixture of
both.
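To make the calling sequence concrete, here is a minimal sketch of
single-shot compression using these routines. It is illustrative only,
not code from the library; the function name is invented, and error
handling is abbreviated.
@example
/* Sketch: compress one fully-present input buffer into one
   output buffer using the low-level interface. */
#include <stdlib.h>
#include "bzlib.h"

int compress_buffer ( char* in, unsigned int inLen,
                      char* out, unsigned int outLen )
@{
   bz_stream strm;
   int ret;

   strm.bzalloc = NULL;     /* use the default allocator */
   strm.bzfree  = NULL;
   strm.opaque  = NULL;
   ret = BZ2_bzCompressInit ( &strm, 9, 0, 0 );
   if (ret != BZ_OK) return ret;

   strm.next_in   = in;    strm.avail_in  = inLen;
   strm.next_out  = out;   strm.avail_out = outLen;

   /* All input is already present, so request BZ_FINISH and
      keep calling until the stream end is reported, or the
      output buffer fills up. */
   do @{
      ret = BZ2_bzCompress ( &strm, BZ_FINISH );
   @} while (ret == BZ_FINISH_OK && strm.avail_out > 0);

   BZ2_bzCompressEnd ( &strm );
   if (ret == BZ_STREAM_END) return BZ_OK;
   return (ret == BZ_FINISH_OK) ? BZ_OUTBUFF_FULL : ret;
@}
@end example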
@subsection High-level summary
This interface provides some handy wrappers around the low-level
interface to facilitate reading and writing @code{bzip2} format
files (@code{.bz2} files). The routines provide hooks to facilitate
reading files in which the @code{bzip2} data stream is embedded
within some larger-scale file structure, or where there are
multiple @code{bzip2} data streams concatenated end-to-end.
For reading files, @code{BZ2_bzReadOpen}, @code{BZ2_bzRead},
@code{BZ2_bzReadClose} and @* @code{BZ2_bzReadGetUnused} are supplied. For
writing files, @code{BZ2_bzWriteOpen}, @code{BZ2_bzWrite} and
@code{BZ2_bzWriteClose} are available.
As with the low-level library, no global variables are used
so the library is per se thread-safe. However, if I/O errors
occur whilst reading or writing the underlying compressed files,
you may have to consult @code{errno} to determine the cause of
the error. In that case, you'd need a C library which correctly
supports @code{errno} in a multithreaded environment.
To make the library a little simpler and more portable,
@code{BZ2_bzReadOpen} and @code{BZ2_bzWriteOpen} require you to pass them file
handles (@code{FILE*}s) which have previously been opened for reading or
writing respectively. That avoids portability problems associated with
file operations and file attributes, whilst not being much of an
imposition on the programmer.
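To show how these routines fit together, here is a minimal,
illustrative sketch (not code from the library) which decompresses a
@code{.bz2} file to @code{stdout}. Note that it handles only a single
data stream; a file holding multiple concatenated streams would also
need @code{BZ2_bzReadGetUnused}, described later. Error handling is
abbreviated.
@example
/* Sketch: decompress a .bz2 file, opened with ordinary stdio,
   to stdout using the high-level read interface. */
#include <stdio.h>
#include "bzlib.h"

void decompress_to_stdout ( const char* path )
@{
   FILE*   f;
   BZFILE* b;
   char    buf[4096];
   int     bzerror, nread;

   f = fopen ( path, "rb" );
   if (f == NULL) return;

   b = BZ2_bzReadOpen ( &bzerror, f, 0, 0, NULL, 0 );
   while (bzerror == BZ_OK) @{
      nread = BZ2_bzRead ( &bzerror, b, buf, sizeof(buf) );
      if (bzerror == BZ_OK || bzerror == BZ_STREAM_END)
         fwrite ( buf, 1, nread, stdout );
   @}
   BZ2_bzReadClose ( &bzerror, b );
   fclose ( f );
@}
@end example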
@subsection Utility functions summary
For very simple needs, @code{BZ2_bzBuffToBuffCompress} and
@code{BZ2_bzBuffToBuffDecompress} are provided. These compress
data in memory from one buffer to another buffer in a single
function call. You should assess whether these functions
fulfill your memory-to-memory compression/decompression
requirements before investing effort in understanding the more
general but more complex low-level interface.
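As a sketch of typical usage (the function is illustrative; the output
buffer is sized per the library's documented worst-case guideline of
the input length plus about one percent, plus 600 bytes):
@example
/* Sketch: one-call, memory-to-memory compression into a
   heap-allocated buffer. */
#include <stdlib.h>
#include "bzlib.h"

int squash ( char* src, unsigned int srcLen,
             char** dst, unsigned int* dstLen )
@{
   int ret;
   /* Worst case: srcLen + ~1% + 600 bytes. */
   *dstLen = srcLen + srcLen / 100 + 600;
   *dst = malloc ( *dstLen );
   if (*dst == NULL) return BZ_MEM_ERROR;

   ret = BZ2_bzBuffToBuffCompress ( *dst, dstLen, src, srcLen,
                                    9 /* block size */,
                                    0 /* verbosity  */,
                                    0 /* workFactor */ );
   if (ret != BZ_OK) @{ free ( *dst ); *dst = NULL; @}
   return ret;
@}
@end example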
Yoshioka Tsuneo (@code{QWF00133@@niftyserve.or.jp} /
@code{tsuneo-y@@is.aist-nara.ac.jp}) has contributed some functions to
give better @code{zlib} compatibility. These functions are
@code{BZ2_bzopen}, @code{BZ2_bzread}, @code{BZ2_bzwrite}, @code{BZ2_bzflush},
@code{BZ2_bzclose},
@code{BZ2_bzerror} and @code{BZ2_bzlibVersion}. You may find these functions
more convenient for simple file reading and writing than those in the
high-level interface. These functions are not (yet) officially part of
the library, and are minimally documented here. If they break, you
get to keep all the pieces. I hope to document them properly when time
permits.
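Purely by way of illustration, since these functions are only
semi-official, a simple reader built on them might look like this
sketch:
@example
/* Sketch: dump a .bz2 file to stdout using the zlib-style
   wrappers.  Illustrative, not canonical. */
#include <stdio.h>
#include "bzlib.h"

void dump ( const char* path )
@{
   char    buf[4096];
   int     n;
   BZFILE* b = BZ2_bzopen ( path, "rb" );
   if (b == NULL) return;
   while ((n = BZ2_bzread ( b, buf, sizeof(buf) )) > 0)
      fwrite ( buf, 1, n, stdout );
   BZ2_bzclose ( b );
@}
@end example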
Yoshioka also contributed modifications to allow the library to be
built as a Windows DLL.
@section Error handling
The library is designed to recover cleanly in all situations, including
the worst-case situation of decompressing random data. I'm not
100% sure that it can always do this, so you might want to add
a signal handler to catch segmentation violations during decompression
if you are feeling especially paranoid. I would be interested in
hearing more about the robustness of the library to corrupted
compressed data.
Version 1.0 is much more robust in this respect than
0.9.0 or 0.9.5. Investigations with Checker (a tool for
detecting problems with memory management, similar to Purify)
indicate that, at least for the few files I tested, all single-bit
errors in the decompressed data are caught properly, with no
segmentation faults, no reads of uninitialised data and no
out of range reads or writes. So it's certainly much improved,
although I wouldn't claim it to be totally bombproof.
The file @code{bzlib.h} contains all definitions needed to use
the library. In particular, you should definitely not include
@code{bzlib_private.h}.
In @code{bzlib.h}, the various return values are defined. The following
list is not intended as an exhaustive description of the circumstances
in which a given value may be returned -- those descriptions are given
later. Rather, it is intended to convey the rough meaning of each
return value. The first five values are normal and are not intended to
denote an error situation.
@table @code
@item BZ_OK
The requested action was completed successfully.
@item BZ_RUN_OK
@itemx BZ_FLUSH_OK
@itemx BZ_FINISH_OK
In @code{BZ2_bzCompress}, the requested flush/finish/nothing-special action
was completed successfully.
@item BZ_STREAM_END
Compression of data was completed, or the logical stream end was
detected during decompression.
@end table
The following return values indicate an error of some kind.
@table @code
@item BZ_CONFIG_ERROR
Indicates that the library has been improperly compiled on your
platform -- a major configuration error. Specifically, it means
that @code{sizeof(char)}, @code{sizeof(short)} and @code{sizeof(int)}