bzip2.1.preformatted
              decompress them.  This really performs a trial decompression
              and throws away the result.

       -f --force
              Force overwrite of output files.  Normally, bzip2 will not
              overwrite existing output files.

       -k --keep
              Keep (don't delete) input files during compression or
              decompression.

       -s --small
              Reduce memory usage, for compression, decompression and
              testing.  Files are decompressed and tested using a modified
              algorithm which only requires 2.5 bytes per block byte.  This
              means any file can be decompressed in 2300k of memory, albeit
              at about half the normal speed.

              During compression, -s selects a block size of 200k, which
              limits memory use to around the same figure, at the expense
              of your compression ratio.  In short, if your machine is low
              on memory (8 megabytes or less), use -s for everything.  See
              MEMORY MANAGEMENT above.

       -v --verbose
              Verbose mode -- show the compression ratio for each file
              processed.  Further -v's increase the verbosity level,
              spewing out lots of information which is primarily of
              interest for diagnostic purposes.

       -L --license
       -V --version
              Display the software version, license terms and conditions.

       -1 to -9
              Set the block size to 100 k, 200 k .. 900 k when compressing.
              Has no effect when decompressing.  See MEMORY MANAGEMENT
              above.

       --repetitive-fast
              bzip2 injects some small pseudo-random variations into very
              repetitive blocks to limit worst-case performance during
              compression.  If sorting runs into difficulties, the block is
              randomised, and sorting is restarted.  Very roughly, bzip2
              persists for three times as long as a well-behaved input
              would take before resorting to randomisation.  This flag
              makes it give up much sooner.
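              Taken together, the flags above cover the common invocations.
              The following sketch is illustrative only -- the file names
              are hypothetical:

```shell
# Keep the input (-k), force overwrite of stale output (-f), and cap
# memory use (-s selects a 200k block size).
bzip2 -k -f -s data.txt        # produces data.txt.bz2, keeps data.txt

# Block size flags: -1 (100k blocks, least memory) to -9 (900k, best ratio).
bzip2 -9 -k big.log

# Trial decompression: -t checks integrity and throws away the result.
bzip2 -t data.txt.bz2
```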
       --repetitive-best
              Opposite of --repetitive-fast; try a lot harder before
              resorting to randomisation.

RECOVERING DATA FROM DAMAGED FILES
       bzip2 compresses files in blocks, usually 900kbytes long.  Each
       block is handled independently.  If a media or transmission error
       causes a multi-block .bz2 file to become damaged, it may be possible
       to recover data from the undamaged blocks in the file.

       The compressed representation of each block is delimited by a 48-bit
       pattern, which makes it possible to find the block boundaries with
       reasonable certainty.  Each block also carries its own 32-bit CRC,
       so damaged blocks can be distinguished from undamaged ones.

       bzip2recover is a simple program whose purpose is to search for
       blocks in .bz2 files, and write each block out into its own .bz2
       file.  You can then use bzip2 -t to test the integrity of the
       resulting files, and decompress those which are undamaged.

       bzip2recover takes a single argument, the name of the damaged file,
       and writes a number of files "rec0001file.bz2", "rec0002file.bz2",
       etc, containing the extracted blocks.  The output filenames are
       designed so that the use of wildcards in subsequent processing --
       for example, "bzip2 -dc rec*file.bz2 > recovered_data" -- lists the
       files in the "right" order.

       bzip2recover should be of most use dealing with large .bz2 files,
       as these will contain many blocks.  It is clearly futile to use it
       on damaged single-block files, since a damaged block cannot be
       recovered.  If you wish to minimise any potential data loss through
       media or transmission errors, you might consider compressing with a
       smaller block size.

PERFORMANCE NOTES
       The sorting phase of compression gathers together similar strings
       in the file.  Because of this, files containing very long runs of
       repeated symbols, like "aabaabaabaab ..."
       (repeated several hundred times) may compress extraordinarily
       slowly.  You can use the -vvvvv option to monitor progress in great
       detail, if you want.  Decompression speed is unaffected.

       Such pathological cases seem rare in practice, appearing mostly in
       artificially-constructed test files, and in low-level disk images.
       It may be inadvisable to use bzip2 to compress the latter.  If you
       do get a file which causes severe slowness in compression, try
       making the block size as small as possible, with flag -1.

       bzip2 usually allocates several megabytes of memory to operate in,
       and then charges all over it in a fairly random fashion.  This means
       that performance, both for compressing and decompressing, is largely
       determined by the speed at which your machine can service cache
       misses.  Because of this, small changes to the code to reduce the
       miss rate have been observed to give disproportionately large
       performance improvements.  I imagine bzip2 will perform best on
       machines with very large caches.

CAVEATS
       I/O error messages are not as helpful as they could be.  bzip2
       tries hard to detect I/O errors and exit cleanly, but the details of
       what the problem is sometimes seem rather misleading.

       This manual page pertains to version 0.9.0 of bzip2.  Compressed
       data created by this version is entirely forwards and backwards
       compatible with the previous public release, version 0.1pl2, but
       with the following exception: 0.9.0 can correctly decompress
       multiple concatenated compressed files.  0.1pl2 cannot do this; it
       will stop after decompressing just the first file in the stream.

       Wildcard expansion for Windows 95 and NT is flaky.

       bzip2recover uses 32-bit integers to represent bit positions in
       compressed files, so it cannot handle compressed files more than 512
       megabytes long.  This could easily be fixed.

AUTHOR
       Julian Seward, jseward@acm.org.
       http://www.muraroa.demon.co.uk

       The ideas embodied in bzip2 are due to (at least) the following
       people: Michael Burrows and David Wheeler (for the block sorting
       transformation), David Wheeler (again, for the Huffman coder), Peter
       Fenwick (for the structured coding model in the original bzip, and
       many refinements), and Alistair Moffat, Radford Neal and Ian Witten
       (for the arithmetic coder in the original bzip).  I am much indebted
       for their help, support and advice.  See the manual in the source
       distribution for pointers to sources of documentation.  Christian
       von Roques encouraged me to look for faster sorting algorithms, so
       as to speed up compression.  Bela Lubkin encouraged me to improve
       the worst-case compression performance.  Many people sent patches,
       helped with portability problems, lent machines, gave advice and
       were generally helpful.
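       The recovery procedure described in RECOVERING DATA FROM DAMAGED
       FILES above amounts to the following sketch; the file name
       damaged.bz2 is hypothetical, and the number of digits bzip2recover
       puts in its output names varies between versions:

```shell
# Split the (possibly damaged) archive into one .bz2 file per block.
bzip2recover damaged.bz2       # writes rec0001damaged.bz2, rec0002..., etc

# Test each extracted block; discard those whose 32-bit CRC check fails.
for f in rec*damaged.bz2; do
    bzip2 -t "$f" 2>/dev/null || rm -f "$f"
done

# Decompress the surviving blocks, in order, into a single stream.
bzip2 -dc rec*damaged.bz2 > recovered_data
```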