bzip2.1

During compression, \-s selects a block size of 200k, which limits
memory use to around the same figure, at the expense of your
compression ratio.  In short, if your machine is low on memory
(8 megabytes or less), use \-s for everything.  See MEMORY MANAGEMENT
below.
.TP
.B \-q --quiet
Suppress non-essential warning messages.  Messages pertaining to
I/O errors and other critical events will not be suppressed.
.TP
.B \-v --verbose
Verbose mode -- show the compression ratio for each file processed.
Further \-v's increase the verbosity level, spewing out lots of
information which is primarily of interest for diagnostic purposes.
.TP
.B \-L --license -V --version
Display the software version, license terms and conditions.
.TP
.B \-1 (or \-\-fast) to \-9 (or \-\-best)
Set the block size to 100 k, 200 k ..  900 k when compressing.  Has no
effect when decompressing.  See MEMORY MANAGEMENT below.
The \-\-fast and \-\-best aliases are primarily for GNU gzip
compatibility.  In particular, \-\-fast doesn't make things
significantly faster.  And \-\-best merely selects the default
behaviour.
.TP
.B \--
Treats all subsequent arguments as file names, even if they start
with a dash.  This is so you can handle files with names beginning
with a dash, for example: bzip2 \-- \-myfilename.
.TP
.B \--repetitive-fast --repetitive-best
These flags are redundant in versions 0.9.5 and above.  They provided
some coarse control over the behaviour of the sorting algorithm in
earlier versions, which was sometimes useful.  0.9.5 and above have an
improved algorithm which renders these flags irrelevant.
.SH MEMORY MANAGEMENT
.I bzip2
compresses large files in blocks.  The block size affects both the
compression ratio achieved, and the amount of memory needed for
compression and decompression.  The flags \-1 through \-9 specify the
block size to be 100,000 bytes through 900,000 bytes (the default)
respectively.  At decompression time, the block size used for
compression is read from the header of the compressed file, and
.I bunzip2
then allocates itself just enough memory to decompress the file.
Since block sizes are stored in compressed files, it follows that the
flags \-1 to \-9 are irrelevant to and so ignored during
decompression.

Compression and decompression requirements, in bytes, can be estimated
as:

       Compression:   400k + ( 8 x block size )

       Decompression: 100k + ( 4 x block size ), or
                      100k + ( 2.5 x block size )

Larger block sizes give rapidly diminishing marginal returns.  Most of
the compression comes from the first two or three hundred k of block
size, a fact worth bearing in mind when using
.I bzip2
on small machines.  It is also important to appreciate that the
decompression memory requirement is set at compression time by the
choice of block size.

For files compressed with the default 900k block size,
.I bunzip2
will require about 3700 kbytes to decompress.  To support decompression
of any file on a 4 megabyte machine,
.I bunzip2
has an option to decompress using approximately half this amount of
memory, about 2300 kbytes.  Decompression speed is also halved, so you
should use this option only where necessary.  The relevant flag is -s.

In general, try and use the largest block size memory constraints
allow, since that maximises the compression achieved.  Compression and
decompression speed are virtually unaffected by block size.
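
As an illustration of the figures above (the file name is only a
placeholder for this example), a machine with very little memory might
compress with the smallest block size and later decompress with the
reduced-memory flag:

       # "somefile" is a placeholder name
       bzip2 -1 somefile
       bunzip2 -s somefile.bz2

This keeps compression memory to roughly 400k + 8 x 100k = 1200k and
decompression memory to roughly 100k + 2.5 x 100k = 350k, matching
the \-1 row of the table below.
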
Another significant point applies to files which fit in a single block
-- that means most files you'd encounter using a large block size.  The
amount of real memory touched is proportional to the size of the file,
since the file is smaller than a block.  For example, compressing a
file 20,000 bytes long with the flag -9 will cause the compressor to
allocate around 7600k of memory, but only touch 400k + 20000 * 8 = 560
kbytes of it.  Similarly, the decompressor will allocate 3700k but
only touch 100k + 20000 * 4 = 180 kbytes.

Here is a table which summarises the maximum memory usage for
different block sizes.  Also recorded is the total compressed size for
14 files of the Calgary Text Compression Corpus totalling 3,141,622
bytes.  This column gives some feel for how compression varies with
block size.  These figures tend to understate the advantage of larger
block sizes for larger files, since the Corpus is dominated by smaller
files.

           Compress   Decompress   Decompress   Corpus
    Flag     usage      usage       -s usage     Size

     -1      1200k       500k         350k      914704
     -2      2000k       900k         600k      877703
     -3      2800k      1300k         850k      860338
     -4      3600k      1700k        1100k      846899
     -5      4400k      2100k        1350k      845160
     -6      5200k      2500k        1600k      838626
     -7      6100k      2900k        1850k      834096
     -8      6800k      3300k        2100k      828642
     -9      7600k      3700k        2350k      828642
.SH RECOVERING DATA FROM DAMAGED FILES
.I bzip2
compresses files in blocks, usually 900kbytes long.  Each block is
handled independently.  If a media or transmission error causes a
multi-block .bz2 file to become damaged, it may be possible to recover
data from the undamaged blocks in the file.

The compressed representation of each block is delimited by a 48-bit
pattern, which makes it possible to find the block boundaries with
reasonable certainty.  Each block also carries its own 32-bit CRC, so
damaged blocks can be distinguished from undamaged ones.

.I bzip2recover
is a simple program whose purpose is to search for blocks in .bz2
files, and write each block out into its own .bz2 file.  You can then
use
.I bzip2 \-t
to test the integrity of the resulting files, and decompress those
which are undamaged.

.I bzip2recover
takes a single argument, the name of the damaged file, and writes a
number of files "rec00001file.bz2", "rec00002file.bz2", etc,
containing the extracted blocks.  The output filenames are designed so
that the use of wildcards in subsequent processing -- for example,
"bzip2 -dc rec*file.bz2 > recovered_data" -- processes the files in
the correct order.

.I bzip2recover
should be of most use dealing with large .bz2 files, as these will
contain many blocks.  It is clearly futile to use it on damaged
single-block files, since a damaged block cannot be recovered.  If you
wish to minimise any potential data loss through media or transmission
errors, you might consider compressing with a smaller block size.
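
As an illustrative sketch of the recovery procedure just described
(the damaged file is assumed, purely for this example, to be named
file.bz2), one might run:

       # "file.bz2" is a placeholder name for the damaged file
       bzip2recover file.bz2
       bzip2 -t rec*file.bz2
       bzip2 -dc rec*file.bz2 > recovered_data

discarding any rec*file.bz2 pieces which fail the integrity test
before the final step, so that only undamaged blocks contribute to
recovered_data.
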
.SH PERFORMANCE NOTES
The sorting phase of compression gathers together similar strings in
the file.  Because of this, files containing very long runs of
repeated symbols, like "aabaabaabaab ..."  (repeated several hundred
times) may compress more slowly than normal.  Versions 0.9.5 and above
fare much better than previous versions in this respect.  The ratio
between worst-case and average-case compression time is in the region
of 10:1.  For previous versions, this figure was more like 100:1.  You
can use the \-vvvv option to monitor progress in great detail, if you
want.

Decompression speed is unaffected by these phenomena.

.I bzip2
usually allocates several megabytes of memory to operate in, and then
charges all over it in a fairly random fashion.  This means that
performance, both for compressing and decompressing, is largely
determined by the speed at which your machine can service cache
misses.  Because of this, small changes to the code to reduce the miss
rate have been observed to give disproportionately large performance
improvements.  I imagine
.I bzip2
will perform best on machines with very large caches.
.SH CAVEATS
I/O error messages are not as helpful as they could be.
.I bzip2
tries hard to detect I/O errors and exit cleanly, but the details of
what the problem is sometimes seem rather misleading.

This manual page pertains to version 1.0.2 of
.I bzip2.
Compressed data created by this version is entirely forwards and
backwards compatible with the previous public releases, versions
0.1pl2, 0.9.0, 0.9.5, 1.0.0 and 1.0.1, but with the following
exception: 0.9.0 and above can correctly decompress multiple
concatenated compressed files.  0.1pl2 cannot do this; it will stop
after decompressing just the first file in the stream.

.I bzip2recover
versions prior to this one, 1.0.2, used 32-bit integers to represent
bit positions in compressed files, so it could not handle compressed
files more than 512 megabytes long.  Version 1.0.2 and above uses
64-bit ints on some platforms which support them (GNU supported
targets, and Windows).  To establish whether or not bzip2recover was
built with such a limitation, run it without arguments.  In any event
you can build yourself an unlimited version if you can recompile it
with MaybeUInt64 set to be an unsigned 64-bit integer.
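
As a small illustration of the concatenation behaviour noted in
CAVEATS above (all file names are placeholders), with version 0.9.0 or
later two compressed files can be joined and decompressed in one go:

       # file1 and file2 are placeholder names
       bzip2 file1 file2
       cat file1.bz2 file2.bz2 > joined.bz2
       bunzip2 -c joined.bz2 > joined

The output is the concatenation of the two original files; 0.1pl2
would stop after decompressing the first stream.
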
.SH AUTHOR
Julian Seward, jseward@acm.org.

http://sources.redhat.com/bzip2

The ideas embodied in
.I bzip2
are due to (at least) the following people: Michael Burrows and David
Wheeler (for the block sorting transformation), David Wheeler (again,
for the Huffman coder), Peter Fenwick (for the structured coding model
in the original
.I bzip,
and many refinements), and Alistair Moffat, Radford Neal and Ian
Witten (for the arithmetic coder in the original
.I bzip).
I am much indebted for their help, support and advice.  See the manual
in the source distribution for pointers to sources of documentation.
Christian von Roques encouraged me to look for faster sorting
algorithms, so as to speed up compression.  Bela Lubkin encouraged me
to improve the worst-case compression performance.  The bz* scripts
are derived from those of GNU gzip.  Many people sent patches, helped
with portability problems, lent machines, gave advice and were
generally helpful.