bzip2.txt

From cygwin, the well-known environment that emulates a Unix operating system under Win32.
       --repetitive-fast --repetitive-best
              These flags are redundant in versions 0.9.5 and
              above.  They provided some coarse control over the
              behaviour of the sorting algorithm in earlier
              versions, which was sometimes useful.  0.9.5 and
              above have an improved algorithm which renders these
              flags irrelevant.

MEMORY MANAGEMENT
       bzip2 compresses large files in blocks.  The block size
       affects both the compression ratio achieved, and the amount
       of memory needed for compression and decompression.  The
       flags -1 through -9 specify the block size to be 100,000
       bytes through 900,000 bytes (the default) respectively.  At
       decompression time, the block size used for compression is
       read from the header of the compressed file, and bunzip2
       then allocates itself just enough memory to decompress the
       file.  Since block sizes are stored in compressed files, it
       follows that the flags -1 to -9 are irrelevant to and so
       ignored during decompression.

       Compression and decompression requirements, in bytes, can be
       estimated as:

              Compression:   400k + ( 8 x block size )
              Decompression: 100k + ( 4 x block size ), or
                             100k + ( 2.5 x block size )

       Larger block sizes give rapidly diminishing marginal
       returns.  Most of the compression comes from the first two
       or three hundred k of block size, a fact worth bearing in
       mind when using bzip2 on small machines.  It is also
       important to appreciate that the decompression memory
       requirement is set at compression time by the choice of
       block size.

       For files compressed with the default 900k block size,
       bunzip2 will require about 3700 kbytes to decompress.  To
       support decompression of any file on a 4 megabyte machine,
       bunzip2 has an option to decompress using approximately half
       this amount of memory, about 2300 kbytes.  Decompression
       speed is also halved, so you should use this option only
       where necessary.  The relevant flag is -s.

       In general, try and use the largest block size memory
       constraints allow, since that maximises the compression
       achieved.  Compression and decompression speed are virtually
       unaffected by block size.

       Another significant point applies to files which fit in a
       single block -- that means most files you'd encounter using
       a large block size.  The amount of real memory touched is
       proportional to the size of the file, since the file is
       smaller than a block.  For example, compressing a file
       20,000 bytes long with the flag -9 will cause the compressor
       to allocate around 7600k of memory, but only touch 400k +
       20000 * 8 = 560 kbytes of it.  Similarly, the decompressor
       will allocate 3700k but only touch 100k + 20000 * 4 = 180
       kbytes.
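       To make the arithmetic concrete, here is a small, purely
       illustrative C program which evaluates the three formulas
       above for each of the -1 to -9 block sizes.  It is not part
       of bzip2 or libbzip2, and the helper name estimate_memory is
       invented for this sketch.  For -9 it reproduces the 7600k,
       3700k and 2350k figures quoted in this section.

           #include <stdio.h>

           /* Illustrative only -- not part of bzip2 or libbzip2.
              block_size_100k corresponds to the -1 .. -9 flags,
              i.e. a block size of block_size_100k * 100,000 bytes. */
           static void estimate_memory(int block_size_100k)
           {
               long block = block_size_100k * 100000L;

               long compress   = 400000L + 8L * block;
               long decompress = 100000L + 4L * block;
               long decomp_s   = 100000L + (25L * block) / 10;  /* -s */

               printf("-%d: compress ~%ldk, decompress ~%ldk, "
                      "decompress -s ~%ldk\n",
                      block_size_100k, compress / 1000,
                      decompress / 1000, decomp_s / 1000);
           }

           int main(void)
           {
               for (int i = 1; i <= 9; i++)
                   estimate_memory(i);
               return 0;
           }

       The same 1-to-9 value is what libbzip2 takes as the
       blockSize100k argument to BZ2_bzCompressInit(), so the
       estimates are as relevant to library users as to users of
       the command-line flags.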
       Here is a table which summarises the maximum memory usage
       for different block sizes.  Also recorded is the total
       compressed size for 14 files of the Calgary Text Compression
       Corpus totalling 3,141,622 bytes.  This column gives some
       feel for how compression varies with block size.  These
       figures tend to understate the advantage of larger block
       sizes for larger files, since the Corpus is dominated by
       smaller files.

               Compress   Decompress   Decompress   Corpus
        Flag     usage      usage       -s usage     Size

         -1      1200k       500k         350k      914704
         -2      2000k       900k         600k      877703
         -3      2800k      1300k         850k      860338
         -4      3600k      1700k        1100k      846899
         -5      4400k      2100k        1350k      845160
         -6      5200k      2500k        1600k      838626
         -7      6100k      2900k        1850k      834096
         -8      6800k      3300k        2100k      828642
         -9      7600k      3700k        2350k      828642

RECOVERING DATA FROM DAMAGED FILES
       bzip2 compresses files in blocks, usually 900kbytes long.
       Each block is handled independently.  If a media or
       transmission error causes a multi-block .bz2 file to become
       damaged, it may be possible to recover data from the
       undamaged blocks in the file.

       The compressed representation of each block is delimited by
       a 48-bit pattern, which makes it possible to find the block
       boundaries with reasonable certainty.  Each block also
       carries its own 32-bit CRC, so damaged blocks can be
       distinguished from undamaged ones.

       bzip2recover is a simple program whose purpose is to search
       for blocks in .bz2 files, and write each block out into its
       own .bz2 file.  You can then use bzip2 -t to test the
       integrity of the resulting files, and decompress those which
       are undamaged.

       bzip2recover takes a single argument, the name of the
       damaged file, and writes a number of files "rec0001file.bz2",
       "rec0002file.bz2", etc, containing the extracted blocks.
       The output filenames are designed so that the use of
       wildcards in subsequent processing -- for example, "bzip2
       -dc rec*file.bz2 > recovered_data" -- lists the files in the
       correct order.

       bzip2recover should be of most use dealing with large .bz2
       files, as these will contain many blocks.  It is clearly
       futile to use it on damaged single-block files, since a
       damaged block cannot be recovered.  If you wish to minimise
       any potential data loss through media or transmission
       errors, you might consider compressing with a smaller block
       size.
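       Here is a rough sketch, in C, of the kind of search
       described above.  It is not taken from the bzip2recover
       sources; the two constants are the 48-bit block-header and
       end-of-stream patterns used by the .bz2 format, while
       everything else is invented for this example.  The program
       scans a file bit by bit and prints the bit offset of every
       boundary pattern it finds.

           #include <stdio.h>
           #include <stdint.h>

           /* Illustrative sketch only, not bzip2recover itself:
              find the 48-bit block-header and end-of-stream
              patterns in a .bz2 file and print their bit offsets. */
           #define BLOCK_MAGIC 0x314159265359ULL  /* block header  */
           #define EOS_MAGIC   0x177245385090ULL  /* end of stream */

           int main(int argc, char **argv)
           {
               if (argc != 2) {
                   fprintf(stderr, "usage: %s file.bz2\n", argv[0]);
                   return 1;
               }
               FILE *f = fopen(argv[1], "rb");
               if (!f) { perror(argv[1]); return 1; }

               uint64_t window = 0;   /* last 48 bits seen */
               uint64_t bitpos = 0;   /* bits read so far  */
               int c;

               while ((c = fgetc(f)) != EOF) {
                   for (int b = 7; b >= 0; b--) {   /* MSB first */
                       window = ((window << 1) | ((c >> b) & 1))
                                & 0xFFFFFFFFFFFFULL;
                       bitpos++;
                       if (bitpos < 48)
                           continue;
                       if (window == BLOCK_MAGIC)
                           printf("block header at bit %llu\n",
                                  (unsigned long long)(bitpos - 48));
                       else if (window == EOS_MAGIC)
                           printf("end of stream at bit %llu\n",
                                  (unsigned long long)(bitpos - 48));
                   }
               }
               fclose(f);
               return 0;
           }

       Unlike the real program, this sketch does not check the
       per-block CRCs or write the blocks back out, and it keeps
       the bit position in a 64-bit counter; bzip2recover stores
       bit positions in 32-bit integers, which is the source of the
       512 megabyte limit noted under CAVEATS below.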
PERFORMANCE NOTES
       The sorting phase of compression gathers together similar
       strings in the file.  Because of this, files containing very
       long runs of repeated symbols, like "aabaabaabaab ..."
       (repeated several hundred times) may compress more slowly
       than normal.  Versions 0.9.5 and above fare much better than
       previous versions in this respect.  The ratio between
       worst-case and average-case compression time is in the
       region of 10:1.  For previous versions, this figure was more
       like 100:1.  You can use the -vvvv option to monitor
       progress in great detail, if you want.

       Decompression speed is unaffected by these phenomena.

       bzip2 usually allocates several megabytes of memory to
       operate in, and then charges all over it in a fairly random
       fashion.  This means that performance, both for compressing
       and decompressing, is largely determined by the speed at
       which your machine can service cache misses.  Because of
       this, small changes to the code to reduce the miss rate have
       been observed to give disproportionately large performance
       improvements.  I imagine bzip2 will perform best on machines
       with very large caches.

CAVEATS
       I/O error messages are not as helpful as they could be.
       bzip2 tries hard to detect I/O errors and exit cleanly, but
       the details of what the problem is sometimes seem rather
       misleading.

       This manual page pertains to version 1.0 of bzip2.
       Compressed data created by this version is entirely forwards
       and backwards compatible with the previous public releases,
       versions 0.1pl2, 0.9.0 and 0.9.5, but with the following
       exception: 0.9.0 and above can correctly decompress multiple
       concatenated compressed files.  0.1pl2 cannot do this; it
       will stop after decompressing just the first file in the
       stream.

       bzip2recover uses 32-bit integers to represent bit positions
       in compressed files, so it cannot handle compressed files
       more than 512 megabytes long.  This could easily be fixed.

AUTHOR
       Julian Seward, jseward@acm.org.

       http://sourceware.cygnus.com/bzip2
       http://www.muraroa.demon.co.uk

       The ideas embodied in bzip2 are due to (at least) the
       following people: Michael Burrows and David Wheeler (for the
       block sorting transformation), David Wheeler (again, for the
       Huffman coder), Peter Fenwick (for the structured coding
       model in the original bzip, and many refinements), and
       Alistair Moffat, Radford Neal and Ian Witten (for the
       arithmetic coder in the original bzip).  I am much indebted
       for their help, support and advice.  See the manual in the
       source distribution for pointers to sources of
       documentation.  Christian von Roques encouraged me to look
       for faster sorting algorithms, so as to speed up
       compression.  Bela Lubkin encouraged me to improve the
       worst-case compression performance.  Many people sent
       patches, helped with portability problems, lent machines,
       gave advice and were generally helpful.
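       The worst-case input described under PERFORMANCE NOTES is
       easy to reproduce.  The harness below is a sketch only, not
       part of bzip2: it assumes libbzip2 is installed and uses its
       BZ2_bzBuffToBuffCompress entry point, and the buffer size
       and timing choices are invented for this example.  It
       compresses a few megabytes of "aab" repeated with the
       default 900k block size and reports how long that took.

           #include <stdio.h>
           #include <stdlib.h>
           #include <time.h>
           #include <bzlib.h>            /* link with -lbz2 */

           /* Experiment harness only, not part of bzip2: compress
              a highly repetitive buffer ("aab" repeated) with the
              default 900k block size and time it. */
           int main(void)
           {
               const unsigned int n = 5U * 1000U * 1000U;  /* 5 MB */
               char *src = malloc(n);
               if (src == NULL) return 1;
               for (unsigned int i = 0; i < n; i++)
                   src[i] = "aab"[i % 3];

               /* output bound suggested by the libbzip2 manual:
                  1% larger than the input, plus 600 bytes */
               unsigned int dst_len = n + n / 100 + 600;
               char *dst = malloc(dst_len);
               if (dst == NULL) return 1;

               clock_t t0 = clock();
               int rc = BZ2_bzBuffToBuffCompress(dst, &dst_len,
                                                 src, n,
                                                 9,  /* as with -9 */
                                                 0,  /* verbosity  */
                                                 0); /* workFactor */
               clock_t t1 = clock();

               if (rc != BZ_OK) {
                   fprintf(stderr, "compression failed: %d\n", rc);
                   return 1;
               }
               printf("%u bytes -> %u bytes in %.2f s\n", n, dst_len,
                      (double)(t1 - t0) / CLOCKS_PER_SEC);
               free(src);
               free(dst);
               return 0;
           }

       Building it with something like "cc harness.c -lbz2" and
       comparing its timing against an equally sized buffer of
       random bytes gives a feel for the worst-case to average-case
       ratio quoted above.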
