
bzip2.1.preformatted.svn-base

       above.  They provided some coarse control over the behaviour of the
       sorting algorithm in earlier versions, which was sometimes useful.
       0.9.5 and above have an improved algorithm which renders these
       flags irrelevant.

MEMORY MANAGEMENT
       bzip2 compresses large files in blocks.  The block size affects
       both the compression ratio achieved and the amount of memory needed
       for compression and decompression.  The flags -1 through -9 specify
       the block size to be 100,000 bytes through 900,000 bytes (the
       default) respectively.  At decompression time, the block size used
       for compression is read from the header of the compressed file, and
       bunzip2 then allocates itself just enough memory to decompress the
       file.  Since block sizes are stored in compressed files, it follows
       that the flags -1 to -9 are irrelevant to, and so ignored during,
       decompression.

       Compression and decompression requirements, in bytes, can be
       estimated as:

              Compression:   400k + ( 8 x block size )

              Decompression: 100k + ( 4 x block size ), or
                             100k + ( 2.5 x block size )

       Larger block sizes give rapidly diminishing marginal returns.  Most
       of the compression comes from the first two or three hundred k of
       block size, a fact worth bearing in mind when using bzip2 on small
       machines.  It is also important to appreciate that the
       decompression memory requirement is set at compression time by the
       choice of block size.

       For files compressed with the default 900k block size, bunzip2 will
       require about 3700 kbytes to decompress.  To support decompression
       of any file on a 4 megabyte machine, bunzip2 has an option to
       decompress using approximately half this amount of memory, about
       2300 kbytes.  Decompression speed is also halved, so you should use
       this option only where necessary.  The relevant flag is -s.

       In general, try and use the largest block size memory constraints
       allow, since that maximises the compression achieved.  Compression
       and decompression speed are virtually unaffected by block size.

       Another significant point applies to files which fit in a single
       block -- that means most files you'd encounter using a large block
       size.  The amount of real memory touched is proportional to the
       size of the file, since the file is smaller than a block.  For
       example, compressing a file 20,000 bytes long with the flag -9 will
       cause the compressor to allocate around 7600k of memory, but only
       touch 400k + 20000 * 8 = 560 kbytes of it.  Similarly, the
       decompressor will allocate 3700k but only touch 100k + 20000 * 4 =
       180 kbytes.
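       As an illustration of the formulas above, the estimates can be
       computed with a small shell function.  This helper is not part of
       bzip2; the name bzmem is hypothetical.

              # Hypothetical helper (not shipped with bzip2): estimate
              # memory usage, in kbytes, from the formulas above.
              # Usage: bzmem 9    (for flag -9, i.e. a 900k block size)
              bzmem () {
                  block=$(( $1 * 100 ))          # block size in kbytes
                  echo "compress:      $(( 400 + 8 * block ))k"
                  echo "decompress:    $(( 100 + 4 * block ))k"
                  echo "decompress -s: $(( 100 + block * 5 / 2 ))k"
              }

       For -9 this prints 7600k, 3700k and 2350k, matching the -9 row of
       the table below.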
       Here is a table which summarises the maximum memory usage for
       different block sizes.  Also recorded is the total compressed size
       for 14 files of the Calgary Text Compression Corpus totalling
       3,141,622 bytes.  This column gives some feel for how compression
       varies with block size.  These figures tend to understate the
       advantage of larger block sizes for larger files, since the Corpus
       is dominated by smaller files.

                  Compress   Decompress   Decompress   Corpus
           Flag     usage      usage       -s usage     Size

            -1      1200k       500k         350k      914704
            -2      2000k       900k         600k      877703
            -3      2800k      1300k         850k      860338
            -4      3600k      1700k        1100k      846899
            -5      4400k      2100k        1350k      845160
            -6      5200k      2500k        1600k      838626
            -7      6100k      2900k        1850k      834096
            -8      6800k      3300k        2100k      828642
            -9      7600k      3700k        2350k      828642

RECOVERING DATA FROM DAMAGED FILES
       bzip2 compresses files in blocks, usually 900 kbytes long.  Each
       block is handled independently.  If a media or transmission error
       causes a multi-block .bz2 file to become damaged, it may be
       possible to recover data from the undamaged blocks in the file.

       The compressed representation of each block is delimited by a
       48-bit pattern, which makes it possible to find the block
       boundaries with reasonable certainty.  Each block also carries its
       own 32-bit CRC, so damaged blocks can be distinguished from
       undamaged ones.

       bzip2recover is a simple program whose purpose is to search for
       blocks in .bz2 files, and write each block out into its own .bz2
       file.  You can then use bzip2 -t to test the integrity of the
       resulting files, and decompress those which are undamaged.

       bzip2recover takes a single argument, the name of the damaged
       file, and writes a number of files "rec00001file.bz2",
       "rec00002file.bz2", etc., containing the extracted blocks.  The
       output filenames are designed so that the use of wildcards in
       subsequent processing -- for example, "bzip2 -dc rec*file.bz2 >
       recovered_data" -- processes the files in the correct order.

       bzip2recover should be of most use dealing with large .bz2 files,
       as these will contain many blocks.  It is clearly futile to use it
       on damaged single-block files, since a damaged block cannot be
       recovered.  If you wish to minimise any potential data loss through
       media or transmission errors, you might consider compressing with a
       smaller block size.
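       By way of example, a recovery session might look like the sketch
       below; the filename damaged.bz2 is illustrative.

              # Split the damaged archive into one .bz2 file per block;
              # this writes rec00001damaged.bz2, rec00002damaged.bz2, ...
              bzip2recover damaged.bz2

              # Test each extracted block; note which ones fail.
              bzip2 -t rec*damaged.bz2

              # Decompress the undamaged blocks, in order, into one file.
              bzip2 -dc rec*damaged.bz2 > recovered_data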
PERFORMANCE NOTES
       The sorting phase of compression gathers together similar strings
       in the file.  Because of this, files containing very long runs of
       repeated symbols, like "aabaabaabaab ..." (repeated several hundred
       times) may compress more slowly than normal.  Versions 0.9.5 and
       above fare much better than previous versions in this respect.  The
       ratio between worst-case and average-case compression time is in
       the region of 10:1.  For previous versions, this figure was more
       like 100:1.  You can use the -vvvv option to monitor progress in
       great detail, if you want.

       Decompression speed is unaffected by these phenomena.

       bzip2 usually allocates several megabytes of memory to operate in,
       and then charges all over it in a fairly random fashion.  This
       means that performance, both for compressing and decompressing, is
       largely determined by the speed at which your machine can service
       cache misses.  Because of this, small changes to the code to reduce
       the miss rate have been observed to give disproportionately large
       performance improvements.  I imagine bzip2 will perform best on
       machines with very large caches.

CAVEATS
       I/O error messages are not as helpful as they could be.  bzip2
       tries hard to detect I/O errors and exit cleanly, but the details
       of what the problem is sometimes seem rather misleading.

       This manual page pertains to version 1.0.3 of bzip2.  Compressed
       data created by this version is entirely forwards and backwards
       compatible with the previous public releases, versions 0.1pl2,
       0.9.0, 0.9.5, 1.0.0, 1.0.1 and 1.0.2, but with the following
       exception: 0.9.0 and above can correctly decompress multiple
       concatenated compressed files.  0.1pl2 cannot do this; it will stop
       after decompressing just the first file in the stream.
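       For instance (filenames illustrative), the concatenation behaviour
       can be seen as follows:

              # Two compressed streams appended into a single .bz2 file.
              bzip2 -c part1  > both.bz2
              bzip2 -c part2 >> both.bz2

              # 0.9.0 and above emit part1 followed by part2;
              # 0.1pl2 would stop after part1.
              bunzip2 -c both.bz2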
       bzip2recover versions prior to 1.0.2 used 32-bit integers to
       represent bit positions in compressed files, so they could not
       handle compressed files more than 512 megabytes long.  Versions
       1.0.2 and above use 64-bit ints on some platforms which support
       them (GNU supported targets, and Windows).  To establish whether or
       not bzip2recover was built with such a limitation, run it without
       arguments.  In any event you can build yourself an unlimited
       version if you can recompile it with MaybeUInt64 set to be an
       unsigned 64-bit integer.

AUTHOR
       Julian Seward, jseward@bzip.org.

       http://www.bzip.org

       The ideas embodied in bzip2 are due to (at least) the following
       people: Michael Burrows and David Wheeler (for the block sorting
       transformation), David Wheeler (again, for the Huffman coder),
       Peter Fenwick (for the structured coding model in the original
       bzip, and many refinements), and Alistair Moffat, Radford Neal and
       Ian Witten (for the arithmetic coder in the original bzip).  I am
       much indebted for their help, support and advice.  See the manual
       in the source distribution for pointers to sources of
       documentation.  Christian von Roques encouraged me to look for
       faster sorting algorithms, so as to speed up compression.  Bela
       Lubkin encouraged me to improve the worst-case compression
       performance.  Donna Robinson XMLised the documentation.  The bz*
       scripts are derived from those of GNU gzip.  Many people sent
       patches, helped with portability problems, lent machines, gave
       advice and were generally helpful.

                                                                 bzip2(1)