FAQ

A client/server backup system, cross-platform: supports Linux, AIX, IRIX, FreeBSD, Digital Unix (OSF1), Solaris and HP-UX.
     When a non-correctable write error occurs, the complete frame
     is invalidated and rewritten to the next piece of tape able to
     hold it. Thus the usable capacity decreases continuously;
     according to HP officials this is a normal side effect of the
     DAT technology.
     The device can evaluate certain hints pointing to dirty
     read/write heads. A message can then be transmitted to the
     device driver, and this way up to some user, who should insert
     a cleaning tape. But by the time the device detects a dirty
     head and transmits the notification, it is regularly and
     usually much too late: read and write errors might already
     have produced unusable data on tape or led to wrong tape file
     mark counting, as stated in FAQ Q11.
     I'd like to summarize this under the normal bullshitting that
     is established today in the computer business (and others).
     Special thanks to Micro$oft, whose one and only incredibly
     great feat is IMHO to have driven the users' pain threshold
     to heights never reached before. Does anyone believe a single
     word from them any more?

     Other sources say about DDS2-4: DDS2-conformant drives (and
     higher) must be able to perform hardware compression. Having
     written an already compressed file to a DDS4 tape, the mt tool
     of the dds2tar package reports that indeed 20 GB of data have
     been put on the tape. So here (at least with new tapes, I (af)
     guess) the specs are fulfilled.

Q26: Tape handling seems not to work at all, what's wrong?

A26: Nothing seems to work, you get error messages you don't
     understand, and in the server-side log there are messages like:

     Tue May 25 15:46:55 1999, Error: Input/output error, only -1 bytes read, trying to continue.
     Tue May 25 16:47:31 1999, Warning: Expected cartridge 3 in drive, but have 2.
     Tue May 25 16:58:31 1999, Error: Input/output error, only -1 bytes read, trying to continue.
     Tue May 25 17:20:12 1999, Internal error: Device should be open, fd = -10.
     Tue May 25 17:21:24 1999, Error: Input/output error, only -1 bytes read, trying to continue.

     This probably means that the program configured for setting a
     tape file (SetFile-Command:) does not work. Either you have
     supplied something syntactically incorrect, or you are using
     RedHat Linux-5.2: the mt command of this distribution and
     version is broken. Solution: update to a newer version of mt;
     0.5b reportedly works.

Q27: How can I change the compression configuration?

A27: Basically the compression level can be changed at any time,
     but with the algorithm, and with afbackup version 3.2.6 or
     older, it is a different story.
     The only problem is that the filename logfiles (in other
     words: the index files) are compressed, and changing the
     uncompress algorithm makes them unreadable. With afbackup
     3.2.7 or higher, for each index file the appropriate unprocess
     command is saved into a file with the same name as the index
     file, except that it has a leading dot (thus hidden). A
     problem arises with indexes without such a hidden file. The
     solution is to uncompress them with the old algorithm into
     files that do not have the trailing .z; the existing .z files
     must be removed or moved out of the way. When running the next
     backup, the current file will automatically be compressed.
     Alternatively, the uncompressed files can be compressed into
     new .z files with the new compression algorithm; in that case
     the files without the trailing .z must be removed.
     When using built-in compression there is a little problem
     here: a program is needed that performs the same algorithm as
     the built-in compression. Such a program comes with the
     distribution and is installed as the helper program __z in the
     client-side .../client/bin directory.
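For the case described above of index files compressed with an old external algorithm and lacking the hidden dot-file, the conversion can be sketched as a small script. This is only an illustration under assumptions: gzip stands in for whatever the old compress command actually was, and the index filename is made up; run in a scratch directory so it is self-contained.

```shell
# Sketch of the Q27 index conversion, demonstrated in a scratch directory.
# Assumption: gzip was the old external compression algorithm (hypothetical).
set -e
dir=$(mktemp -d)
cd "$dir"
# Stand-in for an index file written with the old algorithm:
printf 'file list for backup 135\n' | gzip -c > backup_log.135.z

for f in *.z ; do
    gzip -dc "$f" > "${f%.z}"   # uncompress with the OLD algorithm
    rm "$f"                     # remove the .z so it cannot be misread later
done

cat backup_log.135
```

After checking the uncompressed contents, the next backup run (or the new compression command applied by hand) produces fresh .z files with the new algorithm.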
     The synopsis of __z is:

      __z [ -{123456789|d} ]

      __z [ -123456789 ]  compresses standard input to standard
                          output using the given compression level
      __z -d              uncompresses standard input to standard
                          output

     Having configured built-in compression AND a compress and
     uncompress command, a pipe must be typed to get the desired
     result. Keep in mind that during compression the command
     processes the data first and the built-in compression (or the
     __z program) is applied afterwards. To uncompress the index
     files, e.g., the following command is necessary:

      /path/to/client/bin/__z -d < backup_log.135.z | \
             /path/to/client/bin/__descrpt -d > backup_log.135

     It is a good idea to check the contents of the uncompressed
     file before removing the compressed version.
     For the files saved in backups a change of the compression
     algorithm is irrelevant, because the name of the program that
     performs the appropriate uncompression (or the built-in
     uncompress) is written with each file into the backup.

Q28: Why does my Linux kernel Oops during afbackup operation?

A28: Reportedly, on some machines/OS versions the scientific
     functions in the trivial (non-DES) authentication code are
     causing the problems. Thus, when afbackup is compiled with DES
     encryption enabled, the problems are gone. The libm should not
     be the problem, as it operates at process/application level; a
     better candidate is kernel math emulation.
     Solutions: * Recompile the kernel with math emulation
                  disabled. This should be possible with all
                  non-stone-age processors (Intel chips >= 486,
                  any PPC, MIPS >= R3000, any sparc sun4,
                  Motorola >= 68030 ...).
                * Get the current libdes and link it in on all
                  servers and clients. This also enhances security.

Q29: Why does afbackup not use tar as packing format?

A29: tar is a format that I don't have control of, and that lacks
     several features I and other users need to have. Examples:
      - per-file compression
      - arbitrary per-file preprocessing
      - file contents saving
      - saving ACLs
      - saving command output (for database support)
     I (too) often read: "In emergency cases I want to restore with
     a common command like tar or cpio, because then afbackup won't
     help me / won't be available", with no argumentation. This is
     nonsense: in emergency cases afbackup is still available. The
     client-side program afclient can be used very similarly to
     tar. Thus, when using the single-stream server, you can
     recover from tape without the afserver with something like the
     following (replace the value after bs= with the configured
     blocksize, and get the tape file number where the desired
     files can be found from the index file; it is prefixed with
     hostname%port!cartridgenumber.tapefilenumber):

      cd /where/to/restore
      mt -f /dev/nst0 fsf <tapefilenumber>
      sh -c 'while true ; do dd if=/dev/nst0 bs=<blocksize> ; done' | \
           /path/to/client/bin/afclient -xarvg -f-

     RTFM about afclient (e.g. /path/to/client/bin/afclient -h)
     and dd. Don't mistype if= as of=; for safety take away the
     write permission from the tape device or use the cartridge's
     hardware mechanism to prevent overwriting.
     When using the multi-stream server, the tape format must be
     multiplexed, so it will never be the raw packer's format.
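A tape drive cannot be assumed here, so the following sketch only replays the shape of the single-stream recipe above against an ordinary file standing in for /dev/nst0, with cat standing in for afclient (both substitutions are for illustration only). The "while true" dd loop of the real recipe exists to keep reading past tape file marks on an actual tape device; on a plain file a single dd suffices.

```shell
# Sketch: the shape of the raw-restore pipeline from the recipe above.
# A plain file stands in for /dev/nst0 and cat for afclient -xarvg -f-.
set -e
tape=$(mktemp)
printf 'archive stream from tape file 7\n' > "$tape"
# dd reads the "tape" in fixed-size blocks; the consumer sees one stream:
dd if="$tape" bs=512 2>/dev/null | cat > restored.out
cat restored.out
rm -f "$tape"
```

The point is only the plumbing: on a real drive, dd's block size must match the configured blocksize, and afclient unpacks the stream it receives on standard input.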
     The multiplexing means it would not help in any way if the
     inner format were tar or cpio or whatever; you need to go
     through the multi-stream server to get back to the original
     format.

Q30: How to recover directly from tape without afclient/afserver?

A30: See Q29.

Q31: Why do files get truncated to multiples of 8192 bytes during
     restore?

A31: This happens only on Linux with the zlib shipped with recent
     (late 1999) distributions (Debian or RedHat reportedly) linked
     in. I was unable to reproduce the problem on my Linux boxes
     (SuSE 5.2 and 6.2) or on any other platform, where I always
     built the zlib myself (1.0.4, 1.1.2 or 1.1.3). I have the
     suspicion that the shipped header zlib.h does not fit the data
     representation expected in calls to functions in the delivered
     libz.a or libz.so. Thus programs built with the right header
     and the appropriate libz do work, but programs built with the
     wrong header linked to libz do not. Don't blame that on me; I
     have a debugging output here, sent to me by a user, that
     proves that libz does not behave as documented and expected.

Q32: What is the difference between total and real compression
     factor?

A32: The total compression factor is the sum of the sizes of all
     files, divided by the sum of the sizes of the files stored
     uncompressed plus the number of bytes resulting from
     compressing the others, i.e. by the total number of bytes
     actually stored as file contents, compressed or not.
     The real compression factor only takes into account the files
     that have been compressed, not those left uncompressed: it is
     the sum of the sizes of the files having been compressed,
     divided by the sum of bytes resulting from compressing those
     files.
     Both factors are equal if compression is applied to all files,
     e.g. if the parameter DoNotCompress is not set or no files
     matching the patterns supplied there are saved.

Q33: How does afbackup compare to amanda?

A33: Admittedly I don't know much about amanda. Here's what I
     extracted from an e-mail conversation with someone who had to
     report a comparison between them (partially it's not very fair
     from both sides, but I think everyone can take some clues from
     it and be motivated to ask further questions). It starts with
     the issues from an amanda user's view; lines prefixed with >
     are my comments on the items:

DESCRIPTION                                                Amanda  afbackup

Central scheduler which attempts to smooth the daily
backups depending on set constraints, can be interrogated. YES     NO
> (afbackup does not implement any scheduler; backups can
> be started from a central place, but afbackup does NOT
> force the type of a backup, e.g. make an incremental
> backup if there is not much space left on the tapes)

Sends mail when a tape is missing or on error,
while in backup.                                           YES     YES

Pre-warns of a possible error condition (host not
responding, tape not present, disk full) before backup.    YES     PARTIALLY
> (afbackup implements a connection timeout for remote
> starts; an init-command for server startup and an
> init-media-command, called whenever a medium is to be
> loaded, can be configured and may test for problems in
> advance)

If no tape available, can dump to spool only.              YES     NO
> (No disk spool area is maintained. Backup media can be
> filesystems, thus also removable disks. It is possible
> to configure a buffer file on disk, but its size should
> not be too big; to be safe: << 2 GB)

Normally dumps in parallel to a spool disk, then to tape,
for efficiency.                                            YES     N/A
> (afbackup can dump in parallel to the server, with a
> client-side protocol optimizer for efficiency; no spool
> area, see above)

Supports autochanger in a simple way (can browse for a
tape, but will not remember the pack's content, this can
be a feature)                                              YES     YES
> (Don't know what is meant here. Autochangers are
> supported in a simple way before 3.3, enhanced in 3.3,
> including a media database)

When using tar backups, indexes are generated which can
be used to get back the data.                              YES     YES
> (tar is not used, see below, but indexes are maintained)

A history of the backups is available; Amanda can decide
the restore sequence, e.g. if the last full dump is not
available, go back in history, using incremental backups.  YES     Y/N
> (A before-date can be supplied, but there is no
> automatic walk-back in history)

Backup format can be simple tar.                           YES     YES (discouraged!)
> (I decided not to use the tar packing format as it
> lacks several features that I consider absolutely
> necessary, most notably
> - per-file compression/preprocessing
> - command output packing
> - extended include/exclude)

Amanda will interrogate the client and tell it to do a
0, 1 or other level backup, depending on spool size,
backup size, etc.                                          YES     manual

Can print tape's content labels.                           YES     N/A
> (The label of an afbackup tape does not contain the tape
> contents. These are located in the index file(s), which
> can be printed easily, also restricted to a certain end
> user's files. In my opinion this feature has one of
> amanda's heaviest limitations, filesystem size <= tape
> capacity, as a consequence)

Can print weekly tape content summary.                     YES     N/A

Can print graphical summary of backup time.                YES     NO

Restore through an intelligent command line.               YES     YES, also GUI

Backups can be stored and/or transmitted compressed.       YES     YES
> (Client-side compression is one of afbackup's features;
> thus transmitted data is already compressed)

Backups can be encrypted during transport or on disk.      NO(1)   BOTH
> (ssh may be used to tunnel the connection; the contents
> of the stored files can be preprocessed in any arbitrary
> way, including encryption)
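The distinction made in Q32 between the two compression factors can be illustrated with a tiny numeric sketch; all sizes are invented for the example (three 100-byte files, two compressed to 40 bytes each, one stored uncompressed):

```shell
# Hypothetical numbers for Q32's two compression factors.
total_original=300                 # sum of the sizes of all three files
total_stored=$((40 + 40 + 100))    # bytes actually stored as file contents
compressed_original=$((100 + 100)) # original size of the compressed files only
compressed_stored=$((40 + 40))     # stored size of those same files

awk -v a="$total_original" -v b="$total_stored" \
    'BEGIN { printf "total compression factor: %.2f\n", a / b }'
awk -v a="$compressed_original" -v b="$compressed_stored" \
    'BEGIN { printf "real compression factor:  %.2f\n", a / b }'
```

Here the total factor is 300/180 (about 1.67) while the real factor is 200/80 (2.50); if the third file had been compressed as well, the two sums, and hence the two factors, would coincide.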
