The Software-RAID HOWTO
Jakob OEstergaard (jakob@ostenfeld.dk)
v. 0.90.2 - Alpha, 27th February 1999

This HOWTO describes how to use Software RAID under Linux. You must be
using the RAID patches available from
ftp://ftp.fi.kernel.org/pub/linux/daemons/raid/alpha. The HOWTO can be
found at http://ostenfeld.dk/~jakob/Software-RAID.HOWTO/.

______________________________________________________________________

Table of Contents

1. Introduction
   1.1 Disclaimer
   1.2 Requirements
2. Why RAID ?
   2.1 Technicalities
   2.2 Terms
   2.3 The RAID levels
       2.3.1 Spare disks
   2.4 Swapping on RAID
3. RAID setup
   3.1 General setup
   3.2 Linear mode
   3.3 RAID-0
   3.4 RAID-1
   3.5 RAID-4
   3.6 RAID-5
   3.7 The Persistent Superblock
   3.8 Chunk sizes
       3.8.1 RAID-0
       3.8.2 RAID-1
       3.8.3 RAID-4
       3.8.4 RAID-5
   3.9 Options for mke2fs
   3.10 Autodetection
   3.11 Booting on RAID
   3.12 Pitfalls
4. Credits

______________________________________________________________________

1.  Introduction

This HOWTO is written by Jakob OEstergaard based on a large number of
emails between the author and Ingo Molnar (mingo@chiara.csoma.elte.hu)
-- one of the RAID developers --, the linux-raid mailing list
(linux-raid@vger.rutgers.edu) and various other people.

The reason this HOWTO was written even though a Software-RAID HOWTO
already exists is that the old HOWTO describes the old-style Software
RAID found in the stock kernels. This HOWTO describes the use of the
``new-style'' RAID that has been developed more recently. The
new-style RAID has a lot of features not present in old-style RAID.

Some of the information in this HOWTO may seem trivial if you already
know RAID. Just skip those parts.

1.1.  Disclaimer

The mandatory disclaimer:

Although RAID seems stable for me, and stable for many other people,
it may not work for you. If you lose all your data, your job, get hit
by a truck, whatever, it's not my fault, nor the developers'. Be aware
that you use the RAID software and this information at your own risk!
There is no guarantee whatsoever that any of the software, or this
information, is in any way correct, nor suited for any use whatsoever.
Back up all your data before experimenting with this. Better safe than
sorry.

1.2.  Requirements

This HOWTO assumes you are using a late 2.2.x or 2.0.x kernel with a
matching raid0145 patch and the 0.90 version of the raidtools. Both
can be found at ftp://ftp.fi.kernel.org/pub/linux/daemons/raid/alpha.
The RAID patch, the raidtools package, and the kernel should all match
as closely as possible. At times it can be necessary to use older
kernels if RAID patches are not available for the latest kernel.

2.  Why RAID ?

There can be many good reasons for using RAID. A few are: the ability
to combine several physical disks into one larger ``virtual'' device,
performance improvements, and redundancy.

2.1.  Technicalities

Linux RAID can work on most block devices. It doesn't matter whether
you use IDE or SCSI devices, or a mixture. Some people have also used
the Network Block Device (NBD) with more or less success.

Be sure that the bus(ses) to the drives are fast enough. You shouldn't
have 14 UW-SCSI drives on one UW bus, if each drive can give 10 MB/s
and the bus can only sustain 40 MB/s. Also, you should only have one
device per IDE bus. Running disks as master/slave is horrible for
performance. IDE is really bad at accessing more than one drive per
bus. Of course, all newer motherboards have two IDE buses, so you can
set up two disks in RAID without buying more controllers.
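
As a rough worked example of that bus arithmetic (the numbers are the
illustrative ones from above, not measurements):

     14 drives * 10 MB/s = 140 MB/s the drives could deliver
     1 UW bus            =  40 MB/s sustained

With all 14 drives busy at once, each one effectively gets 40/14, i.e.
less than 3 MB/s, so most of the per-drive performance is wasted on a
saturated bus.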

The RAID layer has absolutely nothing to do with the filesystem layer.
You can put any filesystem on a RAID device, just like any other block
device.
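
For example, once a RAID device such as /dev/md0 has been created
(section 3 shows how), putting a filesystem on it looks exactly like
it does for a plain disk partition. A minimal sketch, assuming
/dev/md0 already exists and /mnt/raid is just an example mount point:

     mke2fs /dev/md0            # create an ext2 filesystem on the array
     mkdir /mnt/raid
     mount /dev/md0 /mnt/raid   # mount it like any other block device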

2.2.  Terms

The word ``RAID'' means ``Linux Software RAID''. This HOWTO does not
treat any aspects of Hardware RAID.

When describing setups, it is useful to refer to the number of disks
and their sizes. At all times the letter N is used to denote the
number of active disks in the array (not counting spare-disks). The
letter S is the size of the smallest drive in the array, unless
otherwise mentioned. The letter P is used as the performance of one
disk in the array, in MB/s. When used, we assume that the disks are
equally fast, which may not always be true.

Note that the words ``device'' and ``disk'' are supposed to mean about
the same thing. Usually the devices that are used to build a RAID
device are partitions on disks, not necessarily entire disks. But
combining several partitions on one disk usually does not make sense,
so the words devices and disks just mean ``partitions on different
disks''.

2.3.  The RAID levels

Here's a short description of what is supported in the Linux RAID
patches. Some of this information is absolutely basic RAID info, but
I've added a few notices about what's special in the Linux
implementation of the levels. Just skip this section if you know RAID.
Then come back when you are having problems  :)

The current RAID patches for Linux support the following levels:

o  Linear mode

o  Two or more disks are combined into one physical device. The disks
   are ``appended'' to each other, so writing to the RAID device will
   fill up disk 0 first, then disk 1 and so on. The disks do not have
   to be of the same size. In fact, size doesn't matter at all here  :)

o  There is no redundancy in this level. If one disk crashes you will
   most probably lose all your data. You can however be lucky to
   recover some data, since the filesystem will just be missing one
   large consecutive chunk of data.

o  The read and write performance will not increase for single
   reads/writes. But if several users use the device, you may be lucky
   that one user effectively is using the first disk, and the other
   user is accessing files which happen to reside on the second disk.
   If that happens, you will see a performance gain.

o  RAID-0

o  Also called ``stripe'' mode. Like linear mode, except that reads
   and writes are done in parallel to the devices. The devices should
   have approximately the same size. Since all access is done in
   parallel, the devices fill up equally. If one device is much larger
   than the other devices, that extra space is still utilized in the
   RAID device, but you will be accessing this larger disk alone
   during writes in the high end of your RAID device. This of course
   hurts performance.

o  Like linear, there's no redundancy in this level either. Unlike
   linear mode, you will not be able to rescue any data if a drive
   fails. If you remove a drive from a RAID-0 set, the RAID device
   will not just miss one consecutive block of data, it will be filled
   with small holes all over the device. e2fsck will probably not be
   able to recover much from such a device.

o  The read and write performance will increase, because reads and
   writes are done in parallel on the devices. This is usually the
   main reason for running RAID-0. If the buses to the disks are fast
   enough, you can get very close to N*P MB/s.

o  RAID-1

o  This is the first mode which actually has redundancy. RAID-1 can be
   used on two or more disks with zero or more spare-disks. This mode
   maintains an exact mirror of the information on one disk on the
   other disk(s). Of course, the disks must be of equal size. If one
   disk is larger than another, your RAID device will be the size of
   the smallest disk.

o  If up to N-1 disks are removed (or crash), all data are still
   intact. If there are spare disks available, and if the system (eg.
   SCSI drivers or IDE chipset etc.) survived the crash,
   reconstruction of the mirror will immediately begin on one of the
   spare disks, after detection of the drive fault.

o  Read performance will usually scale close to N*P, while write
   performance is the same as on one device, or perhaps even less.
   Reads can be done in parallel, but when writing, the CPU must
   transfer N times as much data to the disks as it usually would
   (remember, N identical copies of all data must be sent to the
   disks).

o  RAID-4

o  This RAID level is not used very often. It can be used on three or
   more disks. Instead of completely mirroring the information, it
   keeps parity information on one drive, and writes data to the other
   disks in a RAID-0 like way. Because one disk is reserved for parity
   information, the size of the array will be (N-1)*S, where S is the
   size of the smallest drive in the array. As in RAID-1, the disks
   should either be of equal size, or you will just have to accept
   that the S in the (N-1)*S formula above will be the size of the
   smallest drive in the array.

o  If one drive fails, the parity information can be used to
   reconstruct all data. If two drives fail, all data is lost.

o  The reason this level is not more frequently used is that the
   parity information is kept on one drive. This information must be
   updated every time one of the other disks is written to. Thus, the
   parity disk will become a bottleneck if it is not a lot faster than
   the other disks. However, if you just happen to have a lot of slow
   disks and a very fast one, this RAID level can be very useful.

o  RAID-5

o  This is perhaps the most useful RAID mode when one wishes to
   combine a larger number of physical disks, and still maintain some
   redundancy. RAID-5 can be used on three or more disks, with zero or
   more spare-disks. The resulting RAID-5 device size will be (N-1)*S,
   just like RAID-4. The big difference between RAID-5 and -4 is that
   the parity information is distributed evenly among the
   participating drives, avoiding the bottleneck problem in RAID-4.

o  If one of the disks fails, all data are still intact, thanks to the
   parity information. If spare disks are available, reconstruction
   will begin immediately after the device failure. If two disks fail
   simultaneously, all data are lost. RAID-5 can survive one disk
   failure, but not two or more.

o  Both read and write performance usually increase, but it's hard to
   predict how much.

2.3.1.  Spare disks

Spare disks are disks that do not take part in the RAID set until one
of the active disks fail. When a device failure is detected, that
device is marked as ``bad'' and reconstruction is immediately started
on the first spare-disk available.
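
To make this concrete, here is a sketch of what a RAID-5 array with
one spare disk could look like in /etc/raidtab, the configuration file
used by the 0.90 raidtools. The device names are placeholders only;
section 3 explains the individual options:

     raiddev /dev/md0
             raid-level            5
             nr-raid-disks         3
             nr-spare-disks        1
             persistent-superblock 1
             parity-algorithm      left-symmetric
             chunk-size            32

             device                /dev/sdb1
             raid-disk             0
             device                /dev/sdc1
             raid-disk             1
             device                /dev/sdd1
             raid-disk             2

             device                /dev/sde1
             spare-disk            0

If one of the three active disks dies, reconstruction onto /dev/sde1
starts as soon as the failure is detected.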

Thus, spare disks add a nice extra safety especially to RAID-5 systems
that perhaps are hard to get to (physically). One can allow the system
to run for some time with a faulty device, since all redundancy is
preserved by means of the spare disk.

You cannot be sure that your system will survive a disk crash. The
RAID layer should handle device failures just fine, but SCSI drivers
could be broken on error handling, or the IDE chipset could lock up,
or a lot of other things could happen.

2.4.  Swapping on RAID

There's no reason to use RAID for swap performance reasons. The kernel
itself can stripe swapping on several devices, if you just give them
the same priority in the fstab file.

A nice fstab looks like:

     /dev/sda2       swap           swap    defaults,pri=1   0 0
     /dev/sdb2       swap           swap    defaults,pri=1   0 0
     /dev/sdc2       swap           swap    defaults,pri=1   0 0
     /dev/sdd2       swap           swap    defaults,pri=1   0 0
     /dev/sde2       swap           swap    defaults,pri=1   0 0
     /dev/sdf2       swap           swap    defaults,pri=1   0 0
     /dev/sdg2       swap           swap    defaults,pri=1   0 0

This setup lets the machine swap in parallel on seven SCSI devices. No
need for RAID, since this has been a kernel feature for a long time.

Another reason to use RAID for swap is high availability. If you set
up a system to boot on eg. a RAID-1 device, the system should be able
to survive a disk crash. But if the system has been swapping on the
now faulty device, you will for sure be going down. Swapping on the
RAID-1 device would solve this problem.

However, swap on RAID-{1,4,5} is NOT supported. You can set it up, but
it will crash. The reason is that the RAID layer sometimes allocates
memory before doing a write. This leads to a deadlock, since the
kernel will have to allocate memory before it can swap, and swap
before it can allocate memory. It's sad but true, at least for now.

3.  RAID setup

3.1.  General setup

This is what you need for any of the RAID levels:

o  A kernel. Get 2.0.36 or a recent 2.2.x kernel.

o  The RAID patches. There usually is a patch available for the recent
   kernels.

o  The RAID tools.

o  Patience, Pizza, and your favourite caffeinated beverage.

All this software can be found at ftp://ftp.fi.kernel.org/pub/linux
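
The rest of section 3 describes each RAID level's /etc/raidtab setup.
Before that, here is a rough sketch of how the pieces listed above
typically fit together; the patch file name and paths are placeholders
only, and the exact build steps may differ between snapshots:

     # apply the RAID patch matching your kernel, then rebuild it
     cd /usr/src/linux
     patch -p1 < /path/to/raid0145-patch    # example name only
     make menuconfig    # say yes to "Multiple devices driver support"
                        # and the RAID levels you need
     make dep && make bzImage && make modules && make modules_install

     # build and install the 0.90 raidtools
     cd /usr/src/raidtools-0.90
     ./configure && make && make install

     # after booting the new kernel, /proc/mdstat lists the RAID
     # personalities that are available
     cat /proc/mdstat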
