Summary: Provides detailed instructions for installing Parabola on 'fake RAID' volumes. This guide is intended to supplement the standard installation guide.

The purpose of this guide is to enable use of a RAID set created by the on-board BIOS RAID controller and thereby allow dual-booting of GNU/Linux from partitions inside the RAID set using GRUB. When using so-called 'fake RAID' or 'host RAID', the disc sets are reached through /dev/mapper/chipsetName_randomName rather than /dev/sdX.

1 What is 'fake RAID'

From Wikipedia: Operating system-based RAID doesn't always protect the boot process and is generally impractical on desktop versions of Windows. Hardware RAID controllers are expensive and proprietary.
RAID can be implemented in hardware, which makes it completely transparent to the operating system running on top of the disks, or it can be implemented in software, which is the case we are interested in here.
To fill this gap, cheap 'RAID controllers' were introduced that do not contain a RAID controller chip, but simply a standard disk controller chip with special firmware and drivers. During early stage boot-up, the RAID is implemented by the firmware. When a protected-mode operating system kernel such as GNU/Linux or a modern version of Microsoft Windows is loaded, the drivers take over. These controllers are described by their manufacturers as RAID controllers, and it is rarely made clear to purchasers that the burden of RAID processing is borne by the host computer's central processing unit - not the RAID controller itself - thus introducing the aforementioned CPU overhead which hardware controllers do not suffer from. Firmware controllers often can only use certain types of hard drives in their RAID arrays (e.g. SATA for Intel Matrix RAID, as there is neither SCSI nor PATA support in modern Intel ICH southbridges; however, motherboard makers implement RAID controllers outside of the southbridge on some motherboards).
Before their introduction, a 'RAID controller' implied that the controller did the processing, and the new type has become known in technically knowledgeable circles as 'fake RAID' even though the RAID itself is implemented correctly. Adaptec calls them 'host RAID'. See the Wikipedia article for more information. Despite the terminology, 'fake RAID' via dmraid is a robust software RAID implementation that offers a solid system to mirror or stripe data across multiple disks with negligible overhead for any modern system. Dmraid is comparable to mdraid (pure Linux software RAID) with the added benefit of being able to completely rebuild a drive after a failure before the system is ever booted.

2 History

In Linux 2.4, the ATARAID kernel framework provided support for fake RAID (software RAID assisted by the BIOS).
For Linux 2.6, the device-mapper framework can, among other nice things like LVM and EVMS, do the same kind of work as ATARAID did in 2.4. Whilst the new code handling the RAID I/O still runs in the kernel, device-mapper is generally configured by a userspace application. It was clear that when using device-mapper for RAID, detection would go to userspace. Heinz Maulshagen created the dmraid tool to detect RAID sets and create mappings for them. The supported controllers are (mostly cheap) fake RAID IDE/SATA controllers which contain BIOS functions.
Common examples include Promise FastTrak controllers, HighPoint HPT37x, Intel Matrix RAID, Silicon Image Medley, and NVIDIA nForce.

3 Supported hardware

Tested with ICH10R on 2009.08 (x86_64).
Tested with Sil3124 on 2009.02 (i686).
Tested with nForce4 on Core Dump (i686 and x86_64).
Tested with Sil3512 on Overlord (x86_64).
Tested with nForce2 on 2011.05 (i686).
Tested with nVidia MCP78S on 2011.06 (x86_64).
Tested with nVidia CK804 on 2011.06 (x86_64).
Tested with AMD Option ROM Utility using pdc_adma on 2011.12 (x86_64).
For more information on supported hardware, see the dmraid documentation.

4 Backup

Warning: Back up all data before playing with RAID.
What you do with your hardware is your own responsibility. Data on RAID stripes is highly vulnerable to disc failures. Create regular backups or consider using mirror sets. Consider yourself warned!
5 Outline

Preparation.
Boot the installer.
Load dmraid.
Perform traditional installation.
Install GRUB.

6 Preparation

Open up any needed guides on another machine.
If you do not have access to another machine, print them out. Download the latest Parabola install image. Back up all important files, since everything on the target partitions will be destroyed.

6.1 Configure RAID sets

Enter your BIOS setup and enable the RAID controller. The BIOS may contain an option to configure SATA drives as 'IDE', 'AHCI', or 'RAID'; ensure 'RAID' is selected.
Save and exit the BIOS setup. During boot, enter the RAID setup utility. The RAID utility is usually accessible either via the boot menu (often F8, F10 or CTRL+I) or while the RAID controller is initializing. Use the RAID setup utility to create your preferred stripe/mirror sets.

Warning: The command 'dmraid -ay' can fail after booting the Parabola GNU/Linux-libre 2011.09.01 release, as that image's initial ramdisk environment does not support dmraid. You can use an older release instead; note that you must then correct the kernel and initrd names in GRUB's menu.lst after installing, since those releases use different naming.
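Once booted into an installer environment whose initial ramdisk supports dmraid, loading the device-mapper module and activating the RAID set looks something like the following sketch (these are the module and command already referenced in this guide; your device names will differ):

# modprobe dm_mod
# dmraid -ay
# ls -la /dev/mapper/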
Example output: /dev/mapper/control, along with one node per RAID set (e.g. /dev/mapper/nvidia_fffadgic) and one per partition inside it.

Tip: Utilize three consoles: the setup GUI to configure the system, a chroot to install GRUB, and finally a cfdisk reference, since RAID sets have unusual names.
tty1: chroot and grub-install.
tty2: /arch/setup.
tty3: cfdisk for a reference on spelling, partition table and geometry of the RAID set.
Leave the programs running and switch consoles when needed. Re-activate the installer (tty2) and proceed as normal with the following exceptions:
Select Packages. Ensure dmraid is marked for installation.
Configure System. Add dm_mod to the MODULES line in mkinitcpio.conf. If using a mirrored (RAID 1) array, additionally add dm_mirror. Add your chipset's module driver to the MODULES line if necessary.
Add dmraid to the HOOKS line in mkinitcpio.conf, preferably after sata but before filesystems.
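For illustration, the relevant lines in mkinitcpio.conf might then look roughly like this (the surrounding default hooks vary by release, so treat this as a sketch rather than an exact configuration):

# dm_mirror is only needed for mirrored (RAID 1) sets
MODULES="dm_mod dm_mirror"
# dmraid hook added after sata and before filesystems
HOOKS="base udev autodetect pata scsi sata dmraid filesystems"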
10 Install GRUB

Note: For an unknown reason, the default menu.lst will likely be incorrectly populated when installing via fake RAID. Double-check the root lines (e.g. root (hd0,0)). Additionally, if you did not create a separate /boot partition, ensure the kernel/initrd paths are correct (e.g. /boot/vmlinuz-linux-libre and /boot/initramfs-linux-libre.img instead of /vmlinuz-linux-libre and /initramfs-linux-libre.img).
For example, if you created logical partitions (the equivalent of sda5, sda6, sda7, etc.) that were mapped as:

/dev/mapper device     Linux partition    GRUB partition number
nvidia_fffadgic        (whole RAID set)   -
nvidia_fffadgic5       /                  4
nvidia_fffadgic6       /boot              5
nvidia_fffadgic7       /home              6

The correct root designation would be (hd0,5) in this example.

Note: If you use more than one set of dmraid arrays, or have multiple Linux distributions installed on different dmraid arrays (for example two disks in nvidia_fdaacfde and two disks in nvidia_fffadgic, with the installation going to the second array, nvidia_fffadgic), you will need to designate the second array's /boot partition as the GRUB root. In the example above, if nvidia_fffadgic were the second dmraid array you were installing to, your root designation would be root (hd1,5).
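As a hedged illustration (the entry title and kernel parameters below are assumptions, not installer output), a menu.lst entry for the layout above, with the separate /boot on nvidia_fffadgic6, could look roughly like this:

title Parabola GNU/Linux-libre
root (hd0,5)
kernel /vmlinuz-linux-libre root=/dev/mapper/nvidia_fffadgic5 ro
initrd /initramfs-linux-libre.img

Since /boot is its own partition here, the kernel and initrd paths are relative to it, which matches the note above about path prefixes.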
After saving the configuration file, the GRUB installer will FAIL. However, it will still copy files to /boot. DO NOT GIVE UP AND REBOOT - just follow the directions below.

Switch to tty1 and chroot into the installed system:
# mount -t proc none /mnt/proc
# mount --rbind /dev /mnt/dev
# mount --rbind /sys /mnt/sys
# chroot /mnt /bin/bash

Switch to tty3 and look up the geometry of the RAID set.
In order for cfdisk to find the array and provide the proper C H S information, you may need to start cfdisk with your RAID set as the first argument (e.g. cfdisk /dev/mapper/nvidia_fffadgic).
The number of Cylinders, Heads and Sectors on the RAID set should be written at the top of the screen inside cfdisk. Note: cfdisk shows the information in H S C order, but grub requires you to enter the geometry information in C H S order. Example: 18079 255 63 for a RAID stripe of two 74GB Raptor discs.
Example: 38914 255 63 for a RAID stripe of two 160GB laptop discs.
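As a quick sanity check (assuming 512-byte sectors), cylinders x heads x sectors x 512 should roughly match the total size of the RAID set:

18079 x 255 x 63 x 512 = 148,704,837,120 bytes, roughly 148 GB (two striped 74 GB Raptors)
38914 x 255 x 63 x 512 = 320,078,545,920 bytes, roughly 320 GB (two striped 160 GB discs)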
GRUB will fail to properly read the drives; the geometry command must be used to manually direct GRUB. Switch to tty1, the chrooted environment, and install GRUB on /dev/mapper/raidSet:
# dmsetup mknodes
# grub --device-map=/dev/null
grub> device (hd0) /dev/mapper/raidSet
grub> geometry (hd0) C H S
Exchange C H S above with the proper numbers (be aware: they are not entered in the same order as they are read from cfdisk). If the geometry is entered properly, GRUB will list the partitions found on this RAID set. You can confirm that GRUB is using the correct geometry and verify the proper GRUB root device to boot from by using GRUB's find command.
If you have created a separate boot partition, search for /grub/stage1 with find; if you have no separate boot partition, search for /boot/grub/stage1. Examples:
grub> find /grub/stage1        # use when you have a separate boot partition
grub> find /boot/grub/stage1   # use when you have no separate boot partition
GRUB will report the proper device to designate as the GRUB root (e.g. (hd0,0), (hd0,4), etc.). Then continue to install the bootloader into the Master Boot Record, changing 'hd0' to 'hd1' if required:
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
For a long time, I've been thinking about switching to RAID 10 on a few servers. Now that Ubuntu 10.04 LTS is out, it's time for an upgrade. The servers I'm using are HP ProLiant ML115s (very good value). Each has four internal 3.5" drive slots. I'm currently using one drive for the system and a software RAID5 array across the remaining three disks. The problem is that this creates a single point of failure on the boot drive. Hence I'd like to switch to a RAID10 array, as it would give me both better I/O performance and more reliability.
The problem is that good controller cards that support RAID10 (such as 3ware) cost almost as much as the server itself. Moreover, software RAID10 does not seem to work very well with GRUB.
What is your advice? Should I just keep running RAID5? Has anyone been able to successfully install software RAID10 without boot issues?

I would be inclined to go for RAID10 in this instance, unless you need the extra space offered by the single drive + RAID5 arrangement.
You get the same guaranteed redundancy (any one drive can fail and the array will survive) and slightly better redundancy in worse cases: with four drives there are six possible 'two drives failed at once' combinations, and RAID10 survives four of them, since it only dies when both members of the same mirrored pair fail. You also don't have the write penalty often experienced with RAID5. You are, however, likely to have trouble booting off RAID10, whether implemented as a traditional nested array (two RAID1s in a RAID0) or using Linux's recent all-in-one RAID10 driver, because both LILO and GRUB expect all the information needed to boot to be present on one drive, which it may not be with RAID0 or RAID10 (or software RAID5 for that matter - it works with hardware RAID because the boot loader only sees one drive and the controller deals with where the data is actually spread amongst the drives).
There is an easy way around this though: just have a small partition (128MB should be more than enough - you only need room for a few kernel images and associated initrd files) at the beginning of each of the drives and set these up as a RAID1 array which is mounted as /boot. You just need to make sure that the boot loader is correctly installed on each drive, and all will work fine (once the kernel and initrd are loaded, they will cope with finding the main array and dealing with it properly). The software RAID10 driver has a number of options for tweaking block layout that can bring further performance benefits depending on your I/O load pattern (simple benchmarks of this are available elsewhere), though I'm not aware of any distributions that support this form of RAID10 from the installer yet (only the more traditional nested arrangement). If you want to try the RAID10 driver and your distro doesn't support it at install time, you could install the entire base system onto a RAID1 array as described for /boot above and build the RAID10 array with the rest of the disk space once booted into that.
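If it helps to make that concrete, a rough sketch with mdadm could look like the following; the device and partition names are assumptions, and the exact partitioning is up to you:

# small first partition on each disk, mirrored four ways, to be mounted as /boot
mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
# the rest of each disk in a single RAID10 array for everything else
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

You would then install the boot loader onto each member disk so that any surviving drive can still boot the machine.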
For up to four drives, or however many SATA drives you can connect to the motherboard, you are in many cases better served by using the motherboard SATA connectors and Linux MD software RAID than by hardware RAID. For one thing, the on-board SATA connections go directly to the southbridge, with a speed of about 20 Gbit/s. Many hardware controllers are slower.
And the Linux MD RAID software is often faster and much more flexible and versatile than hardware RAID. For example, the Linux MD RAID10 'far' layout gives you almost RAID0 read speed. And you can have multiple partitions of different RAID types with Linux MD RAID, for example /boot on RAID1, and then / and other partitions on RAID10-far for speed, or RAID5 for space.
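For example (illustrative only - adjust the devices and partition numbers to your own layout), the far layout is selected with mdadm's --layout option:

# RAID10 with the 'far 2' layout: two copies of each block, spread far apart on the disks
mdadm --create /dev/md2 --level=10 --layout=f2 --raid-devices=4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3

The f2 layout is what gives the near-RAID0 sequential read speed mentioned above.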
A further argument is cost - buying an extra RAID controller is often more costly than just using the on-board SATA connections ;-) A setup with /boot on RAID, and more info on Linux RAID in general, can be found on the Linux RAID kernel wiki.

Well, you pretty much answered your own question. You don't have the money for a hardware RAID10 card, and GRUB doesn't support booting from software RAID10, which implies that you can't use RAID10. How about using RAID5 across all disks?
This doesn't sound like a high-end (or high-traffic) server to me, so the performance penalty probably won't hurt that much. Edit: I just googled a bit, and it seems like GRUB can't read software RAID. It needs a boot loader on every disk that you want to boot from (with RAID5: every disk). This seems extremely clumsy to me; have you considered buying a used RAID5 card from eBay?
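For what it's worth, on Ubuntu 10.04 with GRUB 2 the usual way to get a boot loader onto every member disk is just to run grub-install against each of them (the disk names here are assumptions):

grub-install /dev/sda
grub-install /dev/sdb
grub-install /dev/sdc
grub-install /dev/sdd
update-grub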
@vpetersson: when I experimented with running my netbook off a pair of fast microSD cards as RAID0, with RAID1 for /boot, under Ubuntu 9.04 via the alternate install ISO, installing GRUB errored out and I had to install it manually on both drives - it worked perfectly fine from then until the day one of the SD cards died. Installing and booting from a RAID1 /boot (with RAID-something-else for everything else) has worked fine for me a number of times with Debian/Lenny, and Debian/Etch before that. I've not tried other Ubuntu releases. – May 7 '10 at 18:46