Install Ubuntu 18.04 desktop with RAID 1 and LVM on machine with UEFI BIOS

Original source (askubuntu.com)
Tags: ubuntu linux howto raid lvm boot askubuntu.com
Clipped on: 2019-09-23

With some help from How to install Ubuntu server with UEFI and RAID1 + LVM, RAID set up in Ubuntu 18.04, RAID support in Ubuntu 18.04 Desktop installer?, and How to get rid of the "scanning for btrfs file systems" at start-up?, I managed to put together a working HOWTO using Linux commands only.

In short

  1. Download the alternate server installer.
  2. Install with manual partitioning: an EFI partition and a RAID partition on each drive, with LVM on top of the RAID.
  3. Clone EFI partition from installed partition to the other drive.
  4. Install second EFI partition into UEFI boot chain.
  5. To avoid a lengthy wait during boot in case a drive breaks, remove the btrfs boot scripts.

In detail

1. Download the installer

2. Install with manual partitioning

  • During install, at the Partition disks step, select Manual.
  • If the disks contain any partitions, remove them.
    • If any logical volumes are present on your drives, select Configure the Logical Volume Manager.
      • Choose Delete logical volume until all volumes have been deleted.
      • Choose Delete volume group until all volume groups have been deleted.
    • If any RAID device is present, select Configure software RAID.
      • Choose Delete MD device until all MD devices have been deleted.
    • Delete every partition on the physical drives by choosing them and selecting Delete the partition.
  • Create physical partitions
    • On each drive, create a 512MB partition (I've seen others use 128MB) at the beginning of the disk, Use as: EFI System Partition.
    • On each drive, create a second partition with 'max' size, Use as: Physical Volume for RAID.
  • Set up RAID
    • Select Configure software RAID.
    • Select Create MD device, type RAID1, 2 active disks, 0 spare disks, and select the /dev/sda2 and /dev/sdb2 devices.
  • Set up LVM
    • Select Configure the Logical Volume Manager.
    • Create volume group vg on the /dev/md0 device.
    • Create logical volumes, e.g.
      • swap at 16G
      • root at 35G
      • tmp at 10G
      • var at 5G
      • home at 200G
  • Set up how to use the logical volumes
    • For the swap volume, select Use as: swap.
    • For the other volumes, select Use as: ext4 with the proper mount points (/, /tmp, /var, /home, respectively).
  • Select Finish partitioning and write changes to disk.
  • Allow the installation program to finish and reboot.
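
After the first boot, you can verify the resulting stack with lsblk (a supplementary check, not part of the original steps; device names and sizes will vary):

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

Each drive should show the small EFI partition plus a second partition holding the md0 RAID1 device, with the vg logical volumes and their mount points on top.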

3. Inspect system

  • Check which EFI partition has been mounted. Most likely /dev/sda1.

    mount | grep boot

  • Check RAID status. Most likely it is synchronizing.

    cat /proc/mdstat
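
  • Optionally, look at the array details with mdadm (an extra check, not in the original steps).

    sudo mdadm --detail /dev/md0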

4. Clone EFI partition

The EFI bootloader should have been installed on /dev/sda1. As that partition is not mirrored via the RAID system, we need to clone it.

sudo dd if=/dev/sda1 of=/dev/sdb1
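
Optionally, flush caches and check that the two partitions now match (cmp prints nothing if they are identical; this assumes both EFI partitions were created with the same size, as in step 2):

sudo sync
sudo cmp /dev/sda1 /dev/sdb1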

5. Insert second drive into boot chain

This step may not be necessary, since if either drive dies, the system should boot from the (identical) EFI partitions. However, it seems prudent to ensure that we can boot from either disk.

  • Run efibootmgr -v and notice the file name for the ubuntu boot entry. On my install it was \EFI\ubuntu\shimx64.efi.
  • Run sudo efibootmgr -c -d /dev/sdb -p 1 -L "ubuntu2" -l '\EFI\ubuntu\shimx64.efi'. Quote the path, otherwise the shell strips the backslashes.
  • Now the system should boot even if either of the drives fails!
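  • To confirm, run efibootmgr -v again; both the ubuntu and ubuntu2 entries should now be listed. If the boot order needs adjusting, efibootmgr -o accepts a comma-separated list of entry numbers, e.g. sudo efibootmgr -o 0001,0002 (placeholder numbers; use the ones shown by efibootmgr -v).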

6. Wait

If you want to try removing or disabling a drive, you must first wait until the RAID synchronization has finished! Monitor the progress with cat /proc/mdstat. However, you may perform step 7 below while waiting.
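
To follow the progress live, watch re-runs the command every two seconds:

watch cat /proc/mdstat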

7. Remove BTRFS

If one drive fails (after the synchronization is complete), the system will still boot. However, the boot sequence will spend a lot of time looking for btrfs file systems. To remove that unnecessary wait, run

sudo apt-get purge btrfs-progs

This should remove btrfs-progs, btrfs-tools, and ubuntu-server. The last package is just a metapackage, so if no additional packages are listed for removal, you should be OK.
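
If you prefer to preview what would be removed before committing, apt-get can simulate the purge first:

sudo apt-get -s purge btrfs-progs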

8. Install the desktop version

Run sudo apt install ubuntu-desktop to install the desktop version. After that, the synchronization is probably done and your system is configured and should survive a disk failure!

9. Update EFI partition after grub-efi-amd64 update

When the package grub-efi-amd64 is updated, the files on the EFI partition (mounted at /boot/efi) may change. In that case, the update must be cloned manually to the mirror partition. Luckily, you should get a warning from the update manager that grub-efi-amd64 is about to be updated, so you don't have to check after every update.

9.1 Find out clone source, quick way

If you haven't rebooted after the update, use

mount | grep boot

to find out what EFI partition is mounted. That partition, typically /dev/sdb1, should be used as the clone source.
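
Alternatively, findmnt reports the device backing the mount point directly:

findmnt /boot/efi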

9.2 Find out clone source, paranoid way

Create mount points and mount both partitions:

sudo mkdir /tmp/sda1 /tmp/sdb1
sudo mount /dev/sda1 /tmp/sda1
sudo mount /dev/sdb1 /tmp/sdb1

Find the timestamp of the newest file in each tree:

sudo find /tmp/sda1 -type f -printf '%T+ %p\n' | sort | tail -n 1 > /tmp/newest.sda1
sudo find /tmp/sdb1 -type f -printf '%T+ %p\n' | sort | tail -n 1 > /tmp/newest.sdb1

Compare the timestamps:

cat /tmp/newest.sd* | sort | tail -n 1 | perl -ne 'm,/tmp/(sd[ab]1)/, && print "/dev/$1 is newest.\n"'

This should print /dev/sdb1 is newest (most likely) or /dev/sda1 is newest. The partition named is the one to use as the clone source.

Unmount the partitions before cloning to avoid cache/partition inconsistencies:

sudo umount /tmp/sda1 /tmp/sdb1

9.3 Clone

If /dev/sdb1 was the clone source:

sudo dd if=/dev/sdb1 of=/dev/sda1

If /dev/sda1 was the clone source:

sudo dd if=/dev/sda1 of=/dev/sdb1
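
Either way, flushing caches afterwards makes sure the copy is actually on disk (a precaution, not part of the original steps):

sudo sync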

Done!

10. Virtual machine gotchas

If you want to try this out in a virtual machine first, there are some caveats: Apparently, the NVRAM that holds the UEFI information is remembered between reboots, but not between shutdown-restart cycles. In that case, you may end up at the UEFI Shell console. The following commands should boot you into your machine from /dev/sda1 (use FS1: for /dev/sdb1):

FS0:
\EFI\ubuntu\grubx64.efi

The first solution in the top answer of UEFI boot in virtualbox - Ubuntu 12.04 might also be helpful.
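
One common workaround along those lines (a sketch; adjust to your setup) is to place a startup.nsh script in the root of the EFI system partition so the UEFI Shell launches GRUB automatically when it is dropped into:

echo '\EFI\ubuntu\grubx64.efi' | sudo tee /boot/efi/startup.nsh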

answered Aug 16 '18 at 21:32 by Niclas Börlin
  • How would you go about using LUKS, for an encrypted mirror set/RAID 1, avoiding encryption happening twice (ex. LUKS sitting under mdadm, so that IO happens twice, but encryption itself happens only once, this is actually not happening with some setups, such as those recommended for ZFS, where volumes are encrypted twice, once per device, effectively duplicating the cost of the encryption side of things). I haven't been able to find recent instructions on this setup. – soze Sep 18 '18 at 3:40
  • @soze, unfortunately I have no experience with encrypted Linux partitions. I would do some trial-and-error in a virtual machine to find out. NB: I added a section above about virtual machine gotchas. – Niclas Börlin Sep 18 '18 at 7:52
  • Thanks @NiclasBörlin! I was struggling with the creation of boot partition under RAID and LVM, and your answer was crystal clear. Thanks a lot! – Gui Ambros Jan 22 at 6:31
  • Holy cow! Nice! – pileofrogs May 8 at 15:52