ubuntu 18.04 btrfs snapper hw2018 server raid-lvm
This post is an updated version of that one.
One more update: /home is now set up directly on the data volume.
Server install
This is the first post about installing ubuntu 18.04 on my server, the following posts can be found here : server. My current server specifications are available here : hw2018.
Since I want to use snapper for doing snapshots, I will use btrfs.
This howto has been done on a virtual machine, since my server is still running 16.04 very well, but I’m planning a full reinstall.
Goal
My server has 2 (small) SSD for the system and 4 (bigger) HDD, and here is the plan :
- amd64 machine in UEFI mode
- 2x SSD in RAID10+LVM for the system (2x128GB disks -> 128GB storage), / will be on this block device
- 4x HDD in RAID10+LVM for the data (4x2TB disks -> 4TB storage), /home and other heavy directories will be on this block device
- btrfs because it supports snapshots.
- backup on external USB hdd.
Why
This setup seems a little bit complicated, but trust me, I’m an engineer.
- RAID : because disks tend to fail (see my backup post).
- LVM : to resize the disks when they’re full (even while the system is running).
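As an illustration of the LVM point, growing the data filesystem later can be done online. A sketch, assuming the volume group and logical volume names used later in this post (data/data) and a btrfs filesystem mounted on /home:
sudo lvextend -L +1T /dev/data/data      # grow the LV (the VG needs free extents)
sudo btrfs filesystem resize max /home   # let btrfs use the new space, while mounted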
Setup
- Get the ubuntu server 18.04 iso from here; do not download the server-live version, it does not support LVM and RAID!
- SHA256SUMS
- SHA256SUMS.gpg
- Make sure the download is fine:
pim@pim-linux $ sha256sum /tmp/ubuntu-18.04-server-amd64.iso
a7f5c7b0cdd0e9560d78f1e47660e066353bb8a79eb78d1fc3f4ea62a07e6cbc /tmp/ubuntu-18.04-server-amd64.iso
- Make sure this SHA256 is signed by ubuntu (see the verification sketch after this list).
- Create a bootable medium
- For bootable USB on windows : rufus
- For bootable USB on linux : Startup Disk Creator (usb-creator-gtk)
- Boot using your favorite media, choose your language, region, keymap, hostname, network, set up your main user, the timezone, and choose manual partitioning.
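A minimal sketch of the signature check mentioned above, run on the machine where you downloaded the files. The key ID below should be the Ubuntu CD image signing key; double-check it on ubuntu.com before trusting it:
gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys 0xD94AA3F0EFE21092   # fetch the signing key
gpg --verify SHA256SUMS.gpg SHA256SUMS                                      # check the signature
sha256sum -c SHA256SUMS --ignore-missing                                    # check the iso against the signed list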
Partitioning
⚠ The name of the disks, and thus of the partitions, can change between reboots or after hotplug. If you booted using USB, it’s even possible that the USB device will show up as /dev/sda (or whatever you expect your first mass storage device to be named).
Attention must be paid during the installation (for the partitioning and for the bootloader installation), but this can almost be forgotten after the setup, since partitions will use UUID or RAID+LVM magic to recognize the drives, no matter the order of detection.
☞ Writing the partition table on only one SSD and one HDD, then copying the partitions using dd, then returning to the setup, is an exercise left for the reader.
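If you want to try it anyway, here is a sketch using sfdisk instead of dd, since the disks use GPT (device names are examples, triple-check them before writing anything):
sfdisk --dump /dev/sdc > hdd-parts.dump   # save the partition table of the first HDD
sfdisk /dev/sdd < hdd-parts.dump          # replay it on the next one
sgdisk -G /dev/sdd                        # re-randomize the duplicated GPT GUIDs (gdisk package)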
- On all SSD
- create an empty partition table
- create a first partition of 128MB, used as the EFI System Partition
- create a second partition with the rest, used as a physical volume for RAID
- On all HDD
- create an empty partition table
- create a partition, physical volume for RAID
- The system should look like this:
┌────────────────────────┤ [!!] Partition disks ├─────────────────────────┐
│                                                                         │
│ This is an overview of your currently configured partitions and mount   │
│ points. Select a partition to modify its settings (file system, mount   │
│ point, etc.), a free space to create partitions, or a device to         │
│ initialize its partition table.                                         │
│                                                                         │
│ Guided partitioning                                                     │
│ Configure software RAID                                               ▒ │
│ Configure the Logical Volume Manager                                  ▒ │
│ Configure encrypted volumes                                           ▒ │
│ Configure iSCSI volumes                                               ▒ │
│                                                                       ▒ │
│ Virtual disk 1 (vda) - 4.3 GB Virtio Block Device                     ▒ │
│      > 1.0 MB FREE SPACE                                              ▒ │
│      > #1 126.9 MB B f ESP                                            ▒ │
│      > #2 4.2 GB K raid                                               ▒ │
│      > 1.0 MB FREE SPACE                                              ▒ │
│ Virtual disk 2 (vdb) - 4.3 GB Virtio Block Device                     ▒ │
│      > 1.0 MB FREE SPACE                                              ▒ │
│      > #1 126.9 MB B f ESP                                            ▒ │
│      > #2 4.2 GB K raid                                               ▒ │
│      > 1.0 MB FREE SPACE                                              ▒ │
│ Virtual disk 3 (vdc) - 2.1 GB Virtio Block Device                     ▒ │
│      > 1.0 MB FREE SPACE                                              ▒ │
│      > #1 2.1 GB K raid                                               ▒ │
│      > 1.0 MB FREE SPACE                                              ▒ │
│ Virtual disk 4 (vdd) - 2.1 GB Virtio Block Device                     ▒ │
│      > 1.0 MB FREE SPACE                                              ▒ │
│      > #1 2.1 GB K raid                                               ▒ │
│      > 1.0 MB FREE SPACE                                              ▒ │
│ Virtual disk 5 (vde) - 2.1 GB Virtio Block Device                     ▒ │
│      > 1.0 MB FREE SPACE                                              ▒ │
│      > #1 2.1 GB K raid                                               ▒ │
│      > 1.0 MB FREE SPACE                                              ▒ │
│ Virtual disk 6 (vdf) - 2.1 GB Virtio Block Device                     ▒ │
│      > 1.0 MB FREE SPACE                                              ▒ │
│      > #1 2.1 GB K raid                                               ▒ │
│      > 1.0 MB FREE SPACE                                              ▒ │
│                                                                       ▒ │
│ Undo changes to partitions                                              │
│ Finish partitioning and write changes to disk                           │
│                                                                         │
│     <Go Back>                                                           │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘
- If you’re attentive, you have seen that my virtual disks vd[c,d,e,f] are only 2GB, not 2TB, because this howto is written using a virtual machine. And while we’re here, on a real machine, the name of the disks will probably be /dev/sdX.
RAID
- Now go to Configure software RAID, accept the changes.
- Create an MD device (for the system)
- RAID10
- 2 active devices
- 0 spare
- select the RAID partitions of both SSD, mine are vda2 and vdb2
- Create an MD device (for the data)
- RAID10
- 4 active devices
- 0 spare
- select the RAID partitions of all four HDD, mine are vdc1, vdd1, vde1, vdf1
- Finish
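For reference, what the installer does here corresponds roughly to these mdadm commands (a sketch, nothing to run at this point; device names match the ones above):
mdadm --create /dev/md0 --level=10 --raid-devices=2 /dev/vda2 /dev/vdb2
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/vdc1 /dev/vdd1 /dev/vde1 /dev/vdf1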
LVM
- Now go to Configure the Logical Volume Manager, accept the changes.
- Create a volume group (for the system)
- volume group name : root
- /dev/md0
- continue, accept the changes.
- Create a volume group (for the data)
- volume group name : data
- /dev/md1
- continue, accept the changes.
- Create a logical volume (for the system)
- select : root
- logical volume name : root
- maximum size
- continue
- Create a logical volume (for the data)
- select : data
- logical volume name : data
- maximum size
- continue
- Display configuration details, should look like this:
┌─────────────────────┤ [!!] Partition disks ├──────────────────────┐
│                                                                   │
│ Current LVM configuration:                                        │
│ Unallocated physical volumes:                                     │
│   * none                                                          │
│                                                                   │
│ Volume groups:                                                    │
│   * data (4286MB)                                                 │
│     - Uses physical volume: /dev/md1 (4286MB)                     │
│     - Provides logical volume: data (4286MB)                      │
│   * root (4160MB)                                                 │
│     - Uses physical volume: /dev/md0 (4160MB)                     │
│     - Provides logical volume: root (4160MB)                      │
│                                                                   │
│                                                        <Continue> │
│                                                                   │
└───────────────────────────────────────────────────────────────────┘
- Continue, Finish
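Again for reference, the equivalent LVM commands (a sketch of what the installer just did):
pvcreate /dev/md0 /dev/md1          # mark the MD devices as physical volumes
vgcreate root /dev/md0              # one volume group per MD device
vgcreate data /dev/md1
lvcreate -l 100%FREE -n root root   # one logical volume using the maximum size
lvcreate -l 100%FREE -n data data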
Now the filesystems
- For the system, select LVM VG root, LV root #1
- Use as : btrfs
- Mount point : /
- Mount options : noatime
- Label : root
- For the data, select LVM VG data, LV data #1
- Use as : btrfs
- Mount point : /home
- Mount options : noatime
- Label : data
- Finish partitioning and write changes to disk, accept
Continue the setup
- proxy
- Automatic updates
- Choose wisely between
Let my system run flawlessly, but not always up-to-date
Make my system more secure, but could be broken by an update
- I prefer managing the updates myself, since I don’t want my music to stop playing in the middle of a meal, and I accept risking some delay on security updates. In any case, this machine is not directly connected to the internet.
- Software selection : empty and continue (more on that on the first boot!).
- Let the software be installed.
- No, don’t click on Continue !
┌────────────────────┤ [!!] Finish the installation ├─────────────────────┐
│                                                                         │
┌│ Installation complete                                                  │
││ Installation is complete, so it is time to boot into your new system.  │
││ Make sure to remove the installation media (CD-ROM, floppies), so      │
││ that you boot into the new system rather than restarting the           │
││ installation.                                                          │
││                                                                        │
└│     <Go Back>                                               <Continue> │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘
Optimization for the snapshots
The system as it is configured could be used as is, even with snapshots in mind, but here are some optimizations to consider:
- There is no need to take snapshots of /tmp, this directory is emptied at every boot.
- I prefer putting /var/log/ in a separate subvolume, since I don’t want to lose the logs of what happened if I need to revert to a snapshot of the filesystem.
- Click Go Back, and select Execute a shell
- Unmount the target system:
umount /target/dev
umount /target/proc
umount /target/boot/efi
umount /target/home
umount /target
- now remount
mkdir /tmp/root
mkdir /tmp/home
mount /dev/root/root /tmp/root
mount /dev/data/data /tmp/home
rm /tmp/root/@/swapfile # not supported on btrfs
- fix /tmp, /var/log
btrfs subvolume create /tmp/root/@var-log
btrfs subvolume create /tmp/root/@tmp
- fix fstab /tmp/root/@/etc/fstab (don’t forget to remove the swapfile line)
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system>         <mount point>  <type>  <options>                <dump>  <pass>
/dev/mapper/root-root   /              btrfs   noatime,subvol=@         0       1
/dev/mapper/root-root   /tmp           btrfs   noatime,subvol=@tmp      0       1
/dev/mapper/root-root   /var/log       btrfs   noatime,subvol=@var-log  0       1
# /boot/efi was on /dev/vda1 during installation
UUID=4C77-014D          /boot/efi      vfat    umask=0077               0       1
/dev/mapper/data-data   /home          btrfs   noatime,subvol=@home     0       2
- unmount
umount /tmp/home
umount /tmp/root
sync
- Return to the installer
exit
- Finish the installation, then reboot.
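Once rebooted, you can check that the subvolume layout came out as intended; the output should list the subvolumes created above (IDs and generations will differ):
sudo btrfs subvolume list /
# ID 256 gen 34 top level 5 path @
# ID 257 gen 30 top level 5 path @tmp
# ID 258 gen 34 top level 5 path @var-log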
The setup continues after the first reboot
update
Get the newest package list.
cli@server:~$ sudo apt-get update
Since almost nothing is configured (yet), a dist-upgrade should be safe.
sudo apt-get dist-upgrade
Install openssh-server
sudo apt-get -y install openssh-server
Setting up a rescue user
Since I don’t want to enable root logins and /home is not in the same place (disk/partition) as /, I will create a rescue user named local, with a home on / and sudo privileges.
sudo adduser local
sudo usermod -aG sudo local
sudo usermod -aG adm local
sudo usermod -d /var/home/local local
sudo mkdir /var/home
sudo mv /home/local /var/home/
Now verify the local user is working, from a remote machine:
ssh local@server
sudo lshw
It’s now a good time to setup passwordless access by copying your public key to the server.
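For example, from your workstation:
ssh-copy-id local@server   # appends your public key to ~/.ssh/authorized_keys on the server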
grub
By default, grub is configured to start the system in quiet mode (the quiet parameter in GRUB_CMDLINE_LINUX_DEFAULT), and to stop at the boot menu after a failed boot (GRUB_RECORDFAIL_TIMEOUT). Let’s change this in /etc/default/grub.
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
# info -f grub -n 'Simple configuration'
GRUB_DEFAULT=0
GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT=""
GRUB_CMDLINE_LINUX=""
GRUB_RECORDFAIL_TIMEOUT=5
Then
sudo update-grub
grub and login on serial port
I can’t do this on my hardware, because the UPS is connected to my serial port,
but it’s possible to setup grub and login on the serial port.
For grub in /etc/default/grub
:
...
GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0,115200"
...
GRUB_TERMINAL=serial
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1"
...
systemd will automatically start a login prompt on the serial port when the kernel console is configured on the serial port.
Install and configure snapper
sudo apt-get -y install snapper
sudo snapper -c root create-config /
sudo snapper -c var-log create-config /var/log/
sudo snapper -c home create-config /home
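Check that the three configurations exist (the output below is approximate):
sudo snapper list-configs
# Config  | Subvolume
# --------+----------
# home    | /home
# root    | /
# var-log | /var/log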
disable snapshots at boot
snapper automatically takes a snapshot at boot time; this can be disabled using this command:
sudo systemctl disable snapper-boot.service
disable locate for snapshots
I don’t want to see all versions of the files when I use locate, so here is how to disable locate for snapshots: edit /etc/updatedb.conf and add this line :
PRUNENAMES=".git .bzr .hg .svn .snapshots"
Configure snapper for users in the adm group
Users from the adm group will be able to create, list and diff snapshots, but won’t be able to run undochange.
Edit /etc/snapper/configs/* and modify this line:
ALLOW_GROUPS="adm"
Then
sudo chmod a+rx /.snapshots
sudo chmod a+rx /var/log/.snapshots
sudo chmod a+rx /home/.snapshots
sudo chown :adm /.snapshots
sudo chown :adm /var/log/.snapshots
sudo chown :adm /home/.snapshots
Test it using a user in the adm group:
snapper -c home list
Type | # | Pre # | Date | User | Cleanup | Description | Userdata
-------+---+-------+----------------------------------+------+----------+-------------+---------
single | 0 | | | root | | current |
single | 1 | | Tue 15 May 2018 06:00:09 PM CEST | root | timeline | timeline |
Configure snapshot retention
Have a look at /etc/snapper/configs/* and make sure the timeline and snapshot number limits are configured as you want.
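For example, to keep 24 hourly and 7 daily snapshots in the root config, the relevant keys in /etc/snapper/configs/root would look like this (values are illustrative, tune them to your taste and disk space):
TIMELINE_CREATE="yes"
TIMELINE_LIMIT_HOURLY="24"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_WEEKLY="0"
TIMELINE_LIMIT_MONTHLY="0"
TIMELINE_LIMIT_YEARLY="0"
NUMBER_LIMIT="50"   # cap for the numbered (pre/post) snapshots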
snapper and apt
apt is already configured to take a snapshot pair just before and after running, example:
sudo apt-get install -y ncdu
snapper -c root list
Type | # | Pre # | Date | User | Cleanup | Description | Userdata
-------+---+-------+----------------------------------+------+----------+-------------+---------
single | 0 | | | root | | current |
single | 1 | | Tue 15 May 2018 06:00:09 PM CEST | root | timeline | timeline |
pre | 2 | | Tue 15 May 2018 06:54:12 PM CEST | root | number | apt |
post | 3 | 2 | Tue 15 May 2018 06:54:13 PM CEST | root | number | apt |
apt has automatically created the pre/post snapshot pair number 2 and 3, and we can see what has been changed:
snapper status 3..2
....xa /dev/ptmx
-..... /usr/bin/ncdu
-..... /usr/share/doc/ncdu
-..... /usr/share/doc/ncdu/changelog.Debian.gz
-..... /usr/share/doc/ncdu/copyright
-..... /usr/share/man/man1/ncdu.1.gz
c..... /var/cache/apt/pkgcache.bin
c..... /var/cache/man/index.db
-..... /var/lib/dpkg/info/ncdu.list
-..... /var/lib/dpkg/info/ncdu.md5sums
c..... /var/lib/dpkg/status
c..... /var/lib/dpkg/status-old
c..... /var/tmp/snapper-apt
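To inspect one of those changes in detail, or to revert the whole operation, snapper provides diff and undochange (use undochange with care, it overwrites the current files):
snapper -c root diff 2..3 /var/lib/dpkg/status   # show the content change of one file
sudo snapper -c root undochange 2..3             # revert everything this apt run touched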
The setup is now finished, feel free to have a look at snapper, the snapshot manager. Now, before doing anything else, the server must be backed up.
~~~
Question, remark, bug? Don't hesitate to contact me or report a bug.