Veeam Linux Repository — Disk Provisioning Runbook

linuxrepo03

Host:        linuxrepo03
OS:          Ubuntu 20.04
Filesystem:  XFS / GPT
Target Size: ~90 TB
LVM:         No — raw partition
Multipath:   Not configured
Procedure Steps
01 Scan all HBA hosts
# Trigger SCSI rescan across all host adapters
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done

Hosts host0–host4 are present on this system; the loop handles all of them.

02 Confirm new device appeared
dmesg | tail -20
lsblk -o NAME,TYPE,SIZE,FSTYPE,MOUNTPOINT   # expect sdd to appear

Note: Existing disks are sda/sdb/sdc. New disk should present as sdd.
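A quick way to confirm exactly which device is new is to diff the device list captured before and after the rescan. This sketch uses sample lists in place of live `lsblk -dn -o NAME` output; on the host you would capture the real output into the two variables:

```shell
# Diff sorted device lists to isolate the newly presented disk.
# The two variables stand in for `lsblk -dn -o NAME` captured
# before and after the SCSI rescan (sample data, not live output).
before=$'sda\nsdb\nsdc'
after=$'sda\nsdb\nsdc\nsdd'
comm -13 <(echo "$before") <(echo "$after")   # prints: sdd
```

`comm -13` suppresses lines common to both lists and lines unique to the first, leaving only the device that appeared after the rescan.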

03 Verify size matches ticket
fdisk -l /dev/sdd

Confirm reported capacity matches the provisioned size before partitioning.
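The capacity check can be scripted rather than eyeballed. This is a hypothetical helper (not part of the original procedure) that compares a reported byte count against the ticketed size, with a small tolerance for base-10 vendor TB vs base-2 reporting:

```shell
# Hypothetical helper: check a reported byte count against the
# ticketed size, allowing ±5% for vendor TB vs binary rounding.
ticket_tb=90
within_tolerance() {
    local bytes=$1
    local expected=$(( ticket_tb * 1000000000000 ))   # vendor TB, base-10
    [ "$bytes" -ge $(( expected * 95 / 100 )) ] && \
    [ "$bytes" -le $(( expected * 105 / 100 )) ]
}
# On the live host (device name assumed from step 02):
#   within_tolerance "$(blockdev --getsize64 /dev/sdd)" && echo "size OK"
```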

04 Partition — GPT, 100% of disk
parted /dev/sdd --script mklabel gpt
parted /dev/sdd --script mkpart primary xfs 0% 100%

GPT is required: the disk is larger than 2 TB, which exceeds the MBR addressing limit.
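The 2 TB figure follows directly from MBR's 32-bit sector addressing; a quick check of the arithmetic:

```shell
# MBR stores LBAs in 32-bit fields: at most 2^32 sectors of 512 bytes.
mbr_max_bytes=$(( 4294967296 * 512 ))
echo "$mbr_max_bytes"   # 2199023255552 bytes = 2 TiB, far below ~90 TB
```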

05 Confirm partition created
lsblk /dev/sdd # expect sdd1 child partition
06 Format XFS
mkfs.xfs /dev/sdd1

Note: At ~90 TB this may take several minutes. Do not interrupt. XFS default AG sizing (~1 TiB per AG) is appropriate at this scale.
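A back-of-envelope check of the expected allocation group count, assuming mkfs.xfs caps AG size at roughly 1 TiB on disks this large (confirm the real count after formatting with `xfs_info /dev/sdd1`):

```shell
# Expected AG count at mkfs.xfs defaults, assuming a ~1 TiB AG size cap.
size_bytes=90000000000000        # ~90 TB (vendor, base-10)
ag_bytes=1099511627776           # 1 TiB
echo $(( (size_bytes + ag_bytes - 1) / ag_bytes ))   # prints: 82
```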

07 Get device path for fstab
blkid /dev/sdd1 # note UUID if needed, but convention uses /dev path

Existing repo mounts (CBSU, ALUE) use /dev/sdX paths, not UUIDs. Match this convention.
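If the UUID is ever needed (e.g. for a future move to UUID-based mounts), it can be pulled from blkid output like this. The sample line and UUID are made up for illustration; on the live host you would feed in the actual output of `blkid /dev/sdd1`:

```shell
# Extract the filesystem UUID from a blkid-style line.
# The sample line below is hypothetical, not real output.
sample='/dev/sdd1: UUID="d0c5a8e2-0000-4000-8000-000000000000" TYPE="xfs"'
uuid=$(echo "$sample" | grep -o 'UUID="[^"]*"' | cut -d'"' -f2)
echo "$uuid"
```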

08 Create mountpoint
mkdir -p /mnt/<REPONAME>

Match existing naming convention — e.g. CBSU, ALUE, GIPI.

09 Add fstab entry
# Verify current fstab first
cat /etc/fstab

# Append new entry ( >> appends, > would overwrite — never use > here )
echo "/dev/sdd1 /mnt/<REPONAME> xfs defaults 0 0" >> /etc/fstab

# Confirm line landed correctly
tail -3 /etc/fstab

Convention: existing mounts use /dev/sdXN with defaults 0 0 — no nofail, no UUID.
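Before appending, a cheap sanity check is that the candidate line has fstab's six expected fields (device, mountpoint, type, options, dump, pass). The repo name below is the placeholder from step 08:

```shell
# Count whitespace-separated fields in the candidate fstab line.
# <REPONAME> is the placeholder from step 08; substitute the real name.
fstab_line='/dev/sdd1 /mnt/<REPONAME> xfs defaults 0 0'
fields=$(echo "$fstab_line" | awk '{print NF}')
[ "$fields" -eq 6 ] && echo "fstab line looks well-formed"
```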

10 Mount and verify
mount -a
df -hT /mnt/<REPONAME>

# Set permissions
chmod 755 /mnt/<REPONAME>

If mount -a errors, fix /etc/fstab before rebooting. A bad fstab entry on a non-root mount will drop the host to emergency mode on next boot.

⚠ Key Notes