Linux - Disks and Filesystems
iSCSI
- Block storage provider: iSCSI Target
- Storage client: iSCSI Initiator
- Dynamic Discovery: the initiator sends a 'SendTargets' request to a single IP/port; if the target listens on multiple names and addresses, all of them are returned as TargetName and TargetAddress (IP:port#) pairs (see the example below).
- IQN naming convention: iqn.yyyy-mm.<reversed domain>:<identifier>, eg iqn.2000-01.com.example:storage.target01
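For example, a SendTargets discovery response contains entries of the form (address is illustrative):
TargetName=iqn.2000-01.com.example:storage.target01
TargetAddress=192.168.1.10:3260,1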
Target
- Install package (and dependencies): targetcli
- Choose/create a local directory for disk images: /iscsi_disks
Start admin utility:
targetcli
/> cd /backstores/fileio
/backstores/fileio> create disk01 /iscsi_disks/disk01.img 10G
/backstores/fileio> cd /iscsi
/iscsi> create iqn.2000-01.com.example:storage.target01
/iscsi> cd iqn.2000-01.com.example:storage.target01/tpg1/luns
/iscsi/iqn.20...t01/tpg1/luns> create /backstores/fileio/disk01
/iscsi/iqn.20...t01/tpg1/luns> cd ../acls
/iscsi/iqn.20...t01/tpg1/acls> create iqn.2000-01.com.example:initiator01
/iscsi/iqn.20...t01/tpg1/acls> cd iqn.2000-01.com.example:initiator01
/iscsi/iqn.20...initiator01> set auth userid=someuser
/iscsi/iqn.20...initiator01> set auth password=somepass
exit
Other commands within targetcli:
ls
delete [object]
help
Enable and start the target service:
systemctl enable target
systemctl start target
If necessary, open firewall for 3260:
firewall-cmd --add-service=iscsi-target --permanent
firewall-cmd --reload
Initiator
Install package: iscsi-initiator-utils
In /etc/iscsi/initiatorname.iscsi specify this initiator's name:
InitiatorName=iqn.2000-01.com.example:initiator01
In /etc/iscsi/iscsid.conf:
node.session.auth.authmethod = CHAP
node.session.auth.username = username
node.session.auth.password = password
Discover target:
# iscsiadm -m discovery -t sendtargets -p san-server01
san-server01:3260,1 iqn.2000-01.com.example:storage.target01
More info:
iscsiadm -m node -o show ...
Login:
iscsiadm -m node --login
Confirm session:
iscsiadm -m session -o show
Confirm new device added (eg sdc):
cat /proc/partitions
Then, partition, format and mount /dev/sdc as normal.
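A minimal sketch of those steps (partition, filesystem type and mountpoint are illustrative):
fdisk /dev/sdc                 # create a partition, eg /dev/sdc1
mkfs.ext4 /dev/sdc1            # format the new partition
mount /dev/sdc1 /mnt/iscsi     # mount it (mountpoint must exist)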
Logout of iSCSI (after unmounting used filesystems):
iscsiadm -m node --logout
Disk Management
Grub
When installing on a RAID 1 mirror for the OS, the grub boot loader only installs on the first disk, so if that disk fails you can't boot off the second. To copy the loader to the second disk:
grub> find /grub/stage1
This should find (hd0,0) and (hd1,0), which correspond to /dev/sda and /dev/sdb. Then temporarily make sdb the first disk and install:
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
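On newer systems that boot with GRUB 2 the legacy grub shell above is not available; assuming a BIOS/MBR setup on a RHEL-style system (grub-install on Debian-style), the equivalent is:
grub2-install /dev/sdb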
HD Parameters
Show settings/features:
hdparm -I /dev/sda
Test transfer rate:
hdparm -t --direct /dev/sda
Show power management setting:
hdparm -B /dev/sda
MD RAIDs
Create an array of 2 disks in a RAID1 (mirror):
mdadm --create /dev/md0 -l 1 -n 2 /dev/sdb1 /dev/sdc1
Monitor status with:
mdadm --detail /dev/md0
cat /proc/mdstat
Ensure RAID is detected at boot time:
mdadm -Es >> /etc/mdadm.conf
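This appends one ARRAY line per detected array, eg (name and UUID are illustrative):
ARRAY /dev/md0 metadata=1.2 name=host:0 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd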
Remove a device from an array:
mdadm --remove /dev/md0 /dev/sdb1
Fail a drive in an array:
mdadm --fail /dev/md0 /dev/sdb1
Add a device to an array:
mdadm --add /dev/md0 /dev/sdb1
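eg the usual sequence to replace a failing member, combining the commands above (device names illustrative):
mdadm --fail /dev/md0 /dev/sdb1      # mark the member failed
mdadm --remove /dev/md0 /dev/sdb1    # detach it from the array
# physically replace the disk and recreate the partition, then:
mdadm --add /dev/md0 /dev/sdb1       # re-add; the array resyncs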
The /etc/cron.weekly/99-raid-check script can sometimes report:
WARNING: mismatch_cnt is not 0 on /dev/md1
The actual mismatch count can be found with:
cat /sys/block/md1/md/mismatch_cnt
To run a repair and then re-check:
echo repair > /sys/block/md1/md/sync_action
echo check > /sys/block/md1/md/sync_action
Partitioning
FDisk
Supports MBR partitions
Parted
Supports MBR and GPT
See the parted manual
LVM
Physical Volumes
To create PVs on two partitions:
pvcreate /dev/sdc1 /dev/sdd1
To show current PVs:
pvscan
Volume Groups
To create a VG:
vgcreate vg00 /dev/sd[cd]
To show all current VGs:
vgscan
To show details of a VG (including free PEs):
vgdisplay vg00
To extend a volume group by adding a new PV:
vgextend vg00 /dev/sde
To make a volume group available:
vgchange -ay vg00
Logical Volumes
To create a new LV:
lvcreate --size 100M vg00 -n lv00
or replace the --size option with --extents 500, --extents 60%VG, or -l 100%FREE
eg to create a RAID5 array out of 3 disks (2 data):
lvcreate -n lv00 --type raid5 -i 2 --extents 100%FREE vg00
Show status of LVM RAID:
lvs -a vg00
To rename a LV in VG vg00:
lvrename vg00 lvold lvnew
To remove a LV:
lvremove vg00/lv01
To show current LVs:
lvscan
Filesystems
To format with 1% minfree, large file support (see types in /etc/mke2fs.conf), journalling and a label:
mkfs.ext4 -m 1 -T largefile4 -j -L /home /dev/mapper/vg00-lv00
To alter the label:
e2label /dev/sda newlabel
To get the UUID of the disk:
blkid
To mount at boot time, add an entry in /etc/fstab.
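A typical entry, reusing the /home label from above (options are illustrative):
LABEL=/home  /home  ext4  defaults  1 2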
Or to use XFS on a LV:
mkfs.xfs -L /home /dev/mapper/vg00-lv00
BTRFS
Create a RAID5 array for data and metadata:
mkfs.btrfs -L data -d raid5 -m raid5 -f /dev/sdc /dev/sdd /dev/sde
View usage:
btrfs filesystem usage /data
Look for btrfs filesystems:
blkid --match-token TYPE=btrfs
Subvolumes
Create subvolume:
btrfs subvolume create /data/db
Info:
btrfs subvolume list .
btrfs subvolume list /data
btrfs subvolume show /data/db
Delete subvolume:
btrfs subvolume delete /data/db
Subvolumes can be mounted like separate filesystems:
mount -o subvol=/oldpath /dev/sda5 /newpath
so the subvolume is now also visible under /newpath
Compression
Mount with compression option in fstab:
compress=zstd:1
where the algorithm could also be lzo or zlib. The zstd compression level can be increased, eg to 2 or 3.
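eg a complete fstab entry (device label and mountpoint are illustrative):
LABEL=data  /data  btrfs  compress=zstd:1  0 0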
Per-file/directory/subvolume compression is also available:
btrfs property get /somefile compression
btrfs property set /etc compression zlib
Defragment:
btrfs filesystem defragment -r /
Loopback Filesystem
Create a backing file and attach it to the first free loop device:
dd if=/dev/zero of=loopback.img bs=1024M count=5
losetup -fP loopback.img
To show loopback device(s):
losetup -a
losetup -l
To delete loopback device:
losetup -d /dev/loop0
Then, create filesystem, eg:
mkfs.xfs -L backups loopback.img
Then mount /dev/loop0 as a traditional device, or mount the image file directly with -o loop. Note: the losetup configuration is lost at restart, so for at-boot mounting reference the image file in /etc/fstab with the loop option rather than /dev/loop0.
Smarttools
/etc/smartmontools/smartd.conf
Default to scan ATA/SCSI devices and report problems to root:
DEVICESCAN -H -m root -M exec /usr/libexec/smartmontools/smartdnotify -n standby,10,q
Or a specific device, and email an external user:
/dev/sda -a -o on -S on -s (S/../.././02|L/../../6/03) -m [email protected]
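Here -a enables the default set of checks, -o on/-S on turn on automatic offline testing and attribute autosave, and -s (S/../.././02|L/../../6/03) schedules a short self-test daily at 02:00 and a long self-test every Saturday at 03:00.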
Scan for devices:
smartctl --scan
Show detailed information about a device:
smartctl --all /dev/sda