LVM move
I'm revisiting an old post from two years ago (which was never finished), essentially doing the same thing in the other direction, for backup purposes. Originally I moved from a 512G NVMe to a 1T one; now we'll go back the other way (with most of the data already saved elsewhere, or let go of, of course).
Source:
Disk /dev/nvme0n1: 953.87 GiB, 1024209543168 bytes, 2000409264 sectors
Disk model: SPCC M.2 PCIe SSD
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xbf3eefc4
Device Boot Start End Sectors Size Id Type
/dev/nvme0n1p1 2048 2099199 2097152 1G 83 Linux
/dev/nvme0n1p2 2099200 44042239 41943040 20G 83 Linux
/dev/nvme0n1p3 44042240 127928319 83886080 40G 83 Linux
/dev/nvme0n1p4 127928320 2000409263 1872480944 892.9G 83 Linux
Target:
Disk /dev/sdc: 476.94 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: PCIe SSD
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: D58D15BF-D320-4F0B-B76A-9C857520416D
I'm using a USB-NVMe adapter, which obscures some details (the model name, for one); here's what the drive looked like before the original move:
fdisk -l /dev/nvme1n1
Disk /dev/nvme1n1: 476.94 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: SPCC M.2 PCIe SSD
[...]
Device Boot Start End Sectors Size Id Type
/dev/nvme1n1p1 2048 2099199 2097152 1G 83 Linux
/dev/nvme1n1p2 2099200 966789119 964689920 460G 83 Linux
/dev/nvme1n1p3 966789120 1000215215 33426096 15.9G 83 Linux
Let's preserve some of the layout:
$ fdisk /dev/sdc
Welcome to fdisk (util-linux 2.39.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): p
Disk /dev/sdc: 476.94 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: PCIe SSD
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: D58D15BF-D320-4F0B-B76A-9C857520416D
Command (m for help): o
Created a new DOS (MBR) disklabel with disk identifier 0x7eaecc2e.
The device contains 'gpt' signature and it will be removed by a write command. See fdisk(8) man page and --wipe option for more details.
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-1000215215, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-1000215215, default 1000215215): 2099199
Created a new partition 1 of type 'Linux' and of size 1 GiB.
Partition #1 contains a ext2 signature.
Do you want to remove the signature? [Y]es/[N]o: y
The signature will be removed by a write command.
Command (m for help): n
Partition type
p primary (1 primary, 0 extended, 3 free)
e extended (container for logical partitions)
Select (default p):
Using default response p.
Partition number (2-4, default 2):
First sector (2099200-1000215215, default 2099200):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2099200-1000215215, default 1000215215): +300G
Created a new partition 2 of type 'Linux' and of size 300 GiB.
Command (m for help): p
Disk /dev/sdc: 476.94 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: PCIe SSD
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x7eaecc2e
Device Boot Start End Sectors Size Id Type
/dev/sdc1 2048 2099199 2097152 1G 83 Linux
/dev/sdc2 2099200 631244799 629145600 300G 83 Linux
Filesystem/RAID signature on partition 1 will be wiped.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
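Before going further, a quick lsblk makes sure the kernel actually sees the new layout (fdisk claims it re-read the table, but it never hurts to check):
$ lsblk /dev/sdc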
All right, there's a boot partition, and one for the PV. The boot loader and the boot partition will be copied verbatim with dd, but for the LVs, we'll take advantage of LVM features. Note the bs=446 count=1 on the first copy: that's just the bootstrap code portion of the MBR, stopping short of the partition table we created above.
$ dd if=/dev/nvme0n1 of=/dev/sdc bs=446 count=1
1+0 records in
1+0 records out
446 bytes copied, 0.014048 s, 31.7 kB/s
$ dd if=/dev/nvme0n1p1 of=/dev/sdc1 status=progress
1064522240 bytes (1.1 GB, 1015 MiB) copied, 87 s, 12.2 MB/s
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 98.6537 s, 10.9 MB/s
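If you want to be thorough, the copy can be verified byte-for-byte with cmp (both partitions are exactly 1 GiB, so the sizes line up); it exits silently when the two are identical:
$ cmp /dev/nvme0n1p1 /dev/sdc1
Next, turn the new partition into a PV and add it to the existing volume group: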
pvcreate /dev/sdc2
vgextend zasz /dev/sdc2
$ pvs
PV VG Fmt Attr PSize PFree
/dev/nvme0n1p4 zasz lvm2 a-- <892.87g <630.82g
/dev/sdc2 zasz lvm2 a-- <300.00g <300.00g
$ lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
dmc zasz -wi-a----- 1.00g
files zasz -wi-a----- 50.00g
home zasz -wi-a----- 100.00g
newroot zasz -wi-a----- 50.00g
nextcloud zasz -wi-a----- 45.00g
swap zasz -wi-a----- 16.00g
Move everything (the six LVs add up to 1 + 50 + 100 + 50 + 45 + 16 = 262 GiB, which fits comfortably in the 300 GiB PV):
pvmove -n dmc /dev/nvme0n1p4 /dev/sdc2
pvmove -n newroot /dev/nvme0n1p4 /dev/sdc2
pvmove -n home /dev/nvme0n1p4 /dev/sdc2
pvmove -n files /dev/nvme0n1p4 /dev/sdc2
pvmove -n nextcloud /dev/nvme0n1p4 /dev/sdc2
pvmove -n swap /dev/nvme0n1p4 /dev/sdc2
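The six invocations could just as well be a small loop; adding -i makes pvmove report progress at a fixed interval (ten seconds here, pick any), which is handy for the larger volumes:
for lv in dmc newroot home files nextcloud swap; do
    pvmove -i 10 -n "$lv" /dev/nvme0n1p4 /dev/sdc2
done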
See if we succeeded:
$ lvs -o +devices
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
main EMBASSY_CRALVBAU2X7DL6KIRQPCR2AB5EHOSECOCTGNAQK44SKF3FSHP5FA -wi------- 8.00g /dev/zd16(0)
package-data EMBASSY_CRALVBAU2X7DL6KIRQPCR2AB5EHOSECOCTGNAQK44SKF3FSHP5FA -wi------- <92.00g /dev/zd16(2048)
dmc zasz -wi------- 1.00g /dev/sdc2(0)
files zasz -wi------- 50.00g /dev/sdc2(38656)
home zasz -wi------- 100.00g /dev/sdc2(13056)
newroot zasz -wi------- 50.00g /dev/sdc2(256)
nextcloud zasz -wi------- 45.00g /dev/sdc2(55552)
swap zasz -wi------- 16.00g /dev/sdc2(51456)
Looks good, so let's remove the old PV from the volume group.
$ vgreduce zasz /dev/nvme0n1p4
Physical volume "/dev/nvme0n1p4" still in use
Hmm, what's that about? Let's try a different way of listing allocations:
$ pvdisplay -m
--- Physical volume ---
PV Name /dev/nvme0n1p4
VG Name zasz
PV Size <892.87 GiB / not usable <1.34 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 228574
Free PE 228561
Allocated PE 13
PV UUID b55Svl-78yy-H16I-3yVY-cXLt-6pMd-naou1x
--- Physical Segments ---
Physical extent 0 to 140802:
FREE
Physical extent 140803 to 140815:
Logical volume /dev/zasz/lvol0_pmspare
Logical extents 0 to 12
Physical extent 140816 to 228573:
FREE
Okay, looks like I did not entirely succeed in deleting that thin-provisioned pool I had earlier: lvol0_pmspare is its leftover metadata spare, and since it's a hidden LV, it never showed up in the plain lvs listing above. To solve this, we can run pvmove with only the source PV as an argument, which moves everything on it to a different PV.
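Spelled out (with no destination given, LVM picks free space on the group's other PVs, which here means /dev/sdc2):
pvmove /dev/nvme0n1p4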
$ pvdisplay -m
--- Physical volume ---
PV Name /dev/nvme0n1p4
VG Name zasz
PV Size <892.87 GiB / not usable <1.34 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 228574
Free PE 228574
Allocated PE 0
PV UUID b55Svl-78yy-H16I-3yVY-cXLt-6pMd-naou1x
--- Physical Segments ---
Physical extent 0 to 228573:
FREE
Come to think of it, I'm not gonna need it anyway; let's just get rid of it.
lvremove zasz/lvol0_pmspare
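A quick lvs -a (the -a flag also lists hidden volumes such as the pmspare) should confirm it's really gone:
$ lvs -a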
All right, continuing where we left off.
$ vgreduce zasz /dev/nvme0n1p4
Removed "/dev/nvme0n1p4" from volume group "zasz"
$ pvs
PV VG Fmt Attr PSize PFree
/dev/nvme0n1p4 lvm2 --- <892.87g <892.87g
/dev/sdc2 zasz lvm2 a-- <300.00g <38.00g
Last step: remove the dangling PV and clear the old boot partition:
$ pvremove /dev/nvme0n1p4
Labels on physical volume "/dev/nvme0n1p4" successfully wiped.
$ mkfs.ext4 /dev/nvme0n1p1
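At this point the old disk should carry no LVM signature anywhere; lsblk -f gives a quick per-partition view of whatever signatures remain:
$ lsblk -f /dev/nvme0n1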
Now let's see if the new disk is still bootable! Unfortunately, it's not: GRUB starts, but then it passes control to the other disk. To fix this, let's regenerate the config from a chroot.
mount /dev/zasz/newroot /mnt/root
mount /dev/sdc1 /mnt/root/boot
artix-chroot /mnt/root /bin/bash
update-grub
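While in the chroot, it helps to compare the IDs the regenerated entries search for against what the new boot partition actually reports; a quick check, assuming the config ends up at the usual /boot/grub/grub.cfg:
$ blkid /dev/sdc1
$ grep 'search --no-floppy' /boot/grub/grub.cfg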
After some mucking about with the partition IDs in the fixed config (which is prepended to the generated entries, and is what I generally boot from), we are able to boot from the backup disk. Tremendous success!