So, I’m trying to clone an SSD to an NVMe drive, and I’m bumping into this “dev-disk-by” error when I boot from the NVMe (the SSD is unplugged).
I can’t find anyone talking about this in this context. It seems like what I’ve done here should be fine and should work, but there’s clearly something I and the Arch wiki are missing.
You need to make sure both /etc/fstab and the boot config are pointing to the new partitions. Since they’re referenced by UUID, if the UUIDs changed because of the method used to clone, it won’t find the partitions.
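Roughly the kind of check that means (just a sketch; it assumes the grub config lives at the usual /boot/grub/grub.cfg and refers to the root by UUID):

    sudo blkid                                 # UUIDs of the partitions as they are now
    grep UUID= /etc/fstab                      # what the system will try to mount
    grep -m5 'root=UUID' /boot/grub/grub.cfg   # what grub passes to the kernel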
They’re identical to what they were in the original drive, I’ve verified it in gparted on a live image.
It’s driving me crazy because I can literally find this drive by that UUID in a live image, but when I go to boot the system has no idea what that is.
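For what it’s worth, this is what I mean by finding it from the live image (the partition name is just an example):

    ls -l /dev/disk/by-uuid/        # the symlinks those “dev-disk-by” units wait on
    sudo blkid /dev/nvme0n1p2       # example partition; prints its UUID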
I’m confused. You say that you’re booting off that drive that it can’t find. Like, this is your root drive?
But I believe that the kernel finding the root drive should happen much earlier than this. Like, you’ve got systemd stuff there on the screen; for that to happen, I’d think you’d need to have your root drive already up and mounted. Grub hands that off to the kernel; I believe it’s specified in /etc/default/grub on my Debian system and gets written out when you run sudo update-grub.
If I’m not misunderstanding, and you’re saying that the drive in question is your root drive, are you sure this isn’t happening because a reference to the drive – maybe another partition or something – in /etc/fstab is failing to find something? Or maybe I’m just misunderstanding what you’re saying.
EDIT: if you just want to get it working, unless you’ve got some kind of exotic setup, I expect that you can probably boot into a very raw mode by passing init=/bin/sh on the kernel command line from grub. A lot of stuff won’t be functional if you do that, since you’ll just be running a shell and the kernel, but as long as you have a root filesystem, it’ll probably come up. Then I’d run mount -o remount,rw / so that you can modify your root drive, and then fiddle your /etc/fstab into shape. A live distro is probably more comfortable to work in, but if all you need is to get the regular system up, I’d think that fiddling with /etc/fstab is likely all you need.
EDIT2: and then I’d probably compare the output of blkid to your fstab, once you’re booted into your regular system, if that isn’t what you already did.
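Roughly, the sequence I have in mind (a sketch, not a recipe; assumes a plain single-disk setup):

    # in the grub menu: press 'e' on the boot entry, add init=/bin/sh to the linux line, boot
    mount -o remount,rw /      # the root fs comes up read-only in this mode
    blkid                      # note the real UUIDs
    vi /etc/fstab              # fix any line whose UUID doesn't match
    sync
    reboot -f                  # force reboot; there's no init running to ask nicely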
I’m giving up on my dd attempt and trying clonezilla (a highly regarded option, it seems).
But yeah, welcome to exactly what’s driving me crazy. The dd “worked”: grub loads, it starts loading Linux… and then it gets caught trying to find… itself (?)
Like the exact drive that’s missing is the drive it would have to find to even be partially operational. The other drives weren’t touched and the original drive is unplugged.
There is a btrfs subvolume and they’re both part of the same drive … but it was also copied bit for bit.
IDK… We’ll see whether clonezilla works. I’ve been using Linux over ten years, it’s been a long time since I’ve been this confused.
I mean, if you want to start over, that’s your call, but in all honesty, my guess is that all you have to change from your current situation is a line of text in fstab. I don’t believe that changing the cloning method is going to change that.
EDIT: maybe the UUID is for a swap partition or similar in fstab?
EDIT2: This guy is describing a very similar-sounding situation (though it’s not clear whether he unplugged his original drive before trying to use his cloned one, so he might have had duplicate UUIDs).
https://unix.stackexchange.com/questions/751640/systemd-is-eternally-stuck-on-a-start-job-when-i-go-to-boot-from-my-cloned-to-nv
He thinks that some users have “fixed the problem” by creating a swap partition with gparted.
That would, I expect, generate a new UUID for the swap partition via calling mkswap, and then they’re putting that UUID into their fstab.
Just saying that I’d personally do that, and confirm that the UUIDs listed in fstab match what blkid is saying, before starting all over, because I don’t think that dd or another utility for copying disk contents is likely to produce a different result.
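For what that swap fix looks like in practice (a sketch; the device name is only an example):

    sudo mkswap /dev/nvme0n1p3      # prints the new UUID for the swap area
    sudo blkid /dev/nvme0n1p3       # confirm it
    # then point the swap line in /etc/fstab at that UUID, e.g.:
    # UUID=<new-uuid>  none  swap  sw  0  0
    sudo swapon -a                  # verify the fstab entry actually works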
Clonezilla just worked. The fstab is unmodified, identical to what dd gave me.
I really have no idea what clonezilla did differently. Its output was so fast… But yeah, it just worked with that. So I guess I’ll take it.
Absolutely baffling.
Aight, well, glad to hear it.
Thanks and thanks for the effort you put in.
Clonezilla runs lots of tasks after (and before) dd; they’re in the log file(s) on the live environment before you reboot. I haven’t used it in a while, but I’m confident that one of the tasks is updating grub.
I did update grub via a chroot as one of my troubleshooting steps… so I don’t think that was it either. I actually recall it saying something about skipping updating grub (because it was a GPT system without some special flag set, I think).
I remember seeing it do something to the EFI stuff explicitly and I’m wondering if maybe that’s where it did something I didn’t.
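If it was the EFI side, my guess would be the NVRAM boot entry, which has to point at the new disk. A hedged sketch of inspecting/recreating it (device, partition number, and loader path are just examples):

    sudo efibootmgr -v              # list boot entries and which disk/partition they reference
    # create an entry pointing at the NVMe's EFI system partition (examples only):
    sudo efibootmgr -c -d /dev/nvme0n1 -p 1 -L "Linux" -l '\EFI\GRUB\grubx64.efi'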
Now that you know the safe way out, break it again with dd and figure out the difference 😁
Moving from SATA to NVMe is a classic way to break the boot process. Most of the time, you want to boot a recovery environment from USB, mount your existing root and EFI partitions, and then just reinstall grub.
Once you’ve managed to recover this way even once, you’ll feel a lot more comfortable in the future if shit goes wrong.
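A rough sketch of that recovery, assuming an EFI system and example device names (adjust to your layout; on Arch, arch-chroot /mnt does the bind mounts for you):

    sudo mount /dev/nvme0n1p2 /mnt              # root partition (example)
    sudo mount /dev/nvme0n1p1 /mnt/boot/efi     # EFI system partition (example)
    for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
    sudo chroot /mnt
    grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=GRUB
    grub-mkconfig -o /boot/grub/grub.cfg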
I did do that, FWIW, but it didn’t fix it / it wasn’t enough / it still didn’t work.
If this was a toy system and/or I was back in college and feeling adventurous, I would definitely be more inclined to try and figure out what happened. As it stands, I just want the thing to work 😅
Valid. Glad you’re back on track