Linux Kernel: Fix Root Filesystem Mount Failure in Hybrid Cloud VM Migration

Severity: CRITICAL

Quick Fix Summary

TL;DR

Boot from rescue media, chroot, and regenerate initramfs with correct drivers.

The kernel cannot find or mount the root filesystem, typically due to missing storage drivers in the initramfs after a cross-cloud VM migration.
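On screen this typically shows up as the classic kernel panic below (the block numbers vary with the device that could not be found):

```
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
```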

Diagnosis & Causes

  • Missing storage controller drivers (e.g., virtio_scsi, nvme, xen_blkfront) in initramfs.
  • Incorrect root= kernel parameter or UUID/device name mismatch post-migration.
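To confirm the first cause, you can grep the initramfs listing for the driver the target platform needs. `has_driver` below is a hypothetical helper that operates on `lsinitrd` output; the sample listing is assumed (an image built on VMware, so it has no virtio drivers):

```bash
# Hypothetical helper: does the given module appear in an initramfs listing?
# Reads an `lsinitrd <image>` listing on stdin.
has_driver() {
  grep -q "$1\.ko" -
}

# Sample (assumed) listing from an image built on VMware:
listing='usr/lib/modules/5.14.0/kernel/drivers/scsi/vmw_pvscsi.ko.xz
usr/lib/modules/5.14.0/kernel/drivers/scsi/sd_mod.ko.xz'

if echo "$listing" | has_driver virtio_scsi; then
  echo "virtio_scsi present"
else
  echo "virtio_scsi MISSING - rebuild initramfs"
fi
```

On the real system you would pipe `lsinitrd /boot/initramfs-$(uname -r).img` into the helper instead of the sample string.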
Recovery Steps

Step 1: Verify Boot Parameters and Environment

From the kernel panic screen or a rescue boot, check the kernel command line and identify the expected root device.

```bash
cat /proc/cmdline
lsblk -f
blkid
```
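The `root=` value can be cross-checked mechanically against the `blkid` output. `get_root_param` below is a hypothetical helper, shown against a sample command line (the UUID is assumed):

```bash
# Hypothetical helper: extract the root= value from a kernel command line.
get_root_param() {
  tr ' ' '\n' <<<"$1" | sed -n 's/^root=//p'
}

# Sample (assumed) contents of /proc/cmdline:
cmdline='BOOT_IMAGE=/vmlinuz-5.14.0 root=UUID=3f1c-99ab ro quiet'
get_root_param "$cmdline"
# Compare the printed value against the UUIDs reported by blkid.
```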
Step 2: Boot into Rescue/Recovery Environment

Use the cloud provider's rescue image, an installation ISO, or a live CD to gain shell access to the failed system's disks.

```bash
# Mount the root filesystem from the rescue environment. Example:
mkdir /mnt/rescue
mount /dev/sda1 /mnt/rescue
mount --bind /dev /mnt/rescue/dev
mount --bind /proc /mnt/rescue/proc
mount --bind /sys /mnt/rescue/sys
```
Step 3: Chroot and Identify Missing Drivers

Change root into the failed system and check the kernel modules required for the current storage hardware.

```bash
chroot /mnt/rescue /bin/bash
lspci -k | grep -i storage -A3
lsmod | grep -E 'virtio|nvme|xen|scsi'
```
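The module name can be pulled out of the `lspci -k` output programmatically. The sample output below is assumed (a virtio SCSI controller); the "Kernel modules:" line names the module the initramfs must carry:

```bash
# Sample (assumed) `lspci -k` output for the storage controller:
lspci_out='00:04.0 SCSI storage controller: Red Hat, Inc. Virtio SCSI
	Subsystem: Red Hat, Inc. Device 0008
	Kernel driver in use: virtio-pci
	Kernel modules: virtio_pci'

# Extract the module name from the "Kernel modules:" line:
echo "$lspci_out" | sed -n 's/.*Kernel modules: //p'
```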
Step 4: Rebuild Initramfs with All Necessary Drivers

Force regeneration of the initramfs for the current kernel, ensuring all storage drivers are included. Adjust for your distribution.

```bash
# For RHEL/CentOS/Fedora/AlmaLinux/Rocky:
dracut --force --add-drivers "virtio_scsi virtio_blk virtio_pci nvme xen_blkfront"
# For Ubuntu/Debian:
update-initramfs -u -k all
```
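To keep the driver list applied on future kernel updates (on dracut-based systems), the same drivers can be pinned in a drop-in file, a sketch of which is below; the filename is arbitrary, and the leading/trailing spaces inside the quotes are part of dracut.conf syntax:

```
# /etc/dracut.conf.d/99-cloud-drivers.conf
add_drivers+=" virtio_scsi virtio_blk virtio_pci nvme xen_blkfront "
```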
Step 5: Verify and Update Bootloader Configuration

Check that the GRUB configuration points to the correct root device by UUID or persistent name. Update if necessary.

```bash
cat /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg   # on Debian/Ubuntu: update-grub
# For legacy BIOS systems, also run:
grub2-install /dev/sda
```
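If the root device's UUID changed during the migration, /etc/fstab needs the same correction. The `sed` below is a sketch against a sample line, with hypothetical UUIDs; in practice you would edit /etc/fstab itself (e.g. `sed -i` after taking a backup):

```bash
old_uuid='1111-aaaa'   # UUID baked into the migrated image (hypothetical)
new_uuid='2222-bbbb'   # UUID reported by blkid in the new cloud (hypothetical)

# Sample fstab line standing in for the real /etc/fstab:
fstab_line='UUID=1111-aaaa / xfs defaults 0 0'
echo "$fstab_line" | sed "s/UUID=$old_uuid/UUID=$new_uuid/"
```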
Step 6: Test Boot and Implement Prevention

Exit the chroot, reboot, and verify a normal boot. To prevent recurrence, ensure your initramfs tooling is configured for hybrid environments.

```bash
exit
umount -R /mnt/rescue
reboot
# Post-boot, verify initramfs content:
lsinitrd /boot/initramfs-$(uname -r).img | grep -E 'virtio|nvme|xen'
```

Architect's Pro Tip

"This often happens when migrating a VM from a hypervisor using SCSI emulation (like VMware) to one using virtio (like KVM/AWS) without rebuilding initramfs. Always test-boot in the target environment before cutting over."

Frequently Asked Questions

How can I prevent this during future migrations?

Before migration, pre-install common cloud drivers (e.g., virtio drivers on VMware) and rebuild initramfs. Use a standardized, hardware-agnostic image build process.

What if I don't know which storage driver is needed?

Boot a working, generic instance in the target cloud, run `lspci -k` and `lsmod`, and note the storage controller driver (e.g., `virtio_pci`). Add this driver to the dracut command.
