Today, while migrating a SAN, I faced this issue; I hope this writeup helps others too.
The system panicked during boot, logging the error:
{0} ok boot 56024-disk
Boot device: /virtual-devices@100/channel-devices@200/disk@1 File and args:
SunOS Release 5.10 Version Generic_147440-01 64-bit
Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.
NOTICE: zfs_parse_bootfs: error 19
Cannot mount root on rpool/68 fstype zfs
panic[cpu0]/thread=180e000: vfs_mountroot: cannot mount root
Changes
This issue usually occurs when the system is trying to boot a ZFS rpool and the path to the disk has changed, or when the customer is trying to boot the system from a cloned disk (that is, a disk that is a copy of another boot disk).
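A typical way such a clone comes about is a raw block copy of the whole boot disk, which copies the ZFS vdev labels verbatim, so the new disk still advertises the old disk's device path. A purely illustrative example (the device names here are assumptions, not from the case above):

# dd if=/dev/rdsk/c0d0s2 of=/dev/rdsk/c0d1s2 bs=1024k   # illustrative device names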
Cause
The issue is caused by a mismatch between the current path of the disk you are trying to boot from and the path stored in the ZFS label of the same disk:
ok boot 56024-disk
Boot device: /virtual-devices@100/channel-devices@200/disk@1 File and args:
# zdb -l /dev/rdsk/c0d1s0
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 29
    name: 'rpool'
    state: 0
    txg: 1906
    pool_guid: 3917355013518575342
    hostid: 2231083589
    hostname: ''
    top_guid: 3457717657893349899
    guid: 3457717657893349899
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 3457717657893349899
        path: '/dev/dsk/c0d0s0'
        devid: 'id1,vdc@f85a3722e4e96b600000e056e0049/a'
        phys_path: '/virtual-devices@100/channel-devices@200/disk@0:a'
        whole_disk: 0
        metaslab_array: 31
        metaslab_shift: 27
        ashift: 9
        asize: 21361065984
        is_log: 0
        create_txg: 4
As you can see, we are trying to boot from disk@1, but the path recorded in the ZFS label is disk@0.
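If you want to double-check which physical path a boot alias such as 56024-disk resolves to before going any further, the OBP can show you (output omitted here; devalias lists the defined aliases with their device paths, and printenv shows the default boot device):

{0} ok devalias
{0} ok printenv boot-device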
Solution
To fix the issue, you have to boot the system in failsafe mode or from the CD-ROM, then import the rpool on that disk to force ZFS to correct the path.
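For example, from the ok prompt you can boot the failsafe archive, or boot single-user from install media (these are the standard Solaris 10 SPARC commands; the media alias may differ on your system):

{0} ok boot -F failsafe

or:

{0} ok boot cdrom -s

Once you are at the shell, import the pool under an alternate root: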
# zpool import -R /mnt rpool
cannot mount '/mnt/export': failed to create mountpoint
cannot mount '/mnt/export/home': failed to create mountpoint
cannot mount '/mnt/rpool': failed to create mountpoint
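The "failed to create mountpoint" warnings can be ignored for our purpose: the import itself succeeded, and it is the import that rewrites the path in the vdev label. If in doubt, you can verify that the pool is now imported:

# zpool status rpool

Then check the label on the disk again: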
# zdb -l /dev/rdsk/c0d1s0
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 29
    name: 'rpool'
    state: 0
    txg: 1923
    pool_guid: 3917355013518575342
    hostid: 2230848911
    hostname: ''
    top_guid: 3457717657893349899
    guid: 3457717657893349899
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 3457717657893349899
        path: '/dev/dsk/c0d1s0'
        devid: 'id1,vdc@f85a3722e4e96b600000e056e0049/a'
        phys_path: '/virtual-devices@100/channel-devices@200/disk@1:a'
        whole_disk: 0
        metaslab_array: 31
        metaslab_shift: 27
        ashift: 9
        asize: 21361065984
        is_log: 0
        create_txg: 4
As you can see, the path has been corrected. However, you also have to remove the zpool.cache file; otherwise, after the reboot the ZFS commands will still show the disk as c0d0:
# zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
rpool                       5.86G  13.7G   106K  /mnt/rpool
rpool/ROOT                  4.35G  13.7G    31K  legacy
rpool/ROOT/s10s_u10wos_17b  4.35G  13.7G  4.35G  /mnt
rpool/dump                  1.00G  13.7G  1.00G  -
rpool/export                  63K  13.7G    32K  /mnt/export
rpool/export/home             31K  13.7G    31K  /mnt/export/home
rpool/swap                   528M  14.1G   114M  -
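The root dataset itself is not mounted yet, so mount it and then remove the stale cache file from the boot environment: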
# zfs mount rpool/ROOT/s10s_u10wos_17b
# cd /mnt/etc/zfs
# rm zpool.cache
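The remaining steps are not shown in the transcript above, but you would typically leave the mountpoint, export the pool so everything is unmounted cleanly, and then boot again from the corrected disk:

# cd /
# zpool export rpool
# init 0
{0} ok boot 56024-disk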