I have an ongoing problem with my RAID-Z1 pool. During some (not all) reboots my system fails to re-import the zpool. While it seems 'random' to me, there is undoubtedly a reason behind it.
I would really appreciate advice on where I should begin to look for a potential cause.
My setup is an x86_64 Funtoo Current build. sys-kernel/spl, sys-fs/kmod-zfs and sys-fs/zfs are all version 0.6.3, installed together with the sys-kernel/ck-sources-3.14.4 kernel. The zfs service is started during the default runlevel.
When I log in to the desktop I find that my zpool wd20ears_zfs has failed with the following:
~ # zpool status
  pool: wd20ears_zfs
 state: UNAVAIL
status: One or more devices could not be used because the label is
        missing or invalid.  There are insufficient replicas for the
        pool to continue functioning.
action: Destroy and re-create the pool from a backup source.
   see: http://zfsonlinux.org/msg/ZFS-8000-5E
  scan: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        wd20ears_zfs    UNAVAIL      0     0     0  insufficient replicas
          raidz1-0      UNAVAIL      0     0     0  insufficient replicas
            sdf         FAULTED      0     0     0  corrupted data
            sde         FAULTED      0     0     0  corrupted data
            sdg         UNAVAIL      0     0     0
            sdd         FAULTED      0     0     0  corrupted data
If I export the pool and then re-import it with the 'force' switch, it mounts successfully:
~ # zpool export wd20ears_zfs
~ # zpool import -f wd20ears_zfs
~ # zpool status
  pool: wd20ears_zfs
 state: ONLINE
  scan: scrub repaired 0 in 11h31m with 0 errors on Sun Jul  6 08:03:38 2014
config:

        NAME            STATE     READ WRITE CKSUM
        wd20ears_zfs    ONLINE       0     0     0
          raidz1-0      ONLINE       0     0     0
            sdf         ONLINE       0     0     0
            sde         ONLINE       0     0     0
            sdg         ONLINE       0     0     0
            sdd         ONLINE       0     0     0

errors: No known data errors
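One thing I notice is that the pool references bare sdX device names, which (as far as I understand) are not guaranteed to stay the same across reboots if the kernel enumerates the disks in a different order. I am wondering whether re-importing the pool using persistent device paths might help; a sketch of what I have in mind (assuming the disks appear under /dev/disk/by-id, which I have not yet verified on this box):

```shell
# Export the pool, then re-import it scanning the persistent by-id
# symlinks instead of the volatile sdX names.  After this, zpool status
# should list the disks by their by-id identifiers, which survive
# enumeration-order changes across reboots.
zpool export wd20ears_zfs
zpool import -d /dev/disk/by-id wd20ears_zfs
```

Does that sound like a plausible cause, or should I be looking elsewhere (udev timing, the order the zfs service starts relative to device discovery, etc.)?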
I have yet to raise this issue on the zfsonlinux.org issue tracker; I thought it best to start with the Funtoo forums.