
ZFS import during reboot fails intermittently


Tassie_Tux

Question

I have an ongoing problem with my RAID-Z1 pool. During some (not all) reboots my system fails to reimport the zpool. While it seems to me to be a 'random' issue, there is undoubtedly a reason behind it.

 

I would really appreciate advice on where I should begin to look for a potential cause.
 
My setup is an x86_64 Funtoo Current build. sys-kernel/spl, sys-fs/kmod-zfs and sys-fs/zfs are all version 0.6.3, installed together with the sys-kernel/ck-sources-3.14.4 kernel. The zfs service is started in the default runlevel.
 
When I log into my desktop I find that my zpool wd20ears_zfs has failed with the following:

~ # zpool status
  pool: wd20ears_zfs
 state: UNAVAIL
status: One or more devices could not be used because the label is missing
        or invalid.  There are insufficient replicas for the pool to continue
        functioning.
action: Destroy and re-create the pool from a backup source.
   see: http://zfsonlinux.org/msg/ZFS-8000-5E
  scan: none requested
config:


        NAME          STATE     READ WRITE CKSUM
        wd20ears_zfs  UNAVAIL      0     0     0  insufficient replicas
          raidz1-0    UNAVAIL      0     0     0  insufficient replicas
            sdf       FAULTED      0     0     0  corrupted data
            sde       FAULTED      0     0     0  corrupted data
            sdg       UNAVAIL      0     0     0
            sdd       FAULTED      0     0     0  corrupted data

If I export the pool and then reimport it with the 'force' switch it mounts successfully:
 

~ # zpool export wd20ears_zfs
~ # zpool import wd20ears_zfs -f
~ # zpool status 
  pool: wd20ears_zfs
 state: ONLINE
  scan: scrub repaired 0 in 11h31m with 0 errors on Sun Jul  6 08:03:38 2014
config:


        NAME        STATE     READ WRITE CKSUM
        wd20ears_zfs  ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdf     ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdg     ONLINE       0     0     0
            sdd     ONLINE       0     0     0


errors: No known data errors

 
I am yet to raise this issue on the zfsonlinux.org Issue Tracker. I thought it best to start with the Funtoo forums. 


12 answers to this question


Solved. (Yes... it was my fault)

 

Whenever I had to manually export and import my pool I did so with the command

zpool import $POOLNAME

It turns out that this simply uses actual device names (/dev/sdX) as indicated by my earlier zpool status outputs. The device names are apparently retained within the zpool.cache file and so will be used for the zfs reimport/mount. If udev assigns those specific device names to other devices then the import will of course fail. The advice from the zfsonlinux crowd was to export my pool and then reimport it with the command

zpool import -d /dev/disk/by-id $POOLNAME

so that the zpool.cache is set to use the /dev/disk/by-id links instead of the direct device names. I have rebooted with a USB drive plugged in to force udev to assign different device names, and sure enough the pool continues to import correctly. The command zpool status now gives

  pool: wd20ears_zfs
 state: ONLINE
  scan: scrub repaired 0 in 11h33m with 0 errors on Fri Aug 15 11:45:16 2014
config:


        NAME                                          STATE     READ WRITE CKSUM
        wd20ears_zfs                                  ONLINE       0     0     0
          raidz1-0                                    ONLINE       0     0     0
            ata-WDC_WD20EARS-00MVWB0_WD-WMAZA1268718  ONLINE       0     0     0
            ata-WDC_WD20EARS-60MVWB0_WD-WCAZA4138722  ONLINE       0     0     0
            ata-WDC_WD20EARS-60MVWB0_WD-WCAZA4157214  ONLINE       0     0     0
            ata-WDC_WD20EARS-60MVWB0_WD-WCAZA4174978  ONLINE       0     0     0


errors: No known data errors

This is probably not applicable to those users with root (/) on ZFS as I expect that there is no zpool.cache involved.
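
For anyone else hitting this, the whole switch-over can be sketched as follows. This assumes the default /etc/zfs/zpool.cache location; the zdb -C step is just one way to confirm which device paths ended up in the cached pool configuration.

~ # zpool export wd20ears_zfs
~ # zpool import -d /dev/disk/by-id wd20ears_zfs
~ # zdb -C wd20ears_zfs | grep path    # cached vdev paths should now be the by-id links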


Sorry I have been away from my Funtoo install; have not yet made the changes. Hopefully this evening. :)

 

Moving the zfs init script to the boot runlevel will be easy.
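
In case it is useful to anyone else, that part should just be a matter of (sketch):

~ # rc-update del zfs default
~ # rc-update add zfs boot
~ # rc-update show | grep zfs    # confirm it now sits in the boot runlevel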

 

Regarding the modules, I am wondering if simply listing them in /etc/conf.d/modules will be sufficient. I do have a custom initramfs, which is for decrypting and mounting my dmcrypt/LUKS root (/) partition. If I were to load spl and zfs within the initramfs then it would have to be right at the end, as I would not want the pool to mount before root (/). I have always been of the view that the modules init script runs during the boot runlevel (I cannot check this right now). So instead of changing my initramfs I am tempted to just let OpenRC do the work once the initramfs step has completed.
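
For reference, what I have in mind is nothing more than the usual OpenRC module list (a sketch; I have not yet tested whether the order of spl and zfs matters here):

# /etc/conf.d/modules
modules="spl zfs"

The modules init script then loads whatever is listed there when its runlevel is reached.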


Your spl and zfs modules need to be in the initramfs, or compiled into the kernel, if I am not mistaken, so the zpool cache is initialized at boot.

From what I understand, the kernel / initramfs starts the system, starts the kernel init, then the kernel init hands off to OpenRC to mount the root filesystem, configure devices, and start services. If the drivers are not available to OpenRC at initialization, then the dog ate your lunch: OpenRC just keeps going, or fails outright if root is on ZFS.

I think if you need dmcrypt for your zpools, edit the init.d zfs startup script and add dmcrypt to its required services; that way, every time zfs starts, it either checks that dmcrypt has already run or starts it first.
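
Something along these lines is what I mean, in the depend() block of /etc/init.d/zfs (a sketch only; the stock script already has its own depend() entries, so you would just add the extra need line):

depend() {
        # make OpenRC start (or verify) dmcrypt before the zfs service
        need dmcrypt
}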

 

Hope this helps; I am exploring the depths of the whole boot process myself, trying to build my dream system.


After several restarts this problem has unfortunately reappeared. This is after listing zfs and spl in /etc/conf.d/modules and moving the zfs service to the boot runlevel. I am going to see what I can find in the log files...
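
The places I am planning to look are roughly these (the rc.log part assumes rc_logger="YES" is set in /etc/rc.conf):

~ # dmesg | grep -iE 'zfs|spl'
~ # grep -i zfs /var/log/rc.log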

 

By the way, in this particular case the zpool is not root (/) and not encrypted. So I do not mind it being mounted during boot or during the default stage.  ^_^

 

 

 

Hope this helps; I am exploring the depths of the whole boot process myself, trying to build my dream system.

I know what you mean about 'dream system'. ;) Too bad that problems arise along the way!


yeah, still trying to get a funtoo / musl stage3 built....

 

Anyway, not to hijack your thread. 

 

You're using WD20EARS drives...

Did you ever fix the head parking? WD Green drives park their heads a lot; that's how they save power, and it makes them very bad for RAID or ZFS arrays.

There is a fix for the issue, but the damage may already be done.
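
The usual fix is to disable, or at least lengthen, the drive's idle3 head-parking timer, for example with idle3-tools (a sketch; substitute the right device, and the change only takes effect after the drive has been power cycled):

~ # idle3ctl -g /dev/sdX    # show the current idle3 (head parking) timer
~ # idle3ctl -d /dev/sdX    # disable the timer entirely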

Have you checked the SMART data for the drives?
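
Something like this should show the relevant attribute, assuming smartmontools is installed (Load_Cycle_Count is SMART attribute 193):

~ # smartctl -A /dev/sdX | grep -i load_cycle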

I have four of these that are basically dead from that issue, not in a ZFS array but in a Windows software RAID5.

This was the whole reason I went to ZFS, and I have avoided WD drives since.

I don't like the artificial market segmentation they are pushing; for me it is Seagate Enterprise SAS or Hitachi SATA now.

No I did not know about the head parking issue! Interesting.

 

I have one WD20EARS-00MVWB0 which is now over 4 years old. Looking at the SMART data, the Load Cycle Count raw value is 295763.

 

The other three drives are each WD20EARS-60MVWB0 (4K sector version) and over 3 years old. Each drive indicates a Load Cycle Count of 7698.

 

These drives were previously in a RAID5 (Intel ICH10R) under Windows 7. Having been a victim of data corruption there was a major motivator for me to move them to ZFS storage under Funtoo. Apart from this thread topic, and perhaps questionable performance (read: probably my kernel settings), ZFS has been a good move.

 

I chose WD Greens because of their affordability. Also, I know of, and have had, issues with drives from other manufacturers.


I have gained some insight into this issue.

 

When I created my raidz1 pool I did so using the /dev/disk/by-id/ method as outlined at http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool instead of using /dev/sda, /dev/sdb and so on (see the sketch of such a create command after the status output below). The four drives participating in my pool (in order) are

lrwxrwxrwx 1 root root  9 Aug 20  2014 /dev/disk/by-id/ata-WDC_WD20EARS-00MVWB0_WD-WMAZA1268718 -> ../../sde
lrwxrwxrwx 1 root root  9 Aug 20  2014 /dev/disk/by-id/ata-WDC_WD20EARS-60MVWB0_WD-WCAZA4138722 -> ../../sdd
lrwxrwxrwx 1 root root  9 Aug 20  2014 /dev/disk/by-id/ata-WDC_WD20EARS-60MVWB0_WD-WCAZA4157214 -> ../../sdf
lrwxrwxrwx 1 root root  9 Aug 20  2014 /dev/disk/by-id/ata-WDC_WD20EARS-60MVWB0_WD-WCAZA4174978 -> ../../sdc

and when there are no issues with re-mounting during boot I get

  pool: wd20ears_zfs
 state: ONLINE
  scan: scrub repaired 0 in 11h33m with 0 errors on Fri Aug 15 11:45:16 2014
config:

        NAME        STATE     READ WRITE CKSUM
        wd20ears_zfs  ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
            sdc     ONLINE       0     0     0

errors: No known data errors
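
For context, creating a pool that way looks roughly like this; a reconstruction rather than the exact command I ran at the time:

~ # zpool create wd20ears_zfs raidz1 \
        /dev/disk/by-id/ata-WDC_WD20EARS-00MVWB0_WD-WMAZA1268718 \
        /dev/disk/by-id/ata-WDC_WD20EARS-60MVWB0_WD-WCAZA4138722 \
        /dev/disk/by-id/ata-WDC_WD20EARS-60MVWB0_WD-WCAZA4157214 \
        /dev/disk/by-id/ata-WDC_WD20EARS-60MVWB0_WD-WCAZA4174978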

Tonight I had the same error as described in my first post, with the zpool not re-mounting. However, one thing that I did differently tonight was to leave a USB flash drive plugged in before Funtoo Linux booted up. This made me think about persistent device names. Sure enough, my flash drive was allocated /dev/sda and all of the remaining /dev/sdX devices were shifted along by one letter. The four /dev/disk/by-id/ links participating in my zpool went from pointing at sde sdd sdf sdc to sdf sde sdg sdd.

 

Removing the flash drive and rebooting returned the four /dev/disk/by-id/ links to pointing at sde sdd sdf sdc. Now, I have not had a flash drive plugged in every time that this issue has arisen, but I suspect that the overall problem is eudev periodically assigning different device names to my drives and thereby breaking the zpool re-import/mount.

 

Do others agree that this is probably the cause of the issue?

 

What I am wondering now is what I can do to resolve this. http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool led me to believe that /dev/disk/by-id/ is persistent across reboots; however, this is clearly not the case. Is the zfsonlinux advice incorrect, or does Funtoo Linux (eudev) somehow do things differently compared to other distributions?

 

Does this mean that I need to create eudev rules so that the device names do not change?


I have not yet posted my issue to the zfsonlinux bug tracker, but I do realise now that my earlier interpretation (shared in my last post) is wrong.

 

I realise now that Funtoo Linux (eudev) is doing the right thing by changing my /dev/disk/by-id/ links to point to the correct /dev/sdX device. If I plug in an extra drive I do want /dev/disk/by-id/ to still point to that specific device regardless of whether it is assigned /dev/sda, /dev/sdb, and so on!

 

So perhaps what the issue boils down to is that, despite the zpool being created with /dev/disk/by-id/ references, zfs is having a problem when the /dev/sdX devices those links point to occasionally change.

 

Just had the same issue..... looking into eudev....

Good luck with that one, Chris. Do you have root (/) on ZFS?
