funtoo forums


Showing results for tags 'zfs'.



Found 10 results

  1. Oleg Vinichenko

    ZFS-0.7.9 added

    The following ebuilds are now available for testing in Funtoo Linux: sys-fs/zfs-0.7.9, sys-fs/zfs-kmod-0.7.9 and sys-kernel/spl-0.7.9. These versions carry a portion of the upstream fixes and add support for newer kernels. The ebuilds were added without keywords, so they require manual entries in /etc/portage/package.keywords before updating. https://github.com/zfsonlinux/zfs/releases
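
    Since the ebuilds carry no keywords, entries along these lines are needed before the update (a sketch; the file layout under /etc/portage and the exact atoms may differ on your system):

    ```
    # /etc/portage/package.keywords (or a file under that directory)
    # '**' accepts ebuilds that carry no keywords at all
    =sys-fs/zfs-0.7.9 **
    =sys-fs/zfs-kmod-0.7.9 **
    =sys-kernel/spl-0.7.9 **
    ```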
  2. walterw

    ZFS fails to build [SOLVED]

    I have previously installed ZFS with no issues, but am currently getting:

        sed: can't read /var/tmp/portage/sys-kernel/spl-9999/work/spl-9999/scripts/check.sh: No such file or directory
         * ERROR: sys-kernel/spl-9999::core-kit failed (prepare phase):
         *   Cannot patch check.sh

    I see in the upstream zfsonlinux/spl.git repository that there is indeed no check.sh script; it appears I should be using the spl-0.7.9999 ebuild. However, that is not available in Portage in any of the kits (1.0, 1.1 or 1.2).
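
    A quick way to see which sys-kernel/spl versions the active kits actually provide (not from the post, just a sketch using gentoolkit) is:

    ```
    # list every sys-kernel/spl version present in the tree, installed or not
    # (requires app-portage/gentoolkit)
    equery list -p sys-kernel/spl
    ```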
  3. erikr

    ZFS Mountpoints

    Hi, I have been using btrfs on my rootfs for quite a while, and one of the ways I use it is that I always do upgrades in a snapshot of root and, when the upgrade is done, use that snapshot as the new root. In btrfs this is very simple: I have my root in /mnt/btrfs/root, I make a snapshot as /mnt/btrfs/root-upgrade, I chroot into it and perform the upgrade. When I am done I simply rename the directories: mv root root-fallback; mv root-upgrade root. Nothing changes on my running system, and since mounting with -o subvol=root applies to whichever subvolume is named root at the moment mount runs, I simply boot into the upgraded system. The question is: can I do something similar in ZFS? Is there a way to make ZFS use another dataset name or mountpoint on the next boot, but not directly? Regards, Erik
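
    A rough ZFS counterpart to this workflow does exist (a sketch, assuming a pool named rpool with the root dataset rpool/ROOT/funtoo; whether your initramfs honours the pool's bootfs property or needs its real_root= parameter updated instead is an assumption):

    ```
    # snapshot the running root and clone it into a working dataset
    # (mountpoint handling omitted for brevity)
    zfs snapshot rpool/ROOT/funtoo@pre-upgrade
    zfs clone rpool/ROOT/funtoo@pre-upgrade rpool/ROOT/funtoo-upgrade

    # ... chroot into the clone's mountpoint and perform the upgrade ...

    # make the clone independent of its origin and select it for the next boot
    zfs promote rpool/ROOT/funtoo-upgrade
    zpool set bootfs=rpool/ROOT/funtoo-upgrade rpool
    ```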
  4. Hello, if I'm not mistaken, Funtoo hosting is using ZFS. I use LVM all the time, but it seems that LXC/LXD users tend to favour ZFS. How stable is ZFS in production? Has anybody from the Funtoo community experienced any problems, or data loss, with ZFS? Does LVM have any real downsides compared to ZFS in real-world usage? Has anybody run into any? It would also be interesting to know which storage driver people choose for their LXC/LXD servers. Thank you, -- Saulius P.S. I know there is documentation at https://lxd.readthedocs.io/en/latest/storage/ but documentation is one thing, and users' experience is another... :)
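
    For context, pointing LXD at a ZFS backend is a one-liner (a sketch; the pool name "default" and the dataset "tank/lxd" are purely illustrative):

    ```
    # back an LXD storage pool with an existing ZFS dataset
    lxc storage create default zfs source=tank/lxd
    # or let LXD create and manage its own zpool
    lxc storage create default zfs
    ```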
  5. Hello! I have tried many times to get this to work based on the fine ZFS_Install_Guide but I consistently fail at this point (see screenshot). Both the kernel and initramfs are completely rebuilt using genkernel, but it can never find 'rpool' as the root device. Any guidance is appreciated!
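
    When the genkernel initramfs cannot find the pool, the kernel command line is the usual first thing to check; the relevant GRUB lines look roughly like this (a sketch: the file names and the rpool/ROOT/funtoo dataset are assumptions, not taken from the post):

    ```
    # illustrative GRUB entry lines for a genkernel initramfs built with --zfs
    linux  /kernel-genkernel-x86_64-<version> dozfs real_root=ZFS=rpool/ROOT/funtoo
    initrd /initramfs-genkernel-x86_64-<version>
    ```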
  6. NikosAlexandris

    CPU Soft Lockups of custom kernel and ZFS?

    I will search for answers elsewhere, as it is more of a kernel + ZFS issue than a Funtoo issue. [content deleted]
  7. I am trying to bring up a system with ZFS following https://www.funtoo.org/ZFS_Install_Guide and I'm having trouble booting at the completion of the steps in the Guide. When I run # grub-mkconfig -o /boot/grub/grub.cfg, none of the 'initramfs-genkernel-XXXX' images that were built with the --zfs flag are found. The Guide indicates these initramfs-genkernel images should be the ones listed in its example output. grub-mkconfig is instead finding the normal initramfs-debian-sources-XXXX and kernel files, and to my knowledge these were not built with the --zfs flag. Does anyone have any idea why this is happening and how to fix it? If I force a menu entry in /etc/grub.d/40_custom that uses initramfs-genkernel-XXXX, the system will boot until it reaches the point where it tries to import the pools. It gives the error: cannot import 'rpool': no such pool or dataset; destroy and re-create the pool... Yet if I go to the grub shell and do a zpool import, rpool is found. What is going on here? I think something is broken, since emerging zfs pulled in zfs-0.7.3 and a new grub. Has anyone gotten a ZFS install to work with the *current* Portage? I know it worked in the past; it doesn't seem to be working now.
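
    For reference, a hand-written entry in /etc/grub.d/40_custom for this kind of workaround looks roughly like the following (a sketch: how GRUB locates /boot, the file names and the rpool/ROOT/funtoo dataset are all assumptions):

    ```
    menuentry "Funtoo Linux (genkernel + ZFS)" {
        # adjust to however GRUB finds your /boot partition
        insmod part_gpt
        set root=(hd0,1)
        linux  /kernel-genkernel-x86_64-XXXX dozfs real_root=ZFS=rpool/ROOT/funtoo
        initrd /initramfs-genkernel-x86_64-XXXX
    }
    ```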
  8. Otakku

    Pure UEFI + Luks + ZFS ?

    Hello! First, I would like to thank you for a forum that has the participation of the creator of the best Linux distro ever made; it is an honour. I have been using Linux for just over two years, but a few months ago I decided to abandon the simpler distros such as Ubuntu and Debian and migrate to Arch Linux. I found it too easy, so I moved on to Gentoo and then Funtoo, and I plan to stay until I create my own. Finally, my question: I would like to use pure UEFI + LUKS + ZFS, and I wonder whether this is possible, or do I have to install GRUB?
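
    Booting without GRUB generally means registering an EFI-stub kernel directly with the firmware, for example with efibootmgr (a sketch: the disk, the ESP partition number, the file names and the genkernel-style dozfs/crypt_root/real_root parameters are all assumptions):

    ```
    # register an EFI-stub kernel with the firmware; no boot loader involved
    efibootmgr --create --disk /dev/sda --part 1 \
        --label "Funtoo ZFS" \
        --loader '\EFI\funtoo\kernel.efi' \
        --unicode 'dozfs crypt_root=/dev/sda2 real_root=ZFS=rpool/ROOT/funtoo initrd=\EFI\funtoo\initramfs'
    ```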
  9. NikosAlexandris

    Installing Funtoo on ZFS

    Trying to follow the installation process as described in <http://www.funtoo.org/ZFS_Install_Guide>. Only yesterday, however, I discovered the tutorial at <http://www.funtoo.org/Install_ZFS_root%26boot_File_System>. After reading the notes therein, I think I now understand the problem I am stuck on (read below), but I don't know how to work around it.

    I use(d) sysresccd-4.7.0_zfs_0.6.5.4. After staging and installing Portage (both funtoo-stable), I get to the point of confirming that the ZFS pool can be read, and I get the following error:

    ```
    The device /dev/zfs is missing and must be created.
    Try running 'udevadm trigger' as root to create it.
    ```

    The /dev/zfs device exists, however, and outside of the chroot the pool and the datasets are reported correctly. I guess this all boils down to "5. Upgrading to zfs version >= 0.6.5.4" as per the notes in <http://www.funtoo.org/Install_ZFS_root%26boot_File_System>. I did install zfs and zfs-kmod version 0.6.5.4-r1 (the one installed when requesting emerge sys-fs/zfs-0.6.5.4 at the moment), and still the zpool utility does not work.

    Since the initramfs/kernel in sysresccd-4.7.0_zfs_0.6.5.4 is pre zfs-0.6.5.3 and does not understand zfs-0.6.5.4 (!?), how would updating the kernel and initramfs with genkernel, e.g.

    ```
    # genkernel all --zfs --no-clean --kerneldir=/usr/src/linux --kernel-config=/usr/src/<path_to_config> --callback="emerge -1 spl zfs-kmod zfs"
    ```

    solve the issue, if I can't boot this kernel? I opted for a dedicated boot partition (/dev/sda1) rather than installing everything on ZFS. Still, I can't make any use of GRUB, i.e. install it to /dev/sda. The command grub-install responds with:

    ```
    Installing for x86_64-efi platform.
    grub-install: error: cannot find EFI directory.
    ```

    Any pointers?
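
    On the grub-install error specifically: the x86_64-efi target needs to be pointed at a mounted EFI System Partition, while a legacy BIOS install targets the disk itself (a sketch, assuming /dev/sda1 is a FAT-formatted ESP mounted at /boot/efi):

    ```
    # UEFI: install GRUB into a mounted EFI System Partition
    mount /dev/sda1 /boot/efi
    grub-install --target=x86_64-efi --efi-directory=/boot/efi

    # legacy BIOS boot instead: install to the MBR of the disk
    grub-install --target=i386-pc /dev/sda
    ```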
  10. I have an ongoing problem with my RAID-Z1 pool. During some (not all) reboots my system fails to reimport the zpool. While it seems to me to be a 'random' issue, there is undoubtedly a reason behind it, and I would really appreciate advice on where I should begin to look for a potential cause. My setup is an x86_64 Funtoo Current build. sys-kernel/spl, sys-fs/kmod-zfs and sys-fs/zfs are at version 0.6.3, installed together with the sys-kernel/ck-sources-3.14.4 kernel. The zfs service is started during the default runlevel. When I log into the desktop I find that my zpool wd20ears_zfs has failed with the following:

        ~ # zpool status
          pool: wd20ears_zfs
         state: UNAVAIL
        status: One or more devices could not be used because the label is
                missing or invalid. There are insufficient replicas for the
                pool to continue functioning.
        action: Destroy and re-create the pool from a backup source.
           see: http://zfsonlinux.org/msg/ZFS-8000-5E
          scan: none requested
        config:

                NAME              STATE     READ WRITE CKSUM
                wd20ears_zfs      UNAVAIL      0     0     0  insufficient replicas
                  raidz1-0        UNAVAIL      0     0     0  insufficient replicas
                    sdf           FAULTED      0     0     0  corrupted data
                    sde           FAULTED      0     0     0  corrupted data
                    sdg           UNAVAIL      0     0     0
                    sdd           FAULTED      0     0     0  corrupted data

    If I export the pool and then reimport it with the 'force' switch it mounts successfully:

        ~ # zpool export wd20ears_zfs
        ~ # zpool import wd20ears_zfs -f
        ~ # zpool status
          pool: wd20ears_zfs
         state: ONLINE
          scan: scrub repaired 0 in 11h31m with 0 errors on Sun Jul 6 08:03:38 2014
        config:

                NAME              STATE     READ WRITE CKSUM
                wd20ears_zfs      ONLINE       0     0     0
                  raidz1-0        ONLINE       0     0     0
                    sdf           ONLINE       0     0     0
                    sde           ONLINE       0     0     0
                    sdg           ONLINE       0     0     0
                    sdd           ONLINE       0     0     0

        errors: No known data errors

    I have yet to raise this issue on the zfsonlinux.org issue tracker; I thought it best to start with the Funtoo forums.
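
    A common first check for pools that import fine by hand but fail at boot is whether the vdevs are addressed by stable device paths rather than sdX names, which can reorder between reboots (not from the post; a sketch using the usual by-id directory):

    ```
    # re-import using persistent device names so sdX reordering
    # between reboots cannot confuse the pool
    zpool export wd20ears_zfs
    zpool import -d /dev/disk/by-id wd20ears_zfs
    ```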