Search the Community

Showing results for tags 'zfs'.

Found 10 results

  1. The following ebuilds are now available for testing in Funtoo Linux: sys-fs/zfs-0.7.9, sys-fs/zfs-kmod-0.7.9 and sys-kernel/spl-0.7.9. These versions carry a portion of the upstream fixes and add support for newer kernels. The ebuilds were added without keywords, so they require a manual entry in /etc/portage/package.keywords before updating (see the keywording sketch after these results). https://github.com/zfsonlinux/zfs/releases
  2. I have previously installed ZFS with no issues, but I am currently getting: sed: can't read /var/tmp/portage/sys-kernel/spl-9999/work/spl-9999/scripts/check.sh: No such file or directory * ERROR: sys-kernel/spl-9999::core-kit failed (prepare phase): * Cannot patch check.sh. I see in the upstream zfsonlinux/spl.git repository that there is indeed no check.sh script; it appears I should be using the spl-0.7.9999 ebuild. However, that is not available in Portage in any of the kits (1.0, 1.1, or 1.2). (A sketch for checking which spl ebuilds a kit ships follows these results.)
  3. erikr

    ZFS Mountpoints

    Hi, I have been using btrfs on my root filesystem for quite a while, and one of the ways I use it is that I always do upgrades in a snapshot of root; when the upgrade is done, I use that snapshot as the new root. In btrfs this is very simple. I have my root in /mnt/btrfs/root, I make the snapshot as /mnt/btrfs/root-upgrade, I chroot into it and perform the upgrade. When I am done I simply rename the directories: mv root root-fallback; mv root-upgrade root. Nothing changes on my system, and a mount with -o subvol=root applies to the subvolume named root at the specific point in time wh… (A sketch of this workflow, and a rough ZFS analogue, follows these results.)
  4. Hello, if I'm not mistaken, Funtoo hosting is using ZFS. I use LVM all the time, but it seems that users of LXC/LXD favour ZFS more. How stable is ZFS in production? Has anybody from the Funtoo community experienced any problems or data loss with ZFS? Does LVM have any real downsides compared to ZFS in real usage? It would also be interesting to know which storage driver people choose for their LXC/LXD servers (a small LXD storage sketch follows these results). Thank you, -- Saulius p.s. I know there is documentation at https://lxd.readthedocs.io/en/latest/storage/
  5. Hello! I have tried many times to get this to work based on the fine ZFS_Install_Guide, but I consistently fail at this point (see screenshot). Both the kernel and the initramfs are completely rebuilt using genkernel, but it can never find 'rpool' as the root device. Any guidance is appreciated! (Some things to check are sketched after these results.)
  6. Will search for answers elsewhere, as it is more of a kernel + ZFS issue rather than a Funtoo issue. [content deleted]
  7. I am trying to bring up a system with ZFS following https://www.funtoo.org/ZFS_Install_Guide and I'm having trouble booting at the completion of the steps in the guide. # grub-mkconfig -o /boot/grub/grub.cfg None of the 'initramfs-genkernel-XXXX' images that were built with the --zfs flag are being found, although the guide indicates these are the ones that should appear in its example output. grub-mkconfig is finding the normal initramfs-debian-sources-XXXX and kernel files, and these were not built with the --zfs flag, to my knowledge. Anyone have any idea why this is happ… (A rebuild/verification sketch follows these results.)
  8. Hello! First I would like to thank you for the forum; having the participation of the creator of the best Linux distro here is an honour. I have used Linux for just over two years, but a few months ago I decided to abandon the simpler distros such as Ubuntu and Debian and migrate to Arch Linux. I found it very easy, so I moved on to Gentoo and then Funtoo, and I plan on not leaving until I create my own. Finally, I would like to use pure UEFI + LUKS + ZFS, and I wonder if this is possible or whether I have to install GRUB?
  9. I am trying to follow the installation process described in http://www.funtoo.org/ZFS_Install_Guide. Only yesterday, however, I discovered the tutorial at http://www.funtoo.org/Install_ZFS_root%26boot_File_System. After reading the notes therein, I think I now understand the problem I am stuck on (read below), but I don't know how to work around it. I use(d) sysresccd-4.7.0_zfs_0.6.5.4. After staging and installing Portage (both funtoo-stable), I am at the point of confirming that the ZFS pool can be read. I get the following error: The device /dev/zfs is missi… (A sketch for checking the module and device node follows these results.)
  10. I have an ongoing problem with my RAID-Z1 pool. During some (not all) reboots my system fails to reimport the zpool. While it seems to me to be a 'random' issue, there is undoubtedly a reason behind it. I would really appreciate advice on where I should begin to look for a potential cause. My setup is an x86_64 Funtoo Current build. sys-kernel/spl, sys-fs/kmod-zfs and sys-fs/zfs are at version 0.6.3, installed together with the sys-kernel/ck-sources-3.14.4 kernel. The zfs service is started during the default runlevel. When I log into the desktop I find that my zpool wd20ears_zfs has failed w… (Some starting points are sketched after these results.)
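
Sketch for result 1 — a minimal /etc/portage/package.keywords entry set, assuming the standard Portage keywording syntax; since these ebuilds were added without any keywords, the "**" token (accept even keywordless ebuilds) is what unmasks them:

```
# /etc/portage/package.keywords (or a file under package.keywords/)
# "**" accepts ebuilds that carry no KEYWORDS at all
=sys-fs/zfs-0.7.9 **
=sys-fs/zfs-kmod-0.7.9 **
=sys-kernel/spl-0.7.9 **
```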
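
Sketch for result 2 — a hedged way to see which sys-kernel/spl ebuilds a kit actually ships before abandoning the live 9999 ebuild; the meta-repo path below is the usual default and may differ on your system:

```
# List the spl ebuilds present in the checked-out core-kit (default meta-repo path assumed)
ls /var/git/meta-repo/kits/core-kit/sys-kernel/spl/*.ebuild

# Or do a broad Portage name search to see what it can resolve
emerge --search spl
```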
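
Sketch for result 3 — the btrfs workflow as described, followed by a rough ZFS analogue using a clone that is later promoted; the ZFS dataset names (rpool/ROOT/funtoo and friends) are illustrative assumptions, not taken from the post, and the mountpoint swap is normally done from a rescue environment:

```
# btrfs: upgrade inside a snapshot, then swap the directories
btrfs subvolume snapshot /mnt/btrfs/root /mnt/btrfs/root-upgrade
chroot /mnt/btrfs/root-upgrade emerge -auDN @world   # bind mounts for /proc, /sys, /dev omitted
cd /mnt/btrfs
mv root root-fallback
mv root-upgrade root        # -o subvol=root now points at the upgraded tree

# ZFS analogue: snapshot, clone, upgrade inside the clone, then promote the clone
zfs snapshot rpool/ROOT/funtoo@pre-upgrade
zfs clone -o mountpoint=/mnt/root-upgrade rpool/ROOT/funtoo@pre-upgrade rpool/ROOT/funtoo-new
# ... chroot into /mnt/root-upgrade and upgrade, as above ...
zfs promote rpool/ROOT/funtoo-new                 # make the clone independent of the old root
zpool set bootfs=rpool/ROOT/funtoo-new rpool      # if the bootloader selects root via bootfs
zfs set mountpoint=/mnt/root-fallback rpool/ROOT/funtoo
zfs set mountpoint=/ rpool/ROOT/funtoo-new
```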
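
Sketch for result 4 — LXD lets each storage pool use its own backend, so ZFS and LVM can be compared side by side on the same host; the pool names, source dataset and image alias below are illustrative:

```
# One pool backed by ZFS (on an existing dataset), one backed by LVM
lxc storage create zpool1 zfs source=tank/lxd
lxc storage create lvpool lvm
# Launch a container on each and compare behaviour
lxc launch images:alpine/edge c1 -s zpool1
lxc launch images:alpine/edge c2 -s lvpool
```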
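
Sketch for result 5 — when a genkernel initramfs cannot find the pool, the usual suspects are the kernel command line and a stale zpool.cache baked into the initramfs. This assumes genkernel-style dozfs/real_root parameters; the dataset path is a placeholder, only the pool name rpool comes from the post:

```
# Expected kernel parameters (in grub.cfg or /etc/boot.conf), roughly:
#   dozfs real_root=ZFS=rpool/ROOT/funtoo
# From the initramfs rescue shell, check whether the pool is visible at all:
zpool import                      # lists pools that can be imported
zpool import -N -f rpool          # force-import without mounting
# After a successful manual import, refresh the cachefile and rebuild the initramfs:
zpool set cachefile=/etc/zfs/zpool.cache rpool
genkernel --zfs initramfs
```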
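
Sketch for result 7 — grub-mkconfig only reports what already exists in /boot, so the first thing to confirm is that a ZFS-enabled initramfs was actually built and installed there; version strings are placeholders:

```
# Rebuild kernel and initramfs with ZFS support
genkernel all --zfs
# Confirm the image landed in /boot before regenerating the GRUB config
ls /boot/initramfs-genkernel-*
grub-mkconfig -o /boot/grub/grub.cfg
```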
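
Sketch for result 9 — /dev/zfs only appears once the zfs kernel module is loaded, so in the live/rescue environment it is worth loading it by hand before touching the pool; the pool name and altroot below are placeholders:

```
# Load the module and verify the control node exists
modprobe zfs
ls -l /dev/zfs
# If the node is present, the pool should be importable under an altroot
zpool import -R /mnt/funtoo rpool
```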
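
Sketch for result 10 — some hedged starting points for an intermittent import failure at boot: check which ZFS services actually run, re-import the pool using persistent device names, and regenerate the cachefile the boot-time import relies on (the by-id step is general advice, not something from the post):

```
# Which ZFS-related services are in which runlevel?
rc-update show | grep -i zfs
# Re-import using stable device names instead of /dev/sdX, then refresh the cachefile
zpool export wd20ears_zfs
zpool import -d /dev/disk/by-id wd20ears_zfs
zpool set cachefile=/etc/zfs/zpool.cache wd20ears_zfs
zpool status wd20ears_zfs
```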