Posts posted by Tassie_Tux
-
edit: i found the error, when i do boot-update i see this: error bzimage not found
When you get the boot-update error, can you see your kernel bzImage within /boot? In other words, was your /boot partition/zpool mounted before you installed your kernel image and before you ran boot-update? This sounds like a silly thing to check, however I have personally been caught out several times with boot-update/grub issues, only to realise that I had forgotten to mount /boot before copying the kernel image and/or running boot-update!
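For reference, the quick check I usually do is along these lines (a rough sketch only, assuming /boot has its own entry in /etc/fstab and that your kernel image was copied there as bzImage):
~ # mount /boot
~ # ls -l /boot
~ # boot-update
If the kernel image only shows up in the listing after mounting, then that was the problem.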
-
You nailed it:
Of course Poettering fails to attribute any blame to his own history of being rude, obnoxious, and dismissive of anyone that disagrees with him.
Poettering believes that his work should be above criticism. He feels like the adoption of systemd is proof of that when it's largely the result of funding and political machinations.
... yes, and if we are critical or disagree we are apparently 'haters'. :wacko:
I did not 'hate' Lennart before, however reading his G+ whine encourages me to start! :P
I am reminded of a child throwing a tantrum because they did not get their own way. Declaring that he has no intention of ever talking about this again on a public forum is very arrogant: no right of reply, no debate. I actually find this part the worst.
Obviously I cannot speak of the challenges that our Funtoo developers have previously faced or currently face. What I can say, as a Funtoo user, is that the community Lennart is describing is not the Funtoo Linux that I have come to know. This is not a 'fortunate' thing. It is instead a reflection of the good people who are together making Funtoo Linux great. :D
-
The web site looks really good on Chrome for Android. :) I struggle to think of a better looking "mobile friendly" page.
-
Chrome/Chromium is phasing out support for the Netscape Plugin API (NPAPI), and the Java plugin is one of those affected. I believe it started with version 36, and it has been in the works for over a year. If you still need a web browser with Java support, try another browser such as Firefox.
http://blog.chromium.org/2014/05/update-on-npapi-deprecation.html
I just tried the Java verification site https://www.java.com/verify/ and the applet does not work for me with Chromium 38.0.2125.24. It does work with Firefox 31.0 and oracle-jdk-bin 1.7.0.67.
-
In Chromium have a look at chrome://settings/content. Is "Allow all sites to run JavaScript (recommended)" enabled?
-
Solved. (Yes... it was my fault)
Whenever I had to manually export and import my pool I did so with the command
zpool import $POOLNAME
It turns out that this simply uses actual device names (/dev/sdX) as indicated by my earlier zpool status outputs. The device names are apparently retained within the zpool.cache file and so will be used for the zfs reimport/mount. If udev assigns those specific device names to other devices then the import will of course fail. The advice from the zfsonlinux crowd was to export my pool and then reimport it with the command
zpool import -d /dev/disk/by-id $POOLNAME
so that the zpool.cache is set to use the /dev/disk/by-id links instead of the direct device names. I have rebooted with a USB drive plugged in to force udev into assigning different device names, and sure enough the pool continues to import correctly. The command zpool status now gives
  pool: wd20ears_zfs
 state: ONLINE
  scan: scrub repaired 0 in 11h33m with 0 errors on Fri Aug 15 11:45:16 2014
config:

        NAME                                          STATE     READ WRITE CKSUM
        wd20ears_zfs                                  ONLINE       0     0     0
          raidz1-0                                    ONLINE       0     0     0
            ata-WDC_WD20EARS-00MVWB0_WD-WMAZA1268718  ONLINE       0     0     0
            ata-WDC_WD20EARS-60MVWB0_WD-WCAZA4138722  ONLINE       0     0     0
            ata-WDC_WD20EARS-60MVWB0_WD-WCAZA4157214  ONLINE       0     0     0
            ata-WDC_WD20EARS-60MVWB0_WD-WCAZA4174978  ONLINE       0     0     0

errors: No known data errors
This is probably not applicable to those users with root (/) on ZFS as I expect that there is no zpool.cache involved.
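For completeness, the full sequence I ran was simply (with my pool name; substitute your own):
~ # zpool export wd20ears_zfs
~ # zpool import -d /dev/disk/by-id wd20ears_zfs
~ # zpool status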
-
I have not yet posted my issue to the zfsonlinux buglist but I do realise now that my earlier interpretation (shared in my last post) is wrong.
I realise now that Funtoo Linux (eudev) is doing the right thing by changing my /dev/disk/by-id/ links to point to the correct /dev/sdX device. If I plug in an extra drive I do want /dev/disk/by-id/ to still point to that specific device regardless of whether it is assigned /dev/sda, /dev/sdb, and so on!
So perhaps what the issue boils down to is that despite creating the zpool using /dev/disk/by-id/ references, zfs is having a problem with those links occasionally changing.
Just had the same issue... looking into eudev...
Good luck with that one Chris. Do you have root ( / ) on zfs?
-
I have gained some insight into this issue.
When I created my raidz1 pool I did so using the /dev/disk/by-id/ method as outlined at http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool instead of using /dev/sda, /dev/sdb, ... etc. The four drives participating in my pool (in order) are
lrwxrwxrwx 1 root root 9 Aug 20 2014 /dev/disk/by-id/ata-WDC_WD20EARS-00MVWB0_WD-WMAZA1268718 -> ../../sde
lrwxrwxrwx 1 root root 9 Aug 20 2014 /dev/disk/by-id/ata-WDC_WD20EARS-60MVWB0_WD-WCAZA4138722 -> ../../sdd
lrwxrwxrwx 1 root root 9 Aug 20 2014 /dev/disk/by-id/ata-WDC_WD20EARS-60MVWB0_WD-WCAZA4157214 -> ../../sdf
lrwxrwxrwx 1 root root 9 Aug 20 2014 /dev/disk/by-id/ata-WDC_WD20EARS-60MVWB0_WD-WCAZA4174978 -> ../../sdc
and when there are no issues with re-mounting during boot I get
  pool: wd20ears_zfs
 state: ONLINE
  scan: scrub repaired 0 in 11h33m with 0 errors on Fri Aug 15 11:45:16 2014
config:

        NAME            STATE     READ WRITE CKSUM
        wd20ears_zfs    ONLINE       0     0     0
          raidz1-0      ONLINE       0     0     0
            sde         ONLINE       0     0     0
            sdd         ONLINE       0     0     0
            sdf         ONLINE       0     0     0
            sdc         ONLINE       0     0     0

errors: No known data errors
Tonight I had the same error as described in my first post, with the zpool not re-mounting. However, one thing that I did differently tonight was leave a USB flash drive plugged in before Funtoo Linux booted up. This made me think about persistent device names. Sure enough, my flash drive was allocated /dev/sda and all of the remaining /dev/sdX devices were shifted along by one letter. The four /dev/disk/by-id/ values participating in my zpool went from being sde sdd sdf sdc to sdf sde sdg sdd.
Removing the flash drive and rebooting returned the four /dev/disk/by-id/ values back to sde sdd sdf sdc. Now, I have not had a flash drive plugged in every time that this issue has arisen. I suspect that the overall problem is eudev periodically assigning different device names to my drives and thereby breaking the zpool re-import/mount.
Do others agree that this is probably the cause of the issue?
What I am wondering now is what I can do to resolve this. http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool led me to believe that /dev/disk/by-id/ is persistent across reboots, however this is clearly not the case here. Is the zfsonlinux advice incorrect, or does Funtoo Linux (eudev) somehow do things differently compared to other distributions?
Does this mean that I need to create eudev rules so that the device names do not change?
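In the meantime, a quick way to see at a glance which /dev/sdX name each drive currently has (the WD20EARS pattern simply matches my drive model; adjust for your own):
~ # ls -l /dev/disk/by-id/ | grep WD20EARS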
-
No I did not know about the head parking issue! Interesting.
I have one WD20EARS-00MVWB0 which is now over 4 years old. Looking at the SMART data, its Load Cycle Count raw value is 295763.
The other three drives are each WD20EARS-60MVWB0 (4K sector version) and over 3 years old. Each drive indicates a Load Cycle Count of 7698.
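For anyone wanting to check their own drives, the raw value can be read with smartctl (a sketch, assuming sys-apps/smartmontools is installed; substitute your actual device for sdX):
~ # smartctl -A /dev/sdX | grep Load_Cycle_Count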
These drives were previously in a RAID5 array (Intel ICH10R) under Windows 7. Having been a victim of data corruption there was a major motivator for me to move them to ZFS storage under Funtoo. Apart from this thread topic, and perhaps questionable performance (read: probably my kernel settings), ZFS has been a good move.
I chose WD Greens because of their affordability. I also know of, and have personally had, issues with drives from other manufacturers.
-
After several restarts this problem has unfortunately reappeared. This is after adding zfs and spl to /etc/conf.d/modules and moving the zfs service to the boot runlevel. I am going to see what I can find in the log files...
By the way, in this particular case the zpool is not root (/) and not encrypted. So I do not mind it being mounted during boot or during the default stage. ^_^
hope this helps, exploring the depths of the whole boot processes myself, trying to build my dream system.
I know what you mean about a 'dream system'. ;) Too bad that problems arise along the way!
-
Sorry I have been away from my Funtoo install; have not yet made the changes. Hopefully this evening. :)
Moving the zfs init script to the boot runlevel will be easy.
Regarding the modules, I am wondering if simply naming them in /etc/conf.d/modules will be sufficient. I do have a custom initramfs, which is for decrypting and mounting my dmcrypt/LUKS root (/) partition. If I were to load spl and zfs within the initramfs then it would have to be right at the end, as I would not want the pool to mount before root (/). I have always been of the view that the modules init script runs during the boot runlevel (I cannot check this right now). So instead of changing my initramfs I am tempted to just let OpenRC do the work after the initramfs step has completed.
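For the record, the changes I have in mind amount to roughly this (a sketch only, assuming the zfs service currently sits in the default runlevel):
~ # rc-update del zfs default
~ # rc-update add zfs boot
and then naming the modules in /etc/conf.d/modules:
modules="spl zfs"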
-
Thanks! It is true that I have not been calling on those two modules to be loaded during boot.
-
I have an ongoing problem with my RAID-Z1 pool. During some (not all) reboots my system fails to reimport the zpool. While it seems to me to be a 'random' issue, there is undoubtedly a reason behind it.
I would really appreciate advice on where I should begin to look for a potential cause.
My setup is an x86_64 Funtoo Current build. sys-kernel/spl, sys-fs/kmod-zfs and sys-fs/zfs are all version 0.6.3, installed together with the sys-kernel/ck-sources-3.14.4 kernel. The zfs service is started during the default runlevel.
When I log into the desktop I find that my zpool wd20ears_zfs has failed with the following:
~ # zpool status
  pool: wd20ears_zfs
 state: UNAVAIL
status: One or more devices could not be used because the label is missing or invalid. There are insufficient replicas for the pool to continue functioning.
action: Destroy and re-create the pool from a backup source.
   see: http://zfsonlinux.org/msg/ZFS-8000-5E
  scan: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        wd20ears_zfs    UNAVAIL      0     0     0  insufficient replicas
          raidz1-0      UNAVAIL      0     0     0  insufficient replicas
            sdf         FAULTED      0     0     0  corrupted data
            sde         FAULTED      0     0     0  corrupted data
            sdg         UNAVAIL      0     0     0
            sdd         FAULTED      0     0     0  corrupted data
If I export the pool and then reimport with the 'force' switch it mounts successfully:
~ # zpool export wd20ears_zfs
~ # zpool import wd20ears_zfs -f
~ # zpool status
  pool: wd20ears_zfs
 state: ONLINE
  scan: scrub repaired 0 in 11h31m with 0 errors on Sun Jul 6 08:03:38 2014
config:

        NAME            STATE     READ WRITE CKSUM
        wd20ears_zfs    ONLINE       0     0     0
          raidz1-0      ONLINE       0     0     0
            sdf         ONLINE       0     0     0
            sde         ONLINE       0     0     0
            sdg         ONLINE       0     0     0
            sdd         ONLINE       0     0     0

errors: No known data errors
I have yet to raise this issue on the zfsonlinux.org issue tracker. I thought it best to start with the Funtoo forums.
-
Okay, that does not sound too bad. The article obviously suggests major headaches for the eudev team, which led me to think about Funtoo.
I look forward to watching that video. I have not yet developed a 'Lennart allergy'! :P
-
Phoronix: Using Udev Without Systemd Is Going To Become Harder
http://www.phoronix.com/scan.php?page=news_item&px=MTczNjI
Funtoo with eudev works very well for my purposes and so I have not dabbled with systemd. No doubt other distros and users are very happy with it.
I am ignorant of the politics and decision-making around systemd and udev. It does seem disappointing to me that udev would be changed in such a way that it breaks functionality with non-systemd init daemons. If that is not "forcing distros to use systemd" then I don't know what is. -_-
-
Have you tried booting with System Rescue CD, 'chrooting' into your install and exporting the zpool from within the chroot?
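Something along these lines, as a rough sketch only (assuming the rescue environment has ZFS support; substitute your actual root partition for sdXN and your pool name for $POOLNAME):
~ # mkdir -p /mnt/funtoo
~ # mount /dev/sdXN /mnt/funtoo
~ # mount --rbind /dev /mnt/funtoo/dev
~ # mount --rbind /proc /mnt/funtoo/proc
~ # mount --rbind /sys /mnt/funtoo/sys
~ # chroot /mnt/funtoo /bin/bash
~ # zpool export $POOLNAME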

Trouble with Java (in Desktop Help)
Which web browser are you using?