Admin on the slrpnk.net Lemmy instance.

He/Him or whatever you feel like.

XMPP: [email protected]

Avatar is an image of a baby octopus.

  • 1 Post
  • 601 Comments
Joined 2 years ago
Cake day: September 19th, 2022

  • poVoq@slrpnk.net to Selfhosted@lemmy.world · New storage setup, comments? · edited · 14 days ago

    It has almost certainly happened to you; you are simply not aware of it, because filesystems like ext4 are completely oblivious to it happening, and larger video formats, for example, are relatively robust against small file corruptions.

    And no, this doesn’t only happen due to random bit flips. There are many reasons for files becoming corrupted, and it often happens on older drives that are nearing the end of their lifespan; good management of such errors can extend the safe use of older drives significantly. It can also help mitigate the risks of non-ECC memory to some extent.

    Edit: And my comment regarding mdadm RAID5 was about it requiring equally sized drives and not being able to shrink or expand the size and number of drives on the fly, as is possible with btrfs raids.


  • One of the main features of filesystems like btrfs or ZFS is that they store a checksum of each file and compare it against the current data, so you notice when files become corrupted. With a single drive, all btrfs can do is inform you that the checksum no longer matches the file and that the file is thus likely corrupted; on a btrfs raid, however, it can look at one of the still-correct duplicates and heal the file from that.

    IMHO the little extra space from mdadm RAID5 is not worth the much-reduced flexibility in future drive composition compared to a native btrfs raid1.
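    The detect-and-heal mechanism described above can be sketched in a few lines of Python. This is only an illustration of the principle, not how btrfs is implemented (btrfs checksums at the block level, crc32c by default, rather than whole files); the two mirrored buffers stand in for a two-copy raid1:

    ```python
    import hashlib

    def sha256(data: bytes) -> str:
        """Checksum stored at write time, recomputed at read time."""
        return hashlib.sha256(data).hexdigest()

    def read_with_heal(copies: list[bytearray], stored_sum: str) -> bytes:
        """Return the file data, healing any copy whose checksum no longer matches."""
        good = next((c for c in copies if sha256(bytes(c)) == stored_sum), None)
        if good is None:
            raise IOError("all copies corrupted; nothing to heal from")
        for c in copies:  # overwrite corrupted mirrors with the good copy
            if sha256(bytes(c)) != stored_sum:
                c[:] = good
        return bytes(good)

    # Simulate silent corruption (a single bit flip) on one mirror
    data = bytearray(b"important video frame")
    mirror = bytearray(data)
    checksum = sha256(bytes(data))
    mirror[3] ^= 0x01  # the filesystem is never told about this

    restored = read_with_heal([data, mirror], checksum)
    assert mirror == data  # the bad copy was healed from the good one
    ```

    With a single copy, the same function could only raise the error; having a second checksummed copy is what turns detection into repair.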




  • poVoq@slrpnk.net to Selfhosted@lemmy.world · Tell me why I shouldn't use btrfs · 20 days ago

    I have been using btrfs in raid1 for a few years now with no major issues.

    It’s a bit annoying that a system with a degraded raid doesn’t boot without manual intervention, though.

    Also, I’m not sure why, but I recently broke a system installation on btrfs by taking out the drive and accessing it (and writing to it) from another PC via a USB adapter. But I guess that is not a common scenario.





  • Cool. I am currently using the OVH dyndns option, and it is a bit annoying that you have to update each subdomain individually and can’t just tell OVH to update all subdomains to the same new IP via a wildcard.

    Is that something your script could do?

    Also, it seems like the OVH dyndns API currently handles either IPv4 or IPv6, but not both at the same time.

    Edit: Ah, I see you plan to allow creating subdomains through it. I guess that would indirectly solve my issue as well.
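    The wildcard behavior asked about above could be approximated client-side by looping over the subdomains yourself. A minimal sketch, assuming OVH's standard DynHost update endpoint (`https://www.ovh.com/nic/update`, queried with HTTP basic auth); the hostnames and IP here are placeholders:

    ```python
    from urllib.parse import urlencode

    OVH_DYNHOST = "https://www.ovh.com/nic/update"

    def update_urls(subdomains: list[str], ip: str) -> list[str]:
        """Build one DynHost update request URL per subdomain."""
        return [
            f"{OVH_DYNHOST}?{urlencode({'system': 'dyndns', 'hostname': host, 'myip': ip})}"
            for host in subdomains
        ]

    # Hypothetical subdomains that should all point at the same new public IP
    for url in update_urls(["www.example.org", "xmpp.example.org"], "203.0.113.7"):
        print(url)
    ```

    Only the URL construction is shown; actually sending each request would additionally need HTTP basic auth with the DynHost credentials for that record.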