• 0 Posts
  • 9 Comments
Joined 2 years ago
Cake day: November 5th, 2023


  • Btrfs is a copy-on-write (COW) filesystem, which means that files are never modified in place. Instead, a new block is written elsewhere, and then a single atomic operation makes that new block the official location of the data.

    This is really good for protecting your data from things like power outages or system crashes, because the data on disk is always in a consistent state. Either the update happened or it didn’t; there is never any in-between.
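    The same write-then-flip idea can be sketched in userspace: a rename within one filesystem is atomic, so readers see either the old file or the new one, never a half-written state. A minimal shell sketch (the file names are made up for illustration):

    ```shell
    # Write-then-flip: write the full new version to a separate file first,
    # then rename it over the original in one atomic operation.
    workdir=$(mktemp -d)
    echo "old contents" > "$workdir/data.txt"

    # Write the complete new version somewhere else...
    echo "new contents" > "$workdir/data.txt.tmp"
    # ...then flip it into place atomically. A crash before this mv leaves
    # the old file intact; after it, the new file is fully in place.
    mv "$workdir/data.txt.tmp" "$workdir/data.txt"

    cat "$workdir/data.txt"
    rm -r "$workdir"
    ```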

    While COW is good for data integrity, it isn’t always good for speed. If you are doing lots of updates smaller than a block, the filesystem first has to read the rest of the block, then seek to a new location and write out the whole new block. On SSDs this isn’t an issue, but on HDDs it can slow things down and fragment your filesystem considerably.

    Btrfs has a defragmentation utility, though, so fragmentation is a fixable problem. If you were using ZFS there would be no built-in way to reverse that fragmentation.
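    For reference, a defrag pass is a one-liner; treat this as a sketch, since it needs root and the mount point here is just an example:

    ```shell
    # Recursively defragment everything under a btrfs path, verbosely.
    # Needs root; /home is an example path, not from the original post.
    btrfs filesystem defragment -r -v /home
    ```

    One caveat worth knowing: defragmenting can unshare extents that snapshots or reflink copies were sharing, so disk usage can go up afterward.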

    Other filesystems like ext4/XFS are “journaling” filesystems. Instead of writing new blocks or updating each block immediately, they keep the changes in memory and write them to a “journal” on the disk. When there is time, those changes are flushed from the journal to their final locations to make the actual updates happen. Writing the journal is a sequential operation, which makes it efficient on HDDs. If the system crashes, the filesystem replays the journal to get back to a consistent state.

    ZFS has a journal equivalent called the ZFS Intent Log (ZIL). You can put the ZIL on a fast SSD while the data itself lives on your HDDs. This also helps with ZFS’s fragmentation issues, because ZFS writes incoming data to the ZIL and then flushes it to the main disks every few seconds, which means fewer, larger writes to the HDDs.
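    In OpenZFS terms, a dedicated ZIL device is added to a pool as a “log” vdev (often called a SLOG). A hedged sketch, with made-up pool and device names:

    ```shell
    # Attach a fast SSD as a dedicated log device (SLOG) to pool "tank".
    # Needs root; pool and device names are examples.
    zpool add tank log /dev/nvme0n1

    # Mirror the log device if you care about the few seconds of
    # writes it holds at any moment:
    # zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1
    ```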

    Another downside of COW is that, because the filesystem is assumed to be so good at preventing corruption, in some extremely rare cases corruption that does get written to disk can cost you the entire filesystem. There are lots of checks in software to prevent that from happening, but occasionally hardware issues let corruption slip past.

    This is why anyone running ZFS/btrfs for their NAS is advised to use ECC memory. A random bit flip in RAM can mean the wrong data gets written out, and if that data is part of the filesystem’s own metadata, the entire filesystem may be unrecoverable. This is exceedingly rare, but it is a risk.

    Most traditional filesystems, on the other hand, were built assuming they would have to clean up corruption from system crashes and the like, so they ship fsck tools that can go through and recover as much as possible when that happens.

    Lots of other posts here talk about the features that make btrfs a great choice. If you were running a high-performance database, a journaling filesystem would likely be faster, though maybe not by much, especially on an SSD. But for an end-user system, the snapshots/file checksumming/etc. are far more important than a tiny bit of performance. As for the potential corruption issues, if you lack ECC, backups are the proper mitigation (and as of DDR5, all RAM sticks include at least on-die ECC).


  • I assume you are powering the dock? Many docks require external power before they will pass video.

    Does the screen on the deck shut off or stay active?

    If the screen stays active, that means the deck isn’t detecting an HDMI signal through the dock at all.

    If the screen shuts off but you get no video through the receiver, try hitting the power button once to put the deck to sleep, wait a few seconds, then press it again to wake it (while still plugged in). Even the official dock sometimes has trouble getting the deck to switch to the external output, but putting the deck to sleep and waking it usually sorts it out.

    If that still doesn’t do it, plug directly into your TV to narrow down the problem (this removes the receiver as a variable). Next, try a different HDMI cable, and as a last resort, a different dock. If you know someone else with their own deck, you can try theirs to rule out a hardware failure on yours.



  • Add a -f to your umount and you can clear up those blocked processes. Sometimes you need to run it multiple times (it seems to unblock only one stuck process at a time).

    When you mount your NFS share, you can add the “soft” option, which lets those stuck calls time out on their own.
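    Putting both tips together as a sketch (the server, export, and mount point are made up; both commands need root):

    ```shell
    # Force-unmount a hung NFS mount; may need repeating if several
    # processes are blocked. A lazy unmount (-l) is another escape hatch
    # that detaches the mount immediately and cleans up later.
    umount -f /mnt/nfs

    # Remount with "soft" so stuck calls eventually time out on their own.
    # timeo is in tenths of a second; retrans is the retry count.
    mount -t nfs -o soft,timeo=100,retrans=3 server:/export /mnt/nfs
    ```

    Note that soft mounts trade hangs for possible I/O errors: a request that times out is reported to the application as a failure, so they are safest on read-mostly shares.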




  • I believe so. The package descriptions for most of the ZFS packages in Ubuntu mention OpenZFS, so it certainly appears that way.

    You can still create pools that are compatible with Oracle Solaris; you just have to set the pool version to 28 or older when you create it, and obviously never upgrade it. That will prevent you from using any of the newer features added since the fork.
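    As a sketch, pinning the version at creation time looks like this (pool and disk names are made up, and the commands need root):

    ```shell
    # Create a pool locked to pool version 28, the last on-disk format
    # shared with Oracle Solaris before the fork. Names are examples.
    zpool create -o version=28 tank mirror /dev/sda /dev/sdb

    # Check the pinned version later; just never run "zpool upgrade"
    # on this pool, or Solaris will no longer be able to import it.
    zpool get version tank
    ```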


  • Well, worse than that: Oracle closed-sourced ZFS, so OpenZFS was forced to become a fork, and the two are no longer compatible with each other.

    As for the GPL: the CDDL license that ZFS uses ensured that code contributions assign copyright to the project owners, which means they can change the license as they please without having to track down contributors.

    You would think that, with their investments in Oracle Linux and btrfs, they would welcome such a license change, but apparently they need excuses to keep putting money into Solaris and their Oracle ZFS appliances instead.