There’s an argument to be made that system software like filesystems and kernels shouldn’t get too smart about validating or transforming strings, because once you start caring about a string’s meaning, you can no longer treat it as just a byte sequence and instead need to worry about all the complexities of Unicode code points. “Is this character printable” seems like a simple question, but it really isn’t.
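To make that concrete: the same two bytes can be “printable” or not depending purely on how they’re decoded. A quick sketch using GNU grep’s [[:print:]] class (this assumes a glibc system that ships the C.UTF-8 locale):

    # Two bytes, 0xC3 0xA9: printable or not? Depends on the locale.
    printf '\xc3\xa9' | LC_ALL=C       grep -q '[[:print:]]'; echo $?  # 1: raw high bytes, not printable
    printf '\xc3\xa9' | LC_ALL=C.UTF-8 grep -q '[[:print:]]'; echo $?  # 0: decodes to 'é', printable

A filesystem that wants to reject “unprintable” names has to pick one of those answers and defend it forever.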
Now if I were to develop a filesystem from scratch, would I go for the 80% solution of just banning the ASCII newline specifically? Honestly, yes – I don’t see a downside. But regardless of how much effort is put into it, there will always be edge cases: either filenames that break stuff, or filenames that aren’t allowed even though they should be.
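For what it’s worth, every mainstream Linux filesystem accepts newlines in names today, and a single such name is enough to break line-oriented scripting. A quick demonstration (run it in an empty scratch directory; GNU ls quotes names on a terminal, but pipes get the raw bytes):

    touch "$(printf 'report\n.txt')"  # one file whose name contains a newline
    ls | wc -l                        # prints 2: the single name spans two lines
    find . -print0 | xargs -0 ls -ld  # NUL-delimited plumbing handles it correctly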
I’m not a btrfs expert, but AFAIK high unreachable space usage is usually a result of fragmentation. You might want to defragment the filesystem and see if that helps.
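If you want to try that, something like the following should do it, with /mnt standing in for your actual mount point. One caveat: defragmenting extents that are shared with snapshots or reflinked copies can unshare them, which temporarily increases space usage.

    sudo btrfs filesystem defragment -r -v /mnt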
I will note that btrfs makes estimating used/available space very difficult by design, and you especially cannot trust what standard UNIX tools like df and du tell you about btrfs volumes. Scripting around du or using ncdu will not help here in any way. You might want to read the btrfs wiki on kernel.org as well as the man pages for the btrfs tools (btrfs(8) and particularly btrfs-filesystem(8)), which among other things provide versions of df and du that actually understand btrfs – or at least work most of the time instead of never.
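Concretely, again with /mnt as a placeholder for your mount point:

    sudo btrfs filesystem df /mnt     # allocation per profile: Data, Metadata, System
    sudo btrfs filesystem du -s /mnt  # usage summary that accounts for shared extents
    sudo btrfs filesystem usage /mnt  # the most complete overall picture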