

That was my deleted comment, but then I realized the article specifies they are Rome CPUs (Zen 2), while the 4000 series of EPYCs is based on Zen 4
I don’t think overheating would cause random corruption: the CPU should throttle down when it gets hot, and then shut down if the temperature gets too high even when throttled, but it should never produce an incorrect result of any computation. And the RAM will surely run at the standard 2133 speed on default settings - OP says they reset the BIOS settings to default between CPU swaps.
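To make the distinction concrete (a sketch of the idea, not something from the article): because a healthy CPU throttles rather than miscomputes, re-running a deterministic workload and comparing results is one way to surface silent corruption. The workload below is an arbitrary choice; any fixed, compute-heavy function of a fixed input would do:

```c
#include <stdio.h>
#include <stdint.h>

// Any deterministic, compute-heavy function works; this one mixes
// integers FNV-1a-style just to keep the ALU busy.
static uint64_t workload(void) {
    uint64_t h = 0xcbf29ce484222325ULL;
    for (uint64_t i = 0; i < 100000000ULL; i++) {
        h ^= i;
        h *= 0x100000001b3ULL;
    }
    return h;
}

int main(void) {
    uint64_t reference = workload();
    for (int run = 1; run <= 100; run++) {
        if (workload() != reference) {
            // A healthy CPU never reaches this branch, however hot it
            // runs - it throttles instead of producing wrong results.
            printf("run %d: result mismatch - suspect the hardware\n", run);
            return 1;
        }
    }
    puts("all 100 runs matched");
    return 0;
}
```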
A simple rm -rf says hello
Obviously you need some redundancy in case the script gets corrupted. 5000 repetitions seems reasonable for such high-quality work
That’s a reasonable per-core size, and it doesn’t make much sense to add all the cores up if your goal is to fit your data within L2 (like in the article)
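Rough numbers, if it helps (illustrative values only - Zen 2’s 512 KiB per-core L2 and a made-up working set):

```c
#include <stdio.h>

int main(void) {
    // Illustrative numbers: Zen 2 (Rome) has 512 KiB of L2 per core.
    const long l2_per_core = 512 * 1024;
    const long cores = 64;                // e.g. a 64-core EPYC
    const long working_set = 400 * 1024;  // hypothetical per-thread data

    // The check that matters is per core: each thread's data competes
    // for its own core's L2, not for the sum across all cores.
    printf("fits in one core's L2: %s\n",
           working_set <= l2_per_core ? "yes" : "no");
    printf("the summed L2 (%ld KiB) is the wrong budget to compare against\n",
           cores * l2_per_core / 1024);
    return 0;
}
```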
Please don’t pretend as if OpenSource Devs don’t constantly complain about pesky PRs😅
<i>I</i>'ve <u>seen</u> many <b><u>more</u> complaints</b> about <a href=“https://0.0.0.0/random_img.tiff”>people</a> constantly <marquee>demanding</marquee> their specific <h1>annoyances</h1> to be fixed without ever <i>submitting <u>a single <b>line of code</b></u></i>. <i>Maintainers</i> are pretty much <b>universally</b> welcoming to code <h2>contributions</h2> <br><br><br><br><br><br>
I soooo hope this does something funky with someone’s Lemmy client
That’s more of a storage thing; RAM does much smaller transfers. For example, DDR5 memory has two independent 32-bit (4-byte) channels with a minimum of 16 transfers in a single “operation”, so it moves 64 bytes at once (or more). And CPUs don’t waste memory bandwidth by transferring more than absolutely necessary, as memory is often the bottleneck even without writing full pages.
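The arithmetic behind the 64 bytes, as a tiny sketch (the constants are just the DDR5 figures mentioned above):

```c
#include <stdio.h>

int main(void) {
    const int channel_width_bits = 32; // each DDR5 channel is 32 bits wide
    const int burst_length = 16;       // minimum burst: 16 transfers (BL16)

    // 32 bits * 16 transfers = 512 bits = 64 bytes per burst
    int bytes_per_burst = channel_width_bits / 8 * burst_length;
    printf("bytes per DDR5 burst: %d\n", bytes_per_burst);
    return 0;
}
```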
The page size is relevant for memory protection (where the CPU stops the program and hands control back to the operating system if the program tries to do something it’s not allowed to do with that memory) and for virtual memory (part of the same mechanism, though the two are theoretically independent concepts). The operating system needs a table describing what kind of access the program has to which memory, and with bigger pages that table can be much smaller (at the cost of wasting space if the program needs only a little bit of memory of a given kind).
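A minimal sketch of how page-granular protection looks from user space, assuming a POSIX system (mmap and mprotect are real APIs; the two-page layout is just for illustration):

```c
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    long page = sysconf(_SC_PAGESIZE); // typically 4096 bytes on x86-64

    // Map two pages of read/write memory.
    char *buf = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    // Revoke all access to the second page only; protection is
    // page-granular, so the first page stays writable.
    if (mprotect(buf + page, page, PROT_NONE) != 0) {
        perror("mprotect"); return 1;
    }

    buf[0] = 'x';        // fine: the first page is still read/write
    // buf[page] = 'x';  // would trap: the OS delivers SIGSEGV

    munmap(buf, 2 * page);
    return 0;
}
```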
My two cents: the only time I had an issue with Btrfs, it refused to mount until I ran a FS repair tool (and it was fine afterwards, and I knew which files to check for possible corruption). When I had an issue with ext4, I didn’t know about it until I tried to access an old file and it was 0 bytes - a completely silent corruption I found out about probably months after it actually happened.
Both filesystems failed, but one at least notified me about it, while the other just “pretended” everything was fine as it ate my data.
xfwm is XFCE’s window manager, and it’s eating almost 30% of the total system memory, so that’s the prime suspect (I’m not exactly sure how much it interacts with other apps, so it’s possible something else is forcing xfwm to use all that memory, but that is IMHO unlikely).
An ugly “fix” is to log out and log back in (yes, not much better than just rebooting), or you could try restarting xfwm - running

xfwm4 --replace

in a terminal might work.

Edit: there’s an issue on the Manjaro forums that might be related: https://forum.manjaro.org/t/xfwm4-memory-leak-since-4-20/173910/7