

IBM/Red Hat maybe since it’s a US company. HOWEVER we went through this with PGP already and the infamous RSA Dolphin.
So could they try? Yeah. Would it work? I don’t know.
L2ARC only does metadata out of the box. You have to tell it to do data & metadata (the secondarycache dataset property controls this). Plus every block in L2ARC needs a header kept in RAM, so for that reason it’s better to max out your system memory before adding L2ARC.
It’s also not a cache in the way that lvmcache and bcache are.
At least that’s my understanding from having used it on storage servers and reading the documentation.
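If you want to poke at this yourself, the knob is the secondarycache property. A rough sketch, assuming a pool called tank, a dataset tank/data, and a spare NVMe at /dev/nvme0n1 (all made-up names):

    # add the NVMe as an L2ARC (cache) vdev
    zpool add tank cache /dev/nvme0n1

    # see what the dataset is allowed to put in L2ARC
    zfs get secondarycache tank/data

    # cache data & metadata (other options: metadata, none)
    zfs set secondarycache=all tank/data

    # watch the cache device fill up and get hit
    zpool iostat -v tank 5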
I used to do this all the time! In terms of speed, bcache is the fastest, but it’s not as well supported as lvm cache. IMHO lvm cache is plenty fast for most uses (rough setup sketch below).
Is it going to be as fast as an NVMe SSD? Nope. But it should be about as fast as a SATA SSD, if not a little slower depending on how it’s getting the data. If you’re willing to take that trade-off it’s worth it. Anything already cached is going to be accessed at NVMe speeds, though.
So it’s totally worth it if you need bigger storage but can’t afford the SSD. I would go bigger on the HDD though, if you can. Unless you’re frequently accessing more than the capacity of your SSD, the caching will work extremely well for both reads and writes. Your Steam games will feel like they’re on an SSD most of the time, and everything else you do will “feel” snappy too.
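If anyone wants to try the lvm cache route, here’s a rough sketch, assuming a volume group vg0 with the big HDD LV named data and a spare SSD at /dev/nvme0n1 (all made-up names):

    # add the SSD to the existing volume group
    pvcreate /dev/nvme0n1
    vgextend vg0 /dev/nvme0n1

    # carve out a cache volume on the SSD
    lvcreate -L 100G -n cache0 vg0 /dev/nvme0n1

    # attach it to the slow LV (writethrough is the safe default;
    # writeback also caches writes but risks data if the SSD dies)
    lvconvert --type cache --cachevol cache0 --cachemode writethrough vg0/data

    # detach the cache later without losing anything
    lvconvert --splitcache vg0/data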
Why? And what would be a replacement for it?
macOS. Nearly everyone who does anything with development or ops is using a MacBook, though lately more “normal” employees have been getting MacBooks too.
Waaaaay better.
Restic lets you make deduplicated snapshots of your data. Everything is there and it’s damn hard to lose anything. I use Backblaze B2 as my long-term offsite endpoint; some will use AWS Glacier. But you don’t have to use any cloud service. You can just have a restic repository on some external drives; that’s what I use for my second copy of things. I also do an annual backup to a hard disk that I leave with a friend for a second offsite copy.
I’ve been backing up all of my stuff like this for years now. I used to use Borg, which is another great tool, but restic is more flexible about letting multiple systems share a single repository and has native support for backends like B2 that Borg doesn’t.
We also use restic to back up the control nodes for some of the supercomputing clusters I manage. It’s that rock solid, IMHO.
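For anyone curious what that looks like in practice, a rough sketch with B2 (bucket name and paths are made up):

    # Backblaze B2 credentials and repository location
    export B2_ACCOUNT_ID="<keyID>"
    export B2_ACCOUNT_KEY="<applicationKey>"
    export RESTIC_REPOSITORY="b2:my-bucket:backups/home"
    export RESTIC_PASSWORD_FILE=~/.restic-pass

    # one-time repository setup
    restic init

    # deduplicated snapshot of the home directory
    restic backup /home/me --exclude /home/me/.cache

    # list snapshots and prune old ones
    restic snapshots
    restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune

The same commands work against a repo on an external drive, just point RESTIC_REPOSITORY at a local path instead.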
To be honest, there are a few good comments here linking to scripts and methods to batch convert them on a Windows PC/VM. That’s the best way to go.
To add on to their comments: if you’re just interested in preserving them, printing them to PDF, specifically PDF/A, would be my approach once you get them opened.
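Once they’re plain PDFs, Ghostscript can batch-convert them to PDF/A. A rough sketch (output names are made up, and strict PDF/A compliance may also need a PDFA_def.ps with an ICC profile):

    # convert every PDF in the current directory to PDF/A-2
    for f in *.pdf; do
        gs -dPDFA=2 -dBATCH -dNOPAUSE \
           -sColorConversionStrategy=UseDeviceIndependentColor \
           -sDEVICE=pdfwrite -dPDFACompatibilityPolicy=1 \
           -sOutputFile="pdfa_${f}" "$f"
    done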
I’ll leave this one here for someone:
You can tunnel L2 over OpenVPN. Just bridge your interfaces on both sides and it works.
That way you can provision a VoIP phone or netboot something remotely. Not that I recommend doing that…
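The rough shape of it, if anyone wants it (interface names and addresses are made up): run OpenVPN in tap mode and put the tap device in the same bridge as the LAN NIC.

    # server side: bridge the LAN NIC and the OpenVPN tap device
    ip link add br0 type bridge
    ip link set eth0 master br0
    openvpn --mktun --dev tap0
    ip link set tap0 master br0
    ip link set br0 up

    # relevant bits of the OpenVPN server config:
    #   dev tap0
    #   server-bridge 192.168.1.1 255.255.255.0 192.168.1.200 192.168.1.220

Do the same tap + bridge setup on the client side and broadcast traffic (DHCP, PXE, VoIP provisioning) flows across the tunnel.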
Like everyone has said, there are way better ways of doing it.
HOWEVER, if you wanted to use dd you totally could. I’d recommend piping it through something like gzip/zstd to save some space though:

    dd if=/dev/sda bs=1M status=progress | gzip > /mnt/backup_disk/sda.gz
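To restore, it’s just the reverse (same made-up paths; triple-check the target device first):

    zcat /mnt/backup_disk/sda.gz | dd of=/dev/sda bs=1M status=progress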
You could also use restic to back up the raw block device.
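If you go that route, restic can read the image from stdin; a sketch, assuming a repo at /mnt/backup_disk/restic-repo:

    # stream the block device straight into a snapshot
    dd if=/dev/sda bs=1M | restic -r /mnt/backup_disk/restic-repo backup --stdin --stdin-filename sda.img

    # restore it later
    restic -r /mnt/backup_disk/restic-repo dump latest sda.img | dd of=/dev/sda bs=1M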
That being said, clonezilla is exactly what you want
I’ve abused Syncthing in so many ways migrating servers and giant data sets. It’s freaking amazing. Though it’s been a few years since I’ve used it; I can only guess how much better it’s gotten.