

For me things actually became easier when I got myself a native Linux install instead of Windows. But I guess it depends on your college.
The size difference is not significant; this is about the maintenance burden. Whenever you need to change some of the code where CPU-architecture-specific things happen, you have to consider what to do with the code paths or compiler flags that concern 486 CPUs.
Here is the announcement by the maintainer Ingo Molnar where he lists some of the things he can now remove and stop worrying about: https://lore.kernel.org/lkml/20250425084216.3913608-1-mingo@kernel.org/
It’s quite cruel of that compiler to not be happy until you’re exhausted.
Or they could just have been infected. Especially the ones on Windows 8, which has been EoL for over a year.
Hey OP, regarding Minecraft: it’s a Java program that uses OpenGL for rendering. Therefore it’s not a Windows game, but inherently cross-platform. Here’s the official .deb package: https://launcher.mojang.com/download/Minecraft.deb
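If you’re on a Debian-based distro, apt can install it straight from the downloaded file and resolve the dependencies for you. A quick sketch:

    wget https://launcher.mojang.com/download/Minecraft.deb
    # the leading ./ tells apt this is a local file, not a repo package name
    sudo apt install ./Minecraft.deb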
the school’s IT
I wonder if that even exists. A mix of Windows 8 (EoL) and 10 (almost EoL) running on Haswells with students freely installing Roblox… it all gives an unmaintained vibe.
I like how their release announcements always kind of read like press releases. Even when it’s just the third maintenance release for some normal release train.
You’re not alone in this:
https://discussion.fedoraproject.org/t/usb-tethering-stopped-working-after-f42-update/148809
https://bugzilla.kernel.org/show_bug.cgi?id=220002
https://lore.kernel.org/all/e0df2d85-1296-4317-b717-bd757e3ab928@heusel.eu/
When Debian upgrades to this kernel version you might run into the issue again. Unless there is a fix deployed before then.
I wanted a mainstream option but not Ubuntu, and one that was preferably offered with KDE Plasma pre-packaged.
So I ended up deciding between Debian and Fedora, and what tipped me to Fedora was thinking: Well SELinux sounds neat, quite close to what I learned about Mandatory Access Control in the lectures, and besides, maybe it will be useful in my work knowing one that is close to RHEL.
Now I work in a network team that has been using Debian for 30 years, lol. Kind of ironic, but I don’t regret it, now I just know both.
And fighting SELinux was kind of fun too. I modified my local policies so that systemd can run screen, because I wanted to create a Minecraft service to which I could connect as admin, even if it was started by systemd.
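For anyone wanting to do the same, the usual workflow is to trigger the denial and then turn the audit log entries into a local policy module. Roughly like this (the module name is just an example, and you should review the generated .te file before loading it):

    # collect recent AVC denials and generate a local policy module from them
    sudo ausearch -m avc -ts recent | audit2allow -M local-systemd-screen
    # inspect local-systemd-screen.te, then load the compiled module
    sudo semodule -i local-systemd-screen.pp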
I don’t know why it comes off as hostile, it wasn’t intended that way. Sorry for not expressing it better!
If the last sentence came across badly, it was more meant to be incredulous that people accept all these workarounds instead. There are other comments in here that go to ridiculous lengths to enforce separation, like using the UEFI boot menu to select a disk manually. To me even having two ESPs seems overly cautious, and against the design philosophy. Sharing one ESP is really not an issue (at least as long as you know you’re doing it, as you unfortunately found out the hard way).
First of all: you don’t have to reinstall Windows to get its bootmgr EFI and supporting files back into the ESP. Installing those from the CLI of booted install media is possible; I’ve done it before. You can even install all of Windows manually if you ever need to, it’s just annoying to do with the Windows command-line tools.
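If I remember right, the core of it is bcdboot, run from the install media’s command prompt (Shift+F10). Rough sketch; the drive letters are examples, and you first have to assign one to the ESP with diskpart:

    :: rebuild the Windows boot files onto the ESP (mounted here as S:)
    bcdboot C:\Windows /s S: /f UEFI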
Secondly: I’m not familiar with all distro installers, but surely you can just not format the ESP? Worst case you’d have to use manual partitioning, I guess, but it’s not that difficult.
Thirdly: you said Grub doesn’t show the disk. If you mean the Grub command interface didn’t show the disk, then the issue is deeper, at a UEFI or hardware level. If you mean there are no boot entries for a Windows install to be selected, then it could be that they were not generated because the Windows bootmgr EFI was not found when Grub got installed. Sometimes just booting back into Linux and running os-prober again might be enough, if the Windows bootmgr EFI is still around. On my distro, os-prober runs automatically when I run grub-mkconfig -o /boot/grub/grub.cfg
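One caveat: on some distros os-prober is disabled by default these days (Grub 2.06 changed the default), so you might first need something like:

    # re-enable os-prober, then regenerate the Grub config
    echo 'GRUB_DISABLE_OS_PROBER=false' | sudo tee -a /etc/default/grub
    sudo grub-mkconfig -o /boot/grub/grub.cfg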
I’ve always used a shared ESP for my dual boot systems and I certainly don’t reinstall one OS as the result of a change with the other.
My computer doesn’t really break, I’m Ship of Theseus-ing it regularly.
Apart from that, the only one among the normal window-based ones that has felt like it respects my will to configure stuff in ways that feel right to me has been KDE Plasma.
9 years and 4 months ago I bought an Acer laptop with a 4-core Intel Skylake with Hyper-Threading (i7-6700HQ) and an Nvidia GTX 960M, because the laptop I had was too slow for compiling in my classes at uni, and I wanted a discrete GPU for the occasional game when away from my desktop PC (winter break and such; I still use it for that, btw). I regretted that three times:
First when I wanted to install Linux instead of just using VMs. In early 2016 the kernels on live system ISOs didn’t properly support Skylake yet, so I fucked around with Arch a bunch, but didn’t end up keeping it installed. Don’t remember why, probably got busy with schoolwork.
Then a while later, after I had installed Ubuntu or Fedora at some point, the next issue was that the cooperative mode of Bluetooth and Wi-Fi on the included Intel wireless chip wasn’t well supported (I even found an Intel Bluetooth dev saying as much on a mailing list), and it sometimes hung. So I had to make a script that turned the chip off and then rescanned the PCI bus; that worked as a workaround but was still annoying.
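The script was essentially just the sysfs remove/rescan dance; something like this, run as root (the PCI address is an example, yours will differ):

    #!/bin/sh
    # detach the wedged wireless chip from the PCI bus ...
    echo 1 > /sys/bus/pci/devices/0000:02:00.0/remove
    sleep 1
    # ... and rescan so the kernel re-probes it from scratch
    echo 1 > /sys/bus/pci/rescan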
Finally when we had Machine Learning classes I thought I might be able to use CUDA locally, so I tried installing the proprietary Nvidia driver and was greeted by a black screen on the next boot. Had to boot from a live system and chroot in to remove the proprietary crap again.
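For anyone who ends up in the same spot, the recovery is roughly this (assuming an apt-based distro with root on /dev/sda2; adjust to your layout):

    # mount the installed system and enter it
    sudo mount /dev/sda2 /mnt
    for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
    sudo chroot /mnt
    # inside the chroot: purge the proprietary driver, rebuild the initramfs
    apt purge 'nvidia-*'
    update-initramfs -u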
On my Desktop PC I have used AMD GPUs for quite a while and dual booting Windows and Linux has always been a breeze.
Ah I’m glad to see the situation seems to have cooled a little.
See this comment and the three following, as well as this one and the two following. I think they can now work it out between the projects reasonably.
PS: This more fundamental proposal for Fedora Workstation that started from the OBS packaging issue is also interesting to read. It seems they are looking to make more limited / focused use of their own Flatpak remote in the future since some old assumptions regarding Flatpaks and Flathub don’t hold so well anymore.
I think you might find this comment by one of the OBS upstream devs interesting:
https://pagure.io/fedora-workstation/issue/463#comment-955899
Considering “the Linux system” is literally anything you throw on top of the kernel called Linux, it can be a development environment or anything you want it to be.
Yeah, I thought about the same thing when posting; if anything it would have to be the combination of tools available on Linux. Like GNU binutils, GCC, GNU Emacs, GDB, Git. But that’s how I remember him saying it. Either my memory is wrong, or he just wasn’t that precise in his language.
But I think part of the appeal of an IDE is how all the parts integrate (the “I” in “IDE”) so a bunch of packages thrown together might not provide the same cohesive feeling.
I agree, it may not be what you want if you’re looking for an IDE.
But, like me back then, if you’re new to the Linux ecosystem, it’s good to hear at least once that you don’t strictly need to look for an IDE, and that you can instead use disparate CLI tools together to make for an experience that some people end up preferring.
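As a small taste of what that looks like for a C program, using nothing but the stock GNU tools (the file names here are made up):

    gcc -Wall -g -o hello hello.c   # compile with warnings and debug info
    gdb ./hello                     # step through it when something breaks
    git init && git add hello.c && git commit -m 'initial'   # put it under version control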
I really like Kate as an advanced editor with syntax highlighting, auto-completion, and plugin support. I would then use the Terminal pane at the bottom to run my code during development.
However, if you want a full IDE with integrated dependency management, a test runner, and a debugger, it’s probably not enough.
One of my professors said you don’t need an IDE, the Linux system already is a development environment. I’m not sure I fully agree with that, especially thinking of things like Android Studio, which includes a virtual smartphone (the emulator), but it’s still an approach that is worth trying out.
The direction of your change doesn’t matter: the GPL license under which the program was already given out is not revocable.
If all copyright holders agree, you can grant a different license in addition to the first one, or you can stop offering one license and start offering another one. All the new changes that were never offered under the first license will then only be publicly available under the new one.
But anyone who received the code at a specific time under a GPL license can keep it, modify it, distribute it onwards under the same license, and so on, no matter what new terms the copyright holders begin to offer to other people later.
More importantly, they can’t adapt Windows to their needs.
Just recently there was a guy on the NANOG list ranting that Anubis is the wrong approach and that people should just cache properly, then their servers would handle thousands of users and the bots wouldn’t matter. Anyone who puts git online has no one to blame but themselves, e-commerce should just be made cacheable, etc. It seemed a bit idealistic, a bit detached from the current reality.
Ah found it, here