

Sorry, that one I’m not certain of, since it’s an installer-specific thing. I think I’d try that option first and see if the installer lets you choose the empty drive.
Just spitballing here, but if I read this correctly, you pulled the Windows drive, installed Mint, and then put the Windows drive back in alongside the Mint drive? If so, that might be the issue.
UEFI firmware looks for a special EFI system partition on the boot drive and loads the operating system’s own bootloader from there. The Windows drive has one. When you pulled the Windows drive to install Mint on another drive, the Mint installer had to create an EFI partition on that disk to hold its bootloader.
Then, when you put the Windows disk back in, there were two EFI partitions. Perhaps the UEFI firmware was looking for the Windows bootloader in the EFI partition on the Mint disk, where it of course wouldn’t find it. In my experience, Windows recovery is utterly useless at fixing EFI boot issues.
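If you want to see what the firmware itself has registered, the boot entries are just UEFI variables. Here’s a rough diagnostic sketch (nothing specific to your setup) that simply lists them, assuming a Linux system booted in UEFI mode with efivarfs mounted at the usual /sys/firmware/efi/efivars; efibootmgr -v decodes the same entries, including which partition and .efi file each one points to.

```c
/* Rough sketch: list the firmware's Boot* variables (BootOrder,
 * BootCurrent, Boot0000, Boot0001, ...) from efivarfs. Assumes Linux
 * booted in UEFI mode with /sys/firmware/efi/efivars mounted. */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    DIR *dir = opendir("/sys/firmware/efi/efivars");
    if (dir == NULL) {
        /* No efivarfs usually means the system booted in legacy BIOS mode. */
        perror("opendir /sys/firmware/efi/efivars");
        return 1;
    }

    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        /* Each variable name is suffixed with its vendor GUID. */
        if (strncmp(entry->d_name, "Boot", 4) == 0)
            printf("%s\n", entry->d_name);
    }

    closedir(dir);
    return 0;
}
```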
It’s possible to rebuild the Windows EFI bootloader files manually, but since you don’t mind blowing away both OS installs, I’d say just install Mint on the second drive while both drives are in the system, so the installer puts the Mint bootloader on the same EFI partition as the Windows one. Even with EFI, Windows will still sometimes blow away a Linux bootloader, but Linux installers are very good at installing alongside Windows. If it does get stuffed up, there’s a utility called Boot-Repair that you can put on a USB drive; it works a lot better than Windows recovery.
This just sounds like a bad idea, a solution in search of a problem. Sure, sudo is a setuid binary, but it’s a fairly simple program, and at some point, you have to trust the code. It’s also a very fundamental piece of the system that you want to always work, even (especially!) when other things get borked. The brief description of run0 already has too many potential points of failure.
One that Linux should’ve had 30 years ago is a standard, fully-featured dynamic library system. Its shared libraries are more akin to static libraries that just happen to be linked at runtime by ld.so instead of at build time by ld. That means executables are tied to particular versions of shared libraries, and all of them must be present for the executable to load at all, leading to the dependency hell that package managers were developed, in part, to address. The dynamically-loaded libraries that do exist are generally non-standard plug-in systems.
A proper dynamic library system (like the one in Darwin) would allow libraries to declare what API level they’re backwards-compatible with, so new versions don’t necessarily break old executables. (It would ensure ABI compatibility, of course.) It would also allow a process to start even if libraries the program declares as optional aren’t present, letting it drop the corresponding features gracefully, so we wouldn’t need separate builds of the same program with different library support compiled in. If it were standard, compilers could also provide integrated language support for it more easily.
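To make the “optional library” idea concrete, here’s a minimal sketch of how a program can fake it today with dlopen()/dlsym(), i.e. the non-standard plug-in route mentioned above. The library name libextras.so and the symbol extra_feature are made up for illustration; the point is that a library loaded this way, unlike one recorded as a link-time dependency, can be missing without preventing the process from starting.

```c
/* Minimal sketch of graceful degradation via dlopen(). The library
 * "libextras.so" and its entry point extra_feature() are hypothetical.
 * Unlike a DT_NEEDED dependency recorded at link time, a missing
 * dlopen()ed library does not stop the program from running. */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Try to load the optional library at runtime. */
    void *handle = dlopen("libextras.so", RTLD_NOW);
    if (handle == NULL) {
        /* Library absent: drop the feature instead of failing to start. */
        printf("extras not available (%s), continuing without them\n", dlerror());
        return 0;
    }

    /* Look up the optional entry point by name. */
    void (*extra_feature)(void) = (void (*)(void))dlsym(handle, "extra_feature");
    if (extra_feature != NULL)
        extra_feature();

    dlclose(handle);
    return 0;
}
```

Build with something like cc demo.c -ldl (newer glibc doesn’t even need the -ldl). It works, but every program has to hand-roll it, which is exactly the gap a standard, compiler-supported mechanism would fill.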
Dependency hell was one of the main obstacles to packaging Linux applications for years, until Flatpak, Snap, etc. came along to brute-force away the issue by just piling everything the application needs into a giant blob.