Just wanted to add something for future reference of anyone reading your post: after Canonical did this, LXD was forked by Linux Containers into a new project named Incus.
Ugh, balenaEtcher messed with my USB pen drive badly enough that I had to jump through hoops to make it usable again. Based on web searches, this was not uncommon at the time. I haven’t had issues with Ventoy so far. Still, maybe I’ll just go back to Rufus.
I’m not European, but I understand that there’s an old European (German?) saying that basically goes: “If I had wheels, I’d be a trolley.” As I understand it, it’s been pretty well established that AI coding tools routinely underperform compared to humans in terms of “better” and “safer”, which indirectly means they fail at “cheaper” too.
On top of that, there is another major issue with using AI for open-source code: copyright. First, you don’t know whether the code you’re adding through AI reproduces license-incompatible code verbatim. Because everyone has access to open-source code, it would be trivial for anyone to search for copyright-infringing code and use it to attack a project. Second, AI-produced code is also not copyrightable, which opens up yet another line of attack against open-source projects. Together, these could serve as a one-two punch to knock out an open-source project.
I think that using AI-generated code in open-source projects is a uniquely ill-advised idea.