• 11 Posts
  • 77 Comments
Joined 7 months ago
Cake day: September 13th, 2024






  • LLMs can’t even stay on topic when specifically being asked to solve one problem.

    This happens to me all the damn time:

    I paste a class that references some other classes which I have already tested and confirmed to be working; my problem is in a specific method that doesn’t directly call any of the other classes. I tell the LLM specifically which method is not working, and I also tell it that I have tested all the other methods and they work as intended (complete with comments documenting what they’re supposed to do). I then ask the LLM to focus only on the method I have specified, and it still goes on about “have you implemented all the other classes this class references? Here’s my shitty implementation of those classes instead.”

    So then I paste all the classes that the one I’m asking about depends on, reiterate that all of them have been tested and are working, tell the LLM again which method has the problem, and it still decides that my problem must be in the other classes and starts “fixing” them, which 9 times out of 10 just means rearranging the code I already wrote and fucking up the organisation I had designed.

    It’s somewhat useful for searching for well-known example code using natural language, e.g. “How do I open a network socket using Rust,” or if your problem is really simple. Maybe it’s just the specific LLM I use, but in my experience it can’t actually problem-solve better than humans.












  • LLM scraping is a parasite on the internet, in the actual ecological sense of the word: it places a burden on unwitting hosts (here, other people’s computer systems), making it harder for the host to survive or carry out its own necessary processes, solely for the parasite’s own benefit, while giving nothing to the host in return.

    I know there’s an ongoing debate (both in the courts and on social media) about whether AI should have to pay royalties to its training data under copyright law, but I think they should at the very least be paying to use infrastructure while collecting the data, even free data, given that it costs the organisation hosting said data real money and resources to be scraped, and it’s orders of magnitude more money and resources compared to serving that data to individual people.

    The case can certainly be made that copying is not theft, but copying is by no means free either, especially when done at the scales LLMs do.


  • It is in everyone’s interest to gradually adjust to the notion that technology can now perform tasks once thought to require years of specialized education and experience.

    The years of specialized education and experience are not for writing code in and of itself; anyone with an internet connection can learn to do that fairly quickly. What takes years to perfect is writing reliable, optimized, secure code; communicating and working efficiently with others; writing code that can be maintained by others long after you leave; knowing the theories behind why code written in a certain way works better than code written some other way; and knowing the qualitative and quantitative measures needed to even assess whether one piece of code is “better” than another. Source: I self-learned programming, started building stuff on my own, and then went through an actual computer science program. You miss so much nuance and underlying theory when you self-learn, which directly translates to bad code that’s a nightmare to maintain.

    Finally, the most important thing you can do with a person who has years of specialized education and experience is actually have a conversation with them about their code: ask them to explain in detail how it works and the process they used to write it, then ask follow-up questions and request further clarification. Trying to get AI to explain itself is a complete shitshow, and while humans do have a propensity to make shit up to cover their own/their coworkers’ asses, AI does that even when it makes no sense not to tell the truth, because it doesn’t really know what “the truth” is or why other people would want it.

    Will AI eventually catch up? Almost certainly, but we’re nowhere close to that right now. Currently it’s less like an actual professional developer and more like someone who knows just enough to copy-paste snippets from Stack Overflow and hack them together into a program that manages to compile.

    I think the biggest takeaway from AI programming is not that it can suddenly do just as well as someone with years of specialized education and experience, but that we’re going to get a lot more shitty software that looks professional on the surface but is a dumpster fire inside.



  • Fedora Linux has been the most stable OS in my experience, having used Windows XP through 10 and switched to Linux before 11 came out. I can leave it on for literally weeks on end and the memory never randomly fills up, nor does it get more and more glitchy and crash-prone the longer it’s on, both of which I have experienced on Windows.