Lemmy account of natanox@chaos.social

  • 0 Posts
  • 15 Comments
Joined 9 months ago
Cake day: October 7th, 2024



  • Parts of me want to argue that “experienced devs” can’t seriously still be asking ChatGPT for syntax corrections. Like, I do that with Codestral while I’m learning Python (despite the occasional errors it’s still so much better than abstract docs…), but that should just be a learning-phase thing… Or is it because nowadays a single codebase often consists of 5+ languages, and devs are expected to constantly pick up all the new “hot shit”, which obviously won’t make anyone an expert in one specific language like back when there just weren’t as many?



  • Interesting moral question here:

    Given that the huge problems are power consumption, the morals behind training data, and blind trust in AI slop, do you think there is a window of acceptable usage for LLMs as a locally run (on existing hardware) coding assistant (not an executive tool that does it for you) to help with work on FOSS projects (giving back to where it has taken from), with no money flowing to any company (therefore not bolstering that commercial ecosystem)? While this obviously doesn’t address the energy consumption during training, it may alleviate the moral issues to the point where people start to think of it as an acceptable tool. (See the sketch below for what such a local-only setup could look like.)

    To make it abundantly clear, this is neither about “vibe coding”, where it writes the code for you (badly), nor about any other bullshit like generative “art”. It’s about the question of humble, educated use of a potentially useful tool in a way that might be morally acceptable.
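
    For concreteness, here is a minimal sketch of what such a local-only setup could look like, assuming an Ollama server running on the same machine (the endpoint and port are Ollama’s defaults; the model name and prompt are just placeholders). No request ever leaves localhost:

    ```python
    # Minimal sketch of querying a locally hosted model, assuming an
    # Ollama server on localhost. Nothing here contacts an external
    # service or paid API. Model name and prompt are placeholders.

    import json
    import urllib.request

    def ask_local_model(prompt: str, model: str = "codestral") -> str:
        """Send one prompt to the local model and return its reply."""
        payload = json.dumps({
            "model": model,
            "prompt": prompt,
            "stream": False,  # one complete JSON reply instead of a stream
        }).encode("utf-8")
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",  # Ollama's default endpoint
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    # Used as an assistant, not an executive tool: ask for an explanation
    # rather than generated code.
    print(ask_local_model("Explain what this Python line does: x = [*a, *b]"))
    ```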



  • Yeah… I’m fast reaching the point where thinking through and writing the Python code myself is quicker than even writing the prompts. Let alone the additional time spent going through the generated stuff to adjust and fix things.

    It’s good for getting a grip on syntax and terminology, and as an overly fancy (but very fast) search bot that can (mostly) apply your question to the very code that’s in front of you, at least in popular languages. But once you’ve got that stuff in your head… I don’t think I’ll bother too much in the future. There surely are tons of useful things you can do with multimodal LLMs; coding properly on its own just isn’t one of them. At least not with the current generation.



  • Yeah, same with Codestral. You have to tell it very specifically what to do, and once it gets stuck somewhere you have to move to a new session to get rid of the junk in its history.

    Both it and ChatGPT also repeatedly told me to save binary data I wanted to keep in memory as a list, with every 1024 bytes becoming a new entry… in the form of a string (supposedly). And the worst thing is that, given the way it extracted that data later on, this unholy implementation from hell would probably even have worked, up to a certain point.
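
    For illustration, here is roughly what that pattern looks like next to the obvious alternative. This is a hypothetical reconstruction from memory, not the actual generated code; the latin-1 round-trip is my guess at why it would have mostly worked:

    ```python
    # Hypothetical reconstruction of the suggested pattern, not the
    # actual generated code.

    import io

    data = bytes(range(256)) * 20  # some example binary payload

    # What the models kept suggesting (roughly): a list of 1024-byte
    # chunks, each decoded into a str. Decoding arbitrary bytes as text
    # is lossy unless the encoding maps all 256 byte values, like
    # latin-1 does, which is presumably why it "worked up to a point".
    chunks = [data[i:i + 1024].decode("latin-1")
              for i in range(0, len(data), 1024)]
    restored = b"".join(chunk.encode("latin-1") for chunk in chunks)
    assert restored == data  # survives only because latin-1 round-trips all bytes

    # The straightforward way: keep binary data binary, in one buffer.
    buf = io.BytesIO()
    buf.write(data)
    assert buf.getvalue() == data
    ```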