

Just one more GPT bro, I swear it will bring us AGI bro, please bro please, just one more GPT…
Lemmy account of natanox@chaos.social
Unfortunately nothing can be done about it unless someone creates an “illegal” kernel module which supports HDMI 2.1 despite the lack of a license. That’s basically the only hope right now (and I’m all for it).
Because it sucks? 🙃
Like, seriously, Manjaro has had so many problems already that were completely preventable, makes stupid mistakes, bloats itself up with nonsense and provides no stability improvement over Arch whatsoever (which is a bad thing). On the contrary, it apparently even introduces additional bugs.
I’m not surprised they accidentally DDoS’ed the AUR, given they also have a docker script that spins up every single time an image is downloaded, just to push a change to git that increments a counter. Every. Single. Time. A full docker container. From scratch.
I’d only ever recommend Manjaro to people I really don’t like.
How many “equal” symbols do we need to be absolutely sure?
Beginning your sentence with “I code in JS” really makes any follow-up statement sound sane in comparison.
The description in the first photo about int–string comparison is incomplete though, right? Wasn’t there also a rule about which one of them comes first (the second parameter gets converted?), and what happens if a string contains non-numeric values?
It’s all so confusing…
You expect those 250k lines to be comprehensible? In my experience they’ll be an utter clusterfuck.
You can’t fix the airplane if it turns out to be a boat with legs, 2 holes (worked around with 5 pumps) and 3.5 enormous ears tagged “wings”.
Rust: Borrow checker got mad at you for asking
(I’d assume)
And switch cases (called match cases) are there as well.
I use lambdas all the time to shovel GTK signal emissions from worker threads into GLib.idle_add in a single line, works as you’d expect.
Previous commenters probably haven’t looked at Python in a really long time.
Part of me wants to argue that “experienced devs” can’t seriously still ask ChatGPT for syntax corrections. Like, I do that with Codestral as I’m learning Python (despite the occasional errors it’s still so much better than abstract docs…), but that should just be a learning thing… Or is it because nowadays a single codebase often consists of 5+ languages, and devs are expected to constantly learn all the new “hot shit”, which obviously won’t make anyone an expert in one specific language like back when there just weren’t as many?
No wonder there are some older developers who defend Lisp so passionately. Sounds like a dream to work with once you’ve got the hang of it.
Interesting moral question here:
Given that the huge problems are power consumption, the morals behind training data and blind trust in AI slop, do you think there is a window of acceptable usage for LLMs as a locally run (on existing hardware) coding assistant (not an executive tool that does it for you) to help with work on FOSS projects (giving back to where it has taken from), with no money flowing to any company (therefore not bolstering that commercial ecosystem)? While this obviously doesn’t address the energy consumption during training, it may alleviate the moral issues to the point where people start to think of it as an acceptable tool.
To make it abundantly clear: this is neither about “vibe coding”, where it writes the code for you (badly), nor about any other bullshit like generative “art”. It’s about the question of humble, educated use of a potentially useful tool in a way that might be morally acceptable.
Same, really nice distro back then.
Yeah… I’m quickly reaching the point where thinking through and writing the Python code myself is quicker than even writing the prompts. Let alone the additional time spent going through the generated stuff to adjust and fix things.
It’s good for getting a grip on syntax and terminology, and as an overly fancy (but very fast) search bot that can (mostly) apply your question to the very code that’s in front of you, at least in popular languages. But once you’ve got that stuff in your head… I don’t think I’ll bother too much in the future. There surely are tons of useful things you can do with multimodal LLMs; coding properly on its own just isn’t one of them. At least not with the current generation.
Depends on the language, I’d assume. The last thing I heard was that the current Codestral version is strongest at Python, for example.
Yeah, same with Codestral. You have to tell it what to do very specifically, and once it gets stuck somewhere you have to move to a new session to get rid of the history junk.
Both it and ChatGPT also repeatedly told me to save binary data I wanted to keep in memory as a list, with every 1024 bytes becoming a new entry… in the form of a string (supposedly). And the worst thing is that, given the way it extracted that data later on, this unholy implementation from hell would probably even have worked up to a certain point.
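For contrast, a sketch of the boring, sane version the models kept dancing around: binary data in memory is just `bytes`, accumulated with `io.BytesIO`, no string-encoded chunk list anywhere (the 1024-byte chunk size is kept only to mirror their suggestion; the function name and data are made up):

```python
import io

CHUNK = 1024

def read_in_chunks(stream, chunk_size=CHUNK):
    """Accumulate a binary stream in memory without any string detour."""
    buf = io.BytesIO()
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        buf.write(chunk)  # stays bytes the whole way through
    return buf.getvalue()

data = bytes(range(256)) * 10            # 2560 bytes of fake binary data
result = read_in_chunks(io.BytesIO(data))
assert result == data                    # round-trips intact
```

Stuffing raw bytes into `str` entries only “works” until the first byte sequence that isn’t valid in whatever encoding gets silently assumed, which is exactly the kind of landmine the generated version buried.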
For a moment I wondered why the Rust code was so much more readable than I remembered.
This would make a nice VS Codium plugin to deal with all the visual clutter. I actually like this.
Instead they’ll become curiosities leading down rabbit holes to understand why and how they happened.
You don’t bet on achieving a pipe dream in the near future when developing something (at least I hope you don’t). The actual AI devs are also rather humble about what can realistically be achieved in the near future, if they’re allowed to speak openly. It’s the techbros and investors who’re blowing everything up so phenomenally that the only way for this bubble (which by now is big enough to crash the US economy on its own) to proceed is to achieve the literally impossible: Artificial General Intelligence within the next 2 to 3 years (they can only pump so much money into it). I haven’t heard a single developer or scientist say this is even remotely realistic.
This is more about economics than programming; the AI experts and grifters just collect the money as long as the bubble persists. It also has to do with tech-solutionism and late-stage capitalism. In the end the science of machine learning will prevail (although a lot of devs might need to find other specialisations; perhaps COBOL?), but this economic house of cards will fall apart. Let’s hope it buries US neolibertarianism and -fascism under it this time.