You can’t choose where you grow up. :(
I’m pretty sure fused multiply-add with store is part of the AVX instruction set.
Recursion makes code cheaper to run in the dev’s mind, but more expensive to run on the computer: a call-and-return pair always costs more than a simple jump.
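A quick Python sketch of the trade-off (function names are mine, purely illustrative): the recursive version reads nicely but pays for a stack frame per call and dies on the default recursion limit, while the loop just grinds through.

```python
import sys

def sum_recursive(n):
    # One stack frame per call: nicer to read, but every call pays for
    # frame setup/teardown, and deep inputs blow the stack.
    if n == 0:
        return 0
    return n + sum_recursive(n - 1)

def sum_iterative(n):
    # Same result with a plain loop: no call overhead, no depth limit.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

print(sum_iterative(100_000))       # 5000050000, no sweat
print(sys.getrecursionlimit())      # ~1000 on a default CPython
try:
    print(sum_recursive(100_000))
except RecursionError:
    print("recursion limit hit")    # the "expensive on the computer" part
```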
Hand-written assembly is much more powerful than a Turing-complete high-level language because it lets you fuck up everything. Rust and Python are way too wimpy to let a user destroy their computer.
So you made a meme about how your opponent is completely irrational and you are a paragon of logic and reason, and then proceeded to declare yourself the winner?
Everything can be done in constant time, at least at runtime, with a sufficiently large look-up table. It’s easy! If you want to simulate the universe exactly, you just need a table with n × m entries, where n is the number of Planck volumes in the universe and m is the number of quantum fields. Then you just compute all of them at compile time, and you have O(1) time complexity at runtime.
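Not quite a universe simulator, but here’s the trick in miniature (a toy Python sketch; `expensive` is a made-up stand-in function): do all the heavy work up front, and every runtime query collapses to a single O(1) table read.

```python
# Toy version of "compute everything ahead of time": pay an expensive
# precomputation once, then answer queries with one indexed lookup.

def expensive(n):
    # Stand-in for whatever costly function you actually care about.
    return sum(i * i for i in range(n))

TABLE_SIZE = 5_000
# "Compile-time" step: build the look-up table for every allowed input.
LOOKUP = [expensive(n) for n in range(TABLE_SIZE)]

def expensive_o1(n):
    # "Runtime" step: a single indexed read, constant time.
    return LOOKUP[n]

print(expensive_o1(4_999) == expensive(4_999))  # True
```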
There are bindings in Java and C++, but Python is the industry standard for AI. The machine learning libraries are actually written in C++ but expose Python bindings. Python doesn’t tend to slow things down, since machine learning is GPU-bound anyway. There are also library-specific sub-languages that urge the user to write Pythonic code which can be compiled down to C++.
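A rough illustration of that split, assuming PyTorch is installed: the Python you write is mostly glue, and the actual number crunching happens in compiled C++/CUDA kernels (on the GPU if one is available), so the interpreter is rarely the bottleneck.

```python
# The Python here is just glue; the matmul itself runs inside PyTorch's
# compiled C++/CUDA kernels, not in the interpreter.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(2048, 2048, device=device)
b = torch.randn(2048, 2048, device=device)

c = a @ b   # one Python bytecode op; billions of FLOPs in native code
print(c.shape, c.device)
```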
I completely agree that it’s a stupid way of doing things, but it is how OpenAI reduced the vocab size of GPT-2 and GPT-3. As far as I know (I have only read the comments in the source code), the conversion is done as a preprocessing step. Here’s the code for GPT-2: https://github.com/openai/gpt-2/blob/master/src/encoder.py I did apparently make a mistake: the vocab reduction is done through a LUT rather than a simple mod.
Can’t find the exact source (I’m on mobile right now), but the GPT-2 encoder uses a UTF-8-byte-to-Unicode look-up table to shrink the vocab size. https://github.com/openai/gpt-2/blob/master/src/encoder.py
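Paraphrasing the idea from memory rather than quoting encoder.py (see the repo above for the real thing): every possible raw byte gets mapped through a fixed 256-entry table to a printable Unicode character, so the BPE tokenizer only ever sees a tiny fixed alphabet no matter what language the input is in. A sketch of that LUT in Python:

```python
# Sketch of the byte-to-unicode LUT idea used by the GPT-2 encoder
# (paraphrased from memory, not copied from the repo).

def bytes_to_unicode():
    # Bytes that are already nice printable characters map to themselves.
    bs = (list(range(ord("!"), ord("~") + 1))
          + list(range(ord("¡"), ord("¬") + 1))
          + list(range(ord("®"), ord("ÿ") + 1)))
    cs = bs[:]
    n = 0
    # Everything else (control bytes, etc.) is shifted up past 255 so it
    # still lands on a printable code point.
    for b in range(256):
        if b not in bs:
            bs.append(b)
            cs.append(256 + n)
            n += 1
    return dict(zip(bs, [chr(c) for c in cs]))

BYTE_LUT = bytes_to_unicode()

text = "héllo"
mapped = "".join(BYTE_LUT[b] for b in text.encode("utf-8"))
print(mapped)  # every input byte is now exactly one printable character
```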
This might be happening because of the ‘elegant’ (read: incredibly hacky) way OpenAI encodes multiple languages into their models. Instead of using all character sets, they apply a modulo operator to each character so that every Unicode character is represented by a small range of values. On the back end, it somehow detects which language is being spoken and uses that character set for the response. Seeing as the last line seems to be the same mathematical expression as what you asked, my guess is that your equation just happened to map perfectly onto some sentence that would make sense in the weird language.
Anything that’s Turing-complete, has enough RAM, and has a C compiler can run Linux. Theoretically, you could program a CPLD to run Brainfuck and you could still run Linux.
Oh boy, can’t wait for DOGE to receive all the private info the government stores about me! I’m sure that hiring kids with no experience to program every single automatable aspect of the government will turn out just fine! 🫠