☆ Yσɠƚԋσʂ ☆

  • 524 Posts
  • 496 Comments
Joined 6 years ago
Cake day: January 18th, 2020



  • My view is that all corps are slimy, some are just more blatant about it than others. I do agree that Apple stuff tends to be overpriced, and I’d love to see somebody else offer a similar architecture using RISCV that targets Linux. I’m kind of hoping some Chinese vendors will start doing that at some point. What Apple did with their architecture is pretty clever, but it’s not magic, and now that we know how and why it works, it seems like it would make sense for somebody else to do something similar.

    The big roadblock in the West is the fact that Windows has a huge market share, and the market of Linux users is just too small for a hardware vendor to target without also having Windows support. But in China there’s an active push to get off the US tech stack, and that means Windows doesn’t have the same relevance there.



  • I really hope the project doesn’t die; they had some people leave recently and there was some drama over that. Apple hardware is really nice, and with Linux it would be strictly superior to macOS, which is just bloated garbage at this point. I’m also hoping we’ll see somebody else make an architecture similar to the M series using ARM or RISCV, targeting Linux. Maybe we’ll see some Chinese vendors go the RISCV route in the future.



  • It’s not an apples to apples comparison because the architecture is so different. Notice his observation in the article:

    I am very impressed with how smooth and problem-free Asahi Linux is. It is incredibly responsive and feels even smoother than my Arch Linux desktop with a 16 core AMD Ryzen 7945HX and 64GB of RAM.

    The M1 architecture has a huge advantage in being a SoC with memory shared between the CPU and the GPU, which avoids having to shuffle data back and forth over a bus. I’m still using an M1 MacBook with 8GB of RAM that I got to keep at one of my jobs a few years ago, and it’s incredibly snappy. I’ve tried x86 laptops with way better specs on paper, and they don’t come anywhere close in practice.



  • Erlang isn’t special because it’s functional, but rather it’s functional because that was the only way to make its specific architecture work. Joe Armstrong and his team at Ericsson set out to build a system with nine nines of reliability. They quickly realized that to have a system that never goes down, you need to be able to let parts of it crash and restart without taking down the rest. That requirement for total isolation forced their hand on the architecture, which in turn dictated the language features.

    The specialness is entirely in the BEAM VM itself, which acts less like a language runtime such as the JVM or CLR, and more like a mini operating system. In almost every other environment, threads share a giant heap of memory. If one thread corrupts that memory, the whole ship sinks. In Erlang, every single virtual process has its own tiny, private heap. This is the killer architectural feature that makes Erlang special. Because nothing is shared, the VM can garbage collect a single process without stopping the world, and if a process crashes, it takes its private memory with it, leaving the rest of the system untouched (there’s a small sketch of this isolation at the end of this comment).

    The functional programming aspect is just the necessary glue to make a shared-nothing architecture usable. If you had mutable state scattered everywhere, you couldn’t trivially restart a process to a known good state. So they stripped out mutation to enforce isolation. The result is that Erlang creates a distributed system inside a single chip. It treats two processes running on the same core with the same level of mistrust and isolation as two servers running on opposite sides of the Atlantic.

    Learning the functional style can be a bit of a brain teaser, but I would highly recommend it. Once you learn to think this way, it will help you write imperative code as well, because you’ll have a whole new perspective on state management.

    And yeah, there are functional languages that don’t rely on a VM; Carp is a good example: https://github.com/carp-lang/Carp
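
    To make the isolation concrete, here’s a minimal sketch in plain Erlang (the isolation_demo module and the counter function are just made-up names for illustration, and there’s no OTP supervisor involved): two processes are spawned, each with its own private heap, and one of them crashing has no effect on the other.

    ```erlang
    %% Minimal sketch: each spawned process has its own private heap, so one
    %% process crashing leaves the other completely untouched.
    -module(isolation_demo).
    -export([run/0, counter/1]).

    %% A tiny stateful process: its only state is the argument it recurses with.
    counter(N) ->
        receive
            {add, From, X} ->
                From ! {total, N + X},
                counter(N + X);
            crash ->
                exit(boom)              % dies, taking only its own heap with it
        end.

    run() ->
        A = spawn(?MODULE, counter, [0]),
        B = spawn(?MODULE, counter, [0]),
        A ! crash,                      % A is gone after this
        B ! {add, self(), 5},           % B keeps working, unaffected by A
        receive {total, T} -> T end.    % returns 5
    ```

    In a real system the restart to a known good state would come from an OTP supervisor watching these processes; the sketch only shows the crash isolation part.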


  • RISCV would be a huge step forward, and there are projects like this one working on making a high performance architecture using it. But I’d argue that we should really be rethinking the way we do programming as well.

    The problem goes deeper than just the translation layer because modern chips are still contorting themselves to maintain a fiction for a legacy architecture. We are basically burning silicon and electricity to pretend that modern hardware acts like a PDP-11 from the 1970s because that is what C expects. C assumes a serial abstract machine where one thing happens after another in a flat memory space, but real hardware hasn’t worked that way in decades. To bridge that gap, modern processors have to implement insane amounts of instruction level parallelism just to keep the execution units busy.

    This obsession with pretending to be a simple serial machine also causes security nightmares like Meltdown and Spectre. When the processor speculates past an access check and guesses wrong, it throws the work away, but that discarded work leaves side effects in the cache that attackers can measure. It’s a massive security liability introduced solely to let programmers believe they are writing low-level code when they are actually writing for a legacy abstraction. On top of that, you have things like the register rename engine, which is a huge consumer of power and die area, running constantly to manage dependencies in scalar code. If we could actually code for the hardware, like how GPUs handle explicit threading, we wouldn’t need all this dark silicon wasting power on renaming and speculation just to extract speed from a language that refuses to acknowledge how modern computers actually work. This is a fantastic read on the whole thing: https://spawn-queue.acm.org/doi/10.1145/3212477.3212479

    We can look at Erlang/OTP for an example of what a language platform looks like when it stops lying about the hardware and actually embraces how modern chips work. Erlang was designed from the ground up for massive concurrency and fault tolerance. In C, creating a thread is an expensive OS-level operation, and managing shared memory between threads is a nightmare that requires complex locking with mutexes and forces the CPU to work overtime maintaining cache coherency.

    Meanwhile, in the Erlang world, you don’t have threads sharing memory. Instead, you have lightweight processes that use something like 300 words of memory, share nothing, and only communicate by sending messages. Because the data is immutable and isolated, the CPU doesn’t have to waste cycles worrying about one core overwriting what another core is reading. You don’t need complex hardware logic to guess what happens next, because the parallelism is explicit in the code rather than hidden. The Erlang VM basically spins up a scheduler on each physical core and just churns through these millions of tiny processes. It feeds the hardware independent, parallel chunks of work without the illusion of serial execution, which is exactly what the hardware wants. So, if you designed a whole stack from hardware to software around this idea, you could get a far better overall architecture. There’s a small sketch of the message passing model below.
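
    Here’s a minimal sketch of that model in plain Erlang (the fanout_demo module name, the 100000 process count, and the squaring work are all just placeholders): work is fanned out to a pile of lightweight processes, and the results come back purely through message passing, with no shared memory and no locks.

    ```erlang
    %% Minimal sketch: fan work out to many lightweight processes and collect
    %% the results through message passing alone.
    -module(fanout_demo).
    -export([run/0]).

    run() ->
        Parent = self(),
        N = 100000,
        %% Each spawn costs a few hundred words of heap, so this is cheap.
        [spawn(fun() -> Parent ! {result, I * I} end) || I <- lists:seq(1, N)],
        collect(N, 0).

    %% The only coordination point is the parent's mailbox; the BEAM schedulers
    %% spread the spawned processes across the physical cores on their own.
    collect(0, Sum) -> Sum;
    collect(Left, Sum) ->
        receive
            {result, R} -> collect(Left - 1, Sum + R)
        end.
    ```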




  • I got one from a startup I worked at a couple of years ago, and then when the whole Silicon Valley Bank crash happened they laid me off, but let me keep it. And yeah, Asahi is still pretty barebones, mainly because you can basically only use open source apps that can be compiled for it. I’m really hoping to see something like the M series from China, but using RISCV and with Linux.