

I’ve generally been up front when starting new jobs that nothing should impinge on my ability to work on FLOSS software in my own time. Only one company put a restriction in place on working on FLOSS software in the same technical space as my $DAYJOB.


The article mentioned there is a long history of forks in the open source Doom world. It seems the majority of the active developers just moved to the new repository.


I helped with the initial AArch64 emulation support for QEMU as well as working with others to make multi-threaded system emulation a thing. I maintain a number of subsystems, but perhaps the biggest impact was implementing the cross-compilation support that enabled the TCG tests to be run by anyone, including eventually the CI system. This has all been greatly helped by it being a paid gig for the last 12 years.
I’ve done a fair bit of other stuff over my many decades of using FLOSS, including maintaining a couple of moderately popular Emacs packages. I’ve got drive-by patches in loads of projects, as I like to fix things up as I go.
I didn’t know who Kirk was until the assassination; I have better things to do with my limited time than go on a deep dive into someone’s history before posting any comment on the news. I kinda got the vibe when I realised that was who Cartman was based on in the recent South Park.


Was it before or after Oracle acquired Sun that the fork happened? I’m fairly sure it was Oracle that passed the project across to Apache, and I have no idea why the Apache Foundation accepted it.


Yeah, I don’t think this is an ncdu issue; something is broken with the OP’s system.


I’ve long avoided npm, but attacks on PyPI are a worry.


I’ve got an Ampere workstation (AVA) which, from a firmware point of view, works fine. They may even fix the PCIe bus on later versions.


Asahi is a powerful example of what a small, well-motivated team can achieve. However, they still face the Sisyphean task of reverse engineering entirely undocumented hardware and getting that work upstream.
If you love Apple’s hardware then great. Personally, when I have Apple hardware I just tweak the keys to make it a little more like a Linux system and use brew for the tools I’m used to. If I need to, I can always spin up a much more hackable VM.


Arm has been slowly pushing standardisation for the firmware, which solves a lot of the problems. On the server side we are pretty much there. For workstations I’m still waiting for someone to ship hardware with non-broken PCIe. On laptops the remaining challenges are power usage parity with Windows and the insistence of some manufacturers on trying to lock off EL2, which makes virtualization a pain.
Sorry to hear that. Good luck finding a new gig without needing to interact with Teams again.
I used to update my tickets from Emacs org-mode, where I kept my working set of knowledge. The org export functions dealt with whatever format Jira expects. Nowadays I’m mostly tracking stuff, so my comments are generally never more than a “thanks”, a 👍, or occasionally a link to the patch series or pull request.
Jira is alright: not great, not terrible. You need something to track projects and break down work, and at least, being ubiquitous, a lot of people are familiar with it.
Teams is a dumpster fire of excrement though.
What do the inputs and configuration drop-down menus say?
I remember the old ADSL modems that were effectively winmodems. I had to keep a Windows ME machine as my household router until the community had reverse engineered them enough to get them working on Linux.
At least they were USB based rather than some random card. I think the whole driver could work in user space.


VirtIO was originally developed as a device para-virtualization interface as part of KVM, but it is now an OASIS standard: https://docs.oasis-open.org/virtio/virtio/v1.3/virtio-v1.3.html which a number of hypervisors/VMMs support.
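For a flavour of how small the core abstraction is, this is roughly the split-virtqueue descriptor layout from the spec above, written as plain C (the spec itself uses le16/le32/le64 types to stress that everything is little-endian):

    #include <stdint.h>

    /* One entry in a split virtqueue's descriptor table (VirtIO 1.x).
     * The driver fills these in; the device walks the chain. */
    #define VIRTQ_DESC_F_NEXT     1  /* buffer continues via the next field */
    #define VIRTQ_DESC_F_WRITE    2  /* device writes this buffer (else it only reads it) */
    #define VIRTQ_DESC_F_INDIRECT 4  /* buffer contains a table of descriptors */

    struct virtq_desc {
        uint64_t addr;   /* guest-physical address of the buffer */
        uint32_t len;    /* length of the buffer in bytes */
        uint16_t flags;  /* VIRTQ_DESC_F_* bits above */
        uint16_t next;   /* index of the next descriptor if F_NEXT is set */
    };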
The line between what a hypervisor (like KVM) does and what is delegated to a Virtual Machine Monitor - VMM (like QEMU) - is fairly blurry. There is always an additional cost to exiting from the hypervisor to the VMM, so that path tends to be reserved for configuration and lifetime management. However, VirtIO is fairly well designed, so the bulk of VirtIO data transactions can be processed by a dedicated thread which just gets nudged by the kernel when it needs to do something, leaving the VM cores to continue running.
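On Linux/KVM that kernel nudge is typically an ioeventfd: the VMM registers an eventfd against the virtqueue’s notify address, so a guest write there never exits to the VMM, it just wakes the I/O thread. A minimal sketch, assuming a vm_fd for the KVM VM and a made-up notify address:

    #include <stdint.h>
    #include <linux/kvm.h>
    #include <sys/eventfd.h>
    #include <sys/ioctl.h>

    /* Wake an I/O thread when the guest writes the virtqueue notify register,
     * without ever exiting to the VMM. vm_fd is the KVM VM file descriptor. */
    static int register_notify(int vm_fd, uint64_t notify_gpa)
    {
        int efd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);

        struct kvm_ioeventfd ioevent = {
            .addr  = notify_gpa,  /* guest-physical MMIO address, e.g. a placeholder 0xfe003000 */
            .len   = 2,           /* matches the 16-bit queue-index write virtio drivers do */
            .fd    = efd,
            .flags = 0,           /* no datamatch: any write here triggers the eventfd */
        };

        if (ioctl(vm_fd, KVM_IOEVENTFD, &ioevent) < 0)
            return -1;

        /* A dedicated thread can now poll() efd and process the virtqueue
         * while the vCPU that kicked it carries on running. */
        return efd;
    }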
I should add that HVF tends to delegate most things to the VMM rather than deal with them in the hypervisor. It makes for a simpler hypervisor interface, although not quite as performance-tuned as KVM can be for big servers.


No, the Apple hypervisor is HVF (Hypervisor.framework), but projects like rust-vmm and QEMU can control and service guests running on that hypervisor. No KVM required.
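For a sense of how thin that hypervisor interface is, here is a minimal sketch of the Apple Silicon Hypervisor.framework calls a VMM builds on (guest memory mapping, register setup and error handling are elided, and the binary needs the hypervisor entitlement to run):

    #include <Hypervisor/Hypervisor.h>

    int main(void)
    {
        hv_vcpu_t vcpu;
        hv_vcpu_exit_t *exit_info;

        hv_vm_create(NULL);                      /* one VM per process */
        hv_vcpu_create(&vcpu, &exit_info, NULL); /* exit info is mapped for us */

        /* A real VMM would hv_vm_map() guest RAM and set the initial PC with
         * hv_vcpu_set_reg() before entering the guest. */
        hv_vcpu_run(vcpu);                       /* returns on a guest exit... */

        /* ...and the VMM (QEMU, rust-vmm, etc.) inspects exit_info->reason,
         * emulates e.g. an MMIO access, then loops back into hv_vcpu_run(). */

        hv_vcpu_destroy(vcpu);
        hv_vm_destroy();
        return 0;
    }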


virtio-gpu with Vulkan pass-through for the VM, with a Vulkan-to-Metal translator in host user space. There are various talks about this, including at KVM Forum: https://kvm-forum.qemu.org/2024/The_many_faces_of_virtio-gpu_F4XtKDi.pdf


Care needs to be taken with big orgs like the NHS not to try and boil the ocean with massive IT systems. Concentrating on open interoperability standards allows for smaller, more flexible contracts and the ability to swap out components when needed.
Open source licences would be the ideal default, although at a minimum the purchasing org should have a licence that allows them (or their subcontractors) to make fixes without being tied to the original vendor.
mu4e inside my Emacs session.