

Yeah I don’t think this is an ncdu issue but something is broken with the OP’s system.
I’ve long avoided npm but attacks on PyPI are a worry.
I’ve got an Ampere workstation (AVA) which, from a firmware point of view, works fine. They may even fix the PCIe bus on later versions.
Asahi is a powerful example of what a small, well-motivated team can achieve. However, they still face the Sisyphean task of reverse engineering entirely undocumented hardware and getting that work upstream.
If you love Apple’s hardware then great. Personally, when I have Apple hardware I just tweak the keys to make it a little more like a Linux system and use brew for the tools I’m used to. If I need to I can always spin up a much more hackable VM.
Arm has been slowly pushing standardisation for the firmware, which solves a lot of the problems. On the server side we are pretty much there. For workstations I’m still waiting for someone to ship hardware with non-broken PCIe. On laptops the remaining challenges are power-usage parity with Windows and the insistence of some manufacturers on trying to lock off EL2, which makes virtualization a pain.
Sorry to hear that. Good luck finding a new gig without needing to interact with Teams again.
I used to update my tickets from Emacs org-mode, where I kept my working set of knowledge. The org export functions dealt with whatever format Jira expects. Nowadays I’m mostly tracking stuff so my comments are generally never more than a “thanks”, 👍 or occasionally a link to the patch series or pull request.
Jira is alright, not great, not terrible. You need something to track projects and break down work, and at least, being ubiquitous, a lot of people are familiar with it.
Teams is a dumpster fire of excrement though.
What do the inputs and configuration drop-down menus say?
I remember the old ADSL modems were effectively winmodems. I had to keep a Windows ME machine as my household router until the point the community had reverse engineered them enough to get them working on Linux.
At least they were USB based rather than some random card. I think the whole driver could work in user space.
VirtIO was originally developed as a device para-virtualization standard as part of KVM, but it is now an OASIS standard: https://docs.oasis-open.org/virtio/virtio/v1.3/virtio-v1.3.html which a number of hypervisors/VMMs support.
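For a rough feel of what the spec actually pins down, this is approximately the split-virtqueue layout from virtio 1.x, sketched as C structs (struct and flag names follow the spec; in guest memory all fields are little-endian):

```c
#include <stdint.h>

/* Split-virtqueue layout, roughly as described in the virtio 1.x spec. */

#define VIRTQ_DESC_F_NEXT     1  /* descriptor chain continues via 'next' */
#define VIRTQ_DESC_F_WRITE    2  /* device writes to this buffer */
#define VIRTQ_DESC_F_INDIRECT 4  /* buffer holds a table of descriptors */

/* A single buffer descriptor; chains are built with the NEXT flag. */
struct virtq_desc {
    uint64_t addr;   /* guest-physical address of the buffer */
    uint32_t len;    /* buffer length in bytes */
    uint16_t flags;  /* VIRTQ_DESC_F_* */
    uint16_t next;   /* next descriptor index if F_NEXT is set */
};

/* Driver -> device: head indices of chains the guest has posted. */
struct virtq_avail {
    uint16_t flags;
    uint16_t idx;     /* where the driver writes its next entry */
    uint16_t ring[];  /* queue-size entries */
};

/* Device -> driver: chains the device has finished with. */
struct virtq_used_elem {
    uint32_t id;   /* head index of the completed descriptor chain */
    uint32_t len;  /* bytes the device wrote into the buffer(s) */
};

struct virtq_used {
    uint16_t flags;
    uint16_t idx;
    struct virtq_used_elem ring[];
};
```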
The line between what a hypervisor (like KVM) does and what is delegated to a Virtual Machine Monitor (VMM, like QEMU) is fairly blurry. There is always an additional cost to leaving the hypervisor for the VMM, so that path tends to be reserved for configuration and lifetime management. However, VirtIO is fairly well designed, so the bulk of VirtIO data transactions can be processed by a dedicated thread which just gets nudged by the kernel when there is work to do, leaving the VM cores to just continue running.
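As a very stripped-down sketch of that “nudge” on KVM: the VMM registers an eventfd against the device’s doorbell address with KVM_IOEVENTFD, and a worker thread sleeps on it, so a guest kick never has to exit to userspace on the vCPU. The names vm_fd, KICK_ADDR and attach_kick_handler here are placeholders for illustration, and the queue-draining body is elided:

```c
#include <stdint.h>
#include <poll.h>
#include <pthread.h>
#include <unistd.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define KICK_ADDR 0xfe003000ULL   /* hypothetical MMIO doorbell address */

/* Dedicated I/O thread: sleeps until the guest "kicks" the queue. */
static void *queue_worker(void *arg)
{
    int efd = *(int *)arg;
    struct pollfd pfd = { .fd = efd, .events = POLLIN };
    uint64_t kicks;

    for (;;) {
        poll(&pfd, 1, -1);                 /* wait for a notification */
        read(efd, &kicks, sizeof(kicks));  /* clear the counter */
        /* ...drain the virtqueue: walk descriptors, do the I/O,
         * fill the used ring, then inject an interrupt (e.g. irqfd)... */
    }
    return NULL;
}

/* Wire a guest MMIO write at KICK_ADDR straight to an eventfd, so the
 * vCPU never exits to the VMM just to say "there's work to do". */
int attach_kick_handler(int vm_fd)
{
    static int efd;
    pthread_t tid;

    efd = eventfd(0, 0);
    if (efd < 0)
        return -1;

    struct kvm_ioeventfd ioev = {
        .addr  = KICK_ADDR,
        .len   = 4,            /* 32-bit write to the doorbell */
        .fd    = efd,
        .flags = 0,            /* MMIO, match any written value */
    };
    if (ioctl(vm_fd, KVM_IOEVENTFD, &ioev) < 0)
        return -1;

    return pthread_create(&tid, NULL, queue_worker, &efd);
}
```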
I should add HVF tends to delegate most things to the VMM rather than deal with them in the hypervisor. It makes for a simpler hypervisor interface, although not quite as performance-tuned as KVM can be for big servers.
No, the Apple hypervisor is called HVF (Hypervisor.framework), but projects like rust-vmm and QEMU can control and service guests running on that hypervisor. No KVM required.
virtio-gpu with Vulkan passthrough for the VM, with a Vulkan-to-Metal translator in host user space. There are various talks about this, including at KVM Forum: https://kvm-forum.qemu.org/2024/The_many_faces_of_virtio-gpu_F4XtKDi.pdf
Care needs to be taken with big orgs like the NHS not to try and boil the ocean with massive IT systems. Concentrating on open interoperability standards allows for smaller, more flexible contracts and the ability to swap out components when needed.
Open source licences would be the ideal default, although at a minimum the purchasing org should have a licence that allows them (or subcontractors) to make fixes without being tied to the original vendor.
The other option is to use VirtIO with Native Context support as a software-based partitioning scheme that is relatively lightweight compared to the mdev approach.
The kernel on GitHub is just a mirror - the primary source is on kernel.org
Not just that - modern Android compiles apps in a VM these days to reduce the attack surface of the compiler. You can also push other services into VMs that support the main image. You could even push some vendor drivers into VMs and help keep the main kernel less of a vendor fork-fest.
FLOSS projects can only be sustainable if there are enough shared interests able to support them through contributions of all kinds. Fortunately the code is free, so that constellation of support can change over time. It’s a shame this particular line of government funding is coming to an end, but others can help.
I think the most useful thing for this is hosting repos that suffer from constant DMCA takedowns: emulators, ad-blockers, site revancers, etc.
Was it before or after Oracle acquired Sun that the fork happened? I’m fairly sure it was Oracle that passed the project across to Apache, and I have no idea why the Apache Foundation accepted it.