I recently found out about a Linux distro named Q4OS and wanted to test their claim that it only requires 256 MB of RAM when using the Trinity desktop environment. However, when I booted the live CD in virt-manager with 256 MB of RAM, it just kernel panicked at boot, so I then tried it with 512 MB. Besides some issues that don’t show up with at least 1 GB of RAM, such as “sudo apt update” making the entire VM unresponsive, I noticed that it actually seemed to use anywhere between 290 MB and 370 MB of RAM when the only thing running was the process viewer (which is htop).
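
For reference, the same test can be set up from the command line with something roughly like this (the ISO filename and --os-variant value are just placeholders for whatever matches the download):

    # boot the Q4OS live CD in a throwaway VM with only 256 MiB of RAM assigned
    # --disk none means no virtual disk, just the live session
    virt-install \
      --name q4os-test \
      --memory 256 \
      --vcpus 1 \
      --cdrom ~/Downloads/q4os-live.iso \
      --disk none \
      --os-variant debian11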

Obviously, this is still very low for a modern Linux distro, but I was wondering how accurate VMs are for testing RAM usage.
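
For what it’s worth, htop’s memory meter isn’t the only way to read it; these should paint roughly the same picture inside the guest, since they also try to exclude reclaimable caches and buffers:

    # total / used / available memory, in MiB
    free -m

    # the kernel's own estimate of memory available to new programs
    grep MemAvailable /proc/meminfo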

And yes, I know that it would be pretty much useless on a PC that only had 256 MB of RAM even if it did work. I’m actually checking the RAM usage because there is a possibility that I may be using a very old computer of mine that only has 1 GB of RAM at some point in the future. So I’m just testing it, and eventually other distros, to see which one I’m going to end up using (assuming I actually end up even using that computer).

Edit: I just tried the 32-bit version in virt-manager and htop stated it was only using 232 MB of RAM, which means that their claim was right and that I might have been using the wrong version.

Edit 2: I just tried installing the 64-bit version in virt-manager and htop stated that it was using about 350 MB of RAM, so I don’t know if installing it actually made a difference.

  • ReversalHatchery@beehaw.org

    There could also be differences in which hardware drivers are loaded and operating. In a VM, the graphical environment probably uses software rendering, which is also expected to take up some system memory if you didn’t pass through a GPU, but maybe that’s accounted for differently.
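
    If you want to check, something like this inside the guest (assuming mesa-utils is installed) should show whether the desktop is falling back to llvmpipe software rendering:

        # prints the OpenGL renderer string; "llvmpipe" means software rendering on the CPU
        glxinfo -B | grep -i "renderer"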

  • NaN@lemmy.sdf.org

    The install CD is probably just running the Debian installer, which is way more lightweight.

    “Use the install-cd media for older 64bit as well as 32bit machines.” - probably applies to such low memory.

    Also, you should probably use the 32-bit CD. 64-bit binaries use more memory, and realistically anyone building with an Athlon 64 (2003) or newer was probably also installing more memory than that.
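
    If in doubt about which image actually ended up installed, a couple of standard commands should settle it:

        # kernel architecture: typically i686 for a 32-bit install, x86_64 for 64-bit
        uname -m

        # what apt considers the native architecture: i386 vs amd64
        dpkg --print-architecture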

    • vortexal@lemmy.ml (OP)

      Right, I forgot that 64-bit binaries use more RAM. And seeing that the 64-bit version does work fine with 1 GB of RAM, on the off chance that there is something that should work but requires a 64-bit OS, I would still have the option to use the 64-bit version.

  • BCsven@lemmy.ca

    On a somewhat unrelated note: I have an old Iomega ARM board running an old version of Debian and OpenMediaVault. It only has 256 MB of RAM and only uses about 30% of that while streaming DLNA audio. Linux can be super minimal.

  • LeFantome@programming.dev

    I just installed this myself (Trinity Desktop, 32-bit). What a weird and wonderful mix of old and new.

    Running htop in konsole after install reported 245 MB of memory used. So, less than 256 MB confirmed.

  • warmaster@lemmy.world

    I use Bazzite. AFAIK SteamOS runs inside a container, and the performance is amazing. I’ve read the same thing from people who do VFIO GPU passthrough to a Windows VM. If you use kernel-based virtualization, there should be no difference.
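
    If anyone wants to confirm that a VM is actually using KVM hardware virtualization rather than pure emulation, something like this on the host should do it:

        # non-zero output means the CPU advertises VT-x / AMD-V
        egrep -c '(vmx|svm)' /proc/cpuinfo

        # the kvm module plus kvm_intel or kvm_amd should be loaded
        lsmod | grep kvm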

        • TimeSquirrel@kbin.social

          A container is just an environment where it appears to any program running within it that it has full access to the computer, while in reality it’s “jailed” and isolated from the rest of the system. The OS resources are shared with the container, instead of the hardware resources as in a virtual machine. There’s no hardware being emulated. It’s a beefed up version of a chroot.
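
          If you want to get a feel for the “beefed up chroot” idea without Docker, util-linux can sketch the isolation part with namespaces (this is only process isolation, not a full container runtime):

              # start a shell in new PID and mount namespaces
              sudo unshare --fork --pid --mount-proc bash

              # inside that shell, ps only sees the isolated processes, not the rest of the system
              ps aux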

        • meteokr@community.adiquaints.moe

          Docker, i.e. containers, is actually a process isolation system similar to, but not exactly the same as, a chroot, if you are familiar with that. It’s an isolation of resources, but not hardware-isolated like a full-fat VM. For example, adding a GPU to a VM requires handing over the full PCIe hardware interface, with one interface per VM, whereas containers can just bind mount the device files in /dev and multiple containers can share the same GPU hardware. Containers aren’t virtualizing anything, just isolating processes from each other in a standardized way.
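
          As a rough illustration (the image name is just an example), sharing a GPU with a container is mostly a matter of handing over the device nodes, and several containers can do this at the same time:

              # expose the host's DRM device nodes (/dev/dri) to the container
              # instead of passing through the whole PCIe device
              docker run --rm -it --device /dev/dri:/dev/dri debian:stable bash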