Half of these exist because I was bored once.

The Windows 10 and macOS ones have GPU passthrough enabled; they're what I occasionally use when I have to run a Windows or Mac application. Windows 7 also has GPU passthrough, but it's more a nostalgia thing than anything.

I think my Pop!_OS VM was originally installed for fun, but I've used it, along with my Arch Linux, Debian 12, Debian Testing (I run Testing on the host, but I wanted a fresh environment and was too lazy to spin up a Docker container or chroot), Ubuntu 23.10, and Fedora VMs, to test various software builds and bugs, since I don't like touching regular Ubuntu unless I must.

The Windows Server 2022 one I spun up recently to mess with Windows Docker containers (I have to port an app to Windows, and was looking at those for CI). That all became moot when I found out GitHub's CI doesn't support Windows Docker containers despite supporting Windows runners (the organization I'm doing it for uses GitHub, so I have to use it).

  • WasPentalive@lemmy.one · 2 months ago

    Why mix Docker and VMs? Isn't Docker sort of like a VM, an application-level VM maybe? (I obviously do not understand Docker well.)

    • Kovukono@pawb.social · 2 months ago

      Serious answer: I'm not sure why someone would run a VM just to run a single container inside it, aside from the VM providing volumes (directories) to the container. That said, VMs are perfectly capable of running containers, and can run multiple containers without issue. For work, our GitLab instance has runners that are VMs that just run containers.
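
      For reference, a runner VM like that just registers with the Docker executor, so each CI job runs in a fresh container on the VM. A rough sketch of the registration (URL and token are placeholders, not from this thread):

      ```
      # Register a Docker-executor runner; jobs then run as throwaway containers
      gitlab-runner register --non-interactive \
        --url https://gitlab.example.com \
        --registration-token PLACEHOLDER \
        --executor docker \
        --docker-image alpine:latest
      ```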

      Fun answer, have you heard of Docker in Docker?
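
      If you haven't: the official docker:dind image runs a full Docker daemon inside a Docker container. A minimal sketch (the nested daemon needs --privileged):

      ```
      # Outer container runs its own Docker daemon
      docker run -d --name dind --privileged docker:dind
      # Once the inner daemon is up, run a container inside the container
      docker exec dind docker run --rm hello-world
      ```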

    • lazynooblet@lazysoci.al · 2 months ago

      I like to run a hypervisor host as just that: a hypervisor host. The host being stable is important, and keeping it to that single role also reduces its attack surface.

      An LXC per service is somewhat overkill. A single Docker host running in an LXC could likely run all of the Docker containers.
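
      The consolidated version is just an ordinary Docker host living in one LXC. A sketch of the idea (service names and images are illustrative, not from this thread):

      ```
      # Inside the single "docker host" LXC, everything runs side by side
      docker run -d --name nextcloud -p 8080:80 nextcloud:latest
      docker run -d --name pihole pihole/pihole:latest
      docker run -d --name jellyfin jellyfin/jellyfin:latest
      ```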

      • olympicyes@lemmy.world · 1 month ago

        I mentioned this above, and not to spam, but there might be a use case that requires a different host distribution. Network isolation might be another reason. For 90% of use cases, you're correct.

    • olympicyes@lemmy.world · 1 month ago

      I have a real use case! I run commercial server software that supports Ubuntu or RHEL-compatible distributions, but my entire environment is Ubuntu. The vendor also allows the server to run in a Docker container, but the container must be based on RHEL. Furthermore, their license terms require me to build the Docker image myself (to accept the EULA), and the image must be built on a RHEL-compatible host! So I have an LXC container running Rocky Linux with Docker installed, just for building the RHEL 8-based images. It's a total mess, but it works! You must configure nested security, because Docker inside LXC doesn't work by default.

      Instructions here: https://ubuntu.com/tutorials/how-to-run-docker-inside-lxd-containers#1-overview
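
      The key step from that tutorial is enabling nesting on the LXD container, roughly like this (the container name is hypothetical, and the Rocky image alias may differ on your image server):

      ```
      lxc launch images:rockylinux/8 docker-builder
      lxc config set docker-builder security.nesting true
      lxc restart docker-builder
      # Then install Docker inside the container and build the RHEL 8-based images there
      ```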

    • wulf@lemmy.world · 1 month ago

      LXCs are much more lightweight than VMs, so there's not as much overhead. I've done it this way so that I can reboot a container (or recover when something goes wrong with an update) without disrupting the other services.

      It also keeps things consistent, since I have some services that don't run in Docker. One service per LXC.
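
      With one service per container, a restart only touches that service. Roughly, in LXD syntax (container names illustrative):

      ```
      # Reboot just the misbehaving service's container
      lxc restart jellyfin
      # Containers for the other services keep running untouched
      ```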