I have three Ethernet interfaces, namely eth[0…2]. eth0 is connected to my VPN router and eth1 and eth2 are connected to my public facing router. eth0 is the standard interface that I normally let my Linux instance use. I now want to set up a container that hijacks (makes unavailable to the host) eth1 or eth2 in order to run various services that need to be reachable from WAN through a Wireguard tunnel.

I am aware that the man pages for systemd-nspawn say that it is primarily meant to be a test environment and not a secure container. Does anybody have experience with and/or opinions on this? Should I just learn how to use Docker?

For now, I am only asking about any potential security implications, since I don’t understand how container security works “under the hood”. The network portion of the setup would be something like:

Enabling the packet-forwarding kernel parameters (e.g. net.ipv4.ip_forward) on the host

Booting the container with systemd-nspawn -b -D [wherever/I/put/the/container] --network-interface=[eth1 or 2]

Then, managing the container’s network with networkd config files, including enabling IPForward and IPMasquerade

Then, configuring WireGuard according to the official guides or, for instance, the Arch wiki.
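
A minimal sketch of what the container side could look like with networkd (the file name is hypothetical, eth1 stands for whichever interface gets handed over, and DHCP vs. static addressing depends on the router):

```ini
# Inside the container: /etc/systemd/network/20-wan.network (hypothetical name)
# eth1 = the interface passed in with --network-interface=eth1
[Match]
Name=eth1

[Network]
DHCP=yes
IPForward=yes
IPMasquerade=ipv4
```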

Any and all input would be appreciated! 😊

  • doodoo_wizard@lemmy.ml · 1 hour ago

    Cons:

    It’s not gonna work

    It’s not well documented

    No one else does it so it’s hard to ask for help

    You don’t even need a container for this, just use the routing table

    Pros:

    New project

    No chance to be led astray by stackoverflow or reddit

    Contributing to systemd development by testing new features

    • emotional_soup_88@programming.dev (OP) · 11 minutes ago

      Well, now I just have to try it!

      I have no idea how to tell specific processes or shells to use a specific interface, while also forbidding others to use the same interface… Which is why I thought, “but I can force a container to use a specific interface! Gotcha!”

      I’m almost there, I think. I managed to get my phone and my nspawn-ed wireguard interface to shake hands. I just need to tweak the forwarding and nat-ing rules in my firewall. After I touch grass. Oh, my back…
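
      For reference, the forwarding/NAT rules I still need to tweak would presumably go roughly in this direction (an nftables sketch with made-up names: wg0 for the WireGuard interface, eth1 for the hijacked uplink):

      ```
      table inet wg_fwd {
          chain forward {
              type filter hook forward priority filter; policy drop;
              ct state established,related accept
              iifname "wg0" oifname "eth1" accept
          }
          chain postrouting {
              type nat hook postrouting priority srcnat;
              oifname "eth1" masquerade
          }
      }
      ```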

  • truthfultemporarily@feddit.org · 7 hours ago

    This feels like a hacky solution.

    Why not use VLANs? You can have just one physical interface and then create VLAN interfaces on top of it. You can then use a bridge so that every container gets its own interface and IP attached to a specific VLAN.
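
    With systemd-networkd, for instance, that could look roughly like this (the VLAN ID, device name, and file names are just examples):

    ```ini
    # 25-vlan10.netdev (hypothetical): defines the VLAN device
    [NetDev]
    Name=vlan10
    Kind=vlan

    [VLAN]
    Id=10

    # 25-uplink.network (hypothetical): attaches the VLAN to the physical interface
    [Match]
    Name=eth1

    [Network]
    VLAN=vlan10
    ```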

  • talkingpumpkin@lemmy.world · 7 hours ago

    Should I just learn how to use Docker?

    Since you are not tied to docker yet, I’d recommend going with podman instead.

    They are practically the same and most (all?) docker commands work on podman too, but podman is more modern (second-generation advantage) and has a better security reputation (it is daemonless and can run rootless).

    As for passing a network interface to a container, it’s doable and IIRC it boils down to moving the interface into the container’s network namespace.
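
    For reference, the namespace move boils down to a couple of iproute2 commands (the namespace name and address are made-up examples; rootful podman manages its own namespaces for you rather than named ones):

    ```shell
    # Hypothetical example: hand eth1 over to a named network namespace.
    ip netns add svc                  # create the namespace
    ip link set eth1 netns svc        # eth1 disappears from the host
    ip netns exec svc ip addr add 192.0.2.10/24 dev eth1
    ip netns exec svc ip link set eth1 up
    ```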

    Unless you have specific reasons to do that, I’d say it’s much easier to just forward ports from the host to containers the “normal” way.

    There’s no limit to how many different IPs you can assign to a host (you don’t need a separate interface for each one) and you can use a given port on different IPs for different things.

    For example, I run soft-serve (a git server) as a container. The host has one “management” IP (192.168.10.243) where openssh listens on port 22 and another IP (192.168.10.98) whose port 22 is forwarded to the soft-serve container via podman run [...] -p 192.168.10.98:22:22.

    • emotional_soup_88@programming.dev (OP) · 7 hours ago

      Thank you for the suggestion on Podman! The thing is, since the VPN is running on one of my routers (connected to eth0), and since I want the public-facing interfaces (1 and 2) not to use that router, I’m going to make use of one of those two extra interfaces anyway. Either way, good advice on adding multiple addresses to the same interface!

  • a_fancy_kiwi@lemmy.world · 8 hours ago

    Should I just learn how to use Docker?

    Yes. I put off learning it for so long and now can’t imagine self-hosting anything without it. I think all you have to do is set a static IP to the NIC from your router and then specify the IP and port in a docker-compose.yml file:

    Ex: IP-address:external-port:container-port

    services:
        app-name:
            image: registry.example/app-name:latest    # placeholder image
            ports:
                - "192.168.1.42:3000:3000"
    
      • TVA@thebrainbin.org · 3 hours ago

        Unless you’re downloading a prebuilt LXC, you’d still have to do all the manual install yourself.

        If you do download a prebuilt one, then you’ll need to do the updating yourself, like you would a normal application, including ensuring you keep dependencies up to date and all that.

        Both have their pros and cons and I use each depending on what I’m doing (and basically all of my dockers are running in their own LXC containers, which I find to be the best of both worlds).

        FWIW, I don’t download any prebuilt LXC anymore other than the base ‘Ubuntu’ or ‘Debian’ ones … the ones in Proxmox that have the prebuilt apps were a pain to update for me, especially since I had no idea how they were actually installed, and most of the time they didn’t have package-manager installations or curl installed, and it was just way more trouble than it was worth.

        Proxmox now has a built-in containerized Docker implementation that uses an LXC and you can just provide it the Docker package details, but it’s still in beta and I don’t know that it’s ready to be depended on yet.

          • TVA@thebrainbin.org · 3 hours ago

            Sorry, not 100% sure what you mean “converting its spec”

            If you mean take an existing docker and move it to a standard installation, that would depend on what all is needed. Some installations include a ton of other dockers with databases and such and you’d basically need to move them all independently and ensure the programs talk to each other properly.

            For others, it’d be as simple as making sure the contents of your original docker data folder are in the right place when you launch the app, and you’re done.

            • MonkderVierte@lemmy.zip · 55 minutes ago (edited)

              Oof, okay. Although you could probably just merge the dependencies into your LXC container? It works like this when creating AppImages.

              About “converting its spec”: I assumed the main friction point would be the LXC tooling not understanding Dockerfiles. I’d forgotten the name of the container specification file (the Dockerfile), since it’s been a while since I last looked into containers.

              Huh, there’s also “Apptainer” now? Portable and reproducible, seems interesting.

      • a_fancy_kiwi@lemmy.world · 7 hours ago

        I’m assuming you mean LXC? It’s doable but without some sort of orchestration tools like Nix or Ansible, I imagine on-going maintenance or migrations would be kind of a headache.

      • a_fancy_kiwi@lemmy.world · 7 hours ago

        You might come across docker run commands in tutorials. Ignore those. Just focus on learning docker compose. With docker compose, the options from the run command just go into a YAML file, so it’s easier to read and understand what’s going on. Don’t forget to add your user to the docker group so you don’t have to type sudo for every command.
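
        For example, a typical tutorial-style run command and its compose equivalent (the image and ports here are just an illustration):

        ```yaml
        # docker run -d -p 8080:80 nginx:alpine   <- the tutorial one-liner
        # ...is equivalent to this docker-compose.yml:
        services:
          web:
            image: nginx:alpine
            ports:
              - "8080:80"
        ```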

        Commands you’ll use often:

        docker compose up - runs container

        docker compose up -d - runs container in detached mode (in the background)

        docker compose down - stops and removes the containers

        docker compose pull - pulls new images

        docker image list - lists all images

        docker ps - lists running containers

        docker image prune -a - deletes images not being used by containers to free up space