I have three Ethernet interfaces, namely eth[0…2]. eth0 is connected to my VPN router, and eth1 and eth2 are connected to my public-facing router. eth0 is the standard interface that I normally let my Linux instance use. I now want to set up a container that hijacks (i.e., makes unavailable to the host) eth1 or eth2 in order to run various services that need to be reachable from the WAN through a WireGuard tunnel.
I am aware that the man pages for systemd-nspawn say that it is primarily meant to be a test environment and not a secure container. Does anybody have experience with and/or opinions on this? Should I just learn how to use Docker?
For now, I am only asking about any potential security implications, since I don’t understand how container security works “under the hood”. The network portion of the setup would be something like:
Enabling forwarding kernel parameters on the host
Booting the container with systemd-nspawn -b -D [wherever/I/put/the/container] --network-interface=[eth1 or 2]
Then, managing the container’s network with networkd config files, including enabling IPForward and IPMasquerade
Then, configuring WireGuard according to their official guides or, for instance, the Arch wiki. (A rough sketch of all four steps follows below.)
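In concrete terms, I’m picturing something like the sketch below. The container path, subnet, port, and peer key are placeholders I made up, I’m using eth1 throughout, and the wg0 private key is assumed to already exist:

```
# 1. Host: enable IP forwarding
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/30-ipforward.conf
sysctl --system

# 2. Host: boot the container and move eth1 into its namespace
#    (eth1 disappears from the host while the container runs)
systemd-nspawn -b -D /var/lib/machines/wg-services --network-interface=eth1

# 3. Container: let networkd manage eth1 with forwarding + masquerading
cat > /etc/systemd/network/20-wan.network <<'EOF'
[Match]
Name=eth1

[Network]
DHCP=yes
IPForward=yes
# on older systemd, use IPMasquerade=yes instead of both
IPMasquerade=both
EOF

# 4. Container: a minimal WireGuard interface for the tunnel
cat > /etc/systemd/network/30-wg0.netdev <<'EOF'
[NetDev]
Name=wg0
Kind=wireguard

[WireGuard]
PrivateKeyFile=/etc/systemd/network/wg0.key
ListenPort=51820

[WireGuardPeer]
PublicKey=<peer-public-key>
AllowedIPs=10.0.0.0/24
EOF

cat > /etc/systemd/network/30-wg0.network <<'EOF'
[Match]
Name=wg0

[Network]
Address=10.0.0.1/24
EOF

systemctl restart systemd-networkd
```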
Any and all input would be appreciated! 😊


Would LXC be more inconvenient? I don’t trust Docker’s pseudo-containering.

Unless you’re downloading a prebuilt LXC, you’d still have to do all the manual install yourself.
If you do download a prebuilt one, then you’ll need to do the updating yourself, like you would a normal application, including ensuring you keep dependencies up to date and all that.
Both have their pros and cons and I use each depending on what I’m doing (and basically all of my dockers are running in their own LXC containers, which I find to be the best of both worlds).
FWIW, I don’t download any prebuilt LXC anymore other than the base ‘Ubuntu’ or ‘Debian’ ones … the ones in ProxMox that have the prebuilt apps were a pain to update for me, especially since I had no idea how they were actually installed, and most of the time the apps weren’t package manager installations and curl wasn’t even installed; it was just way more trouble than it was worth.
ProxMox does now have a built-in containerized Docker implementation that will use an LXC, and you can just provide it the Docker package details, but it’s still in beta and I don’t know that it’s ready to be depended on yet.
Thanks. How about taking a Docker container and converting its spec?
Sorry, not 100% sure what you mean “converting its spec”
If you mean take an existing docker and move it to a standard installation, that would depend on what all is needed. Some installations include a ton of other dockers with databases and such and you’d basically need to move them all independently and ensure the programs talk to each other properly.
For others, it’d be as simple as making sure the contents of your original docker data folder are in the right place when you launch the app, and you’re done.
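For example, something like this (the container name and paths are made up, just to show the shape of it):

```
# copy the app's data out of the old docker container...
docker cp myapp:/var/lib/myapp/data ./myapp-data
# ...and drop it where the native install expects it
rsync -a ./myapp-data/ /var/lib/myapp/data/
```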
Oof, okay. Although you could probably just merge the dependencies into your LXC container? It works like this when creating AppImages.

About “converting its spec”: I assumed the main friction point would be the LXC tooling not knowing Dockerfiles. I’d forgotten the name of the container specification file (the Dockerfile), since it’s been a while since I last looked into containerization.
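If I understand it right, the “conversion” would basically be replaying the Dockerfile’s steps by hand inside the container, e.g. (nginx and the container name “web” are just examples):

```
# What the Dockerfile would declare:
#   FROM debian:bookworm
#   RUN apt-get update && apt-get install -y nginx

# Roughly the same thing, done manually with LXC tooling:
lxc-create -n web -t download -- -d debian -r bookworm -a amd64
lxc-start -n web
lxc-attach -n web -- apt-get update
lxc-attach -n web -- apt-get install -y nginx
```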
Huh, there’s also “Apptainer” now? Portable and reproducible, seems interesting.
I’m assuming you mean LXC? It’s doable, but without some sort of orchestration tooling like Nix or Ansible, I imagine ongoing maintenance or migrations would be kind of a headache.