I recently read this article, which talks about restricting the (proactive) internet access of a process.
It is easy to completely disable internet/network access by throwing a process into a new private network namespace. I think all popular sandboxing tools support this nowadays (a quick demonstration follows the list):
- unshare -n
- bwrap --unshare-net
- systemd.service has PrivateNetwork=yes
- docker has internal networks
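For a quick demonstration (example.com is just a placeholder):

# a fresh network namespace contains only a down loopback interface,
# so any outbound request fails immediately
sudo unshare -n curl https://example.com
# curl: (6) Could not resolve host: example.com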
But the trickier, and more realistic, scenario is:
- [Inbound] The process needs to listen on one or more ports, and/or
- [Outbound] The process needs to access one or more specific IP addresses/domains
I can think of a few options.
Option 1: Firewall Rules
Both iptables and nftables support filtering packets by uid and gid. So the steps are clear (a sketch follows the list):
- Run the process with a dedicated uid and/or gid
- Filter packets in the firewall
- If needed, regularly query DNS and update the allowed set of IP addresses.
- This is somewhat similar to reresolve-dns.sh from WireGuard.
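A minimal nftables sketch, assuming the process runs as user myservice and the allowed addresses live in a named set (the user name and domain are made up):

# restrict.nft: allow outbound traffic from uid myservice
# only to addresses in @allowed4, plus loopback and replies
table inet restrict {
    set allowed4 {
        type ipv4_addr
    }
    chain output {
        type filter hook output priority 0; policy accept;
        meta skuid myservice oifname "lo" accept
        meta skuid myservice ct state established,related accept
        meta skuid myservice ip daddr @allowed4 accept
        meta skuid myservice drop
    }
}

The DNS part can then be a periodic job along these lines:

# re-resolve the domain and refresh the set
nft flush set inet restrict allowed4
nft add element inet restrict allowed4 \
    "{ $(dig +short A my-domain.com | grep '^[0-9]' | paste -sd, -) }"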
This option is not very complicated, and I think the overhead is low. While the DNS part is a bit ugly, it is flexible and solves both inbound and outbound filtering.
On the other hand, it might be a bit difficult to maintain, because the constraints (firewall rules) and the processes live in different places.
Option 2: Systemd Service with Socket Activation
Recently I've been playing with the sandboxing flags in systemd, especially with systemd-analyze. Our problem can be solved with systemd + socket activation like this:
- Create my-service.socket that listens on the desired address and port
- Create my-service.service for the process, with PrivateNetwork=yes.
- The process has no network access; instead, it receives a socket from systemd, i.e. socket activation
I tested the following setup:
- my-service-proxy.socket, which activates the corresponding service
- my-service-proxy.service, which runs systemd-socket-proxyd.
- The service must have PrivateNetwork=yes and JoinsNamespaceOf=my-service.service
- my-service.service, the real process, with PrivateNetwork=yes
This way, the process can accept connections at a pre-defined address/port, but has no network access otherwise.
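For reference, here is roughly what the three units look like (the port, paths and the service command are placeholders):

# my-service-proxy.socket
[Socket]
ListenStream=0.0.0.0:8080

[Install]
WantedBy=sockets.target

# my-service-proxy.service
[Unit]
Requires=my-service.service
After=my-service.service
JoinsNamespaceOf=my-service.service

[Service]
PrivateNetwork=yes
ExecStart=/usr/lib/systemd/systemd-socket-proxyd 127.0.0.1:8080

# my-service.service
[Service]
PrivateNetwork=yes
ExecStart=/usr/bin/my-server --listen 127.0.0.1:8080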
It works for me, but with a few shortcomings:
- It only worked for system services (managed by the root systemd instance). I suspected it might work with PrivateUsers=yes, but it didn't.
- It is quite a hassle to write and maintain all these unit files.
For outbound traffic, systemd can filter by IP addresses, but I'm not sure about ports. For domain filtering, it might be possible to borrow ideas from the other two options, but I suppose it won't be easy.
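The IP filtering part is just a pair of directives (the address is a placeholder):

[Service]
# drop everything except the allowed address
IPAddressDeny=any
IPAddressAllow=203.0.113.10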
Option 3: Docker with Proxy
If the process in question is in a Docker container, inbound traffic is already handled by Docker (via iptables rules).
For outbound traffic, the firewall option also works well for IP addresses; it might actually be easier to filter packets this way.
For domains, there is another interesting solution: use a proxy. Originally I had some vague ideas about this option; then I found this article, learned a lot from it, and extended it.
To explain how it works, here's an example docker compose snippet:
networks:
  network-internal:
    internal: true
  network-proxy:
    ...
services:
  my-service:
    # needs to access https://my-domain.com
    networks:
      - network-internal
    ...
  my-proxy:
    # forwards 443 to my-domain.com:443
    networks:
      - network-internal
      - network-proxy
    ...
The idea is that my-service runs in network-internal, which has no Internet access. But my-service may access selected endpoints via my-proxy.
There are two practical problems to solve:
- Which proxy to use?
- How to make my-service talk to my-proxy?
Choosing the Proxy
In the article the author uses nginx. Originally I had thought it'd be a mess of setting up SSL (root) certificates, but later I learned that nginx can act as a stream proxy that forwards TCP/UDP ports, which makes things much easier.
On the other hand, I often use socat to forward ports as well, and it can also be used here.
Comparing both:
- socat is lighter weight: the alpine/socat Docker image is about 5MB, while the nginx Docker image is about 55MB.
- socat can be configured via command-line flags, while nginx needs a configuration file.
- socat supports only one port per instance, while nginx can manage multiple ports in one instance.
So in practice I'd use socat for one or two ports, but I'd switch to nginx for more. It'd be a hassle to create one container for each port.
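For reference, both variants for the my-domain.com example (socat as a one-liner, nginx via its stream module):

# socat: one port per process
socat TCP-LISTEN:443,fork,reuseaddr TCP:my-domain.com:443

# nginx.conf: the stream block can hold many server blocks like this one
events {}
stream {
    server {
        listen 443;
        proxy_pass my-domain.com:443;
    }
}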
Enabling the Proxy
If my-service needs to be externally accessible, the ports must be forwarded and exposed by my-proxy.
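In docker-compose.yml, this is just a ports entry on the proxy (the host port is arbitrary):

my-proxy:
  ports:
    - "8443:443"
  ...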
For outbound traffic, we want to trick my-service so that it resolves, for example, my-domain.com to my-proxy.
I'm aware of three options:
#1 That article uses links, but that option is designed for inter-container communication, and it is deprecated.
#2 Another option is to assign a static IP to my-proxy, then add an entry to extra_hosts of my-service (sketched below).
#3 Add an aliases entry for my-proxy on network-internal.
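A sketch of #2 (the subnet and address are made up):

networks:
  network-internal:
    internal: true
    ipam:
      config:
        - subnet: 172.20.0.0/24
services:
  my-proxy:
    networks:
      network-internal:
        ipv4_address: 172.20.0.10
  my-service:
    extra_hosts:
      - "my-domain.com:172.20.0.10"
    networks:
      - network-internal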
While #3 seems better, it does not just work like that: when my-proxy wants to send the real traffic to my-domain.com, it will actually send it to itself because of the alias.
To fix it, I have a very hacky solution:
networks:
  network-internal:
    internal: true
  network-proxy:
    ...
services:
  my-service:
    networks:
      - network-internal
    ...
  my-proxy1:
    # forwards 443 to my-proxy2:443
    networks:
      network-internal:
        aliases:
          - my-domain.com
      network-proxy:
    ...
  my-proxy2:
    # forwards 443 to my-domain.com:443
    networks:
      - network-proxy
    ...
In this version, my-proxy1 injects the domain alias and thus hijacks traffic from my-service. Then my-proxy1 forwards the traffic to my-proxy2, which finally forwards it to the real my-domain.com. Note that my-proxy2 can correctly resolve the domain because it is not in network-internal.
On the other hand, it might be possible to tweak the process to ignore local hosts, but I'm not aware of any easy solution.
I use #3 in practice despite it being ugly and hacky, mostly because I don't want to set up a static IP for #2.
One more note on Docker (or Docker Compose): it is also possible to specify the network used when building images, which could be handy.
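A sketch using the build.network field from the Compose spec (here with the special value none, which cuts off network access during build):

services:
  my-service:
    build:
      context: .
      # network for RUN instructions during the build;
      # "none" disables network access entirely
      network: none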
Conclusions
In practice I use option 3 with a bit of option 1.
With option 3, if I already have a Docker container/image, it's just a matter of adding a few lines to docker-compose.yml, maybe plus a short nginx.conf file.
With option 1, the main concern is that the rules may become out of sync with the processes. For example, if the environment of the process changes (e.g. uid, pid, IP address, etc.), I may need to update the firewall rules accordingly, and this could be easily missed. So I'd only set up firewall rules for stable services, plus generic rules.
Option 2 could be useful in some cases, but I don't enjoy writing the unit files, and it seems harder to extend (e.g. adding a proxy).