Restricting Network Access of Processes

I recently read this article, which talks about proactively restricting the internet access of a process.

It is easy to completely disable internet/network access by putting a process into a new private network namespace. I think all popular sandboxing tools support it nowadays:

  • unshare -n
  • bwrap --unshare-net
  • systemd.service has PrivateNetwork=yes
  • docker has internal network
But the trickier, and more realistic, scenario is:
  • [Inbound] The process needs to listen on one or more ports, and/or
  • [Outbound] The process needs to access one or more specific IP addresses/domains
I can think of a few options.

Option 1: Firewall Rules


Both iptables and nftables support filtering packets by uid and gid. So the steps are clear:
  • Run the process with a dedicated uid and/or gid
  • Filter packets in the firewall
  • If needed, regularly query DNS and update the allowed set of IP addresses.
This option is not very complicated, and I think the overhead is low. While the DNS part is a bit ugly, it is flexible and solves both inbound and outbound filtering.
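
For example, a minimal nftables sketch of this idea might look like the following (the uid 1234, the table/set names, and the example address are placeholders; loopback/DNS access for that uid would need extra accept rules if required):

table inet restrict_myproc {
    # addresses the process is allowed to reach; refreshed externally
    set allowed_v4 {
        type ipv4_addr
    }

    chain output {
        type filter hook output priority 0; policy accept;
        # traffic from the dedicated uid: allow only the listed addresses
        meta skuid 1234 ip daddr @allowed_v4 accept
        meta skuid 1234 drop
    }
}

The DNS part could then be a small cron job that resolves the allowed domains and refreshes the set, e.g. nft add element inet restrict_myproc allowed_v4 { 192.0.2.10 }.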

On the other hand, it might be a bit difficult to maintain, because the constraints (firewall rules) and the processes live in different places.

Option 2: Systemd Service with Socket Activation


Recently I've been playing with the sandboxing options in systemd, especially with systemd-analyze. Our problem can be solved with systemd + socket activation like this:
  • Create my-service.socket that listens on the desired address and port
  • Create my-service.service for the process, with PrivateNetwork=yes.
    • The process has no access to the network; instead, it receives a socket from systemd, i.e. socket activation
However, this only works if the process supports socket activation. If not, there is a handy tool, systemd-socket-proxyd, designed exactly for this case. There are nice examples in the manual.

I tested the following setup:
  • my-service-proxy.socket, which activates the corresponding service
  • my-service-proxy.service, which runs systemd-socket-proxyd.
    • The service must have PrivateNetwork=yes and JoinsNamespaceOf=my-service.service
  • my-service.service, the real process, with PrivateNetwork=yes
This way, the process can accept connections at a pre-defined address/port, but has no network access otherwise.
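
As a sketch, the three units could look roughly like the example in the systemd-socket-proxyd manual (the port 8080, the ExecStart binaries, and the paths are placeholders and may differ by distribution):

# my-service-proxy.socket
[Socket]
ListenStream=0.0.0.0:8080

[Install]
WantedBy=sockets.target

# my-service-proxy.service
[Unit]
Requires=my-service.service
After=my-service.service
JoinsNamespaceOf=my-service.service

[Service]
ExecStart=/usr/lib/systemd/systemd-socket-proxyd 127.0.0.1:8080
PrivateNetwork=yes

# my-service.service
[Service]
ExecStart=/usr/bin/my-daemon --listen 127.0.0.1:8080
PrivateNetwork=yes

Here systemd-socket-proxyd receives the activated socket and forwards connections to 127.0.0.1:8080 inside the shared private network namespace, where the real process is listening.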

It works for me, but with a few shortcomings:
  • It only worked for system services (running under the system systemd as root). I suspected that it might work with PrivateUsers=yes, but it didn't.
  • It is quite some hassle to write and maintain all these files.
For outbound traffic, systemd can filter by IP addresses (IPAddressAllow=/IPAddressDeny=), but I'm not sure about ports. For domain filtering, it might be possible to borrow ideas from the other two options, but I suppose it won't be easy.
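
For reference, the IP filtering directives from systemd.resource-control would look something like this in the service unit (the address is a placeholder):

[Service]
IPAddressDeny=any
IPAddressAllow=192.0.2.10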

Option 3: Docker with Proxy


If the process in question is in a Docker container, inbound traffic is already handled by Docker (via iptables rules).

For outbound traffic, the firewall option also works well for IP addresses. It might actually be easier to filter packets this way, since the container has its own IP address to match on.

For domains, there is another interesting solution: use a proxy. Originally I had some vague ideas about this option, then I found this article. I learned a lot from it and I also extended it.

To explain how it works, here's an example docker compose snippet:

networks:
  network-internal:
    internal: true
  network-proxy:
    ...

services:
  my-service:
    # needs to access https://my-domain.com
    networks:
      - network-internal
    ...
  my-proxy:
    # forwards 443 to my-domain.com:443
    networks:
      - network-internal
      - network-proxy
    ...


The idea is that my-service runs in network-internal, which has no Internet access. But my-service may access selected endpoints via my-proxy.

There are two detailed problems to solve:
  • Which proxy to use?
  • How to make my-service talk to my-proxy?


Choosing the Proxy


In the article the author uses nginx. Originally I had thought it'd be a mess of setting up SSL (root) certificates. But later I learned that nginx can act as a stream proxy that forwards TCP/UDP ports, which makes things much easier.
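
For reference, a minimal stream configuration might look like this (the domain and port are placeholders):

# nginx.conf
events {}

stream {
    server {
        listen 443;
        proxy_pass my-domain.com:443;
    }
}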

On the other hand, I often use socat to forward ports as well, and it can also be used here.
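
The socat equivalent is a single command along these lines (again, the domain and port are placeholders):

socat TCP-LISTEN:443,fork,reuseaddr TCP:my-domain.com:443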

Comparing both:
  • socat is lighter weight: the alpine/socat Docker image is about 5MB, while the nginx Docker image is about 55MB.
  • socat can be configured via command line flags, while nginx needs a configuration file.
  • socat supports only one port per instance, while nginx can manage multiple ports with one instance.
So in practice I'd use socat for one or two ports, but I'd switch to nginx for more, since it'd be a hassle to create one container for each port.


Enabling the Proxy


If my-service needs to be externally accessible, the ports must be forwarded and exposed by my-proxy.
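
In docker-compose.yml this simply means publishing the ports on my-proxy instead of on my-service, e.g.:

  my-proxy:
    ports:
      - "443:443"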

For outbound traffic, we want to trick my-service so that it resolves, for example, my-domain.com to my-proxy.

I'm aware of three options:

#1 That article uses links, but that option is designed for inter-container communications, and it is deprecated.

#2 Another option is to assign a static IP to my-proxy, then add an entry to extra_hosts of my-service.

#3 Add an aliases entry for my-proxy on network-internal.

While #3 seems better, it does not just work like that: when my-proxy wants to send the real traffic to my-domain.com, it will actually send it to itself because of the alias.

To fix it, I have a very hacky solution:

networks:
  network-internal:
    internal: true
  network-proxy:
    ...

services:
  my-service:
    networks:
      - network-internal
    ...
  my-proxy1:
    # forwards 443 to my-proxy2:443
    networks:
      network-internal:
        aliases:
          - my-domain.com
      network-proxy:
    ...
  my-proxy2:
    # forwards 443 to my-domain.com:443
    networks:
      - network-proxy
    ...


In this version, my-proxy1 injects the domain and thus hijacks traffic from my-service. Then my-proxy1 forwards traffic to my-proxy2. Finally my-proxy2 forwards traffic to the real my-domain.com. Note that my-proxy2 can correctly resolve the domain because it is not in network-internal.
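
For completeness, if both proxies are socat containers, the relevant parts could look roughly like this (a sketch using the alpine/socat image mentioned earlier, which as far as I know runs socat as its entrypoint):

  my-proxy1:
    image: alpine/socat
    command: TCP-LISTEN:443,fork,reuseaddr TCP:my-proxy2:443
  my-proxy2:
    image: alpine/socat
    command: TCP-LISTEN:443,fork,reuseaddr TCP:my-domain.com:443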

On the other hand, it might be possible to tweak the proxy process to ignore the local alias, but I'm not aware of any easy solution.

I use #3 in practice even though it is ugly and hacky, mostly because I don't want to set up a static IP for #2.

One more note on Docker (and Docker Compose): it is possible to specify the network used when building images, which could be handy.
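
For example, something like this should work (a sketch; host is just one possible value):

docker build --network=host -t my-image .

or, in docker-compose.yml, via the network key under the service's build section.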

Conclusions


In practice I use option 3 with a bit of option 1. 

With option 3, if I already have a Docker container/image, it's just a matter of adding a few lines to docker-compose.yml, maybe plus a short nginx.conf file.

With option 1, the main concern is that the rules may become out of sync with the processes. For example, if the environment of the process changes (e.g. uid, pid, IP address, etc.), I may need to update the firewall rules accordingly, and this is easily missed. So I'd only set up firewall rules for stable services, and keep the rules generic.

Option 2 could be useful in some cases, but I don't enjoy writing all the service files. And it seems harder to extend (e.g. adding a proxy).
