2022-05-06

Setting up sslh as transparent proxy for a remote container

I have an NGINX server that is publicly accessible. It has been deployed in the following manner:

  • Machine A
    • Port forwarding with socat: localhost:4443 ==> 0.0.0.0:443
  • Machine B
    • Running NGINX in a Docker container
    • Port forwarding by Docker: <container_ip>:443 ==> localhost:4443
    • Port forwarding by SSH to Machine A: localhost(B):4443 ==> localhost(A):4443

This generally works: machine A's address is published in my domain's DNS, and traffic to port 443 is forwarded to NGINX in a few hops.
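
For reference, the chain can be reproduced with commands roughly like the following. This is a sketch: the hostname machine-a, the user name, and the container name nginx are placeholders.

# Machine B: publish the container's port 443 on localhost:4443
docker run -d --name nginx -p 127.0.0.1:4443:443 nginx

# Machine B: remote-forward machine A's localhost:4443 to local port 4443
ssh -N -R 4443:localhost:4443 user@machine-a

# Machine A: listen on 0.0.0.0:443 and forward to localhost:4443
socat TCP-LISTEN:443,fork,reuseaddr TCP:localhost:4443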

However, there is a problem: NGINX never sees the real IP address of the client, so it is impossible to deploy fail2ban or other IP-address-based tools. I wanted to fix that.


Step 1: VPN

The first step is to connect machines A and B with a VPN. I suspect it would also work without one, but the iptables rules could be trickier.

WireGuard is my choice. I made a simple setup:
  • Machine A has IP: 10.0.0.2/24
  • Machine B has IP: 10.0.0.1/24
  • On both machines the interface is called wg0, and AllowedIPs for the other peer is <other_peer_ip>/32
  • wg-quick and systemd are used to manage the interface.
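
For concreteness, here is a minimal wg-quick sketch for machine B (keys, endpoint host, and port are placeholders; machine A's config is symmetric, minus the Endpoint):

# /etc/wireguard/wg0.conf on machine B
[Interface]
Address = 10.0.0.1/24
PrivateKey = <machine_b_private_key>

[Peer]
PublicKey = <machine_a_public_key>
AllowedIPs = 10.0.0.2/32
Endpoint = <machine_a_host>:51820

# Start and enable with systemd
systemctl enable --now wg-quick@wg0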

Step 2: Machine A

Configure sslh:

sslh --user sslh --transparent --listen 0.0.0.0:443 --tls 10.0.0.1:4443
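
Equivalently, these settings can live in sslh's config file. A sketch based on sslh's example.cfg (the exact file path depends on the distro):

user: "sslh";
transparent: true;
listen: ( { host: "0.0.0.0"; port: "443"; } );
protocols: (
  { name: "tls"; host: "10.0.0.1"; port: "4443"; }
);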

This way sslh will talk to Machine B through a transparent socket, i.e. one whose source address is spoofed to the real client's IP. When the reply packets come back, we need to redirect them to that transparent socket:

# Mark packets that belong to a transparent socket
iptables -t mangle -N MY-SERVER
iptables -t mangle -I PREROUTING -p tcp -m socket --transparent -j MY-SERVER
iptables -t mangle -A MY-SERVER -j MARK --set-mark 0x1
iptables -t mangle -A MY-SERVER -j ACCEPT

# Deliver marked packets locally so they reach the transparent socket
ip rule add fwmark 0x1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

Here I'm matching all packets that belong to a transparent socket, which is OK because sslh is the only program on this machine creating such traffic.
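
A quick way to sanity-check this part (expected output summarized in the comments):

ip rule show             # should include: from all fwmark 0x1 lookup 100
ip route show table 100  # should show: local default dev lo
ss -tnp | grep sslh      # connections to 10.0.0.1:4443 should have the client's IP as the local address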

Step 3: Machine B

Now machine A will start sending packets whose source address is that of the real HTTP client, not machine A itself. However, WireGuard on machine B will drop them, because the source address is not within the peer's AllowedIPs.

To unblock:

wg set wg0 peer MACHINE_A_PUB_KEY allowed-ips 10.0.0.2/32,0.0.0.0/0

Note that I cannot simply add 0.0.0.0/0 to AllowedIPs in the conf file, because wg-quick would automatically add a matching route and hijack all outgoing traffic.
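
One way to persist this, which I believe should work: a PostUp hook in the conf file, since wg-quick runs it after installing its routes and does not derive routes from it:

# In /etc/wireguard/wg0.conf on machine B
[Interface]
...
PostUp = wg set wg0 peer MACHINE_A_PUB_KEY allowed-ips 10.0.0.2/32,0.0.0.0/0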

My Linux distro and Docker already set up some good default values for forwarding traffic towards containers:
  • IP forwarding is enabled
  • -j DNAT is set to translate the destination IP address and port.
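
These defaults can be checked like so (the container address 172.17.0.2 is a made-up example):

sysctl net.ipv4.ip_forward
# net.ipv4.ip_forward = 1

iptables -t nat -S DOCKER
# -A DOCKER ! -i docker0 -p tcp -m tcp --dport 4443 -j DNAT --to-destination 172.17.0.2:443
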
Now NGINX can see the real IP addresses of clients. It will also send response traffic back to those real IPs, so I need to make sure that this traffic is routed back via machine A.

Note that if NGINX proactively initiates connections to the Internet, I still want that traffic to follow the default route on machine B. But I suppose it would also be OK to route all traffic via machine A if preferred or needed.

iptables -N MY-SERVER
# Tag incoming traffic towards NGINX
iptables -I FORWARD -i wg0 -o docker0 -p tcp -m conntrack --ctorigdst 10.0.0.1 --ctorigdstport 4443 -j MY-SERVER
iptables -A MY-SERVER -j CONNMARK --set-xmark 0x01/0x0f
iptables -A MY-SERVER -j ACCEPT
# Tag response traffic from NGINX
iptables -t mangle -I PREROUTING -i docker0 -m connmark --mark 0x01/0x0f -j CONNMARK --restore-mark --mask 0x0f

# Route all tagged traffic via wg0
ip rule add fwmark 0x1 lookup 100
ip route add 0.0.0.0/0 dev wg0 via 10.0.0.2 table 100

Now everything should work.
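
A quick end-to-end test, assuming the domain example.com and a container named nginx: connect from an outside host and confirm that the access log shows the client's real address instead of machine A's.

# From an outside host
curl -I https://example.com/

# On machine B
docker logs nginx | tail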

Notes

I mainly referred to sslh's official documentation on transparent proxying, plus a few other sources such as the Arch Wiki.

In practice, some instructions did not apply to my case:

  • I did not need to grant CAP_NET_RAW or CAP_NET_ADMIN to sslh, although this is mentioned in the sslh docs and manpage. Maybe the sslh package already handles it automatically.
  • On machine A I did not need to enable IP forwarding. This actually makes sense: sslh terminates connections on machine A and re-originates them, so no packets are forwarded between interfaces there; the forwarding happens on machine B.
  • I did not need to enable route_localnet on machine A.
