This weekend I wanted to explore Wireguard and the option to set up a split tunnel inside my cluster to route some traffic through a different country. This proved to be a journey and a half. So let’s buckle up and dive in.
Wireguard
Wireguard is basically the current standard for VPN tooling. It’s used by a lot of zero-trust and overlay networks like Tailscale and Netmaker. It’s available in the kernel, or it can run in user space with wireguard-go, which gives it incredible deployment flexibility. For these reasons I wanted to understand it better and deploy my own.
Plan
The plan is to route public traffic from one pod through a Wireguard node deployed to a public cloud. I’m a fan of NixOS, so the VPS will be NixOS-based and deployed to Linode.
In Kubernetes I will handle the routing with a sidecar container inside the pod. My cluster uses the 10.0.0.0/8 range,
so I will leave that traffic untouched and route all other traffic into the VPN.
Sounds easy, so let’s:
- Deploy Nixos Wireguard server
- Deploy Wireguard client to the pod
- Setup split tunnel
- Profit?
Wireguard server
Netmaker uses Wireguard under the hood to create overlay networks, but it also lets you create clients that connect to the network using Wireguard directly.
And since I want to switch away from Tailscale in the near future, this is a good opportunity to try Netmaker for real use cases.
Deploying Netmaker is getting super easy on NixOS thanks to the ongoing effort of the community. The full Netmaker service module is not merged as of writing this, so I will have to use a fork of nixpkgs.
Using nixos-generators
I can create a Linode image. The Netmaker configuration is basically as follows:
services.netmaker = {
  enable = true;
  debugTools = true;
  domain = "netmaker.neurobug.com";
  email = "wexder19@gmail.com";
};
Easy so far.
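For completeness, generating the image looks roughly like this; I’m assuming the snippet above lives in configuration.nix and that nixos-generators is available on the build machine:

# Build a Linode-compatible disk image from the NixOS configuration.
# (nixos-generators can also be run ad hoc via `nix run github:nix-community/nixos-generators`.)
nixos-generate -f linode -c ./configuration.nix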
On Linode we can create a “linode” from the image. The only thing I have to change is the boot option to GRUB 2
(in Configurations -> Edit -> Boot settings -> Kernel options),
otherwise NixOS will not start.
And I am basically done with the server. Or so I thought: the internet gateway feature is not available for non-“Pro” deployments of Netmaker. I found this out after spending almost the whole weekend playing
with different Wireguard configurations. Anyway, back to the drawing board.
Wireguard server (the second time)
Let’s try it the old way and raw-dog the server configuration. This is not ideal for expanding later, especially since I cannot easily replace the server without updating every client with its new IP address, but for now it will be sufficient. Still, NixOS is a great choice here since I can see the whole server configuration in one place. Configuring a Wireguard server manually is harder than just using Netmaker, but it’s still nothing an average Linux user couldn’t handle.
I just need to generate two private keys via the wg genkey
util, one for the server and one for the client. These are used for authentication, so it’s important to keep them secret, but both my NixOS configuration and my homelab Kubernetes
configuration are public, so I will have to encrypt them. For NixOS I’m using agenix
and for Kubernetes, Sealed Secrets.
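A minimal sketch of the key generation, using the standard wg tooling:

# Generate a private key for each side and derive the matching public key.
wg genkey | tee server.key | wg pubkey > server.pub
wg genkey | tee client.key | wg pubkey > client.pub
# The private keys are then encrypted before being committed anywhere public:
# agenix on the NixOS side, a Sealed Secret on the Kubernetes side.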
For the Wireguard network I’ve chosen a subnet that will not clash with any existing network configuration. In the end, the whole configuration for NixOS is just a few lines:
networking.nat.enable = true;
networking.nat.externalInterface = "eth0";
networking.nat.internalInterfaces = [ "wg0" ];
networking.wireguard.interfaces = {
  wg0 = {
    ips = [ "172.16.0.1/12" ];
    listenPort = 51820;
    postSetup = ''
      ${pkgs.iptables}/bin/iptables -t nat -A POSTROUTING -s 172.16.0.0/12 -o eth0 -j MASQUERADE
    '';
    postShutdown = ''
      ${pkgs.iptables}/bin/iptables -t nat -D POSTROUTING -s 172.16.0.0/12 -o eth0 -j MASQUERADE
    '';
    privateKeyFile = config.age.secrets.altostratusWgPk.path;
    peers = [
      {
        publicKey = "NBn/YkXzIsGJRHXaHPURFDygzNY4MWIMRI6TDX0oJmg=";
        allowedIPs = [ "172.16.0.2/32" ];
      }
    ];
  };
};
Testing it with my local machine, everything works as expected and I can route all traffic through it. Let’s get to Kubernetes.
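This is roughly how I sanity-check it; the interface name and the external what-is-my-IP service are just examples:

# On the server: the peer should show a recent handshake and transfer counters.
wg show wg0

# On the client: all traffic should now exit via the VPS,
# so the reported address should be the Linode's public IP.
curl https://ifconfig.me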
Kubernetes
I was surprised how easy it was to deploy the Wireguard client to handle traffic from one pod. Since every pod has its own network namespace, we can just deploy a second container into the pod to alter its networking. Here’s a snippet showing how to do it with the linuxserver Wireguard container:
containers:
  - name: wireguard
    image: ghcr.io/linuxserver/wireguard
    imagePullPolicy: IfNotPresent
    securityContext:
      privileged: true
      capabilities:
        add:
          - NET_ADMIN
          - SYS_MODULE
      allowPrivilegeEscalation: true
      readOnlyRootFilesystem: false
    volumeMounts:
      - mountPath: /config/wg_confs/tunnel.conf
        subPath: wg_confs/tunnel.conf
        name: wireguard
        readOnly: false
  - name: other
    ... other container settings
The only unusual configuration is the added capabilities and the fact that the container has to be privileged. The last part to add is the Wireguard client configuration.
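The tunnel.conf mounted above has to come from somewhere; in my case that’s the Sealed Secrets setup mentioned earlier. A rough sketch of producing such a sealed secret, with placeholder names (how the resulting secret is wired into the wireguard volume is omitted here):

# Render a Secret manifest locally (without applying it) and encrypt it with kubeseal,
# so only the sealed version ends up in the public repo.
kubectl create secret generic wireguard \
  --from-file=tunnel.conf=./tunnel.conf \
  --dry-run=client -o yaml \
  | kubeseal --format yaml > wireguard-sealed.yaml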
Configuration
This was the part I struggled with the most. It might have been because of the earlier struggle with Netmaker, or maybe I’m just bad at this networking stuff.
Setting up Wireguard to allow traffic for the Wireguard subnet was easy:
[Interface]
Address = 172.16.0.2/32
PrivateKey = <client_key>
MTU = 1420
[Peer]
PublicKey = <server_public_key>
AllowedIPs = 172.16.0.0/12
Endpoint = 172.104.139.121:51820
PersistentKeepalive = 20
But as soon as I tried to split the traffic, I ran into issues: either the pod was not reachable from inside the cluster, or it had no internet access.
Luckily, after some searching around I found this excellent calculator,
which allowed me to include only parts of the network in the allowed IPs (for example, 0.0.0.0/0 minus 10.0.0.0/8 comes out to 0.0.0.0/5, 8.0.0.0/7, 11.0.0.0/8, 12.0.0.0/6, 16.0.0.0/4, 32.0.0.0/3, 64.0.0.0/2 and 128.0.0.0/1).
This still wasn’t perfect, but the site also lists an alternative approach: setting routes so that traffic that isn’t supposed to be handled by Wireguard is routed to a different interface:
PreUp = ip route add 10.0.0.0/8 via 10.244.0.209 dev eth0
PostDown = ip route del 10.0.0.0/8 via 10.244.0.209 dev eth0
The 10.244.0.209
IP address was taken from Hubble; I assume it is the egress of the cluster, but I’m not sure.
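To verify the split tunnel from inside the pod, something along these lines works; the pod name and the external IP service are placeholders:

# Cluster traffic (10.0.0.0/8) should still be routed via eth0 and the pod network.
kubectl exec my-pod -c wireguard -- ip route get 10.0.0.1
# Everything else should leave through the tunnel, so the reported public IP
# should be the VPS address rather than the cluster's egress IP.
kubectl exec my-pod -c wireguard -- curl -s https://ifconfig.me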
Recap
Wireguard is a powerful and fast VPN solution that’s easily deployable on NixOS. Kubernetes pod networking is interesting and something I want to explore more.
And in the end, even if it took longer than expected, I was able to deploy the split tunnel. I will still have to explore Linux networking more to better understand the ip route settings.
My configurations can be found on my GitHub.