Testing Namespace Isolation using NetworkPolicy


Previously, I presented a pattern for isolating namespaces from each other in a cluster using NetworkPolicy.

Let’s try it out!

Trying it out

To recap the pattern: by allowing just traffic to and from Pods in the same namespace, plus traffic to and from the internet (but not internal IPs), we fully isolate the namespace from the other namespaces in the cluster, as if it were in a cluster of its own.
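
The two policy manifests that implement this are applied from the autopilot-examples repo in the Setup section below and aren't reproduced in this post, but conceptually they look something like the following sketch. Treat it as an illustration of the pattern rather than the exact manifests; in particular, the private-IP ranges in the except lists are assumptions you would adjust for your own cluster.

# Sketch only: see the linked autopilot-examples repo for the real manifests.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-namespace-egress
spec:
  podSelector: {}        # apply to every Pod in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector: {}    # allow traffic to Pods in the same namespace
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0  # allow traffic to the internet...
        except:          # ...but not to internal IPs (assumed ranges)
        - 10.0.0.0/8
        - 172.16.0.0/12
        - 192.168.0.0/16
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-namespace-ingress
spec:
  podSelector: {}        # apply to every Pod in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}    # allow traffic from Pods in the same namespace
  - from:
    - ipBlock:
        cidr: 0.0.0.0/0  # allow external traffic (e.g. load balancer clients)...
        except:          # ...but not from internal IPs (assumed ranges)
        - 10.0.0.0/8
        - 172.16.0.0/12
        - 192.168.0.0/16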

Cluster requirements

This example is designed to run on GKE, using GKE Dataplane V2 (DPv2) and Cloud DNS. Both are the default for Autopilot, so the following is enough to get you a working environment:

gcloud container clusters create-auto autopilot-cluster-3 \
--region us-west1

For the node-based (Standard) mode of GKE, enable both options. On the command line, add --enable-dataplane-v2 --cluster-dns=clouddns.
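
For example, something like the following should work (the cluster name and region here are illustrative placeholders):

gcloud container clusters create standard-cluster-1 \
--region us-west1 \
--enable-dataplane-v2 \
--cluster-dns=clouddns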

To run in other environments like GKE with Calico, you will need to tweak the ingress ipBlock rules as mentioned above to block the Pod IP ranges.
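
As an excerpt (this is just the ingress rule from the sketch above, with a placeholder Pod range added to the except list; substitute your cluster's actual Pod IPv4 CIDR):

ingress:
- from:
  - ipBlock:
      cidr: 0.0.0.0/0
      except:
      - 10.8.0.0/14    # placeholder: your cluster's Pod IPv4 CIDR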

If you’re using kube-dns in either environment, you’ll need to allow DNS traffic to Pods in kube-system as well. The same is true for any other connections to DaemonSet agents.
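
A minimal sketch of such a DNS allowance, assuming the standard k8s-app: kube-dns Pod labels (verify the labels in your own cluster before relying on this):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
spec:
  podSelector: {}        # apply to every Pod in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53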

Setup

To set up, we’ll create two namespaces, each with a public (LoadBalancer) Service and an internal ClusterIP Service. Only one namespace will have the isolation NetworkPolicies applied.

Let’s test it out! First, create a namespace with a couple of simple services:

# create namespace
kubectl create namespace non-isolated
kubectl config set-context --current --namespace=non-isolated

# create 2 services
kubectl create -f https://raw.githubusercontent.com/WilliamDenniss/kubernetes-for-developers/master/Chapter07/7.2_Ingress/timeserver-deploy-dns.yaml
kubectl create -f https://raw.githubusercontent.com/WilliamDenniss/kubernetes-for-developers/master/Chapter07/7.1_InternalServices/timeserver-service.yaml
kubectl create -f https://raw.githubusercontent.com/WilliamDenniss/kubernetes-for-developers/master/Chapter07/7.1_InternalServices/robohash-deploy.yaml
kubectl create -f https://raw.githubusercontent.com/WilliamDenniss/kubernetes-for-developers/master/Chapter07/7.1_InternalServices/robohash-service.yaml

Then create another namespace, this one with our isolation NetworkPolicies applied, along with the same two services:

# create namespace
kubectl create namespace isolated
kubectl config set-context --current --namespace=isolated

# apply network policy
kubectl create -f https://raw.githubusercontent.com/WilliamDenniss/autopilot-examples/main/networkpolicy/isolate-namespace/isolate-namespace-egress.yaml
kubectl create -f https://raw.githubusercontent.com/WilliamDenniss/autopilot-examples/main/networkpolicy/isolate-namespace/isolate-namespace-ingress.yaml

# create the same 2 services
kubectl create -f https://raw.githubusercontent.com/WilliamDenniss/kubernetes-for-developers/master/Chapter07/7.2_Ingress/timeserver-deploy-dns.yaml
kubectl create -f https://raw.githubusercontent.com/WilliamDenniss/kubernetes-for-developers/master/Chapter07/7.1_InternalServices/timeserver-service.yaml
kubectl create -f https://raw.githubusercontent.com/WilliamDenniss/kubernetes-for-developers/master/Chapter07/7.1_InternalServices/robohash-deploy.yaml
kubectl create -f https://raw.githubusercontent.com/WilliamDenniss/kubernetes-for-developers/master/Chapter07/7.1_InternalServices/robohash-service.yaml

Testing connectivity

Get the service list, and note the ClusterIPs:

$ kubectl get svc --all-namespaces
NAMESPACE        NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)            AGE
default          kubernetes             ClusterIP      34.118.224.1     <none>         443/TCP            85m
gke-gmp-system   alertmanager           ClusterIP      None             <none>         9093/TCP           13d
gke-gmp-system   gmp-operator           ClusterIP      34.118.238.255   <none>         8443/TCP,443/TCP   13d
isolated         robohash-internal      ClusterIP      34.118.232.141   <none>         80/TCP             12s
isolated         timeserver             LoadBalancer   34.118.227.94    <pending>      80:30902/TCP       13s
kube-system      antrea                 ClusterIP      34.118.238.71    <none>         443/TCP            13d
kube-system      default-http-backend   NodePort       34.118.233.221   <none>         80:30416/TCP       13d
kube-system      kube-dns               ClusterIP      34.118.224.10    <none>         53/UDP,53/TCP      13d
kube-system      metrics-server         ClusterIP      34.118.232.23    <none>         443/TCP            13d
non-isolated     robohash-internal      ClusterIP      34.118.239.213   <none>         80/TCP             2m11s
non-isolated     timeserver             LoadBalancer   34.118.234.103   34.16.70.221   80:31834/TCP       3m

Now try the Pod in the isolated namespace. It should be able to access the other service in its own namespace and the internet, but not the Pods in the other namespace. To test this, I ran the following checks:

  • Attempt to curl google.com (succeeded, PASS)
  • Attempt to connect to service in another namespace (denied, PASS)
  • Attempt to connect to service in own namespace (allowed, PASS)

This is demonstrated below by curl-ing google.com (which succeeds), then trying to hit the service in the non-isolated namespace (which fails, even via its public IP), and finally hitting the robohash-internal service in the same namespace (which succeeds).

$ kubectl exec -it deploy/timeserver -n isolated -- bash
# curl "http://google.com"
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.
</BODY></HTML>
# apt-get update && apt-get install host --yes
# host timeserver.non-isolated.svc
timeserver.non-isolated.svc.cluster.local has address 34.118.234.103
# curl "http://timeserver.non-isolated.svc"
curl: (28) Failed to connect to timeserver.non-isolated.svc port 80 after 131135 ms: Couldn't connect to server
# curl "http://34.118.234.103"
curl: (28) Failed to connect to 34.118.234.103 port 80 after 131854 ms: Couldn't connect to server
# curl http://34.16.70.221
curl: (28) Failed to connect to 34.16.70.221 port 80 after 128886 ms: Couldn't connect to server
# curl "http://robohash-internal/robots.txt"
User-agent: *
Allow: /

Connecting to the external (LoadBalancer) service in the isolated namespace from your local machine should also work:

$ kubectl get svc timeserver -n isolated
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
timeserver   LoadBalancer   34.118.227.94   34.135.225.117   80:30902/TCP   23m
$ curl http://34.135.225.117
The time is 9:13 PM, UTC.%           

Now, let’s try the same thing from the non-isolated namespace. Even though this namespace has no NetworkPolicy applied, it shouldn’t be able to connect to services in the isolated namespace either.

$ kubectl exec -it deploy/timeserver -n non-isolated -- bash
# apt-get update && apt-get install host --yes
# host timeserver.isolated.svc
timeserver.isolated.svc.cluster.local has address 34.118.227.94
# curl "http://timeserver.isolated.svc"
curl: (28) Failed to connect to timeserver.isolated.svc port 80 after 129175 ms: Couldn't connect to server
# curl "http://34.118.227.94"

Even accessing the *public* IP of the service from inside the cluster is blocked, which is somewhat excessive. However, if you need to expose a service in the isolated namespace for internal use, you can simply add a rule to allow it.

Allowing traffic to a particular service in the isolated namespace

If you want to allow traffic to a particular service in the isolated namespace, you can do so by adding a new rule like the following:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-timeserver-ingress
spec:
  # apply to Pods with the following label
  podSelector:
    matchLabels:
      pod: timeserver-pod
  policyTypes:
  - Ingress
  ingress:
  - from:
    # Allow all cluster traffic
    - namespaceSelector: {}

allow-timeserver-ingress.yaml

$ kubectl create -f https://raw.githubusercontent.com/WilliamDenniss/autopilot-examples/main/networkpolicy/allow-timeserver-ingress.yaml
networkpolicy.networking.k8s.io/allow-timeserver-ingress created

$ kubectl exec -it deploy/timeserver -n non-isolated -- bash
# curl "http://timeserver.isolated"
The time is 9:35 PM, UTC
# curl "http://robohash-internal.isolated"
curl: (28) Failed to connect to robohash-internal.isolated port 80 after 130659 ms: Couldn't connect to server

Allowing traffic from the isolated namespace

Now, to allow Pods in the isolated namespace to contact an internal service in another namespace, add an egress rule like the following:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-timeserver-egress
spec:
  # Apply to all pods in this namespace
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    # allow traffic to pod: timeserver-pod in ns non-isolated
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: non-isolated
      podSelector:
        matchLabels:
          pod: timeserver-pod

allow-timeserver-egress.yaml

Note that to select a namespace by name, you use the kubernetes.io/metadata.name label, not name. You can verify this label with kubectl describe namespace.
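
For example, an illustrative check using one of the namespaces from this demo:

# list the labels on the namespace; expect to see
# kubernetes.io/metadata.name=non-isolated among them
kubectl get namespace non-isolated --show-labels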

To try this out, create the policy, then exec into the container in the isolated namespace and see if you can access the service we just allowed. It should work, while a request to another service we didn’t explicitly allow is still blocked, as shown here:

$ kubectl create -f https://raw.githubusercontent.com/WilliamDenniss/autopilot-examples/main/networkpolicy/allow-timeserver-egress.yaml

$ kubectl exec -it deploy/timeserver -n isolated -- bash
root@timeserver-8669c964f8-x7ztn:/app# curl "http://timeserver.non-isolated.svc"
The time is 9:34 PM, UTC.root@timeserver-8669c964f8-x7ztn:/app# 
# curl "http://robohash-internal.non-isolated.svc"

Hopefully you now have an understanding of how to create broad isolation rules, while adding specific rules to permit the traffic you do want to allow.

Cleanup

To delete all the resources created for this demo, run the following:

kubectl delete namespace non-isolated
kubectl delete namespace isolated

Discuss

I don’t have comments enabled on this blog, because spam, but if you want to share a comment or question, feel free to post on the site formerly known as Twitter, and tag me with @WilliamDenniss.