Isolating namespaces with NetworkPolicy

Kubernetes allows for multi-tenancy within the cluster using namespaces. You can host multiple applications, each in its own namespace, which at times can feel almost like a separate cluster. If you configure RBAC, you can grant users and service accounts access to specific namespaces, and even control which objects they can modify within them.
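
As a sketch of what that RBAC scoping looks like (the team-a namespace, jane user, and role names here are all placeholders, not anything prescribed by Kubernetes):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-editor
  namespace: team-a
rules:
# Allow managing Deployments, and nothing else, in this namespace
- apiGroups: ['apps']
  resources: ['deployments']
  verbs: ['get', 'list', 'watch', 'create', 'update', 'patch', 'delete']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-editor-binding
  namespace: team-a
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: deployment-editor
  apiGroup: rbac.authorization.k8s.io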

However, by default every Pod can communicate with every other Pod, cluster-wide. This is actually a really nice design property of Kubernetes that lets different teams publish their own services, but it doesn’t give you any network isolation at all. While a team may want to allow other namespaces access to a particular internal service, they likely don’t want all their internal services exposed.

That’s where NetworkPolicy comes in. Network policies allow you to control Pod communication: you configure rules that allow traffic to particular Pods, namespaces, or IP ranges. In practice it’s a little complex to set up due to the way the defaulting works. By default, everything can access everything. But once a rule selects a Pod, that Pod is denied all traffic in the direction the rule covers, except what you explicitly permit.
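
To make that defaulting concrete, here’s a minimal sketch (the app: web and app: api labels are placeholders). Before this policy exists, the web Pods accept traffic from anywhere; once the policy selects them, they accept ingress only from the api Pods:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-allow-api-only
spec:
  # Selecting these Pods flips their ingress from allow-all to deny-by-default
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    # The only ingress now permitted: from Pods labeled app: api
    - podSelector:
        matchLabels:
          app: api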

Isolating Egress

Say you want to completely isolate a namespace, preventing its Pods from communicating outside the namespace, essentially treating the namespace as if it were its own cluster on its own isolated network, but without the cost and overhead of creating a second cluster.

There is no “deny” logic, so you can’t write a rule like “deny this Pod access to other namespaces”. Instead, you need to build a rule that governs everything the Pod can do, essentially the inverse of the desired deny rule, i.e. “allow this Pod access to its own namespace, and the internet”.

NOTE: You may see NetworkPolicy recipes that describe themselves as a “deny” policy. Just remember that there are no actual deny rules in NetworkPolicy. All such policies achieve their “deny” objective by allowing all traffic except what you wish to deny. Sometimes you may inadvertently fail to allow traffic you never intended to deny, in which case you’ll need to adjust the rule (typically by adding an extra selector).
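
For example, the well-known “default deny” recipe contains no deny rule at all. It simply selects every Pod and allows nothing:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  # Select every Pod in the namespace...
  podSelector: {}
  # ...cover both directions, and list no allow rules at all
  policyTypes:
  - Ingress
  - Egress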

We can build up this isolation by permitting the access we want. Specifically, the Pods should:

  • Be able to access the internet
  • Be able to access other Pods/services in their own namespace
  • Be able to access a specific internal resource, like a Database

Adding those 3 allow rules together, we get:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-namespace-egress
spec:
  # Apply to all pods in this namespace
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    # Allow Pod traffic to own namespace
    - podSelector: {}
    # Allow traffic to internet (but not internal network)
    - ipBlock:
        cidr: '0.0.0.0/0'
        except: ['10.0.0.0/8']
    # Allow traffic to specific cloud service
    - ipBlock:
        cidr: '10.123.182.3/32' # Cloud SQL server

isolate-namespace-egress.yaml

Since access to other namespaces, and to other IPs on the local network, isn’t granted here, it’s denied.
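
One practical caveat, worth checking in your environment: cluster DNS typically runs in the kube-system namespace, so an egress policy like this can also block DNS lookups. If it does, you can add an extra egress rule for DNS. A minimal sketch, assuming the standard k8s-app: kube-dns Pod labels used by GKE and most distributions:

# Allow DNS lookups to kube-dns in kube-system
- to:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: kube-system
    podSelector:
      matchLabels:
        k8s-app: kube-dns
  ports:
  - protocol: UDP
    port: 53
  - protocol: TCP
    port: 53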

Isolating Ingress

We can further isolate this namespace by preventing all connections into the namespace, except from other Pods within the namespace and from the internet via LoadBalancers.

Again, we need to build this rule up from what is allowed. In this case:

  • Accept connections from the internet
  • Accept connections from Pods in the same namespace

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-namespace-ingress
spec:
  # Apply to all pods in this namespace
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    # Allow Pod traffic from own namespace
    - podSelector: {}
    # Allow all non-Pod traffic
    - ipBlock:
        cidr: '0.0.0.0/0'

isolate-namespace-ingress.yaml

Unlike the egress rule, this rule does not prevent connections from the local network. That is possible to configure (see the next section), but unfortunately it will make your configuration non-portable. If you want to prevent your Pods from being accessed from the local network outside the cluster, you can follow that approach.

Notice the ipBlock rule here. Strictly speaking, 0.0.0.0/0, being the universe of all IPv4 addresses, includes the Pod IPs. So does this rule allow Pod traffic from other namespaces after all?

The answer is unfortunately not defined by Kubernetes and is implementation-dependent. I wrote this GKE doc section on the topic to document how GKE works. For DPv2 (GKE Dataplane V2), the target environment of this post, Pod traffic is not considered for ipBlock rules, so this rule won’t allow Pod traffic from other namespaces. In Calico, you’ll need to exclude the Pod IP ranges (see the next section).

Ingress Considerations

Blocking local network ingress

Recall that the egress policy above blocked egress to the local network, but the ingress rule above does not block ingress from it. You can block traffic from the local network too, with a rule like:

- ipBlock:
    cidr: '0.0.0.0/0'
    except: ['10.0.0.0/8']

However, there’s a catch. Traffic from the LoadBalancer (even traffic originating from the internet) is actually considered internal network traffic, originating from the node IP. So, by blocking the local network you also block internet traffic going via the node.

To solve that, you will need a non-portable policy that allows traffic from your Node IP ranges. If your Node IP range is 10.20.0.0/17, you could configure this like so:

# Allow internet except local network
- ipBlock:
    cidr: '0.0.0.0/0'
    except: ['10.0.0.0/8']
# Allow Node IP traffic
- ipBlock:
    cidr: '10.20.0.0/17'
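
Putting the pieces together, a complete version of this stricter (non-portable) ingress policy might look like the following sketch, where the 10.20.0.0/17 Node range is an assumption you’d replace with your own:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-namespace-ingress-strict
spec:
  # Apply to all pods in this namespace
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    # Allow Pod traffic from own namespace
    - podSelector: {}
    # Allow internet, but not the local network
    - ipBlock:
        cidr: '0.0.0.0/0'
        except: ['10.0.0.0/8']
    # Allow Node IPs, so LoadBalancer traffic (which arrives via the nodes) still works
    - ipBlock:
        cidr: '10.20.0.0/17'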

Calico Considerations

If you’re using Calico (GKE before DPv2), you need to exclude your Pod IP ranges explicitly, to prevent Pod traffic being matched by the broad CIDR rule. You can do that either by using the technique above to block the entire local network except Node IPs, or by excluding your Pod IP ranges as follows:

# Allow all internet traffic
- ipBlock:
    cidr: '0.0.0.0/0'
    # but exclude the Pod IP range
    except: ['240.10.0.0/17']

DPv2 is a clear winner here, as it classifies traffic as either Pod traffic (which you select with selectors) or network traffic (which you select with ipBlock CIDR ranges), but never both. Pod traffic that happens to have an IP within a CIDR range simply won’t match the ipBlock rule.

Testing it out

I have a complete post covering a NetworkPolicy test scenario.
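
In the meantime, here’s a minimal smoke-test sketch: run a throwaway Pod in a different namespace and try to reach a Service in the isolated one (the web Service name and team-a namespace are placeholders). If the ingress isolation is working, the request should time out:

apiVersion: v1
kind: Pod
metadata:
  name: netpol-probe
  # Run the probe from a namespace outside the isolated one
  namespace: default
spec:
  restartPolicy: Never
  containers:
  - name: probe
    image: busybox
    # Attempt a cross-namespace request; expect a timeout if isolation works
    command: ['wget', '-qO-', '-T', '5', 'http://web.team-a.svc']

Check the probe Pod’s logs and exit status to confirm whether the request got through.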

Alternative: only set ingress rules

This post covered how to isolate a specific namespace for both ingress and egress traffic. That’s useful if you want to really guard a particular namespace because it’s special in some way.

If you plan to deploy NetworkPolicies for every namespace, there’s an alternative. Creating both ingress and egress rules for each namespace can feel duplicative. If you broadly want to keep all namespaces isolated, you can instead implement only ingress rules, while allowing all egress. To achieve that, create the ingress rule above, plus one rule for each specific service you want to expose (and don’t create any egress rules, so all egress remains allowed by default). As long as every namespace has an ingress rule configured, egress within the cluster is effectively denied except for what you explicitly allow.
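
Here’s a sketch of what one of those per-service allow rules might look like, assuming the exposed Pods are labeled app: api and the consuming namespaces are labeled team: frontend (both labels are placeholders):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-from-frontend
spec:
  # Select only the Pods backing the shared service
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    # Allow traffic from any namespace labeled team: frontend
    - namespaceSelector:
        matchLabels:
          team: frontend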

Next Steps

Depending on your requirements, you may wish to pair this network isolation strategy with:

  • Pod Security Admission to control the privileges of Pods in the namespace (see the sketch after this list)
  • RBAC to control user and service account access to the namespace
  • A RuntimeClass like gVisor to further isolate the container from the host for defense in depth (i.e. GKE Sandbox)
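
Pod Security Admission, for instance, is enabled by labeling the namespace. A minimal sketch (team-a is a placeholder):

apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  labels:
    # Reject Pods that don't meet the restricted Pod Security Standard
    pod-security.kubernetes.io/enforce: restricted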

Pod Security Admission and RBAC are covered in Chapter 12 of my book Kubernetes for Developers, along with fully worked examples.

Discuss

I don’t have comments enabled on this blog, because spam, but if you want to share a comment or question, feel free to post on the site formerly known as Twitter, and tag me with @WilliamDenniss.