GKE Autopilot clusters are deployed using the regional cluster architecture. This has a number of advantages, such as a control plane replicated across three zones for high availability, and the ability to spread Pods among zones for high availability of your workloads. But sometimes that's more than you need, and a zonal placement pattern works better.
Update: there is now an official doc on this topic.
If you're dealing with zonal resources, for example a zonal persistent disk that you want to mount into a Pod, you'll want to target that disk's zone explicitly.
Fortunately this is easy to achieve using node selectors. Remember that while Autopilot removes the need to manage nodes, it still supports most of the Kubernetes API, including node selectors and node affinity. To target the us-west1-a zone in a cluster in the us-west1 region, you can add the following node affinity rule:
apiVersion: v1
kind: Pod
metadata:
  name: pluscode-demo
  labels:
    app: pluscode
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - us-west1-a
  containers:
  - name: pluscode-container
    image: wdenniss/pluscode-demo:latest
For single zones, the shorthand nodeSelector also works:
apiVersion: v1
kind: Pod
metadata:
  name: pluscode-demo
  labels:
    app: pluscode
spec:
  nodeSelector:
    topology.kubernetes.io/zone: "us-west1-a"
  containers:
  - name: pluscode-container
    image: wdenniss/pluscode-demo:latest
If you want to target multiple zones, that's easy as well, thanks to the nodeAffinity In operator: simply list the additional zones under values. Here it is as a Deployment targeting us-west1-a and us-west1-b in the us-west1 region, so you can see the effect when you create a few replicas.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pluscode-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: pluscode
  template:
    metadata:
      labels:
        app: pluscode
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                - us-west1-a
                - us-west1-b
      containers:
      - name: pluscode-container
        image: docker.io/wdenniss/pluscode-demo:latest
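If you also want the replicas spread evenly across those zones, rather than potentially all landing in one, a topology spread constraint can sit alongside the affinity in the Pod template. This isn't part of the example above, just a minimal sketch of the extra field, added under spec.template.spec of the Deployment and reusing its app: pluscode label:

      # Spread replicas evenly across the allowed zones
      topologySpreadConstraints:
      - maxSkew: 1                                  # tolerate at most a one-Pod imbalance between zones
        topologyKey: topology.kubernetes.io/zone    # balance by zone
        whenUnsatisfiable: DoNotSchedule            # hard rule; ScheduleAnyway would make it a soft preference
        labelSelector:
          matchLabels:
            app: pluscode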
For a real-world example, let's say you have a disk that exists in a specific zone, and want to create a Pod in that same zone. In this example, a Pod running MariaDB will be created in the us-west1-a zone, and will mount a GCE persistent disk named mariadb-disk from that same zone. Note that the Autopilot cluster itself must be created in the region that includes the desired zone, in this case us-west1, otherwise the zone won't be available and the Pod will remain in the Pending state.
apiVersion: v1
kind: Pod
metadata:
  name: mariadb-demo
  labels:
    app: mariadb
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - us-west1-a
  containers:
  - name: mariadb-container
    image: mariadb:latest
    volumeMounts:
    - mountPath: /var/lib/mysql
      name: mariadb-volume
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "your database password"
  volumes:
  - name: mariadb-volume
    gcePersistentDisk:
      pdName: mariadb-disk
      fsType: ext4
That example mounts the volume directly, which is platform-dependent. When using PersistentVolumes, which is generally preferred, you'll also need to specify the zonal requirement in the PersistentVolume resource.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-volume
spec:
  storageClassName: ""
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: mariadb-disk
    fsType: ext4
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - us-west1-a
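To consume that PersistentVolume from the MariaDB Pod, you'd go through a PersistentVolumeClaim and replace the Pod's gcePersistentDisk volume with a persistentVolumeClaim reference, keeping the same zone affinity on the Pod. Here's a minimal sketch; the claim name mariadb-claim is illustrative:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-claim          # illustrative name
spec:
  storageClassName: ""         # match the PV's empty storageClassName for static binding
  volumeName: test-volume      # bind directly to the PersistentVolume above
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

The volumes section of the Pod then becomes:

  volumes:
  - name: mariadb-volume
    persistentVolumeClaim:
      claimName: mariadb-claim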