Autopilot offers a slightly more abstract view of compute infrastructure than GKE's Standard mode. The idea is to focus more on the different use cases than on the minutiae of individual machine configurations. However, if you're coming from GKE Standard or GCE and know those machine types well, you might want to understand the mapping between the two.
Here’s a quick reference for the resulting machine type from an Autopilot nodeSelector.
CPU Types
Sources:
- Compute classes in Autopilot
- Choose a minimum CPU platform
- Run fault-tolerant workloads at lower costs in Spot Pods
| Machine | Selectors |
| --- | --- |
| E2 | none (default) |
| T2D | `cloud.google.com/compute-class: Scale-Out` |
| T2A | `cloud.google.com/compute-class: Scale-Out`, `kubernetes.io/arch: arm64` |
| N2 or N2D (either may be chosen) | `cloud.google.com/compute-class: Balanced` |
| N2 | `cloud.google.com/compute-class: Balanced`, `supported-cpu-platform.cloud.google.com/Intel_Ice_Lake: "true"` |
| N2D | `cloud.google.com/compute-class: Balanced`, `supported-cpu-platform.cloud.google.com/AMD_Rome: "true"` |
Each of these machines can optionally be requested as Spot by adding an additional node selector, `cloud.google.com/gke-spot: "true"`. For example:

| T2D Spot | `cloud.google.com/compute-class: Scale-Out`, `cloud.google.com/gke-spot: "true"` |
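In a Pod template, the Spot variant is just the compute-class selector plus the Spot selector side by side. A minimal sketch:

```yaml
# Pod template snippet: Scale-Out compute class on Spot capacity
spec:
  nodeSelector:
    cloud.google.com/compute-class: Scale-Out
    cloud.google.com/gke-spot: "true"
```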
Note: the machine type that a compute class resolves to may change over time, in a backwards-compatible way.
These selectors go into the Pod spec. Here's an example for an N2D node with the Rome (or better) CPU platform:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n2d
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: n2d-pod
  template:
    metadata:
      labels:
        pod: n2d-pod
    spec:
      nodeSelector:
        cloud.google.com/compute-class: Balanced
        supported-cpu-platform.cloud.google.com/AMD_Rome: "true"
      containers:
      - name: timeserver-container
        image: docker.io/wdenniss/timeserver:1
        resources:
          requests:
            cpu: "4"
```
I’ve put a complete set of examples on GitHub.
You can see which machine type was assigned to a particular Pod by describing the node it runs on and looking for the `node.kubernetes.io/instance-type` label. For example:
```shell
$ kubectl describe node | grep "node.kubernetes.io/instance-type"
node.kubernetes.io/instance-type=e2-standard-2
```
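If the cluster has more than a handful of nodes, it can be quicker to list them with the label shown as a column (using kubectl's standard `-L`/`--label-columns` flag) rather than grepping through `describe` output:

```shell
# Show every node's machine type as an extra column
kubectl get nodes -L node.kubernetes.io/instance-type
```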
GPU Types
Sources:
| Machine | Selectors |
| --- | --- |
| N1 with T4 | `cloud.google.com/gke-accelerator: nvidia-tesla-t4` |
| A2 with A100 40GB | `cloud.google.com/gke-accelerator: nvidia-tesla-a100` |
GPU Pods can also be mixed with Spot. See TensorFlow on GKE Autopilot with GPU acceleration for a complete example.
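As a sketch of how the GPU selector fits together with a Spot selector: in addition to the `gke-accelerator` node selector, the container requests the GPU via the standard `nvidia.com/gpu` resource name (the Pod name and image here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  nodeSelector:
    cloud.google.com/gke-accelerator: nvidia-tesla-t4
    cloud.google.com/gke-spot: "true"   # optional: run on Spot capacity
  containers:
  - name: gpu-container
    image: nvidia/cuda:11.8.0-base-ubuntu22.04
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: "1"   # number of GPUs to attach
```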