High-Performance Compute on Autopilot


This week, Autopilot announced support for the Scale-Out compute class, for both x86 and Arm architectures. The point of this compute class is to offer better single-threaded performance and improved price/performance for “scale-out” workloads: essentially, workloads that saturate the CPU and/or depend on single-threaded speed (remote compilation, for example).

To use it, add a “compute-class: Scale-Out” selector to your workloads. They can be Arm or x86, but pay attention to the available regions. Also note that you need a recent version of Autopilot (see this blog for a CLI command to create a Scale-Out-qualified cluster).
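As a rough sketch of what that looks like in practice (check the Autopilot docs for the exact label keys your cluster version expects), a Deployment can request the compute class through a nodeSelector, with an optional architecture selector to choose Arm over the x86 default. The workload name, image, and resource requests below are just placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scale-out-demo            # hypothetical example name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: scale-out-demo
  template:
    metadata:
      labels:
        app: scale-out-demo
    spec:
      nodeSelector:
        cloud.google.com/compute-class: "Scale-Out"
        # Optional: request Arm nodes instead of the x86 default.
        # kubernetes.io/arch: arm64
      containers:
      - name: app
        image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0  # example image
        resources:
          requests:
            cpu: "2"
            memory: 8Gi
```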

Inspecting the nodes with kubectl describe shows that this compute class is currently served by the T2D and T2A GCE VM types, so you can review those machine families’ docs for their performance characteristics. As for regional availability, x86 (T2D) is available in 13 regions, while Arm (T2A) is available in 3.

With this launch, Autopilot moves closer to the goal of offering 100% workload compatibility for non-administrative workloads running on GKE. We’re not building a toy Kubernetes environment here, but one that is fully featured and able to run anything you can throw at it. This wouldn’t be possible with a single, “flat” compute offering, as not all workloads share the same compute requirements. Now you have a great home for your CPU-intensive batch jobs, and (thanks to Autopilot’s support for StatefulSet with block storage) higher-performance databases.
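To illustrate that last point, here is a minimal sketch of a StatefulSet that pairs the Scale-Out compute class with block storage via a volumeClaimTemplate. The names, image, sizes, and the `standard-rwo` storage class are assumptions for the example; substitute your own database and storage class as appropriate:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db-scale-out              # hypothetical example name
spec:
  serviceName: db-scale-out
  replicas: 3
  selector:
    matchLabels:
      app: db-scale-out
  template:
    metadata:
      labels:
        app: db-scale-out
    spec:
      nodeSelector:
        cloud.google.com/compute-class: "Scale-Out"
      containers:
      - name: db
        image: postgres:15        # example image; use your database of choice
        resources:
          requests:
            cpu: "4"
            memory: 16Gi
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: standard-rwo   # assumed block-storage class; adjust for your cluster
      resources:
        requests:
          storage: 100Gi
```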