Building GKE Autopilot

The audience at my KubeCon talk, Building a Nodeless Kubernetes Platform. Photo Credit: Kaslin Fields

Last month I gave a presentation at KubeCon Europe in Valencia on “Building a Nodeless Kubernetes Platform”. In it, I shared the details of the creation of GKE Autopilot, including some key decisions that we made, how the product was implemented, and why I believe the design leads to an ideal fully managed platform.

Building a Nodeless Kubernetes Platform

Autopilot is GKE

When we were designing Autopilot during a two-day in-person summit in early 2020, I wrote on the whiteboard each day “Serverless GKE is GKE” (after the first day it became “Auto GKE is GKE” following a change to the code name). The idea behind this was that, fundamentally, this “nodeless” product would be able to serve all the workloads that GKE could. This was a breakthrough idea at the time, because until then nodeless products had made compromises on workload compatibility, one obvious example being the lack of support for block-level storage volumes.
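To make that compatibility goal concrete, here is a minimal sketch of the kind of workload I mean: a StatefulSet whose volumeClaimTemplates requests block-backed persistent storage. The names, image, and storage size here are purely illustrative, but a manifest along these lines is exactly the sort of workload that a nodeless platform needs to run unchanged.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web          # illustrative name
spec:
  serviceName: web
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # illustrative image
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
  # Each replica gets its own PersistentVolume via this claim template;
  # on GKE the default StorageClass provisions block-backed Persistent Disks.
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```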

The biggest decision we made was to build this operationally nodeless product on top of GKE itself, nodes and all. While this seems counterintuitive, my fundamental belief is that it’s not the actual hiding or removal of nodes that made the platform “nodeless”, but rather the removal of the operational duties surrounding nodes. Building the experience out of GKE’s existing building blocks both improved the time to market and increased workload compatibility, a key goal.

So check out the presentation video for more on how we built Autopilot on GKE and retained workload compatibility, including support for StatefulSets with block-level storage, while leaving the door open for future hardware-level support.