Develop on Kubernetes Series — Operator Dev — Understanding and Dissecting Kubebuilder

Yashvardhan Kukreja
10 min read · Oct 10, 2021

Hi everyone! Welcome to the second part of this N-part series on Kubernetes operator dev. Do check out the first part if you’d like a nice theoretical intro to what operators are and how they work.
In this part, we are going to explore how to use Kubebuilder to ease up writing an operator.
In this part, we are going to explore how to use Kubebuilder to ease up writing an operator.

So, let’s go!


Kubebuilder? What’s that?

Writing Kubernetes operators involves dealing with the Kubernetes API: creating, watching and listing objects, and so on. For that, you can make use of well-abstracted libraries like client-go and controller-runtime to perform CRUD operations on a Kubernetes cluster.

But even with those libraries, writing a fully-fledged operator from scratch ends up involving a massive amount of complexity, a steep learning curve and a lot of boilerplate code.

Hence, to save us from that hassle, there are multiple SDKs out there to help us ease up and quicken the process of writing operators. One of them is Kubebuilder.


Kubebuilder is a great SDK backed by controller-runtime which helps you write Kubernetes operators in Go with ease and speed. It deals with multiple hectic things for you: bootstrapping a massive chunk of boilerplate code in a well-organized manner, setting up a Makefile with useful make targets to build, run and deploy the operator, building CRDs, setting up the relevant Dockerfiles, RBAC, the multiple YAMLs involved in deploying your operator, and so much more.

And in this article, we’re gonna see that in action.

But before moving on!

I’d love to keep this article series as exemplified and relatable as possible. To ensure that, I’ll explain all the concepts and topics with a project-first approach: everything we learn along the way, we’ll apply and implement in the development of a toy operator, so that you folks can understand and relate to all the concepts of Kubernetes operator dev from an absolutely practical standpoint. And finally, once we’re done with this article series, we’ll end up with a fully-fledged Kubernetes operator born incrementally out of all the concepts we went through :D

So let me share with you folks which operator we are going to build and how it will look :)

PostgresWriter

The operator which we will be building throughout this article series is going to be called “PostgresWriter”.

The idea is pretty straightforward. Say we have a Postgres DB sitting somewhere in a corner of the world.

Our cluster would have a custom resource called “postgres-writer”.
And a manifest associated with the “postgres-writer” resource would look like this:
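(The concrete spec values below are purely illustrative; we’ll define the actual schema for this resource later in the series. The apiVersion and kind follow from the group, domain and kind we pass to Kubebuilder further down.)

```yaml
apiVersion: demo.yash.com/v1
kind: PostgresWriter
metadata:
  name: sample-student
  namespace: default
spec:
  table: students
  name: John
  age: 23
  country: India
```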

Whenever a manifest like this is applied/created on a Kubernetes cluster, our operator is going to capture that event and do the following three things:

  • Parse the spec of the incoming “postgres-writer” resource being created and recognize the fields table, name, age and country.
  • Form a unique id corresponding to the incoming “postgres-writer” resource in the format <namespace of the incoming postgres-writer resource>/<name of the incoming postgres-writer resource> (default/sample-student in this case).
    This works because in Kubernetes, the namespace/name combination is always unique across the cluster for a given resource kind (in our case, PostgresWriter).
  • Insert a new row into the table named by spec.table in our Postgres DB, filling in the spec.name, spec.age and spec.country fields accordingly, with the above unique id (namespace/name of the incoming resource) as the primary key.
High level flow of our operator and custom-resource in action
A slightly deeper flow of our operator in action

Also, whenever a PostgresWriter resource like the above is deleted, our operator will accordingly DELETE the row corresponding to that resource from our Postgres DB, keeping the rows of our Postgres DB and the PostgresWriter resources present on our cluster consistent with each other.

With respect to the above example, if we were to kubectl delete the sample-student PostgresWriter resource, then our operator would delete the row corresponding to the id default/sample-student as a consequence.

This will ensure that for every PostgresWriter resource in our cluster, there’s one row in our PostgresDB, nothing more, nothing less.
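The bookkeeping described above can be sketched in plain Go. Note that the helper names and the exact SQL shape here are my own illustration for this article, not code that Kubebuilder generates; the real operator logic comes later in the series:

```go
package main

import "fmt"

// resourceID forms the unique primary key for a PostgresWriter resource.
// namespace/name is unique per resource kind across the cluster.
func resourceID(namespace, name string) string {
	return namespace + "/" + name
}

// insertQuery builds the INSERT statement the operator would run on create,
// targeting the table named in spec.table.
func insertQuery(table string) string {
	return fmt.Sprintf("INSERT INTO %s (id, name, age, country) VALUES ($1, $2, $3, $4)", table)
}

// deleteQuery builds the DELETE statement run when the resource is deleted,
// so the DB stays consistent with the resources on the cluster.
func deleteQuery(table string) string {
	return fmt.Sprintf("DELETE FROM %s WHERE id = $1", table)
}

func main() {
	fmt.Println(resourceID("default", "sample-student")) // default/sample-student
	fmt.Println(insertQuery("students"))
	fmt.Println(deleteQuery("students"))
}
```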

So, let’s dive in!

Installing Kubebuilder

For setting up our project/operator, we need Kubebuilder. And for that, we need to install it (Duh!)

Please make sure you have Go installed on your computer.

Now, run the following lines in your terminal to install Kubebuilder:

# download kubebuilder and install locally.
curl -L -o kubebuilder https://go.kubebuilder.io/dl/latest/$(go env GOOS)/$(go env GOARCH)
chmod +x kubebuilder && sudo mv kubebuilder /usr/local/bin/

And boom! You have it :)

Bootstrapping our operator

Let’s first create a directory for our project/operator and step inside it.

mkdir -p postgres-writer-operator
cd postgres-writer-operator

Now comes the magical part of Kubebuilder. Run the following command to bootstrap our project.

kubebuilder init \
--domain yash.com \
--repo github.com/yashvardhan-kukreja/postgres-writer-operator \
--project-name postgres-writer-operator \
--license apache2 \
--skip-go-version-check

Feel free to change --repo and --domain as per your wish :)

This command scaffolds the basic generic files for our project and sets up the basic dependencies. Basically, it’s just some meta-information associated with your project.

But still, this is not enough because, as I mentioned in our operator’s description, our operator will be watching a custom resource called PostgresWriter. And as it is a custom resource, we’d have to define its CRD, write the equivalent Go code, attach it to our operator, yada, yada! But no worries, we’re gonna run another magical command to bootstrap that for us as well :)

kubebuilder create api \
--group demo \
--version v1 \
--kind PostgresWriter \
--resource true \
--controller true \
--namespaced true

The above command bootstraps all the required files and code associated with our PostgresWriter custom resource and attaches it to our operator, letting us easily begin writing our beloved operator.

“But Yash! What are those scary terms like group, controller, namespaced, api”

Don’t worry let me explain the command step-by-step, or rather word-by-word :P

  • kubebuilder — Well, that’s the tool we’re using to bootstrap stuff.
  • create api — This is meant to perform the operation of creating (bootstrapping) a custom resource in our project. But why call it “create api” and not “create operator”? Well, that’s because Kubernetes is natively an API-driven tool, and even by creating/deploying an operator, we’re just adding a bunch of new API endpoints to Kubernetes. That’s why we’re “creating api” in the eyes of Kubernetes.
  • --group demo --version v1 --kind PostgresWriter — Every resource type/kind in Kubernetes is uniquely identified by the combination of its group, version and kind (its GVK). The Kubernetes API organizes resources in a hierarchical manner: at the top level it defines Groups, each group has one or more versions, and each version has one or more Kinds. The GVK is meant to place and integrate a resource into the Kubernetes API in an organized manner.
    To identify the GVK of any resource in Kubernetes, just look at its apiVersion and kind fields in its YAML. For example, for the Deployment resource in Kubernetes, the apiVersion is apps/v1, which denotes that it belongs to the apps group and that the version we’re dealing with is v1, and its kind is Deployment.
    In our case, if you look at a sample YAML for our PostgresWriter resource, the group is demo.yash.com (the “group” argument + the “domain” argument in our commands), the kind is PostgresWriter and the version is v1.
  • --resource true — By this, we’re telling Kubebuilder that we are trying to build a custom resource, i.e. PostgresWriter, and that we want some boilerplate code and files to be bootstrapped around our custom resource as well.
  • --controller true — By this, we’re telling Kubebuilder that we want our operator to watch our PostgresWriter resource and act as its controller by reconciling over it. This will make Kubebuilder bootstrap some boilerplate code around the Reconcile() method (the reconciliation loop) and a method to set up our operator/controller with our project’s controller-runtime manager.
  • --namespaced true — By this, we’re telling Kubebuilder that PostgresWriter will be used as a namespaced resource, as opposed to cluster-scoped resources like ClusterRoles, ClusterRoleBindings, etc.

And with all the above arguments and info, Kubebuilder will bootstrap the base CRD, RBAC manifests, Makefile, Code around Reconciliation Loop and much more.
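To make the apiVersion → group/version mapping from the GVK discussion above concrete, here’s a tiny standalone Go helper. This is my own toy illustration, not Kubebuilder-generated code:

```go
package main

import (
	"fmt"
	"strings"
)

// splitAPIVersion breaks an apiVersion string into its group and version.
// Core-group resources (e.g. Pods, with apiVersion "v1") have an empty group.
func splitAPIVersion(apiVersion string) (group, version string) {
	parts := strings.SplitN(apiVersion, "/", 2)
	if len(parts) == 1 {
		return "", parts[0]
	}
	return parts[0], parts[1]
}

func main() {
	g, v := splitAPIVersion("demo.yash.com/v1")
	fmt.Printf("group=%s version=%s kind=PostgresWriter\n", g, v)
}
```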

Dissecting the project directory structure

Now, you might be getting freaked out by how many files were generated by the simple commands in the previous sections.

Woah!

But don’t worry, let me give you a spoiler: we’ll deal with just two files,
controllers/postgreswriter_controller.go and api/v1/postgreswriter_types.go, to write our entire operator.

Most of the other stuff is auto-generated, and you won’t need to mess around with it.

Yet, I’d still love to give you folks a super quick and concise explanation of a bunch of important files which got auto-generated in our project. That’s because, in the future, you might encounter some special cases or bugs where you need to tweak those files manually, and that’s where this knowledge will come in handy. Plus, you’ll know what you’re dealing with the next time you bootstrap an operator, which is always pretty cool :)

So, let’s dissect the important parts of the directory structure. I’ll go top-down:
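Roughly, the bootstrapped layout looks like this (trimmed to the parts worth discussing):

```
postgres-writer-operator/
├── api/v1/          # Go types defining the PostgresWriter resource
├── bin/             # binaries of tools like controller-gen and kustomize
├── config/          # CRD, RBAC and sample YAML manifests
├── controllers/     # the operator's reconciliation logic
├── hack/            # helper/"hacky" scripts
├── main.go          # entrypoint: instantiates the operator, starts the manager
├── Makefile
└── Dockerfile
```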

  • api/v1/*.go — This directory contains all the files associated with defining our PostgresWriter resource. We will edit postgreswriter_types.go and define the type PostgresWriter struct according to the structure/schema we want our PostgresWriter resource to have. As api/v1/postgreswriter_types.go alone contains the exact definition of our custom resource (via various structs), kubebuilder will generate the PostgresWriter CRD on the basis of this file alone.
  • bin/* — It just contains the binaries of some other tools like controller-gen and kustomize which will be required in bootstrapping some parts of code and deploying our operator.
  • config/* — This directory contains all the YAML-related stuff around our operator and custom resource. YAML manifests like those for roles, rolebindings, CRD, sample demo YAMLs, etc. live under this directory.
  • controllers/* — This is the place which will contain the source code of our operator, i.e. the logic/code behind its reconciliation loop; the code for watching the PostgresWriter resource and attaching our operator to the “manager” (defined later) will be programmed here.
  • main.go — This is the entrypoint of our project to run our operator. This is where our operator is instantiated, attached to our “manager” and executed.

“Manager” is the component which wraps one or more controllers/operators and registers them with a Kubernetes cluster. In our case, it wraps and registers just one operator/controller, i.e. PostgresWriter. So basically, the flow is: define the operator/controller, set it up with the manager, and execute the manager.

“Defining the operator/controller” and “defining the method to set it up with the manager” are done inside the controllers/postgreswriter_controller.go file (look at the Reconcile and SetupWithManager methods respectively).

Instantiating the operator, setting it up with the manager and executing the manager are done in main.go (look for mgr.Start towards the end).

  • hack/* — This directory is meant to store basic shell scripts or any other sort of “hacky” scripts to automate any sort of Ops around our operator. For example, scripts behind running certain checks, scripts for recursively formatting/linting our code, scripts for installing and setting up pre-requisite tools, etc. will be placed here.
  • Makefile — This file contains all the relevant make targets around building and deploying our operator and other things like bootstrapping CRDs, utility code like DeepCopy methods with controller-gen, etc.
  • Dockerfile — This file contains the Dockerfile script/instructions to package our manager (attached with our operator and instructed to run it) into a docker image which can be later deployed over a Kubernetes cluster.

That’s pretty much it!

What’s next?

In the next part, we’ll begin with programming our operator. We will be diving into controllers/postgreswriter_controller.go to understand the Go code bootstrapped in it and we will edit it to program the reconciliation loop of our operator i.e. the Reconcile() method in that file. We will also be setting up all the methods and clients around Postgres to make our operator talk with our PostgresDB. And finally, we will also execute our operator locally and see it in action for the first time. :)

End note!

Thanks a lot for making it this far. I hope you understood this article and got a nice idea of what Kubebuilder is and why it is so useful.

I’d suggest going over this article again, because I can understand if you’re still feeling baffled by so much information and jargon. But believe me, it all gets easier.

And it goes without saying that in case of any doubts/concerns around this article or life :P, feel absolutely free to reach out to me on any of my social media handles. I’ll be happy to help :)

Twitter — https://twitter.com/yashkukreja98

Github — https://github.com/yashvardhan-kukreja

LinkedIn — https://www.linkedin.com/in/yashvardhan-kukreja

Do stay tuned for the next part of this series (it’s gonna be fun, I promise :) ) and thanks again ❤

Until next time,
Adios!


Yashvardhan Kukreja

Software Engineer @ Red Hat | Masters @ University of Waterloo | Contributing to Openshift Backend and cloud-native OSS