Kubernetes the Lazy Way – Part 1

[Image: stacks of multi-colored storage containers beneath a cloudy sky. "Stacks of containers" by Copilot Designer]

Kubernetes is inescapable if you’re in the software industry. I tend to spend an inordinate amount of time trying to make kubectl tell me what’s going on with a given workload, so any practice I can get with k8s is a good thing.
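To give a flavor of the kind of spelunking I mean, here’s a rough sketch of the commands I reach for first when a workload misbehaves. The `inspect` function and its arguments are just my own placeholder wrapper, not a real kubectl subcommand; the deployment and namespace names are hypothetical.

```shell
# Placeholder helper for quick triage of a deployment.
# Usage: inspect <deployment> [namespace]
inspect() {
  local deploy="$1" ns="${2:-default}"
  kubectl -n "$ns" get deployment "$deploy" -o wide     # is it scaled and ready?
  kubectl -n "$ns" describe deployment "$deploy"        # conditions and events
  kubectl -n "$ns" get events --sort-by=.lastTimestamp  # recent cluster noise
  kubectl -n "$ns" logs "deployment/$deploy" --tail=50  # what the app itself says
}
```

Something like `inspect my-app my-namespace` covers the first few minutes of most debugging sessions for me.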

Recently, I’ve been putting more effort into building a Kubernetes cluster at home for tinkering and practice. I’m not interested in getting too deep into the work of bootstrapping a k8s cluster, since I exclusively interact with managed clusters (such as EKS and GKE) on a day-to-day basis. If you’re looking for a deep dive into setting up a cluster yourself, see Kubernetes the Hard Way. I also don’t need my home cluster to have the fault tolerance and scalability of a production Kubernetes cluster. Finally, my nodes aren’t going to be very powerful, and every bit of load on them counts. It’d be really easy for vanilla k8s to bring my wimpy cluster to a crawl before I even add real workloads to it.

I just want to be able to play with Helm charts and host some lightweight services on a bit of cheap compute that I own. Against the urging of some friends, I tried to run Kubernetes via Docker Desktop. I’m sure that works for some people, but I couldn’t get it to come up. Besides, Docker tends to drain my battery, so I try to only use it when plugged in or when I have a pressing need. I’ve also seen lots of love for kind, but that seems more focused on building and testing things for Kubernetes than on running a persistent home cluster.

All of that is to say, I’m going with k3s for this project. If you’re not familiar, k3s is a certified Kubernetes distribution built for simple, lightweight deployments. It’s a CNCF sandbox project and is harder to screw up than regular k8s. In my next few posts, I’ll cover the process from hardware selection all the way through deploying my first workloads to the cluster.
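As a preview of why “lazy” fits, the k3s quick start really is about this short. The commands below are a sketch, not something I’ve tuned yet; the `--disable traefik` flag is just an example of trimming a bundled component on a low-powered node, not a recommendation.

```shell
# Sketch of the documented k3s quick-start install; run as root on the node itself.
# (Shown as comments here since it needs a real machine and network access.)
#
#   curl -sfL https://get.k3s.io | sh -
#
# Optionally pass server flags via the installer, e.g. to skip a bundled component:
#
#   curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable traefik" sh -
#
# Afterwards, the embedded kubectl can confirm the node came up:
#
#   sudo k3s kubectl get nodes
```

One curl-pipe-sh and a single binary running everything is a big part of the appeal compared to bootstrapping each control-plane component by hand.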