EP 9 - Homelabbing: 5 Ways to Set Up a Self-Managed Kubernetes Cluster
If it has not happened already, I believe a time will come when your home lab serves as your portfolio and resume as a DevOps engineer. Do you have a home lab?
One important thing to check off your to-do list as a DevOps engineer is setting up your home lab. A local sandbox where you test theories and learn new concepts before applying them to a live environment can be the difference between your efficiency and your co-worker's.
Experience, after all, is a function of how many times you have been in a situation. The more you put yourself in situations where you work with containers and container orchestration, the better you get at it.
In today's article, I will focus on ways to set up a Kubernetes cluster where you can learn and play around without breaking the bank. As we know, running Kubernetes is usually expensive, but these are some of the most low-cost methods of creating a cluster.
kubeadm
kubeadm is the father of Kubernetes cluster setups, one step up in convenience from Kubernetes the Hard Way, where every component is configured manually. When you use the kubeadm tool to set up your Kubernetes cluster, you get hands-on with configuring almost everything yourself.
I would recommend kubeadm as a great way to learn: it provides a straightforward path to creating and managing your cluster by setting up the basic building blocks and components to form a single-node or a highly available production cluster.
For those who love the flexibility of manually configuring their cluster to their own taste, this is the method for you.
When to Use kubeadm:
When you desire more granular control over your installation process
Creating clusters on bare metal or cloud servers
For a better understanding of the internal processes of a Kubernetes Cluster
Setting Up with kubeadm:
Install Docker or another container runtime.
Install kubeadm, kubelet, and kubectl.
Initialize the cluster using kubeadm init.
Join worker nodes using the kubeadm join command.
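The steps above can be sketched as a short session. This is a minimal sketch, not a complete guide: it assumes an Ubuntu host with containerd already installed and the Kubernetes apt repository already configured, and the pod CIDR value here is an assumption matching Flannel's default.

```shell
# Install the node components (assumes the Kubernetes apt repo is set up).
sudo apt-get update && sudo apt-get install -y kubeadm kubelet kubectl

# On the control-plane node: bootstrap the cluster.
# The pod CIDR below matches Flannel's default; adjust for your CNI plugin.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# kubeadm init prints a join command with a fresh token, e.g.:
#   kubeadm join <control-plane-ip>:6443 --token <token> \
#       --discovery-token-ca-cert-hash sha256:<hash>
# Run that printed command on each worker node to join it to the cluster.
```

Note that after kubeadm init you still need to install a CNI network plugin before pods can be scheduled, which is exactly the kind of hands-on step that makes kubeadm such a good learning tool.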
Minikube
This tool was designed to give its users a way to create and quickly set up Kubernetes clusters, primarily for development and testing purposes. It usually runs on a local computer in a virtual machine, or sometimes using Docker or Podman. Minikube includes drivers that provide built-in configuration for running the cluster on various hypervisors or container runtimes.
It is also packaged with add-ons for quick deployments of pre-configured third-party tools like ingress controllers, monitoring tools, etc.
The main purpose behind Minikube, as I have come to realize, is to provide a simplified environment for experimenting with Kubernetes without the overhead of managing a full cluster.
When to use Minikube:
For development and quick testing of features locally
For users who want a lightweight Kubernetes setup running on their computers
Setting Up with Minikube:
Install Minikube and a hypervisor (e.g., VirtualBox, HyperKit).
Start Minikube with minikube start.
Use kubectl to interact with the cluster.
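A quick sketch of those steps, assuming Minikube is already installed and Docker is available as the driver; the CPU and memory values are arbitrary choices for a small lab machine:

```shell
# Start a single-node cluster using the Docker driver.
minikube start --driver=docker --cpus=2 --memory=4096

# Interact with it like any other cluster.
kubectl get nodes

# Enable one of the bundled add-ons, e.g. the ingress controller.
minikube addons enable ingress

# Tear the cluster down when you are done experimenting.
minikube delete
```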
KinD
Kubernetes IN Docker, or KinD as it is commonly called, provides an easy way to run a Kubernetes cluster using Docker containers as your nodes. It deploys its nodes by running a container image called kindest/node, which packages all the preconfigured components of the cluster.
KinD is another way to set up a lightweight local cluster for testing and managing containers using Docker. I personally prefer KinD to Minikube when running within Docker, but that's just me, I guess.
In my experience, it provides a more hands-on feeling of managing a Kubernetes cluster. There aren’t a lot of add-ons that would take away the management and configuration experience, therefore, you can configure your container applications to best suit you.
When to use KinD:
KinD is very similar to Minikube and should be used for testing and development environments
Setting Up with KinD
Install Docker.
Install KinD via go install sigs.k8s.io/kind@latest or brew install kind.
Create a cluster with kind create cluster.
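The steps above can be sketched as follows. This assumes Docker and KinD are already installed; the cluster name and the node layout in the config file are arbitrary choices for illustration:

```shell
# Describe a multi-node cluster in a KinD config file.
cat <<EOF > kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
EOF

# Create the cluster from the config.
kind create cluster --name homelab --config kind-config.yaml

# Each "node" is just a Docker container running the kindest/node image.
docker ps

# Delete the cluster when finished.
kind delete cluster --name homelab
```

The config-file approach is worth learning early: it is how you get multi-node topologies out of KinD instead of the default single-node cluster.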
k3s
K3s is a lightweight, certified Kubernetes distribution specifically designed and optimized for edge computing, IoT, and low-resource environments. Rancher Labs created this tool to be easy to install, with a binary that's less than 100 MB. It removes some non-essential components like legacy storage drivers, making it an ideal solution for resource-constrained environments.
With K3s, you can take virtual machines, Raspberry Pis, and bare-metal servers and deploy Kubernetes on them with a single command. It comes packaged with some useful tools for ingress, load balancing, monitoring, etc., but these can easily be disabled so you can configure your own tools and stack.
When to Use K3s
Edge computing and IoT use cases where low-resource clusters are needed.
Developers who want a simplified, lightweight Kubernetes distribution.
Users looking for Kubernetes clusters on devices like Raspberry Pi.
Setting Up with K3s
Install K3s via a one-liner: curl -sfL https://get.k3s.io | sh -
K3s automatically sets up the control plane, and worker nodes can then join it.
Use kubectl or k3s kubectl to interact with the cluster.
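A sketch of a two-node K3s setup follows; the server IP placeholder and the choice to disable the bundled Traefik ingress are assumptions for illustration:

```shell
# On the server (control-plane) node, here with Traefik disabled so you
# can bring your own ingress controller:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -

# The server writes a join token to /var/lib/rancher/k3s/server/node-token.
# On each agent (worker) node, point the same installer at the server:
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 \
    K3S_TOKEN=<token> sh -

# On the server, the bundled kubectl works out of the box:
sudo k3s kubectl get nodes
```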
K3s has become increasingly popular for edge deployments due to its minimal resource consumption while maintaining full Kubernetes functionality.
k0s
K0s is another lightweight Kubernetes distribution developed by the team at Mirantis (the team that created Lens, thank you Mirantis!). It aims to be an easy-to-use, single-binary Kubernetes distribution, just like K3s.
K0s is entirely open-source and emphasizes simplicity, security, and zero-friction deployments. K0s installs everything needed to run a Kubernetes cluster in a single binary, which reduces complexity and boosts security. It also gives you the means to disable unnecessary, preconfigured tools if you want to deploy and manage your stack.
When to Use K0s
Developers who want a single-binary, minimal Kubernetes distribution.
Deploying Kubernetes clusters where security and simplicity are priorities.
Learning Kubernetes with minimal setup.
Setting Up with K0s
Download and install K0s from the official site.
Initialize the cluster with k0s install controller.
Start the controller using k0s start.
Add worker nodes with the k0s install worker and k0s start commands.
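Those steps can be sketched as a single-node session; the commands follow the official quick start, but treat the details as assumptions to verify against the current docs:

```shell
# Download the single k0s binary.
curl -sSLf https://get.k0s.sh | sudo sh

# Install a single-node cluster: a controller that also runs workloads.
sudo k0s install controller --single
sudo k0s start

# k0s bundles kubectl as a subcommand.
sudo k0s kubectl get nodes

# To add workers instead: create a join token on the controller...
sudo k0s token create --role=worker > token-file
# ...then on each worker:
#   sudo k0s install worker --token-file token-file && sudo k0s start
```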
K0s emphasizes a hassle-free experience by wrapping the essential Kubernetes components into a single executable, which reduces the operational overhead often associated with Kubernetes management.
These five methods are great whether you are learning Kubernetes as a beginner, improving your skills as an intermediate-level engineer, or testing out new products and tools as an advanced engineer.
Homelabbing is for everybody. You can also take this a step further and use configuration management tools to manage multiple VMs, but that is a discussion for another article. Watch out for that one!
Thank you for joining me again at KubeCounty's Container and Codes newsletter. Please drop your thoughts in the comments; I would like to hear more from you guys.
Share with your friends and co-workers, and till next time, Happy Engineering!