EP 14 (Deep Dive) - The $106 Kubernetes Cluster: A DevOps Engineer's Guide to Running K8s on AWS Without Breaking the Bank
"How I cut my AWS Kubernetes costs by 52% - from $220 to $106 - by making strategic infrastructure decisions. A practical guide for engineers who want enterprise-grade K8s without enterprise spending"
📚 12 minute read
Quick Navigation:
Introduction (2 min)
Architecture Decisions (3 min)
Cost Breakdown & EKS Comparison (3 min)
Implementation Details (3 min)
Practical Considerations (1 min)
Introduction
The cost of running infrastructure in the cloud has become a significant strain on most organizations and engineers, particularly during these economically uncertain times. While optimizing cloud expenditures has always been important, it's become critical for individuals and small businesses who can't afford the luxury of large-scale cloud environments and expensive multi-account infrastructures.
Kubernetes has become the de facto standard for running infrastructure platforms. Its advantages are undeniable, but one major barrier to adoption is its reputation for being expensive to run.
But here's the thing: it doesn't have to be.
In this guide, I'll show you how to leverage Kubernetes without breaking the bank. It all comes down to understanding technologies and making informed decisions about what to manage yourself versus what to pay your cloud provider to manage.
Note: The strategies outlined here prioritize cost-efficiency over high availability. If your use case requires multiple nines of uptime, you might need to invest more in your cloud infrastructure. I'll cover high-availability setups in a future post.
The Architecture Decisions
Instead of jumping straight into the setup, let's first understand the key decisions that enable significant cost savings:
Control Plane Management: Self-managed K3s cluster vs. EKS
Network Access: Bastion host vs. AWS VPN
Instance Selection: Right-sized compute resources
Network Architecture: Cost-effective alternatives to managed services
Control Plane Management
The key decision in optimizing Kubernetes cost is whether to use a Managed Cluster or a Self-Managed Cluster. While AWS charges $0.10 per hour for its managed Kubernetes control plane, a self-managed cluster can avoid this cost, at the expense of requiring more effort to manage the control plane yourself.
I chose a K3s multi-node cluster on EC2 instances for my self-managed setup, which saved me $73 per month compared to using AWS’s managed Kubernetes service (EKS). My decision to manage the cluster myself was also an opportunity to learn.
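For reference, bootstrapping a K3s cluster like this takes only a couple of commands. This is a sketch based on the standard get.k3s.io installer; the placeholder IP and token are assumptions, and `--cluster-init` is what switches K3s from its default SQLite datastore to embedded etcd:

```shell
# On the control-plane node: install K3s in server mode.
# --cluster-init enables the embedded etcd datastore instead of SQLite.
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# Grab the join token from the server:
sudo cat /var/lib/rancher/k3s/server/node-token

# On each worker node: install K3s in agent mode and join the cluster.
curl -sfL https://get.k3s.io | \
  K3S_URL="https://<control-plane-private-ip>:6443" \
  K3S_TOKEN="<node-token>" sh -
```

From there, `kubectl get nodes` on the server should show all three nodes once the agents have joined.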
Network Access: Bastion vs VPN
For security, I set up a bastion host to access my private subnet, avoiding the cost of AWS’s VPN solution. The bastion runs on a t3.micro instance, which costs $7.59 per month compared to $73 per month for the VPN.
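Day to day, a bastion is just an SSH jump host. With OpenSSH that's a single flag (the usernames and addresses below are placeholders for your own):

```shell
# One hop through the bastion into a node in the private subnet:
ssh -J ec2-user@<bastion-public-ip> ec2-user@<node-private-ip>

# Or make it permanent in ~/.ssh/config:
#   Host k3s-*
#     ProxyJump ec2-user@<bastion-public-ip>
```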
Instance Selection
Kubernetes clusters need robust machines to handle multiple services and containers. I started with t3.micro for my control plane and t3.small for my worker nodes, but I soon realized they were underpowered. I upgraded to t3.medium instances, which provided the right balance of CPU and memory (2 vCPUs and 4 GB RAM). This gave me more stable performance, especially when using etcd as the data store.
Here’s the cost breakdown for my cluster:
t3.medium instances for all three nodes (control plane + 2 workers):
$0.0416 × 730 hours = $30.37/month per instance.
For three instances: $30.37 × 3 = $91.11/month.
Although this exceeded my $50 target initially, there were still savings to be found.
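If you want to sanity-check these numbers yourself (the $0.0416/hour rate assumes on-demand Linux pricing in a region like us-east-1; rates vary by region):

```shell
# Monthly cost of one t3.medium at $0.0416/hr over a 730-hour month,
# then the total for a three-node cluster.
RATE=0.0416
PER_INSTANCE=$(awk -v r="$RATE" 'BEGIN { printf "%.2f", r * 730 }')
TOTAL=$(awk -v p="$PER_INSTANCE" 'BEGIN { printf "%.2f", p * 3 }')
echo "per instance: \$$PER_INSTANCE/month"   # $30.37/month
echo "three nodes:  \$$TOTAL/month"          # $91.11/month
```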
Network Architecture
Running a Kubernetes cluster in a private subnet requires careful consideration of networking services. AWS charges for NAT Gateways and Load Balancers, which can significantly increase costs.
NAT Gateway: $0.045 × 730 hours = $32.85/month.
Load Balancer: $0.0225 × 730 hours = $16.42/month.
Instead of using these managed services, I opted for more cost-effective alternatives:
NAT Instance: I set up a t3.micro instance as a NAT, which costs $7.59 per month instead of using AWS’s NAT Gateway.
Load Balancer: I configured HAProxy on my bastion server to serve as a load balancer for my Kubernetes cluster, eliminating the need for a managed load balancer.
This setup brought the networking costs down significantly while maintaining essential functionality.
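A NAT instance is conceptually simple: enable IP forwarding, masquerade outbound traffic from the private subnet, and tell EC2 to stop dropping packets that aren't addressed to the instance itself. A sketch, assuming a 10.0.0.0/16 VPC CIDR and an `eth0` primary interface:

```shell
# On the NAT instance: forward and masquerade private-subnet traffic.
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -s 10.0.0.0/16 -o eth0 -j MASQUERADE

# EC2 drops traffic not addressed to the instance unless the
# source/destination check is disabled:
aws ec2 modify-instance-attribute \
  --instance-id <nat-instance-id> --no-source-dest-check
```

Remember to point the private subnet's route table at the NAT instance for 0.0.0.0/0.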
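The HAProxy side is equally small: a TCP frontend on the bastion forwarding to a NodePort (or ingress port) on each worker. The node IPs and port 30080 below are placeholders for whatever your services expose:

```cfg
frontend k8s_http
    bind *:80
    mode tcp
    default_backend k8s_nodes

backend k8s_nodes
    mode tcp
    balance roundrobin
    server worker-1 10.0.1.11:30080 check
    server worker-2 10.0.1.12:30080 check
```

The `check` keyword gives you basic health checking, so a downed worker is pulled out of rotation automatically.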
Equivalent EKS Setup Comparison
Let's compare our optimized setup with an equivalent EKS-managed configuration:
EKS Setup Costs (Monthly):
EKS Control Plane
EKS Cluster Management: $73.00
Core Infrastructure
3 × t3.medium worker nodes: $91.11
Bastion host (t3.micro): $7.59
Network Infrastructure
NAT Gateway: $32.85
AWS Application Load Balancer: $16.42
Total EKS Monthly Cost: $220.97
Key Differences:
Control Plane: EKS ($73.00) vs. Self-managed ($0)
NAT: AWS NAT Gateway ($32.85) vs. NAT Instance ($7.59)
Load Balancer: AWS ALB ($16.42) vs. HAProxy on Bastion ($0)
Cost Differential: $114.68 (52% savings)
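The differential is straightforward to verify from the two monthly totals above:

```shell
# EKS setup vs. self-managed setup, both from the breakdowns above.
EKS_TOTAL=220.97     # control plane + nodes + bastion + NAT Gateway + ALB
SELF_TOTAL=106.29    # nodes + bastion + NAT instance (HAProxy rides on bastion)
awk -v e="$EKS_TOTAL" -v s="$SELF_TOTAL" \
  'BEGIN { d = e - s; printf "saved $%.2f/month (%.0f%%)\n", d, d / e * 100 }'
# saved $114.68/month (52%)
```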
What You Get with EKS:
Automated control plane management
Automatic security patches and updates
AWS support and SLAs
Simplified cluster operations
Better integration with AWS services
Enhanced high availability
What You Handle Yourself in Our Setup:
Control plane management
Kubernetes version updates
Node maintenance and updates
High availability configuration
Backup and disaster recovery
Security patches
This comparison shows that while our optimized setup saves over 50% in costs, it does require more operational oversight and technical expertise. The choice between these setups should be based on your team's capabilities, business requirements, and the criticality of your workloads.
Cost Breakdown and Analysis
Let's summarize all our cost-saving decisions and their financial impact:
Cluster Management
Opted for self-managed K3s cluster instead of EKS
Monthly savings: $73.00
Core Infrastructure
3 × t3.medium instances for K3s nodes: $91.11
Bastion host (t3.micro): $7.59
Network Infrastructure
NAT Instance (t3.micro) instead of NAT Gateway: $7.59
HAProxy as LoadBalancer instead of AWS ALB/NLB: $0 (running on existing bastion)
Total Monthly Cost: $106.29
Cost Optimization Strategy Results
By implementing these cost-saving measures, we achieved significant savings:
Traditional Setup vs. Optimized Setup (Monthly):
EKS Control Plane: $73.00 vs. $0
NAT Gateway: $32.85 vs. $7.59
Load Balancer: $16.42 vs. $0 (utilizing existing bastion)
Total Monthly Savings: $114.68
By strategically shutting down my non-production environment when not in use, I reduced my instance runtime by about 60%, bringing my cost down to approximately $45.55/month.
That’s a 79.39% decrease in cloud cost compared to the $220.97 EKS baseline.
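For transparency, here's the arithmetic behind the $45.55 and 79.39% figures. It assumes the runtime reduction applies to all three EC2 line items and that the headline decrease is measured against the $220.97 EKS baseline; back-solving from $45.55 puts the actual uptime at roughly 43% (i.e. about a 57% reduction, in line with the "about 60%" above):

```shell
FULL=106.29   # self-managed setup running 24/7
EKS=220.97    # equivalent EKS setup running 24/7
UPTIME=0.4285 # fraction of the month the instances stay up (assumed)
awk -v f="$FULL" -v e="$EKS" -v u="$UPTIME" \
  'BEGIN { c = f * u; printf "part-time cost: $%.2f (%.2f%% below EKS)\n", c, (1 - c / e) * 100 }'
# part-time cost: $45.55 (79.39% below EKS)
```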
Conclusion
Running a Kubernetes cluster doesn't have to break the bank. Through careful architectural decisions and a willingness to manage certain components ourselves, we've created a functional Kubernetes environment that costs significantly less than a typical managed setup. Here's what we've learned:
Strategic Trade-offs: By choosing self-managed solutions over managed services, we traded convenience for cost savings. While this requires more expertise and hands-on management, it provides valuable learning opportunities and substantial cost reductions.
Resource Right-sizing: Starting with smaller instances and scaling up only when necessary helped us find the sweet spot between performance and cost. Our final setup with t3.medium instances provides adequate resources for a development or small production environment.
Creative Solutions: Using existing infrastructure creatively, like running HAProxy on our bastion host, helped eliminate redundant costs without compromising functionality.
However, it's important to note that this setup might not be suitable for everyone. Consider this approach if:
You're comfortable managing infrastructure components
Your application doesn't require extremely high availability
You're operating on a limited budget
You're running a development environment or small-scale production workload
For larger organizations or critical production workloads, the managed services' cost might be justified by their reliability, support, and reduced operational overhead.
Remember: The goal isn't just to minimize costs but to find the right balance between expenditure and operational requirements. This setup demonstrates that with careful planning and some technical expertise, you can run a functional Kubernetes cluster while keeping costs under control.
Practical Considerations
Before implementing this setup, consider these important factors:
Operational Overhead: This setup requires more hands-on management and technical expertise
Scaling Considerations: While cost-effective for small to medium workloads, you may need to reassess as you scale
Security Implications: Self-managed components require vigilant security maintenance
Backup and Recovery: You'll need to implement your own backup and disaster recovery strategies
7 Days of Kube Boot Camp
Our bootcamp, where we'll implement this strategy, begins this week! Over the course of the bootcamp, we will:
Build this exact infrastructure from scratch
Deep dive into K3s configuration and management
Learn best practices for self-managed Kubernetes components
Implement monitoring and security best practices
You can join the upcoming bootcamp, where we'll dive into more practical applications of Kubernetes and Cloud-Native technology!
Did you find this helpful? Share it with a fellow engineer who's looking to optimize their cloud costs. Have questions or suggestions? Hit reply – I read every response.