Kubernetes has become the default answer to "how should we deploy this?" — and that's a problem. K8s is a genuinely remarkable engineering achievement, but it's designed to solve problems that most applications don't have. Choosing it by default is like buying an articulated lorry to do your weekly grocery run.
What Kubernetes actually does
Kubernetes orchestrates containers across a cluster of machines, providing automated deployment, scaling, self-healing, service discovery, and load balancing. At Google scale, where you're running thousands of services across tens of thousands of machines, that coordination is invaluable. The question is whether your application has any of those problems.
The honest complexity cost
A minimal production Kubernetes setup requires understanding pods, Deployments, Services, ingress controllers, namespaces, RBAC, persistent volumes, ConfigMaps, Secrets, health probes, resource requests and limits, node affinity, and more. Debugging a production K8s issue requires expertise that takes months to develop. Most startups don't have that expertise, and acquiring it is expensive.
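To make the complexity cost concrete, here is roughly what "minimal" looks like: a single stateless web app, before you've touched ingress, RBAC, secrets, or storage. (This is an illustrative sketch; the app name, image, port, and probe path are all hypothetical.)

```yaml
# Deployment: tells K8s to keep 2 replicas of the container running
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                      # hypothetical app name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0  # placeholder image
          ports:
            - containerPort: 8080
          resources:              # omit these and one pod can starve the node
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              memory: 256Mi
          readinessProbe:         # omit this and traffic hits pods mid-startup
            httpGet:
              path: /healthz      # assumed health endpoint
              port: 8080
---
# Service: stable virtual IP load-balancing across the replicas
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```

And this still isn't reachable from the internet (that's an Ingress plus a controller), has no config or secrets, and no TLS. On a managed platform, the equivalent is typically one command or a ten-line config file.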
What you should probably use instead
For most applications serving up to a few million requests per day, a managed platform — Railway, Render, or Fly.io for small-to-medium apps, AWS ECS or App Runner for larger workloads — delivers roughly 90% of Kubernetes's operational benefits at perhaps 10% of the complexity. Add Kubernetes when you genuinely outgrow these, not before.
When K8s is actually the right answer
Kubernetes earns its complexity in a few situations: a genuinely complex microservice estate (10+ services), multi-team environments where services need isolated deployment pipelines, workloads with truly complex scaling requirements, or teams with dedicated platform engineering capacity. If you check at least three of those four boxes, Kubernetes makes sense. Otherwise, reach for simpler tools first.