I spent years on Kubernetes. Now I'm betting against it.
Solo devs don't need complexity. They need deploys that work.
I’ve spent years in the Kubernetes ecosystem. I wrote about K3s. I ran production clusters. I know my way around kubectl, Helm charts, and the CNCF landscape.
And I’m building a deployment tool that doesn’t use any of it.
Here’s why.
Kubernetes solves problems you don’t have
K8s is incredible engineering. It solves real problems:
Multi-team deployments without stepping on each other
Automatic failover across dozens of nodes
Fine-grained resource allocation at massive scale
Rolling updates for services with thousands of instances
If you’re Spotify, you need this. If you’re running a 50-person engineering org, you need this.
If you’re a solo dev with one FastAPI app and a Celery worker? You don’t.
As one dev put it: “Do you want to build a product, or do you want to build an infrastructure team? Kubernetes makes sense for the latter, but it’s often overkill for the former.”
You need:
git push → app is live
Rollback when you break something
Logs you can actually read
Alerts when the site goes down
That’s it. Everything else is ceremony.
The hidden cost isn’t the cluster
“But K3s is lightweight! You can run it on a $6 VPS!”
True. I’ve done it. Here’s what they don’t tell you:
A solo dev recently posted on r/kubernetes with a title that said it all: “Solo dev tired of K8s churn... What are my options?”
His pain point wasn’t learning Kubernetes. It was the maintenance:
“I don’t mind learning the topics and writing the config, I do mind having to deal with a lot of work out of nowhere just because the underlying tools are beyond my control and requiring breaking updates.”
He’d been burned by Bitnami charts pulling the rug and by breaking changes in the NGINX ingress controller. Things that worked stopped working — not because he changed anything, but because the ecosystem did.
“It all felt very straightforward, and it worked so well for a bit, but it starts to crumble even when I haven’t changed anything on my side.”
This is the hidden cost. Not the setup — the churn.
The YAML tax: Every change requires editing manifests. Add an env var? YAML. Change a port? YAML. Want a cron job? That’s a whole new CronJob resource. One team had a production outage caused by an improperly indented YAML line. A single space broke prod.
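To see how a single space can do that, here’s a generic Kubernetes-style snippet — illustrative only, not the manifest from that incident:

```yaml
# Correct: "value" is indented to the same column as "name",
# so both keys belong to the same env entry.
env:
  - name: DEBUG
    value: "false"

# One space less and "value" no longer lines up with "name",
# so it isn't part of the same mapping — the manifest fails to
# parse, or parses into something you didn't mean:
# env:
#   - name: DEBUG
#    value: "false"
```

Indentation *is* the structure in YAML, which is why a one-character diff can be the difference between a working deploy and an outage.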
The debugging tax: Something’s wrong. Is it the pod? The service? The ingress? The network policy? The PVC? Hope you remember how to read kubectl describe.
The upgrade tax: K3s made this easier, but you’re still running a distributed system. A 2024 report found over 77% of Kubernetes practitioners still have issues running their clusters — up from 66% in 2022. It’s getting harder, not easier.
The cognitive tax: Part of your brain is always allocated to “how does Kubernetes work” instead of “how do I ship features.”
As one commenter put it: “Choose your churn.” There’s always something.
The Reddit OP’s conclusion? He gave up on K8s entirely. Settled on plain NixOS on a single Hetzner VPS. Accepted that 99.9% uptime from one server is good enough. Skipped the redundancy he thought he needed.
“I am trying to write my software, I just want a reliable thing to host it with the freedom and reliability that one would expect from a system that stays out of your way.”
That’s the real ask. A system that stays out of your way.
For teams, the Kubernetes tax is worth paying. You split it across people, you build expertise, you amortize the cost.
Solo? You pay it all yourself, every time.
What actually works for solo devs
So if not Kubernetes, what?
The same Reddit OP nailed the PaaS problem too:
“These ‘managed-docker’ services charge per container/pod and force the user to over-provision. Your pod doesn’t run on 250mb RAM? Ok pay for 1GB even though you only need 500mb.”
I’ve tried everything:
Heroku (great until the bill hits)
Railway/Render (same story, nicer UX — $50-100/mo for what costs $5 on a VPS)
Dokku (solid, but showing its age)
Coolify (powerful, but now you’re babysitting another server)
K3s (overkill for most solo projects)
Raw Docker + nginx (works but tedious)
The best setup I’ve found: Kamal.
It’s from 37signals. They run Basecamp and HEY on it. It’s just Docker + SSH. No cluster, no orchestrator, no YAML manifests.
kamal deploy

That’s it. It SSHs into your server, pulls your container, and does a zero-downtime swap. Rollback is one command. Logs are one command.
It’s boring. It works.
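For a sense of scale, the entire server-side setup fits in one small config file. A minimal sketch of a Kamal deploy.yml — every name, image, and IP below is a placeholder, not from a real project:

```yaml
# config/deploy.yml — minimal Kamal setup (all values are placeholders)
service: myapp
image: youruser/myapp

# The VPS (or VPSes) Kamal will SSH into
servers:
  - 203.0.113.10

# Where the built image gets pushed and pulled from
registry:
  username: youruser
  password:
    - KAMAL_REGISTRY_PASSWORD   # read from your environment, never committed
```

From there, `kamal setup` bootstraps the server on first run and `kamal deploy` ships every run after that. Compare this to the stack of Deployment, Service, and Ingress manifests the same app would need on Kubernetes.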
My bet: AI interface > dashboards > CLI > YAML
Here’s where it gets interesting.
Kamal solved the “deploy” problem. But ops is more than deploy:
Why is the app slow right now?
What happened at 3am?
Should I upgrade my VM or optimize my code?
Show me the errors from the last hour
These questions require jumping between tools. SSH into the box, grep the logs, check Grafana, cross-reference with your deploy history.
My bet: you shouldn’t need to do any of that.
You should just ask.
“Why is memory usage spiking?” → Here’s what’s using RAM, and here’s the trend over the last week.
“Roll back to yesterday’s deploy” → Done. Here’s what changed.
“Show me errors from the /api/checkout endpoint” → Found 47 errors, here’s the pattern.
This isn’t science fiction. LLMs are good at this now. The interface just doesn’t exist yet.
What I’m building
VMKit is my attempt at this interface.
Bring your own VPS (Hetzner, DigitalOcean, whatever)
It handles Kamal, Traefik, SSL, monitoring
The interface is conversation — web chat or MCP server in Claude Code
No Kubernetes. No YAML manifests. No 47-screen dashboards.
Just say what you want.
I might be wrong. Maybe solo devs actually love clicking through Render’s UI. Maybe the Kubernetes complexity is worth it for everyone.
But I don’t think so. I think the right answer for one person running one to three apps is radically simpler than what we have today.
vmkit.dev if you want to follow along.
The uncomfortable truth
I’m not anti-Kubernetes. I’m anti-complexity-for-its-own-sake.
K8s is a tool. An incredibly powerful one. But tools have contexts where they make sense and contexts where they don’t.
Solo dev shipping a SaaS? You don’t need pod autoscaling. You need deploys that work and a way to debug when they don’t.
That’s the bet.

