Anthropic Is Losing Money on You Every Month. What Are You Shipping?
Every developer on Claude Max is being subsidized by billions in VC money. Here's why the window is open, how long it stays that way, and what to build before it closes.
I do this thing at the end of every month where I look at my Claude usage stats and feel mildly guilty.
Not guilty enough to stop, obviously. But guilty in the way you feel when you’ve been eating at a nice restaurant and you suddenly realize your friend with the expense account has been covering all of it. You’d have ordered differently if you knew that at the start.
Here’s what I know: I pay $200/month for Claude Max. Based on what I actually do with it — multi-hour Claude Code sessions, agents running in parallel, research deep-dives, content pipelines chewing through tokens like a hungry golden retriever — the API-rate equivalent of my usage is somewhere between $600 and $900. Every month.
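If you want to sanity-check that spread for yourself, here's a back-of-envelope sketch. The token volumes are my illustrative guesses for heavy agentic usage, not measured data, and the rates assume Sonnet-class API pricing (roughly $3 per million input tokens, $15 per million output) — plug in your own numbers:

```python
# Illustrative only: token volumes are assumed, not measured.
# Rates assume Sonnet-class API pricing (~$3/M input, ~$15/M output).

input_millions = 150    # assumed: agentic tools re-send large contexts constantly
output_millions = 20    # assumed monthly generated-token volume

price_in, price_out = 3.0, 15.0   # dollars per million tokens
api_equivalent = input_millions * price_in + output_millions * price_out

subscription = 200.0
print(f"API-rate equivalent: ${api_equivalent:.0f}/mo vs ${subscription:.0f} flat")
# With these assumptions: $750/mo against a $200 subscription
```

Input tokens dominate because tools like Claude Code resend large chunks of context on nearly every turn; the output side is comparatively small.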
Anthropic is losing money on me. On you. On every developer who’s turned this into a real part of how they build.
This isn’t an accident. This is the plan. And it has an expiration date.
The Hemorrhage Is Real
I was reading Sebastian Raschka’s Build a Large Language Model from Scratch last week and stumbled into a footnote that sent me down a rabbit hole. He cites Lambda Labs: it would take 355 years to train GPT-3 on a single V100 datacenter GPU. On a consumer RTX 8000: 665 years.
I know, I know — “but they use thousands of GPUs in parallel.” Yes. And those thousands of GPUs cost tens of millions of dollars for a single training run. That’s before we talk about the ongoing cost of serving that model to every user who hits the API every day. Training is the capital expenditure. Inference — every time you actually use Claude — is the operating cost. I’m talking about the second thing. Both are obscene.
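The footnote's arithmetic holds up, by the way. Using the standard ~6·N·D FLOPs estimate for transformer training and an assumed ~28 TFLOPS of sustained mixed-precision throughput on a V100 (my assumptions, not Lambda's published methodology):

```python
# Back-of-envelope: GPT-3 training time on a single V100.
# Assumptions: the common ~6 * params * tokens FLOPs estimate for
# transformer training, and ~28 TFLOPS sustained throughput per V100.

params = 175e9                       # GPT-3 parameter count
tokens = 300e9                       # training tokens
total_flops = 6 * params * tokens    # ~3.15e23 FLOPs

sustained_flops_per_sec = 28e12      # ~28 TFLOPS sustained (assumed)
seconds_per_year = 365.25 * 24 * 3600

years = total_flops / sustained_flops_per_sec / seconds_per_year
print(f"{years:.0f} years")  # ~356 — right in line with the cited 355
```

Multiply GPUs by a few thousand and divide, and you land back at weeks of wall-clock time and tens of millions of dollars per run.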
Let’s look at what’s actually happening, because the numbers are — and I say this as someone who’s seen a lot of startup math — genuinely unhinged.
OpenAI’s revenue went from $3.7 billion in 2024 to over $20 billion ARR by the end of 2025. More than a fivefold jump in under two years. Sounds like they’ve figured it out. Except their own internal projections show losses of $14 billion in 2026 — against $13 billion in revenue. The revenue explodes. The costs explode faster. Microsoft has put in $13 billion. SoftBank committed $41 billion across various tranches. A 2026 funding round valued the company at $730 billion. None of this is profit. All of it is gap-filling.
Anthropic is nearing $20 billion in annualized revenue as of early 2026 — up from $1 billion at the start of 2025. Google has put in over $3 billion in equity, plus a cloud infrastructure deal described as “tens of billions” in compute. Amazon has committed $8 billion. The Series G closed at a $380 billion valuation. These are not investments in a profitable business. These are bets on essential infrastructure, placed by people who are terrified of the alternative.
Google’s own AI division is entirely subsidized by search advertising. They watched OpenAI nearly disrupt their core business and decided that losing money on AI is preferable to losing the company. You can’t really argue with the logic. You can appreciate that the logic benefits you.
Here’s what makes this particularly strange: the more usage grows, the worse the unit economics get. OpenAI’s gross margins collapsed from roughly 40% to 33% in 2025 because inference costs quadrupled as usage scaled. They’re getting less efficient per dollar as they get bigger. The burn isn’t winding down. It’s accelerating.
They’re all playing the same game — lose money now, win the market, figure out profitability later. You’ve seen this movie. AWS subsidized startups through aggressive discounting from 2008-2015 and built the most profitable cloud business in history. Uber burned billions subsidizing rides below cost for seven years. Every streaming service ran at a loss from 2015-2022 while racing to lock in subscribers before the music stopped.
The pattern: 5-8 years of heavy subsidies. Prices normalize. The land grab ends. Survivors optimize for margin.
AI is somewhere in year 3-4 of this cycle.
Why They’re Subsidizing You Specifically
Here’s the part most people miss.
It’s not just the gym membership model — yes, light users subsidize heavy users across the subscriber base. But for developers specifically, you serve a purpose that goes way beyond the math:
You evangelize. Every blog post about Claude Code, every Hacker News comment about your workflow, every Slack recommendation to a colleague — that’s marketing no ad budget can replicate. Authentic practitioner enthusiasm is worth more than a campaign, and they get it from you for free.
You’re the top of the enterprise funnel. The conversion path goes: you try Pro, you love it, you build something real, you show your team, your team shows leadership, leadership signs a $500K enterprise contract. That single deal is worth a month of revenue from 2,500 Max subscribers. You’re not where the money is. You’re where the money comes from.
You stress-test the product. Power users find the edges. You file the bug reports casual users never hit. This feedback loop is genuinely expensive to replicate through formal QA — and you’re doing it gratis.
You build the ecosystem. Tutorials, repos, guides, courses. The content that helps a thousand other developers get value from the product? That’s unpaid work you’re doing for their platform.
You are, in the most literal sense, being paid for this in subsidized compute. It’s a trade. The question is whether you’re getting the better end of it.
(You are. Obviously. That’s the point.)
How Long Does the Window Stay Open?
Nobody knows. Anyone giving you a specific timeline is guessing, including me.
But the runway math doesn’t matter as much as the signals. Watch these:
Usage limits tightening. Already happening. “Unlimited” has gotten more creative in its definition. Rate limits appear. Fair use policies materialize. You’ve noticed.
Tier restructuring. The free tier gets worse. The basic tier gets capped. The premium tier develops features that used to be standard. The ladder shifts.
API price changes. When enterprise revenue is strong enough to sustain the business, the argument for subsidizing consumers weakens. Check the API pricing page periodically.
Enterprise-only features. When the best capabilities start requiring a sales call, the consumer product is no longer the growth driver.
My working model: 18-24 months of relatively stable economics. After that, genuine uncertainty.
The open-source wildcard could extend the window or change what “subsidized” even means. The gap between frontier models and the best open-weight models has compressed dramatically — we’re talking 6-12 months behind the frontier now, versus the 18-24 months people were citing a year ago. Running genuinely capable models locally on a Mac is already real, not theoretical. That’s a hedge against pricing pressure, but it doesn’t change the core argument. It just means the floor is higher than it was.
Either way: cheap access to frontier AI while the models keep getting dramatically better is the thing with the uncertain timeline. Don’t wait for a clear signal. By the time the signal is clear, the window is already closing.
What You Should Actually Be Building
This is where I have to resist the urge to give you a twenty-point tactical playbook. (I’m saving that for a separate post. Watch for it.)
The mental model is simple: use subsidized tools to build assets you own. Don’t just consume. Create.
For developers building SaaS, this means a few things specifically:
Ship the MVP, not the perfect version. Claude Code does 70% of the implementation and you do the system design and judgment calls. A SaaS MVP that would have taken three months solo two years ago takes a weekend now. These economics are extraordinary and they will not last forever. The price of “wait until it’s ready” is time you don’t have.
Build the content moat before everyone else does. Technical guides, deep-dives, tutorials on topics you actually know. This content ranks before your competitors get around to writing theirs. The window for content arbitrage — where AI-assisted quality beats raw human output at volume — is also temporary. The ones who started in 2025-2026 will own the long-tail traffic. The rest will write for audiences that already exist.
Develop taste. This is the skill that survives every model improvement and every price normalization. Knowing whether AI output is actually good — whether the code is maintainable, whether the architecture makes sense, whether the essay says something real — is something that cannot be automated. It gets more valuable as AI gets cheaper. Invest in it.
Build the audience. Newsletter subscribers, people who trust your recommendations, readers who show up when you publish. This is the asset that persists regardless of what happens to model pricing. You’re not renting audience from Anthropic. You own it.
The math I keep returning to: 18 months of focused effort with subsidized AI tools could produce 3-5 years of normal-pace output. The SaaS you’ve been procrastinating? You could ship three of them. The content backlog? Gone. The technical course based on your experience? Done.
That compounds. The skills sharpen. The audience grows. By the time pricing normalizes, you’ve already built the moat.
The Only Question That Matters
You pay $200/month. You’re getting $600-900/month in value. That arbitrage exists right now, today.
But the real arbitrage isn’t the monthly spread. It’s what you build during the window.
The people who win this period aren’t the ones who used Claude for the most impressive demo or the most clever prompt chain. They’re the ones who used cheap frontier AI access to build products, audiences, and content that persist after the subsidies end.
So: what are you shipping?
The clock’s running.
This post started as a rabbit hole triggered by a paragraph in Sebastian Raschka’s Build a Large Language Model from Scratch (Manning). His Substack is obviously worth following: @rasbt.

