Stop Making Claude Code Guess
Claude Code is trapped in a box. Here's the escape hatch.
This is post 4 of 4 in my Claude Code series. Catch up on The Mental Model, Skills vs Slash Commands, and The Control Freak’s Guide to Agents if you missed them.
We’ve covered how to trigger Claude (slash commands), teach it (skills), and let it explore (agents). Now: how to give it access to the real world.
Because right now, Claude is trapped in a box. It can read your code. It can write your code. But it can’t see what’s on a webpage. It can’t query your database. It can’t check your production logs. It’s a very intelligent entity with no access to external reality.
Ask Claude what your database schema looks like without an MCP server, and you get this:
“Based on typical Postgres schemas, your users table probably has an id, email, and created_at column...”
Probably. Or we could just query the actual database. Revolutionary concept, I know.
MCP servers are the escape hatch. They give Claude eyes, ears, and hands outside your codebase.
What MCP servers actually are
MCP (Model Context Protocol) servers expose tools to Claude over a standardized protocol. Each server runs as a separate process and registers its capabilities — functions Claude can call when it needs to interact with something external.
Examples:
Playwright: browse the web, scrape pages, automate testing
Supabase/Postgres: query databases directly
Betterstack: pull production logs
Filesystem: access files outside your project
The “server” terminology is slightly misleading — they’re really tool providers that Claude can invoke. You configure them in .mcp.json, Claude discovers their capabilities, and suddenly it can do things it couldn’t do before.
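Configuration is a small JSON file. A minimal sketch (the mcpServers key follows the Claude Code convention; the package name here is illustrative — check each server's docs for its actual install command):

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```

Start a new session and Claude discovers the server's tools automatically.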
The critical distinction: capabilities vs instructions
This is where people get confused between MCP servers and skills. Let me make it painfully clear:
MCP servers = capabilities. They let Claude DO something it couldn’t do before. Playwright gives Claude the ability to browse. A Postgres MCP gives Claude the ability to query databases.
Skills = instructions. They tell Claude HOW to do something. A “web-scraping” skill might teach Claude efficient patterns for using Playwright.
You often want both. The MCP server provides the capability. The skill provides the expertise.
Example: I have the Supabase MCP installed (capability). I also have a skill that says “when querying user data, always filter by organization_id first for performance” (instruction). The skill makes the capability more useful, but without the capability, the skill is just a nice idea with no way to execute.
Skills can tell Claude what to do. They can’t give Claude new abilities. Good luck querying your production database using only skills.
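To make the pairing concrete, here's roughly what the instruction side looks like — a markdown file with frontmatter (the path, name, and contents are illustrative, not a canonical recipe):

```markdown
---
name: supabase-queries
description: Conventions for querying user data through the Supabase MCP
---

When querying user data:

- Filter by organization_id first for performance.
- Prefer read-only queries; never mutate production data.
- Limit exploratory queries to 100 rows.
```

Notice the skill grants no access to anything. It assumes the Supabase MCP is installed and only shapes how Claude uses it.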
The token cost nobody mentions in the marketing materials
Here’s the thing that caught me off guard: MCP server tool definitions are always in your context window.
Every MCP server you install adds its tool signatures to every conversation. Even when you’re not using it. Even when you’re doing something completely unrelated. The tool definitions are just sitting there, eating tokens.
Install 10 MCP servers because they seemed cool? All 10 are eating tokens in every session. Like subscription services you forgot you signed up for, except it’s your API bill.
This matters for:
Context window limits (you have less room for actual work)
Cost (more tokens = more money, math is cruel)
Performance (sometimes, more context = slower responses)
Be intentional. Don’t install MCP servers “just in case.” Install what you actually use. Uninstall what you don’t. Your token budget will thank you.
The patterns that waste everyone’s time
Watch any Claude Code session where someone doesn’t have the right MCP servers installed:
Asking Claude to guess what a webpage looks like (when Playwright could just open it)
Copy-pasting API responses into the chat (human middleware)
Describing database schemas in words (when Claude could just query them)
MCP servers aren’t optional extras. They’re not power-user features for people who want to show off. They’re how Claude becomes actually useful for real-world tasks instead of just being a very eloquent guesser.
If you’re regularly copy-pasting external data into Claude, you need an MCP server. Stop being the bottleneck.
My current MCP stack (minimal, intentional)
For my SaaS projects, I typically configure:
Playwright: browser automation, web scraping, testing flows
Supabase (read-only): querying my production database without leaving Claude
Betterstack: pulling logs when debugging production issues
Key point: these are project-specific, configured in .mcp.json in the repo. Not global. When I’m working on this blog, I don’t need Supabase or Betterstack — so they’re not loaded, and I’m not paying the token cost.
This is the lever most people miss. You can have different MCP stacks for different projects. A SaaS project needs database and logs. A content project might just need Playwright for research. Configure per-project, pay only for what that project actually needs.
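As a sketch, a per-project stack like this in .mcp.json might look as follows (the Supabase and Betterstack package names and env vars are illustrative — substitute whatever each server's docs actually specify):

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    },
    "supabase": {
      "command": "npx",
      "args": ["-y", "@supabase/mcp-server-supabase@latest", "--read-only"],
      "env": { "SUPABASE_ACCESS_TOKEN": "your-token-here" }
    },
    "betterstack": {
      "command": "npx",
      "args": ["-y", "betterstack-mcp-server"],
      "env": { "BETTERSTACK_API_TOKEN": "your-token-here" }
    }
  }
}
```

Three servers, scoped to the one repo that needs them. Any other project pays zero token cost for these.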
I’ve tried others and removed them. The token cost wasn’t worth it for occasional use. That fancy Notion MCP? Gone. The GitHub MCP for repo scaffolding? Turns out gh CLI works fine and doesn’t eat context.
Minimal viable MCP stack. Everything you need, nothing you don’t.
When to add an MCP server (and when not to)
Add an MCP server when:
You’re regularly copy-pasting external data into Claude
You’re asking Claude to guess at things it could just look up
The task requires interacting with systems outside your codebase
The capability would be used frequently enough to justify the token cost
Don’t add an MCP server when:
You might use it “someday” (you won’t, and it’ll eat tokens until you remember to remove it)
You’re just curious what it does (read the docs instead)
Another tool already covers the capability
A CLI tool would be faster and cheaper (often true)
The full picture: MCP servers + skills + agents
Here’s how they compose in practice. Say I’m debugging why users are seeing 500 errors:
Betterstack MCP pulls the error logs from the last hour
Supabase MCP queries the affected user records
Agent correlates the data — finds that all failing requests share a malformed organization_id
Skill reminds Claude to check the migration history when schema issues appear
Slash command triggered this whole investigation: /debug-500s
Each layer does its job. The MCP servers are the foundation — without the capability to actually access logs and data, everything else is just documentation for things you can’t do.
The pattern: MCP servers give Claude access to production reality. Skills teach Claude how to navigate that reality efficiently. Agents do the investigation autonomously. You get answers instead of guesses.
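For completeness: the trigger is just a prompt file, a custom slash command in .claude/commands/. A sketch (the exact wording is mine, adapt it to your stack):

```markdown
<!-- .claude/commands/debug-500s.md -->
Investigate the 500 errors users are reporting:

1. Pull error logs from the last hour via the Betterstack MCP.
2. Query the affected user records via the Supabase MCP.
3. Correlate the two: what do the failing requests have in common?
4. If it looks schema-related, check the migration history before proposing a fix.

Report findings with the specific log lines and records that support them.
```

One file, and the whole investigation becomes a single command.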
(There’s a deeper rabbit hole here about balancing MCP capabilities against context window costs — which MCPs to load when, how to structure project-specific configs, when to use CLI tools instead. That’s a future post.)
This completes the 4-part series:
The Mental Model (slash commands, skills, agents, MCP servers, plugins)
Skills vs Slash Commands (one works, one “works”)
Agents (stop scripting exploration)
MCP servers (stop making Claude guess)
I help technical founders develop, deploy, and market their SaaS using Claude Code. If this series saved you some confusion, that’s what I’m here for.
I’m curious:
What’s your MCP stack? Which servers do you actually use daily vs installed “just in case”?
Have you noticed the token cost creep from too many MCPs?
Any MCP servers you’d recommend that I haven’t mentioned?
Reply or comment — I read everything.

