MCP Killed the Custom Integration. Here's How.
We used to write separate integrations for Claude, Codex, and Cursor. Then we built one MCP server. It works with all of them. And anything that speaks the protocol tomorrow.
Six months ago, if you wanted your developer tool to work with multiple AI agents, you wrote multiple integrations. One for Claude's function-calling format. One for OpenAI's tool schema. One for Cursor's custom API. Different SDKs, different schemas, different bugs.
We did this for CodeCortex. We had 14 tools (search symbols, get dependencies, read decisions, etc.) and maintaining three separate integration layers was a nightmare. Every time we added a tool, we updated three codebases. Every time an AI provider changed their schema, one of our integrations broke.
Then we rebuilt everything as a single MCP server. One codebase. One protocol. Works with every MCP-compatible agent. Here's what changed.
The before: three integrations, three problems
Our original architecture had CodeCortex's core logic (symbol lookup, graph queries, temporal analysis) wrapped in three adapter layers:
```
[CodeCortex Core]
        |
        ├── Claude adapter (Anthropic tool format)
        ├── OpenAI adapter (function calling schema)
        └── Cursor adapter (custom integration)
```

Each adapter translated our internal API into the agent's expected format. Each had its own schema definitions, response formatting, and error handling. Each broke independently when providers updated their APIs.
The maintenance burden multiplied: 14 tools × 3 adapters = 42 integration points. Adding tool #15 meant updating all three adapters.
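To make the duplication concrete, here's a sketch of what the translation layer looked like. The internal tool format and function names are hypothetical (not CodeCortex's actual code), but the two target dialects are real: Anthropic tools use an `input_schema` key, while OpenAI function calling nests a `parameters` schema under `function`.

```python
# Hypothetical internal tool definition; the shape is illustrative.
INTERNAL_TOOL = {
    "name": "lookup_symbol",
    "description": "Find functions, classes, or types by name.",
    "schema": {
        "type": "object",
        "properties": {"name": {"type": "string"}},
        "required": ["name"],
    },
}

def to_anthropic(tool):
    # Anthropic's tool format: flat object with "input_schema".
    return {
        "name": tool["name"],
        "description": tool["description"],
        "input_schema": tool["schema"],
    }

def to_openai(tool):
    # OpenAI's function-calling format: nested under "function",
    # with the schema renamed to "parameters".
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool["description"],
            "parameters": tool["schema"],
        },
    }
```

Every new tool touches every translator, and every provider schema change breaks exactly one of them, somewhere.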
The after: one MCP server
```
[CodeCortex Core]
        |
  MCP Server (stdio)
        |
        ├── Claude Code
        ├── Cursor
        ├── Codex
        ├── Windsurf
        ├── Zed
        └── (anything that speaks MCP)
```

One server. One protocol. The MCP server declares its 14 tools in a standard schema. Any MCP-compatible client discovers them through a handshake and calls them with structured JSON arguments. We don't know or care which agent is on the other end.
Adding tool #15 means updating one file. Every client gets it automatically.
What MCP actually does
MCP (Model Context Protocol) is a JSON-RPC protocol that standardizes how AI agents discover and use external tools. The flow:
1. **Discovery**: Client connects and asks "what tools do you have?"
2. **Schema**: Server responds with tool names, descriptions, and input schemas
3. **Invocation**: Client calls a tool with structured arguments
4. **Response**: Server returns structured data
The transport is either stdio (for local tools) or HTTP+SSE (for remote ones). CodeCortex uses stdio because it runs locally alongside your project.
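Under the hood this is plain JSON-RPC 2.0, newline-delimited over stdin/stdout. As a rough sketch (not the official MCP SDK, and omitting the `initialize` handshake and error handling a real server needs), a stdio server is little more than a dispatch loop over `tools/list` and `tools/call`:

```python
import json
import sys

# A minimal tool registry; a real server would declare full schemas.
TOOLS = [
    {"name": "lookup_symbol", "description": "Find symbols by name",
     "inputSchema": {"type": "object"}},
]

def handle(request):
    # Dispatch one JSON-RPC request to a result payload.
    method = request["method"]
    if method == "tools/list":
        result = {"tools": TOOLS}
    elif method == "tools/call":
        name = request["params"]["name"]
        args = request["params"].get("arguments", {})
        # In a real server, this is where core logic runs.
        result = {"content": [{"type": "text",
                               "text": f"called {name} with {args}"}]}
    else:
        result = {}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

def serve():
    # stdio transport: one JSON-RPC message per line.
    for line in sys.stdin:
        print(json.dumps(handle(json.loads(line))), flush=True)
```

The point is the small surface area: two methods cover discovery and invocation, regardless of which agent sits on the other end of the pipe.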
Here's what a tool declaration looks like:
```json
{
  "name": "lookup_symbol",
  "description": "Find functions, classes, or types by name. Returns file location, signature, and relationships.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "name": { "type": "string", "description": "Symbol name or pattern" },
      "kind": { "type": "string", "enum": ["function", "class", "type", "interface", "method"] },
      "file": { "type": "string", "description": "Filter to specific file" }
    },
    "required": ["name"]
  }
}
```
Every MCP client reads this schema identically. The agent knows the tool exists, what arguments it accepts, and what it does — from the schema alone. No documentation page, no SDK, no README needed.
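Because the schema is plain JSON Schema, arguments can be checked before a call ever runs. A minimal validator for the declaration above, covering only required keys and enum membership (real clients use a full JSON Schema library), might look like:

```python
# The lookup_symbol schema from the declaration above.
LOOKUP_SYMBOL_SCHEMA = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "kind": {"type": "string",
                 "enum": ["function", "class", "type", "interface", "method"]},
        "file": {"type": "string"},
    },
    "required": ["name"],
}

def validate(args, schema):
    # Sketch: checks required keys and enums, not full JSON Schema.
    errors = []
    for key in schema.get("required", []):
        if key not in args:
            errors.append(f"missing required argument: {key}")
    for key, value in args.items():
        spec = schema["properties"].get(key)
        if spec is None:
            errors.append(f"unknown argument: {key}")
        elif "enum" in spec and value not in spec["enum"]:
            errors.append(f"{key} must be one of {spec['enum']}")
    return errors
```

A call like `validate({"name": "parse", "kind": "function"}, LOOKUP_SYMBOL_SCHEMA)` passes cleanly, while a missing `name` or an out-of-enum `kind` is caught before the tool executes.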
Setting it up: 60 seconds
For Claude Code, add to ~/.claude/mcp.json:
```json
{
  "mcpServers": {
    "codecortex": {
      "command": "codecortex",
      "args": ["serve"]
    }
  }
}
```
For Cursor, paste the same config into the MCP settings panel.
That's it. The agent restarts, discovers 14 tools, and starts using them. No API keys, no cloud services, no account creation.
Why this matters for developer tool builders
If you're building a tool that AI agents should be able to use — a linter, a database explorer, a deployment manager, a code analyzer — MCP is the only integration you need to write.
The economics are simple:
Without MCP: 1 integration per agent × N agents = N maintenance burdens
With MCP: 1 integration total = 1 maintenance burden

The agent ecosystem is growing fast. Claude Code, Cursor, Windsurf, Codex, Gemini CLI, Zed — and the list keeps expanding. Every new MCP client automatically works with your tool. Zero integration effort on your part.
The 14 tools CodeCortex exposes
To make the abstract concrete, here are the 14 MCP tools CodeCortex provides. Nine for reading knowledge, five for writing it:
Read tools:
1. get_project_overview — Architecture understanding, data flow, module map
2. get_module_context — Deep dive into one module with gotchas and temporal data
3. lookup_symbol — Find any function/type/class by name with file:line
4. get_dependency_graph — Import and call edges between files/modules
5. get_change_coupling — Files that co-change in git (hidden dependencies)
6. get_hotspots — Volatility ranking of files by change frequency
7. get_decision_history — Architectural decisions and reasoning
8. search_knowledge — Full-text search across all knowledge files
9. get_session_briefing — What happened in the last session
Write tools:
10. analyze_module — Trigger deep analysis of a module
11. save_module_analysis — Persist module documentation
12. record_decision — Log an architectural decision
13. update_patterns — Document a coding pattern
14. report_feedback — Flag incorrect knowledge for the next analysis cycle
Each tool returns structured, token-efficient data. lookup_symbol returns the name, file, line, signature, and callers — not the entire file. The agent gets what it needs in 50 tokens instead of 5,000.
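A back-of-the-envelope check makes the token argument concrete. The field names below are hypothetical (the real CodeCortex response shape may differ), and the "4 characters per token" heuristic is a common rough estimate, not an exact tokenizer:

```python
import json

# Hypothetical compact lookup_symbol result: just what the agent needs.
compact = {
    "name": "resolve_imports",
    "file": "src/graph/resolver.py",
    "line": 142,
    "signature": "def resolve_imports(module: Module) -> list[Edge]",
    "callers": ["build_graph", "refresh_module"],
}

def rough_tokens(text):
    # Crude heuristic: roughly 4 characters per token for English/code.
    return len(text) // 4

compact_tokens = rough_tokens(json.dumps(compact))   # lands well under 100
whole_file_tokens = rough_tokens("x" * 20_000)       # a ~20 KB source file: ~5,000
```

Two orders of magnitude of context saved per lookup, multiplied across every tool call in a session.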
The ecosystem effect
MCP creates a network effect for developer tools. The more agents support MCP, the more tool builders adopt it. The more tools speak MCP, the more useful agents become.
The Awesome MCP Servers repository has thousands of entries. There are MCP servers for databases (query data without SQL in the context), browsers (automated testing), file systems (structured access), APIs (call external services), and now codebases (query structured knowledge).
For developers evaluating AI tooling, MCP support is the new baseline. If a tool doesn't speak MCP, it only works with the agents its developers explicitly integrated with. If it does speak MCP, it works with every agent in the ecosystem — today and tomorrow.
The takeaway
MCP is infrastructure. You don't think about TCP when loading a web page, and you won't think about MCP when your agent queries your codebase. It just works.
For tool builders: build one MCP server instead of N adapters. For developers: prefer tools with MCP support — they'll work with whatever agent you use next year. For the ecosystem: the integration problem is solved. Now we can focus on making the tools themselves better.