Give your AI agent
permanent codebase memory

Architecture, dependencies, patterns, history — pre-digested and served via MCP. Your agent starts every session already understanding your codebase.

Understands 27+ languages natively

TypeScript
Python
Go
JavaScript
C++
Java
Swift
Terminal
$ codecortex init
Step 1/6: Discovering project structure...
Found 44 files in 7 modules
Step 2/6: Extracting symbols with tree-sitter...
Extracted 976 symbols, 131 imports, 2489 call edges
Step 3/6: Building dependency graph...
7 modules, 14 external deps
Step 4/6: Analyzing git history...
49 hotspots, 26 coupling pairs
Step 5/6: Writing knowledge files...
Step 6/6: Generating constitution...
CodeCortex initialized!
Symbols: 976 | Modules: 7 | Hidden deps: 14

Agent Agnostic

One memory layer.
Every agent.

Switch between Claude Code, Cursor, Codex, and Gemini without losing context. CodeCortex speaks MCP — the universal protocol for AI tools. Your knowledge persists, no matter which agent you use.

Why It Works

Knowledge that compounds, not context that burns

Every session your agent spends re-reading files is a session wasted. CodeCortex builds structured knowledge that gets richer over time.

27 Languages

Native tree-sitter extraction across TypeScript, Python, Go, Rust, C, Java, and 21 more. Every symbol, every import, every edge.
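
As a rough illustration of what "every symbol, every import" extraction yields, here is a sketch using Python's stdlib ast module as a stand-in for tree-sitter; the record shape ({"name", "kind", "line"}) is hypothetical, not CodeCortex's actual schema:

```python
import ast

def extract_symbols(source: str) -> list[dict]:
    """Walk a Python AST and collect symbol records: classes,
    functions, and imports, each with its line number."""
    tree = ast.parse(source)
    symbols = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            kind = "class" if isinstance(node, ast.ClassDef) else "function"
            symbols.append({"name": node.name, "kind": kind, "line": node.lineno})
        elif isinstance(node, ast.Import):
            for alias in node.names:
                symbols.append({"name": alias.name, "kind": "import", "line": node.lineno})
    return symbols

code = """
import os

class Cache:
    def get(self, key):
        return None

def main():
    pass
"""
print(extract_symbols(code))
```

Tree-sitter does the same job with one grammar per language and error-tolerant parsing, which is what lets a single tool cover 27 languages.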

14 MCP Tools

Your agent doesn't read files — it queries knowledge. 9 read + 5 write tools via the Model Context Protocol.
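
For a sense of what "querying knowledge" looks like on the wire, here is a sketch of an MCP tool invocation: a JSON-RPC 2.0 request using the protocol's tools/call method, newline-delimited as sent over stdio. The tool name get_hotspots and its arguments are hypothetical placeholders, not CodeCortex's actual tool names:

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request for the MCP tools/call method,
    newline-terminated as sent over a stdio transport."""
    msg = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(msg) + "\n"

# Hypothetical tool name and argument for illustration.
line = mcp_tool_call(1, "get_hotspots", {"limit": 5})
print(line)
```

The agent never opens a source file; it sends one small request like this and gets back a structured answer.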

Temporal Intelligence

Some files are secretly coupled — zero imports, but they always change together. Git history reveals what code structure hides.
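
The co-change analysis can be sketched in a few lines: given the set of files touched by each commit (e.g. parsed from git log --name-only), count how often every pair of files lands in the same commit. The file names below are made up for illustration:

```python
from collections import Counter
from itertools import combinations

def coupling_pairs(commits: list[list[str]], min_count: int = 2):
    """Count how often each pair of files changes in the same commit,
    returning pairs seen at least min_count times, most frequent first."""
    pairs = Counter()
    for files in commits:
        # sorted() makes each pair canonical so (a, b) == (b, a)
        for pair in combinations(sorted(set(files)), 2):
            pairs[pair] += 1
    return [(p, n) for p, n in pairs.most_common() if n >= min_count]

commits = [
    ["api/routes.py", "docs/api.md"],
    ["api/routes.py", "docs/api.md", "tests/test_api.py"],
    ["core/db.py"],
]
print(coupling_pairs(commits))
```

Here api/routes.py and docs/api.md share zero imports, yet the history says they always move together — exactly the kind of hidden dependency static analysis misses.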

85% Token Reduction

Stop feeding your agent raw files. Structured knowledge delivers full codebase understanding in ~4,300 tokens vs ~37,800 from scanning.

Deep Understanding

A living map
of your codebase

Six layers of knowledge — from raw structure to architectural decisions — stored as flat files in .codecortex/. No database. No cloud. Commit it to git.

.codecortex/ (10 files)

Structural: graph.json
Semantic: modules/*.md
Temporal: temporal.json
Decisions: decisions/*.md
Patterns: patterns.md
Sessions: sessions/*.md

cortex.yaml
constitution.md
overview.md
graph.json
symbols.json
temporal.json
modules/
decisions/
sessions/
patterns.md

Battle-Tested

We threw the Linux kernel at it

93,000 files. 5.3 million symbols. 30 years of git history. One of the world's largest open-source codebases — understood in 90 seconds. Zero crashes.

Zero Config

One prompt. That's it.

Paste this into your AI agent. It installs CodeCortex, analyzes your codebase, and wires up MCP — your agent remembers everything from the next session onward.

Setup prompt - works with any LLM
Install codecortex-ai globally, then initialize it on this project. After init completes, add the MCP server config so I can use CodeCortex tools in future sessions.

Steps:
1. Run: npm install -g codecortex-ai
2. Run: codecortex init
3. Add this to my MCP config (claude_desktop_config.json or .cursor/mcp.json):
{
  "mcpServers": {
    "codecortex": {
      "command": "codecortex",
      "args": ["serve"]
    }
  }
}
4. Confirm it worked by running: codecortex status

Quick Start

From amnesia to understanding in three commands

1

Analyze your codebase

Discovers files, extracts symbols with tree-sitter, builds the dependency graph, and analyzes git history for hidden coupling.

$ codecortex init
2

Start the MCP server

Starts a local MCP server over stdio. Any MCP-compatible AI agent can connect and read your codebase knowledge.

$ codecortex serve
3

Connect your agent

Add the MCP config to Claude Code, Cursor, or any agent. It starts every session with full codebase understanding.

{
  "mcpServers": {
    "codecortex": {
      "command": "codecortex",
      "args": ["serve"]
    }
  }
}

Frequently
asked questions

Which AI agents does CodeCortex work with?
Any agent that supports MCP (Model Context Protocol). That includes Claude Code, Cursor, Codex, Windsurf, Gemini CLI, Zed, OpenCode, and more. The MCP server communicates over stdio, so any MCP-compatible client can connect and start using your codebase knowledge immediately.

Why is this better than letting my agent read the source files?
Reading source files gives your agent raw code with no context. CodeCortex gives it structured knowledge: what each module does, how files depend on each other, which files are secretly coupled through git co-change patterns, where the hotspots are, what architectural decisions were made, and how the code has evolved over time. Think of it as the difference between handing someone a phone book vs a map.

Does CodeCortex call an LLM or send my code to the cloud?
No. Everything runs locally on your machine. The structural extraction (symbols, imports, call graph, temporal analysis) uses native tree-sitter parsing. The semantic layer (module analysis, decisions, patterns) is written by your AI agent during normal coding sessions. No API keys, no cloud, no data leaves your machine.

Which languages does it support?
27 languages with native tree-sitter grammars: TypeScript, JavaScript, Python, Go, Rust, C, C++, Java, Kotlin, Scala, C#, Swift, Dart, Ruby, PHP, Lua, Bash, Elixir, OCaml, Elm, Solidity, Vue, Zig, and more. Each language has a dedicated extraction strategy optimized for its syntax and patterns.

Does it scale to large repositories?
CodeCortex uses streaming JSON writers for symbols and dependency graphs, so it can handle repositories with millions of symbols without hitting Node.js memory limits. It has been tested on large open-source projects like the Linux kernel (5.3M symbols) and OpenClaw (129K symbols).
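
The streaming-writer idea can be sketched in a few lines (shown here in Python for brevity; CodeCortex itself runs on Node.js): emit the array's opening bracket, then each element as it is produced, so the full structure never has to exist in memory at once:

```python
import io
import json

def stream_json_array(out, items):
    """Write items as a JSON array one element at a time, so the
    whole array is never materialized in memory."""
    out.write("[")
    for i, item in enumerate(items):
        if i:
            out.write(",")
        out.write(json.dumps(item))
    out.write("]")

buf = io.StringIO()
# A generator stands in for millions of symbol records.
stream_json_array(buf, ({"id": i} for i in range(3)))
print(buf.getvalue())
```

Because the input is a generator and each record is serialized and flushed independently, peak memory stays proportional to one record rather than the whole symbol table.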

What MCP tools does it expose?
9 read tools for querying knowledge (symbols, modules, dependencies, hotspots, coupling, decisions, patterns, overview, and search) plus 5 write tools for updating knowledge as your codebase evolves (update modules, add decisions, record patterns, log sessions, and refresh structure).

Won't this increase my token usage?
The opposite. CodeCortex reduces token usage by about 85% compared to raw file scanning. Instead of feeding your agent thousands of lines of source code, it gets a structured summary in around 4,300 tokens. Agents start faster and stay focused because they already understand the codebase.

Where is my codebase knowledge stored?
Everything lives in a .codecortex/ folder at the root of your project as flat files (JSON, Markdown, YAML). No database, no binary formats. You can commit it to git, inspect it manually, or delete it anytime. Fully portable between machines and agents.

Is CodeCortex free?
Yes. CodeCortex is MIT licensed and completely free to use. Install it with npm install -g codecortex-ai and run codecortex init in any project to get started.