Structural graph memory for AI coding assistants — MCP server for codebase navigation
Map your codebase. Navigate by structure. Read only what matters.
repo-graph gives LLMs a map of your codebase — entities, relationships, and flows — so they can navigate to the right files without reading everything first.
Instead of flooding an LLM's context window with your entire codebase (or hoping it guesses right), repo-graph builds a lightweight graph of what exists, how things connect, and where the entry points are. The LLM queries the graph, finds the minimal set of files it needs, and reads only those.
Same bug, same model, same prompt — the only difference is whether repo-graph is installed.
The task: fix a reversed comparison operator in a Go + Angular monorepo (566 nodes, 620 edges).
| | Without repo-graph | With repo-graph |
|---|---|---|
| Tokens used | 85,986 | 29,838 |
| Time to fix | 4m 36s | ~30s |
| Files explored | ~15 (grep, read, grep, read...) | 2 (flow lookup + handler file) |
| Outcome | Found and fixed the bug | Found and fixed the bug |
2.9x fewer tokens. ~9x faster. Same correct fix.
Both runs used identical conditions to keep the comparison fair:
- `/clear` with no prior conversation
- the prompt referenced `group_controller.go:57` on its own

Without repo-graph, Claude greps for keywords, reads files, greps again, reads more files, and eventually narrows down to the bug. With repo-graph, Claude calls `flow("groups")`, gets back the exact handler function and file, reads it, and fixes it.
Browse pre-generated examples for FastAPI, Gin, Hono, and NestJS — real graph output you can inspect without installing anything.
LLMs working on code waste most of their context on orientation: grepping for keywords, reading files that turn out to be irrelevant, and repeating until the right code finally surfaces.
This is expensive, slow, and gets worse as codebases grow.
repo-graph scans your codebase once and builds a graph of what exists, how things connect, and where the entry points are: entities, relationships, and feature flows.
Then it exposes 12 MCP tools (documented below) that let the LLM query that graph directly.
The LLM gets structural context in a few hundred tokens instead of reading thousands of lines.
| Language | Detection | What it extracts |
|---|---|---|
| Go | go.mod | Packages, functions, HTTP routes (gin/echo/chi/stdlib), imports |
| Rust | Cargo.toml | Crates, modules, structs, traits, functions, routes (Actix/Rocket/Axum) |
| TypeScript | tsconfig.json | Modules, classes, functions, import relationships |
| React | react in package.json | Components, hooks, context providers, React Router routes, fetch/axios calls, flows |
| Angular | @angular/core in package.json | Components, services, guards, DI injection, HTTP calls, feature flows |
| Python | pyproject.toml / setup.py / requirements.txt | Packages, modules, classes, functions, routes (Flask/FastAPI/Django) |
| Java/Kotlin | pom.xml / build.gradle | Packages, classes, routes (Spring/JAX-RS) |
| C#/.NET | .csproj / .sln | Namespaces, classes, routes (ASP.NET/Minimal API) |
| Ruby | Gemfile / .gemspec | Files, classes, modules, routes (Rails) |
| PHP | composer.json | Namespaces, classes, interfaces, routes (Laravel/Symfony) |
| Swift | Package.swift / .xcodeproj | Files, types (class/struct/enum/protocol/actor), routes (Vapor) |
| C/C++ | CMakeLists.txt / Makefile / meson.build | Sources, headers, classes, structs, enums, namespaces, includes |
| SCSS | .scss files present | File-level bloat analysis (selector blocks, sizes) |
Multiple analyzers can match one repo (e.g., Go backend + Angular frontend + SCSS). Each contributes its nodes and edges into a single unified graph.
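Conceptually, the merge step is just concatenation of per-analyzer results into one node list and one edge list. The sketch below is an illustration of that idea, not repo-graph's actual internals; the dict shapes and id prefixes are assumed for the example:

```python
# Conceptual sketch: each analyzer contributes (nodes, edges),
# and the results are merged into a single unified graph.
def merge_results(results):
    """Combine per-analyzer node/edge lists into one graph."""
    nodes, edges = [], []
    for analyzer_nodes, analyzer_edges in results:
        nodes.extend(analyzer_nodes)
        edges.extend(analyzer_edges)
    return {"nodes": nodes, "edges": edges}

# Example: a Go analyzer and an Angular analyzer matched the same repo.
go_result = (
    [{"id": "go:pkg/api", "type": "package", "name": "api"}],
    [{"from": "go:pkg/api", "to": "go:pkg/db", "type": "imports"}],
)
angular_result = (
    [{"id": "ng:GroupService", "type": "service", "name": "GroupService"}],
    [],
)

graph = merge_results([go_result, angular_result])
print(len(graph["nodes"]), len(graph["edges"]))  # 2 1
```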
```sh
pip install mcp-repo-graph
```

Requires Python 3.11+. Only runtime dependency: `mcp[cli]`.
```sh
repo-graph-generate --repo /path/to/your/project
```

This scans the codebase and writes graph data to `.ai/repo-graph/` inside the target repo.
Add to your MCP configuration:
Claude Code (`~/.claude/claude_code_config.json` or project `.mcp.json`):
```json
{
  "mcpServers": {
    "repo-graph": {
      "command": "repo-graph",
      "args": ["--repo", "/path/to/your/project"]
    }
  }
}
```

With environment variable:
```json
{
  "mcpServers": {
    "repo-graph": {
      "command": "repo-graph",
      "env": { "REPO_GRAPH_REPO": "/path/to/your/project" }
    }
  }
}
```

The AI assistant now has access to all 12 tools. Example queries it can answer:
- `status` tool
- `flow` tool
- `impact` tool
- `minimal_read` tool
- `split_plan` tool
- `graph_view` tool

Add `repo-graph-generate` to a pre-commit hook so the graph stays up to date automatically — no LLM context spent on regeneration:
```sh
#!/bin/sh
# .git/hooks/pre-commit (or add to your existing hook)
repo-graph-generate --repo .
git add .ai/repo-graph/
```

Make it executable with `chmod +x .git/hooks/pre-commit`. Every commit keeps the graph current. The LLM always has a fresh map without wasting a single token on `generate`.
Tip: If you don't want graph data in version control, add `.ai/repo-graph/` to `.gitignore` and skip the `git add` line — the graph will just live locally.
| Tool | Parameters | Description |
|---|---|---|
| generate | (none) | Scan the codebase from scratch, rebuild the graph, and reload |
| reload | (none) | Reload graph data from disk (after external `repo-graph-generate`) |
| Tool | Parameters | Description |
|---|---|---|
| status | (none) | Repo overview: git state, detected languages, entity counts, available flows |
| flow | feature | End-to-end flow for a feature — from entry point through service layer to data |
| trace | from_id, to_id | Shortest path between any two nodes in the graph |
| impact | node_id, direction (upstream/downstream), depth | Fan out from a node to see what it affects or depends on |
| neighbours | node_id | All direct connections to and from a node |
| Tool | Parameters | Description |
|---|---|---|
| cost | feature | Total line count for all files in a feature's flow |
| hotspots | top_n | Files ranked by size * connections — maintenance risk indicators |
| minimal_read | feature, task_hint | Smallest file set needed for a specific task within a feature |
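As a rough illustration of the `hotspots` idea (ranking files by size times connection count), here is a hedged sketch. The function and data shapes are invented for the example; this is not repo-graph's API:

```python
# Illustrative sketch of a size-times-degree ranking, the heuristic
# the hotspots tool's description is based on.
def hotspots(files, edges, top_n=3):
    """files: {path: line_count}; edges: [(from_path, to_path), ...]."""
    degree = {}
    for src, dst in edges:
        degree[src] = degree.get(src, 0) + 1
        degree[dst] = degree.get(dst, 0) + 1
    scored = [(path, lines * degree.get(path, 0)) for path, lines in files.items()]
    scored.sort(key=lambda item: item[1], reverse=True)
    return scored[:top_n]

files = {"api.go": 900, "util.go": 120, "main.go": 60}
edges = [("main.go", "api.go"), ("api.go", "util.go"), ("main.go", "util.go")]
print(hotspots(files, edges, top_n=2))  # [('api.go', 1800), ('util.go', 240)]
```

A large file with few connections is cheap to refactor later; a large, highly-connected file is where changes are riskiest, which is why the two factors are multiplied.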
| Tool | Parameters | Description |
|---|---|---|
| bloat_report | file_path | Internal structure of a file: functions/methods ranked by size, type counts |
| split_plan | file_path | Concrete suggestions for splitting an oversized file, grouped by responsibility |
| graph_view | feature or node, depth | Visual ASCII map of a feature flow, node neighbourhood, or full graph overview |
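Conceptually, `trace` is a breadth-first shortest-path search over the graph's edge list. A minimal sketch, assuming edges of the `{from, to, type}` shape the generator emits; the function here is illustrative, not the server's code:

```python
from collections import deque

# Illustrative shortest-path lookup over a directed edge list,
# the idea behind the trace tool.
def trace(edges, from_id, to_id):
    adjacency = {}
    for edge in edges:
        adjacency.setdefault(edge["from"], []).append(edge["to"])
    queue = deque([[from_id]])  # BFS over paths guarantees a shortest path
    seen = {from_id}
    while queue:
        path = queue.popleft()
        if path[-1] == to_id:
            return path
        for nxt in adjacency.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no path between the two nodes

edges = [
    {"from": "route:/groups", "to": "handler:ListGroups", "type": "handled_by"},
    {"from": "handler:ListGroups", "to": "service:GroupService", "type": "calls"},
    {"from": "service:GroupService", "to": "table:groups", "type": "uses"},
]
print(trace(edges, "route:/groups", "table:groups"))
# ['route:/groups', 'handler:ListGroups', 'service:GroupService', 'table:groups']
```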
`scan_project_dirs()` finds project roots (including monorepo layouts like `packages/*`, `apps/*`, `services/*`, `src/*`). Each analyzer checks for its marker files.

Generated files live in `.ai/repo-graph/` inside the target repo:
- `nodes.json` — `[{id, type, name, file_path}, ...]`
- `edges.json` — `[{from, to, type}, ...]`
- `flows/*.yaml` — named feature flows with ordered step sequences
- `state.md` — human-readable snapshot for quick orientation

Edge types: `imports`, `defines`, `contains`, `uses`, `calls`, `handles`, `handled_by`, `exports`, `includes`.
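Because the output is plain JSON, the graph can also be consumed directly without the MCP server. A minimal sketch using the node/edge shapes above; the helper names are invented for the example:

```python
import json
import tempfile
from pathlib import Path

# Load the generated graph files (shapes as documented:
# nodes {id, type, name, file_path}; edges {from, to, type}).
def load_graph(graph_dir):
    graph_dir = Path(graph_dir)
    nodes = json.loads((graph_dir / "nodes.json").read_text())
    edges = json.loads((graph_dir / "edges.json").read_text())
    return nodes, edges

def neighbours(edges, node_id):
    """Direct connections to and from a node."""
    return {
        "out": [e["to"] for e in edges if e["from"] == node_id],
        "in": [e["from"] for e in edges if e["to"] == node_id],
    }

# Demo with a throwaway graph directory instead of .ai/repo-graph/.
with tempfile.TemporaryDirectory() as tmp:
    d = Path(tmp)
    (d / "nodes.json").write_text(json.dumps(
        [{"id": "a", "type": "function", "name": "A", "file_path": "a.go"}]))
    (d / "edges.json").write_text(json.dumps(
        [{"from": "a", "to": "b", "type": "calls"}]))
    nodes, edges = load_graph(d)
    print(neighbours(edges, "a"))  # {'out': ['b'], 'in': []}
```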
Create `repo_graph/analyzers/<language>.py`:
```python
from .base import AnalysisResult, Edge, LanguageAnalyzer, Node, scan_project_dirs, rel_path, read_safe


class MyLangAnalyzer(LanguageAnalyzer):
    @staticmethod
    def detect(repo_root):
        # Check for language marker files
        return any(
            (d / "my-marker").exists()
            for d in scan_project_dirs(repo_root)
        )

    def scan(self):
        nodes, edges = [], []
        # ... scan files, extract entities, build relationships ...
        return AnalysisResult(
            nodes=nodes,
            edges=edges,
            state_sections={"MyLang": f"{len(nodes)} entities\n"},
        )

    # Optional: file-level analysis for bloat_report / split_plan
    def supported_extensions(self):
        return {".mylang"}

    def analyze_file(self, file_path):
        # Return dict with function/method sizes, class counts, etc.
        pass

    def format_bloat_report(self, analysis):
        # Format the analysis dict into a human-readable string
        pass
```

Register it in `analyzers/__init__.py` by adding it to `_analyzer_classes()`.
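The registration pattern can be sketched in a self-contained way. The classes and the marker check below are stand-ins (a real `detect` receives a repo path, not a set of filenames), but the shape mirrors returning your class from `_analyzer_classes()`:

```python
# Self-contained sketch of the registry pattern. In repo-graph itself,
# registration means adding your class to _analyzer_classes() in
# analyzers/__init__.py; everything here is illustrative.
class GoAnalyzer:
    @staticmethod
    def detect(markers):
        return "go.mod" in markers

class MyLangAnalyzer:
    @staticmethod
    def detect(markers):
        return "my-marker" in markers

def _analyzer_classes():
    # Returning the new class from here is all registration takes.
    return [GoAnalyzer, MyLangAnalyzer]

def matching_analyzers(markers):
    # Every analyzer whose detect() matches contributes to the graph.
    return [cls for cls in _analyzer_classes() if cls.detect(markers)]

print([cls.__name__ for cls in matching_analyzers({"go.mod", "my-marker"})])
# ['GoAnalyzer', 'MyLangAnalyzer']
```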
MIT
If repo-graph saved you time, consider buying me a coffee.
<p align="center"> <a href="https://buymeacoffee.com/polycrisis"> <img src="docs/bmc-qr.png" alt="Buy Me a Coffee" width="200"> </a> <br> <a href="https://buymeacoffee.com/polycrisis">buymeacoffee.com/polycrisis</a> </p>

<!-- mcp-name: io.github.James-Chahwan/repo-graph -->