

brain-mcp-server

A generic MCP server that serves Markdown-based AI Brain files to any MCP-compatible Claude client.

What It Does

Gives Claude persistent, context-aware access to a collection of Markdown files (an "AI Brain") via the Model Context Protocol. Supports reading, writing, searching, and git-backed versioning — all over local stdio transport.

Tools

  • brain_load_context: Entry point — returns the loader + NOW.md, plus lint, issue, and inbox nudges
  • brain_read_file: Read a specific Brain file by name
  • brain_update_file: Update a Brain file (replace, append, or patch)
  • brain_commit: Git commit changes, optionally push
  • brain_list_files: List all files with staleness metadata
  • brain_search: Search across all Brain files
  • brain_log: Append an entry to the Brain change log
  • brain_read_log: Read recent change log entries
  • brain_lint: Run a health check (bloat, staleness, orphan backlinks, drift, missing cross-references)
  • brain_ingest: Process a new source — dry-run analysis or save to sources/
  • brain_ingest_complete: Record provenance after ingest (updates SOURCES.md + LOG.md, optionally deletes the inbox file)
  • brain_scan_inbox: List files pending in the inbox/ drop-folder for processing
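Each tool is invoked through a standard MCP tools/call request over stdio. A call to brain_load_context, for example, is a JSON-RPC message shaped like the following (per the MCP specification; the id value is arbitrary):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "brain_load_context",
    "arguments": {}
  }
}
```

The server replies with a result carrying a content array of text blocks; for this server, that is the loader, NOW.md, and any nudges.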

Requirements

  • Node.js 18+
  • A directory of Markdown files (your "Brain") with at least 00_loader.md and NOW.md
  • Git initialised in the Brain directory (for commit/push features)
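A minimal sketch of a Brain directory that satisfies these requirements (file contents here are placeholders, not the real loader format):

```shell
# Scaffold a throwaway Brain with the two required files, then init git.
mkdir -p brain-quickstart && cd brain-quickstart
printf '# Loader\nNavigation table goes here.\n' > 00_loader.md
printf '# NOW\nCurrent focus and active tasks.\n' > NOW.md
git init -q
git add 00_loader.md NOW.md
# -c flags avoid depending on a global git identity being configured
git -c user.name=demo -c user.email=demo@example.com commit -qm 'init brain'
```

Point BRAIN_DIR at this directory (or your real Brain) when configuring a client.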

Quick Start

# 1. Clone and build
git clone <your-fork-or-clone-url> ~/Projects/brain-mcp-server
cd ~/Projects/brain-mcp-server
npm install
npm run build

# 2. Verify the build produced dist/index.js
ls dist/index.js

# 3. Test with MCP Inspector (optional but recommended)
npx @modelcontextprotocol/inspector node dist/index.js

Configuration

  • BRAIN_DIR: Absolute path to your Brain markdown files directory (default: ~/Projects/ai-brain-jem/brain)

Client Setup

Claude Code

Add to ~/.claude/settings.json (user-level) or .claude/settings.json (project-level):

{
  "mcpServers": {
    "brain": {
      "command": "node",
      "args": ["/absolute/path/to/brain-mcp-server/dist/index.js"],
      "env": {
        "BRAIN_DIR": "/absolute/path/to/your/brain/files"
      }
    }
  }
}

Claude Desktop

Add to claude_desktop_config.json (same format as above). On macOS this file lives at ~/Library/Application Support/Claude/claude_desktop_config.json; on Windows, %APPDATA%\Claude\claude_desktop_config.json.

Claude Cowork

Add via the MCP server configuration in Cowork settings (same command, args, and env values).

Post-Install Configuration

After the MCP server is connected, three additional steps make it fully automatic.

Step 1: Pre-authorise tool calls (Claude Code)

Add mcp__brain to your permissions allow-list so Claude doesn't prompt for approval on every Brain tool call.

In ~/.claude/settings.json, add to the permissions.allow array:

{
  "permissions": {
    "allow": [
      "mcp__brain"
    ]
  }
}

This matches all twelve Brain tools. You can verify with /permissions in Claude Code.
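If you would rather not blanket-allow the server, Claude Code permission entries also follow the mcp__<server>__<tool> pattern, so you can allow individual tools instead (the selection below is illustrative):

```json
{
  "permissions": {
    "allow": [
      "mcp__brain__brain_load_context",
      "mcp__brain__brain_read_file",
      "mcp__brain__brain_search"
    ]
  }
}
```

With a narrower grant like this, write-capable tools such as brain_update_file and brain_commit still prompt for approval.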

Step 2: Conditional auto-load directive (Claude Code / Cowork)

Add the following to ~/.claude/CLAUDE.md so Claude loads Brain context when relevant — not on every conversation:

## AI Brain (Conditional Auto-Load)

An AI Brain (personal knowledge system) is connected via MCP. Load it when the conversation
benefits from personal context — skip it for generic tasks.

Load Brain context proactively (don't wait to be asked) when:
- Writing on the user's behalf, career/professional tasks, work context, personal projects,
  strategy/advice requiring background, anything needing voice/preferences/expertise,
  or user references "my Brain" / "my context"

Skip when:
- Generic technical questions, general knowledge/research, pure coding help,
  or user explicitly says not to load

Load sequence (when loading):
1. Fetch tools (if deferred): ToolSearch(query="select:mcp__brain__brain_load_context,mcp__brain__brain_read_file,mcp__brain__brain_search,mcp__brain__brain_update_file,mcp__brain__brain_commit,mcp__brain__brain_log,mcp__brain__brain_read_log,mcp__brain__brain_lint,mcp__brain__brain_ingest,mcp__brain__brain_ingest_complete,mcp__brain__brain_scan_inbox")
2. Call brain_load_context (returns loader + NOW.md + lint/issue nudges)
3. Call brain_read_file for task-relevant files per the navigation table
4. If brain_load_context flags a lint nudge or open issues, act accordingly

This works for both Claude Code and Cowork (both read ~/.claude/CLAUDE.md).

Step 3: User preferences (Claude Desktop / claude.ai — manual)

Claude Desktop and claude.ai do not read CLAUDE.md. For these clients, add the conditional auto-load directive to your user preferences (Settings → Profile → User preferences).

See MANUAL_SETUP.md for the exact text to paste, verification tests, and a troubleshooting checklist.

Note: This step requires manual entry in each client's preferences UI. It cannot be automated.

How It Works

  1. Claude calls brain_load_context at session start (automatically, if configured per above)
  2. The response includes the loader, NOW.md, and nudges (lint overdue, open maintenance issues, pending inbox files)
  3. Claude reads the navigation table and requests specific files via brain_read_file
  4. Edits are written via brain_update_file, then committed via brain_commit
  5. New information is processed via brain_ingest (dry-run analysis, then guided updates)
  6. Changes are tracked via brain_log; health is checked via brain_lint

The routing logic lives in the loader (a Markdown file you maintain), not in code. The server is content-agnostic — it knows nothing about your specific Brain content, only how to serve Markdown files.

Brain files use [[backlinks]] (Obsidian-style wikilinks) to cross-reference each other. During ingest, the LLM maintains these links across all affected files. The lint process checks for orphan content files (zero inbound backlinks). The Brain directory can be opened as an Obsidian vault for graph view and backlink navigation.
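The orphan check can be sketched in a few lines of shell, assuming a wikilink [[name]] resolves to name.md (brain_lint's actual resolution rules may differ):

```shell
# Sketch of the orphan-backlink check, using throwaway demo files.
mkdir -p brain-demo && cd brain-demo
printf 'See [[NOW]] for current state.\n' > 00_loader.md
printf 'Current focus. Background in [[00_loader]].\n' > NOW.md
printf 'Nothing links here yet.\n' > orphan-note.md

orphans=""
for f in *.md; do
  name="${f%.md}"
  # orphan = no *other* file contains [[name]]
  if ! grep -l "\[\[$name\]\]" *.md 2>/dev/null | grep -qvx "$f"; then
    orphans="$orphans $f"
  fi
done
echo "orphans:$orphans"   # prints "orphans: orphan-note.md"
```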

Recommended Workflow

The MCP server works from any Claude client, but different tasks suit different clients:

  • Using Brain in conversation: Chat (any client with MCP). All read/write tools work over stdio; no uploads needed.
  • Small updates (NOW.md, tasks, journal): Chat. brain_update_file + brain_commit handle it directly.
  • Large document ingestion: Cowork or Code. Multi-step workflow needs filesystem access to sources/.
  • Drop-folder ingestion: Any (via scheduled task). Drop files into inbox/; a daily task processes them automatically.
  • Server code maintenance: Code. Iterative build-test-commit cycle.
  • Brain repo git operations: Code. Shell access for rebasing, conflict resolution, pushing.
  • Graph view & knowledge mapping: Obsidian. Open brain/ as a vault for graph visualization, backlink navigation, and orphan detection.

Chat is the primary interface for day-to-day Brain usage — loading context, searching, editing files, committing. Cowork or Code are needed when the workflow requires direct filesystem access (e.g., saving source documents) or agentic multi-step operations. Code is the right tool for maintaining the server codebase itself.

For the full methodology, see the AI Brain Primer.

Related Projects

  • ai-brain-primer — Framework and methodology for building an AI Brain
  • ai-brain-jem — Example private Brain implementation (private repo)

License

MIT

Repository

JEM-Fizbit/brain-mcp-server

Created: March 25, 2026
Updated: April 13, 2026
Language: TypeScript
Category: AI