
fu7ur3pr00f


fu7ur3pr00f — AI career agent with 41 tools, 12 MCP clients, and 5 specialists. Gathers LinkedIn/GitHub/GitLab data, builds RAG knowledge base, analyzes skill gaps, tracks job markets, generates ATS-optimized CVs. Chat-first, powered by LangChain + ChromaDB.


Career intelligence agent that gathers professional data, searches job boards, analyzes career trajectories, and generates ATS-optimized CVs through conversational chat.

Quick Start

# Install
pipx install fu7ur3pr00f

# Run
fu7ur3pr00f

In the chat:

  • /setup — Configure your LLM provider
  • /gather — Import LinkedIn, GitHub, portfolio, CliftonStrengths
  • /analyze — Get skill gap analysis
  • /search — Query 7 job boards + Hacker News
  • /generate — Create ATS-optimized CV (Markdown + PDF)

Installation

Debian/Ubuntu (amd64)

curl -fsSL https://juanmanueldaza.github.io/fu7ur3pr00f/fu7ur3pr00f-archive-keyring.gpg | \
  sudo tee /usr/share/keyrings/fu7ur3pr00f-archive-keyring.gpg >/dev/null

echo "deb [arch=amd64 signed-by=/usr/share/keyrings/fu7ur3pr00f-archive-keyring.gpg] \
https://juanmanueldaza.github.io/fu7ur3pr00f stable main" | \
  sudo tee /etc/apt/sources.list.d/fu7ur3pr00f.list

sudo apt update && sudo apt install fu7ur3pr00f

Development

git clone https://github.com/juanmanueldaza/fu7ur3pr00f.git
cd fu7ur3pr00f
pip install -e .
pip install -r requirements-dev.txt

# Run locally
fu7ur3pr00f

# Run with debug logs
fu7ur3pr00f --debug

Configuration

Run /setup in the chat, or manually edit ~/.fu7ur3pr00f/.env:

# Pick ONE provider (auto-detected if empty)
FUTUREPROOF_PROXY_KEY=fp-...   # Default, zero config
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
OLLAMA_BASE_URL=http://localhost:11434  # Local, offline

See .env.example for all options.

Chat Commands

| Command | Description |
| --- | --- |
| /help or /h | Show help message |
| /setup | Configure LLM providers and API keys |
| /gather | Gather career data (LinkedIn, CliftonStrengths, CV, portfolio) |
| /profile | View your career profile |
| /goals | View your career goals |
| /thread [name] | Show or switch conversation thread |
| /threads | List saved user conversation threads |
| /memory | Show memory and profile stats |
| /debug | Toggle debug mode (verbose logging) |
| /verbose | Show system information |
| /agents | List available specialist agents |
| /clear | Clear current thread history |
| /reset | Factory reset (delete all generated data) |
| /quit, /q, or /exit | Exit chat |

Architecture

Multi-Agent with Blackboard Pattern

graph TB
    User --> Chat[Chat Client]
    Chat --> Engine[Engine<br/>invoke_turn]
    Engine --> Outer[Outer Graph<br/>SessionState persistent]
    Outer --> Classify[classify_turn<br/>factual / follow_up / new_query]
    Classify --> Route[route_turn<br/>LLM routing + keyword fallback]
    Route --> Inner[Inner Blackboard Graph<br/>per-turn execution]
    Inner --> S1[Coach]
    Inner --> S2[Jobs]
    Inner --> S3[Learning]
    Inner --> S4[Code / Founder]
    S1 & S2 & S3 & S4 --> KB[(ChromaDB<br/>career knowledge)]
    Inner --> Synth[Synthesis<br/>LLM]
    Outer --> Accum[accumulate findings<br/>cross-turn context]
    Accum --> Suggest[suggest_next<br/>proactive follow-ups]
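
The per-turn flow above can be sketched as specialists posting findings to a shared blackboard that a synthesis step then reads. This is a generic illustration of the pattern with invented names (`Blackboard`, `run_turn`), not the project's actual code:

```python
# Generic blackboard sketch: each specialist writes findings to a shared
# store; a synthesis step combines them. Names are illustrative only.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Blackboard:
    """Shared per-turn store that specialists append findings to."""
    query: str
    findings: dict[str, list[str]] = field(default_factory=dict)

    def post(self, specialist: str, finding: str) -> None:
        self.findings.setdefault(specialist, []).append(finding)

def run_turn(query: str, specialists: dict[str, Callable[[str], str]]) -> str:
    bb = Blackboard(query)
    for name, fn in specialists.items():
        bb.post(name, fn(query))  # each specialist contributes a finding
    # synthesis step: combine all findings into one answer
    parts = [f for fs in bb.findings.values() for f in fs]
    return " | ".join(parts)
```

In the real system the synthesis step is an LLM call; here it is a plain join so the control flow stays visible.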

Routing Architecture:

  • LLM-based semantic routing: Understands query intent, selects 1-4 specialists
  • Keyword fallback: Deterministic fallback if LLM unavailable (rate limits, network errors)
  • Fast paths: Factual queries → coach only; follow-ups → reuse previous specialists
  • Structured output: RoutingDecision model guarantees valid specialist names
  • Specialist guidance: All instructions load from prompts/md/specialist_guidance.md (no hardcoded fallbacks)
  • Direct model selection: Purpose-specific models are selected from configured provider settings; invocation errors surface directly instead of retrying across models
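
The routing behavior described above (structured output plus deterministic keyword fallback) might look roughly like this sketch. The specialist names mirror the diagram; `RoutingDecision`'s validation, the keyword table, and `route` itself are assumptions for illustration:

```python
from dataclasses import dataclass

SPECIALISTS = {"coach", "jobs", "learning", "code"}

@dataclass
class RoutingDecision:
    """Validated routing result; only known specialist names are accepted."""
    specialists: list[str]

    def __post_init__(self):
        bad = set(self.specialists) - SPECIALISTS
        if bad:
            raise ValueError(f"unknown specialists: {bad}")

# Illustrative keyword table; the real prompts and weights are not shown here.
KEYWORDS = {
    "jobs": ["job", "salary", "hiring"],
    "learning": ["learn", "course", "skill"],
    "code": ["repo", "github", "code"],
}

def route(query: str, llm_route=None) -> RoutingDecision:
    """Try LLM routing first; fall back to keyword scoring on failure."""
    if llm_route is not None:
        try:
            return RoutingDecision(llm_route(query))
        except Exception:
            pass  # rate limit, network error, or invalid output: use fallback
    q = query.lower()
    hits = [s for s, kws in KEYWORDS.items() if any(k in q for k in kws)]
    return RoutingDecision(hits or ["coach"])  # coach handles factual queries
```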

Design decisions:

| Decision | Why |
| --- | --- |
| Multi-agent blackboard | A single agent juggling 41 tools loses focus; specialists provide focused reasoning via the blackboard pattern |
| LLM routing | Keyword matching is too brittle: "leverage strengths to win money" should route to 3 specialists, not 1 |
| Keyword fallback | Network-resilient: keeps routing obvious queries to the right specialist even if the LLM is unavailable |
| Direct model selection | Keeps model behavior explicit: choose the configured model and surface real errors instead of hiding them behind runtime fallback |
| Blackboard pattern | Multi-specialist analysis with shared context and iteration |
| Database-first | Gatherers index directly to ChromaDB; no intermediate files |
| Two-pass synthesis | AnalysisSynthesisMiddleware replaces generic LLM output with focused reasoning |
| HITL confirmation | Destructive/expensive operations require user approval via interrupt() |
| Prompt-driven | All specialist behavior comes from the prompts folder; zero hardcoded fallbacks |
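
The HITL row refers to LangGraph's interrupt(). As a framework-free illustration of the same gate idea, here is a hypothetical wrapper; `requires_confirmation` and the `ask` callback are invented, and only the tool name comes from the tool list below:

```python
from typing import Callable

def requires_confirmation(fn: Callable, ask: Callable[[str], bool]):
    """Wrap a destructive operation behind a user-approval gate.

    `ask` plays the role that LangGraph's interrupt() plays in the real
    system: pause, surface a question, resume with the user's answer.
    """
    def gated(*args, **kwargs):
        if not ask(f"Run {fn.__name__}{args}? This cannot be undone."):
            return "cancelled by user"
        return fn(*args, **kwargs)
    return gated

def clear_career_knowledge() -> str:  # destructive tool (name from the tool list)
    return "knowledge base cleared"

# deny everything, for illustration; a real `ask` would prompt the user
safe_clear = requires_confirmation(clear_career_knowledge, ask=lambda q: False)
```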

Tools

41 tools organized by domain:

| Category | Tools |
| --- | --- |
| Profile (7) | get_user_profile, update_user_name, update_current_role, update_salary_info, update_user_skills, set_target_roles, update_user_goal |
| Gathering (5) | gather_portfolio_data, gather_linkedin_data, gather_assessment_data, gather_cv_data, gather_all_career_data |
| GitHub (3) | search_github_repos, get_github_repo, get_github_profile |
| GitLab (3) | search_gitlab_projects, get_gitlab_project, get_gitlab_file |
| Knowledge (4) | search_career_knowledge, get_knowledge_stats, index_career_knowledge, clear_career_knowledge |
| Analysis (3) | analyze_skill_gaps, analyze_career_alignment, get_career_advice |
| Market (6) | search_jobs, get_tech_trends, get_salary_insights, analyze_market_fit, analyze_market_skills, gather_market_data |
| Financial (2) | convert_currency, compare_salary_ppp |
| Generation (2) | generate_cv, generate_cv_draft |
| Memory (4) | remember_decision, remember_job_application, recall_memories, get_memory_stats |
| Settings (2) | get_current_config, update_setting |

MCP Clients

12 MCP clients for real-time data access:

| Client | Purpose |
| --- | --- |
| github | Repository search, file access, profile |
| financial | Currency conversion, PPP comparison |
| tavily | Web search, salary research |
| hn | Hacker News jobs, trending discussions |
| jobspy | Multi-board job aggregation |
| remoteok | Remote job listings |
| himalayas | Remote job listings |
| remotive | Remote job listings |
| jobicy | Remote job listings |
| weworkremotely | Remote job listings |
| devto | Developer articles, trends |
| stackoverflow | Tag trends, popular questions |
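
With several job-board clients returning overlapping listings, the aggregation layer presumably de-duplicates results. A hypothetical sketch; the `Job` shape and `aggregate` function are assumptions, and the caller supplies the client callables:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Job:
    title: str
    company: str
    url: str

def aggregate(clients: dict[str, Callable[[], list[Job]]]) -> list[Job]:
    """Query every job-board client and de-duplicate by (title, company)."""
    seen: set[tuple[str, str]] = set()
    merged: list[Job] = []
    for search in clients.values():
        for job in search():
            key = (job.title.lower(), job.company.lower())
            if key not in seen:  # keep the first board's copy of a listing
                seen.add(key)
                merged.append(job)
    return merged
```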

Development

# Install dev tools
pip install pyright pytest ruff

# Test
pytest tests/ -q
pyright src/fu7ur3pr00f
ruff check .
ruff check . --fix

Scripts

| Script | Purpose |
| --- | --- |
| scripts/setup.sh | One-time Azure/config setup |
| scripts/fresh_install_check.sh | Validate pipx install |
| scripts/clean_dev_artifacts.sh | Clean build artifacts |
| scripts/build_deb.sh | Build .deb package |
| scripts/build_apt_repo.sh | Build apt repository |
| scripts/validate_apt_artifact.sh | Test .deb in Docker |
| scripts/vagrant.sh | Vagrant VM management |

Testing

# Unit tests
pytest tests/ -q

# Benchmarks
pytest tests/benchmarks/ -v

# Fresh install check
scripts/fresh_install_check.sh --source local --config-from .env

# Vagrant apt repo testing
scripts/vagrant.sh test-apt

# Multi-agent system testing
scripts/vagrant.sh multi

Offline behavior:

  • Specialist routing falls back to deterministic keyword scoring in CI or other offline environments.
  • CV parsing falls back to local heading extraction for Markdown and plain-text resumes when LLM section extraction is unavailable.
  • Sensitive LinkedIn/social sections such as conversations and sponsored message threads are excluded before knowledge indexing, not just hidden at search time.
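
The local heading-extraction fallback for Markdown CVs could be as simple as splitting on ATX headings. This is a guess at the shape of that fallback, not the actual parser (`split_sections` is an invented name):

```python
import re

def split_sections(markdown: str) -> dict[str, str]:
    """Split a Markdown CV into {heading: body} without any LLM call."""
    sections: dict[str, str] = {}
    current = "preamble"  # text before the first heading
    buf: list[str] = []
    for line in markdown.splitlines():
        m = re.match(r"#{1,6}\s+(.*)", line)
        if m:
            sections[current] = "\n".join(buf).strip()
            current, buf = m.group(1).strip(), []
        else:
            buf.append(line)
    sections[current] = "\n".join(buf).strip()
    return sections
```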

System Dependencies (Optional)

| Feature | Package |
| --- | --- |
| GitLab CLI | sudo apt install glab |
| CliftonStrengths PDF parsing | sudo apt install poppler-utils |
| CV PDF export | sudo apt install libpango-1.0-0 libpangoft2-1.0-0 libcairo2 libfontconfig1 libgdk-pixbuf-2.0-0 |
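
A startup check for these optional tools might probe PATH with `shutil.which`; a sketch in which the feature keys and the `pdftotext` binary mapping are assumptions:

```python
import shutil

# Binaries provided by the optional packages listed above; the mapping of
# poppler-utils to the pdftotext binary is an assumption.
OPTIONAL_DEPS = {
    "gitlab_cli": "glab",
    "pdf_parsing": "pdftotext",
}

def check_optional_deps() -> dict[str, bool]:
    """Report which optional system tools are available on PATH."""
    return {feature: shutil.which(binary) is not None
            for feature, binary in OPTIONAL_DEPS.items()}
```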

Tech Stack

Python 3.13 · LangChain + LangGraph · ChromaDB · Typer + Rich · WeasyPrint · MCP

Documentation

Documentation is embedded in the codebase:

  • Prompts: src/fu7ur3pr00f/prompts/md/ — All system and specialist prompts
  • Tools: src/fu7ur3pr00f/agents/tools/ — Tool implementations with docstrings
  • MCP Clients: src/fu7ur3pr00f/mcp/ — MCP client implementations
  • Memory: src/fu7ur3pr00f/memory/ — ChromaDB, RAG, episodic memory
  • Gatherers: src/fu7ur3pr00f/gatherers/ — Data collection modules
  • Specialists: src/fu7ur3pr00f/agents/specialists/ — Multi-agent specialists and orchestrator
  • Blackboard: src/fu7ur3pr00f/agents/blackboard/ — Blackboard pattern implementation

Key documentation files:

  • QWEN.md — Project context for AI assistants
  • GEMINI.md — Additional project documentation
  • .env.example — Configuration reference

Licensed under GPL-2.0.

Repository

juanmanueldaza/fu7ur3pr00f

  • Created: January 22, 2026
  • Updated: April 13, 2026
  • Language: Python
  • Category: AI