Dive is an open-source, cross-platform MCP Host desktop application designed to integrate seamlessly with LLMs that support function calling, such as ChatGPT, Anthropic, Ollama, Google Gemini, and Mistral AI. It supports Windows, macOS, and Linux.
Key features include universal LLM support, Model Context Protocol (MCP) support over both stdio and SSE transports, multi-language support (Traditional Chinese, Simplified Chinese, English, Spanish), advanced API management, custom instructions, and auto-updates.
Recent updates add a Spanish translation and extend model support with Google Gemini and Mistral AI integration.
Downloads are available for Windows (.exe, with Python and Node.js pre-installed), macOS (.dmg, requires manual installation of Python and Node.js), and Linux (.AppImage, requires manual installation of Python and Node.js and may need to be launched with the --no-sandbox flag).
Dive ships with a default echo MCP server, but it can be configured to use additional tools such as Fetch and Youtube-dl. Configuration examples are provided for both stdio and SSE modes, as sketched below. yt-dlp-mcp requires the yt-dlp package, with installation instructions given for each OS.
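As a rough illustration of the two modes, the sketch below shows how an MCP host config of this kind typically declares servers: a stdio server is given a command to spawn locally, while an SSE server points at an already-running HTTP endpoint. The file layout, key names, and package names here (mcp-server-fetch via uvx, @kevinwatt/yt-dlp-mcp via npx) are assumptions modeled on common MCP host configurations, so Dive's own bundled examples should be treated as authoritative.

```json
{
  "mcpServers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"],
      "enabled": true
    },
    "youtubedl": {
      "command": "npx",
      "args": ["@kevinwatt/yt-dlp-mcp"],
      "enabled": true
    },
    "remote-sse-server": {
      "enabled": true,
      "transport": "sse",
      "url": "http://localhost:3000/sse"
    }
  }
}
```

With entries like these, the host launches the stdio servers as local subprocesses and communicates over their stdin/stdout, whereas the SSE entry only needs the URL of a server that is already running elsewhere.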
OpenAgentPlatform/Dive · TypeScript · January 24, 2025 · March 28, 2025