
# MCP Protocol


The Model Context Protocol (MCP) is an open specification for connecting AI models to external data sources and tools. Published by Anthropic under the MIT License, MCP is a standardized way for AI agents to discover and call tools provided by external servers.

MCP solves a coordination problem: every AI coding agent previously had to invent its own plugin system, and every tool had to be reimplemented for each agent. With MCP, a tool that speaks the protocol works with any agent that speaks the protocol.

Forge implements the MCP server side of the protocol. AI agents (Claude Code, Cursor, Windsurf, Zed, Continue, and others) implement the MCP client side.

Forge uses the stdio transport: the MCP client (your AI agent) launches `forge serve` as a subprocess and communicates with it over stdin/stdout. This is the most common MCP transport and is universally supported. A typical client configuration:

```json
{
  "mcpServers": {
    "forge": {
      "command": "forge",
      "args": ["serve", "."],
      "env": {}
    }
  }
}
```

No HTTP server. No ports to configure. No networking between agent and Forge. The agent and Forge are the same process group, communicating through pipes.

All messages are JSON-RPC 2.0 over the stdio pipe. The key exchanges:

1. `initialize` — Client announces capabilities. Forge responds with its server info and injects behavioral instructions into the system prompt.
2. `tools/list` — Client requests the full tool manifest. Forge returns all 21 tools with their JSON Schema input definitions and descriptions.
3. `tools/call` — Client calls a specific tool with arguments. Forge runs the query and returns the result.
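
To make the exchange concrete, here is a sketch of a `tools/call` round trip. The JSON-RPC envelope and the `content` result shape follow the MCP specification; the tool argument name (`query`) and the result payload are illustrative, not Forge's confirmed schema:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "forge_search",
    "arguments": { "query": "auth middleware" }
  }
}
```

Forge replies on stdout with a result carrying the same `id`:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "result": {
    "content": [
      { "type": "text", "text": "{ \"results\": [ ... ] }" }
    ]
  }
}
```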

The most important thing Forge does at initialize time is inject a block of behavioral instructions into the agent’s context. These instructions:

- Define the correct order of operations (`forge_prepare` before editing, `forge_validate` after)
- Explain when to use workflow tools vs. detailed tools
- Teach the agent how to interpret GO/CAUTION/STOP assessments
- Document the tool categories and when to reach for each

This is why Forge’s behavior in properly configured agents is automatic. The agent learns the intended workflow from Forge itself at the start of every session, without the user needing to write a system prompt or custom instructions.

Forge’s tools are organized into six categories. The three workflow tools handle the most common AI-agent use cases. The 18 detailed tools are for targeted analysis.

These three tools are the highest-leverage entry points. Each bundles multiple detailed tool calls into a single result:

| Tool | When to use | Returns |
| --- | --- | --- |
| `forge_prepare` | Before any file modification | Dependents, imports, health findings, coverage, git activity, GO/CAUTION/STOP |
| `forge_validate` | After file modifications complete | Health delta (new findings introduced vs fixed), import verification |
| `forge_understand` | When encountering unfamiliar code | Full structural analysis: symbols, callers, dependencies, test coverage, recent history |

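
As a sketch, an agent about to modify a file would issue a single call like the one below. The `path` argument name is an assumption for illustration; the authoritative input schema comes from `tools/list`:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "forge_prepare",
    "arguments": { "path": "src/auth/session.ts" }
  }
}
```
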
| Tool | When to use | Returns |
| --- | --- | --- |
| `forge_search` | Find code by concept or keyword | Ranked file+snippet results, camelCase-aware |
| `forge_pattern_search` | Find structural code patterns | All locations matching an ast-grep pattern |
| `forge_search_symbols` | Find symbols by name | All functions/classes/types matching a name query |

| Tool | When to use | Returns |
| --- | --- | --- |
| `forge_trace_imports` | What does this file depend on? | Outbound import edges from a file |
| `forge_trace_dependents` | Who depends on this file? | Inbound import edges pointing to a file |
| `forge_check_wiring` | Is this module connected? | Reachability from any entry point in the graph |
| `forge_find_cycles` | Are there circular deps? | Circular dependency chains (if any) |
| `forge_dependency_graph` | Full subgraph visualization | DOT-format or JSON dependency graph for a path |

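
For example, `forge_pattern_search` takes an ast-grep pattern, where a `$X` metavariable matches a single AST node and `$$$` matches a sequence of nodes. A hypothetical call (the `pattern` argument name is assumed, not confirmed against Forge's schema):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "forge_pattern_search",
    "arguments": { "pattern": "console.log($$$ARGS)" }
  }
}
```
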
| Tool | When to use | Returns |
| --- | --- | --- |
| `forge_health_check` | Run all health checks | P0/P1/P2/info findings across the repo |

| Tool | When to use | Returns |
| --- | --- | --- |
| `forge_parse_file` | Raw symbol list for a file | Every symbol in a file with type, line, exported status |
| `forge_extract_symbol` | Get the full source of a symbol | Source code + context for a named function/class/type |

| Tool | When to use | Returns |
| --- | --- | --- |
| `forge_git_history` | Recent commits for a file | Last N commits: hash, author, date, message |
| `forge_git_blame` | Per-line authorship | Blame data for a line range |

| Tool | When to use | Returns |
| --- | --- | --- |
| `forge_ingest_scip` | Upgrade to compiler-resolved edges | Confirmation + edge count before/after |
| `forge_coverage` | Ingest test coverage | Coverage stored; per-file % queryable by workflow tools |

| Tool | When to use | Returns |
| --- | --- | --- |
| `forge_index_status` | Check index health | File counts, stale files, last index time, which layers are ready |
| `forge_reindex` | Trigger incremental re-index | Re-index result (files processed, duration) |

`forge_prepare` is not magic — it calls `forge_trace_dependents`, `forge_trace_imports`, `forge_health_check`, `forge_git_history`, and optionally `forge_coverage` under the hood, then synthesizes the results into a single actionable summary.

This composability is intentional. Call the detailed tools directly when you need targeted information; use the workflow tools when you want the complete picture.

All tools return structured JSON. This makes results machine-readable and allows the agent to reason about them programmatically — counting dependents, sorting findings by severity, comparing before/after states.
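
As an illustration of what that enables (the field names below are hypothetical, not Forge's actual output schema), a validation delta might come back as JSON the agent can diff directly against the pre-edit state:

```json
{
  "new_findings": [
    { "severity": "P1", "file": "src/auth/session.ts", "message": "unused import" }
  ],
  "fixed_findings": [],
  "imports_ok": true
}
```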

Each tool’s MCP description is written to be read by AI agents, not just humans. The descriptions explain not just what the tool does but when to use it, what inputs to provide, and how to interpret the output. This is part of why Forge agents need minimal prompting — the tools are self-documenting.

Any agent that supports MCP stdio transport can use Forge. Tested combinations:

| Agent | MCP support | Notes |
| --- | --- | --- |
| Claude Code | Full | Primary test target. Server instructions fully supported. |
| Cursor | Full | Config via `cursor_mcp_config.json` in project root |
| Windsurf | Full | Config via workspace settings |
| Zed | Full | Config via `settings.json` context server block |
| Continue (VS Code) | Full | Config via `config.json` |
| Codex CLI | Full | `--mcp-config` flag or `CODEX_MCP_CONFIG` env var |

For per-agent setup guides, see the How-To section.