Architecture

AI Clients (ChatGPT, Claude, VS Code, Cursor)
        │
mcpr proxy (one upstream per process)
├── Parse JSON-RPC 2.0
├── Classify MCP method
├── Forward to upstream MCP server
├── Rewrite widget CSP per AI client
├── Emit ProxyEvent (non-blocking)
└── Stream response back
        │
MCP Server (your backend)

mcpr fronts one upstream MCP app per process. Non-MCP traffic is forwarded as-is — mcpr never blocks traffic.
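The classification step amounts to a match over the JSON-RPC method name. A minimal sketch, using method names from the MCP specification; the enum and function names here are illustrative, not mcpr-core's actual API:

```rust
/// Hypothetical classification of a JSON-RPC method string into the
/// MCP method families the proxy cares about (sketch, not mcpr's API).
#[derive(Debug, PartialEq)]
enum McpMethod {
    Initialize,
    ToolsList,
    ToolsCall,
    ResourcesRead,
    /// Anything else is forwarded to the upstream as-is.
    Passthrough,
}

fn classify(method: &str) -> McpMethod {
    match method {
        "initialize" => McpMethod::Initialize,
        "tools/list" => McpMethod::ToolsList,
        "tools/call" => McpMethod::ToolsCall,
        "resources/read" => McpMethod::ResourcesRead,
        _ => McpMethod::Passthrough,
    }
}

fn main() {
    assert_eq!(classify("tools/call"), McpMethod::ToolsCall);
    assert_eq!(classify("logging/setLevel"), McpMethod::Passthrough);
    println!("ok");
}
```

Classifying by method name alone keeps the hot path cheap: the proxy only needs the `method` field, never the full params, to decide how to treat a request.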

Events never block the proxy hot path. emit() pushes to an mpsc channel; a background dispatcher fans out to registered sinks.

Proxy hot path
    │ emit(ProxyEvent)
    ▼
EventBus ── mpsc channel ── background dispatcher
                            ├── StderrSink → one-line summary per request
                            ├── SqliteSink → ~/.mcpr/store.db
                            └── CloudSink → batched POST to cloud.mcpr.app

The CloudSink is registered only when [cloud].token is set. Custom sinks implement the EventSink trait. Overflow is silently dropped so the proxy never blocks.
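The drop-on-overflow pattern can be sketched with a bounded channel and a background dispatcher thread. This is a simplified model: the `ProxyEvent` fields, the `EventSink` trait shape, and the channel capacity below are all assumptions, not mcpr's real definitions:

```rust
use std::sync::mpsc::{sync_channel, TrySendError};
use std::thread;

// Illustrative stand-ins for mcpr's event types; fields are assumptions.
#[derive(Clone, Debug)]
struct ProxyEvent {
    method: String,
    latency_ms: u64,
}

// Hypothetical sink trait: one callback per event.
trait EventSink: Send {
    fn handle(&mut self, event: &ProxyEvent);
}

struct StderrSink;
impl EventSink for StderrSink {
    fn handle(&mut self, event: &ProxyEvent) {
        eprintln!("{} {}ms", event.method, event.latency_ms);
    }
}

fn main() {
    // Bounded channel: emit() can never block the hot path.
    let (tx, rx) = sync_channel::<ProxyEvent>(1024);

    // Background dispatcher fans each event out to every registered sink.
    let dispatcher = thread::spawn(move || {
        let mut sinks: Vec<Box<dyn EventSink>> = vec![Box::new(StderrSink)];
        for event in rx {
            for sink in sinks.iter_mut() {
                sink.handle(&event);
            }
        }
    });

    // Hot path: try_send drops the event when the channel is full,
    // rather than waiting on a slow sink.
    let event = ProxyEvent { method: "tools/call".into(), latency_ms: 12 };
    if let Err(TrySendError::Full(_)) = tx.try_send(event) {
        // Overflow is silently dropped; the proxy keeps serving.
    }

    drop(tx); // close the channel so the dispatcher loop ends
    dispatcher.join().unwrap();
}
```

The key design choice is `try_send` over `send`: a stalled sink (say, a slow disk under `SqliteSink`) costs you telemetry, never request latency.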

mcpr is a Rust workspace with three crates:

| Crate | Purpose |
| --- | --- |
| mcpr-cli | Binary entrypoint, CLI parsing, proxy run/setup, store maintenance |
| mcpr-core | Protocol parsing, MCP method classification, CSP rewriting, event types, request pipeline |
| mcpr-integrations | Stderr / SQLite / cloud sinks, cloud client, store schema |

The tunnel client lives in a separate published crate (mcp-tunnel-client) consumed as a dependency.

All local observability data lives in a single SQLite database (~/.mcpr/store.db by default; override via MCPR_DB or change HOME in containers). No external database needed.
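The path resolution can be sketched as follows. The `MCPR_DB` override and the `~/.mcpr/store.db` default come from the docs above; the exact lookup order inside mcpr is an assumption:

```rust
use std::env;
use std::path::PathBuf;

/// Pure helper so the precedence is easy to test: an explicit MCPR_DB
/// value wins; otherwise fall back to $HOME/.mcpr/store.db.
/// (Assumed lookup order, not mcpr's actual implementation.)
fn store_path_from(mcpr_db: Option<&str>, home: Option<&str>) -> PathBuf {
    match mcpr_db {
        Some(db) => PathBuf::from(db),
        None => PathBuf::from(home.unwrap_or("."))
            .join(".mcpr")
            .join("store.db"),
    }
}

fn store_path() -> PathBuf {
    store_path_from(
        env::var("MCPR_DB").ok().as_deref(),
        env::var("HOME").ok().as_deref(),
    )
}

fn main() {
    println!("store: {}", store_path().display());
}
```

Because the fallback goes through `HOME`, pointing `HOME` at a writable directory is enough to relocate the store in containers, without any mcpr-specific configuration.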

Inspect the schema and tables directly with sqlite3. Cloud sync is the recommended path for dashboards and retention; the local store is the source of truth for ad-hoc queries.

AI Client (ChatGPT / Claude)
    │ HTTPS
    ▼
Nginx / LB (TLS termination)
    │ HTTP
    ▼
mcpr proxy (foreground PID)
├── JSON-RPC → MCP Server
└── Event Bus → stderr / SQLite / cloud.mcpr.app

Single static binary (~5 MiB), no runtime dependencies. mcpr is designed to run as a sidecar next to your MCP server — your supervisor (Docker, systemd, k8s) owns the lifecycle. SIGTERM drains in-flight requests up to drain_timeout (default 30s) before exiting.
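The drain semantics can be modeled with an in-flight counter and a deadline. A simplified sketch: the `drain_timeout` name comes from the docs above, but the counter-and-poll mechanism is an assumption about how mcpr tracks requests, and real signal handling is omitted:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::{Duration, Instant};

/// Wait for in-flight requests to reach zero, giving up after `timeout`.
/// Returns true if fully drained. (Sketch of SIGTERM drain semantics.)
fn drain(in_flight: &AtomicUsize, timeout: Duration) -> bool {
    let deadline = Instant::now() + timeout;
    while in_flight.load(Ordering::Acquire) > 0 {
        if Instant::now() >= deadline {
            return false; // drain_timeout hit; exit anyway
        }
        thread::sleep(Duration::from_millis(10));
    }
    true
}

fn main() {
    let in_flight = Arc::new(AtomicUsize::new(1));

    // Simulate one request finishing shortly after SIGTERM arrives.
    let worker = {
        let in_flight = Arc::clone(&in_flight);
        thread::spawn(move || {
            thread::sleep(Duration::from_millis(50));
            in_flight.fetch_sub(1, Ordering::Release);
        })
    };

    // Default drain_timeout is 30s; a short timeout keeps the demo quick.
    let drained = drain(&in_flight, Duration::from_secs(1));
    worker.join().unwrap();
    assert!(drained);
    println!("drained cleanly: {drained}");
}
```

Under this model a supervisor's stop sequence (SIGTERM, wait, SIGKILL) composes cleanly: as long as the supervisor's kill grace period exceeds `drain_timeout`, mcpr always gets to exit on its own terms.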