# Architecture
## Request flow

```
AI Clients (ChatGPT, Claude, VS Code, Cursor)
        │
        ▼
mcpr proxy (one upstream per process)
 ├── Parse JSON-RPC 2.0
 ├── Classify MCP method
 ├── Forward to upstream MCP server
 ├── Rewrite widget CSP per AI client
 ├── Emit ProxyEvent (non-blocking)
 └── Stream response back
        │
        ▼
MCP Server (your backend)
```

mcpr fronts one upstream MCP app per process. Non-MCP traffic is forwarded as-is; mcpr never blocks traffic.
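The "Classify MCP method" step maps each JSON-RPC method name onto a coarse category before forwarding. A minimal sketch of what such a classifier could look like — the `McpMethod` enum and its buckets are illustrative, not mcpr-core's actual types; only the method names themselves come from the MCP spec:

```rust
// Illustrative classification of MCP method names (not mcpr-core's real enum).
#[derive(Debug, PartialEq)]
enum McpMethod {
    Lifecycle,    // initialize, ping
    Tools,        // tools/list, tools/call
    Resources,    // resources/list, resources/read
    Prompts,      // prompts/list, prompts/get
    Notification, // notifications/*
    Other,        // anything else passes through untouched
}

fn classify(method: &str) -> McpMethod {
    // MCP method names are namespaced with '/', e.g. "tools/call".
    match method.split('/').next().unwrap_or("") {
        "initialize" | "ping" => McpMethod::Lifecycle,
        "tools" => McpMethod::Tools,
        "resources" => McpMethod::Resources,
        "prompts" => McpMethod::Prompts,
        "notifications" => McpMethod::Notification,
        _ => McpMethod::Other,
    }
}

fn main() {
    println!("{:?}", classify("tools/call")); // Tools
}
```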
## Event bus
Events never block the proxy hot path. `emit()` pushes to an mpsc channel; a background dispatcher fans out to registered sinks.
```
Proxy hot path
      │ emit(ProxyEvent)
      ▼
EventBus ─── mpsc channel ─── background dispatcher
                                    │
                                    ├── StderrSink → one-line summary per request
                                    ├── SqliteSink → ~/.mcpr/store.db
                                    └── CloudSink  → batched POST to cloud.mcpr.app
```

The CloudSink is registered only when `[cloud].token` is set. Custom sinks implement the `EventSink` trait. Overflow is silently dropped so the proxy never blocks.
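The emit-then-fan-out pattern can be sketched with std's bounded channel. Everything below (`ProxyEvent`'s fields, the `EventSink` signature, `EventBus`) is an illustrative stand-in for mcpr's actual types; only the overall shape — `try_send` on the hot path, a dispatcher thread, drop-on-overflow — follows the description above:

```rust
// Sketch of a non-blocking event bus: bounded channel + background dispatcher.
// Type names and fields are illustrative, not mcpr's real API.
use std::sync::mpsc::{sync_channel, SyncSender};
use std::thread;

#[derive(Debug, Clone)]
struct ProxyEvent {
    method: String,
}

trait EventSink: Send {
    fn handle(&self, ev: &ProxyEvent);
}

struct StderrSink;
impl EventSink for StderrSink {
    fn handle(&self, ev: &ProxyEvent) {
        eprintln!("mcpr {}", ev.method); // one-line summary per request
    }
}

struct EventBus {
    tx: SyncSender<ProxyEvent>,
}

impl EventBus {
    fn new(sinks: Vec<Box<dyn EventSink>>, capacity: usize) -> Self {
        let (tx, rx) = sync_channel::<ProxyEvent>(capacity);
        // Background dispatcher: fans each event out to every registered sink.
        thread::spawn(move || {
            for ev in rx {
                for sink in &sinks {
                    sink.handle(&ev);
                }
            }
        });
        EventBus { tx }
    }

    /// Never blocks the hot path: on a full channel the event is silently dropped.
    fn emit(&self, ev: ProxyEvent) {
        let _ = self.tx.try_send(ev);
    }
}

fn main() {
    let sinks: Vec<Box<dyn EventSink>> = vec![Box::new(StderrSink)];
    let bus = EventBus::new(sinks, 1024);
    bus.emit(ProxyEvent { method: "tools/call".into() });
    thread::sleep(std::time::Duration::from_millis(50)); // let the dispatcher drain
}
```

The key design choice is `try_send` over `send`: a slow or stuck sink can fill the channel, but the request path only ever pays the cost of a failed non-blocking push.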
## Crate layout

mcpr is a Rust workspace with three crates:
| Crate | Purpose |
|---|---|
| `mcpr-cli` | Binary entrypoint, CLI parsing, proxy run/setup, store maintenance |
| `mcpr-core` | Protocol parsing, MCP method classification, CSP rewriting, event types, request pipeline |
| `mcpr-integrations` | Stderr / SQLite / cloud sinks, cloud client, store schema |
The tunnel client lives in a separate published crate (`mcp-tunnel-client`) consumed as a dependency.
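In workspace-manifest terms, that layout would look roughly like this (a sketch: the crate names come from the table above, but the manifest contents are assumptions, not the project's actual `Cargo.toml`):

```toml
# Hypothetical workspace Cargo.toml — member names are real, the rest is illustrative.
[workspace]
members = ["mcpr-cli", "mcpr-core", "mcpr-integrations"]
resolver = "2"

[workspace.dependencies]
# Published separately and consumed as an ordinary dependency.
mcp-tunnel-client = "*"
```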
## Storage

All local observability data lives in a single SQLite database (`~/.mcpr/store.db` by default; override via `MCPR_DB` or change `HOME` in containers). No external database needed.
Inspect the schema and tables directly with sqlite3. Cloud sync is the recommended path for dashboards and retention; the local store is the source of truth for ad-hoc queries.
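Because the store is plain SQLite, ad-hoc inspection needs nothing beyond the `sqlite3` CLI. The table names are not documented here, so the commands below simply enumerate whatever the store contains:

```shell
# List the store's tables, then dump its full schema.
sqlite3 ~/.mcpr/store.db '.tables'
sqlite3 ~/.mcpr/store.db '.schema'
```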
## Production deployment

```
AI Client (ChatGPT / Claude)
      │
      │ HTTPS
      ▼
Nginx / LB (TLS termination)
      │
      │ HTTP
      ▼
mcpr proxy (foreground PID)
 ├── JSON-RPC  → MCP Server
 └── Event Bus → stderr / SQLite / cloud.mcpr.app
```

Single static binary (~5 MiB), no runtime dependencies. mcpr is designed to run as a sidecar next to your MCP server; your supervisor (Docker, systemd, k8s) owns the lifecycle. SIGTERM drains in-flight requests up to `drain_timeout` (default 30s) before exiting.
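Under systemd, that lifecycle might look like the following unit. This is a sketch: the binary path is an assumption, and the only mcpr-specific facts it relies on are the ones above (foreground process, SIGTERM drain, 30s default `drain_timeout`):

```ini
# Hypothetical /etc/systemd/system/mcpr.service — paths are illustrative.
[Unit]
Description=mcpr proxy (sidecar for an MCP server)
After=network-online.target

[Service]
# mcpr runs in the foreground; systemd owns the lifecycle.
ExecStart=/usr/local/bin/mcpr
Restart=on-failure
# systemd's default KillSignal is SIGTERM, which triggers the drain.
# Allow slightly more than drain_timeout (default 30s) before a hard kill.
TimeoutStopSec=35

[Install]
WantedBy=multi-user.target
```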
