
Production Deployment

In production, mcpr runs as a sidecar next to your MCP server. The launched PID is the proxy itself, so your supervisor (Docker, systemd, Kubernetes) owns the lifecycle directly. SIGTERM drains gracefully — there is no daemon, no /healthz polling required, and no separate admin server.

Create mcpr.toml for production:

mcp = "http://mcp-server:9000"
port = 8080
drain_timeout = 25
[cloud]
token = "mcpr_xxxxxxxx"
server = "prod-1"

The published image runs mcpr proxy run /etc/mcpr/mcpr.toml as PID 1 under tini, so SIGTERM forwards directly to the proxy.

docker run -d --name mcpr \
-v "$(pwd)/mcpr.toml:/etc/mcpr/mcpr.toml:ro" \
-v mcpr-state:/var/lib/mcpr \
-p 8080:8080 \
--restart unless-stopped \
ghcr.io/pragmalabs-tech/mcpr:0.5
| Volume | Purpose |
| --- | --- |
| /etc/mcpr/mcpr.toml | Proxy config — bind-mount read-only |
| /var/lib/mcpr | State directory — SQLite store, lockfiles, logs |

State resolves via the HOME env var, which the image sets to /var/lib/mcpr; mcpr writes everything under /var/lib/mcpr/.mcpr/. Use a named volume to persist state across container restarts.
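The path resolution can be sketched in shell (the HOME value is the image default noted above):

```shell
# mcpr resolves its state directory from $HOME.
HOME=/var/lib/mcpr
STATE_DIR="$HOME/.mcpr"   # SQLite store, lockfiles, logs
echo "$STATE_DIR"         # → /var/lib/mcpr/.mcpr
```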

Pin to X.Y.Z (or X.Y for the latest patch in a minor line) in production. latest is for evaluation.

services:
  mcp-server:
    build: ./my-mcp-server
    expose:
      - "9000"
  mcpr:
    image: ghcr.io/pragmalabs-tech/mcpr:0.5
    restart: unless-stopped
    depends_on:
      - mcp-server
    ports:
      - "8080:8080"
    volumes:
      - ./mcpr.toml:/etc/mcpr/mcpr.toml:ro
      - mcpr-state:/var/lib/mcpr
volumes:
  mcpr-state:
# mcpr.toml — alongside the compose file
mcp = "http://mcp-server:9000"
port = 8080

The image is Kubernetes-ready out of the box: non-root UID 10001, SIGTERM forwarded by tini, mcpr proxy run as PID 1, in-flight drain on shutdown.

apiVersion: v1
kind: ConfigMap
metadata:
  name: mcpr-config
data:
  mcpr.toml: |
    mcp = "http://mcp-server:9000"
    port = 8080
    drain_timeout = 25
    [cloud]
    token = "mcpr_xxxxxxxx"
    server = "prod-1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcpr
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mcpr
  template:
    metadata:
      labels:
        app: mcpr
    spec:
      terminationGracePeriodSeconds: 30
      securityContext:
        runAsUser: 10001
        runAsGroup: 10001
        fsGroup: 10001
      containers:
        - name: mcpr
          image: ghcr.io/pragmalabs-tech/mcpr:0.5
          ports:
            - name: proxy
              containerPort: 8080
          volumeMounts:
            - name: config
              mountPath: /etc/mcpr/mcpr.toml
              subPath: mcpr.toml
            - name: state
              mountPath: /var/lib/mcpr
      volumes:
        - name: config
          configMap:
            name: mcpr-config
        - name: state
          persistentVolumeClaim:
            claimName: mcpr-state
---
apiVersion: v1
kind: Service
metadata:
  name: mcpr
spec:
  selector:
    app: mcpr
  ports:
    - name: proxy
      port: 8080
      targetPort: 8080
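The Deployment references a PersistentVolumeClaim named mcpr-state that is not shown above. A minimal sketch (storage size and use of the cluster's default StorageClass are assumptions):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mcpr-state
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi   # assumption — sized for the SQLite store and logs
```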

Set terminationGracePeriodSeconds slightly longer than drain_timeout so the kubelet does not SIGKILL mid-drain. The state volume must be writable by UID 10001; fsGroup: 10001 handles that automatically.

There is no /healthz admin server today. If your control plane requires liveness/readiness probes, use a TCP probe against the proxy port (8080), or a startup probe that times out generously while the proxy boots.
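Under the container spec above, such probes might look like this sketch (the periods and thresholds are illustrative assumptions, not tuned values):

```yaml
# Sketch only — tune periods and thresholds for your environment.
readinessProbe:
  tcpSocket:
    port: 8080
  periodSeconds: 5
startupProbe:
  tcpSocket:
    port: 8080
  # Allows up to 30 × 5s = 150s for the proxy to start listening.
  failureThreshold: 30
  periodSeconds: 5
```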

If you already run Nginx for TLS termination:

server {
    listen 443 ssl;
    server_name mcp.yourapp.com;

    ssl_certificate /etc/ssl/certs/yourapp.com.pem;
    ssl_certificate_key /etc/ssl/private/yourapp.com.key;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # SSE support
        proxy_buffering off;
        proxy_cache off;
        proxy_read_timeout 86400s;
    }
}

Your MCP app is available at https://mcp.yourapp.com.

SIGTERM (and SIGINT) trigger a graceful drain:

  1. Stop accepting new connections
  2. Wait for in-flight requests to complete (up to drain_timeout, default 30s)
  3. Flush event bus sinks (SQLite, cloud)
  4. Exit cleanly

docker stop, systemctl stop, and Kubernetes pod termination all work as expected.
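For bare-metal installs under systemd, a unit along these lines would get the same drain behavior (the binary path, user, and TimeoutStopSec value are assumptions; the ExecStart form mirrors the image's mcpr proxy run invocation):

```ini
# Sketch of a systemd unit — adjust paths and user to your install.
[Unit]
Description=mcpr proxy sidecar
After=network-online.target
Wants=network-online.target

[Service]
# systemd sends SIGTERM on stop, which triggers the graceful drain.
ExecStart=/usr/local/bin/mcpr proxy run /etc/mcpr/mcpr.toml
# Keep this slightly longer than drain_timeout, like terminationGracePeriodSeconds.
TimeoutStopSec=30
Restart=on-failure
User=mcpr

[Install]
WantedBy=multi-user.target
```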

Validate before deploying. Use this in CI/CD:

mcpr validate -c /etc/mcpr/mcpr.toml

Exit code 0 means valid; 1 means errors.
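As a CI step this could look like the following sketch (shown in GitHub Actions syntax as an assumption; any CI system that fails a job on a non-zero exit code works the same way, and the runner is assumed to have mcpr installed):

```yaml
# Sketch: fail the pipeline before deploy if the config is invalid.
- name: Validate mcpr config
  run: mcpr validate -c ./mcpr.toml
```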

Stream events to mcpr Cloud for the dashboard:

[cloud]
token = "mcpr_xxxxxxxx"
server = "prod-1"

Local SQLite at /var/lib/mcpr/.mcpr/store.db (inside the container) remains the source of truth and is reachable via docker exec mcpr mcpr store stats.

See Events for sink details.