Deployment Guide¶
This guide covers deploying MeterBase from development through production. The primary production deployment uses Vercel (frontend + backend Python Functions) with Neon Postgres (serverless). Docker Compose remains available for local development and self-hosted scenarios.
Production Deployment (Vercel + Neon)¶
MeterBase production runs on:
- Frontend: Vercel (React SPA deployed as static assets)
- Backend: Vercel Python Functions (FastAPI wrapped as serverless functions)
- Database: Neon Postgres (serverless, with connection pooling)
- Sales site: meterbase.io (BDR-only, no self-service signup)
- Application: app.meterbase.io
Vercel Setup¶
1. Connect Repository¶
Import the Git repository in the Vercel dashboard (Add New > Project), or link it from the CLI with vercel link, so pushes to the production branch trigger deployments.
2. Configure Environment Variables¶
Set these in Vercel Dashboard > Settings > Environment Variables (or via CLI):
vercel env add DATABASE_URL production
vercel env add SECRET_KEY production
vercel env add ANTHROPIC_API_KEY production
vercel env add STRIPE_SECRET_KEY production
vercel env add STRIPE_PUBLISHABLE_KEY production
vercel env add STRIPE_WEBHOOK_SECRET production
vercel env add STRIPE_PRO_PRICE_ID production
vercel env add PROPEXO_API_KEY production
vercel env add REDIS_URL production
3. Deploy¶
Push to the production branch for an automatic deployment, or deploy manually with vercel --prod from the project root.
4. Vercel Configuration¶
The vercel.json at the project root configures routing and function settings:
{
"builds": [
{ "src": "frontend/**", "use": "@vercel/static" },
{ "src": "backend/api/**/*.py", "use": "@vercel/python" }
],
"routes": [
{ "src": "/api/(.*)", "dest": "backend/api/$1" },
{ "src": "/(.*)", "dest": "frontend/$1" }
],
"functions": {
"backend/api/**/*.py": {
"maxDuration": 60,
"memory": 1024
}
}
}
Function Configuration
Increase maxDuration for AI-heavy endpoints (bill analysis, tariff extraction). The default 10-second timeout is too short for Claude API calls. Set regions near your Neon Postgres instance to minimize latency.
Neon Postgres Setup¶
1. Create a Neon Project¶
Sign up at neon.tech and create a project. Choose a region close to your Vercel deployment region.
2. Get Connection String¶
# From Neon Dashboard, copy the connection string:
postgresql://meterbase:password@ep-cool-name-123456.us-east-2.aws.neon.tech/meterbase?sslmode=require
3. Run Migrations¶
# Set DATABASE_URL to your Neon connection string
export DATABASE_URL="postgresql+asyncpg://meterbase:password@ep-cool-name-123456.us-east-2.aws.neon.tech/meterbase?sslmode=require"
cd backend
alembic upgrade head
python scripts/seed_database.py
4. Connection Pooling¶
Neon provides built-in connection pooling (PgBouncer). Use the pooled connection string (the endpoint hostname with a -pooler suffix) for application traffic and the direct connection for migrations:
# Application (pooled)
DATABASE_URL=postgresql://meterbase:password@ep-cool-name-123456-pooler.us-east-2.aws.neon.tech/meterbase?sslmode=require
# Migrations (direct)
DATABASE_URL_DIRECT=postgresql://meterbase:password@ep-cool-name-123456.us-east-2.aws.neon.tech/meterbase?sslmode=require
Serverless Considerations
Neon Postgres scales to zero during inactivity, so the first request after an idle period may see a cold-start delay (roughly 500ms). For production, raise or disable the autosuspend (scale-to-zero) timeout in the Neon project settings to avoid cold starts.
Local Development (Docker Compose)¶
For local development, Docker Compose provides the full stack. This is also suitable for self-hosted deployments.
Architecture Overview¶
```mermaid
flowchart TB
    Internet --> LB["Load Balancer<br/>(ALB / Nginx)"]
    subgraph App["Application Tier"]
        FE["Frontend Container<br/>(Nginx + React SPA)"]
        BE1["Backend Container<br/>(FastAPI + Uvicorn)"]
        CW["Celery Worker(s)"]
        CB["Celery Beat"]
    end
    subgraph Data["Data Tier"]
        PG["PostgreSQL 15"]
        Redis["Redis 7"]
        S3["File Storage<br/>(S3 / local volume)"]
    end
    LB --> FE
    LB --> BE1
    FE --> BE1
    BE1 --> PG
    BE1 --> Redis
    BE1 --> S3
    CW --> PG
    CW --> Redis
    CW --> S3
    CB --> Redis
```
Docker Compose (Full Stack)¶
The simplest production-ready deployment uses Docker Compose with all 6 services.
Quick Start¶
# Clone and configure
git clone https://github.com/meterbase/meterbase.git
cd meterbase
cp backend/.env.example backend/.env
# Edit backend/.env (see Environment Variables section below)
# Build and start all services
docker compose up -d --build
# Run database migrations
docker compose exec backend alembic upgrade head
# Seed initial data (utilities + service territories)
docker compose exec backend python scripts/seed_database.py
# Verify all services are healthy
docker compose ps
Services¶
| Service | Image | Port | Health Check |
|---|---|---|---|
| `postgres` | `postgres:15-alpine` | 5432 | `pg_isready` |
| `redis` | `redis:7-alpine` | 6379 | `redis-cli ping` |
| `backend` | Custom (`Dockerfile.backend`) | 8000 | `curl /health` |
| `frontend` | Custom (`Dockerfile.frontend`) | 80 | `wget /health` |
| `celery-worker` | Same as backend | -- | -- |
| `celery-beat` | Same as backend | -- | -- |
Docker Compose File¶
The full docker-compose.yml is at the project root. Key details:
- Named volumes for persistent data: `postgres_data`, `redis_data`, `upload_data`
- Health checks with dependencies: backend waits for PostgreSQL and Redis to be healthy
- Internal network (`meterbase-net`): all services communicate on a private bridge network
- Environment overrides: container-internal hostnames (`postgres`, `redis`) replace `localhost`
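An illustrative excerpt of the health-check dependency wiring (a sketch following the service names above, not the verbatim file):

```yaml
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile.backend
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks:
      - meterbase-net

  postgres:
    image: postgres:15-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U meterbase"]
      interval: 10s
      timeout: 5s
      retries: 5
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - meterbase-net
```

With `condition: service_healthy`, Compose delays starting the backend until the dependency's health check passes, rather than merely until the container starts.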
Individual Dockerfiles¶
Backend (Dockerfile.backend)¶
FROM python:3.11-slim AS base
ENV PYTHONDONTWRITEBYTECODE=1 \
PYTHONUNBUFFERED=1
WORKDIR /app
# System dependencies
RUN apt-get update && \
apt-get install -y --no-install-recommends \
build-essential libpq-dev curl \
&& rm -rf /var/lib/apt/lists/*
# Python dependencies
COPY backend/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Application code
COPY backend/ .
# Non-root user
RUN addgroup --system meterbase && \
adduser --system --ingroup meterbase meterbase && \
chown -R meterbase:meterbase /app
USER meterbase
EXPOSE 8000
HEALTHCHECK --interval=30s --timeout=10s --start-period=15s --retries=3 \
CMD curl -f http://localhost:8000/health || exit 1
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "4"]
Key decisions:
- `python:3.11-slim` for minimal image size
- Non-root `meterbase` user for security
- 4 Uvicorn workers (adjust based on CPU cores: `2 * cores + 1`)
- Health check built into the image
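The worker sizing rule can be computed at container start. A sketch (`recommended_workers` is a hypothetical helper, not part of the codebase; note that `os.cpu_count()` reports host cores, not any Docker CPU limit):

```python
import os

def recommended_workers() -> int:
    """Apply the 2 * cores + 1 rule of thumb for Uvicorn workers."""
    cores = os.cpu_count() or 1  # os.cpu_count() may return None
    return 2 * cores + 1
```

On a 2-core node this yields 5 workers; the Dockerfile's hardcoded `--workers 4` is a reasonable default for small instances.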
Frontend (Dockerfile.frontend)¶
# Stage 1: Build
FROM node:18-alpine AS build
WORKDIR /app
COPY frontend/package.json frontend/package-lock.json* ./
RUN npm ci
COPY frontend/ .
RUN npm run build
# Stage 2: Serve
FROM nginx:1.25-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY docker/nginx.conf /etc/nginx/conf.d/meterbase.conf
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
CMD wget -qO- http://localhost/health || exit 1
CMD ["nginx", "-g", "daemon off;"]
Key decisions:
- Multi-stage build: Node for compilation, Nginx for serving (much smaller final image)
- `npm ci` for reproducible builds
- Static assets served directly by Nginx
Nginx Configuration¶
The Nginx config at docker/nginx.conf handles gzip compression, security headers, API proxying, and SPA routing:
upstream backend_api {
server backend:8000;
}
server {
listen 80;
server_name _;
root /usr/share/nginx/html;
index index.html;
# Gzip compression
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_min_length 256;
gzip_types text/plain text/css text/javascript
application/javascript application/json
application/xml image/svg+xml;
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
# Health check
location /health {
access_log off;
return 200 '{"status":"ok"}';
add_header Content-Type application/json;
}
# API proxy
location /api/ {
proxy_pass http://backend_api;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 120s; # Long timeout for AI requests
proxy_send_timeout 120s;
}
# Static assets with long cache
location /assets/ {
expires 1y;
add_header Cache-Control "public, immutable";
try_files $uri =404;
}
# SPA fallback
location / {
try_files $uri $uri/ /index.html;
}
}
Production Nginx Additions¶
For a production deployment in front of Docker, add these to a host-level Nginx config:
# Rate limiting
limit_req_zone $binary_remote_addr zone=api:10m rate=30r/s;
server {
listen 443 ssl http2;
server_name meterbase.example.com;
ssl_certificate /etc/letsencrypt/live/meterbase.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/meterbase.example.com/privkey.pem;
# Modern TLS config
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
# HSTS
add_header Strict-Transport-Security "max-age=63072000" always;
location /api/ {
limit_req zone=api burst=50 nodelay;
proxy_pass http://127.0.0.1:8000;
# ... same proxy headers as above
}
location / {
proxy_pass http://127.0.0.1:80;
}
}
# HTTP -> HTTPS redirect
server {
listen 80;
server_name meterbase.example.com;
return 301 https://$server_name$request_uri;
}
Environment Variables¶
Required for Production¶
Create backend/.env with the following:
# =============================================================================
# Application
# =============================================================================
APP_NAME=MeterBase
APP_VERSION=0.1.0
DEBUG=false
API_PREFIX=/api/v1
# =============================================================================
# Database (REQUIRED)
# =============================================================================
DATABASE_URL=postgresql+asyncpg://meterbase:STRONG_PASSWORD_HERE@postgres:5432/meterbase
DATABASE_ECHO=false
# =============================================================================
# Redis (REQUIRED)
# =============================================================================
REDIS_URL=redis://redis:6379/0
# =============================================================================
# Authentication (REQUIRED - change in production!)
# =============================================================================
SECRET_KEY=generate-with-openssl-rand-hex-32
ACCESS_TOKEN_EXPIRE_MINUTES=1440
ALGORITHM=HS256
# =============================================================================
# API Keys (configure as needed)
# =============================================================================
ANTHROPIC_API_KEY=sk-ant-... # Required for AI features
OPENEI_API_KEY= # Optional (higher rate limits)
EIA_API_KEY= # Optional
# =============================================================================
# Celery (REQUIRED)
# =============================================================================
CELERY_BROKER_URL=redis://redis:6379/1
CELERY_RESULT_BACKEND=redis://redis:6379/2
# =============================================================================
# Rate Limits
# =============================================================================
FREE_TIER_REQUESTS_PER_DAY=1000
PRO_TIER_REQUESTS_PER_DAY=100000
# =============================================================================
# Storage
# =============================================================================
UPLOAD_DIR=/app/data/uploads
MAX_UPLOAD_SIZE_MB=50
# =============================================================================
# Scraping
# =============================================================================
SCRAPE_CONCURRENCY=5
SCRAPE_DELAY_SECONDS=2.0
# =============================================================================
# Propexo PMS Integration
# =============================================================================
PROPEXO_API_KEY= # Required for PMS features
PROPEXO_BASE_URL=https://api.propexo.com/v1
Generating a Secret Key¶
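The placeholder in the env file above suggests `openssl rand -hex 32`; an equivalent using Python's standard library:

```python
import secrets

# 64 hex characters = 32 random bytes, the same shape as `openssl rand -hex 32`
secret_key = secrets.token_hex(32)
print(secret_key)
```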
Security
Never commit .env files to version control. The .env.example file contains placeholder values only.
Database Setup and Migrations¶
Initial Setup¶
# Create the database (if not using Docker)
createdb -U meterbase meterbase
# Run all migrations
docker compose exec backend alembic upgrade head
# Seed with utility and service territory data
docker compose exec backend python scripts/seed_database.py
# Import OpenEI tariffs (takes ~10 minutes)
docker compose exec backend python scripts/bulk_import.py
Migration Workflow¶
# Check current migration state
docker compose exec backend alembic current
# Apply pending migrations
docker compose exec backend alembic upgrade head
# Rollback one step (if needed)
docker compose exec backend alembic downgrade -1
Pre-deployment Checklist¶
- `SECRET_KEY` changed from the placeholder (see Generating a Secret Key)
- `DEBUG=false` in `backend/.env`
- Migrations applied (`alembic upgrade head`) and seed data loaded
- SSL/TLS configured (next section)
- Backups scheduled (see Backup Strategy)
SSL/TLS with Let's Encrypt¶
Using Certbot¶
# Install certbot
sudo apt install certbot python3-certbot-nginx
# Obtain certificate
sudo certbot --nginx -d meterbase.example.com
# Auto-renewal is configured automatically
# Verify with:
sudo certbot renew --dry-run
Using Docker (certbot container)¶
# Add to docker-compose.yml
certbot:
image: certbot/certbot
volumes:
- certbot_data:/etc/letsencrypt
- certbot_www:/var/www/certbot
entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
Certificate Renewal¶
Let's Encrypt certificates expire after 90 days. Certbot's systemd timer handles auto-renewal; verify it with sudo certbot renew --dry-run.
Monitoring and Health Checks¶
Built-in Health Endpoints¶
| Endpoint | Service | Response |
|---|---|---|
| `GET /health` | FastAPI Backend | `{"status": "healthy"}` |
| `GET /health` | Nginx Frontend | `{"status": "ok"}` |
Docker Health Checks¶
All services include Docker-native health checks. Monitor with:
# Check all service health
docker compose ps
# View health check logs
docker inspect --format='{{json .State.Health}}' meterbase-backend | jq
# Watch logs in real-time
docker compose logs -f backend celery-worker
Application Monitoring¶
For production, add these monitoring layers:
```mermaid
flowchart LR
    App["Application"] --> Metrics["Metrics store"]
    App --> Logs["Loki / ELK<br/>(structured logs)"]
    App --> Traces["Jaeger<br/>(distributed traces)"]
    Metrics --> Grafana["Grafana<br/>(dashboards)"]
    Logs --> Grafana
    Traces --> Grafana
    App --> Alerts["PagerDuty / Slack<br/>(alert routing)"]
```
Recommended metrics to track:
| Metric | Source | Alert Threshold |
|---|---|---|
| API response time (p95) | FastAPI middleware | > 2 seconds |
| API error rate (5xx) | Nginx access log | > 1% of requests |
| Database connection pool | SQLAlchemy | > 80% utilization |
| Celery queue depth | Redis | > 100 pending tasks |
| Disk usage (uploads) | Docker volume | > 80% capacity |
| OpenEI sync status | Celery task result | Failure for 2+ consecutive runs |
| PostgreSQL replication lag | pg_stat_replication | > 30 seconds |
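As a concrete example, the p95 latency alert from the table reduces to a percentile check (a standard-library sketch; the 2-second threshold mirrors the suggested alert level):

```python
import statistics

def p95_exceeds(latencies_s: list[float], threshold_s: float = 2.0) -> bool:
    """Alert when the 95th-percentile latency crosses the threshold."""
    # quantiles(n=20) returns 19 cut points; index 18 is the 95th percentile
    p95 = statistics.quantiles(latencies_s, n=20)[18]
    return p95 > threshold_s
```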
Structured Logging¶
MeterBase uses structlog for JSON-formatted logs, making them easy to ingest into log aggregation systems:
{
"event": "OpenEI sync complete",
"total": 62700,
"created": 150,
"updated": 340,
"errors": 2,
"timestamp": "2026-03-25T02:15:30Z",
"level": "info"
}
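A minimal standard-library approximation of that output shape (the real app uses structlog; this sketch only illustrates the format, and the `fields` attribute is an assumed convention):

```python
import json
import logging
import time

class JsonFormatter(logging.Formatter):
    """Render log records as single-line JSON, structlog-style."""

    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "event": record.getMessage(),
            "level": record.levelname.lower(),
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(record.created)),
        }
        # Merge any structured fields attached to the record
        payload.update(getattr(record, "fields", {}))
        return json.dumps(payload)
```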
Backup Strategy¶
Database Backups¶
# Manual backup
docker compose exec postgres pg_dump -U meterbase meterbase | gzip > backup_$(date +%Y%m%d).sql.gz
# Restore from backup
gunzip < backup_20260325.sql.gz | docker compose exec -T postgres psql -U meterbase meterbase
Automated Backup Script¶
#!/bin/bash
# scripts/backup.sh - Run via cron: 0 3 * * * /path/to/backup.sh
BACKUP_DIR="/backups/meterbase"
RETENTION_DAYS=30
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
mkdir -p "$BACKUP_DIR"
# Database backup
docker compose exec -T postgres pg_dump -U meterbase --format=custom meterbase \
> "$BACKUP_DIR/db_${TIMESTAMP}.dump"
# Upload volume backup
docker run --rm \
-v meterbase_upload_data:/data:ro \
-v "$BACKUP_DIR":/backup \
alpine tar czf "/backup/uploads_${TIMESTAMP}.tar.gz" -C /data .
# Redis backup (RDB snapshot)
docker compose exec redis redis-cli BGSAVE
sleep 5
docker cp meterbase-redis:/data/dump.rdb "$BACKUP_DIR/redis_${TIMESTAMP}.rdb"
# Cleanup old backups
find "$BACKUP_DIR" -name "*.dump" -mtime +$RETENTION_DAYS -delete
find "$BACKUP_DIR" -name "*.tar.gz" -mtime +$RETENTION_DAYS -delete
find "$BACKUP_DIR" -name "*.rdb" -mtime +$RETENTION_DAYS -delete
echo "Backup complete: $TIMESTAMP"
Backup Schedule¶
| Component | Frequency | Retention | Method |
|---|---|---|---|
| PostgreSQL | Daily at 03:00 UTC | 30 days | pg_dump --format=custom |
| Upload files | Daily at 03:00 UTC | 30 days | tar archive |
| Redis | Daily at 03:00 UTC | 7 days | RDB snapshot |
Disaster Recovery¶
- Provision new infrastructure (or restore from snapshot)
- Start PostgreSQL and Redis containers
- Restore database: `pg_restore -U meterbase -d meterbase db_TIMESTAMP.dump`
- Restore uploads: extract the tar archive to the upload volume
- Run migrations (in case the backup predates the latest schema): `alembic upgrade head`
- Start application containers
- Verify health checks and data integrity
Scaling¶
Vertical Scaling (Single Node)¶
Adjust resources in docker-compose.yml:
backend:
# Increase Uvicorn workers (2 * CPU cores + 1)
command: uvicorn app.main:app --host 0.0.0.0 --port 8000 --workers 8
deploy:
resources:
limits:
cpus: "4"
memory: 4G
celery-worker:
# Increase concurrency
command: celery -A app.workers.celery_app worker --loglevel=info --concurrency=8
deploy:
resources:
limits:
cpus: "2"
memory: 2G
postgres:
# Tune PostgreSQL
command: >
postgres
-c shared_buffers=1GB
-c effective_cache_size=3GB
-c work_mem=16MB
-c maintenance_work_mem=256MB
-c max_connections=200
Read Replicas¶
For read-heavy workloads (tariff search, reports):
# docker-compose.prod.yml
postgres-replica:
image: postgres:15-alpine
environment:
POSTGRES_USER: meterbase
POSTGRES_PASSWORD: meterbase
command: >
bash -c "
pg_basebackup -h postgres -U meterbase -D /var/lib/postgresql/data -Fp -Xs -P -R
&& postgres
"
depends_on:
- postgres
Configure the backend to route read queries to replicas using SQLAlchemy's create_async_engine with a separate read URL.
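One lightweight way to express that routing is a URL selector consulted when building engines (a sketch; `DATABASE_URL_READ` is a hypothetical variable, not existing MeterBase config):

```python
import os

def engine_url(readonly: bool = False) -> str:
    """Pick the database URL for a unit of work, preferring the replica for reads."""
    write_url = os.environ["DATABASE_URL"]
    read_url = os.environ.get("DATABASE_URL_READ")  # replica, if configured
    # Fall back to the primary when no replica is configured
    return read_url if (readonly and read_url) else write_url
```

Each URL would then feed its own `create_async_engine(...)` instance, with read-only sessions bound to the replica engine.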
Worker Scaling¶
Scale Celery workers independently, e.g. docker compose up -d --scale celery-worker=4.
For task prioritization, split into multiple queues:
# In celery_app.py
celery_app.conf.task_routes = {
"app.workers.tasks.extract_tariff_from_document": {"queue": "ai"},
"app.workers.tasks.sync_openei_database": {"queue": "data"},
"app.workers.tasks.check_rate_changes": {"queue": "default"},
}
# Start dedicated workers per queue
celery -A app.workers.celery_app worker --queues=ai --concurrency=2
celery -A app.workers.celery_app worker --queues=data --concurrency=4
celery -A app.workers.celery_app worker --queues=default --concurrency=4
CDN for Static Assets¶
Place CloudFront or Cloudflare in front of the frontend for global distribution:
Configure the CDN to:
- Cache `/assets/*` for 1 year (immutable hashed filenames from Vite)
- Cache `index.html` for 5 minutes (or use invalidation on deploy)
- Pass `/api/*` through to the backend without caching
Object Storage for Uploads¶
For production, replace the local Docker volume with S3-compatible object storage:
# backend/app/core/config.py
class Settings(BaseSettings):
# Add S3 configuration
s3_bucket: str = ""
s3_region: str = "us-east-1"
s3_access_key: str = ""
s3_secret_key: str = ""
storage_backend: str = "local" # "local" or "s3"
This allows the upload volume to be stateless, enabling horizontal scaling of backend nodes without shared filesystems.
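A sketch of how that switch might look at the call site (illustrative only; `save_upload` is a hypothetical helper, and the S3 branch assumes boto3 is installed and credentials are configured):

```python
from pathlib import Path

def save_upload(data: bytes, key: str, *, storage_backend: str = "local",
                upload_dir: str = "/app/data/uploads", s3_bucket: str = "") -> str:
    """Persist an upload to the configured backend and return its location."""
    if storage_backend == "s3":
        import boto3  # imported lazily: only needed for the S3 backend
        boto3.client("s3").put_object(Bucket=s3_bucket, Key=key, Body=data)
        return f"s3://{s3_bucket}/{key}"
    # Local backend: write under the upload volume
    path = Path(upload_dir) / key
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_bytes(data)
    return str(path)
```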