
Contributing Guide

This guide covers everything you need to set up a development environment, understand the project structure, and contribute code to MeterBase.


Project Structure

MeterBase/
├── backend/                    # FastAPI backend application
│   ├── alembic.ini             # Alembic migration configuration
│   ├── app/
│   │   ├── __init__.py
│   │   ├── main.py             # FastAPI application entry point
│   │   ├── ai/                 # AI integration (Claude)
│   │   │   └── tariff_extractor.py
│   │   ├── api/                # API layer
│   │   │   └── v1/
│   │   │       ├── router.py           # Route aggregation
│   │   │       ├── auth.py             # Authentication endpoints
│   │   │       ├── properties.py       # Property CRUD
│   │   │       ├── bills.py            # Bill management + PDF upload
│   │   │       ├── tariffs.py          # Tariff search + lookup
│   │   │       ├── utilities.py        # Utility directory
│   │   │       ├── tenant_billing.py   # RUBS billing
│   │   │       ├── savings.py          # Savings analysis
│   │   │       ├── alerts.py           # Rate alerts
│   │   │       ├── bulk_import.py      # CSV bulk import
│   │   │       ├── propexo.py          # PMS integration
│   │   │       ├── calculate.py        # Rate calculator
│   │   │       ├── reports.py          # Report generation
│   │   │       ├── ai_endpoints.py     # AI-powered endpoints
│   │   │       └── walmart.py          # Walmart integration
│   │   ├── core/               # Framework core
│   │   │   ├── auth.py         # JWT + API key authentication
│   │   │   ├── config.py       # pydantic-settings configuration
│   │   │   └── database.py     # SQLAlchemy async engine + session
│   │   ├── models/             # SQLAlchemy ORM models (22 tables)
│   │   │   ├── __init__.py     # Model registry
│   │   │   ├── user.py         # User, APIKey
│   │   │   ├── utility.py      # Utility, ServiceTerritory
│   │   │   ├── tariff.py       # Tariff, EnergyRate, DemandRate, ...
│   │   │   ├── property.py     # Portfolio, Property, Building, ...
│   │   │   ├── bill.py         # Bill
│   │   │   ├── tenant_billing.py  # TenantBillingConfig, ...Period, PropexoSync
│   │   │   └── alert.py        # RateAlert
│   │   ├── schemas/            # Pydantic request/response schemas
│   │   ├── services/           # Business logic layer
│   │   │   ├── propexo.py      # Propexo PMS client
│   │   │   ├── rate_calculator.py  # Tariff cost calculation engine
│   │   │   ├── report_generator.py # PDF/Excel report generation
│   │   │   ├── savings_finder.py   # Savings analysis engine
│   │   │   ├── tariff_service.py   # Tariff search and lookup
│   │   │   └── utility_connect.py  # Utility API connections
│   │   └── workers/            # Celery background tasks
│   │       ├── celery_app.py   # Celery configuration + beat schedule
│   │       └── tasks.py        # Task definitions
│   ├── migrations/             # Alembic migration files
│   ├── requirements.txt        # Python dependencies
│   └── tests/                  # Backend test suite
├── frontend/                   # React frontend application
│   ├── index.html
│   ├── package.json
│   ├── vite.config.ts          # Vite build configuration
│   ├── tsconfig.json           # TypeScript configuration
│   ├── tailwind.config.js      # Tailwind CSS configuration
│   ├── postcss.config.js
│   ├── src/
│   │   ├── main.tsx            # React entry point
│   │   ├── App.tsx             # Root component + routing
│   │   ├── components/         # Reusable UI components
│   │   ├── pages/              # Page-level components
│   │   ├── hooks/              # Custom React hooks
│   │   ├── utils/              # Utility functions
│   │   └── styles/             # Global styles
│   └── public/                 # Static assets
├── scrapers/                   # Data collection scrapers
│   ├── openei_importer.py      # OpenEI USURDB bulk importer
│   ├── utility_scraper.py      # Base utility website scraper
│   ├── puc_monitor.py          # Public utility commission monitor
│   ├── spiders/                # Per-utility scraping spiders
│   ├── extractors/             # Data extraction logic
│   └── parsers/                # Rate schedule parsers
├── scripts/                    # Utility scripts
│   ├── setup.sh                # Initial project setup
│   ├── run_dev.sh              # Start development servers
│   ├── seed_database.py        # Seed DB with utilities + territories
│   ├── seed_data.sh            # Seed data shell wrapper
│   ├── bulk_import.py          # Bulk tariff import
│   ├── run_import.py           # Import runner
│   └── fix_state_data.py       # State data corrections
├── data/                       # Data files (CSV, JSON)
├── docker/                     # Docker configuration
│   ├── Dockerfile.backend      # Backend image (Python 3.11)
│   ├── Dockerfile.frontend     # Frontend image (Node 18 + Nginx)
│   └── nginx.conf              # Nginx reverse proxy config
├── docs/                       # MkDocs documentation
├── docker-compose.yml          # Full-stack orchestration
├── mkdocs.yml                  # MkDocs Material configuration
├── Makefile                    # Development commands
└── README.md

Setting Up the Development Environment

Prerequisites

Tool             Version   Purpose
Python           3.11+     Backend runtime
Node.js          18+       Frontend build
PostgreSQL       15+       Database
Redis            7+        Cache + task broker
Docker           24+       Containerized development (optional)
Docker Compose   2.0+      Multi-service orchestration (optional)

Option A: Docker Compose

The fastest way to get the full stack running:

# Clone the repository
git clone https://github.com/meterbase/meterbase.git
cd meterbase

# Create environment file
cp backend/.env.example backend/.env
# Edit backend/.env with your API keys (ANTHROPIC_API_KEY, etc.)

# Start all services
docker compose up -d

# Seed the database
docker compose exec backend python scripts/seed_database.py

# Verify
curl http://localhost:8000/health
# {"status": "healthy"}

# Frontend available at http://localhost:80
# API docs at http://localhost:8000/docs

Option B: Local Development

For faster iteration without Docker:

# 1. Start PostgreSQL and Redis (or use Docker for just these)
docker compose up -d postgres redis

# 2. Backend setup
cd backend
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

# Create .env
cp .env.example .env
# Edit with: DATABASE_URL=postgresql+asyncpg://meterbase:meterbase@localhost:5432/meterbase

# Run migrations
alembic upgrade head

# Seed database
python scripts/seed_database.py

# Start backend (with hot reload)
uvicorn app.main:app --reload --port 8000

# 3. Frontend setup (new terminal)
cd frontend
npm install
npm run dev
# Frontend at http://localhost:5173 (Vite dev server)

# 4. Celery worker (new terminal, optional)
cd backend
celery -A app.workers.celery_app worker --loglevel=info

Code Style

Python (Backend)

  • Standard: PEP 8
  • Type hints: Required on all function signatures
  • String formatting: f-strings preferred
  • Imports: stdlib, third-party, local (separated by blank lines)
  • Line length: 120 characters max
  • Docstrings: Google style
# Good
async def get_tariffs_by_zip(
    zip_code: str,
    sector: str = "residential",
    limit: int = 50,
) -> list[Tariff]:
    """Fetch tariffs for a given ZIP code.

    Args:
        zip_code: 5-digit US ZIP code.
        sector: Tariff sector filter.
        limit: Maximum results to return.

    Returns:
        List of matching tariffs, ordered by relevance.
    """
    ...

TypeScript (Frontend)

  • Mode: Strict ("strict": true in tsconfig.json)
  • Components: Functional components with hooks
  • Styling: Tailwind CSS utility classes
  • State management: React hooks (useState, useReducer, custom hooks)
  • API calls: Centralized in utils/ or custom hooks in hooks/
// Good
interface PropertyCardProps {
  property: Property;
  onSelect: (id: number) => void;
}

const PropertyCard: React.FC<PropertyCardProps> = ({ property, onSelect }) => {
  return (
    <div
      className="rounded-lg border p-4 shadow-sm"
      onClick={() => onSelect(property.id)}
    >
      <h3 className="text-lg font-semibold">{property.name}</h3>
      <p className="text-sm text-gray-500">{property.address}</p>
    </div>
  );
};

Adding New API Endpoints

Follow this step-by-step process to add a new API endpoint.

Step 1: Define the Pydantic Schema

# backend/app/schemas/my_feature.py
from datetime import datetime

from pydantic import BaseModel

class MyFeatureRequest(BaseModel):
    property_id: int
    option: str = "default"

class MyFeatureResponse(BaseModel):
    id: int
    result: str
    created_at: datetime

    model_config = {"from_attributes": True}
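
Pydantic validates input and fills defaults at construction time, and FastAPI surfaces validation failures as 422 responses. A quick sketch of how the request schema above behaves (runnable anywhere Pydantic v2 is installed):

```python
from pydantic import BaseModel, ValidationError

class MyFeatureRequest(BaseModel):
    property_id: int
    option: str = "default"

# Defaults are applied when a field is omitted
req = MyFeatureRequest(property_id=1)
print(req.option)  # default

# Missing required fields raise ValidationError (a 422 in FastAPI)
try:
    MyFeatureRequest(option="no-id")
except ValidationError as exc:
    print(exc.errors()[0]["loc"])  # ('property_id',)
```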

Step 2: Create the Route Handler

# backend/app/api/v1/my_feature.py
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.ext.asyncio import AsyncSession

from app.core.auth import get_current_user
from app.core.database import get_session
from app.models import User
from app.schemas.my_feature import MyFeatureRequest, MyFeatureResponse

router = APIRouter(prefix="/my-feature", tags=["My Feature"])


@router.post("/", response_model=MyFeatureResponse)
async def create_my_feature(
    request: MyFeatureRequest,
    user: User = Depends(get_current_user),
    session: AsyncSession = Depends(get_session),
):
    """Create a new my-feature resource."""
    # Business logic here
    ...

Step 3: Register the Router

# backend/app/api/v1/router.py
from app.api.v1.my_feature import router as my_feature_router

router.include_router(my_feature_router)

Step 4: Write Tests

# backend/tests/test_my_feature.py
import pytest
from httpx import AsyncClient

@pytest.mark.asyncio
async def test_create_my_feature(client: AsyncClient, auth_headers: dict):
    response = await client.post(
        "/api/v1/my-feature/",
        json={"property_id": 1, "option": "test"},
        headers=auth_headers,
    )
    assert response.status_code == 200
    assert "result" in response.json()

Adding New Models

Step 1: Create the Model

# backend/app/models/my_model.py
from datetime import datetime, timezone
from sqlalchemy import DateTime, ForeignKey, Integer, String
from sqlalchemy.orm import Mapped, mapped_column, relationship
from app.core.database import Base

class MyModel(Base):
    __tablename__ = "my_models"

    id: Mapped[int] = mapped_column(Integer, primary_key=True, autoincrement=True)
    property_id: Mapped[int] = mapped_column(
        ForeignKey("properties.id"), nullable=False, index=True
    )
    name: Mapped[str] = mapped_column(String(255), nullable=False)
    created_at: Mapped[datetime] = mapped_column(
        DateTime(timezone=True), default=lambda: datetime.now(timezone.utc)
    )

Step 2: Register in __init__.py

# backend/app/models/__init__.py
from app.models.my_model import MyModel

__all__ = [
    # ... existing models ...
    "MyModel",
]

Step 3: Generate Migration

cd backend
alembic revision --autogenerate -m "Add my_models table"

Step 4: Review and Apply Migration

# Review the generated migration file in migrations/versions/
# Then apply:
alembic upgrade head

Database Migrations with Alembic

Common Commands

cd backend

# Generate a migration from model changes
alembic revision --autogenerate -m "Description of change"

# Apply all pending migrations
alembic upgrade head

# Rollback one migration
alembic downgrade -1

# Show current migration state
alembic current

# Show migration history
alembic history --verbose

# Generate an empty migration (for manual SQL)
alembic revision -m "Manual data migration"

Migration Best Practices

  1. Always review auto-generated migrations. Alembic cannot detect all changes (e.g., column renames, data migrations).
  2. Keep migrations small and focused. One migration per logical change.
  3. Test migrations both ways (upgrade and downgrade) before committing.
  4. Never edit a migration that has been applied to production. Create a new migration instead.
  5. Include data migrations in separate migration files from schema changes.

Example: Adding a Column

# migrations/versions/xxxx_add_energy_score.py
"""Add energy_score column to properties."""

from alembic import op
import sqlalchemy as sa

revision = "xxxx"
down_revision = "yyyy"

def upgrade() -> None:
    op.add_column(
        "properties",
        sa.Column("energy_score", sa.Float(), nullable=True),
    )

def downgrade() -> None:
    op.drop_column("properties", "energy_score")

Adding New Utility Scrapers

MeterBase includes a scraper framework for collecting tariff data directly from utility websites.

Step 1: Create the Spider

# scrapers/spiders/my_utility_spider.py
"""Scraper for My Utility Company rate schedules."""

from scrapers.utility_scraper import BaseUtilityScraper


class MyUtilitySpider(BaseUtilityScraper):
    name = "my_utility"
    utility_name = "My Utility Company"
    eia_id = "12345"
    base_url = "https://www.myutility.com/rates"

    def scrape_tariffs(self) -> list[dict]:
        """Scrape all available tariffs.

        Returns:
            List of tariff dicts in MeterBase standard format.
        """
        response = self.fetch(self.base_url)
        # Parse HTML, extract rate tables
        tariffs = []
        # ... parsing logic ...
        return tariffs

Step 2: Add a Parser (if needed)

For complex rate schedule formats, add a dedicated parser:

# scrapers/parsers/my_utility_parser.py
def parse_rate_table(html: str) -> dict:
    """Parse My Utility's HTML rate table into structured data."""
    ...
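
As a sketch of what such a parser might look like, assuming the rate page uses a plain HTML table with tier/rate columns (the real markup will differ per utility), using only the standard library:

```python
from html.parser import HTMLParser

class RateTableParser(HTMLParser):
    """Collect cell text from table rows in an HTML document."""

    def __init__(self) -> None:
        super().__init__()
        self.in_cell = False
        self.rows: list[list[str]] = []
        self._row: list[str] | None = None

    def handle_starttag(self, tag: str, attrs) -> None:
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self.in_cell = True

    def handle_endtag(self, tag: str) -> None:
        if tag == "tr" and self._row is not None:
            self.rows.append(self._row)
            self._row = None
        elif tag in ("td", "th"):
            self.in_cell = False

    def handle_data(self, data: str) -> None:
        if self.in_cell and self._row is not None:
            self._row.append(data.strip())


def parse_rate_table(html: str) -> dict[str, float]:
    """Parse a simple two-column rate table into {tier_name: rate} pairs."""
    parser = RateTableParser()
    parser.feed(html)
    # Skip the header row; assume columns are (tier name, $/kWh rate)
    return {name: float(rate.lstrip("$")) for name, rate in parser.rows[1:]}


html = (
    "<table><tr><th>Tier</th><th>Rate</th></tr>"
    "<tr><td>Tier 1 (0-500 kWh)</td><td>$0.12</td></tr>"
    "<tr><td>Tier 2</td><td>$0.19</td></tr></table>"
)
print(parse_rate_table(html))  # {'Tier 1 (0-500 kWh)': 0.12, 'Tier 2': 0.19}
```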

Step 3: Register the Spider

Add it to the scraper runner so Celery Beat picks it up during weekly scrapes.
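
The exact registration mechanism lives in the scraper runner; one common shape is a decorator-based registry like the following (names here are illustrative, not the actual runner API):

```python
# Illustrative registry; the real runner lives in the scrapers package.
SPIDERS: dict[str, type] = {}

def register_spider(cls: type) -> type:
    """Class decorator: index a spider class by its `name` attribute."""
    SPIDERS[cls.name] = cls
    return cls

@register_spider
class MyUtilitySpider:
    name = "my_utility"
    utility_name = "My Utility Company"

# The weekly Celery Beat task can then iterate every registered spider
for name, spider_cls in SPIDERS.items():
    print(name, spider_cls.__name__)  # my_utility MyUtilitySpider
```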

Step 4: Respect Rate Limits

  • Use the configured SCRAPE_DELAY_SECONDS (default: 2.0s) between requests
  • Set a proper User-Agent header
  • Honor robots.txt
  • Maximum concurrency is controlled by SCRAPE_CONCURRENCY (default: 5)
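
The actual throttling is handled by BaseUtilityScraper; as a minimal sketch of what enforcing the per-request delay involves (class and method names here are illustrative):

```python
import time

SCRAPE_DELAY_SECONDS = 2.0  # mirrors the documented default

class RequestThrottle:
    """Enforce a minimum interval between consecutive requests."""

    def __init__(self, delay: float = SCRAPE_DELAY_SECONDS) -> None:
        self.delay = delay
        self._last: float | None = None

    def wait(self) -> float:
        """Block until `delay` has elapsed since the last call; return seconds slept."""
        slept = 0.0
        if self._last is not None:
            remaining = self.delay - (time.monotonic() - self._last)
            if remaining > 0:
                time.sleep(remaining)
                slept = remaining
        self._last = time.monotonic()
        return slept

throttle = RequestThrottle(delay=0.1)
throttle.wait()              # first call never sleeps
print(throttle.wait() > 0)   # True: back-to-back calls are throttled
```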

Running Tests

Backend Tests

cd backend

# Run all tests
pytest

# Run with verbose output
pytest -v

# Run a specific test file
pytest tests/test_tariffs.py

# Run a specific test function
pytest tests/test_tariffs.py::test_search_by_zip

# Run with coverage
pytest --cov=app --cov-report=html

# Run only async tests
pytest -m asyncio

Frontend Tests

cd frontend

# Run all tests
npm test

# Run in watch mode
npm run test:watch

# Run with coverage
npm run test:coverage

PR Process

Before Submitting

  1. Branch from main: Create a feature branch with a descriptive name.

    git checkout -b feature/add-gas-tariff-support
    
  2. Write tests for new functionality.

  3. Run the full test suite and ensure it passes:

    cd backend && pytest
    cd ../frontend && npm test
    
  4. Run linters:

    # Python
    ruff check backend/
    mypy backend/app/
    
    # TypeScript
    cd frontend && npm run lint
    
  5. Update documentation if your change affects:

    • API endpoints (update OpenAPI descriptions)
    • Data model (update data-model.md)
    • Configuration (update .env.example and deployment docs)
    • Data sources (update data-sources.md)

PR Template

## What

Brief description of the change.

## Why

Context and motivation.

## How

Technical approach.

## Testing

- [ ] Unit tests added/updated
- [ ] Manual testing performed
- [ ] Migration tested (upgrade + downgrade)

## Checklist

- [ ] Tests pass
- [ ] Linters pass
- [ ] Documentation updated (if applicable)
- [ ] Migration file reviewed (if applicable)

Review Expectations

  • PRs require at least one approving review
  • CI must pass (tests, linting, type checking)
  • Migrations must be reviewed carefully for correctness and reversibility
  • API changes must maintain backward compatibility (or be versioned)