Multi-Tenant Architecture

Campaign Brain is multi-tenant at the management layer but single-tenant at the deployment layer: each customer gets their own isolated application instance.

Tenant Isolation

graph LR
    subgraph "Tenant A"
        A_APP[App Instance]
        A_DB[(Database)]
        A_FILES[Files]
    end

    subgraph "Tenant B"
        B_APP[App Instance]
        B_DB[(Database)]
        B_FILES[Files]
    end

    A_APP --> A_DB
    A_APP --> A_FILES
    B_APP --> B_DB
    B_APP --> B_FILES

What's Isolated

Resource            Isolation Level   Notes
Database            Full              Separate DuckDB file per tenant
Application Code    Shared Source     Same codebase, independent instances
Configuration       Full              Separate .env files
Ports               Full              Unique port pairs per tenant
Systemd Services    Full              Dedicated services
Files/Uploads       Full              Separate directories

What's Shared

Resource            Notes
Source Repository   All tenants run the same cbapp version
i360 Database       Optional shared voter data
Ollama LLM          Shared AI service
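
To make the boundaries above concrete, here is a minimal sketch of how a per-tenant resource layout might be modelled. The directory layout, field names, and TenantConfig class are illustrative assumptions, not the actual cbapp implementation.

from dataclasses import dataclass
from pathlib import Path

@dataclass
class TenantConfig:
    """Everything referenced here is isolated per tenant (assumed layout)."""
    slug: str            # e.g. "mi20-clevenger"
    frontend_port: int   # unique per tenant
    api_port: int        # unique per tenant
    root: Path           # hypothetical tenant home directory

    @property
    def db_path(self) -> Path:
        # Separate DuckDB file per tenant
        return self.root / "data" / f"{self.slug}.duckdb"

    @property
    def env_file(self) -> Path:
        # Separate .env file per tenant
        return self.root / ".env"

    @property
    def uploads_dir(self) -> Path:
        # Separate uploads/files directory per tenant
        return self.root / "uploads"

# Shared across tenants: the source checkout, the optional i360
# database, and the Ollama LLM endpoint.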

Port Allocation

The tenant manager automatically allocates ports:

Base port: 32300, stepping by 10 per tenant

Tenant 1: 32300 (frontend), 32301 (api)
Tenant 2: 32310 (frontend), 32311 (api)
Tenant 3: 32320 (frontend), 32321 (api)
...

Before assigning a pair, the tenant manager checks for conflicts against:

  1. Database records
  2. Actual system port usage (socket binding)
  3. Nginx configuration
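
A minimal sketch of this allocation scheme, assuming a block of 10 ports per tenant starting at 32300. The socket-binding check mirrors check 2; the reserved set stands in for checks 1 and 3 (ports already recorded in the database or referenced in Nginx config), since those lookups depend on the real deployment.

import socket

BASE_PORT = 32300
STEP = 10  # assumed block size per tenant

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Check actual system usage by attempting to bind the port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

def allocate_ports(reserved: set[int]) -> tuple[int, int]:
    """Return the next free (frontend, api) pair for a new tenant."""
    frontend = BASE_PORT
    while True:
        api = frontend + 1
        if (frontend not in reserved and api not in reserved
                and port_is_free(frontend) and port_is_free(api)):
            return frontend, api
        frontend += STEP

# With tenant 1 already on 32300/32301, the next tenant gets 32310/32311.
print(allocate_ports(reserved={32300, 32301}))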

Version Management

Each tenant can be upgraded independently of the others:

# Check version
GET /api/tenants/{id}/version

# Upgrade to specific version
POST /api/tenants/{id}/upgrade
{
    "target_version": "v0.3.0",
    "backup_db": true,
    "run_migrations": true
}
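
A minimal client-side sketch of these calls using the Python requests library. The base URL and bearer-token header are assumptions about how the tenant-manager API is exposed; the endpoint paths and JSON fields come from the example above.

import requests

BASE_URL = "http://tenant-manager.example/api"   # assumed address
HEADERS = {"Authorization": "Bearer <token>"}    # assumed auth scheme

def upgrade_tenant(tenant_id: str, target_version: str) -> dict:
    """Check the current version, then trigger an upgrade with a DB backup."""
    current = requests.get(
        f"{BASE_URL}/tenants/{tenant_id}/version",
        headers=HEADERS, timeout=10,
    ).json()
    print("current version:", current)

    resp = requests.post(
        f"{BASE_URL}/tenants/{tenant_id}/upgrade",
        headers=HEADERS,
        json={
            "target_version": target_version,
            "backup_db": True,
            "run_migrations": True,
        },
        timeout=600,  # upgrades with migrations can take a while
    )
    resp.raise_for_status()
    return resp.json()

# Example: upgrade_tenant("mi20-clevenger", "v0.3.0")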

Service Management

Each tenant runs as a pair of dedicated systemd services:

# Service naming
{slug}-api.service
{slug}-frontend.service

# Management
sudo systemctl start mi20-clevenger-api
sudo systemctl status ky04-frontend
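
A minimal sketch of driving these units programmatically by shelling out to systemctl. The helper names and the subprocess approach are assumptions; only the {slug}-api / {slug}-frontend naming convention comes from above.

import subprocess

def tenant_units(slug: str) -> list[str]:
    """Derive the systemd unit names for a tenant slug."""
    return [f"{slug}-api.service", f"{slug}-frontend.service"]

def restart_tenant(slug: str) -> None:
    """Restart both units for a tenant (needs root/sudo)."""
    for unit in tenant_units(slug):
        subprocess.run(["sudo", "systemctl", "restart", unit], check=True)

def tenant_status(slug: str) -> dict[str, str]:
    """Map each unit to its state: 'active', 'inactive', or 'failed'."""
    status = {}
    for unit in tenant_units(slug):
        result = subprocess.run(
            ["systemctl", "is-active", unit],
            capture_output=True, text=True,
        )
        status[unit] = result.stdout.strip()
    return status

# Example: tenant_status("mi20-clevenger")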

Scaling Considerations

Current Model (Single Server)

  • All tenants on one server
  • Shared Nginx reverse proxy
  • Shared Ollama instance

Future Scaling Options

  1. Horizontal Scaling: Deploy tenants across multiple servers
  2. Container Deployment: Docker/Kubernetes per tenant
  3. Database Sharding: Separate database servers per tenant group

See Also