Data Services

All data services run on the apps-data node (192.168.51.30), pinned there by placement constraints in the data stack.

Stack file: /opt/swarm/stacks/data/stack.data.yml
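
To deploy or update the stack, run the usual Swarm deploy from a manager node (the stack name data matches the data_ service prefixes used throughout this page):

```shell
# Deploy (or update in place) the data stack from a Swarm manager node
docker stack deploy -c /opt/swarm/stacks/data/stack.data.yml data

# Confirm all services are placed and running
docker stack services data
```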

PostgreSQL

Image: postgres:15.15-alpine
Port: 5432 (internal, via the data_net overlay network)
Volume: Named volume for persistent data

Access

# Direct access
docker exec -it $(docker ps -qf name=data_postgres) psql -U postgres

# Specific database
docker exec -it $(docker ps -qf name=data_postgres) psql -U postgres -d recipicity_production

# List databases
docker exec -it $(docker ps -qf name=data_postgres) psql -U postgres -c "\l"

# Active connections
docker exec -it $(docker ps -qf name=data_postgres) psql -U postgres -c "SELECT datname, usename, state FROM pg_stat_activity;"

Databases

Database Application Owner
recipicity Recipicity (legacy) postgres
recipicity_production Recipicity Production recipicity_production_owner
recipicity_staging Recipicity Staging recipicity_staging_owner
eventapp Events (legacy) eventapp_owner
vault HashiCorp Vault vault_owner

Database Users

User Role Purpose
postgres Superuser Database administration
monitoring pg_monitor Read-only metrics for Prometheus exporter
recipicity_production_owner Owner Recipicity production database
recipicity_staging_owner Owner Recipicity staging database
eventapp_owner Owner Events database
vault_owner Owner Vault storage backend

Backup

# Single database
docker exec $(docker ps -qf name=data_postgres) pg_dump -U postgres recipicity_production > backup.sql

# All databases
docker exec $(docker ps -qf name=data_postgres) pg_dumpall -U postgres > all-databases.sql

# Restore
docker exec -i $(docker ps -qf name=data_postgres) psql -U postgres -d recipicity_production < backup.sql

An automated backup service (data_backup) runs as part of the data stack.
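
To check on the automated backup service, inspect its task history and recent logs (service name data_backup as above):

```shell
# Recent task history (restarts, failures) for the backup service
docker service ps data_backup

# Last 50 log lines from the most recent backup runs
docker service logs --tail 50 data_backup
```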

PgBouncer

Image: edoburu/pgbouncer:latest
Port: 6432 (internal)
Purpose: Connection pooling for PostgreSQL

Applications connect to PgBouncer at port 6432 instead of directly to PostgreSQL at 5432. This reduces connection overhead and prevents pool exhaustion.

Configuration: Managed via Docker config pgbouncer-userlist containing SCRAM-SHA-256 hashes.
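
As a sketch, switching an application to the pooler is just a host/port swap in its connection string. The in-stack service name pgbouncer and the database user below are assumptions for illustration:

```shell
# Hypothetical example: on the data_net overlay, services resolve by name.
DB_USER="recipicity_production_owner"   # assumed app-level DB user
DB_NAME="recipicity_production"

# Direct to PostgreSQL (bypasses pooling):  postgres://$DB_USER@postgres:5432/$DB_NAME
# Via PgBouncer (preferred):
DATABASE_URL="postgres://${DB_USER}@pgbouncer:6432/${DB_NAME}"
echo "$DATABASE_URL"
```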

Redis

Image: redis:7.4.7-alpine
Port: 6379 (internal, via data_net)
Volume: Named volume for persistence

Access

# Redis CLI
docker exec -it $(docker ps -qf name=data_redis) redis-cli

# Health check
docker exec $(docker ps -qf name=data_redis) redis-cli ping

# Info/stats
docker exec $(docker ps -qf name=data_redis) redis-cli info

Used for session storage, caching, and rate limiting across applications.

MinIO

Image: minio/minio:latest
Ports: 9000 (API), 9001 (Console)
Volume: Named volume for object data
Placement: apps-data only

S3-compatible object storage for file uploads and media.

Console: minio.apps.jlwaller.com (HTTPS via Traefik, internal DNS only)

Bucket Management

# Get into MinIO container
ssh john@192.168.51.30
docker exec -it $(docker ps -qf name=data_minio) sh

# Set up mc alias (use root credentials from Docker secret)
mc alias set local http://localhost:9000 <root-user> <root-password>
mc ls local/
mc mb local/<bucket-name>
mc anonymous set download local/<bucket-name>

Current buckets:

Bucket Application
recipicity-production-images Recipicity Production
recipicity-staging-images Recipicity Staging
recipicity-assets General assets
recipicity-images Recipicity Images (shared)
generationsoftrust-assets GoT assets
event-assets Events platform

User and Policy Management

# Generate credentials
ACCESS_KEY=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 20)
SECRET_KEY=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 40)

# Create user and attach a bucket-scoped policy
mc admin user add local $ACCESS_KEY $SECRET_KEY
mc admin policy create local <policy-name> /tmp/policy.json
mc admin policy attach local <policy-name> --user=$ACCESS_KEY
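
The /tmp/policy.json referenced above is a standard S3-style policy document. A minimal bucket-scoped sketch, using one of the existing buckets purely as an example (scope yours to the bucket the user actually needs):

```shell
# Example bucket-scoped policy; the bucket name here is illustrative only
cat > /tmp/policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::recipicity-staging-images",
        "arn:aws:s3:::recipicity-staging-images/*"
      ]
    }
  ]
}
EOF
```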

After creating a MinIO user, store credentials as Docker secrets:

# Use printf rather than echo so the secret doesn't pick up a trailing newline
printf '%s' "<access-key>" | docker secret create <app>_<env>_minio_access_key -
printf '%s' "<secret-key>" | docker secret create <app>_<env>_minio_secret_key -

Applications reference these via *_FILE environment variables pointing to /run/secrets/.
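
The *_FILE convention works like this entrypoint-style sketch (simulated below with a temp directory standing in for /run/secrets; the variable names are illustrative):

```shell
# Simulate a mounted Docker secret with a temp dir instead of /run/secrets
SECRETS_DIR=$(mktemp -d)
printf '%s' "s3cr3t" > "$SECRETS_DIR/minio_secret_key"

# The app is configured with e.g. MINIO_SECRET_KEY_FILE=/run/secrets/<app>_<env>_minio_secret_key
MINIO_SECRET_KEY_FILE="$SECRETS_DIR/minio_secret_key"

# At startup, the entrypoint reads the file into the real variable
MINIO_SECRET_KEY=$(cat "$MINIO_SECRET_KEY_FILE")
echo "$MINIO_SECRET_KEY"
```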

Vault

Image: HashiCorp Vault 1.15.6
URL: vault.apps.jlwaller.com (Traefik) or http://192.168.51.30:8200 (direct)
Stack: data (runs alongside the other data services on apps-data)
Storage backend: PostgreSQL (vault database)
Port 8200: Published by the data stack for direct access from VLAN 51
Engine: KV v2 mounted at secret/
Unseal: Shamir 2-of-3; auto-unseal service on dockerrr (vault-unseal systemd unit)

Vault provides centralized secrets management for the platform. It runs on the data_net overlay network and is accessible via Traefik with Let's Encrypt TLS or directly on port 8200.

Key secrets stored:

Path Contents
secret/unifi/udm UniFi API key
secret/pushover Pushover notification credentials
secret/recipicity/prod/analytics Recipicity GA key (G-9NC4HEHCZ0)
secret/generationsoftrust/prod/analytics GoT GA key (G-2MSRPL704E)
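
Secrets at these paths can be read with the Vault CLI (KV v2); the field name in the last command is an assumption for illustration:

```shell
# Point the CLI at the Vault instance (either address from above works)
export VAULT_ADDR=https://vault.apps.jlwaller.com

# Read a whole secret
vault kv get secret/pushover

# Read a single field; "api_key" is a hypothetical field name
vault kv get -field=api_key secret/unifi/udm
```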

Docker Registry

Image: registry:2
URL: registry.apps.jlwaller.com (HTTPS via Traefik, Let's Encrypt cert)
Stack: registry (separate from the data stack)
Placement: apps-data only

Operations

# List all repositories
curl -sk https://registry.apps.jlwaller.com/v2/_catalog

# List tags for an image
curl -sk https://registry.apps.jlwaller.com/v2/{repo}/tags/list

# Push an image
docker tag myapp:latest registry.apps.jlwaller.com/myapp:latest
docker push registry.apps.jlwaller.com/myapp:latest

Current Repositories

  • recipicity-production-api, recipicity-production-frontend, recipicity-production-admin
  • recipicity-staging-api, recipicity-staging-frontend, recipicity-staging-admin
  • jlwaller-site
  • pingcast
  • platform-docs

Prometheus Exporters

Exporters run as part of the monitoring-exporters stack. See Monitoring for full details.

Exporter Port Connection Credentials
postgres_exporter 9187 monitoring user via data_net Docker secret monitoring_pg_password
redis_exporter 9121 Password file via data_net Docker secret monitoring_redis_password (JSON format)
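
A quick way to spot-check that the exporters are serving metrics, assuming their ports are reachable from the apps-data host (adjust the address if they are only exposed on an internal network):

```shell
# postgres_exporter and redis_exporter each expose an "up" gauge
curl -s http://192.168.51.30:9187/metrics | grep -m1 '^pg_up'
curl -s http://192.168.51.30:9121/metrics | grep -m1 '^redis_up'
```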

Secrets

Data service credentials are managed via Docker secrets:

Secret Purpose
pg_password PostgreSQL superuser password
redis_password Redis authentication
monitoring_pg_password PostgreSQL monitoring user password (read-only)
monitoring_redis_password Redis password for exporter (JSON format)
recipicity_production_database_url Production DB connection string
recipicity_staging_database_url Staging DB connection string
minio_root_password MinIO admin password

See Credential Rotation Runbook for rotation procedures.

Troubleshooting

Problem Solution
"Access Denied" listing buckets Verify you're using root credentials from Docker secret
"Bucket does not exist" Create it: mc mb local/<bucket-name>
"Secret already exists" Remove first: docker secret rm <name>, then recreate
API can't connect after secret change Restart the service: docker service update --force <service>