Architecture

Updated April 2026. Docker Swarm was fully decommissioned and replaced with k3s.

Cluster Topology

6-node k3s cluster, all nodes on VLAN 51 (192.168.51.0/24).

Internet → Cloudflare → UDM Firewall → apps-edge (Traefik, ports 80/443)
                                       k3s cluster
                              ┌────────────────────────┐
                              │  apps-app1 :10 (ctrl)  │
                              │  apps-app2 :15         │
                              │  apps-data :30 (data)  │
                              │  apps-dev1 :40 (build) │
                              │  apps-edge :50 (edge)  │
                              │  apps-monitoring :20   │
                              └────────────────────────┘

Node Details

Node             IP             Role                    Key Services
apps-app1        192.168.51.10  control-plane + worker  kubectl, registry
apps-app2        192.168.51.15  worker                  application workloads
apps-monitoring  192.168.51.20  worker                  cAdvisor (:8080)
apps-data        192.168.51.30  worker                  postgres, pgbouncer, redis, minio, vault
apps-dev1        192.168.51.40  worker + build          Docker image builds, source code
apps-edge        192.168.51.50  worker                  Traefik hostNetwork (80/443), docs source

Traffic Flow

Internet
  → Cloudflare (DDoS, WAF, CDN)
  → UDM Firewall (ports 80/443 only)
  → apps-edge:Traefik (TLS termination, routing via IngressRoutes)
  → k3s pod (any worker node, via Cilium CNI)
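A quick sanity check on the front of this chain: Cloudflare stamps a Server header on proxied responses, so a header probe confirms a public hostname is actually passing through the proxy. A minimal sketch (the function name and example hostname are mine):

```shell
# Succeed if the HTTP response headers on stdin show Cloudflare in front.
# Cloudflare-proxied responses carry a "server: cloudflare" header.
fronted_by_cloudflare() {
  grep -qi '^server: *cloudflare'
}

# Example (hypothetical hostname):
#   curl -sI https://www.jlwaller.com | fronted_by_cloudflare && echo proxied
```

A response without that header means the record is likely set to DNS-only, bypassing the DDoS/WAF layer.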

Certificate Management

cert-manager handles all TLS via Let's Encrypt: DNS-01 against the Cloudflare API for the wildcard issuers, HTTP-01 for letsencrypt-prod-http01:

Issuer                       Covers
letsencrypt-prod             *.jlwaller.com, *.apps.jlwaller.com
letsencrypt-prod-recipicity  *.recipicity.com, recipicity.com
letsencrypt-prod-http01      api.generationsoftrust.com (pending DNS A record)
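Issuance problems show up in the Ready condition of each Certificate object, which can be filtered out of kubectl's JSON output. A sketch assuming jq is available locally; the function name is mine, and the field paths follow cert-manager's Certificate status schema:

```shell
# Print namespace/name of every cert-manager Certificate that lacks
# a Ready=True condition (i.e. still issuing, or failing).
not_ready_certs() {
  jq -r '.items[]
    | select([.status.conditions[]? | select(.type == "Ready" and .status == "True")] | length == 0)
    | "\(.metadata.namespace)/\(.metadata.name)"'
}

# Usage:
#   ssh john@192.168.51.10 "kubectl get certificates -A -o json" | not_ready_certs
```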

Storage

Data services run as hostNetwork pods pinned to apps-data. All data lives on the host filesystem:

Path (on apps-data)  Service
/opt/postgres        PostgreSQL data
/opt/redis           Redis persistence
/opt/minio           MinIO object storage
/opt/vault/file      Vault file backend
/opt/backups         Daily backup output

The container registry stores its data at /opt/registry/ on apps-app1.
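Since /opt/backups is the nightly backup target, a freshness check run on apps-data catches a silently failing backup job. A sketch; the one-day threshold is an assumption to match the daily schedule:

```shell
# Succeed only if at least one file under the backup dir was
# modified within the last day (find -mtime -1).
backup_fresh() {
  dir=${1:-/opt/backups}
  [ -n "$(find "$dir" -type f -mtime -1 | head -n 1)" ]
}

# e.g. on apps-data:
#   backup_fresh || echo "backups are stale"
```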

Networking

k3s uses Cilium as the CNI (replaced Flannel). Pods communicate across nodes via Cilium overlay.

Data services use hostNetwork: true, so they bind directly to ports on 192.168.51.30 exactly as they did before the k3s migration; no port numbers changed.
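One consequence worth verifying after a node reboot: each data service's port must be open on the node IP itself, not on a ClusterIP. A bash /dev/tcp probe (a sketch; 5432 is shown only as the stock PostgreSQL port):

```shell
# Succeed if host:port accepts a TCP connection within 2 seconds.
# Uses bash's /dev/tcp pseudo-device; requires bash and coreutils timeout.
port_open() {
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# e.g. port_open 192.168.51.30 5432 || echo "postgres not listening"
```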

Deployment Workflow

1. Edit source code on apps-dev1 (/opt/development/recipicity/)
2. Build image: bash build.sh frontend --env production --push
   (pushes to registry.apps.jlwaller.com:5000 on apps-app1)
3. Update manifest on dockerrr:
   /opt/docker/homelab/k3s-manifests/<namespace>/<file>.yaml
   (update image digest)
4. Apply: cat manifest.yaml | ssh john@192.168.51.10 "kubectl apply -f -"
5. Monitor: kubectl rollout status deployment/<name> -n <namespace>
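The digest update in step 3 can be scripted rather than hand-edited. A sed sketch (the function name is mine; a YAML-aware tool like yq would be more robust):

```shell
# Replace (or append) the digest pinned on a given image line in a manifest.
# Handles "image: ref", "image: ref:tag", and "image: ref@sha256:...".
# Usage: set_image_digest <manifest> <image-ref> <sha256:...>
set_image_digest() {
  sed -i.bak -E "s|(image: $2)(:[A-Za-z0-9._-]+)?(@sha256:[0-9a-f]+)?|\1@$3|" "$1"
}

# e.g. set_image_digest deployment.yaml \
#        registry.apps.jlwaller.com:5000/frontend sha256:abc123...
```

The .bak copy sed leaves behind doubles as a cheap rollback reference for the previous digest.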

Secrets Management

Application secrets are Kubernetes Secrets in each namespace:

# Read a secret
ssh john@192.168.51.10 "kubectl get secret recipicity-production-secrets \
  -n recipicity-production -o jsonpath='{.data.jwt-secret}' | base64 -d"
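To dump every key of a secret at once instead of one jsonpath at a time, the secret's JSON form can be decoded with jq (a sketch; the function name is mine, and jq >= 1.6 is assumed for @base64d):

```shell
# Decode all data keys of a Kubernetes Secret whose JSON arrives on stdin.
decode_secret() {
  jq -r '.data | to_entries[] | "\(.key)=\(.value | @base64d)"'
}

# Usage:
#   ssh john@192.168.51.10 "kubectl get secret recipicity-production-secrets \
#     -n recipicity-production -o json" | decode_secret
```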

Vault (at apps-data:8200) stores API keys, UniFi tokens, Pushover credentials:

curl -s -H "X-Vault-Token: <token>" http://192.168.51.30:8200/v1/secret/data/<path>
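Because the /v1/secret/data/ route is KV version 2, the response nests the payload under .data.data; a small jq wrapper pulls out a single field (the function name is mine):

```shell
# Extract one field from a Vault KV v2 read response on stdin.
# KV v2 wraps the stored key/value pairs under .data.data.
vault_field() {
  jq -r --arg k "$1" '.data.data[$k]'
}

# Usage:
#   curl -s -H "X-Vault-Token: <token>" \
#     http://192.168.51.30:8200/v1/secret/data/<path> | vault_field <field>
```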