Recipicity Deployment¶
Production¶
Always Use the Build Script
Never run docker build manually for production Recipicity. The frontend requires build args for API URL configuration.
Build and Deploy¶
ssh john@192.168.51.40
cd /opt/development/recipicity/prod
# Build, push to registry, and deploy to swarm
./build.sh all --deploy
# Or individual components
./build.sh frontend --deploy
./build.sh api --deploy
./build.sh admin --deploy
# Options:
# --no-cache Build without Docker cache
# --push Push to registry after build
# --deploy Deploy to swarm after push (implies --push)
Digest-Pinned Deploys
The build script automatically captures the SHA256 digest from each docker push and uses it with docker service update --image <image>@sha256:<digest>. This eliminates Swarm's stale :latest tag cache issues — no need for --force updates.
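As a sketch of that mechanism (the real `build.sh` may differ in detail), the digest can be parsed straight out of the `docker push` output:

```shell
# Hypothetical sketch -- build.sh's actual implementation may differ.
# `docker push` ends with a line like:
#   latest: digest: sha256:0a1b2c3d size: 3245
extract_digest() {
  awk '/digest: sha256:/ {print $3}'
}

sample_push_output='latest: digest: sha256:0a1b2c3d size: 3245'
digest=$(printf '%s\n' "$sample_push_output" | extract_digest)
echo "$digest"   # sha256:0a1b2c3d

# The deploy step then pins the service to that exact digest:
#   docker service update --with-registry-auth \
#     --image registry.apps.jlwaller.com/recipicity-production-api@"$digest" \
#     app-recipicity-production_api
```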
Manual Deploy (if image already in registry)¶
# On any manager node
docker stack deploy -c /opt/swarm/stacks/recipicity-production/stack.recipicity-production.yml app-recipicity-production
Force Update a Single Service¶
# Preferred: use exact digest from registry
docker service update --with-registry-auth --image registry.apps.jlwaller.com/recipicity-production-api@sha256:<digest> app-recipicity-production_api
# Fallback: force pull of :latest tag
docker service update --with-registry-auth --image registry.apps.jlwaller.com/recipicity-production-frontend:latest --force app-recipicity-production_frontend
Docker Cache Gotcha
Swarm nodes cache pulled images locally. Using :latest tag with --force may still serve a cached image if the digest hasn't changed. The build script's digest-pinned approach avoids this entirely. For manual deploys, always use @sha256:<digest> when possible.
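For manual deploys, the digest behind a tag can be looked up via the registry's standard v2 API (unverified example against this registry; it may additionally require credentials):

```shell
# HEAD the manifest and read the Docker-Content-Digest header.
# The Accept header matters: without it the registry may return a
# different (schema1) digest than the one Swarm actually pulls.
curl -skI \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  https://registry.apps.jlwaller.com/v2/recipicity-production-api/manifests/latest \
  | grep -i docker-content-digest
```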
Verify Production¶
# Check all 5 services running
docker stack services app-recipicity-production
# Check frontend
curl -I https://recipicity.com
# Check API
curl https://recipicity.com/api/health
# Check admin
curl -I https://recipicity.com/admin
# Logs
docker service logs -f app-recipicity-production_api
docker service logs -f app-recipicity-production_frontend
Staging¶
Build and Deploy¶
ssh john@192.168.51.40
cd /opt/development/recipicity/staging
# Build, push, and deploy (digest-pinned)
./build.sh all --deploy
# Or individual components
./build.sh frontend --deploy
./build.sh api --deploy
./build.sh admin --deploy
# Full rebuild from scratch
./build.sh all --no-cache --deploy
Key Differences from Production¶
| Aspect | Production | Staging |
|---|---|---|
| Frontend | Next.js | Next.js 16.1.6 |
| Frontend port | 3000 (Node.js) | 3000 (Node.js) |
| Rendering | Server-side (SSR) | Server-side (SSR) |
| Internal API | http://app-recipicity-production_api:3000/api | http://app-recipicity-staging_api:3000/api |
| CSS | Tailwind CSS v4 | Tailwind CSS v4 |
| Auth | httpOnly cookies (BFF pattern) | httpOnly cookies (BFF pattern) |
| AI Provider | Provider-agnostic (Anthropic + OpenAI) | Provider-agnostic (Anthropic + OpenAI) |
Staging-Specific Notes¶
- Frontend needs the `data_net` network for SSR API calls (direct service mesh, not through Traefik)
- Internal API URL set via the `INTERNAL_API_URL` environment variable
- Memory: 512MB limit / 256MB reservation (higher than production due to SSR)
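The staging notes above might correspond to a service definition along these lines (a sketch only; service, image, and network names are assumptions, not copied from the real stack file):

```yaml
services:
  frontend:
    image: registry.apps.jlwaller.com/recipicity-staging-frontend:latest
    environment:
      INTERNAL_API_URL: http://app-recipicity-staging_api:3000/api
    networks:
      - data_net            # direct SSR calls to the API service, bypassing Traefik
    deploy:
      resources:
        limits:
          memory: 512M      # higher than production due to SSR
        reservations:
          memory: 256M
```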
Staging Admin (Separate Stack)¶
docker stack deploy -c /opt/swarm/stacks/app-recipicity/stack.recipicity-staging-admin.yml app-recipicity-staging-admin
AI Configuration¶
Recipicity uses a provider-agnostic AI client that routes requests to either Anthropic (Claude) or OpenAI based on the selected model.
How It Works¶
- Model detection: Models starting with `claude-` route to Anthropic; all others route to OpenAI
- Provider setting: Admin panel sets `defaultProvider` to `auto`, `openai`, or `anthropic`
- Fallback: If the chosen provider has no API key configured, falls back to the other provider
- API keys: Stored in the `ai_settings` database table (DB key takes priority over environment variables)
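A minimal sketch of the routing rule described above (illustrative only; the real client is application code and also handles the missing-API-key fallback, which is omitted here):

```shell
# Hypothetical sketch of provider selection.
pick_provider() {
  model="$1"; default_provider="$2"
  # An explicit provider setting wins over model-based detection
  case "$default_provider" in
    anthropic|openai) echo "$default_provider"; return ;;
  esac
  # "auto": models starting with claude- go to Anthropic, the rest to OpenAI
  case "$model" in
    claude-*) echo "anthropic" ;;
    *)        echo "openai" ;;
  esac
}

pick_provider claude-sonnet-4-20250514 auto   # anthropic
pick_provider gpt-4o auto                     # openai
pick_provider claude-sonnet-4-20250514 openai # openai
```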
AI Features¶
| Feature | Description |
|---|---|
| Recipe Generation | AI generates complete recipes from user prompts |
| Recipe Import (URL) | Schema.org parsing + AI enrichment (description, tags) |
| Recipe Import (Image) | Vision model extracts recipe from photos |
| Recipe Import (Text) | AI structures pasted recipe text |
| Conversational Recipes | Chat-based recipe creation and modification |
AI Tracking¶
All AI calls are logged to the ai_generation_logs table with provider, model, token usage, and cost. Imported recipes store aiProvider and aiModel fields for traceability.
Configure via Admin Panel¶
- Navigate to Admin → AI Settings
- Set API keys for Anthropic and/or OpenAI
- Select default model (e.g., `claude-sonnet-4-20250514`)
- Set provider preference (`auto` recommended)
Recipe Import Pipeline¶
URL Import Flow¶
- Fetch & parse — Downloads page, extracts Schema.org JSON-LD or microdata
- Schema.org extraction — Parses structured recipe data (ingredients, steps, times)
- AI enrichment — Generates factual description, suggests tags
- Unit normalization — Standardizes ingredient units (cups→cup, tablespoons→tbsp, pounds→lb)
- Drink detection — Heuristic analysis classifies recipe as food or drink
- Save as draft — User reviews before publishing
Unit Normalization¶
Ingredient units are normalized to canonical forms to prevent duplicates:
| Variants | Canonical |
|---|---|
| cups, Cups, c, C | cup |
| tablespoons, Tablespoon, Tbsp | tbsp |
| teaspoons, teaspoon, TSP | tsp |
| pounds, lbs, pound | lb |
| ounces, ounce, oz. | oz |
| slices, pieces, cloves, etc. | slice, piece, clove |
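The table above amounts to a lowercase-and-map step; this is an illustrative sketch, not the API's actual code:

```shell
# Illustrative unit normalizer: lowercase, strip a trailing period,
# then map known variants to their canonical form.
normalize_unit() {
  u=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  u=${u%.}                      # "oz." -> "oz"
  case "$u" in
    cup|cups|c)                  echo "cup" ;;
    tablespoon|tablespoons|tbsp) echo "tbsp" ;;
    teaspoon|teaspoons|tsp)      echo "tsp" ;;
    pound|pounds|lb|lbs)         echo "lb" ;;
    ounce|ounces|oz)             echo "oz" ;;
    slices)                      echo "slice" ;;
    pieces)                      echo "piece" ;;
    cloves)                      echo "clove" ;;
    *)                           echo "$u" ;;  # unknown units pass through
  esac
}

normalize_unit Cups   # cup
normalize_unit "oz."  # oz
normalize_unit TSP    # tsp
```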
Copyright Policy
Recipe imports never copy copyrightable content (personal stories, creative descriptions, images). Only factual data is extracted: ingredient lists, cooking steps, times, servings. AI generates original descriptions. Source URL is always attributed.
Database Operations¶
Connect to Production Database¶
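This section has no commands in the source. Based on the backup command further down (container `data_postgres`, database `recipicity_production`), a connection likely looks like this (unverified sketch):

```shell
docker exec -it $(docker ps -qf name=data_postgres) psql -U postgres recipicity_production
```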
Connect to Staging Database¶
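This section is also empty in the source. Assuming staging shares the same Postgres container and the database name follows the production pattern (`recipicity_staging` is a guess, not confirmed):

```shell
docker exec -it $(docker ps -qf name=data_postgres) psql -U postgres recipicity_staging
```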
Run Migrations¶
# Staging (safe to test)
ssh john@192.168.51.40
cd /opt/development/recipicity/staging/recipicity-api
npx prisma migrate deploy
# Production (always test in staging first!)
cd /opt/development/recipicity/prod/recipicity-api
npx prisma migrate deploy
Backup Production Database¶
docker exec $(docker ps -qf name=data_postgres) pg_dump -U postgres recipicity_production > recipicity_prod_$(date +%F).sql
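A matching restore would look something like the following (sketch only; verify the dump file and target database before running this against production):

```shell
docker exec -i $(docker ps -qf name=data_postgres) psql -U postgres recipicity_production < recipicity_prod_<date>.sql
```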
Rollback¶
Service Rollback¶
docker service rollback app-recipicity-production_api
docker service rollback app-recipicity-production_frontend
Deploy Previous Image Version¶
# List available tags
curl -sk https://registry.apps.jlwaller.com/v2/recipicity-production-api/tags/list
# Deploy specific tag
docker service update --with-registry-auth --image registry.apps.jlwaller.com/recipicity-production-api:{tag} app-recipicity-production_api
Payment System (Paddle)¶
- Webhook route mounted BEFORE `express.json()` middleware (needs raw body for signature verification)
- Credentials stored encrypted in SiteConfig DB table (AES-256-GCM)
- Encryption key from Docker secret: `recipicity_staging_encryption_key`
- Paddle SDK v1.10: uses `paddle.transactions.create()` (not checkouts)
- Cancel: `{ effectiveFrom: "immediately" }`
- Auth tokens: `paddle.customers.generateAuthToken()` returns `.customerAuthToken`
Disk Maintenance¶
Docker images accumulate on cluster nodes. Periodically prune unused images:
# Check reclaimable space
ssh john@192.168.51.10 "docker system df"
ssh john@192.168.51.15 "docker system df"
# Prune unused images (safe — only removes untagged/unused)
ssh john@192.168.51.10 "docker image prune -af"
ssh john@192.168.51.15 "docker image prune -af"
# Also prune build cache on dev server
ssh john@192.168.51.40 "docker builder prune -af && docker image prune -af"