Deployment
Docker Compose
The default. Every built room gets its own compose stack.
```
cd gen/<room>/soleprint
docker compose up
```
Environment variables in .env control the stack:
- `DEPLOYMENT_NAME` — identifies the deployment
- `NETWORK_NAME` — Docker network name
- `SOLEPRINT_PORT` — port to expose
Each room runs independently. No shared state between rooms.
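A minimal `.env` for one room might look like this. The variable names come from the list above; the values are illustrative assumptions, not generated defaults:

```shell
# Illustrative .env — names from the docs above, values are assumptions
DEPLOYMENT_NAME=myroom
NETWORK_NAME=myroom_net
SOLEPRINT_PORT=8080
```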
Local Development with Caddy
For multi-room local dev, Caddy acts as a reverse proxy. It routes by hostname.
Caddyfile location: `~/wdir/ppl/local/Caddyfile`
Add entries to /etc/hosts for each room:
```
127.0.0.1 myroom.local.ar
127.0.0.1 myroom.spr.local.ar
```
Routing pattern:
- `myroom.local.ar` — app direct
- `myroom.spr.local.ar` — app with soleprint sidebar
Caddy reads the hostname and proxies to the right container.
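A sketch of what the Caddyfile entries for this routing could look like; the container names and ports here are assumptions, not the project's actual values:

```
# Hypothetical Caddyfile sketch — container names and ports are assumptions
myroom.local.ar {
    reverse_proxy myroom-app:3000
}

myroom.spr.local.ar {
    reverse_proxy myroom-soleprint:8080
}
```

Each site block matches on hostname, so adding a room locally means adding a hosts entry and a pair of blocks like these.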
AWS Deployment
Production runs on EC2. Domain: `soleprint.mcrn.ar`.
Deploy standalone with:
```
./ctrl/deploy.sh
```
Services sit on a shared Docker network. Nginx handles routing by subdomain.
The Gateway Pattern
All rooms share one Nginx entry point. One server, one IP, many rooms.
How it works:
- Wildcard DNS: `*.mcrn.ar` points to the EC2 instance.
- Nginx receives the request and reads the hostname.
- The hostname maps to a container on the shared Docker network.
- Nginx proxies to that container.
No per-room DNS config. Add a room, add an Nginx block, reload. The wildcard handles the rest.
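A per-room Nginx block under this pattern could be sketched as follows; the upstream container name and port are assumptions for illustration:

```
# Hypothetical Nginx block for one room — upstream name and port are assumptions
server {
    listen 80;
    server_name myroom.mcrn.ar;

    location / {
        # Resolves via the shared Docker network's DNS
        proxy_pass http://myroom:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Because the wildcard DNS record already covers `myroom.mcrn.ar`, a block like this plus a reload is the only routing change a new room needs.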