soleprint/docs/data/en/deployment.md
2026-04-14 10:32:05 -03:00

# Deployment
## Docker Compose
Docker Compose is the default target. Every built room gets its own Compose stack.
```bash
cd gen/<room>/soleprint
docker compose up
```
Environment variables in `.env` control the stack:
- `DEPLOYMENT_NAME` — identifies the deployment
- `NETWORK_NAME` — Docker network name
- `SOLEPRINT_PORT` — port to expose
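As a concrete illustration, a room's `.env` might look like this (the values are hypothetical; only the three variable names come from the docs above):

```
DEPLOYMENT_NAME=myroom
NETWORK_NAME=myroom-net
SOLEPRINT_PORT=8080
```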
Each room runs independently. No shared state between rooms.
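Tying the variables together, a room's compose file could be sketched roughly as below. The service name, image, and internal port are assumptions for illustration, not taken from the repo:

```yaml
services:
  soleprint:
    image: soleprint:latest            # hypothetical image name
    container_name: ${DEPLOYMENT_NAME}
    ports:
      - "${SOLEPRINT_PORT}:3000"       # internal port 3000 is an assumption
    networks:
      - app

networks:
  app:
    name: ${NETWORK_NAME}
```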
## Local Development with Caddy
For multi-room local development, Caddy acts as a reverse proxy, routing requests by hostname.
Caddyfile location: `~/wdir/ppl/local/Caddyfile`
Add entries to `/etc/hosts` for each room:
```
127.0.0.1 myroom.local.ar
127.0.0.1 myroom.spr.local.ar
```
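With several rooms, the host entries can be generated instead of typed by hand. A small sketch (room names are placeholders):

```shell
# Print /etc/hosts entries for each room: one line for the app host,
# one for the soleprint-sidebar host. Append the output with sudo tee.
rooms="myroom otherroom"
for room in $rooms; do
  printf '127.0.0.1 %s.local.ar\n' "$room"
  printf '127.0.0.1 %s.spr.local.ar\n' "$room"
done
```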
Routing pattern:
- `myroom.local.ar` — the app directly
- `myroom.spr.local.ar` — the app with the soleprint sidebar
Caddy reads the hostname and proxies to the right container.
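A minimal Caddyfile following this routing pattern might look like the sketch below. The upstream ports (and the existence of a separate sidebar upstream) are assumptions, not the contents of the actual file:

```
http://myroom.local.ar {
    reverse_proxy localhost:8080
}
http://myroom.spr.local.ar {
    reverse_proxy localhost:8081
}
```

The `http://` prefix keeps Caddy from trying to provision TLS certificates for the fake `.local.ar` hostnames.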
## AWS Deployment
Production runs on EC2. Domain: `soleprint.mcrn.ar`.
Deploy a standalone instance with:
```bash
./ctrl/deploy.sh
```
Services sit on a shared Docker network. Nginx handles routing by subdomain.
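Joining that shared network from a room's compose file can be sketched as follows; the network name `soleprint-shared` is a hypothetical placeholder:

```yaml
# Compose fragment: attach this stack's default network to a
# pre-existing shared Docker network instead of creating its own.
networks:
  default:
    external: true
    name: soleprint-shared
```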
## The Gateway Pattern
All rooms share one Nginx entry point. One server, one IP, many rooms.
How it works:
1. Wildcard DNS: `*.mcrn.ar` points to the EC2 instance.
2. Nginx receives the request and reads the hostname.
3. Hostname maps to a container on the shared Docker network.
4. Nginx proxies to that container.
No per-room DNS config. Add a room, add an Nginx block, reload. The wildcard handles the rest.
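An example of what such a per-room Nginx block might look like, with hypothetical names (`myroom` as both subdomain and container name, upstream port `8080`):

```nginx
server {
    listen 80;
    server_name myroom.mcrn.ar;  # matched via the *.mcrn.ar wildcard DNS

    location / {
        # The container name resolves through the shared Docker network's DNS.
        proxy_pass http://myroom:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

After adding a block like this, `nginx -s reload` picks it up without touching DNS.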