test: trigger pipeline
This commit is contained in:
buenosairesam
2026-01-02 19:05:57 -03:00
parent 9e5cbbad1f
commit 56f720ca92
41 changed files with 78 additions and 3252 deletions
CLAUDE.md
@@ -4,7 +4,7 @@
Soleprint is a **development workflow platform** - a self-contained environment where you can run, test, and document everything in isolation. Born from the friction of working on small teams where testing required PRs, documentation was scattered, and quick API connectors took too long to set up.

**Core idea:** BDD -> Gherkin -> Backend/Frontend Tests, with reusable connectors and tools that work across projects.

**Name:** Soleprint - "Cada paso deja huella" / "Each step leaves a mark"
@@ -15,91 +15,53 @@
```
spr/
├── CLAUDE.md                  # You are here
├── README.md                  # User-facing docs
├── schema.json                # Source of truth for models
├── build.py                   # Build tool
├── cfg/                       # Room configurations
│   ├── standalone/            # Base soleprint config
│   │   ├── config.json        # Framework branding/terminology
│   │   └── data/              # Data files (veins.json, shunts.json, etc.)
│   └── amar/                  # AMAR room config
│       ├── config.json        # Can rebrand (e.g., "pawprint")
│       ├── data/              # Room-specific data files
│       ├── .env.example
│       ├── docker-compose.yml
│       ├── soleprint/         # Soleprint Docker config for this room
│       ├── databrowse/depot/
│       ├── tester/tests/
│       ├── monitors/
│       └── models/
├── ctrl/                      # Build/run scripts
│   ├── build.sh               # ./build.sh [room]
│   ├── start.sh               # ./start.sh [room] [-d]
│   ├── stop.sh                # ./stop.sh [room]
│   └── logs.sh                # ./logs.sh [room]
├── artery/                    # Vital connections
│   ├── veins/                 # Stateless API connectors (jira, slack, google)
│   ├── shunts/                # Fake connectors for testing
│   ├── pulses/                # Composed: Vein + Room + Depot
│   └── plexus/                # Full apps: backend + frontend + DB
├── atlas/                     # Documentation system
│   └── books/                 # Soleprint docs (external via depots)
├── station/                   # Tools & execution
│   ├── tools/                 # modelgen, datagen, tester, sbwrapper
│   └── monitors/              # databrowse
├── soleprint/                 # Core entry points (versioned)
│   ├── main.py
│   ├── run.py
│   ├── index.html
│   ├── requirements.txt
│   ├── Dockerfile
│   └── dataloader/
└── gen/                       # Built instances (gitignored)
    ├── standalone/            # python build.py dev
    └── amar/                  # python build.py dev --cfg amar
```

## The Four Systems
@@ -117,82 +79,71 @@
```
Vein ──────► Pulse ──────► Plexus
 │            │              │
 │            │              └── Full app: backend + frontend + DB
 │            │
 │            └── Composed: Vein + Room + Depot
 │
 └── Stateless API connector

Shunt ─── Fake connector for testing
```
| Type | State | Frontend | Deploy |
|------|-------|----------|--------|
| Vein | None (or OAuth) | Optional test UI | With soleprint |
| Pulse | Vein + config | Uses vein's | With soleprint |
| Plexus | Full app state | Required | Self-contained |
| Shunt | Configurable responses | Config UI | With soleprint |
## Room Configuration

Each room in `cfg/` has:

- `config.json` - Framework branding/terminology (can rebrand soleprint)
- `data/` - Data files (veins.json, shunts.json, depots.json, etc.)
- Room-specific folders: databrowse depot, tester tests, monitors, models

Managed rooms (like amar) also have:

- `docker-compose.yml` - The room's own services
- `soleprint/` - Soleprint Docker config for this room
- `.env.example` - Environment template

## Build & Run
### Commands

```bash
# Build
python build.py dev              # -> gen/standalone/
python build.py dev --cfg amar   # -> gen/amar/
python build.py dev --all        # -> both

# Using ctrl scripts
./ctrl/build.sh                  # Build standalone
./ctrl/build.sh amar             # Build amar
./ctrl/build.sh --all            # Build all

./ctrl/start.sh                  # Start standalone
./ctrl/start.sh amar             # Start amar
./ctrl/start.sh amar -d          # Detached
./ctrl/stop.sh amar              # Stop
./ctrl/logs.sh amar              # View logs

# Bare-metal dev
cd gen/standalone && .venv/bin/python run.py
```
### Adding a New Managed Room

```bash
# 1. Create room config
mkdir -p cfg/clientx/data

# 2. Copy base config
cp cfg/standalone/config.json cfg/clientx/
cp -r cfg/standalone/data/* cfg/clientx/data/

# 3. Customize as needed (shunts, depots, branding)

# 4. Build and run
python build.py dev --cfg clientx
./ctrl/start.sh clientx
```
## Ports

@@ -200,110 +151,34 @@
| Service | Port |
|---------|------|
| Soleprint | 12000 |
| Amar Backend | 8000 |
| Amar Frontend | 3000 |
## Tools

| Tool | Location | Purpose |
|------|----------|---------|
| modelgen | station/tools/modelgen | Model generation |
| datagen | station/tools/datagen | Test data generation |
| tester | station/tools/tester | BDD/Playwright test runner |
| sbwrapper | station/tools/sbwrapper | Sidebar wrapper UI |
| databrowse | station/monitors/databrowse | SQL data browser |
## Integration with ppl/ (Infrastructure)

The `ppl/` repo manages infrastructure alongside spr:

```
wdir/
├── spr/                           # This repo (soleprint)
├── ppl/                           # Pipelines & infrastructure
│   ├── pipelines/spr-standalone/  # CI/CD for standalone
│   ├── pipelines/spr-managed/     # Manual deploy for rooms
│   └── gateway/                   # Nginx configs
└── ama/                           # Amar source code
```
### Pipeline (standalone only)

- git push -> woodpecker -> build gen/standalone/ -> docker push -> deploy
- Managed rooms deploy manually (no pipeline for client code)
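The standalone pipeline above could look roughly like the Woodpecker config below. This is a hypothetical sketch: the real file lives in `ppl/pipelines/spr-standalone/` and is not shown in this commit, and the step names, images, and registry URL are all placeholders.

```yaml
# Hypothetical sketch of ppl/pipelines/spr-standalone - every name here is a placeholder
steps:
  - name: build
    image: python:3.12-slim
    commands:
      - python build.py dev          # -> gen/standalone/
  - name: publish
    image: docker:27-cli
    commands:
      - docker build -t registry.example.com/soleprint:latest gen/standalone
      - docker push registry.example.com/soleprint:latest
```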
## External Paths

@@ -311,7 +186,6 @@
| What | Path |
|------|------|
| Amar Backend | /home/mariano/wdir/ama/amar_django_back |
| Amar Frontend | /home/mariano/wdir/ama/amar_frontend |
| Pipelines | /home/mariano/wdir/ppl |

## Files Ignored
@@ -1,44 +0,0 @@
# Mainroom - Environment Configuration
# Copy this file to .env and fill in your values
#
# This configuration is shared across all services in mainroom
# Individual services can override these values in their own .env files
# =============================================================================
# DEPLOYMENT CONFIG
# =============================================================================
# Unique identifier for this deployment (used for container/network names)
DEPLOYMENT_NAME=soleprint
# Network name for Docker services
NETWORK_NAME=soleprint_network
# =============================================================================
# DOMAINS (Local Development)
# =============================================================================
# Domain for soleprint services
SOLEPRINT_DOMAIN=soleprint.local.com
# Domain for the managed application (e.g., amar)
MANAGED_DOMAIN=amar.local.com
# =============================================================================
# SOLEPRINT PATHS
# =============================================================================
# Path to generated soleprint instance
SOLEPRINT_BARE_PATH=/home/mariano/wdir/spr/gen
# =============================================================================
# PORTS
# =============================================================================
SOLEPRINT_PORT=12000
ARTERY_PORT=12001
ATLAS_PORT=12002
STATION_PORT=12003
# =============================================================================
# MANAGED APP CONFIG (when using with amar or other rooms)
# =============================================================================
# Complete nginx location blocks for the managed domain
# This is where you define your app's routing structure
# See cfg/amar/.env.example for managed app specific config
@@ -1,145 +0,0 @@
# Mainroom - Orchestration Layer
## Purpose
Mainroom orchestrates **soleprint + managed rooms** together (e.g., amar).
Key principle: Connect soleprint to managed apps **without modifying either side**.
## Structure
```
mainroom/
├── amar -> ../cfg/amar # Symlink to room config
├── soleprint/ # Soleprint Docker config
│ ├── docker-compose.yml
│ ├── docker-compose.nginx.yml
│ └── .env
├── sbwrapper/ # Sidebar wrapper UI
│ ├── config.json # Room-specific (users, Jira)
│ ├── sidebar.js
│ └── sidebar.css
└── ctrl/ # Orchestration scripts
├── start.sh # Start all services
├── stop.sh
├── deploy.sh # Deploy to AWS
└── server/ # AWS setup scripts
```
## Usage
### Local Development
```bash
# First, build soleprint
cd spr/
python build.py dev --cfg amar
# Create shared network
docker network create soleprint_network
# Start everything
cd mainroom/ctrl
./start.sh -d # Detached
./start.sh # Foreground (logs)
./start.sh amar # Only amar
./start.sh soleprint # Only soleprint
./stop.sh # Stop all
```
### Deploy to AWS
```bash
cd mainroom/ctrl
./deploy.sh --dry-run # Preview
./deploy.sh # Deploy
```
## Components
### ctrl/ - Orchestration
| Script | Purpose |
|--------|---------|
| start.sh | Start amar + soleprint |
| stop.sh | Stop all |
| deploy.sh | rsync to AWS |
| server/ | AWS setup scripts |
### soleprint/ - Docker Config
Uses `SOLEPRINT_BARE_PATH` to mount gen/ into container.
**Env vars:**
- `SOLEPRINT_BARE_PATH` - Path to gen/
- `DEPLOYMENT_NAME` - Container prefix
- `NETWORK_NAME` - Docker network (soleprint_network)
- `SOLEPRINT_PORT` - Default 12000
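A hypothetical compose fragment showing how these variables could fit together. The actual `mainroom/soleprint/docker-compose.yml` is not reproduced in this commit, so treat every key and path below as illustrative only:

```yaml
# Illustrative sketch - the real mainroom/soleprint/docker-compose.yml may differ
services:
  soleprint:
    container_name: ${DEPLOYMENT_NAME}_soleprint
    ports:
      - "${SOLEPRINT_PORT}:12000"
    volumes:
      - ${SOLEPRINT_BARE_PATH}:/app    # mounts gen/ into the container
networks:
  default:
    name: ${NETWORK_NAME}
    external: true
```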
### sbwrapper/ - Sidebar Wrapper
Collapsible sidebar overlay for managed apps.
**Features:**
- Quick login (switch test users)
- Jira ticket info
- Environment info
- Keyboard: `Ctrl+Shift+P`
**config.json:**
```json
{
"room_name": "amar",
"wrapper": {
"users": [
{"id": "admin", "label": "Admin", "username": "admin@test.com", ...}
],
"jira": {"ticket_id": "VET-535"}
}
}
```
### amar/ - Room Symlink
Points to `../cfg/amar` which contains:
- docker-compose.yml
- .env
- Dockerfile.backend, Dockerfile.frontend
- databrowse/depot/, tester/tests/, monitors/, models/
## How It Works
1. `build.py dev --cfg amar` creates gen/ with room config
2. `mainroom/amar` symlinks to `cfg/amar`
3. `ctrl/start.sh` finds docker-compose.yml in amar/ and soleprint/
4. Both share `soleprint_network` for inter-container communication
5. sbwrapper overlays UI on managed app
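Step 3 (finding services by their docker-compose.yml) can be sketched in isolation. The loop below mirrors the auto-detection used by `build.sh` and `logs.sh`, run here against a throwaway directory instead of mainroom/:

```shell
#!/bin/sh
# Service auto-detection sketch: any sibling directory containing a
# docker-compose.yml counts as a service, except ctrl/ and nginx/.
root=$(mktemp -d)
mkdir -p "$root/amar" "$root/soleprint" "$root/ctrl" "$root/nginx"
touch "$root/amar/docker-compose.yml" \
      "$root/soleprint/docker-compose.yml" \
      "$root/nginx/docker-compose.yml"    # present, but excluded by name

cd "$root"
services=""
for dir in */; do
    name="${dir%/}"
    if [ -f "$dir/docker-compose.yml" ] && [ "$name" != "ctrl" ] && [ "$name" != "nginx" ]; then
        services="$services $name"
    fi
done
echo "detected:$services"
```

Running it prints `detected: amar soleprint` - nginx/ is skipped by name and ctrl/ has no compose file.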
## Ports
| Service | Port |
|---------|------|
| Soleprint | 12000 |
| Amar Backend | 8000 |
| Amar Frontend | 3000 |
## Integration with ppl/
Deploy via ppl/ctrl for centralized infrastructure management:
```bash
cd /home/mariano/wdir/ppl/ctrl
./deploy-gen.sh # Build spr + deploy
./dns.sh add soleprint # Add DNS record
```
## Server Structure (mcrn.ar)
```
~/mainroom/
├── amar/ # Amar Docker services
├── soleprint/ # Soleprint Docker services
└── ctrl/ # Server-side scripts
```
## External Paths
| What | Path |
|------|------|
| Amar Backend | /home/mariano/wdir/ama/amar_django_back |
| Amar Frontend | /home/mariano/wdir/ama/amar_frontend |
| Soleprint gen | /home/mariano/wdir/spr/gen |
| Pipelines | /home/mariano/wdir/ppl |
@@ -1 +0,0 @@
../cfg/amar
@@ -1,14 +0,0 @@
# Configuration for mainroom deployment
# Server configuration
DEPLOY_SERVER=mariano@mcrn.ar
# Docker deployment (default)
DEPLOY_REMOTE_PATH=~/mainroom
# Bare metal deployment (--bare-metal flag)
DEPLOY_BARE_METAL_PATH=~/soleprint
# Local source code paths
# (Defaults are set in deploy.sh if not specified here)
LOCAL_SOLEPRINT=/home/mariano/wdir/spr/gen
@@ -1,102 +0,0 @@
# Exclude patterns for rsync deployment
# Used by deploy.sh for all sync operations
# =============================================================================
# VERSION CONTROL
# =============================================================================
.git
.gitignore
# =============================================================================
# PYTHON
# =============================================================================
*.pyc
*.pyo
*.pyd
__pycache__
.pytest_cache
.mypy_cache
.coverage
htmlcov
*.egg-info
dist
build
.venv
venv
env
ENV
# Django build artifacts
staticfiles
*.sqlite3
*.db
# =============================================================================
# NODE/JAVASCRIPT
# =============================================================================
node_modules
.next
.nuxt
dist
out
.cache
.parcel-cache
coverage
.nyc_output
.npm
.pnp
.pnp.js
.eslintcache
.turbo
# =============================================================================
# IDE / EDITOR
# =============================================================================
.vscode
.idea
*.swp
*.swo
*~
# =============================================================================
# OS
# =============================================================================
.DS_Store
Thumbs.db
# =============================================================================
# LOGS
# =============================================================================
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# =============================================================================
# ENVIRONMENT FILES
# =============================================================================
.env
.env.*
!.env.example
# =============================================================================
# CORE_NEST SPECIFIC
# =============================================================================
# Large dev seed data (use test/prod on server)
init-db/seed-dev
# Dumps directory (source for seed files)
dumps
# Build artifacts (Django collectstatic output - not album/static which has Prism.js)
amar/src/back/static
amar/src/back/staticfiles
media
# =============================================================================
# PAWPRINT SPECIFIC
# =============================================================================
# Local workspace/definition folder (not for production)
def/
@@ -1,92 +0,0 @@
# Core Room Control Scripts
Control scripts for managing the core_room deployment (amar + soleprint).
## Structure
```
ctrl/
├── .env.sync # Configuration for deploy
├── .exclude # Rsync exclusion patterns
├── build.sh # Build Docker images (auto-detects services)
├── deploy.sh # Deploy to server (sync all files)
├── logs.sh # View container logs
├── setup.sh # Initial setup (nginx, certs, .env)
├── start.sh # Start Docker services
├── status.sh # Show container status
├── stop.sh # Stop Docker services
└── manual_sync/ # Source code sync scripts
├── sync_ama.sh # Sync amar source code
└── sync_soleprint.sh # Sync soleprint source code
```
## Configuration
Edit `.env.sync` to configure deployment:
```bash
# Server
DEPLOY_SERVER=mariano@mcrn.ar
DEPLOY_REMOTE_PATH=~/core_room
# Local paths
LOCAL_SOLEPRINT_PATH=/home/mariano/wdir/ama/soleprint
LOCAL_AMAR_BASE=/home/mariano/wdir/ama
# Remote paths
REMOTE_SOLEPRINT_PATH=/home/mariano/soleprint
REMOTE_AMAR_PATH=/home/mariano/core_room/amar/src
```
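`deploy.sh` consumes this file by sourcing it and falling back to built-in defaults for anything left unset. A minimal self-contained sketch of that pattern (the temp file stands in for `.env.sync`; values are placeholders):

```shell
#!/bin/sh
# Config-with-defaults pattern used by deploy.sh:
# source the config if present, then fall back via ${VAR:-default}.
cfg=$(mktemp)
printf 'DEPLOY_SERVER=mariano@mcrn.ar\n' > "$cfg"

. "$cfg" 2>/dev/null || true
SERVER="${DEPLOY_SERVER:-mariano@mcrn.ar}"        # set in the file
REMOTE_PATH="${DEPLOY_REMOTE_PATH:-~/core_room}"  # unset -> default

echo "deploying to $SERVER:$REMOTE_PATH"
```

Because the fallback sits in the script, `.env.sync` only needs to list the values you actually want to override.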
## Usage
### Full Deployment
```bash
cd ctrl
./deploy.sh # Deploy everything to server
./deploy.sh --dry-run # Preview what would be synced
# Then on server:
ssh server 'cd ~/core_room/ctrl && ./build.sh && ./start.sh -d'
```
### Local Development
```bash
./start.sh # Start all services (foreground, see logs)
./start.sh -d # Start all services (detached)
./start.sh --build # Start with rebuild
./start.sh -d --build # Detached with rebuild
./logs.sh # View logs
./stop.sh # Stop all services
```
### Service Management
```bash
# All scripts auto-detect services (any dir with docker-compose.yml)
./build.sh # Build all images
./build.sh amar # Build only amar images
./build.sh --no-cache # Force rebuild without cache
./start.sh # Start all (foreground)
./start.sh -d # Start all (detached)
./start.sh amar # Start specific service
./start.sh --build # Start with rebuild
./stop.sh # Stop all services
./stop.sh amar # Stop specific service
./logs.sh # View all logs
./logs.sh amar # View amar compose logs
./logs.sh backend # View specific container logs
./status.sh # Show container status
```
## Room vs Soleprint Control
- **core_room/ctrl/** - Manages the full room (amar + soleprint) via Docker
- **soleprint/ctrl/** - Manages soleprint services via systemd (alternative deployment)
Use core_room/ctrl for orchestrating the full room with Docker Compose.
Use soleprint/ctrl for direct systemd deployment of soleprint services only.
@@ -1,65 +0,0 @@
#!/bin/bash
# Build core_room Docker images
#
# Usage:
# ./build.sh # Build all
# ./build.sh <service> # Build specific service
# ./build.sh --no-cache # Force rebuild without cache
set -e
# Change to parent directory (services are in ../service_name)
cd "$(dirname "$0")/.."
# Export core_room/.env vars so child docker-compose files can use them
if [ -f ".env" ]; then
    set -a
    source .env   # handles quoted values and spaces, unlike export $(grep ... | xargs)
    set +a
fi
TARGET="all"
NO_CACHE=""
SERVICE_DIRS=()
# Find all service directories (have docker-compose.yml, exclude ctrl/nginx)
for dir in */; do
dirname="${dir%/}"
if [ -f "$dir/docker-compose.yml" ] && [ "$dirname" != "ctrl" ] && [ "$dirname" != "nginx" ]; then
SERVICE_DIRS+=("$dirname")
fi
done
for arg in "$@"; do
case $arg in
--no-cache) NO_CACHE="--no-cache" ;;
all) TARGET="all" ;;
*)
# Check if it's a valid service directory
if [[ " ${SERVICE_DIRS[@]} " =~ " ${arg} " ]]; then
TARGET="$arg"
fi
;;
esac
done
build_service() {
local service=$1
echo "Building $service images..."
cd "$service"
DOCKER_BUILDKIT=0 COMPOSE_DOCKER_CLI_BUILD=0 docker compose build $NO_CACHE
cd ..
echo " $service images built"
}
if [ "$TARGET" = "all" ]; then
for service in "${SERVICE_DIRS[@]}"; do
build_service "$service"
echo ""
done
elif [[ " ${SERVICE_DIRS[@]} " =~ " ${TARGET} " ]]; then
build_service "$TARGET"
else
echo "Usage: ./build.sh [${SERVICE_DIRS[*]}|all] [--no-cache]"
exit 1
fi
echo "=== Build Complete ==="
@@ -1,161 +0,0 @@
#!/bin/bash
# Deploy mainroom to server (amar + soleprint)
#
# Two deployment modes:
# 1. Docker (default): Full mainroom structure + source code
# 2. Bare metal (--bare-metal): Only soleprint source to systemd services
#
# Usage:
# ./deploy.sh # Deploy Docker setup (default)
# ./deploy.sh --bare-metal # Deploy bare metal soleprint only
# ./deploy.sh --dry-run # Preview what would be synced
# ./deploy.sh --bare-metal --dry-run # Preview bare metal sync
set -e
# Load configuration
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "$SCRIPT_DIR/.env.sync" 2>/dev/null || true
SERVER="${DEPLOY_SERVER:-mariano@mcrn.ar}"
REMOTE_PATH="${DEPLOY_REMOTE_PATH:-~/mainroom}"
BARE_METAL_PATH="${DEPLOY_BARE_METAL_PATH:-~/soleprint}"
# Source code paths (defaults if not in .env.sync)
LOCAL_AMAR_BACKEND="${LOCAL_AMAR_BACKEND:-$HOME/wdir/ama/amar_django_back}"
LOCAL_AMAR_FRONTEND="${LOCAL_AMAR_FRONTEND:-$HOME/wdir/ama/amar_frontend}"
LOCAL_SOLEPRINT="${LOCAL_SOLEPRINT:-$HOME/wdir/spr/gen}"
DRY_RUN=""
BARE_METAL=""
for arg in "$@"; do
case $arg in
--dry-run) DRY_RUN="--dry-run" ;;
--bare-metal) BARE_METAL="true" ;;
esac
done
cd "$SCRIPT_DIR/.." # Go to root (parent of ctrl/)
# Common rsync options (using single .exclude file for everything)
RSYNC_CMD="rsync -avz --mkpath --delete --progress --exclude-from=ctrl/.exclude"
# =============================================================================
# BARE METAL DEPLOYMENT
# =============================================================================
if [ -n "$BARE_METAL" ]; then
echo "=== Deploying to Bare Metal (Systemd Services) ==="
echo ""
# Only sync soleprint source
if [ -d "$LOCAL_SOLEPRINT" ] && [ -f "$LOCAL_SOLEPRINT/main.py" ]; then
echo "Syncing soleprint source to bare metal..."
$RSYNC_CMD $DRY_RUN \
"$LOCAL_SOLEPRINT/" \
"$SERVER:$BARE_METAL_PATH/"
echo " ✓ Soleprint synced to $BARE_METAL_PATH"
else
echo "⚠ Soleprint not found at: $LOCAL_SOLEPRINT"
exit 1
fi
echo ""
if [ -n "$DRY_RUN" ]; then
echo "=== Dry run complete ==="
exit 0
fi
echo "=== Bare Metal Sync Complete ==="
echo ""
echo "Next steps on server (as mariano user):"
echo " Restart systemd services:"
echo " sudo systemctl restart soleprint artery album ward"
echo ""
echo " Check status:"
echo " sudo systemctl status soleprint artery album ward"
echo ""
exit 0
fi
# =============================================================================
# DOCKER DEPLOYMENT (DEFAULT)
# =============================================================================
echo "=== Deploying to Docker ==="
echo ""
# 1. Sync mainroom structure (excluding src directories - they're synced separately)
echo "1. Syncing mainroom structure..."
$RSYNC_CMD $DRY_RUN \
--exclude='*/src/' \
./ \
"$SERVER:$REMOTE_PATH/"
echo " [OK] Core room structure synced"
echo ""
# 2. Sync amar backend source
if [ -d "$LOCAL_AMAR_BACKEND" ]; then
echo "2. Syncing amar backend source..."
$RSYNC_CMD $DRY_RUN \
"$LOCAL_AMAR_BACKEND/" \
"$SERVER:$REMOTE_PATH/amar/src/back/"
echo " [OK] Backend synced"
else
echo "2. [WARN] Backend source not found at: $LOCAL_AMAR_BACKEND"
fi
echo ""
# 3. Sync amar frontend source
if [ -d "$LOCAL_AMAR_FRONTEND" ]; then
echo "3. Syncing amar frontend source..."
$RSYNC_CMD $DRY_RUN \
"$LOCAL_AMAR_FRONTEND/" \
"$SERVER:$REMOTE_PATH/amar/src/front/"
echo " [OK] Frontend synced"
else
echo "3. [WARN] Frontend source not found at: $LOCAL_AMAR_FRONTEND"
fi
echo ""
# 4. Sync soleprint source
if [ -d "$LOCAL_SOLEPRINT" ] && [ -f "$LOCAL_SOLEPRINT/main.py" ]; then
echo "4. Syncing soleprint source..."
$RSYNC_CMD $DRY_RUN \
"$LOCAL_SOLEPRINT/" \
"$SERVER:$REMOTE_PATH/soleprint/src/"
echo " [OK] Soleprint synced"
else
echo "4. [INFO] Soleprint not found at: $LOCAL_SOLEPRINT"
fi
echo ""
# 5. Sync tests to ward (silent fail if not available)
if [ -z "$DRY_RUN" ]; then
echo "5. Syncing tests to ward..."
if SILENT_FAIL=true "$SCRIPT_DIR/sync-tests.sh" >/dev/null 2>&1; then
echo " [OK] Tests synced"
else
echo " [SKIP] Tests sync not configured or not available"
fi
echo ""
fi
if [ -n "$DRY_RUN" ]; then
echo "=== Dry run complete ==="
exit 0
fi
echo "=== Docker Sync Complete ==="
echo ""
echo "Next steps on server (as mariano user):"
echo " 1. Setup (first time only):"
echo " ssh $SERVER 'cd $REMOTE_PATH/ctrl/server && ./setup.sh'"
echo ""
echo " 2. Setup test symlinks (optional, enables test sharing):"
echo " ssh $SERVER 'cd $REMOTE_PATH/ctrl/server && ./setup-symlinks.sh'"
echo " Or sync tests without symlinks: ./ctrl/sync-tests.sh"
echo ""
echo " 3. Build and start:"
echo " ssh $SERVER 'cd $REMOTE_PATH/ctrl && ./build.sh && ./start.sh -d'"
echo ""
echo "Note: Bare metal services remain running as fallback (*.bare.mcrn.ar)"
@@ -1,49 +0,0 @@
#!/bin/bash
# View mainroom logs
#
# Usage:
# ./logs.sh # All logs
# ./logs.sh <service> # Service compose logs (e.g., amar, soleprint)
# ./logs.sh <container> # Specific container name (e.g., backend, db)
set -e
# Change to parent directory (services are in ../service_name)
cd "$(dirname "$0")/.."
# Export mainroom/.env vars
if [ -f ".env" ]; then
set -a
source .env
set +a
fi
TARGET=${1:-all}
SERVICE_DIRS=()
# Find all service directories (have docker-compose.yml, exclude ctrl/nginx)
for dir in */; do
dirname="${dir%/}"
if [ -f "$dir/docker-compose.yml" ] && [ "$dirname" != "ctrl" ] && [ "$dirname" != "nginx" ]; then
SERVICE_DIRS+=("$dirname")
fi
done
if [[ " ${SERVICE_DIRS[@]} " =~ " ${TARGET} " ]]; then
# Service directory logs
cd "$TARGET" && docker compose logs -f
elif [ "$TARGET" = "all" ]; then
# All containers from all services
echo "Tailing logs for: ${SERVICE_DIRS[*]}"
for service in "${SERVICE_DIRS[@]}"; do
cd "$service"
docker compose logs -f &
cd ..
done
wait
else
# Specific container name - try exact match
    docker logs -f "$TARGET" 2>/dev/null || {
        echo "Container not found: $TARGET"
        echo "Use service name (e.g., ./logs.sh soleprint) or full container name"
    }
fi
@@ -1,110 +0,0 @@
# Core Room - Environment Configuration
# This configuration is shared across all services in the room
# =============================================================================
# DEPLOYMENT CONFIG
# =============================================================================
# Unique identifier for this deployment (used for container/network names)
DEPLOYMENT_NAME=core_room
# Room identifier (logical grouping of services)
ROOM_NAME=core_room
# Network name for Docker services
NETWORK_NAME=core_room_network
# =============================================================================
# DOMAINS (Local Development)
# =============================================================================
# Domain for the managed application (e.g., amar)
MANAGED_DOMAIN=amar.local.com
# Domain for soleprint management interface
SOLEPRINT_DOMAIN=soleprint.local.com
# =============================================================================
# PORTS (Local Development)
# =============================================================================
# Managed app ports
BACKEND_PORT=8000
FRONTEND_PORT=3000
# Soleprint ecosystem ports
SOLEPRINT_PORT=13000
ARTERY_PORT=13001
ALBUM_PORT=13002
WARD_PORT=13003
# =============================================================================
# Ports
# =============================================================================
MANAGED_FRONTEND_PORT=3000
MANAGED_BACKEND_PORT=8000
# Backend location blocks (Django-specific)
MANAGED_BACKEND_LOCATIONS='
location /api/ {
proxy_pass http://${DEPLOYMENT_NAME}_backend:8000/api/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 300;
}
location /admin/ {
proxy_pass http://${DEPLOYMENT_NAME}_backend:8000/admin/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location /static/ {
proxy_pass http://${DEPLOYMENT_NAME}_backend:8000/static/;
}
'
# =============================================================================
# MANAGED DOMAIN CONFIG (AMAR-specific - core_room context)
# =============================================================================
# Complete nginx location blocks for amar
MANAGED_LOCATIONS='
location /api/ {
proxy_pass http://${DEPLOYMENT_NAME}_backend:8000/api/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 300;
}
location /admin/ {
proxy_pass http://${DEPLOYMENT_NAME}_backend:8000/admin/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location /static/ {
proxy_pass http://${DEPLOYMENT_NAME}_backend:8000/static/;
}
location / {
proxy_pass http://${DEPLOYMENT_NAME}_frontend:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 300;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
'
# =============================================================================
# AMAR PATHS (core_room specific - managed app)
# =============================================================================
BACKEND_PATH=../../amar_django_back
FRONTEND_PATH=../../amar_frontend
DOCKERFILE_BACKEND=../def/core_room/amar/Dockerfile.backend
DOCKERFILE_FRONTEND=../def/core_room/amar/Dockerfile.frontend
# Database seed data
INIT_DB_SEED=test


@@ -1,6 +0,0 @@
# Generated configuration files
.generated/
# Old backups
*.old
*.bak


@@ -1,234 +0,0 @@
# Server Configuration
Everything that runs **on the server** (not locally).
## Purpose
This directory contains **server-side** scripts and configs that get deployed to AWS.
Separate from `ctrl/` which contains **local** orchestration scripts.
## Structure
```
server/
├── setup.sh # Idempotent server setup (run on AWS)
├── nginx/
│ └── core_room.conf # Single nginx config for all services
└── scripts/ # Any other server-side scripts
```
## Expected Server Structure
When deployed, the AWS instance should look like:
```
~/core_room/ # This repo (deployed via deploy.sh)
├── server/ # Server-side scripts
│ ├── setup.sh # Run this first
│ └── nginx/
├── ctrl/ # Local scripts (work remotely too)
│ ├── build.sh, start.sh, stop.sh, logs.sh, status.sh
│ └── manual_sync/
├── amar/
│ ├── docker-compose.yml
│ ├── .env # Production values
│ ├── Dockerfile.*
│ ├── init-db/
│ └── src/ # Synced from local via manual_sync/
│ ├── back/ # Django source
│ └── front/ # Next.js source
└── soleprint/
├── docker-compose.yml
├── .env # Production values
└── (bare metal or src/ depending on deployment)
```
## Usage
### First-Time Server Setup
```bash
# 1. From local machine: Deploy files
cd ~/wdir/ama/core_room/ctrl
./deploy.sh
# 2. SSH to server
ssh mariano@mcrn.ar
# 3. Run server setup (idempotent - safe to re-run)
cd ~/core_room/server
./setup.sh
```
This will:
- Ensure directory structure exists
- Install Docker, Docker Compose, Nginx, Certbot
- Check SSL certificates (prompts if missing)
- Install nginx config
- Create .env files from examples
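The idempotency pattern behind setup.sh is check-before-act. A minimal sketch of the install step (illustrative only — `ensure_installed` is a hypothetical helper, not the actual setup.sh code, and package names may differ on your distro):

```bash
# Install a package only when its command is missing, so re-runs are no-ops.
# Usage: ensure_installed <command> [<apt-package>]
ensure_installed() {
    command -v "$1" >/dev/null 2>&1 || sudo apt-get install -y "${2:-$1}"
}

ensure_installed nginx
ensure_installed certbot
ensure_installed docker docker.io
```

The same shape works for directories (`mkdir -p`) and config files (`[ -f ... ] || cp`), which is why re-running the script is safe.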
### Updates/Changes
```bash
# From local: edit server/nginx/core_room.conf or server/setup.sh
# Then deploy:
./deploy.sh
# On server: re-run setup to apply changes
ssh mariano@mcrn.ar 'cd ~/core_room/server && ./setup.sh'
```
### Build and Start Services
```bash
# On server (or via SSH):
cd ~/core_room/ctrl
./build.sh # Build all images
./start.sh -d # Start detached
./status.sh # Check status
```
## Key Files
### server/setup.sh
Idempotent setup script that runs on AWS:
- Checks/installs: Docker, Nginx, Certbot
- Verifies SSL certs exist
- Installs nginx config
- Creates .env files from examples
**Safe to run multiple times** - won't break existing setup.
### server/nginx/core_room.conf
Single nginx config file for all services:
- amar.room.mcrn.ar (frontend + backend)
- soleprint.mcrn.ar
- artery.mcrn.ar
- album.mcrn.ar
- ward.mcrn.ar
Edit this file locally, deploy, re-run setup.sh to apply.
## Environment Variables
Create production `.env` files:
```bash
# On server:
nano ~/core_room/amar/.env # Set INIT_DB_SEED=test or prod
nano ~/core_room/soleprint/.env # Set ROOM_NAME, ports, etc.
```
## SSL Certificates
Certificates are managed via Let's Encrypt:
```bash
# Wildcard for *.room.mcrn.ar (for amar)
sudo certbot certonly --manual --preferred-challenges dns -d '*.room.mcrn.ar'
# Wildcard for *.mcrn.ar (for soleprint services)
sudo certbot certonly --manual --preferred-challenges dns -d '*.mcrn.ar'
```
Auto-renewal is handled by certbot systemd timer.
## Troubleshooting
### Nginx config test fails
```bash
sudo nginx -t
# Fix errors in server/nginx/core_room.conf
```
### Services won't start
```bash
cd ~/core_room/ctrl
./logs.sh # Check all logs
./logs.sh amar # Check specific service
docker ps -a # See all containers
```
### Database issues
```bash
# Check which seed data is configured
grep INIT_DB_SEED ~/core_room/amar/.env
# Rebuild database (WARNING: deletes data)
cd ~/core_room
docker compose -f amar/docker-compose.yml down -v
./ctrl/start.sh amar -d
```
## Test Directory Symlinking
### setup-symlinks.sh
**Purpose:** Create symlinks to share test directories across services on the same filesystem.
This allows ward/tester to access tests from amar_django_back_contracts without duplication.
```bash
# Preview changes
ssh mariano@mcrn.ar 'cd ~/core_room/ctrl/server && ./setup-symlinks.sh --dry-run'
# Apply changes
ssh mariano@mcrn.ar 'cd ~/core_room/ctrl/server && ./setup-symlinks.sh'
```
**What it does:**
- Creates symlinks from `soleprint/src/ward/tools/tester/tests/` to `amar/src/back/tests/contracts/`
- Symlinks each domain directory (mascotas, productos, solicitudes, workflows)
- Symlinks shared utilities (endpoints.py, helpers.py, base.py, conftest.py)
**Benefits:**
- Single source of truth for tests
- No duplication
- Tests automatically sync when backend is deployed
- Works across Docker containers sharing the same filesystem
**Alternative:** If symlinks don't work (different filesystems, Windows hosts), use `../ctrl/sync-tests.sh` to copy test files.
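The core of the symlink step can be sketched like this (an illustrative approximation built from the paths and domain names documented above; the real setup-symlinks.sh may differ):

```bash
# Link each test domain and each shared utility from the backend contracts
# directory into ward/tester, so both see the same files.
SRC="$HOME/core_room/amar/src/back/tests/contracts"
DST="$HOME/core_room/soleprint/src/ward/tools/tester/tests"
mkdir -p "$DST"

for domain in mascotas productos solicitudes workflows; do
    ln -sfn "$SRC/$domain" "$DST/$domain"   # -n replaces an existing link
done

for shared in endpoints.py helpers.py base.py conftest.py; do
    ln -sf "$SRC/$shared" "$DST/$shared"
done
```

Because these are symlinks, re-deploying the backend source updates the linked tests in place — no copy step needed.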
### sync-tests.sh (in ctrl/ directory)
**Purpose:** Sync test files as an alternative to symlinks.
```bash
# From local machine - sync to Docker
./ctrl/sync-tests.sh
# From local machine - sync to bare metal
./ctrl/sync-tests.sh --to-bare-metal
```
Use this when:
- Symlinks are not supported
- Services are on different filesystems
- You need independent test copies
### Verification
After setup, verify symlinks are working:
```bash
# Check symlinks exist
ssh mariano@mcrn.ar 'ls -lah ~/core_room/soleprint/src/ward/tools/tester/tests'
# Verify they point to correct location
ssh mariano@mcrn.ar 'readlink ~/core_room/soleprint/src/ward/tools/tester/tests/mascotas'
# Test in browser
open https://ward.mcrn.ar/tools/tester/
```
## Security Notes
- Never commit production `.env` files
- SSL certs in `/etc/letsencrypt/` (not in repo)
- Database volumes persist in Docker volumes
- Backup database regularly:
```bash
docker exec core_room_db pg_dump -U postgres amarback > backup.sql
```


@@ -1,186 +0,0 @@
#!/bin/bash
# Server Audit - Run on AWS to see current state
# Usage: ssh server 'bash -s' < ctrl/server/audit.sh
echo "=== SERVER AUDIT ==="
echo "Date: $(date)"
echo "Host: $(hostname)"
echo "User: $USER"
echo ""
# =============================================================================
# Directory Structure
# =============================================================================
echo "=== DIRECTORY STRUCTURE ==="
echo ""
echo "Home directory contents:"
ls -lah ~/
echo ""
echo "core_room structure (if exists):"
if [ -d ~/core_room ]; then
tree ~/core_room -L 2 -I ".git" 2>/dev/null || find ~/core_room -maxdepth 2 -type d | sort
else
echo " ~/core_room does NOT exist"
fi
echo ""
echo "soleprint location:"
if [ -d ~/soleprint ]; then
ls -lah ~/soleprint/ | head -10
echo " ..."
else
echo " ~/soleprint does NOT exist"
fi
echo ""
# =============================================================================
# Docker
# =============================================================================
echo "=== DOCKER ==="
echo ""
echo "Docker version:"
docker --version 2>/dev/null || echo " Docker NOT installed"
echo ""
echo "Docker Compose version:"
docker compose version 2>/dev/null || echo " Docker Compose NOT installed"
echo ""
echo "Running containers:"
docker ps --format "table {{.Names}}\t{{.Image}}\t{{.Status}}\t{{.Ports}}" 2>/dev/null || echo " None or Docker not running"
echo ""
echo "All containers (including stopped):"
docker ps -a --format "table {{.Names}}\t{{.Image}}\t{{.Status}}" 2>/dev/null | head -20
echo ""
echo "Docker networks:"
docker network ls 2>/dev/null || echo " None"
echo ""
echo "Docker volumes:"
docker volume ls 2>/dev/null | grep -E "core_room|amar|soleprint|DRIVER" || echo " No core_room/amar/soleprint volumes"
echo ""
# =============================================================================
# Nginx
# =============================================================================
echo "=== NGINX ==="
echo ""
echo "Nginx version:"
nginx -v 2>&1 || echo " Nginx NOT installed"
echo ""
echo "Nginx status:"
systemctl status nginx --no-pager -l 2>/dev/null | head -5 || echo " Cannot check status"
echo ""
echo "Sites enabled:"
ls -lah /etc/nginx/sites-enabled/ 2>/dev/null || echo " Directory does not exist"
echo ""
echo "Sites available (core_room related):"
ls -lah /etc/nginx/sites-available/ 2>/dev/null | grep -E "room|amar|soleprint|artery|album|ward" || echo " None found"
echo ""
# =============================================================================
# SSL Certificates
# =============================================================================
echo "=== SSL CERTIFICATES ==="
echo ""
echo "Certbot version:"
certbot --version 2>/dev/null || echo " Certbot NOT installed"
echo ""
echo "Certificates:"
if [ -d /etc/letsencrypt/live ]; then
sudo ls -lah /etc/letsencrypt/live/ 2>/dev/null || echo " Permission denied"
else
echo " /etc/letsencrypt/live does NOT exist"
fi
echo ""
# =============================================================================
# Environment Files
# =============================================================================
echo "=== ENVIRONMENT FILES ==="
echo ""
for location in ~/core_room/amar ~/core_room/soleprint ~/soleprint; do
if [ -d "$location" ]; then
echo "$location/.env:"
if [ -f "$location/.env" ]; then
echo " EXISTS"
echo " Size: $(stat -c%s "$location/.env" 2>/dev/null || stat -f%z "$location/.env" 2>/dev/null) bytes"
echo " ROOM_NAME: $(grep "^ROOM_NAME=" "$location/.env" 2>/dev/null || echo "not set")"
echo " NETWORK_NAME: $(grep "^NETWORK_NAME=" "$location/.env" 2>/dev/null || echo "not set")"
else
echo " does NOT exist"
fi
echo "$location/.env.example:"
[ -f "$location/.env.example" ] && echo " EXISTS" || echo " does NOT exist"
echo ""
fi
done
# =============================================================================
# Ports in Use
# =============================================================================
echo "=== PORTS IN USE ==="
echo ""
echo "Listening on ports (3000, 8000, 13000-13003):"
sudo netstat -tlnp 2>/dev/null | grep -E ":3000|:8000|:1300[0-3]" || sudo ss -tlnp | grep -E ":3000|:8000|:1300[0-3]" || echo " Cannot check (need sudo)"
echo ""
# =============================================================================
# Systemd Services
# =============================================================================
echo "=== SYSTEMD SERVICES ==="
echo ""
echo "Soleprint-related services:"
systemctl list-units --type=service --all 2>/dev/null | grep -E "soleprint|artery|album|ward" || echo " None found"
echo ""
# =============================================================================
# Disk Usage
# =============================================================================
echo "=== DISK USAGE ==="
echo ""
echo "Overall:"
df -h / 2>/dev/null
echo ""
echo "Docker space:"
docker system df 2>/dev/null || echo " Docker not available"
echo ""
# =============================================================================
# Summary
# =============================================================================
echo "=== SUMMARY ==="
echo ""
echo "Key Questions:"
echo ""
echo "1. Is there an existing core_room deployment?"
[ -d ~/core_room ] && echo " YES - ~/core_room exists" || echo " NO"
echo ""
echo "2. Are Docker containers running?"
docker ps -q 2>/dev/null | wc -l | xargs -I {} echo " {} containers running"
echo ""
echo "3. Is nginx configured for core_room?"
[ -f /etc/nginx/sites-enabled/core_room.conf ] && echo " YES - core_room.conf installed" || echo " NO"
echo ""
echo "4. Are there old individual nginx configs?"
ls /etc/nginx/sites-enabled/ 2>/dev/null | grep -E "amar|soleprint|artery|album|ward" | wc -l | xargs -I {} echo " {} old configs found"
echo ""
echo "5. SSL certificates present?"
[ -d /etc/letsencrypt/live/room.mcrn.ar ] && echo " *.room.mcrn.ar: YES" || echo " *.room.mcrn.ar: NO"
[ -d /etc/letsencrypt/live/mcrn.ar ] && echo " *.mcrn.ar: YES" || echo " *.mcrn.ar: NO"
echo ""
echo "=== END AUDIT ==="


@@ -1,156 +0,0 @@
#!/bin/bash
# Server Cleanup - Run on AWS to prepare for fresh deployment
# This script safely cleans up old deployments
#
# Usage: ssh server 'cd ~/core_room/ctrl/server && ./cleanup.sh'
set -e
echo "=== SERVER CLEANUP ==="
echo ""
echo "⚠️ This will:"
echo " - Stop all Docker containers"
echo " - Remove old nginx configs"
echo " - Keep data volumes and SSL certs"
echo " - Keep .env files"
echo ""
read -p "Continue? (y/N) " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
echo "Aborted."
exit 1
fi
# =============================================================================
# 1. Stop Docker Containers
# =============================================================================
echo ""
echo "Step 1: Stopping Docker containers..."
# Stop containers if Docker is available
if command -v docker &> /dev/null; then
# Stop all core_room/amar/soleprint containers
CONTAINERS=$(docker ps -q --filter "name=core_room" --filter "name=amar" --filter "name=soleprint" 2>/dev/null || true)
if [ -n "$CONTAINERS" ]; then
echo " Stopping containers..."
docker stop $CONTAINERS 2>/dev/null || true
echo " ✓ Containers stopped"
else
echo " ✓ No running containers to stop"
fi
else
echo " ⓘ Docker not installed, skipping..."
fi
# =============================================================================
# 2. Stop Systemd Services
# =============================================================================
echo ""
echo "Step 2: Stopping systemd services..."
SERVICES=$(systemctl list-units --type=service --all --no-pager 2>/dev/null | grep -E "soleprint|artery|album|ward" | awk '{print $1}' || true)
if [ -n "$SERVICES" ]; then
echo " Found services: $SERVICES"
for service in $SERVICES; do
echo " Stopping $service..."
sudo systemctl stop "$service" 2>/dev/null || true
sudo systemctl disable "$service" 2>/dev/null || true
done
echo " ✓ Services stopped and disabled"
else
echo " ✓ No systemd services found"
fi
# =============================================================================
# 3. Clean Up Nginx Configs
# =============================================================================
echo ""
echo "Step 3: Cleaning up old nginx configs..."
if [ -d /etc/nginx/sites-enabled ]; then
# Remove old individual configs
OLD_CONFIGS=(
"amar.room.mcrn.ar"
"amar.room.mcrn.ar.conf"
"api.amar.room.mcrn.ar"
"api.amar.room.mcrn.ar.conf"
"soleprint.mcrn.ar"
"soleprint.mcrn.ar.conf"
"artery.mcrn.ar"
"artery.mcrn.ar.conf"
"album.mcrn.ar"
"album.mcrn.ar.conf"
"ward.mcrn.ar"
"ward.mcrn.ar.conf"
)
for config in "${OLD_CONFIGS[@]}"; do
if [ -L "/etc/nginx/sites-enabled/$config" ] || [ -f "/etc/nginx/sites-enabled/$config" ]; then
echo " Removing /etc/nginx/sites-enabled/$config"
sudo rm -f "/etc/nginx/sites-enabled/$config"
fi
if [ -f "/etc/nginx/sites-available/$config" ]; then
echo " Removing /etc/nginx/sites-available/$config"
sudo rm -f "/etc/nginx/sites-available/$config"
fi
done
echo " ✓ Old nginx configs removed"
# Test nginx config
if command -v nginx &> /dev/null; then
if sudo nginx -t 2>/dev/null; then
echo " Reloading nginx..."
sudo systemctl reload nginx 2>/dev/null || true
fi
fi
else
echo " ⓘ Nginx not configured, skipping..."
fi
# =============================================================================
# 4. Verify What's Kept
# =============================================================================
echo ""
echo "Step 4: Verifying preserved data..."
# Check Docker volumes
if command -v docker &> /dev/null; then
VOLUMES=$(docker volume ls -q | grep -E "core_room|amar|soleprint" 2>/dev/null || true)
if [ -n "$VOLUMES" ]; then
echo " ✓ Docker volumes preserved:"
docker volume ls | grep -E "core_room|amar|soleprint|DRIVER" || true
fi
fi
# Check .env files
echo ""
echo " .env files preserved:"
for envfile in ~/core_room/amar/.env ~/core_room/soleprint/.env ~/soleprint/.env; do
[ -f "$envfile" ] && echo "$envfile" || true
done
# Check SSL certs
echo ""
echo " SSL certificates preserved:"
[ -d /etc/letsencrypt/live/room.mcrn.ar ] && echo " ✓ *.room.mcrn.ar" || echo " ✗ *.room.mcrn.ar (missing)"
[ -d /etc/letsencrypt/live/mcrn.ar ] && echo " ✓ *.mcrn.ar" || echo " ✗ *.mcrn.ar (missing)"
# =============================================================================
# Done
# =============================================================================
echo ""
echo "=== Cleanup Complete ==="
echo ""
echo "Next steps:"
echo " 1. Deploy from local:"
echo " ./ctrl/deploy.sh"
echo ""
echo " 2. Run server setup:"
echo " cd ~/core_room/ctrl/server && ./setup.sh"
echo ""
echo " 3. Build and start:"
echo " cd ~/core_room/ctrl && ./build.sh && ./start.sh -d"
echo ""


@@ -1,185 +0,0 @@
#!/bin/bash
# Configure - Generate configuration files
# Run as appuser (mariano), no sudo required
#
# Usage:
# ./configure.sh
#
# Generates:
# - Nginx configs for core_room
# - Validates .env files
# - Outputs to .generated/ directory
#
# After running this, admin runs: sudo ./setup.sh
set -e
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
GEN_DIR="$SCRIPT_DIR/.generated"
CORE_ROOM_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
echo "=== Core Room Configure ==="
echo ""
echo "This script generates configuration files for deployment."
echo "Run as appuser (no sudo required)."
echo ""
# Ensure we're NOT running as root
if [ "$EUID" -eq 0 ]; then
echo "ERROR: Do not run this script with sudo"
echo "Run as appuser instead: ./configure.sh"
exit 1
fi
# =============================================================================
# 1. Create .generated directory
# =============================================================================
echo "Step 1: Preparing output directory..."
mkdir -p "$GEN_DIR"
echo " Output directory: $GEN_DIR"
# =============================================================================
# 2. Load and validate environment
# =============================================================================
echo ""
echo "Step 2: Loading environment..."
# Load core_room/.env
if [ -f "$CORE_ROOM_ROOT/.env" ]; then
set -a
source "$CORE_ROOM_ROOT/.env"
set +a
echo " Loaded: core_room/.env"
else
echo " ERROR: core_room/.env not found"
exit 1
fi
# Validate required vars
REQUIRED_VARS="ROOM_NAME DEPLOYMENT_NAME NETWORK_NAME MANAGED_DOMAIN SOLEPRINT_DOMAIN"
MISSING=""
for var in $REQUIRED_VARS; do
if [ -z "${!var}" ]; then
MISSING="$MISSING $var"
fi
done
if [ -n "$MISSING" ]; then
echo " ERROR: Missing required vars in core_room/.env:$MISSING"
exit 1
fi
echo " ROOM_NAME: $ROOM_NAME"
echo " DEPLOYMENT_NAME: $DEPLOYMENT_NAME"
echo " MANAGED_DOMAIN: $MANAGED_DOMAIN"
echo " SOLEPRINT_DOMAIN: $SOLEPRINT_DOMAIN"
# =============================================================================
# 3. Check .env files for services
# =============================================================================
echo ""
echo "Step 3: Checking service .env files..."
for service in amar soleprint; do
SERVICE_DIR="$CORE_ROOM_ROOT/$service"
if [ ! -f "$SERVICE_DIR/.env" ]; then
if [ -f "$SERVICE_DIR/.env.example" ]; then
echo " Creating $service/.env from example..."
cp "$SERVICE_DIR/.env.example" "$SERVICE_DIR/.env"
echo " ⚠️ Edit $service/.env with production values before deployment"
else
echo " ERROR: $service/.env.example not found"
exit 1
fi
else
echo "$service/.env exists"
fi
done
# =============================================================================
# 4. Generate Nginx configuration
# =============================================================================
echo ""
echo "Step 4: Generating nginx configuration..."
TEMPLATE="$SCRIPT_DIR/nginx/core_room.conf.template"
OUTPUT="$GEN_DIR/core_room.nginx.conf"
if [ ! -f "$TEMPLATE" ]; then
echo " ERROR: Template not found: $TEMPLATE"
exit 1
fi
# Check for SSL certificates (just warn, don't fail)
SSL_CERT_AMAR="/etc/letsencrypt/live/room.mcrn.ar/fullchain.pem"
SSL_KEY_AMAR="/etc/letsencrypt/live/room.mcrn.ar/privkey.pem"
SSL_CERT_SOLEPRINT="/etc/letsencrypt/live/mcrn.ar/fullchain.pem"
SSL_KEY_SOLEPRINT="/etc/letsencrypt/live/mcrn.ar/privkey.pem"
echo " Checking SSL certificates..."
for cert in "$SSL_CERT_AMAR" "$SSL_KEY_AMAR" "$SSL_CERT_SOLEPRINT" "$SSL_KEY_SOLEPRINT"; do
if [ -f "$cert" ]; then
echo "$(basename $cert)"
else
echo " ⚠️ Missing: $cert"
echo " Admin will need to generate SSL certificates"
fi
done
# Generate nginx config from template
export ROOM_NAME DEPLOYMENT_NAME MANAGED_DOMAIN SOLEPRINT_DOMAIN
export SSL_CERT_AMAR SSL_KEY_AMAR SSL_CERT_SOLEPRINT SSL_KEY_SOLEPRINT
envsubst < "$TEMPLATE" > "$OUTPUT"
echo " ✓ Generated: $OUTPUT"
# =============================================================================
# 5. Generate deployment summary
# =============================================================================
echo ""
echo "Step 5: Generating deployment summary..."
SUMMARY="$GEN_DIR/DEPLOYMENT.txt"
cat > "$SUMMARY" <<EOF
Core Room Deployment Configuration
Generated: $(date)
User: $USER
Host: $(hostname)
=== Environment ===
ROOM_NAME=$ROOM_NAME
DEPLOYMENT_NAME=$DEPLOYMENT_NAME
NETWORK_NAME=$NETWORK_NAME
MANAGED_DOMAIN=$MANAGED_DOMAIN
SOLEPRINT_DOMAIN=$SOLEPRINT_DOMAIN
=== Generated Files ===
- core_room.nginx.conf → /etc/nginx/sites-available/core_room.conf
=== Next Steps ===
1. Review generated files in: $GEN_DIR
2. Have admin run: sudo ./setup.sh
EOF
echo " ✓ Generated: $SUMMARY"
# =============================================================================
# Done
# =============================================================================
echo ""
echo "=== Configuration Complete ==="
echo ""
echo "Generated files in: $GEN_DIR"
echo ""
echo "Next steps:"
echo " 1. Review generated nginx config:"
echo " cat $OUTPUT"
echo ""
echo " 2. Have system admin run:"
echo " sudo ./setup.sh"
echo ""
echo " 3. Or review deployment summary:"
echo " cat $SUMMARY"
echo ""


@@ -1,55 +0,0 @@
#!/bin/bash
# Install nginx config for core_room
# Run with: sudo ./install-nginx.sh
set -e
# Application user (can be overridden with environment variable)
APP_USER="${APP_USER:-mariano}"
APP_HOME="/home/${APP_USER}"
NGINX_SOURCE="${APP_HOME}/core_room/ctrl/server/nginx/core_room.conf"
NGINX_AVAILABLE="/etc/nginx/sites-available/core_room.conf"
NGINX_ENABLED="/etc/nginx/sites-enabled/core_room.conf"
echo "=== Installing nginx config for core_room ==="
echo "App user: $APP_USER"
echo "App home: $APP_HOME"
echo ""
# Check if source file exists
if [ ! -f "$NGINX_SOURCE" ]; then
echo "Error: Source file not found: $NGINX_SOURCE"
exit 1
fi
# Copy to sites-available
echo "Installing config to sites-available..."
cp "$NGINX_SOURCE" "$NGINX_AVAILABLE"
echo " ✓ Config installed to $NGINX_AVAILABLE"
# Create symlink to sites-enabled
echo "Enabling config..."
ln -sf "$NGINX_AVAILABLE" "$NGINX_ENABLED"
echo " ✓ Config enabled"
# Test nginx configuration
echo ""
echo "Testing nginx configuration..."
if nginx -t; then
echo " ✓ Nginx config is valid"
# Reload nginx
echo ""
echo "Reloading nginx..."
systemctl reload nginx
echo " ✓ Nginx reloaded"
echo ""
echo "=== Installation complete ==="
else
echo ""
echo "Error: Nginx configuration test failed"
echo "Config was installed but nginx was not reloaded"
exit 1
fi


@@ -1,292 +0,0 @@
# Core Room - All Services Nginx Config
# Single config for entire room deployment
#
# Docker Services (primary):
# - amar.room.mcrn.ar (frontend:3000 + backend:8000)
# - soleprint.mcrn.ar (port 13000)
# - artery.mcrn.ar (port 13001)
# - album.mcrn.ar (port 13002)
# - ward.mcrn.ar (port 13003)
#
# Bare Metal Services (fallback):
# - soleprint.bare.mcrn.ar (port 12000)
# - artery.bare.mcrn.ar (port 12001)
# - album.bare.mcrn.ar (port 12002)
# - ward.bare.mcrn.ar (port 12003)
# =============================================================================
# AMAR - Frontend + Backend
# =============================================================================
server {
listen 80;
server_name amar.room.mcrn.ar;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
server_name amar.room.mcrn.ar;
ssl_certificate /etc/letsencrypt/live/room.mcrn.ar/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/room.mcrn.ar/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
# Backend API
location /api/ {
proxy_pass http://127.0.0.1:8000/api/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 300;
}
# Django admin
location /admin/ {
proxy_pass http://127.0.0.1:8000/admin/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Django static files
location /static/ {
proxy_pass http://127.0.0.1:8000/static/;
}
# Frontend (default)
location / {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 300;
}
}
# =============================================================================
# SOLEPRINT - Main Service
# =============================================================================
server {
listen 80;
server_name soleprint.mcrn.ar;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
server_name soleprint.mcrn.ar;
ssl_certificate /etc/letsencrypt/live/mcrn.ar/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mcrn.ar/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
location / {
proxy_pass http://127.0.0.1:13000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# =============================================================================
# ARTERY - API Gateway
# =============================================================================
server {
listen 80;
server_name artery.mcrn.ar;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
server_name artery.mcrn.ar;
ssl_certificate /etc/letsencrypt/live/mcrn.ar/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mcrn.ar/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
location / {
proxy_pass http://127.0.0.1:13001;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# =============================================================================
# ALBUM - Media Service
# =============================================================================
server {
listen 80;
server_name album.mcrn.ar;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
server_name album.mcrn.ar;
ssl_certificate /etc/letsencrypt/live/mcrn.ar/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mcrn.ar/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
location / {
proxy_pass http://127.0.0.1:13002;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# =============================================================================
# WARD - Admin Interface
# =============================================================================
server {
listen 80;
server_name ward.mcrn.ar;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
server_name ward.mcrn.ar;
ssl_certificate /etc/letsencrypt/live/mcrn.ar/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mcrn.ar/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
location / {
proxy_pass http://127.0.0.1:13003;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# =============================================================================
# BARE METAL SERVICES (FALLBACK)
# =============================================================================
# =============================================================================
# SOLEPRINT BARE - Main Service (Bare Metal)
# =============================================================================
server {
listen 80;
server_name soleprint.bare.mcrn.ar;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
server_name soleprint.bare.mcrn.ar;
ssl_certificate /etc/letsencrypt/live/bare.mcrn.ar/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/bare.mcrn.ar/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
location / {
proxy_pass http://127.0.0.1:12000;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# =============================================================================
# ARTERY BARE - API Gateway (Bare Metal)
# =============================================================================
server {
listen 80;
server_name artery.bare.mcrn.ar;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
server_name artery.bare.mcrn.ar;
ssl_certificate /etc/letsencrypt/live/bare.mcrn.ar/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/bare.mcrn.ar/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
location / {
proxy_pass http://127.0.0.1:12001;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# =============================================================================
# ALBUM BARE - Media Service (Bare Metal)
# =============================================================================
server {
listen 80;
server_name album.bare.mcrn.ar;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
server_name album.bare.mcrn.ar;
ssl_certificate /etc/letsencrypt/live/bare.mcrn.ar/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/bare.mcrn.ar/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
location / {
proxy_pass http://127.0.0.1:12002;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# =============================================================================
# WARD BARE - Admin Interface (Bare Metal)
# =============================================================================
server {
listen 80;
server_name ward.bare.mcrn.ar;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
server_name ward.bare.mcrn.ar;
ssl_certificate /etc/letsencrypt/live/bare.mcrn.ar/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/bare.mcrn.ar/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
location / {
proxy_pass http://127.0.0.1:12003;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}


@@ -1,107 +0,0 @@
# Core Room - Nginx Config Template
# Generated from environment variables
#
# Environment variables:
# DOMAIN_AMAR - Amar domain (e.g., amarmascotas.local.com or amar.room.mcrn.ar)
# DOMAIN_SOLEPRINT - Soleprint domain (e.g., soleprint.local.com or soleprint.mcrn.ar)
# USE_SSL - true/false - whether to use SSL
# SSL_CERT_PATH - Path to SSL certificate (if USE_SSL=true)
# SSL_KEY_PATH - Path to SSL key (if USE_SSL=true)
# BACKEND_PORT - Backend port (default: 8000)
# FRONTEND_PORT - Frontend port (default: 3000)
# SOLEPRINT_PORT - Soleprint port (default: 13000)
# =============================================================================
# AMAR - Frontend + Backend
# =============================================================================
server {
listen 80;
server_name ${DOMAIN_AMAR};
${SSL_REDIRECT}
# Backend API
location /api/ {
proxy_pass http://127.0.0.1:${BACKEND_PORT}/api/;
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
proxy_read_timeout 300;
}
# Django admin
location /admin/ {
proxy_pass http://127.0.0.1:${BACKEND_PORT}/admin/;
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
}
# Django static files
location /static/ {
proxy_pass http://127.0.0.1:${BACKEND_PORT}/static/;
}
# Frontend (default)
location / {
proxy_pass http://127.0.0.1:${FRONTEND_PORT};
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
proxy_read_timeout 300;
# WebSocket support for Next.js hot reload
proxy_http_version 1.1;
proxy_set_header Upgrade \$http_upgrade;
proxy_set_header Connection "upgrade";
}
}
${SSL_SERVER_BLOCK}
# =============================================================================
# SOLEPRINT - Main Service + Ecosystem
# =============================================================================
server {
listen 80;
server_name ${DOMAIN_SOLEPRINT};
${SOLEPRINT_SSL_REDIRECT}
# Artery - API Gateway
location /artery/ {
proxy_pass http://127.0.0.1:${ARTERY_PORT}/;
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
}
# Album - Media Service
location /album/ {
proxy_pass http://127.0.0.1:${ALBUM_PORT}/;
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
}
# Ward - Admin Interface
location /ward/ {
proxy_pass http://127.0.0.1:${WARD_PORT}/;
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
}
# Soleprint - Main Service (default)
location / {
proxy_pass http://127.0.0.1:${SOLEPRINT_PORT};
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
}
}
${SOLEPRINT_SSL_SERVER_BLOCK}


@@ -1,152 +0,0 @@
# Nginx Config Template for Docker Local Development
# Uses environment variables from .env files
# Variables: DEPLOYMENT_NAME, ROOM_NAME, MANAGED_DOMAIN, SOLEPRINT_DOMAIN
# =============================================================================
# MANAGED APP WITH WRAPPER - amar.room.local.com
# =============================================================================
server {
listen 80;
server_name ${MANAGED_DOMAIN};
# Wrapper static files
location /wrapper/ {
alias /app/wrapper/;
add_header Cache-Control "no-cache";
}
# Backend API
location /api/ {
proxy_pass http://${DEPLOYMENT_NAME}_backend:8000/api/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 300;
}
# Django admin
location /admin/ {
proxy_pass http://${DEPLOYMENT_NAME}_backend:8000/admin/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Django static files
location /static/ {
proxy_pass http://${DEPLOYMENT_NAME}_backend:8000/static/;
}
# Frontend with wrapper injection
location / {
proxy_pass http://${DEPLOYMENT_NAME}_frontend:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 300;
# WebSocket support for Next.js hot reload
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# Inject wrapper scripts into HTML
sub_filter '</head>' '<link rel="stylesheet" href="/wrapper/sidebar.css"><script src="/wrapper/sidebar.js"></script></head>';
sub_filter_once on;
proxy_set_header Accept-Encoding "";
}
}
# =============================================================================
# MANAGED APP WITHOUT WRAPPER - amar.local.com
# =============================================================================
server {
listen 80;
server_name amar.local.com;
# Backend API
location /api/ {
proxy_pass http://${DEPLOYMENT_NAME}_backend:8000/api/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 300;
}
# Django admin
location /admin/ {
proxy_pass http://${DEPLOYMENT_NAME}_backend:8000/admin/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Django static files
location /static/ {
proxy_pass http://${DEPLOYMENT_NAME}_backend:8000/static/;
}
# Frontend (clean, no wrapper)
location / {
proxy_pass http://${DEPLOYMENT_NAME}_frontend:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 300;
# WebSocket support for Next.js hot reload
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
# =============================================================================
# SOLEPRINT - Main Service + Ecosystem
# =============================================================================
server {
listen 80;
server_name ${SOLEPRINT_DOMAIN};
# Artery - API Gateway
location /artery/ {
proxy_pass http://${ROOM_NAME}_artery:8000/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Album - Media Service
location /album/ {
proxy_pass http://${ROOM_NAME}_album:8000/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Ward - Admin Interface
location /ward/ {
proxy_pass http://${ROOM_NAME}_ward:8000/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Soleprint - Main Service (default)
location / {
proxy_pass http://${ROOM_NAME}_soleprint:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}


@@ -1,6 +0,0 @@
# Conditional wrapper injection based on ENABLE_WRAPPER env var
{{if ENABLE_WRAPPER}}
sub_filter '</head>' '<link rel="stylesheet" href="/wrapper/sidebar.css"><script src="/wrapper/sidebar.js"></script></head>';
sub_filter_once on;
proxy_set_header Accept-Encoding "";
{{endif}}
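The `sub_filter` pair above rewrites the upstream HTML in-stream, which is also why `Accept-Encoding` is cleared: nginx cannot substitute inside a gzipped response, so the upstream must be asked for plain text. A rough shell equivalent of the single substitution, for illustration only (sample HTML, not real app output):

```shell
# Simulate nginx's sub_filter: replace the first </head> with the wrapper assets
html='<html><head><title>amar</title></head><body>app</body></html>'
inject='<link rel="stylesheet" href="/wrapper/sidebar.css"><script src="/wrapper/sidebar.js"></script></head>'
echo "$html" | sed "s#</head>#${inject}#"
```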


@@ -1,60 +0,0 @@
# Nginx Config Template for Docker
# Uses environment variables from .env files
# Variables: DEPLOYMENT_NAME, MANAGED_DOMAIN, SOLEPRINT_DOMAIN, MANAGED_*
# =============================================================================
# MANAGED DOMAIN
# =============================================================================
# Completely defined by the parent deployment (e.g., core_room)
# Soleprint doesn't know or care about the managed app's structure
server {
listen 80;
server_name ${MANAGED_DOMAIN};
# All location blocks defined in MANAGED_LOCATIONS env var
${MANAGED_LOCATIONS}
}
# =============================================================================
# SOLEPRINT - Main Service + Ecosystem
# =============================================================================
server {
listen 80;
server_name ${SOLEPRINT_DOMAIN};
# Artery - API Gateway
location /artery/ {
proxy_pass http://${DEPLOYMENT_NAME}_artery:8000/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Album - Media Service
location /album/ {
proxy_pass http://${DEPLOYMENT_NAME}_album:8000/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Ward - Admin Interface
location /ward/ {
proxy_pass http://${DEPLOYMENT_NAME}_ward:8000/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Soleprint - Main Service (default)
location / {
proxy_pass http://${DEPLOYMENT_NAME}_soleprint:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}


@@ -1,23 +0,0 @@
#!/bin/sh
# Generate nginx config based on ENABLE_WRAPPER env var
TEMPLATE="/etc/nginx/templates/docker-local.conf.template"
OUTPUT="/etc/nginx/conf.d/default.conf"
# Start with the template
cp "$TEMPLATE" "$OUTPUT"
# If ENABLE_WRAPPER is not true, remove wrapper injection
if [ "$ENABLE_WRAPPER" != "true" ]; then
echo "Wrapper disabled - removing injection lines"
# Delete the whole /wrapper/ location block; deleting only lines that match
# "wrapper" would strand its closing brace and break the generated config
sed -i '\#location /wrapper/#,/}/d' "$OUTPUT"
sed -i '/sub_filter/d' "$OUTPUT"
sed -i '/Accept-Encoding/d' "$OUTPUT"
fi
# Replace env vars
envsubst '${DEPLOYMENT_NAME} ${ROOM_NAME} ${MANAGED_DOMAIN} ${SOLEPRINT_DOMAIN}' < "$OUTPUT" > /tmp/nginx.conf
mv /tmp/nginx.conf "$OUTPUT"
echo "Nginx config generated (ENABLE_WRAPPER=$ENABLE_WRAPPER)"
cat "$OUTPUT"


@@ -1,160 +0,0 @@
#!/bin/bash
# Setup symlinks for test directories
# Enables sharing test directories across different services on the same filesystem
#
# This script should be run on the AWS server after deployment
# It creates symlinks to allow ward/tester to access test data from amar_django_back_contracts
#
# Usage:
# ./setup-symlinks.sh [--dry-run]
set -e
DRY_RUN=""
if [ "$1" == "--dry-run" ]; then
DRY_RUN="echo [DRY-RUN]"
fi
echo "=== Setting up Test Directory Symlinks ==="
echo ""
# Check if we're on the server
if [ ! -d "$HOME/core_room" ]; then
echo "Error: ~/core_room directory not found"
echo "This script should run on the AWS server after deployment"
exit 1
fi
cd "$HOME/core_room"
# =============================================================================
# Test Directory Symlinks
# =============================================================================
echo "Step 1: Creating symlinks for test directories..."
echo ""
# Ward tester tests directory
WARD_TESTS_DIR="soleprint/src/ward/tools/tester/tests"
CONTRACTS_SOURCE="amar/src/back/tests/contracts"
# Create ward tests directory if it doesn't exist
if [ ! -d "$WARD_TESTS_DIR" ]; then
$DRY_RUN mkdir -p "$WARD_TESTS_DIR"
echo " Created $WARD_TESTS_DIR"
fi
# Check if source contracts directory exists
if [ ! -d "$CONTRACTS_SOURCE" ]; then
echo " ⚠ Warning: Source contracts directory not found: $CONTRACTS_SOURCE"
echo " Skipping test symlinks"
else
# Create symlinks for each test domain
for domain_dir in "$CONTRACTS_SOURCE"/*; do
if [ -d "$domain_dir" ]; then
domain_name=$(basename "$domain_dir")
# Skip __pycache__ and other Python artifacts
if [[ "$domain_name" == "__pycache__" ]] || [[ "$domain_name" == *.pyc ]]; then
continue
fi
target_link="$WARD_TESTS_DIR/$domain_name"
# Remove existing symlink or directory
if [ -L "$target_link" ]; then
$DRY_RUN rm "$target_link"
echo " Removed existing symlink: $target_link"
elif [ -d "$target_link" ]; then
echo " ⚠ Warning: $target_link exists as directory, not symlink"
echo " To replace with symlink, manually remove: rm -rf $target_link"
continue
fi
# Create relative symlink
# From: soleprint/src/ward/tools/tester/tests/
# To: amar/src/back/tests/contracts/
# Relative path: ../../../../../../amar/src/back/tests/contracts/
# (the link sits six directories deep, so six ../ are needed to reach the repo root)
$DRY_RUN ln -s "../../../../../../$CONTRACTS_SOURCE/$domain_name" "$target_link"
echo " ✓ Created symlink: $target_link -> $CONTRACTS_SOURCE/$domain_name"
fi
done
# Also symlink shared test utilities
for shared_file in "endpoints.py" "helpers.py" "base.py" "conftest.py"; do
source_file="$CONTRACTS_SOURCE/$shared_file"
target_file="$WARD_TESTS_DIR/$shared_file"
if [ -f "$source_file" ]; then
if [ -L "$target_file" ]; then
$DRY_RUN rm "$target_file"
fi
if [ ! -e "$target_file" ]; then
$DRY_RUN ln -s "../../../../../../$source_file" "$target_file"
echo " ✓ Created symlink: $target_file"
fi
fi
done
fi
echo ""
# =============================================================================
# Bare Metal Symlinks (if bare metal path exists)
# =============================================================================
if [ -d "$HOME/soleprint" ]; then
echo "Step 2: Creating bare metal symlinks..."
echo ""
BARE_WARD_TESTS="$HOME/soleprint/ward/tools/tester/tests"
if [ ! -d "$BARE_WARD_TESTS" ]; then
$DRY_RUN mkdir -p "$BARE_WARD_TESTS"
echo " Created $BARE_WARD_TESTS"
fi
# For bare metal, we can symlink to the docker contract source if it's synced
# Or we can sync tests separately (handled by sync-tests.sh)
echo " Bare metal tests managed by sync-tests.sh"
echo " Run: $HOME/core_room/ctrl/sync-tests.sh"
else
echo "Step 2: Bare metal path not found, skipping"
fi
echo ""
# =============================================================================
# Verification
# =============================================================================
echo "=== Verification ==="
echo ""
if [ -d "$WARD_TESTS_DIR" ]; then
echo "Ward tester tests:"
ls -lah "$WARD_TESTS_DIR" | grep -E "^l|^d" || echo " No directories or symlinks found"
else
echo " ⚠ Ward tests directory not found"
fi
echo ""
# =============================================================================
# Done
# =============================================================================
if [ -n "$DRY_RUN" ]; then
echo "=== Dry run complete (no changes made) ==="
else
echo "=== Symlink Setup Complete ==="
fi
echo ""
echo "Next steps:"
echo " 1. Verify symlinks are working:"
echo " ls -lah $WARD_TESTS_DIR"
echo ""
echo " 2. Restart ward container to pick up changes:"
echo " cd ~/core_room/ctrl && docker compose restart ward"
echo ""
echo " 3. Test in browser:"
echo " https://ward.mcrn.ar/tools/tester/"
echo ""
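The relative-path arithmetic in the symlink step is easy to get wrong: a link created inside `soleprint/src/ward/tools/tester/tests/` sits six directories below the repo root, so six `..` components are needed before `amar/…`. A scratch-directory sanity check (throwaway temp dir, hypothetical layout):

```shell
# Recreate the layout in a temp dir and confirm the six-up relative link resolves
tmp=$(mktemp -d)
mkdir -p "$tmp/soleprint/src/ward/tools/tester/tests" \
         "$tmp/amar/src/back/tests/contracts/accounts"
cd "$tmp/soleprint/src/ward/tools/tester/tests"
ln -s "../../../../../../amar/src/back/tests/contracts/accounts" accounts
readlink -f accounts   # resolves to the real contracts/accounts directory
```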


@@ -1,217 +0,0 @@
#!/bin/bash
# Setup - Apply configuration to system
# Must run with sudo/as root
#
# Usage:
# sudo ./setup.sh
#
# Prerequisites:
# - Run ./configure.sh first (as appuser)
#
# This script:
# - Installs system packages (docker, nginx, certbot)
# - Applies generated nginx config to /etc/nginx/
# - Manages nginx service
set -e
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
GEN_DIR="$SCRIPT_DIR/.generated"
echo "=== Core Room Setup (System Configuration) ==="
echo ""
# Must run as root
if [ "$EUID" -ne 0 ]; then
echo "ERROR: This script must be run with sudo"
echo "Usage: sudo ./setup.sh"
exit 1
fi
# Get the actual user who ran sudo
if [ -z "$SUDO_USER" ]; then
echo "ERROR: SUDO_USER not set"
echo "Run with: sudo ./setup.sh (not as root directly)"
exit 1
fi
ACTUAL_USER="$SUDO_USER"
# Resolve the user's home directory from passwd (avoids eval on user input)
ACTUAL_HOME=$(getent passwd "$ACTUAL_USER" | cut -d: -f6)
echo "Running as: root (via sudo)"
echo "Actual user: $ACTUAL_USER"
echo "User home: $ACTUAL_HOME"
echo ""
# Check that configure was run first
if [ ! -d "$GEN_DIR" ] || [ ! -f "$GEN_DIR/core_room.nginx.conf" ]; then
echo "ERROR: Configuration files not found"
echo ""
echo "Run ./configure.sh first (as $ACTUAL_USER):"
echo " su - $ACTUAL_USER"
echo " cd $(dirname $SCRIPT_DIR)"
echo " ./server/configure.sh"
exit 1
fi
echo "✓ Found generated configuration files"
echo ""
# =============================================================================
# 1. Install System Dependencies
# =============================================================================
echo "Step 1: Installing system dependencies..."
echo ""
# Docker
if ! command -v docker &> /dev/null; then
echo " Installing Docker..."
curl -fsSL https://get.docker.com -o /tmp/get-docker.sh
sh /tmp/get-docker.sh
rm /tmp/get-docker.sh
echo " ✓ Docker installed"
else
echo " ✓ Docker already installed"
fi
# Add user to docker group
if ! groups "$ACTUAL_USER" | grep -q docker; then
echo " Adding $ACTUAL_USER to docker group..."
usermod -aG docker "$ACTUAL_USER"
echo " ✓ $ACTUAL_USER added to docker group"
echo " (User must log out and back in for this to take effect)"
else
echo " ✓ $ACTUAL_USER already in docker group"
fi
# Docker Compose
if ! docker compose version &> /dev/null; then
echo " Installing Docker Compose plugin..."
apt-get update
apt-get install -y docker-compose-plugin
echo " ✓ Docker Compose installed"
else
echo " ✓ Docker Compose already installed"
fi
# Nginx
if ! command -v nginx &> /dev/null; then
echo " Installing Nginx..."
apt-get update
apt-get install -y nginx
echo " ✓ Nginx installed"
else
echo " ✓ Nginx already installed"
fi
# Certbot
if ! command -v certbot &> /dev/null; then
echo " Installing Certbot..."
apt-get update
apt-get install -y certbot python3-certbot-nginx
echo " ✓ Certbot installed"
else
echo " ✓ Certbot already installed"
fi
# =============================================================================
# 2. Install Nginx Configuration
# =============================================================================
echo ""
echo "Step 2: Installing nginx configuration..."
NGINX_AVAILABLE="/etc/nginx/sites-available/core_room.conf"
NGINX_ENABLED="/etc/nginx/sites-enabled/core_room.conf"
SOURCE_CONFIG="$GEN_DIR/core_room.nginx.conf"
# Copy generated config
cp "$SOURCE_CONFIG" "$NGINX_AVAILABLE"
echo " ✓ Copied to: $NGINX_AVAILABLE"
# Enable site
ln -sf "$NGINX_AVAILABLE" "$NGINX_ENABLED"
echo " ✓ Enabled site: $NGINX_ENABLED"
# Remove default site if exists
if [ -f "/etc/nginx/sites-enabled/default" ]; then
rm "/etc/nginx/sites-enabled/default"
echo " ✓ Removed default site"
fi
# Test nginx config
echo " Testing nginx configuration..."
if nginx -t; then
echo " ✓ Nginx configuration valid"
else
echo " ERROR: Nginx configuration test failed"
exit 1
fi
# =============================================================================
# 3. Manage Nginx Service
# =============================================================================
echo ""
echo "Step 3: Managing nginx service..."
if systemctl is-active --quiet nginx; then
echo " Reloading nginx..."
systemctl reload nginx
echo " ✓ Nginx reloaded"
else
echo " Starting nginx..."
systemctl start nginx
systemctl enable nginx
echo " ✓ Nginx started and enabled"
fi
# =============================================================================
# 4. SSL Certificate Information
# =============================================================================
echo ""
echo "Step 4: SSL certificates..."
SSL_CERTS=(
"/etc/letsencrypt/live/room.mcrn.ar"
"/etc/letsencrypt/live/mcrn.ar"
)
ALL_EXIST=true
for cert_dir in "${SSL_CERTS[@]}"; do
if [ -d "$cert_dir" ]; then
echo " ✓ Certificate exists: $(basename $cert_dir)"
else
echo " ⚠️ Certificate missing: $(basename $cert_dir)"
ALL_EXIST=false
fi
done
if [ "$ALL_EXIST" = false ]; then
echo ""
echo " To generate missing certificates:"
echo " certbot certonly --manual --preferred-challenges dns -d '*.room.mcrn.ar'"
echo " certbot certonly --manual --preferred-challenges dns -d '*.mcrn.ar'"
echo ""
echo " After generating, reload nginx:"
echo " systemctl reload nginx"
fi
# =============================================================================
# Done
# =============================================================================
echo ""
echo "=== Setup Complete ==="
echo ""
echo "System configuration applied successfully."
echo ""
echo "Next steps:"
echo " 1. If $ACTUAL_USER was added to docker group, they must:"
echo " - Log out and log back in"
echo " - Or run: newgrp docker"
echo ""
echo " 2. Generate SSL certificates if missing (see above)"
echo ""
echo " 3. Deploy application:"
echo " su - $ACTUAL_USER"
echo " cd $ACTUAL_HOME/core_room/ctrl"
echo " ./deploy.sh"
echo ""


@@ -1,48 +0,0 @@
#!/bin/bash
# Local setup - prepare .env files
#
# This script runs LOCALLY to create .env files from examples.
# For server setup, use: ssh server 'cd ~/core_room/server && ./setup.sh'
#
# Usage:
# ./setup.sh
set -e
# Change to parent directory (services are in ../service_name)
cd "$(dirname "$0")/.."
SERVICE_DIRS=()
# Find all service directories (have docker-compose.yml, exclude ctrl/nginx/server)
for dir in */; do
dirname="${dir%/}"
if [ -f "$dir/docker-compose.yml" ] && [ "$dirname" != "ctrl" ] && [ "$dirname" != "nginx" ] && [ "$dirname" != "server" ]; then
SERVICE_DIRS+=("$dirname")
fi
done
echo "=== Local Environment Setup ==="
echo ""
# Create .env files from examples
echo "Creating .env files from examples..."
for service in "${SERVICE_DIRS[@]}"; do
if [ ! -f "$service/.env" ] && [ -f "$service/.env.example" ]; then
cp "$service/.env.example" "$service/.env"
echo " Created $service/.env"
elif [ -f "$service/.env" ]; then
echo " $service/.env already exists"
fi
done
echo ""
echo "=== Local Setup Complete ==="
echo ""
echo "Local development:"
echo " - Edit .env files for local values"
echo " - Run: ./start.sh"
echo ""
echo "Server deployment:"
echo " 1. Deploy: ./deploy.sh"
echo " 2. On server: ssh server 'cd ~/core_room/server && ./setup.sh'"


@@ -1,102 +0,0 @@
#!/bin/bash
# Start mainroom services (amar + soleprint)
#
# Usage:
# ./start.sh # Start all (foreground, see logs)
# ./start.sh <service> # Start specific service (e.g., amar, soleprint)
# ./start.sh -d # Start all (detached)
# ./start.sh --build # Start with rebuild
# ./start.sh -d --build # Start detached with rebuild
# ./start.sh --with-nginx # Start with nginx container (local dev only)
set -e
# Change to parent directory (services are in ../service_name)
cd "$(dirname "$0")/.."
# Export mainroom/.env vars so child docker-compose files can use them
if [ -f ".env" ]; then
set -a
source .env
set +a
fi
TARGET="all"
DETACH=""
BUILD=""
WITH_NGINX=""
SERVICE_DIRS=()
# Find all service directories (have docker-compose.yml, exclude ctrl/nginx)
for dir in */; do
dirname="${dir%/}"
if [ -f "$dir/docker-compose.yml" ] && [ "$dirname" != "ctrl" ] && [ "$dirname" != "nginx" ]; then
SERVICE_DIRS+=("$dirname")
fi
done
for arg in "$@"; do
case $arg in
-d|--detached) DETACH="-d" ;;
--build) BUILD="--build" ;;
--with-nginx) WITH_NGINX="true" ;;
all) TARGET="all" ;;
*)
# Check if it's a valid service directory
if [[ " ${SERVICE_DIRS[@]} " =~ " ${arg} " ]]; then
TARGET="$arg"
fi
;;
esac
done
start_service() {
local service=$1
echo "Starting $service..."
cd "$service"
# If --with-nginx and service is soleprint, include nginx compose
if [ "$WITH_NGINX" = "true" ] && [ "$service" = "soleprint" ]; then
echo " Including nginx container..."
DOCKER_BUILDKIT=0 COMPOSE_DOCKER_CLI_BUILD=0 docker compose -f docker-compose.yml -f docker-compose.nginx.yml up $DETACH $BUILD
else
DOCKER_BUILDKIT=0 COMPOSE_DOCKER_CLI_BUILD=0 docker compose up $DETACH $BUILD
fi
cd ..
[ -n "$DETACH" ] && echo " $service started"
}
if [ "$TARGET" = "all" ]; then
if [ -z "$DETACH" ]; then
# Foreground mode: start all services in parallel
echo "Starting all services (foreground): ${SERVICE_DIRS[*]}"
PIDS=()
for service in "${SERVICE_DIRS[@]}"; do
cd "$service"
DOCKER_BUILDKIT=0 COMPOSE_DOCKER_CLI_BUILD=0 docker compose up $BUILD &
PIDS+=($!)
cd ..
done
# Wait for all processes
wait "${PIDS[@]}"
else
# Detached mode: start sequentially
for service in "${SERVICE_DIRS[@]}"; do
start_service "$service"
echo ""
done
fi
elif [[ " ${SERVICE_DIRS[@]} " =~ " ${TARGET} " ]]; then
start_service "$TARGET"
else
echo "Usage: ./start.sh [${SERVICE_DIRS[*]}|all] [-d|--detached] [--build] [--with-nginx]"
exit 1
fi
if [ -n "$DETACH" ]; then
echo ""
echo "=== Services Started ==="
echo ""
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}" | grep -E "(mainroom|amar|soleprint|NAMES)"
fi
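The foreground path above relies on the background-job-plus-`wait` pattern: each `docker compose up` is launched with `&`, its PID recorded, and `wait` blocks until every one exits. The bash skeleton, stripped of docker (placeholder `sleep`/`echo` jobs):

```shell
# Launch two long-running jobs in parallel and block until both finish
PIDS=()
(sleep 0.2; echo "service A up") & PIDS+=($!)
(sleep 0.1; echo "service B up") & PIDS+=($!)
wait "${PIDS[@]}"
echo "all services exited"
```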


@@ -1,42 +0,0 @@
#!/bin/bash
# Show core_room status
#
# Usage:
# ./status.sh
# Change to parent directory (services are in ../service_name)
cd "$(dirname "$0")/.."
# Export core_room/.env vars
if [ -f ".env" ]; then
set -a
source .env
set +a
fi
SERVICE_DIRS=()
# Find all service directories (have docker-compose.yml, exclude ctrl/nginx)
for dir in */; do
dirname="${dir%/}"
if [ -f "$dir/docker-compose.yml" ] && [ "$dirname" != "ctrl" ] && [ "$dirname" != "nginx" ]; then
SERVICE_DIRS+=("$dirname")
fi
done
# ROOM_NAME comes from core_room/.env
ROOM_NAME=${ROOM_NAME:-core_room}
echo "=== Room Status: $ROOM_NAME ==="
echo ""
echo "Containers:"
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}" | grep -E "(${ROOM_NAME}|NAMES)" || echo " No containers running"
echo ""
echo "Networks:"
docker network ls | grep -E "(${ROOM_NAME}|NETWORK)" || echo " No networks"
echo ""
echo "Volumes:"
docker volume ls | grep -E "(${ROOM_NAME}|VOLUME)" || echo " No volumes"
echo ""


@@ -1,50 +0,0 @@
#!/bin/bash
# Stop core_room services
#
# Usage:
# ./stop.sh # Stop all
# ./stop.sh <service> # Stop specific service
set -e
# Change to parent directory (services are in ../service_name)
cd "$(dirname "$0")/.."
# Export core_room/.env vars so child docker-compose files can use them
if [ -f ".env" ]; then
set -a
source .env
set +a
fi
TARGET=${1:-all}
SERVICE_DIRS=()
# Find all service directories (have docker-compose.yml, exclude ctrl/nginx)
for dir in */; do
dirname="${dir%/}"
if [ -f "$dir/docker-compose.yml" ] && [ "$dirname" != "ctrl" ] && [ "$dirname" != "nginx" ]; then
SERVICE_DIRS+=("$dirname")
fi
done
stop_service() {
local service=$1
echo "Stopping $service..."
cd "$service"
docker compose down
cd ..
}
if [ "$TARGET" = "all" ]; then
# Stop all services in reverse order (dependencies first)
for ((i=${#SERVICE_DIRS[@]}-1; i>=0; i--)); do
stop_service "${SERVICE_DIRS[$i]}"
done
elif [[ " ${SERVICE_DIRS[@]} " =~ " ${TARGET} " ]]; then
stop_service "$TARGET"
else
echo "Usage: ./stop.sh [${SERVICE_DIRS[*]}|all]"
exit 1
fi
echo ""
echo "=== Services Stopped ==="


@@ -1,83 +0,0 @@
#!/bin/bash
# Sync tests to ward tester (standalone, no coupling)
# Configure paths via environment variables
#
# Usage:
# # Set env vars
# export TEST_SOURCE_PATH=~/wdir/ama/amar_django_back/tests/contracts
# export WARD_TESTS_PATH=~/wdir/ama/soleprint/ward/tools/tester/tests
#
# # Run sync
# ./sync-tests-local.sh
set -e
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
SILENT_FAIL="${SILENT_FAIL:-false}"
# Load from .env.sync if it exists
if [ -f "$SCRIPT_DIR/.env.sync" ]; then
source "$SCRIPT_DIR/.env.sync"
fi
# Check required vars
if [ -z "$TEST_SOURCE_PATH" ]; then
if [ "$SILENT_FAIL" = "true" ]; then
exit 0
fi
echo "Error: TEST_SOURCE_PATH not set"
echo ""
echo "Set environment variables:"
echo " export TEST_SOURCE_PATH=~/wdir/ama/amar_django_back/tests/contracts"
echo " export WARD_TESTS_PATH=~/wdir/ama/soleprint/ward/tools/tester/tests"
echo ""
echo "Or create ctrl/.env.sync with these variables"
exit 1
fi
if [ -z "$WARD_TESTS_PATH" ]; then
if [ "$SILENT_FAIL" = "true" ]; then
exit 0
fi
echo "Error: WARD_TESTS_PATH not set"
exit 1
fi
# Expand paths
SOURCE=$(eval echo "$TEST_SOURCE_PATH")
TARGET=$(eval echo "$WARD_TESTS_PATH")
if [ ! -d "$SOURCE" ]; then
if [ "$SILENT_FAIL" = "true" ]; then
exit 0
fi
echo "Error: Source directory not found: $SOURCE"
exit 1
fi
echo "=== Syncing Contract Tests ==="
echo ""
echo "Source: $SOURCE"
echo "Target: $TARGET"
echo ""
# Create target if it doesn't exist
mkdir -p "$TARGET"
# Sync tests (use shared exclude file)
rsync -av --delete \
--exclude-from="$SCRIPT_DIR/.exclude" \
"$SOURCE/" \
"$TARGET/"
echo ""
echo "[OK] Tests synced successfully"
echo ""
echo "Changes are immediately visible in Docker (volume mount)"
echo "Just refresh your browser - no restart needed!"
echo ""
# Count test files
TEST_COUNT=$(find "$TARGET" -name "test_*.py" | wc -l)
echo "Total test files: $TEST_COUNT"
echo ""


@@ -1,33 +0,0 @@
# Soleprint Docker Services - Environment Configuration
# Copy this file to .env
# =============================================================================
# DEPLOYMENT
# =============================================================================
DEPLOYMENT_NAME=soleprint
NETWORK_NAME=soleprint_network
# =============================================================================
# PATHS
# =============================================================================
# Path to generated soleprint (gen/ folder)
SOLEPRINT_BARE_PATH=/home/mariano/wdir/spr/gen
# =============================================================================
# PORTS
# =============================================================================
SOLEPRINT_PORT=12000
ARTERY_PORT=12001
ATLAS_PORT=12002
STATION_PORT=12003
# =============================================================================
# DATABASE (for station tools that need DB access)
# =============================================================================
# These are passed to the station container when orchestrated with a managed room
# Leave empty for standalone soleprint; set in the room config (e.g., cfg/amar/)
DB_HOST=
DB_PORT=5432
DB_NAME=
DB_USER=
DB_PASSWORD=
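For a managed room, the DB section above would be filled in by the room config; a hypothetical `cfg/amar/.env` fragment (all values below are made-up placeholders, shown only to illustrate the shape):

```shell
# cfg/amar/.env -- hypothetical values for illustration only
DB_HOST=amar_postgres
DB_PORT=5432
DB_NAME=amar
DB_USER=amar
DB_PASSWORD=change-me
```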


@@ -1,33 +0,0 @@
# Soleprint Docker Services - Environment Configuration
# Copy this file to .env
# =============================================================================
# DEPLOYMENT
# =============================================================================
DEPLOYMENT_NAME=soleprint
NETWORK_NAME=soleprint_network
# =============================================================================
# PATHS
# =============================================================================
# Path to deployed soleprint (use deploy build, not gen/)
# For dev: python build.py deploy --output /path/to/deploy
SOLEPRINT_BARE_PATH=/tmp/soleprint-deploy
# =============================================================================
# PORTS
# =============================================================================
SOLEPRINT_PORT=12000
ARTERY_PORT=12001
ATLAS_PORT=12002
STATION_PORT=12003
# =============================================================================
# DATABASE (for station tools that need DB access)
# =============================================================================
# These are passed to the station container when orchestrated with a managed room
DB_HOST=
DB_PORT=5432
DB_NAME=
DB_USER=
DB_PASSWORD=


@@ -1,35 +0,0 @@
# Soleprint Services - Docker Compose
#
# Runs the soleprint hub as a single service
# Artery, atlas, and station are accessed via path-based routing
#
# Usage:
# cd mainroom/soleprint && docker compose up -d
services:
soleprint:
build:
context: ${SOLEPRINT_BARE_PATH}
dockerfile: Dockerfile
container_name: ${DEPLOYMENT_NAME}_soleprint
restart: unless-stopped
volumes:
- ${SOLEPRINT_BARE_PATH}:/app
ports:
- "${SOLEPRINT_PORT}:8000"
env_file:
- .env
environment:
# For single-port mode, all subsystems are internal routes
- ARTERY_EXTERNAL_URL=/artery
- ATLAS_EXTERNAL_URL=/atlas
- STATION_EXTERNAL_URL=/station
networks:
- default
# Use run.py for single-port bare-metal mode
command: uvicorn run:app --host 0.0.0.0 --port 8000 --reload
networks:
default:
external: true
name: ${NETWORK_NAME}
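Because the network is declared `external: true`, it must exist before the first `up`; a plausible first-run sequence (the network name mirrors the `.env` default above, and is an assumption if you changed `NETWORK_NAME`):

```shell
# one-time: create the shared network the compose file expects
docker network create soleprint_network

# then bring the hub up from the service directory
cd mainroom/soleprint
docker compose up -d
```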