updated deploy scripts and locations
CLAUDE.md  (43 lines changed)
@@ -27,7 +27,7 @@ spr/
 ├── artery/                # VERSIONED - Vital connections
 │   ├── veins/             # Single-responsibility connectors
 │   ├── pulses/            # Composed: Vein + Room + Depot
-│   ├── rooms/             # Environment configs
+│   ├── room/              # Base room code and ctrl templates
 │   └── depots/            # Data storage
 │
 ├── atlas/                 # VERSIONED - Documentation system
@@ -54,18 +54,19 @@ spr/
 │   ├── requirements.txt   # Dependencies
 │   └── dataloader/        # Data loading module
 │
-├── gen/                   # RUNNABLE instance (gitignored, symlinks)
-│   ├── main.py            # → ../soleprint/main.py
-│   ├── run.py             # → ../soleprint/run.py
-│   ├── index.html         # → ../soleprint/index.html
-│   ├── requirements.txt   # → ../soleprint/requirements.txt
-│   ├── dataloader/        # → ../soleprint/dataloader/
-│   ├── artery/            # → ../artery/
-│   ├── atlas/             # → ../atlas/
-│   ├── station/           # → ../station/
-│   ├── data/              # → ../data/
-│   ├── cfg/               # Copied config
-│   └── models/            # GENERATED (one-time per client)
+├── gen/                   # RUNNABLE instance (gitignored, copies)
+│   ├── main.py
+│   ├── run.py
+│   ├── index.html
+│   ├── requirements.txt
+│   ├── Dockerfile
+│   ├── dataloader/
+│   ├── artery/
+│   ├── atlas/
+│   ├── station/
+│   ├── data/
+│   ├── cfg/
+│   └── models/            # Generated by modelgen
 │       └── pydantic/
 │
 └── mainroom/              # Orchestration: soleprint ↔ managed room
@@ -110,10 +111,8 @@ A **Room** is an environment with soleprint context, features, and conventions:
 ### Mainroom
 The **mainroom** orchestrates interaction between soleprint and managed rooms:
 - `sbwrapper/` - Sidebar UI overlay for any managed app (quick login, Jira info, etc.)
-- `soleprint/` - Docker configs + ctrl scripts for running soleprint services
-- `ctrl/local/` - Local deployment scripts (push.sh, deploy.sh)
-- `ctrl/server/` - Server setup scripts
-- `ctrl/` - Mainroom-level orchestration commands
+- `soleprint/` - Docker configs for running soleprint services
+- `ctrl/` - Mainroom-level orchestration commands (start.sh, stop.sh, etc.)
 
 Soleprint can run without a managed room (for testing veins, etc.).
 
@@ -125,11 +124,10 @@ Soleprint can run without a managed room (for testing veins, etc.).
 
 ### soleprint/ vs gen/
 - `soleprint/` = Versioned core files (main.py, run.py, dataloader, index.html)
-- `gen/` = Gitignored runnable instance with symlinks to soleprint/ + systems
-- `gen/models/` = Generated models (one-time per client, like an install)
+- `gen/` = Gitignored runnable instance (copies, not symlinks - Docker compatible)
+- `gen/models/` = Generated models
 
-**Development:** Edit in soleprint/, artery/, atlas/, station/, data/ → run from gen/
-**Production:** Copy everything (resolve symlinks)
+**Development:** Edit source → `python build.py dev` → run from gen/
 
 ### Modelgen (Generic Tool)
 Lives in `station/tools/modelgen/`. It:
@@ -146,8 +144,7 @@ The build script at spr root handles both development and deployment builds:
 
 ```bash
 # From spr/
-python build.py --help
-python build.py dev                      # Build with symlinks (soleprint only)
+python build.py dev                      # Build gen/ from source (copies)
 python build.py dev --cfg amar           # Include amar room config
 python build.py deploy --output /path/   # Build for production
 python build.py models                   # Only regenerate models
artery/room/README.md  (new file, 79 lines)
@@ -0,0 +1,79 @@

# Room - Runtime Environment Configuration

A **Room** defines connection details for a managed environment (hosts, ports, domains, credentials).

## Usage

Rooms are used in composed types:
- `Pulse = Vein + Room + Depot` (artery)
- `Desk = Cabinet + Room + Depots` (station)

## Structure

```
artery/room/
├── __init__.py    # Room model (Pydantic)
├── ctrl/          # Base ctrl script templates
│   ├── start.sh   # Start services
│   ├── stop.sh    # Stop services
│   ├── status.sh  # Show status
│   ├── logs.sh    # View logs
│   └── build.sh   # Build images
└── README.md
```

## Room Data

Room instances are stored in `data/rooms.json`:

```json
{
  "items": [
    {
      "name": "soleprint-local",
      "slug": "soleprint-local",
      "title": "Soleprint Local",
      "status": "dev",
      "config_path": "mainroom/soleprint"
    }
  ]
}
```

## ctrl/ Templates

The scripts in `ctrl/` are templates for room management. Copy them to your room's `ctrl/` folder and customize.

All scripts:
- Auto-detect services (directories with `docker-compose.yml`)
- Support targeting specific services: `./start.sh myservice`
- Load `.env` from the room root

### Usage

```bash
# Start
./ctrl/start.sh             # All services (foreground)
./ctrl/start.sh -d          # Detached
./ctrl/start.sh --build     # With rebuild

# Stop
./ctrl/stop.sh              # All services
./ctrl/stop.sh myservice    # Specific service

# Status
./ctrl/status.sh

# Logs
./ctrl/logs.sh              # All
./ctrl/logs.sh -f           # Follow
./ctrl/logs.sh myservice    # Specific service

# Build
./ctrl/build.sh             # All
./ctrl/build.sh --no-cache  # Force rebuild
```

## CI/CD

For production deployments, use Woodpecker CI/CD instead of manual ctrl scripts.
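The composed types above (`Pulse = Vein + Room + Depot`) can be illustrated with a minimal sketch. This is a hypothetical stand-in using stdlib dataclasses rather than the project's Pydantic models; the `Vein` and `Depot` fields (and the `jira-sync` name) are invented for illustration, while the `Room` fields mirror `data/rooms.json`:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Vein:  # hypothetical: a single-responsibility connector
    name: str


@dataclass
class Depot:  # hypothetical: a data storage target
    path: str


@dataclass
class Room:  # fields mirror the entries in data/rooms.json
    name: str
    slug: str
    title: str
    status: str = "pending"
    config_path: Optional[str] = None


@dataclass
class Pulse:  # the composition: Vein + Room + Depot
    vein: Vein
    room: Room
    depot: Depot


pulse = Pulse(
    vein=Vein(name="jira-sync"),
    room=Room(name="soleprint-local", slug="soleprint-local",
              title="Soleprint Local", status="dev"),
    depot=Depot(path="data/depots/jira"),
)
print(pulse.room.status)  # dev
```

The same shape presumably applies to `Desk = Cabinet + Room + Depots` on the station side, with a list of depots instead of one.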
artery/room/__init__.py  (new file, 77 lines)
@@ -0,0 +1,77 @@

```python
"""
Room - Runtime environment configuration.

A Room defines connection details for a managed environment (hosts, ports, domains, credentials).
Used by Pulse (Vein + Room + Depot) and Desk (Cabinet + Room + Depots).

Room instances are stored in data/rooms.json.
"""

from enum import Enum
from typing import Optional
from pydantic import BaseModel, Field


class RoomStatus(str, Enum):
    PENDING = "pending"
    PLANNED = "planned"
    BUILDING = "building"
    DEV = "dev"
    LIVE = "live"
    READY = "ready"


class RoomConfig(BaseModel):
    """Environment-specific configuration for a room."""

    # Network
    host: Optional[str] = Field(None, description="Primary host/domain")
    port: Optional[int] = Field(None, description="Primary port")

    # Paths
    config_path: Optional[str] = Field(None, description="Path to room config folder")
    deploy_path: Optional[str] = Field(None, description="Deployment target path")

    # Docker
    network_name: Optional[str] = Field(None, description="Docker network name")
    deployment_name: Optional[str] = Field(None, description="Container name prefix")

    # Database (when room has DB access)
    db_host: Optional[str] = None
    db_port: Optional[int] = Field(None, ge=1, le=65535)
    db_name: Optional[str] = None
    db_user: Optional[str] = None
    # Note: db_password should come from env vars, not stored in config


class Room(BaseModel):
    """Runtime environment configuration."""

    name: str = Field(..., description="Unique identifier")
    slug: str = Field(..., description="URL-friendly identifier")
    title: str = Field(..., description="Display title for UI")
    status: RoomStatus = Field(RoomStatus.PENDING, description="Current status")

    # Optional extended config
    config: Optional[RoomConfig] = Field(None, description="Environment configuration")

    # Legacy field for backwards compatibility
    config_path: Optional[str] = Field(None, description="Path to room config folder")

    class Config:
        use_enum_values = True


def load_rooms(data_path: str = "data/rooms.json") -> list[Room]:
    """Load rooms from data file."""
    import json
    from pathlib import Path

    path = Path(data_path)
    if not path.exists():
        return []

    with open(path) as f:
        data = json.load(f)

    return [Room(**item) for item in data.get("items", [])]
```
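A quick usage sketch for the `load_rooms` flow above. To keep it dependency-free, this version parses into plain dicts instead of validated `Room` models (Pydantic is left out on purpose); `load_rooms_dicts` and the temp-file path are illustrative names, but the file layout matches the documented `data/rooms.json` schema:

```python
import json
import tempfile
from pathlib import Path

# Write a rooms.json matching the documented schema
tmp = Path(tempfile.mkdtemp()) / "rooms.json"
tmp.write_text(json.dumps({
    "items": [
        {
            "name": "soleprint-local",
            "slug": "soleprint-local",
            "title": "Soleprint Local",
            "status": "dev",
            "config_path": "mainroom/soleprint",
        }
    ]
}))


def load_rooms_dicts(data_path):
    """Same flow as load_rooms(), minus the Pydantic validation step."""
    path = Path(data_path)
    if not path.exists():
        return []  # missing file yields an empty room list, not an error

    with open(path) as f:
        data = json.load(f)

    return data.get("items", [])


rooms = load_rooms_dicts(tmp)
print(rooms[0]["status"])  # dev
```

Note that a missing file returns `[]` rather than raising, so callers can treat "no rooms configured" and "no rooms file" the same way.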
artery/room/ctrl/build.sh  (new executable file, 44 lines)
@@ -0,0 +1,44 @@

```bash
#!/bin/bash
# Build room Docker images
#
# Usage:
#   ./build.sh              # Build all
#   ./build.sh <service>    # Build specific service
#   ./build.sh --no-cache   # Force rebuild
#
# This is a TEMPLATE. Copy to your room's ctrl/ and customize.

set -e

cd "$(dirname "$0")/.."

NO_CACHE=""
TARGET="all"
SERVICE_DIRS=()

for dir in */; do
    [ -f "$dir/docker-compose.yml" ] && SERVICE_DIRS+=("${dir%/}")
done

for arg in "$@"; do
    case $arg in
        --no-cache) NO_CACHE="--no-cache" ;;
        *) [[ " ${SERVICE_DIRS[*]} " =~ " ${arg} " ]] && TARGET="$arg" ;;
    esac
done

build_service() {
    local svc=$1
    echo "Building $svc..."
    (cd "$svc" && docker compose build $NO_CACHE)
}

if [ "$TARGET" = "all" ]; then
    for svc in "${SERVICE_DIRS[@]}"; do
        build_service "$svc"
    done
else
    build_service "$TARGET"
fi

echo "Done."
```
artery/room/ctrl/logs.sh  (new executable file, 43 lines)
@@ -0,0 +1,43 @@

```bash
#!/bin/bash
# View room service logs
#
# Usage:
#   ./logs.sh              # All logs
#   ./logs.sh <service>    # Service compose logs
#   ./logs.sh <container>  # Specific container logs
#   ./logs.sh -f           # Follow mode
#
# This is a TEMPLATE. Copy to your room's ctrl/ and customize.

set -e

cd "$(dirname "$0")/.."

FOLLOW=""
TARGET=""
SERVICE_DIRS=()

for dir in */; do
    [ -f "$dir/docker-compose.yml" ] && SERVICE_DIRS+=("${dir%/}")
done

for arg in "$@"; do
    case $arg in
        -f|--follow) FOLLOW="-f" ;;
        *) TARGET="$arg" ;;
    esac
done

if [ -z "$TARGET" ]; then
    # Show all logs
    for svc in "${SERVICE_DIRS[@]}"; do
        echo "=== $svc ==="
        (cd "$svc" && docker compose logs --tail=20 $FOLLOW) || true
    done
elif [[ " ${SERVICE_DIRS[*]} " =~ " ${TARGET} " ]]; then
    # Service compose logs
    (cd "$TARGET" && docker compose logs $FOLLOW)
else
    # Specific container
    docker logs $FOLLOW "$TARGET"
fi
```
artery/room/ctrl/start.sh  (new executable file, 52 lines)
@@ -0,0 +1,52 @@

```bash
#!/bin/bash
# Start room services
#
# Usage:
#   ./start.sh            # Start all (foreground)
#   ./start.sh -d         # Start all (detached)
#   ./start.sh --build    # Start with rebuild
#   ./start.sh <service>  # Start specific service
#
# This is a TEMPLATE. Copy to your room's ctrl/ and customize.

set -e

cd "$(dirname "$0")/.."

# Load environment
[ -f ".env" ] && set -a && source .env && set +a

DETACH=""
BUILD=""
TARGET="all"
SERVICE_DIRS=()

# Auto-detect services (dirs with docker-compose.yml)
for dir in */; do
    [ -f "$dir/docker-compose.yml" ] && SERVICE_DIRS+=("${dir%/}")
done

for arg in "$@"; do
    case $arg in
        -d|--detached) DETACH="-d" ;;
        --build) BUILD="--build" ;;
        *) [[ " ${SERVICE_DIRS[*]} " =~ " ${arg} " ]] && TARGET="$arg" ;;
    esac
done

start_service() {
    local svc=$1
    echo "Starting $svc..."
    (cd "$svc" && docker compose up $DETACH $BUILD)
    [ -n "$DETACH" ] && echo "  $svc started"
}

if [ "$TARGET" = "all" ]; then
    for svc in "${SERVICE_DIRS[@]}"; do
        start_service "$svc"
    done
else
    start_service "$TARGET"
fi

[ -n "$DETACH" ] && docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
```
artery/room/ctrl/status.sh  (new executable file, 22 lines)
@@ -0,0 +1,22 @@

```bash
#!/bin/bash
# Show room service status
#
# Usage:
#   ./status.sh
#
# This is a TEMPLATE. Copy to your room's ctrl/ and customize.

set -e

cd "$(dirname "$0")/.."

[ -f ".env" ] && source .env

NAME="${DEPLOYMENT_NAME:-room}"

echo "=== Docker Containers ==="
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}" | grep -E "($NAME|NAMES)" || echo "No containers running"

echo ""
echo "=== Networks ==="
docker network ls | grep -E "(${NETWORK_NAME:-$NAME}|NETWORK)" || echo "No matching networks"
```
artery/room/ctrl/stop.sh  (new executable file, 38 lines)
@@ -0,0 +1,38 @@

```bash
#!/bin/bash
# Stop room services
#
# Usage:
#   ./stop.sh             # Stop all
#   ./stop.sh <service>   # Stop specific service
#
# This is a TEMPLATE. Copy to your room's ctrl/ and customize.

set -e

cd "$(dirname "$0")/.."

TARGET="all"
SERVICE_DIRS=()

# Auto-detect services
for dir in */; do
    [ -f "$dir/docker-compose.yml" ] && SERVICE_DIRS+=("${dir%/}")
done

[ -n "$1" ] && [[ " ${SERVICE_DIRS[*]} " =~ " $1 " ]] && TARGET="$1"

stop_service() {
    local svc=$1
    echo "Stopping $svc..."
    (cd "$svc" && docker compose down)
}

if [ "$TARGET" = "all" ]; then
    for svc in "${SERVICE_DIRS[@]}"; do
        stop_service "$svc"
    done
else
    stop_service "$TARGET"
fi

echo "Done."
```
build.py  (192 lines changed)
@@ -4,31 +4,29 @@ Soleprint Build Tool
 
 Builds the soleprint instance using modelgen for model generation.
 
-Modes:
-- dev: Uses symlinks for quick development (edit source, run from gen/)
-- deploy: Copies everything for production deployment (no symlinks)
+Both dev and deploy modes copy files (no symlinks) for Docker compatibility.
+After editing source files, re-run `python build.py dev` to update gen/.
 
 Usage:
-    python build.py dev
-    python build.py dev --cfg amar
-    python build.py deploy --output /path/to/deploy/
-    python build.py models
+    python build.py dev                      # Build gen/ from source
+    python build.py dev --cfg amar           # Include amar room config
+    python build.py deploy --output /path/   # Build for production
+    python build.py models                   # Only regenerate models
 
 Examples:
-    # Set up dev environment (soleprint only)
+    # Set up dev environment
     python build.py dev
+    cd gen && .venv/bin/python run.py
 
-    # Set up dev environment with amar room config
+    # With room config
     python build.py dev --cfg amar
 
     # Build for deployment
     python build.py deploy --output ../deploy/soleprint/
-
-    # Only regenerate models
-    python build.py models
 """
 
 import argparse
+import logging
 import os
 import shutil
 import subprocess
@@ -38,31 +36,24 @@ from pathlib import Path
 # SPR root is where this script lives
 SPR_ROOT = Path(__file__).resolve().parent
 
+# Configure logging
+logging.basicConfig(
+    level=logging.INFO,
+    format="%(message)s",
+)
+log = logging.getLogger(__name__)
+
+
 def ensure_dir(path: Path):
     """Create directory if it doesn't exist."""
     path.mkdir(parents=True, exist_ok=True)
 
 
-def create_symlink(source: Path, target: Path):
-    """Create a symlink, removing existing if present."""
-    if target.exists() or target.is_symlink():
-        if target.is_symlink():
-            target.unlink()
-        elif target.is_dir():
-            shutil.rmtree(target)
-        else:
-            target.unlink()
-
-    # Make relative symlink
-    rel_source = os.path.relpath(source, target.parent)
-    target.symlink_to(rel_source)
-    print(f"  Linked: {target.name} -> {rel_source}")
-
-
 def copy_path(source: Path, target: Path):
     """Copy file or directory, resolving symlinks."""
-    if target.exists():
+    if target.is_symlink():
+        target.unlink()
+    elif target.exists():
         if target.is_dir():
             shutil.rmtree(target)
         else:
@@ -70,10 +61,10 @@ def copy_path(source: Path, target: Path):
 
     if source.is_dir():
         shutil.copytree(source, target, symlinks=False)
-        print(f"  Copied: {target.name}/ ({count_files(target)} files)")
+        log.info(f"  Copied: {target.name}/ ({count_files(target)} files)")
     else:
         shutil.copy2(source, target)
-        print(f"  Copied: {target.name}")
+        log.info(f"  Copied: {target.name}")
 
 
 def count_files(path: Path) -> int:
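The switch from `create_symlink` to `copy_path` hinges on `shutil.copytree(..., symlinks=False)`, which follows links and materializes real files; that is what makes gen/ usable as a Docker build context, since `COPY` cannot follow a symlink that points outside the context. A small self-contained demonstration (file names are illustrative, not from the repo):

```python
import os
import shutil
import tempfile

root = tempfile.mkdtemp()
src = os.path.join(root, "src")
os.makedirs(src)

# A real file outside src/, referenced via symlink (the old gen/ layout)
real = os.path.join(root, "main.py")
with open(real, "w") as f:
    f.write("print('hello')\n")
os.symlink(real, os.path.join(src, "main.py"))

# symlinks=False resolves links: gen/ gets a real copy, not a pointer
gen = os.path.join(root, "gen")
shutil.copytree(src, gen, symlinks=False)

copied = os.path.join(gen, "main.py")
print(os.path.islink(copied))  # False
```

With `symlinks=True` the copy would instead recreate the link itself, which would still dangle outside a Docker build context.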
@@ -90,7 +81,7 @@ def generate_models(output_dir: Path):
     config_path = SPR_ROOT / "cfg" / "soleprint.config.json"
 
     if not config_path.exists():
-        print(f"Warning: Config not found at {config_path}")
+        log.warning(f"Config not found at {config_path}")
         return False
 
     # Soleprint-specific: models go in models/pydantic/__init__.py
@@ -134,7 +125,7 @@ def copy_cfg(output_dir: Path, cfg_name: str | None):
     if cfg_name:
         room_cfg = SPR_ROOT / "cfg" / cfg_name
         if room_cfg.exists() and room_cfg.is_dir():
-            print(f"\nCopying {cfg_name} room config...")
+            log.info(f"\nCopying {cfg_name} room config...")
             for item in room_cfg.iterdir():
                 if item.name == ".env.example":
                     # Copy .env.example to output root as template
@@ -145,97 +136,100 @@ def copy_cfg(output_dir: Path, cfg_name: str | None):
                     ensure_dir(cfg_dir / cfg_name)
                     copy_path(item, cfg_dir / cfg_name / item.name)
         else:
-            print(f"Warning: Room config '{cfg_name}' not found at {room_cfg}")
+            log.warning(f"Room config '{cfg_name}' not found at {room_cfg}")
 
 
 def build_dev(output_dir: Path, cfg_name: str | None = None):
     """
-    Build for development using symlinks.
+    Build for development using copies (Docker-compatible).
 
     Structure:
         gen/
-        ├── main.py -> ../soleprint/main.py
-        ├── run.py -> ../soleprint/run.py
-        ├── index.html -> ../soleprint/index.html
-        ├── requirements.txt -> ../soleprint/requirements.txt
-        ├── dataloader/ -> ../soleprint/dataloader/
-        ├── artery/ -> ../artery/
-        ├── atlas/ -> ../atlas/
-        ├── station/ -> ../station/
-        ├── data/ -> ../data/
-        ├── cfg/              # Copied config
+        ├── main.py
+        ├── run.py
+        ├── index.html
+        ├── requirements.txt
+        ├── Dockerfile
+        ├── dataloader/
+        ├── artery/
+        ├── atlas/
+        ├── station/
+        ├── data/
+        ├── cfg/
        ├── .env.example      # From cfg/<room>/.env.example
        └── models/           # Generated
+
+    After editing source files, re-run `python build.py dev` to update gen/.
     """
-    print(f"\n=== Building DEV environment ===")
-    print(f"SPR root: {SPR_ROOT}")
-    print(f"Output: {output_dir}")
+    log.info("\n=== Building DEV environment ===")
+    log.info(f"SPR root: {SPR_ROOT}")
+    log.info(f"Output: {output_dir}")
     if cfg_name:
-        print(f"Room cfg: {cfg_name}")
+        log.info(f"Room cfg: {cfg_name}")
 
     ensure_dir(output_dir)
 
-    # Soleprint core files (symlinks)
-    print("\nLinking soleprint files...")
+    # Soleprint core files
+    log.info("\nCopying soleprint files...")
     soleprint = SPR_ROOT / "soleprint"
-    create_symlink(soleprint / "main.py", output_dir / "main.py")
-    create_symlink(soleprint / "run.py", output_dir / "run.py")
-    create_symlink(soleprint / "index.html", output_dir / "index.html")
-    create_symlink(soleprint / "requirements.txt", output_dir / "requirements.txt")
-    create_symlink(soleprint / "dataloader", output_dir / "dataloader")
+    copy_path(soleprint / "main.py", output_dir / "main.py")
+    copy_path(soleprint / "run.py", output_dir / "run.py")
+    copy_path(soleprint / "index.html", output_dir / "index.html")
+    copy_path(soleprint / "requirements.txt", output_dir / "requirements.txt")
+    copy_path(soleprint / "dataloader", output_dir / "dataloader")
     if (soleprint / "Dockerfile").exists():
-        create_symlink(soleprint / "Dockerfile", output_dir / "Dockerfile")
+        copy_path(soleprint / "Dockerfile", output_dir / "Dockerfile")
 
-    # System directories (symlinks)
-    print("\nLinking systems...")
+    # System directories
+    log.info("\nCopying systems...")
     for system in ["artery", "atlas", "station"]:
         source = SPR_ROOT / system
         if source.exists():
-            create_symlink(source, output_dir / system)
+            copy_path(source, output_dir / system)
 
-    # Data directory (symlink)
-    print("\nLinking data...")
-    create_symlink(SPR_ROOT / "data", output_dir / "data")
+    # Data directory
+    log.info("\nCopying data...")
+    copy_path(SPR_ROOT / "data", output_dir / "data")
 
-    # Config (copy, not symlink - may be customized)
-    print("\nCopying config...")
+    # Config
+    log.info("\nCopying config...")
     copy_cfg(output_dir, cfg_name)
 
-    # Models (generated) - pass output_dir, modelgen adds models/pydantic
-    print("\nGenerating models...")
+    # Models (generated)
+    log.info("\nGenerating models...")
     if not generate_models(output_dir):
-        print("  Warning: Model generation failed, you may need to run it manually")
+        log.warning("Model generation failed, you may need to run it manually")
 
-    print("\n✓ Dev build complete!")
-    print(f"\nTo run:")
-    print(f"  cd {output_dir}")
-    print(f"  python3 -m venv .venv")
-    print(f"  .venv/bin/pip install -r requirements.txt")
-    print(f"  .venv/bin/python main.py  # Multi-port (production-like)")
-    print(f"  .venv/bin/python run.py   # Single-port (bare-metal dev)")
+    log.info("\n✓ Dev build complete!")
+    log.info(f"\nTo run:")
+    log.info(f"  cd {output_dir}")
+    log.info(f"  python3 -m venv .venv")
+    log.info(f"  .venv/bin/pip install -r requirements.txt")
+    log.info(f"  .venv/bin/python run.py  # Single-port bare-metal dev")
+    log.info(f"\nAfter editing source, rebuild with: python build.py dev")
 
 
 def build_deploy(output_dir: Path, cfg_name: str | None = None):
     """
     Build for deployment by copying all files (no symlinks).
     """
-    print(f"\n=== Building DEPLOY package ===")
-    print(f"SPR root: {SPR_ROOT}")
-    print(f"Output: {output_dir}")
+    log.info("\n=== Building DEPLOY package ===")
+    log.info(f"SPR root: {SPR_ROOT}")
+    log.info(f"Output: {output_dir}")
     if cfg_name:
-        print(f"Room cfg: {cfg_name}")
+        log.info(f"Room cfg: {cfg_name}")
 
     if output_dir.exists():
         response = input(f"\nOutput directory exists. Overwrite? [y/N] ")
         if response.lower() != "y":
-            print("Aborted.")
+            log.info("Aborted.")
             return
         shutil.rmtree(output_dir)
 
     ensure_dir(output_dir)
 
     # Soleprint core files (copy)
-    print("\nCopying soleprint files...")
+    log.info("\nCopying soleprint files...")
     soleprint = SPR_ROOT / "soleprint"
     copy_path(soleprint / "main.py", output_dir / "main.py")
     copy_path(soleprint / "run.py", output_dir / "run.py")
@@ -246,31 +240,31 @@ def build_deploy(output_dir: Path, cfg_name: str | None = None):
     copy_path(soleprint / "Dockerfile", output_dir / "Dockerfile")
 
     # System directories (copy)
-    print("\nCopying systems...")
+    log.info("\nCopying systems...")
     for system in ["artery", "atlas", "station"]:
         source = SPR_ROOT / system
         if source.exists():
             copy_path(source, output_dir / system)
 
     # Data directory (copy)
-    print("\nCopying data...")
+    log.info("\nCopying data...")
     copy_path(SPR_ROOT / "data", output_dir / "data")
 
     # Config (copy)
-    print("\nCopying config...")
+    log.info("\nCopying config...")
     copy_cfg(output_dir, cfg_name)
 
     # Models (generate fresh) - pass output_dir, modelgen adds models/pydantic
-    print("\nGenerating models...")
+    log.info("\nGenerating models...")
     if not generate_models(output_dir):
         # Fallback: copy from gen if exists
         existing = SPR_ROOT / "gen" / "models"
         if existing.exists():
-            print("  Using existing models from gen/")
+            log.info("  Using existing models from gen/")
             copy_path(existing, output_dir / "models")
 
     # Copy schema.json for reference
-    print("\nCopying schema...")
+    log.info("\nCopying schema...")
     copy_path(SPR_ROOT / "schema.json", output_dir / "schema.json")
 
     # Create run script
@@ -289,29 +283,29 @@ echo "Starting soleprint on http://localhost:12000"
 .venv/bin/python main.py
 """)
     run_script.chmod(0o755)
-    print("  Created: start.sh")
+    log.info("  Created: start.sh")
 
     total_files = count_files(output_dir)
-    print(f"\n✓ Deploy build complete! ({total_files} files)")
-    print(f"\nTo run:")
-    print(f"  cd {output_dir}")
-    print(f"  ./start.sh")
-    print(f"\nOr deploy to server:")
-    print(f"  rsync -av {output_dir}/ server:/app/soleprint/")
-    print(f"  ssh server 'cd /app/soleprint && ./start.sh'")
+    log.info(f"\n✓ Deploy build complete! ({total_files} files)")
+    log.info(f"\nTo run:")
+    log.info(f"  cd {output_dir}")
+    log.info(f"  ./start.sh")
+    log.info(f"\nOr deploy to server:")
+    log.info(f"  rsync -av {output_dir}/ server:/app/soleprint/")
+    log.info(f"  ssh server 'cd /app/soleprint && ./start.sh'")
 
 
 def build_models():
     """Only regenerate models."""
-    print(f"\n=== Generating models only ===")
+    log.info("\n=== Generating models only ===")
 
     output_dir = SPR_ROOT / "gen"
     ensure_dir(output_dir)
 
     if generate_models(output_dir):
-        print("\n✓ Models generated!")
+        log.info("\n✓ Models generated!")
     else:
-        print("\nError: Model generation failed", file=sys.stderr)
+        log.error("Model generation failed")
         sys.exit(1)
@@ -325,7 +319,7 @@ def main():
|
|||||||
subparsers = parser.add_subparsers(dest="command", required=True)
|
subparsers = parser.add_subparsers(dest="command", required=True)
|
||||||
|
|
||||||
# dev command
|
# dev command
|
||||||
dev_parser = subparsers.add_parser("dev", help="Build for development (symlinks)")
|
dev_parser = subparsers.add_parser("dev", help="Build for development (copies)")
|
||||||
dev_parser.add_argument(
|
dev_parser.add_argument(
|
||||||
"--output",
|
"--output",
|
||||||
"-o",
|
"-o",
|
||||||
|
|||||||
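The build summary above relies on a `count_files()` helper that is outside this hunk. As a rough shell equivalent (an assumption about its behavior, not the actual implementation), counting regular files recursively:

```shell
# Hypothetical stand-in for the build script's count_files() helper:
# count regular files under a directory, recursively.
dir=$(mktemp -d)
touch "$dir/a.py" "$dir/b.txt"
mkdir -p "$dir/sub" && touch "$dir/sub/c.cfg"
total_files=$(find "$dir" -type f | wc -l)
echo "Deploy build complete! ($total_files files)"
```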

104  cfg/amar/.env  Normal file
@@ -0,0 +1,104 @@
+# =============================================================================
+# AMAR - Local Development Configuration
+# =============================================================================
+
+# =============================================================================
+# DEPLOYMENT
+# =============================================================================
+DEPLOYMENT_NAME=amar
+
+# =============================================================================
+# NETWORK (shared with soleprint)
+# =============================================================================
+NETWORK_NAME=soleprint_network
+
+# =============================================================================
+# PATHS (local code locations)
+# =============================================================================
+BACKEND_PATH=/home/mariano/wdir/ama/amar_django_back
+FRONTEND_PATH=/home/mariano/wdir/ama/amar_frontend
+DOCKERFILE_BACKEND=/home/mariano/wdir/spr/cfg/amar/Dockerfile.backend
+DOCKERFILE_FRONTEND=/home/mariano/wdir/spr/cfg/amar/Dockerfile.frontend
+
+# =============================================================================
+# DATABASE
+# =============================================================================
+DB_DUMP=dev.sql
+
+# =============================================================================
+# PORTS
+# =============================================================================
+BACKEND_PORT=8000
+FRONTEND_PORT=3000
+
+# =============================================================================
+# BACKEND SERVER (Uvicorn)
+# =============================================================================
+BACKEND_WORKERS=1
+BACKEND_RELOAD=--reload
+
+# Database connection
+POSTGRES_DB=amarback
+POSTGRES_USER=postgres
+POSTGRES_PASSWORD=localdev123
+
+# =============================================================================
+# DJANGO
+# =============================================================================
+SECRET_KEY=local-dev-key
+DEBUG=True
+DJANGO_ENV=development
+ALLOWED_HOSTS=*
+
+# =============================================================================
+# CORS
+# =============================================================================
+CORS_ALLOW_ALL=true
+CORS_ALLOWED_ORIGINS=
+
+# =============================================================================
+# GOOGLE SERVICES
+# =============================================================================
+SUBJECT_CALENDAR=
+SHEET_ID=
+RANGE_NAME=
+GOOGLE_MAPS_API_KEY=
+
+# =============================================================================
+# ANALYTICS
+# =============================================================================
+GA4_MEASUREMENT_ID=
+AMPLITUDE_API_KEY=
+HOTJAR_API_KEY=
+
+# =============================================================================
+# MERCADO PAGO
+# =============================================================================
+ACCESS_TOKEN_MERCADO_PAGO=
+MP_PLATFORM_ACCESS_TOKEN=
+USER_ID=
+
+# =============================================================================
+# WEB PUSH
+# =============================================================================
+WEBPUSH_VAPID_PUBLIC_KEY=
+WEBPUSH_VAPID_PRIVATE_KEY=
+WEBPUSH_VAPID_ADMIN_EMAIL=
+
+# =============================================================================
+# INIT
+# =============================================================================
+USER_PASSWORD=initial_admin_password
+
+# =============================================================================
+# FRONTEND
+# =============================================================================
+NEXT_PUBLIC_APP_API_URL_BACKOFFICE=
+NEXT_PUBLIC_APP_API_URL_STAGE=
+NEXT_PUBLIC_IS_STAGE=false
+NEXT_PUBLIC_FB_PIXEL_ID=
+NEXT_PUBLIC_TAG_MANAGER=
+NEXT_PUBLIC_WHATSAPP_CONTACT=
+NEXT_PUBLIC_API_KEY=
+NEXT_PUBLIC_AMPLITUDE_API_KEY=
+NEXT_PUBLIC_GMAPS_API_KEY=
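A sketch of how the ctrl scripts in this commit consume a file like this: values are pulled out with grep/cut (optionally with a fallback default) rather than sourcing the file, so comments and empty assignments are harmless. The file name and keys below are illustrative.

```shell
# Parse selected keys from a .env-style file without sourcing it.
env_file=$(mktemp)
printf '%s\n' '# comment' 'DEPLOYMENT_NAME=amar' 'BACKEND_PORT=8000' 'SHEET_ID=' > "$env_file"
name=$(grep "^DEPLOYMENT_NAME=" "$env_file" | cut -d'=' -f2)
port=$(grep "^BACKEND_PORT=" "$env_file" | cut -d'=' -f2 || echo "8000")
sheet=$(grep "^SHEET_ID=" "$env_file" | cut -d'=' -f2)
echo "deployment=$name port=$port sheet=${sheet:-unset}"
```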

71  cfg/amar/Dockerfile.backend  Normal file
@@ -0,0 +1,71 @@
+# =============================================================================
+# Dockerfile for Django Backend with Uvicorn
+# =============================================================================
+# Usage:
+#   Development: WORKERS=1 RELOAD=--reload  (source mounted, hot reload)
+#   Production:  WORKERS=4 RELOAD=""        (no reload, multiple workers)
+
+# =============================================================================
+# Stage 1: Base with system dependencies
+# =============================================================================
+FROM python:3.11-slim AS base
+
+ENV PYTHONDONTWRITEBYTECODE=1
+ENV PYTHONUNBUFFERED=1
+
+WORKDIR /app
+
+# Install system dependencies (cached layer)
+RUN apt-get update && apt-get install -y --no-install-recommends \
+    build-essential \
+    libpq-dev \
+    gdal-bin \
+    libgdal-dev \
+    libgeos-dev \
+    libproj-dev \
+    postgresql-client \
+    # WeasyPrint dependencies (for django-afip PDF generation)
+    libglib2.0-0 \
+    libpango-1.0-0 \
+    libpangocairo-1.0-0 \
+    libgdk-pixbuf-2.0-0 \
+    libffi-dev \
+    libcairo2 \
+    libgirepository1.0-dev \
+    gir1.2-pango-1.0 \
+    fonts-dejavu-core \
+    && rm -rf /var/lib/apt/lists/*
+
+# =============================================================================
+# Stage 2: Dependencies (cached layer)
+# =============================================================================
+FROM base AS deps
+
+# Copy only requirements for dependency installation
+COPY requirements.txt .
+
+# Install Python dependencies
+RUN pip install --no-cache-dir -r requirements.txt
+
+# =============================================================================
+# Stage 3: Runtime (uvicorn with configurable workers)
+# =============================================================================
+FROM base AS runtime
+
+# Copy dependencies from deps stage
+COPY --from=deps /usr/local/lib/python3.11/site-packages /usr/local/lib/python3.11/site-packages
+COPY --from=deps /usr/local/bin /usr/local/bin
+
+# Copy requirements (for reference)
+COPY requirements.txt .
+
+# Create directories
+RUN mkdir -p /var/etc/static /app/media
+
+# Note: Source code mounted at runtime for dev, copied for prod
+EXPOSE 8000
+
+# Uvicorn with configurable workers and reload
+# Dev: WORKERS=1 RELOAD=--reload
+# Prod: WORKERS=4 RELOAD=""
+CMD uvicorn amar_django_back.asgi:application --host 0.0.0.0 --port 8000 --workers ${WORKERS:-1} ${RELOAD}
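The shell-form CMD above defers `${WORKERS:-1}` and `${RELOAD}` expansion to container start, which is what lets one image serve both modes. A standalone sketch of the same default-expansion, outside Docker:

```shell
# Dev invocation: WORKERS unset, RELOAD=--reload → 1 worker with reload.
RELOAD="--reload"
dev_cmd="uvicorn amar_django_back.asgi:application --workers ${WORKERS:-1} ${RELOAD}"
# Prod invocation: WORKERS=4, RELOAD empty → 4 workers, no reload.
WORKERS=4 RELOAD=""
prod_cmd="uvicorn amar_django_back.asgi:application --workers ${WORKERS:-1} ${RELOAD}"
echo "$dev_cmd"
echo "$prod_cmd"
```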

80  cfg/amar/Dockerfile.frontend  Normal file
@@ -0,0 +1,80 @@
+# =============================================================================
+# Multi-stage Dockerfile for Next.js Frontend
+# =============================================================================
+# Usage:
+#   Development: docker compose up (mounts source, hot reload)
+#   Production:  docker compose -f docker-compose.yml -f docker-compose.prod.yml up
+
+# =============================================================================
+# Stage 1: Dependencies (cached layer)
+# =============================================================================
+FROM node:18-alpine AS deps
+
+WORKDIR /app
+
+# Copy only package files for dependency installation
+COPY package*.json ./
+
+# Install ALL dependencies (including devDependencies for dev mode)
+RUN npm ci
+
+# =============================================================================
+# Stage 2: Development (hot reload, source mounted)
+# =============================================================================
+FROM node:18-alpine AS development
+
+WORKDIR /app
+
+# Copy dependencies from deps stage
+COPY --from=deps /app/node_modules ./node_modules
+
+# Copy package files (for npm scripts)
+COPY package*.json ./
+
+# Copy config files
+COPY next.config.js postcss.config.js tailwind.config.js tsconfig.json ./
+
+# Note: src/ and public/ are mounted at runtime for hot reload
+# Start dev server
+EXPOSE 3000
+CMD ["npm", "run", "dev"]
+
+# =============================================================================
+# Stage 3: Builder (compile for production)
+# =============================================================================
+FROM node:18-alpine AS builder
+
+WORKDIR /app
+
+# Copy dependencies
+COPY --from=deps /app/node_modules ./node_modules
+
+# Copy all source files
+COPY . .
+
+# Build for production
+RUN npm run build
+
+# =============================================================================
+# Stage 4: Production (optimized, minimal)
+# =============================================================================
+FROM node:18-alpine AS production
+
+WORKDIR /app
+
+ENV NODE_ENV=production
+
+# Copy only production dependencies
+COPY package*.json ./
+RUN npm ci --only=production && npm cache clean --force
+
+# Copy built application from builder
+COPY --from=builder /app/.next ./.next
+COPY --from=builder /app/public ./public
+COPY --from=builder /app/next.config.js ./
+
+EXPOSE 3000
+
+USER node
+
+CMD ["npm", "run", "start"]

33  cfg/amar/ctrl/logs.sh  Executable file
@@ -0,0 +1,33 @@
+#!/bin/bash
+# View amar room logs
+#
+# Usage:
+#   ./logs.sh              # All logs (tail)
+#   ./logs.sh -f           # Follow mode
+#   ./logs.sh backend      # Specific container
+#   ./logs.sh soleprint    # Soleprint logs
+
+set -e
+cd "$(dirname "$0")/.."
+
+FOLLOW=""
+TARGET=""
+
+for arg in "$@"; do
+    case $arg in
+        -f|--follow) FOLLOW="-f" ;;
+        *) TARGET="$arg" ;;
+    esac
+done
+
+if [ -z "$TARGET" ]; then
+    echo "=== Amar ==="
+    docker compose logs --tail=20 $FOLLOW
+    echo ""
+    echo "=== Soleprint ==="
+    (cd soleprint && docker compose logs --tail=20 $FOLLOW)
+elif [ "$TARGET" = "soleprint" ]; then
+    (cd soleprint && docker compose logs $FOLLOW)
+else
+    docker logs $FOLLOW "amar_$TARGET" 2>/dev/null || docker logs $FOLLOW "$TARGET"
+fi
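The flag/target loop in logs.sh is the same pattern the other ctrl scripts use. A standalone run of that loop, here fed `-f backend` in place of a real `"$@"`:

```shell
# Parse a mix of flags and a positional target in one pass.
FOLLOW=""
TARGET=""
set -- -f backend
for arg in "$@"; do
    case $arg in
        -f|--follow) FOLLOW="-f" ;;
        *) TARGET="$arg" ;;
    esac
done
echo "follow=$FOLLOW target=$TARGET"
```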

40  cfg/amar/ctrl/start.sh  Executable file
@@ -0,0 +1,40 @@
+#!/bin/bash
+# Start amar room (managed app + soleprint)
+#
+# Usage:
+#   ./start.sh             # Start all (foreground)
+#   ./start.sh -d          # Start all (detached)
+#   ./start.sh amar        # Start only amar
+#   ./start.sh soleprint   # Start only soleprint
+#   ./start.sh --build     # Rebuild images
+
+set -e
+cd "$(dirname "$0")/.."
+
+BUILD=""
+DETACH=""
+TARGET="all"
+
+for arg in "$@"; do
+    case $arg in
+        -d|--detached) DETACH="-d" ;;
+        --build) BUILD="--build" ;;
+        amar) TARGET="amar" ;;
+        soleprint) TARGET="soleprint" ;;
+    esac
+done
+
+if [ "$TARGET" = "all" ] || [ "$TARGET" = "amar" ]; then
+    echo "Starting amar..."
+    docker compose up $DETACH $BUILD
+fi
+
+if [ "$TARGET" = "all" ] || [ "$TARGET" = "soleprint" ]; then
+    echo "Starting soleprint..."
+    (cd soleprint && docker compose up $DETACH $BUILD)
+fi
+
+if [ -n "$DETACH" ]; then
+    echo ""
+    docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}" | grep -E "(amar|soleprint|NAMES)"
+fi

7  cfg/amar/ctrl/status.sh  Executable file
@@ -0,0 +1,7 @@
+#!/bin/bash
+# Show amar room status
+
+cd "$(dirname "$0")/.."
+
+echo "=== Containers ==="
+docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}" | grep -E "(amar|soleprint|NAMES)" || echo "No containers running"

25  cfg/amar/ctrl/stop.sh  Executable file
@@ -0,0 +1,25 @@
+#!/bin/bash
+# Stop amar room (managed app + soleprint)
+#
+# Usage:
+#   ./stop.sh             # Stop all
+#   ./stop.sh amar        # Stop only amar
+#   ./stop.sh soleprint   # Stop only soleprint
+
+set -e
+cd "$(dirname "$0")/.."
+
+TARGET="all"
+[ -n "$1" ] && TARGET="$1"
+
+if [ "$TARGET" = "all" ] || [ "$TARGET" = "soleprint" ]; then
+    echo "Stopping soleprint..."
+    (cd soleprint && docker compose down)
+fi
+
+if [ "$TARGET" = "all" ] || [ "$TARGET" = "amar" ]; then
+    echo "Stopping amar..."
+    docker compose down
+fi
+
+echo "Done."

117  cfg/amar/ctrl/xtras/reload-db.sh  Executable file
@@ -0,0 +1,117 @@
+#!/bin/bash
+# Smart database reload - only swaps if DB_DUMP changed
+#
+# Tracks which dump is currently loaded and only reloads if different.
+# Edit DB_DUMP in .env and run this script - it handles the rest.
+#
+# Usage:
+#   ./reload-db.sh           # Reload if DB_DUMP changed
+#   ./reload-db.sh --force   # Force reload even if same
+
+set -e
+
+cd "$(dirname "$0")/../.."
+
+FORCE=false
+if [ "$1" = "--force" ]; then
+    FORCE=true
+fi
+
+# Get config from .env
+DEPLOYMENT_NAME=$(grep "^DEPLOYMENT_NAME=" .env 2>/dev/null | cut -d'=' -f2 || echo "amar")
+POSTGRES_DB=$(grep "^POSTGRES_DB=" .env 2>/dev/null | cut -d'=' -f2 || echo "amarback")
+POSTGRES_USER=$(grep "^POSTGRES_USER=" .env 2>/dev/null | cut -d'=' -f2 || echo "postgres")
+DB_DUMP=$(grep "^DB_DUMP=" .env 2>/dev/null | cut -d'=' -f2)
+
+if [ -z "$DB_DUMP" ]; then
+    echo "Error: DB_DUMP not set in .env"
+    echo ""
+    echo "Add to .env:"
+    echo "  DB_DUMP=dev.sql"
+    echo ""
+    echo "Available dumps:"
+    ls -1 dumps/*.sql 2>/dev/null | sed 's/dumps\//  /' || echo "  No dumps found in dumps/"
+    exit 1
+fi
+
+DUMP_FILE="dumps/${DB_DUMP}"
+if [ ! -f "$DUMP_FILE" ]; then
+    echo "Error: Dump file not found: $DUMP_FILE"
+    echo ""
+    echo "Available dumps:"
+    ls -1 dumps/*.sql 2>/dev/null | sed 's/dumps\//  /' || echo "  No dumps found in dumps/"
+    exit 1
+fi
+
+DB_CONTAINER="${DEPLOYMENT_NAME}_db"
+BACKEND_CONTAINER="${DEPLOYMENT_NAME}_backend"
+STATE_FILE=".db_state"
+
+# Check if db container is running
+if ! docker ps --format "{{.Names}}" | grep -q "^${DB_CONTAINER}$"; then
+    echo "Error: Database container not running: $DB_CONTAINER"
+    echo "Start services first with: ./ctrl/start.sh -d"
+    exit 1
+fi
+
+# Check current state
+if [ -f "$STATE_FILE" ] && [ "$FORCE" = false ]; then
+    CURRENT_DUMP=$(cat "$STATE_FILE")
+    if [ "$CURRENT_DUMP" = "$DB_DUMP" ]; then
+        echo "Database already loaded with: $DB_DUMP"
+        echo "Use --force to reload anyway"
+        exit 0
+    fi
+    echo "Database dump changed: $CURRENT_DUMP → $DB_DUMP"
+else
+    if [ "$FORCE" = true ]; then
+        echo "Force reloading database with: $DB_DUMP"
+    else
+        echo "Loading database with: $DB_DUMP"
+    fi
+fi
+
+echo ""
+read -p "Continue with database reload? (y/N) " -n 1 -r
+echo ""
+if [[ ! $REPLY =~ ^[Yy]$ ]]; then
+    echo "Cancelled."
+    exit 0
+fi
+
+echo ""
+echo "[1/5] Stopping backend and celery services..."
+docker compose stop backend celery celery-beat 2>/dev/null || true
+
+echo ""
+echo "[2/5] Dropping and recreating database..."
+docker exec "$DB_CONTAINER" psql -U "$POSTGRES_USER" -d postgres -c "DROP DATABASE IF EXISTS $POSTGRES_DB WITH (FORCE);"
+docker exec "$DB_CONTAINER" psql -U "$POSTGRES_USER" -d postgres -c "CREATE DATABASE $POSTGRES_DB;"
+docker exec "$DB_CONTAINER" psql -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "CREATE EXTENSION IF NOT EXISTS postgis_topology;"
+
+echo ""
+echo "[3/5] Loading dump: $DB_DUMP..."
+docker exec -i "$DB_CONTAINER" psql -U "$POSTGRES_USER" -d "$POSTGRES_DB" < "$DUMP_FILE"
+
+echo ""
+echo "[4/5] Restarting services and running migrations..."
+docker compose start backend celery celery-beat
+
+echo "Waiting for backend to start..."
+sleep 3
+
+echo "Running migrations..."
+docker exec "$BACKEND_CONTAINER" python manage.py migrate --noinput
+
+echo ""
+echo "[5/5] Updating state..."
+echo "$DB_DUMP" > "$STATE_FILE"
+
+echo ""
+echo "=========================================="
+echo "  Database Reloaded Successfully"
+echo "=========================================="
+echo ""
+echo "Current dump: $DB_DUMP"
+echo "Database: $POSTGRES_DB"
+echo ""
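The `.db_state` mechanism above boils down to comparing the dump named in `.env` with the one recorded after the last load. A minimal, docker-free sketch of that decision:

```shell
# Reload only when the configured dump differs from the recorded one.
STATE_FILE=$(mktemp)
echo "dev.sql" > "$STATE_FILE"    # recorded by the previous reload
DB_DUMP="staging.sql"             # newly edited value in .env
CURRENT_DUMP=$(cat "$STATE_FILE")
if [ "$CURRENT_DUMP" = "$DB_DUMP" ]; then
    decision="skip"
else
    decision="reload"
fi
echo "$CURRENT_DUMP vs $DB_DUMP -> $decision"
```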

159267  cfg/amar/dumps/dev.sql  Normal file
File diff suppressed because it is too large

35  cfg/amar/soleprint/.env  Normal file
@@ -0,0 +1,35 @@
+# =============================================================================
+# SOLEPRINT - Amar Room Configuration
+# =============================================================================
+
+# =============================================================================
+# DEPLOYMENT
+# =============================================================================
+DEPLOYMENT_NAME=amar_soleprint
+
+# =============================================================================
+# NETWORK (shared with amar)
+# =============================================================================
+NETWORK_NAME=soleprint_network
+
+# =============================================================================
+# PATHS
+# =============================================================================
+SOLEPRINT_BARE_PATH=/home/mariano/wdir/spr/gen
+
+# =============================================================================
+# PORTS
+# =============================================================================
+SOLEPRINT_PORT=12000
+ARTERY_PORT=12001
+ATLAS_PORT=12002
+STATION_PORT=12003
+
+# =============================================================================
+# DATABASE (amar's DB for station tools)
+# =============================================================================
+DB_HOST=amar_db
+DB_PORT=5432
+DB_NAME=amarback
+DB_USER=postgres
+DB_PASSWORD=localdev123

34  cfg/amar/soleprint/docker-compose.yml  Normal file
@@ -0,0 +1,34 @@
+# Soleprint Services - Docker Compose
+#
+# Runs soleprint hub as a single service
+# Artery, atlas, station are accessed via path-based routing
+#
+# Usage:
+#   cd mainroom/soleprint && docker compose up -d
+
+services:
+  soleprint:
+    build:
+      context: ${SOLEPRINT_BARE_PATH}
+      dockerfile: Dockerfile
+    container_name: ${DEPLOYMENT_NAME}_soleprint
+    volumes:
+      - ${SOLEPRINT_BARE_PATH}:/app
+    ports:
+      - "${SOLEPRINT_PORT}:8000"
+    env_file:
+      - .env
+    environment:
+      # For single-port mode, all subsystems are internal routes
+      - ARTERY_EXTERNAL_URL=/artery
+      - ATLAS_EXTERNAL_URL=/atlas
+      - STATION_EXTERNAL_URL=/station
+    networks:
+      - default
+    # Use run.py for single-port bare-metal mode
+    command: uvicorn run:app --host 0.0.0.0 --port 8000 --reload
+
+networks:
+  default:
+    external: true
+    name: ${NETWORK_NAME}
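Because the compose file declares the network as `external: true`, it must exist before `docker compose up`. A hypothetical pre-flight sketch that only derives the creation command from the env file, without calling docker:

```shell
# Read NETWORK_NAME the same way the ctrl scripts do, and print the
# command that would create the missing external network.
env_file=$(mktemp)
echo "NETWORK_NAME=soleprint_network" > "$env_file"
NETWORK_NAME=$(grep "^NETWORK_NAME=" "$env_file" | cut -d'=' -f2)
create_cmd="docker network create $NETWORK_NAME"
echo "$create_cmd"
```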

@@ -11,7 +11,7 @@ NETWORK_NAME=soleprint_network
 # PATHS
 # =============================================================================
 # Path to generated soleprint (gen/ folder)
-SOLEPRINT_BARE_PATH=/tmp/soleprint-deploy
+SOLEPRINT_BARE_PATH=/home/mariano/wdir/spr/gen

 # =============================================================================
 # PORTS
@@ -25,6 +25,7 @@ STATION_PORT=12003
 # DATABASE (for station tools that need DB access)
 # =============================================================================
 # These are passed to station container when orchestrated with managed room
+# Leave empty for standalone soleprint; set in room config (e.g., cfg/amar/)
 DB_HOST=
 DB_PORT=5432
 DB_NAME=

@@ -1,157 +0,0 @@
-# Soleprint Control Scripts
-
-Control scripts for managing soleprint services via systemd (alternative to Docker deployment).
-
-## Structure
-
-```
-ctrl/
-├── .env.soleprint       # Shared configuration
-├── local/               # Scripts run from developer machine
-│   ├── commit.sh        # Commit changes across all repos
-│   ├── deploy.sh        # Full deployment workflow
-│   ├── init.sh          # Initial sync to server
-│   ├── push.sh          # Deploy to server (all by default)
-│   └── status.sh        # Git status of all repos
-└── server/              # Scripts run on server
-    ├── install-deps.sh  # Install Python deps (all by default)
-    ├── restart.sh       # Restart services (all by default)
-    ├── setup-cert.sh    # Setup SSL certificate
-    ├── setup-nginx.sh   # Create nginx config
-    └── setup-service.sh # Create systemd service
-```
-
-## Configuration
-
-Edit `.env.soleprint` to configure:
-
-```bash
-# Deployment
-DEPLOY_SERVER=mariano@mcrn.ar
-DEPLOY_REMOTE_PATH=~/soleprint
-
-# Local paths
-SOLEPRINT_BARE_PATH=/home/mariano/soleprint
-
-# Server paths
-SERVER_USER=mariano
-SERVER_SOLEPRINT_PATH=/home/mariano/soleprint
-SERVER_VENV_BASE=/home/mariano/venvs
-```
-
-## Design Principle
-
-**All services are the default.** No flags needed for common operations.
-
-```bash
-./push.sh          # Deploys all (default)
-./push.sh artery   # Deploy only artery (when needed)
-```
-
-See `DESIGN_SOLEPRINT.md` for detailed philosophy.
-
-## Local Scripts
-
-### commit.sh
-```bash
-./local/commit.sh "Your commit message"
-```
-
-### status.sh
-```bash
-./local/status.sh
-```
-
-### push.sh
-```bash
-./local/push.sh          # Push all services (default)
-./local/push.sh artery   # Push only artery
-```
-
-### deploy.sh
-```bash
-./local/deploy.sh
-# Then restart on server:
-# ssh mariano@mcrn.ar 'bash ~/soleprint/ctrl/server/restart.sh'
-```
-
-### init.sh
-```bash
-./local/init.sh   # Initial full sync (run once)
-```
-
-## Server Scripts
-
-### restart.sh
-```bash
-sudo ./server/restart.sh          # Restart all (default)
-sudo ./server/restart.sh artery   # Restart only artery
-```
-
-### install-deps.sh
-```bash
-./server/install-deps.sh          # Install all (default)
-./server/install-deps.sh artery   # Install only artery
-```
-
-### setup-service.sh
-```bash
-sudo ./server/setup-service.sh soleprint 12000 main:app
-sudo ./server/setup-service.sh artery 12001 main:app
-```
-
-### setup-nginx.sh
-```bash
-sudo ./server/setup-nginx.sh artery artery.mcrn.ar 12001
-```
-
-### setup-cert.sh
-```bash
-sudo ./server/setup-cert.sh artery.mcrn.ar
-```
-
-## Deployment Workflow
-
-### Initial Setup (once)
-
-Local:
-```bash
-cd ctrl/local
-./init.sh
-```
-
-Server:
-```bash
-cd ~/soleprint/ctrl/server
-./install-deps.sh
-sudo ./setup-service.sh soleprint 12000 main:app
-sudo ./setup-service.sh artery 12001 main:app
-sudo ./setup-service.sh album 12002 main:app
-sudo ./setup-service.sh ward 12003 main:app
-sudo ./setup-nginx.sh soleprint soleprint.mcrn.ar 12000
-sudo ./setup-nginx.sh artery artery.mcrn.ar 12001
-sudo ./setup-nginx.sh album album.mcrn.ar 12002
-sudo ./setup-nginx.sh ward ward.mcrn.ar 12003
-```
-
-### Regular Updates
-
-Local:
-```bash
-cd ctrl/local
-./commit.sh "Update feature X"
-./deploy.sh
-```
-
-Server:
-```bash
-sudo ~/soleprint/ctrl/server/restart.sh
-```
-
-## Room vs Soleprint Control
-
-- **core_room/ctrl/** - Manages full room (amar + soleprint) via Docker
-- **soleprint/ctrl/** - Manages soleprint services via systemd
-
-This directory provides systemd-based deployment as an alternative to Docker.
-For full room orchestration with Docker, use `core_room/ctrl/`.
#!/bin/bash
# Commit changes across all repos with the same message
# Usage: ./commit.sh "commit message"

set -e

MSG="${1:?Usage: $0 \"commit message\"}"

# Find soleprint bare-metal directory from SOLEPRINT_BARE_PATH or default
SOLEPRINT_DIR="${SOLEPRINT_BARE_PATH:-/home/mariano/soleprint}"
REPOS=("$SOLEPRINT_DIR" "$SOLEPRINT_DIR/artery" "$SOLEPRINT_DIR/album" "$SOLEPRINT_DIR/ward")

for repo in "${REPOS[@]}"; do
    name=$(basename "$repo")
    [ "$repo" = "$SOLEPRINT_DIR" ] && name="soleprint"

    if [ ! -d "$repo/.git" ]; then
        echo "=== $name: not a git repo, skipping ==="
        continue
    fi

    cd "$repo"

    if git diff --quiet && git diff --cached --quiet && [ -z "$(git ls-files --others --exclude-standard)" ]; then
        echo "=== $name: nothing to commit ==="
        continue
    fi

    echo "=== $name ==="
    git add -A
    git commit -m "$MSG"
done

echo "Done!"
#!/bin/bash
# Push all repos to the server (run locally)
# Usage: ./deploy.sh
# Then run restart.sh on the server as admin

set -e

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
CTRL_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"

# Load configuration
source "$CTRL_DIR/.env.soleprint" 2>/dev/null || true
REMOTE="${DEPLOY_SERVER:-mariano@mcrn.ar}"

echo "=== Pushing all ==="
"$SCRIPT_DIR/push.sh"

echo ""
echo "=== Push complete ==="
echo "Now restart services on server:"
echo "  ssh $REMOTE 'sudo systemctl restart soleprint artery atlas station'"
echo ""
echo "# Or restart a specific service:"
echo "# ssh $REMOTE 'sudo systemctl restart artery'"
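Each of these scripts sources `ctrl/.env.soleprint` and falls back to built-in defaults when a variable is unset. A sketch of that file follows; the variable names are the ones the scripts actually read, but the values shown are illustrative assumptions:

```shell
# ctrl/.env.soleprint -- optional overrides; every script falls back to
# its own ${VAR:-default} expansion when a variable is unset.
# Values below are illustrative, not canonical.
SOLEPRINT_BARE_PATH=/home/mariano/soleprint    # local repo root (commit.sh, status.sh)
DEPLOY_SERVER=mariano@mcrn.ar                  # ssh/rsync target (deploy.sh, push.sh)
DEPLOY_REMOTE_PATH=~/soleprint                 # destination path on the server
SERVER_USER=mariano                            # app user (install-deps.sh)
SERVER_SOLEPRINT_PATH=/home/mariano/soleprint  # code location on the server
SERVER_VENV_BASE=/home/mariano/venvs           # per-app virtualenv root
```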
#!/bin/bash
# Initial full sync of soleprint to server
# Run once to set up, then use push.sh for updates

set -e

# Load configuration
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
CTRL_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
source "$CTRL_DIR/.env.soleprint" 2>/dev/null || true

SOLEPRINT_DIR="${SOLEPRINT_BARE_PATH:-/home/mariano/soleprint}"
REMOTE="${DEPLOY_SERVER:-mariano@mcrn.ar}"
REMOTE_DIR="${DEPLOY_REMOTE_PATH:-~/soleprint}"

echo "=== Initial sync of soleprint ==="
echo "From: $SOLEPRINT_DIR"
echo "To:   $REMOTE:$REMOTE_DIR"

rsync -avz \
    --filter=':- .gitignore' \
    --exclude '.git' \
    --exclude '.env' \
    "$SOLEPRINT_DIR/" "$REMOTE:$REMOTE_DIR/"

echo ""
echo "Done! Now on server run:"
echo "  cd ~/soleprint"
echo "  # Use core_room/soleprint/tools/server/setup-*.sh scripts for initial setup"
#!/bin/bash
# Deploy repos via rsync
# Usage: ./push.sh [target]
# Example: ./push.sh            (deploys all: soleprint, artery, atlas, station)
#          ./push.sh artery     (deploys only artery)
#          ./push.sh soleprint  (deploys only soleprint root, no sub-repos)

set -e

TARGET="${1:-all}"

# Load configuration
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
CTRL_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
source "$CTRL_DIR/.env.soleprint" 2>/dev/null || true

SOLEPRINT_DIR="${SOLEPRINT_BARE_PATH:-/home/mariano/wdir/spr/gen}"
REMOTE="${DEPLOY_SERVER:-mariano@mcrn.ar}"
REMOTE_BASE="${DEPLOY_REMOTE_PATH:-~/soleprint}"

# Handle all (default)
if [ "$TARGET" = "all" ]; then
    echo "=== Deploying all services ==="
    for target in soleprint artery atlas station; do
        "$0" "$target"
        echo ""
    done
    echo "=== All done ==="
    exit 0
fi

if [ "$TARGET" = "soleprint" ]; then
    # Push only root files (no sub-repos)
    echo "=== Deploying soleprint (root only) ==="
    rsync -avz \
        --filter=':- .gitignore' \
        --exclude '.git' \
        --exclude '.env' \
        --exclude '.venv' \
        --exclude 'artery/' \
        --exclude 'atlas/' \
        --exclude 'station/' \
        "$SOLEPRINT_DIR/" "$REMOTE:$REMOTE_BASE/"
    echo "Done!"
    exit 0
fi

LOCAL_DIR="$SOLEPRINT_DIR/$TARGET"
REMOTE_DIR="$REMOTE_BASE/$TARGET"

if [ ! -d "$LOCAL_DIR" ]; then
    echo "Error: $LOCAL_DIR does not exist"
    exit 1
fi

echo "=== Deploying $TARGET ==="
echo "From: $LOCAL_DIR"
echo "To:   $REMOTE:$REMOTE_DIR"

rsync -avz \
    --filter=':- .gitignore' \
    --exclude '.git' \
    --exclude '.env' \
    "$LOCAL_DIR/" "$REMOTE:$REMOTE_DIR/"

echo "Done!"
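push.sh handles the `all` target by re-invoking itself once per concrete target (`"$0" "$target"`). The same fan-out pattern as a self-contained function, for illustration only (the `dispatch` name and `echo` body are stand-ins, not part of the script):

```shell
# Fan-out dispatch: "all" expands into one call per concrete target,
# mirroring push.sh's `"$0" "$target"` self-recursion.
dispatch() {
    local target="${1:-all}"
    if [ "$target" = "all" ]; then
        for t in soleprint artery atlas station; do
            dispatch "$t"
        done
        return 0
    fi
    echo "deploying $target"   # the real script runs rsync here
}
dispatch all
```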
#!/bin/bash
# Show git status of all repos
# Usage: ./status.sh

# Find soleprint bare-metal directory from SOLEPRINT_BARE_PATH or default
SOLEPRINT_DIR="${SOLEPRINT_BARE_PATH:-/home/mariano/soleprint}"
REPOS=("$SOLEPRINT_DIR" "$SOLEPRINT_DIR/artery" "$SOLEPRINT_DIR/album" "$SOLEPRINT_DIR/ward")

for repo in "${REPOS[@]}"; do
    name=$(basename "$repo")
    [ "$repo" = "$SOLEPRINT_DIR" ] && name="soleprint"

    if [ ! -d "$repo/.git" ]; then
        echo "=== $name: not a git repo ==="
        continue
    fi

    cd "$repo"
    branch=$(git branch --show-current)

    # Count files in each change state
    staged=$(git diff --cached --numstat | wc -l)
    unstaged=$(git diff --numstat | wc -l)
    untracked=$(git ls-files --others --exclude-standard | wc -l)

    if [ "$staged" -eq 0 ] && [ "$unstaged" -eq 0 ] && [ "$untracked" -eq 0 ]; then
        echo "=== $name ($branch): clean ==="
    else
        echo "=== $name ($branch): +$staged staged, ~$unstaged modified, ?$untracked untracked ==="
        git status --short
    fi
    echo
done
#!/bin/bash
# Install/update dependencies for apps
# Usage: ./install-deps.sh [app-name]
# Example: ./install-deps.sh          (installs deps for all services)
#          ./install-deps.sh artery   (installs deps for artery only)

set -e

APP_NAME="${1:-all}"

# Load configuration
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
CTRL_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
source "$CTRL_DIR/.env.soleprint" 2>/dev/null || true

APP_USER="${SERVER_USER:-mariano}"
SOLEPRINT_PATH="${SERVER_SOLEPRINT_PATH:-/home/mariano/soleprint}"
VENV_BASE="${SERVER_VENV_BASE:-/home/mariano/venvs}"

# Handle all (default)
if [ "$APP_NAME" = "all" ]; then
    echo "=== Installing deps for all services ==="
    for app in soleprint artery album ward; do
        echo ""
        echo "--- $app ---"
        "$0" "$app"
    done
    echo ""
    echo "=== All done ==="
    exit 0
fi

VENV_DIR="$VENV_BASE/$APP_NAME"

if [ "$APP_NAME" = "soleprint" ]; then
    REQ_FILE="$SOLEPRINT_PATH/requirements.txt"
else
    REQ_FILE="$SOLEPRINT_PATH/$APP_NAME/requirements.txt"
fi

if [ ! -f "$REQ_FILE" ]; then
    echo "Error: $REQ_FILE not found"
    exit 1
fi

if [ ! -d "$VENV_DIR" ]; then
    echo "Creating venv: $VENV_DIR"
    python3 -m venv "$VENV_DIR"
fi

echo "Installing deps from $REQ_FILE"
source "$VENV_DIR/bin/activate"
pip install -r "$REQ_FILE"
deactivate

echo "Done!"
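install-deps.sh derives both the requirements file and the virtualenv location from the app name: the soleprint root keeps its `requirements.txt` at the top level, while every other app has its own subdirectory. That mapping in isolation, using the script's default paths:

```shell
# App name -> requirements file and venv path, as install-deps.sh computes
# them (default paths from the script; pure string logic, no side effects)
SOLEPRINT_PATH=/home/mariano/soleprint
VENV_BASE=/home/mariano/venvs
for app in soleprint artery; do
    if [ "$app" = "soleprint" ]; then
        req="$SOLEPRINT_PATH/requirements.txt"          # root special case
    else
        req="$SOLEPRINT_PATH/$app/requirements.txt"     # per-app subdirectory
    fi
    echo "$app -> $req (venv: $VENV_BASE/$app)"
done
```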
#!/bin/bash
# Restart soleprint services
# Usage: ./restart.sh [service]
# Example: ./restart.sh          (restarts all services)
#          ./restart.sh artery   (restarts only artery)

set -e

TARGET="${1:-all}"

# Handle all (default)
if [ "$TARGET" = "all" ]; then
    echo "Restarting all services..."
    systemctl restart soleprint artery album ward
    echo "Status:"
    systemctl status soleprint artery album ward --no-pager | grep -E "●|Active:"
    exit 0
fi

echo "Restarting $TARGET..."
systemctl restart "$TARGET"

echo "Status:"
systemctl status "$TARGET" --no-pager | grep -E "●|Active:"
#!/bin/bash
# Install/update SSL certificate for a subdomain
# Usage: ./setup-cert.sh <subdomain>
# Example: ./setup-cert.sh soleprint.mcrn.ar

set -e

SUBDOMAIN="${1:?Usage: $0 <subdomain>}"

echo "=== Setting up SSL cert for $SUBDOMAIN ==="

# Check if certbot is installed
if ! command -v certbot &> /dev/null; then
    echo "Installing certbot..."
    apt update
    apt install -y certbot python3-certbot-nginx
fi

# Get/renew certificate
certbot --nginx -d "$SUBDOMAIN" --non-interactive --agree-tos --register-unsafely-without-email

echo ""
echo "Done! Certificate installed for $SUBDOMAIN"
echo "Auto-renewal is enabled via systemd timer"
#!/bin/bash
# Creates nginx config for a FastAPI app
# Usage: ./setup-nginx.sh <app-name> <subdomain> <port>
# Example: ./setup-nginx.sh artery artery.mcrn.ar 12001

set -e

APP_NAME="${1:?Usage: $0 <app-name> <subdomain> <port>}"
SUBDOMAIN="${2:?Usage: $0 <app-name> <subdomain> <port>}"
PORT="${3:?Usage: $0 <app-name> <subdomain> <port>}"

NGINX_CONF="/etc/nginx/sites-available/$APP_NAME"

echo "Creating nginx config: $NGINX_CONF"

sudo tee "$NGINX_CONF" > /dev/null << EOF
server {
    listen 80;
    server_name $SUBDOMAIN;
    return 301 https://\$host\$request_uri;
}

server {
    listen 443 ssl;
    server_name $SUBDOMAIN;

    ssl_certificate /etc/letsencrypt/live/mcrn.ar/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mcrn.ar/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_pass http://127.0.0.1:$PORT;
        proxy_set_header Host \$host;
        proxy_set_header X-Real-IP \$remote_addr;
        proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto \$scheme;
        proxy_read_timeout 300;
    }
}
EOF

echo "Enabling site..."
sudo ln -sf "$NGINX_CONF" /etc/nginx/sites-enabled/

echo "Testing nginx config..."
sudo nginx -t

echo "Reloading nginx..."
sudo systemctl reload nginx

echo ""
echo "Done! Site available at https://$SUBDOMAIN"
echo "Note: Make sure DNS points $SUBDOMAIN to this server"
#!/bin/bash
# Creates systemd service for a FastAPI app
# Usage: ./setup-service.sh <app-name> <port> <app-module>
# Example: ./setup-service.sh artery 12001 main:app

set -e

APP_NAME="${1:?Usage: $0 <app-name> <port> <app-module>}"
PORT="${2:?Usage: $0 <app-name> <port> <app-module>}"
APP_MODULE="${3:-main:app}"

APP_USER="mariano"
VENV_DIR="/home/$APP_USER/venvs/$APP_NAME"
SOLEPRINT_PATH="/home/$APP_USER/soleprint"  # root .env shared by all services

# soleprint root is a special case
if [ "$APP_NAME" = "soleprint" ]; then
    WORK_DIR="$SOLEPRINT_PATH"
else
    WORK_DIR="$SOLEPRINT_PATH/$APP_NAME"
fi
SERVICE_FILE="/etc/systemd/system/${APP_NAME}.service"

echo "Creating systemd service: $SERVICE_FILE"

sudo tee "$SERVICE_FILE" > /dev/null << EOF
[Unit]
Description=$APP_NAME FastAPI service
After=network.target

[Service]
User=$APP_USER
Group=$APP_USER
WorkingDirectory=$WORK_DIR
Environment="PATH=$VENV_DIR/bin"
EnvironmentFile=$SOLEPRINT_PATH/.env
ExecStart=$VENV_DIR/bin/uvicorn $APP_MODULE --host 127.0.0.1 --port $PORT
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

echo "Reloading systemd..."
sudo systemctl daemon-reload

echo "Enabling service..."
sudo systemctl enable "$APP_NAME"

echo ""
echo "Done! Service commands:"
echo "  sudo systemctl start $APP_NAME"
echo "  sudo systemctl status $APP_NAME"
echo "  sudo journalctl -u $APP_NAME -f"
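Because setup-service.sh writes the unit file straight to `/etc/systemd/system` via `sudo tee`, it can help to preview the expanded unit before installing it. A dry-run sketch that renders an abridged version of the same template to a variable instead (the values here are illustrative examples; no sudo needed):

```shell
# Preview the systemd unit setup-service.sh would generate, without
# touching /etc. Example values for the artery app (port 12001).
APP_NAME=artery PORT=12001 APP_MODULE=main:app APP_USER=mariano
VENV_DIR="/home/$APP_USER/venvs/$APP_NAME"
WORK_DIR="/home/$APP_USER/soleprint/$APP_NAME"
unit=$(cat << EOF
[Unit]
Description=$APP_NAME FastAPI service
After=network.target

[Service]
User=$APP_USER
WorkingDirectory=$WORK_DIR
Environment="PATH=$VENV_DIR/bin"
ExecStart=$VENV_DIR/bin/uvicorn $APP_MODULE --host 127.0.0.1 --port $PORT
Restart=always

[Install]
WantedBy=multi-user.target
EOF
)
printf '%s\n' "$unit"
```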
#!/bin/bash
#
# Sync contract tests from amar_django_back_contracts to ward/tools/tester
#
# Usage: ./sync-tests.sh

set -e

# Paths
SOURCE_REPO="/home/mariano/wdir/ama/amar_django_back_contracts"
DEST_DIR="/home/mariano/wdir/ama/soleprint/ward/tools/tester/tests"

# Colors
GREEN='\033[0;32m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

echo -e "${BLUE}=== Syncing Contract Tests ===${NC}"
echo "Source: $SOURCE_REPO/tests/contracts"
echo "Dest:   $DEST_DIR"
echo

# Check source exists
if [ ! -d "$SOURCE_REPO/tests/contracts" ]; then
    echo "Error: Source directory not found: $SOURCE_REPO/tests/contracts"
    exit 1
fi

# Create destination if it doesn't exist
mkdir -p "$DEST_DIR"

# Sync test files (preserve structure)
echo -e "${BLUE}Copying test files...${NC}"

# Copy the contract test structure
rsync -av --delete \
    --include="*/" \
    --include="test_*.py" \
    --include="__init__.py" \
    --include="base*.py" \
    --include="conftest.py" \
    --include="endpoints.py" \
    --include="helpers.py" \
    --exclude="*" \
    "$SOURCE_REPO/tests/contracts/" \
    "$DEST_DIR/"

# Remove base_api.py and base_live.py (we only need the pure-HTTP base.py)
rm -f "$DEST_DIR/base_api.py" "$DEST_DIR/base_live.py"

# Create a simple base.py that uses tester's base class
cat > "$DEST_DIR/base.py" << 'EOF'
"""
Contract Tests - Base Class

Uses tester's HTTP base class for framework-agnostic testing.
"""

# Import from tester's base
import sys
from pathlib import Path

# Add tester to path if needed
tester_path = Path(__file__).parent.parent
if str(tester_path) not in sys.path:
    sys.path.insert(0, str(tester_path))

from base import ContractTestCase

__all__ = ["ContractTestCase"]
EOF

echo
echo -e "${GREEN}✓ Tests synced successfully${NC}"
echo
echo "Test structure:"
find "$DEST_DIR" -name "test_*.py" -type f | sed 's|'"$DEST_DIR"'||' | sort

echo
echo -e "${BLUE}Next steps:${NC}"
echo "1. Run tester locally: cd /home/mariano/wdir/ama/soleprint/ward && python -m tools.tester"
echo "2. Deploy to server: cd /home/mariano/wdir/ama/soleprint/deploy && ./deploy.sh"