soleprint init commit
This commit is contained in:
4
.gitignore
vendored
Normal file
4
.gitignore
vendored
Normal file
@@ -0,0 +1,4 @@
|
||||
fails
|
||||
gen
|
||||
def
|
||||
__pycache__
|
||||
191
CLAUDE.md
Normal file
191
CLAUDE.md
Normal file
@@ -0,0 +1,191 @@
|
||||
# Soleprint - Development Control Room
|
||||
|
||||
## What Is This?
|
||||
|
||||
Soleprint is a **development workflow platform** - a self-contained environment where you can run, test, and document everything in isolation. Born from the friction of working on small teams where testing required PRs, documentation was scattered, and quick API connectors took too long to set up.
|
||||
|
||||
**Core idea:** BDD → Gherkin → Backend/Frontend Tests, with reusable connectors and tools that work across projects.
|
||||
|
||||
**Name:** Soleprint - "Cada paso deja huella" / "Each step leaves a mark"
|
||||
|
||||
## Project Structure
|
||||
|
||||
```
|
||||
spr/
|
||||
├── CLAUDE.md # You are here
|
||||
├── README.md # User-facing docs
|
||||
├── schema.json # Source of truth for models
|
||||
├── config/ # Framework configurations
|
||||
│ └── soleprint.config.json
|
||||
│
|
||||
├── artery/ # ACTUAL source - Vital connections
|
||||
│ ├── veins/ # Single-responsibility connectors
|
||||
│ ├── pulses/ # Composed: Vein + Room + Depot
|
||||
│ ├── rooms/ # Environment configs
|
||||
│ └── depots/ # Data storage
|
||||
│
|
||||
├── atlas/ # ACTUAL source - Documentation system
|
||||
│ ├── templates/ # Gherkin, BDD patterns
|
||||
│ ├── books/ # Composed: Template + Depot
|
||||
│ └── depots/ # Data storage
|
||||
│
|
||||
├── station/ # ACTUAL source - Tools & execution
|
||||
│ ├── tools/ # Utilities, generators, runners
|
||||
│ │ ├── generator/ # Model/framework generator
|
||||
│ │ ├── datagen/ # Test data generation
|
||||
│ │ ├── tester/ # Test runner (BDD/playwright)
|
||||
│ │ └── ...
|
||||
│ ├── desks/ # Composed: Cabinet + Room + Depots
|
||||
│ ├── rooms/ # Environment configs
|
||||
│ └── depots/ # Data storage
|
||||
│
|
||||
├── data/ # Site content as JSON files
|
||||
│
|
||||
└── gen/ # RUNNABLE instance (run from here)
|
||||
├── main.py # Hub entry point
|
||||
├── index.html # Landing page
|
||||
├── requirements.txt
|
||||
├── models/ # Generated Pydantic models
|
||||
├── data/ # Symlink → ../data/
|
||||
├── artery/ # Symlink → ../artery/
|
||||
├── atlas/ # Symlink → ../atlas/
|
||||
└── station/ # Symlink → ../station/
|
||||
```
|
||||
|
||||
## The Three Systems
|
||||
|
||||
| System | Purpose | Tagline |
|
||||
|--------|---------|---------|
|
||||
| **Artery** | Connectors to external services | Todo lo vital |
|
||||
| **Atlas** | Actionable documentation | Mapeando el recorrido |
|
||||
| **Station** | Tools, environments, execution | Centro de control |
|
||||
|
||||
## Model Hierarchy
|
||||
|
||||
```
|
||||
Shared: Room (configs), Depot (data)
|
||||
System-specific: Vein (artery), Template (atlas), Tool (station)
|
||||
Composed: Pulse (artery), Book (atlas), Desk (station)
|
||||
```
|
||||
|
||||
**Formulas:**
|
||||
- Pulse = Vein + Room + Depot
|
||||
- Book = Template + Depot
|
||||
- Desk = Cabinet + Room + Depots
|
||||
|
||||
## Key Concepts
|
||||
|
||||
### Rooms (Environments)
|
||||
A **Room** is an environment with soleprint context, features, and conventions:
|
||||
- Every room has a `ctrl/` folder with commands that act only on that room
|
||||
- Tools are pluggable into any room
|
||||
- **core_room** is special: orchestrates soleprint + managed sites (Docker lives outside soleprint)
|
||||
|
||||
### The Generator
|
||||
Lives in `station/tools/generator/`. It:
|
||||
1. Reads `schema.json` (source of truth)
|
||||
2. Generates Pydantic models to `gen/models/`
|
||||
3. Model generation is **infrequent** - only when schema changes
|
||||
|
||||
**Bootstrap:** Generator runs standalone (no model dependencies), generates models, then station can use them.
|
||||
|
||||
### Development Symlinks
|
||||
For development, `gen/` contains symlinks back to source:
|
||||
- `gen/artery/` → `../artery/`
|
||||
- `gen/atlas/` → `../atlas/`
|
||||
- `gen/station/` → `../station/`
|
||||
- `gen/data/` → `../data/`
|
||||
|
||||
This means: edit in `spr/artery/`, run from `spr/gen/`, no regeneration needed.
|
||||
|
||||
**Production:** Copy everything (resolve symlinks).
|
||||
|
||||
### Naming Flexibility
|
||||
Code inside soleprint components should NOT have imports too tied to "artery", "atlas", "station" names. At some point these could be swapped for different naming schemes (for teams with different domain language).
|
||||
|
||||
## Development Workflow
|
||||
|
||||
### Running Locally
|
||||
```bash
|
||||
cd spr/gen
|
||||
python main.py # Hub on :12000
|
||||
```
|
||||
|
||||
### Regenerating Models (infrequent)
|
||||
```bash
|
||||
cd spr/station/tools/generator
|
||||
python -m generators.orchestrator --config ../../../config/soleprint.config.json --output ../../../gen
|
||||
```
|
||||
|
||||
### Worktrees
|
||||
Feature development in: `/home/mariano/wdir/wts/spr/<branch>`
|
||||
|
||||
Planned:
|
||||
- `databrowse` - Data browser tool (separate CLAUDE.md)
|
||||
- `sbwrapper` - Sidebar wrapper UI for core_room (separate CLAUDE.md)
|
||||
|
||||
## External Dependencies
|
||||
|
||||
| What | Location | Notes |
|
||||
|------|----------|-------|
|
||||
| Core Room | `core_nest/` | Orchestration + Docker (outside spr) |
|
||||
| Amar Backend | `ama/amar_django_back` | Test subject |
|
||||
| Amar Frontend | `ama/amar_frontend` | Test subject |
|
||||
| Pawprint | `ama/pawprint` | Legacy - migrate tools then deprecate |
|
||||
|
||||
## Tools Status
|
||||
|
||||
| Tool | Source | Status | Notes |
|
||||
|------|--------|--------|-------|
|
||||
| generator | fails/02/generators | Move to station/tools/ | Refactor file IO |
|
||||
| datagen | pawprint/ward/tools | Consolidate | Merge with tester/generate_test_data |
|
||||
| tester | pawprint/ward/tools | Advanced | Full BDD/playwright |
|
||||
| databrowse | - | WIP | Separate worktree |
|
||||
| hub | pawprint/ward/tools | Idea | Port management |
|
||||
| infra | pawprint/ward/tools | Idea | Cloud deploy scripts |
|
||||
| graphgen | pawprint/ward/tools | Idea | Graph generation |
|
||||
|
||||
## Ports
|
||||
|
||||
| Service | Port |
|
||||
|---------|------|
|
||||
| Hub (soleprint) | 12000 |
|
||||
| Artery | 12001 |
|
||||
| Atlas | 12002 |
|
||||
| Station | 12003 |
|
||||
|
||||
## Current State
|
||||
|
||||
**Done:**
|
||||
- [x] Model schema defined (pawprint/models/schema.json)
|
||||
- [x] Generator working (fails/02/generators/)
|
||||
- [x] Generated instance in gen/
|
||||
|
||||
**Next (in order):**
|
||||
1. [ ] Create folder structure (artery/, atlas/, station/, config/)
|
||||
2. [ ] Move schema.json to spr/
|
||||
3. [ ] Move generator to station/tools/generator/
|
||||
4. [ ] Move config to spr/config/
|
||||
5. [ ] Set up symlinks in gen/
|
||||
6. [ ] Consolidate tools from pawprint/ward/tools/
|
||||
7. [ ] Integrate core_room (sbwrapper)
|
||||
8. [ ] Worktrees for databrowse, sbwrapper
|
||||
|
||||
## Files Ignored (gitignore)
|
||||
|
||||
- `fails/` - Previous attempts, reference only
|
||||
- `gen/` - Generated/runnable, not source (except models/)
|
||||
- `def/` - Definition drafts
|
||||
|
||||
## Quick Reference
|
||||
|
||||
```bash
|
||||
# Start dev server
|
||||
cd gen && python main.py
|
||||
|
||||
# Health check
|
||||
curl localhost:12000/health
|
||||
|
||||
# View systems
|
||||
open http://localhost:12000
|
||||
```
|
||||
96
README.md
Normal file
96
README.md
Normal file
@@ -0,0 +1,96 @@
|
||||
# Soleprint
|
||||
|
||||
> Cada paso deja huella / Each step leaves a mark
|
||||
|
||||
Development workflow and documentation platform. Run, test, and document everything in one place.
|
||||
|
||||
## Quick Start
|
||||
|
||||
```bash
|
||||
cd gen
|
||||
pip install -r requirements.txt
|
||||
python main.py
|
||||
# Visit http://localhost:12000
|
||||
```
|
||||
|
||||
## Systems
|
||||
|
||||
| | System | What it does |
|
||||
|---|--------|--------------|
|
||||
| 💉 | **Artery** | Connectors to external services (Jira, Slack, APIs) |
|
||||
| 🗺️ | **Atlas** | Actionable documentation (BDD, Gherkin, specs) |
|
||||
| 🎛️ | **Station** | Tools, environments, test runners |
|
||||
|
||||
## Structure
|
||||
|
||||
```
|
||||
spr/
|
||||
├── schema.json # Model definitions (source of truth)
|
||||
├── config/ # Framework configuration
|
||||
├── artery/ # Connectors (source)
|
||||
├── atlas/ # Documentation (source)
|
||||
├── station/ # Tools (source)
|
||||
│ └── tools/
|
||||
│ ├── generator/ # Generates models & structure
|
||||
│ ├── datagen/ # Test data generation
|
||||
│ └── tester/ # Test runner
|
||||
├── data/ # Content (JSON)
|
||||
└── gen/ # Runnable instance (run from here)
|
||||
```
|
||||
|
||||
## Components
|
||||
|
||||
**Shared:**
|
||||
- **Room** - Environment configuration
|
||||
- **Depot** - Data storage
|
||||
|
||||
**System-specific:**
|
||||
- **Vein** (Artery) - Single connector
|
||||
- **Template** (Atlas) - Doc pattern
|
||||
- **Tool** (Station) - Utility
|
||||
|
||||
**Composed:**
|
||||
- **Pulse** = Vein + Room + Depot
|
||||
- **Book** = Template + Depot
|
||||
- **Desk** = Cabinet + Room + Depots
|
||||
|
||||
## Development
|
||||
|
||||
### Run locally
|
||||
```bash
|
||||
cd gen
|
||||
python main.py
|
||||
```
|
||||
|
||||
### Regenerate models (when schema.json changes)
|
||||
```bash
|
||||
cd station/tools/generator
|
||||
python -m generators.orchestrator --output ../../../gen
|
||||
```
|
||||
|
||||
## Ports
|
||||
|
||||
| Service | Port |
|
||||
|---------|------|
|
||||
| Hub | 12000 |
|
||||
| Artery | 12001 |
|
||||
| Atlas | 12002 |
|
||||
| Station | 12003 |
|
||||
|
||||
## Background
|
||||
|
||||
Born from the friction of:
|
||||
- Testing requiring PRs on small teams
|
||||
- Documentation scattered across tools
|
||||
- Quick API connectors taking too long to set up
|
||||
- No self-contained environment to experiment freely
|
||||
|
||||
Soleprint lets you run everything in isolation while building reusable pieces.
|
||||
|
||||
## License
|
||||
|
||||
TBD
|
||||
|
||||
---
|
||||
|
||||
*Built for small teams who need to move fast without breaking things.*
|
||||
11
artery/__init__.py
Normal file
11
artery/__init__.py
Normal file
@@ -0,0 +1,11 @@
|
||||
"""
|
||||
Artery - Todo lo vital
|
||||
|
||||
Connectors to external services (Jira, Slack, APIs, etc.)
|
||||
|
||||
Components:
|
||||
- veins/ Single-responsibility connectors
|
||||
- pulses/ Composed: Vein + Room + Depot
|
||||
- rooms/ Environment configs
|
||||
- depots/ Data storage
|
||||
"""
|
||||
10
atlas/__init__.py
Normal file
10
atlas/__init__.py
Normal file
@@ -0,0 +1,10 @@
|
||||
"""
|
||||
Atlas - Documentation System
|
||||
|
||||
Mapeando el recorrido / Mapping the journey
|
||||
|
||||
Components:
|
||||
- templates/ : Documentation patterns (Gherkin, BDD)
|
||||
- books/ : Composed documentation (Template + Depot)
|
||||
- depots/ : Data storage
|
||||
"""
|
||||
128
config/soleprint.config.json
Normal file
128
config/soleprint.config.json
Normal file
@@ -0,0 +1,128 @@
|
||||
{
|
||||
"framework": {
|
||||
"name": "soleprint",
|
||||
"slug": "soleprint",
|
||||
"version": "0.1.0",
|
||||
"description": "Development workflow and documentation system",
|
||||
"tagline": "Mapping development footprints",
|
||||
"icon": "👣",
|
||||
"hub_port": 12000
|
||||
},
|
||||
"systems": [
|
||||
{
|
||||
"key": "data_flow",
|
||||
"name": "artery",
|
||||
"slug": "artery",
|
||||
"title": "Artery",
|
||||
"tagline": "Todo lo vital",
|
||||
"port": 12001,
|
||||
"icon": "💉"
|
||||
},
|
||||
{
|
||||
"key": "documentation",
|
||||
"name": "atlas",
|
||||
"slug": "atlas",
|
||||
"title": "Atlas",
|
||||
"tagline": "Documentación accionable",
|
||||
"port": 12002,
|
||||
"icon": "🗺️"
|
||||
},
|
||||
{
|
||||
"key": "execution",
|
||||
"name": "station",
|
||||
"slug": "station",
|
||||
"title": "Station",
|
||||
"tagline": "Monitores, Entornos y Herramientas",
|
||||
"port": 12003,
|
||||
"icon": "🎛️"
|
||||
}
|
||||
],
|
||||
"components": {
|
||||
"shared": {
|
||||
"config": {
|
||||
"name": "room",
|
||||
"title": "Room",
|
||||
"description": "Runtime environment configuration",
|
||||
"plural": "rooms"
|
||||
},
|
||||
"data": {
|
||||
"name": "depot",
|
||||
"title": "Depot",
|
||||
"description": "Data storage / provisions",
|
||||
"plural": "depots"
|
||||
}
|
||||
},
|
||||
"data_flow": {
|
||||
"connector": {
|
||||
"name": "vein",
|
||||
"title": "Vein",
|
||||
"description": "Single responsibility connector",
|
||||
"plural": "veins"
|
||||
},
|
||||
"composed": {
|
||||
"name": "pulse",
|
||||
"title": "Pulse",
|
||||
"description": "Composed data flow",
|
||||
"plural": "pulses",
|
||||
"formula": "Vein + Room + Depot"
|
||||
}
|
||||
},
|
||||
"documentation": {
|
||||
"pattern": {
|
||||
"name": "template",
|
||||
"title": "Template",
|
||||
"description": "Documentation pattern",
|
||||
"plural": "templates"
|
||||
},
|
||||
"library": {
|
||||
"name": "book",
|
||||
"title": "Book",
|
||||
"description": "Documentation library"
|
||||
},
|
||||
"composed": {
|
||||
"name": "book",
|
||||
"title": "Book",
|
||||
"description": "Composed documentation",
|
||||
"plural": "books",
|
||||
"formula": "Template + Depot"
|
||||
}
|
||||
},
|
||||
"execution": {
|
||||
"utility": {
|
||||
"name": "tool",
|
||||
"title": "Tool",
|
||||
"description": "Execution utility",
|
||||
"plural": "tools"
|
||||
},
|
||||
"watcher": {
|
||||
"name": "monitor",
|
||||
"title": "Monitor",
|
||||
"description": "Service monitor",
|
||||
"plural": "monitors"
|
||||
},
|
||||
"container": {
|
||||
"name": "cabinet",
|
||||
"title": "Cabinet",
|
||||
"description": "Tool container",
|
||||
"plural": "cabinets"
|
||||
},
|
||||
"workspace": {
|
||||
"name": "desk",
|
||||
"title": "Desk",
|
||||
"description": "Execution workspace"
|
||||
},
|
||||
"workbench": {
|
||||
"name": "desk",
|
||||
"title": "Desk",
|
||||
"description": "Work surface"
|
||||
},
|
||||
"composed": {
|
||||
"name": "desk",
|
||||
"title": "Desk",
|
||||
"description": "Composed execution bundle",
|
||||
"plural": "desks",
|
||||
"formula": "Cabinet + Room + Depots"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
79
data/books.json
Normal file
79
data/books.json
Normal file
@@ -0,0 +1,79 @@
|
||||
{
|
||||
"items": [
|
||||
{
|
||||
"name": "arch-model",
|
||||
"slug": "arch-model",
|
||||
"title": "Architecture Model",
|
||||
"status": "ready",
|
||||
"template": null,
|
||||
"larder": {
|
||||
"name": "arch-model",
|
||||
"slug": "arch-model",
|
||||
"title": "Architecture Model",
|
||||
"status": "ready",
|
||||
"source_template": null,
|
||||
"data_path": "album/book/arch-model"
|
||||
},
|
||||
"output_larder": null,
|
||||
"system": "album"
|
||||
},
|
||||
{
|
||||
"name": "feature-flow",
|
||||
"slug": "feature-flow",
|
||||
"title": "Feature Flow Pipeline",
|
||||
"status": "ready",
|
||||
"template": null,
|
||||
"larder": {
|
||||
"name": "feature-flow",
|
||||
"slug": "feature-flow",
|
||||
"title": "Feature Flow Pipeline",
|
||||
"status": "ready",
|
||||
"source_template": null,
|
||||
"data_path": "album/book/feature-flow"
|
||||
},
|
||||
"output_larder": null,
|
||||
"system": "album"
|
||||
},
|
||||
{
|
||||
"name": "gherkin-samples",
|
||||
"slug": "gherkin-samples",
|
||||
"title": "Gherkin Samples",
|
||||
"status": "ready",
|
||||
"template": null,
|
||||
"larder": {
|
||||
"name": "gherkin-samples",
|
||||
"slug": "gherkin-samples",
|
||||
"title": "Gherkin Samples",
|
||||
"status": "ready",
|
||||
"source_template": null,
|
||||
"data_path": "album/book/gherkin-samples"
|
||||
},
|
||||
"output_larder": null,
|
||||
"system": "album"
|
||||
},
|
||||
{
|
||||
"name": "feature-form-samples",
|
||||
"slug": "feature-form-samples",
|
||||
"title": "Feature Form Samples",
|
||||
"status": "ready",
|
||||
"template": {
|
||||
"name": "feature-form",
|
||||
"slug": "feature-form",
|
||||
"title": "Feature Form Template",
|
||||
"status": "ready",
|
||||
"template_path": "album/template/feature-form",
|
||||
"system": "album"
|
||||
},
|
||||
"larder": {
|
||||
"name": "feature-form",
|
||||
"slug": "feature-form",
|
||||
"title": "Feature Forms",
|
||||
"status": "ready",
|
||||
"source_template": "feature-form",
|
||||
"data_path": "album/book/feature-form-samples/feature-form"
|
||||
},
|
||||
"output_larder": null,
|
||||
"system": "album"
|
||||
}
|
||||
]
|
||||
}
|
||||
3
data/cabinets.json
Normal file
3
data/cabinets.json
Normal file
@@ -0,0 +1,3 @@
|
||||
{
|
||||
"items": []
|
||||
}
|
||||
12
data/depots.json
Normal file
12
data/depots.json
Normal file
@@ -0,0 +1,12 @@
|
||||
{
|
||||
"items": [
|
||||
{
|
||||
"name": "feature-form",
|
||||
"slug": "feature-form",
|
||||
"title": "Feature Forms",
|
||||
"status": "ready",
|
||||
"source_template": "feature-form",
|
||||
"data_path": "album/book/feature-form-samples/feature-form"
|
||||
}
|
||||
]
|
||||
}
|
||||
3
data/desks.json
Normal file
3
data/desks.json
Normal file
@@ -0,0 +1,3 @@
|
||||
{
|
||||
"items": []
|
||||
}
|
||||
22
data/monitors.json
Normal file
22
data/monitors.json
Normal file
@@ -0,0 +1,22 @@
|
||||
{
|
||||
"items": [
|
||||
{
|
||||
"name": "turnos",
|
||||
"slug": "turnos",
|
||||
"title": "Turnos Monitor",
|
||||
"status": "dev",
|
||||
"system": "ward",
|
||||
"description": "Pipeline view of requests → turnos. Shows vet-petowner at a glance.",
|
||||
"path": "ward/monitor/turnos"
|
||||
},
|
||||
{
|
||||
"name": "data_browse",
|
||||
"slug": "data-browse",
|
||||
"title": "Data Browse",
|
||||
"status": "ready",
|
||||
"system": "ward",
|
||||
"description": "Quick navigation to test users and data states. Book/larder pattern with SQL mode for manual testing workflows.",
|
||||
"path": "ward/monitor/data_browse"
|
||||
}
|
||||
]
|
||||
}
|
||||
3
data/pulses.json
Normal file
3
data/pulses.json
Normal file
@@ -0,0 +1,3 @@
|
||||
{
|
||||
"items": []
|
||||
}
|
||||
5
data/rooms.json
Normal file
5
data/rooms.json
Normal file
@@ -0,0 +1,5 @@
|
||||
{
|
||||
"items": [
|
||||
{"name": "pawprint-local", "slug": "pawprint-local", "title": "Pawprint Local", "status": "dev", "config_path": "deploy/pawprint-local"}
|
||||
]
|
||||
}
|
||||
12
data/templates.json
Normal file
12
data/templates.json
Normal file
@@ -0,0 +1,12 @@
|
||||
{
|
||||
"items": [
|
||||
{
|
||||
"name": "feature-form",
|
||||
"slug": "feature-form",
|
||||
"title": "Feature Form Template",
|
||||
"status": "ready",
|
||||
"template_path": "data/template/feature-form",
|
||||
"system": "album"
|
||||
}
|
||||
]
|
||||
}
|
||||
48
data/tools.json
Normal file
48
data/tools.json
Normal file
@@ -0,0 +1,48 @@
|
||||
{
|
||||
"items": [
|
||||
{
|
||||
"name": "tester",
|
||||
"slug": "tester",
|
||||
"title": "Contract Tests",
|
||||
"status": "live",
|
||||
"system": "ward",
|
||||
"type": "app",
|
||||
"description": "HTTP contract test runner with multi-environment support. Filter, run, and track tests against dev/stage/prod.",
|
||||
"path": "ward/tools/tester",
|
||||
"url": "/tools/tester/"
|
||||
},
|
||||
{
|
||||
"name": "datagen",
|
||||
"slug": "datagen",
|
||||
"title": "Test Data Generator",
|
||||
"status": "live",
|
||||
"system": "ward",
|
||||
"type": "cli",
|
||||
"description": "Generate realistic test data for Amar domain (users, pets, services) and MercadoPago API responses. Used by mock veins and test seeders.",
|
||||
"path": "ward/tools/datagen",
|
||||
"cli": "python -m datagen"
|
||||
},
|
||||
{
|
||||
"name": "generate_test_data",
|
||||
"slug": "generate-test-data",
|
||||
"title": "DB Test Data Extractor",
|
||||
"status": "dev",
|
||||
"system": "ward",
|
||||
"type": "cli",
|
||||
"description": "Extract representative subsets from PostgreSQL dumps for testing/development.",
|
||||
"path": "ward/tools/generate_test_data",
|
||||
"cli": "python -m generate_test_data"
|
||||
},
|
||||
{
|
||||
"name": "modelgen",
|
||||
"slug": "modelgen",
|
||||
"title": "Model Generator",
|
||||
"status": "dev",
|
||||
"system": "ward",
|
||||
"type": "cli",
|
||||
"description": "Generate platform-specific models (Pydantic, Django, Prisma) from JSON Schema.",
|
||||
"path": "ward/tools/modelgen",
|
||||
"cli": "python -m modelgen"
|
||||
}
|
||||
]
|
||||
}
|
||||
14
data/veins.json
Normal file
14
data/veins.json
Normal file
@@ -0,0 +1,14 @@
|
||||
{
|
||||
"items": [
|
||||
{"name": "jira", "slug": "jira", "title": "Jira", "status": "live", "system": "artery"},
|
||||
{"name": "amar", "slug": "amar", "title": "Amar (Mock)", "status": "ready", "system": "artery", "description": "Mock Amar API for testing turnero flow without hitting real backend"},
|
||||
{"name": "mercadopago", "slug": "mercadopago", "title": "MercadoPago (Mock)", "status": "ready", "system": "artery", "description": "Mock MercadoPago API for payment integration testing"},
|
||||
{"name": "google", "slug": "google", "title": "Google", "status": "planned", "system": "artery"},
|
||||
{"name": "maps", "slug": "maps", "title": "Maps", "status": "planned", "system": "artery"},
|
||||
{"name": "slack", "slug": "slack", "title": "Slack", "status": "building", "system": "artery"},
|
||||
{"name": "whatsapp", "slug": "whatsapp", "title": "WhatsApp", "status": "planned", "system": "artery"},
|
||||
{"name": "gnucash", "slug": "gnucash", "title": "GNUCash", "status": "planned", "system": "artery"},
|
||||
{"name": "vnc", "slug": "vnc", "title": "VPN", "status": "planned", "system": "artery"},
|
||||
{"name": "ia", "slug": "ia", "title": "IA", "status": "planned", "system": "artery"}
|
||||
]
|
||||
}
|
||||
163
schema.json
Normal file
163
schema.json
Normal file
@@ -0,0 +1,163 @@
|
||||
{
|
||||
"$schema": "http://json-schema.org/draft-07/schema#",
|
||||
"title": "Pawprint Models",
|
||||
"description": "Platform-agnostic model definitions. Portable to TypeScript, Pydantic, Django, Prisma.",
|
||||
"definitions": {
|
||||
"Status": {
|
||||
"type": "string",
|
||||
"enum": ["pending", "planned", "building", "dev", "live", "ready"]
|
||||
},
|
||||
"System": {
|
||||
"type": "string",
|
||||
"enum": ["artery", "album", "ward"]
|
||||
},
|
||||
"Nest": {
|
||||
"type": "object",
|
||||
"description": "Runtime environment configuration. Shared across artery, ward.",
|
||||
"properties": {
|
||||
"name": { "type": "string", "description": "Unique identifier" },
|
||||
"slug": { "type": "string", "description": "URL-friendly identifier" },
|
||||
"title": { "type": "string", "description": "Display title for UI" },
|
||||
"status": { "$ref": "#/definitions/Status" },
|
||||
"config_path": { "type": "string" }
|
||||
},
|
||||
"required": ["name", "slug", "title"]
|
||||
},
|
||||
"Larder": {
|
||||
"type": "object",
|
||||
"description": "Data storage. When generated from Template = 'Book (written)'. Independent in ward/artery.",
|
||||
"properties": {
|
||||
"name": { "type": "string", "description": "Unique identifier" },
|
||||
"slug": { "type": "string", "description": "URL-friendly identifier" },
|
||||
"title": { "type": "string", "description": "Display title for UI" },
|
||||
"status": { "$ref": "#/definitions/Status" },
|
||||
"source_template": { "type": "string", "description": "Template name if generated" },
|
||||
"data_path": { "type": "string", "description": "Path to data files" }
|
||||
},
|
||||
"required": ["name", "slug", "title"]
|
||||
},
|
||||
"Vein": {
|
||||
"type": "object",
|
||||
"description": "Connector (artery). Single responsibility.",
|
||||
"properties": {
|
||||
"name": { "type": "string", "description": "Unique identifier" },
|
||||
"slug": { "type": "string", "description": "URL-friendly identifier" },
|
||||
"title": { "type": "string", "description": "Display title for UI" },
|
||||
"status": { "$ref": "#/definitions/Status" },
|
||||
"system": { "const": "artery" }
|
||||
},
|
||||
"required": ["name", "slug", "title"]
|
||||
},
|
||||
"Template": {
|
||||
"type": "object",
|
||||
"description": "Documentation template (album). Gherkin, BDD patterns.",
|
||||
"properties": {
|
||||
"name": { "type": "string", "description": "Unique identifier" },
|
||||
"slug": { "type": "string", "description": "URL-friendly identifier" },
|
||||
"title": { "type": "string", "description": "Display title for UI" },
|
||||
"status": { "$ref": "#/definitions/Status" },
|
||||
"template_path": { "type": "string", "description": "Path to template files" },
|
||||
"system": { "const": "album" }
|
||||
},
|
||||
"required": ["name", "slug", "title"]
|
||||
},
|
||||
"ToolType": {
|
||||
"type": "string",
|
||||
"enum": ["app", "cli"],
|
||||
"description": "Type of tool: app (web UI) or cli (command line)"
|
||||
},
|
||||
"Tool": {
|
||||
"type": "object",
|
||||
"description": "Execution tool (ward). Test runners, seeders.",
|
||||
"properties": {
|
||||
"name": { "type": "string", "description": "Unique identifier" },
|
||||
"slug": { "type": "string", "description": "URL-friendly identifier" },
|
||||
"title": { "type": "string", "description": "Display title for UI" },
|
||||
"status": { "$ref": "#/definitions/Status" },
|
||||
"system": { "const": "ward" },
|
||||
"type": { "$ref": "#/definitions/ToolType" },
|
||||
"description": { "type": "string", "description": "Human-readable description" },
|
||||
"path": { "type": "string", "description": "Path to tool source" },
|
||||
"url": { "type": "string", "description": "URL path for app tools" },
|
||||
"cli": { "type": "string", "description": "CLI command for cli tools" }
|
||||
},
|
||||
"required": ["name", "slug", "title"]
|
||||
},
|
||||
"Monitor": {
|
||||
"type": "object",
|
||||
"description": "Service monitor (ward). Health checks, status watchers.",
|
||||
"properties": {
|
||||
"name": { "type": "string", "description": "Unique identifier" },
|
||||
"slug": { "type": "string", "description": "URL-friendly identifier" },
|
||||
"title": { "type": "string", "description": "Display title for UI" },
|
||||
"status": { "$ref": "#/definitions/Status" },
|
||||
"system": { "const": "ward" }
|
||||
},
|
||||
"required": ["name", "slug", "title"]
|
||||
},
|
||||
"Cabinet": {
|
||||
"type": "object",
|
||||
"description": "Tool cabinet (ward). Contains 0+ tools.",
|
||||
"properties": {
|
||||
"name": { "type": "string", "description": "Unique identifier" },
|
||||
"slug": { "type": "string", "description": "URL-friendly identifier" },
|
||||
"title": { "type": "string", "description": "Display title for UI" },
|
||||
"status": { "$ref": "#/definitions/Status" },
|
||||
"tools": {
|
||||
"type": "array",
|
||||
"items": { "$ref": "#/definitions/Tool" }
|
||||
},
|
||||
"system": { "const": "ward" }
|
||||
},
|
||||
"required": ["name", "slug", "title"]
|
||||
},
|
||||
"Pulse": {
|
||||
"type": "object",
|
||||
"description": "Composed data flow (artery). Pulse = Vein + Nest + Larder.",
|
||||
"properties": {
|
||||
"name": { "type": "string", "description": "Unique identifier" },
|
||||
"slug": { "type": "string", "description": "URL-friendly identifier" },
|
||||
"title": { "type": "string", "description": "Display title for UI" },
|
||||
"status": { "$ref": "#/definitions/Status" },
|
||||
"vein": { "$ref": "#/definitions/Vein" },
|
||||
"nest": { "$ref": "#/definitions/Nest" },
|
||||
"larder": { "$ref": "#/definitions/Larder" },
|
||||
"system": { "const": "artery" }
|
||||
},
|
||||
"required": ["name", "slug", "title"]
|
||||
},
|
||||
"Book": {
|
||||
"type": "object",
|
||||
"description": "Composed documentation (album). Book = Template + Larder.",
|
||||
"properties": {
|
||||
"name": { "type": "string", "description": "Unique identifier" },
|
||||
"slug": { "type": "string", "description": "URL-friendly identifier" },
|
||||
"title": { "type": "string", "description": "Display title for UI" },
|
||||
"status": { "$ref": "#/definitions/Status" },
|
||||
"template": { "$ref": "#/definitions/Template" },
|
||||
"larder": { "$ref": "#/definitions/Larder" },
|
||||
"output_larder": { "$ref": "#/definitions/Larder" },
|
||||
"system": { "const": "album" }
|
||||
},
|
||||
"required": ["name", "slug", "title"]
|
||||
},
|
||||
"Table": {
|
||||
"type": "object",
|
||||
"description": "Composed execution bundle (ward). Table = Cabinet + Nest + Larders.",
|
||||
"properties": {
|
||||
"name": { "type": "string", "description": "Unique identifier" },
|
||||
"slug": { "type": "string", "description": "URL-friendly identifier" },
|
||||
"title": { "type": "string", "description": "Display title for UI" },
|
||||
"status": { "$ref": "#/definitions/Status" },
|
||||
"cabinet": { "$ref": "#/definitions/Cabinet" },
|
||||
"nest": { "$ref": "#/definitions/Nest" },
|
||||
"larders": {
|
||||
"type": "array",
|
||||
"items": { "$ref": "#/definitions/Larder" }
|
||||
},
|
||||
"system": { "const": "ward" }
|
||||
},
|
||||
"required": ["name", "slug", "title"]
|
||||
}
|
||||
}
|
||||
}
|
||||
11
station/__init__.py
Normal file
11
station/__init__.py
Normal file
@@ -0,0 +1,11 @@
|
||||
"""
|
||||
Station - Tools, environments, and execution.
|
||||
|
||||
Centro de control / Control center
|
||||
|
||||
Components:
|
||||
tools/ - Utilities, generators, runners
|
||||
desks/ - Composed: Cabinet + Room + Depots
|
||||
rooms/ - Environment configs
|
||||
depots/ - Data storage
|
||||
"""
|
||||
6
station/tools/__init__.py
Normal file
6
station/tools/__init__.py
Normal file
@@ -0,0 +1,6 @@
|
||||
"""
|
||||
Station Tools
|
||||
|
||||
Pluggable utilities for soleprint environments (rooms).
|
||||
Each tool can be used standalone or composed into desks.
|
||||
"""
|
||||
164
station/tools/datagen/README.md
Normal file
164
station/tools/datagen/README.md
Normal file
@@ -0,0 +1,164 @@
|
||||
# Datagen - Test Data Generator
|
||||
|
||||
Pluggable test data generators for various domain models and external APIs.
|
||||
|
||||
## Purpose
|
||||
|
||||
- Generate realistic test data for Amar domain models
|
||||
- Generate mock API responses for external services (MercadoPago, etc.)
|
||||
- Can be plugged into any nest (test suites, mock veins, seeders)
|
||||
- Domain-agnostic and reusable
|
||||
|
||||
## Structure
|
||||
|
||||
```
|
||||
datagen/
|
||||
├── __init__.py
|
||||
├── amar.py # Amar domain models (petowner, pet, cart, etc.)
|
||||
├── mercadopago.py # MercadoPago API responses
|
||||
└── README.md # This file
|
||||
```
|
||||
|
||||
## Usage
|
||||
|
||||
### In Tests
|
||||
|
||||
```python
|
||||
from ward.tools.datagen.amar import AmarDataGenerator
|
||||
|
||||
def test_petowner_creation():
|
||||
owner_data = AmarDataGenerator.petowner(address="Av. Corrientes 1234")
|
||||
assert owner_data["address"] == "Av. Corrientes 1234"
|
||||
```
|
||||
|
||||
### In Mock Veins
|
||||
|
||||
```python
|
||||
from ward.tools.datagen.mercadopago import MercadoPagoDataGenerator
|
||||
|
||||
@router.post("/v1/preferences")
|
||||
async def create_preference(request: dict):
|
||||
# Generate mock response
|
||||
return MercadoPagoDataGenerator.preference(
|
||||
description=request["items"][0]["title"],
|
||||
total=request["items"][0]["unit_price"],
|
||||
)
|
||||
```
|
||||
|
||||
### In Seeders
|
||||
|
||||
```python
|
||||
from ward.tools.datagen.amar import AmarDataGenerator
|
||||
|
||||
# Create 10 test pet owners
|
||||
for i in range(10):
|
||||
owner = AmarDataGenerator.petowner(is_guest=False)
|
||||
# Save to database...
|
||||
```
|
||||
|
||||
## Design Principles
|
||||
|
||||
1. **Pluggable**: Can be used anywhere, not tied to specific frameworks
|
||||
2. **Realistic**: Generated data matches real-world patterns
|
||||
3. **Flexible**: Override any field via `**overrides` parameter
|
||||
4. **Domain-focused**: Each generator focuses on a specific domain
|
||||
5. **Stateless**: Pure functions, no global state
|
||||
|
||||
## Generators
|
||||
|
||||
### AmarDataGenerator (amar.py)
|
||||
|
||||
Generates data for Amar platform:
|
||||
|
||||
- `petowner()` - Pet owners (guest and registered)
|
||||
- `pet()` - Pets with species, age, etc.
|
||||
- `cart()` - Shopping carts
|
||||
- `service_request()` - Service requests
|
||||
- `filter_services()` - Service filtering by species/neighborhood
|
||||
- `filter_categories()` - Category filtering
|
||||
- `calculate_cart_summary()` - Cart totals with discounts
|
||||
|
||||
### MercadoPagoDataGenerator (mercadopago.py)
|
||||
|
||||
Generates MercadoPago API responses:
|
||||
|
||||
- `preference()` - Checkout Pro preference
|
||||
- `payment()` - Payment (Checkout API/Bricks)
|
||||
- `merchant_order()` - Merchant order
|
||||
- `oauth_token()` - OAuth token exchange
|
||||
- `webhook_notification()` - Webhook payloads
|
||||
|
||||
## Examples
|
||||
|
||||
### Generate a complete turnero flow
|
||||
|
||||
```python
|
||||
from ward.tools.datagen.amar import AmarDataGenerator
|
||||
|
||||
# Step 1: Guest pet owner
|
||||
owner = AmarDataGenerator.petowner(
|
||||
address="Av. Santa Fe 1234, Palermo",
|
||||
is_guest=True
|
||||
)
|
||||
|
||||
# Step 2: Pet
|
||||
pet = AmarDataGenerator.pet(
|
||||
owner_id=owner["id"],
|
||||
name="Luna",
|
||||
species="DOG",
|
||||
age_value=3,
|
||||
age_unit="years"
|
||||
)
|
||||
|
||||
# Step 3: Cart
|
||||
cart = AmarDataGenerator.cart(owner_id=owner["id"])
|
||||
|
||||
# Step 4: Add services to cart
|
||||
services = AmarDataGenerator.filter_services(
|
||||
species="DOG",
|
||||
neighborhood_id=owner["neighborhood"]["id"]
|
||||
)
|
||||
|
||||
cart_with_items = AmarDataGenerator.calculate_cart_summary(
|
||||
cart,
|
||||
items=[
|
||||
{"service_id": services[0]["id"], "price": services[0]["price"], "quantity": 1, "pet_id": pet["id"]},
|
||||
]
|
||||
)
|
||||
|
||||
# Step 5: Service request
|
||||
request = AmarDataGenerator.service_request(cart_id=cart["id"])
|
||||
```
|
||||
|
||||
### Generate a payment flow
|
||||
|
||||
```python
|
||||
from ward.tools.datagen.mercadopago import MercadoPagoDataGenerator
|
||||
|
||||
# Create preference
|
||||
pref = MercadoPagoDataGenerator.preference(
|
||||
description="Visita a domicilio",
|
||||
total=95000,
|
||||
external_reference="SR-12345"
|
||||
)
|
||||
|
||||
# Simulate payment
|
||||
payment = MercadoPagoDataGenerator.payment(
|
||||
transaction_amount=95000,
|
||||
description="Visita a domicilio",
|
||||
status="approved",
|
||||
application_fee=45000 # Platform fee (split payment)
|
||||
)
|
||||
|
||||
# Webhook notification
|
||||
webhook = MercadoPagoDataGenerator.webhook_notification(
|
||||
topic="payment",
|
||||
resource_id=str(payment["id"])
|
||||
)
|
||||
```
|
||||
|
||||
## Future Generators
|
||||
|
||||
- `google.py` - Google API responses (Calendar, Sheets)
|
||||
- `whatsapp.py` - WhatsApp API responses
|
||||
- `slack.py` - Slack API responses
|
||||
1
station/tools/datagen/__init__.py
Normal file
1
station/tools/datagen/__init__.py
Normal file
@@ -0,0 +1 @@
|
||||
"""Datagen - Test data generator for Amar domain models."""
|
||||
255
station/tools/datagen/amar.py
Normal file
255
station/tools/datagen/amar.py
Normal file
@@ -0,0 +1,255 @@
|
||||
"""Data generator for Amar domain models - can be plugged into any nest."""
|
||||
|
||||
import random
|
||||
from datetime import datetime, timedelta
|
||||
from typing import Dict, List, Any, Optional
|
||||
|
||||
|
||||
class AmarDataGenerator:
|
||||
"""Generates realistic test data for Amar domain models."""
|
||||
|
||||
# Sample data pools
|
||||
FIRST_NAMES = ["Lucas", "María", "Juan", "Carolina", "Diego", "Valentina", "Martín", "Sofía", "Mateo", "Emma"]
|
||||
LAST_NAMES = ["González", "Rodríguez", "Pérez", "García", "Martínez", "López", "Fernández", "Sánchez"]
|
||||
|
||||
PET_NAMES = ["Luna", "Max", "Bella", "Rocky", "Coco", "Toby", "Mia", "Charlie", "Lola", "Simba"]
|
||||
PET_SPECIES = [
|
||||
{"id": 1, "name": "Perro", "code": "DOG"},
|
||||
{"id": 2, "name": "Gato", "code": "CAT"}
|
||||
]
|
||||
|
||||
NEIGHBORHOODS = [
|
||||
{"id": 1, "name": "Palermo", "has_coverage": True, "zone": "CABA"},
|
||||
{"id": 2, "name": "Recoleta", "has_coverage": True, "zone": "CABA"},
|
||||
{"id": 3, "name": "Belgrano", "has_coverage": True, "zone": "CABA"},
|
||||
{"id": 4, "name": "Caballito", "has_coverage": True, "zone": "CABA"},
|
||||
{"id": 5, "name": "Mataderos", "has_coverage": False, "zone": "CABA"},
|
||||
{"id": 6, "name": "Villa Urquiza", "has_coverage": True, "zone": "CABA"},
|
||||
]
|
||||
|
||||
SERVICE_CATEGORIES = [
|
||||
{"id": 1, "name": "Consultas", "description": "Consultas veterinarias"},
|
||||
{"id": 2, "name": "Vacunación", "description": "Vacunas y antiparasitarios"},
|
||||
{"id": 3, "name": "Estudios", "description": "Análisis y estudios clínicos"},
|
||||
{"id": 4, "name": "Videollamada", "description": "Consultas por videollamada"},
|
||||
]
|
||||
|
||||
SERVICES = [
|
||||
{"id": 1, "category_id": 1, "name": "Consulta general clínica programada", "price": 95000, "species": ["DOG", "CAT"]},
|
||||
{"id": 2, "category_id": 2, "name": "Vacuna Antirrábica", "price": 7000, "species": ["DOG", "CAT"]},
|
||||
{"id": 3, "category_id": 2, "name": "Sextuple", "price": 12000, "species": ["DOG"]},
|
||||
{"id": 4, "category_id": 2, "name": "Triple Felina", "price": 11000, "species": ["CAT"]},
|
||||
{"id": 5, "category_id": 3, "name": "Análisis de sangre", "price": 25000, "species": ["DOG", "CAT"]},
|
||||
{"id": 6, "category_id": 3, "name": "Ecografía", "price": 35000, "species": ["DOG", "CAT"]},
|
||||
{"id": 7, "category_id": 4, "name": "Consulta por videollamada", "price": 15000, "species": ["DOG", "CAT"]},
|
||||
]
|
||||
|
||||
@classmethod
|
||||
def petowner(cls, address: str = None, is_guest: bool = True, **overrides) -> Dict[str, Any]:
|
||||
"""Generate a petowner.
|
||||
|
||||
Args:
|
||||
address: Owner address
|
||||
is_guest: Whether this is a guest user
|
||||
**overrides: Override any fields
|
||||
"""
|
||||
owner_id = overrides.get("id", random.randint(1000, 9999))
|
||||
first_name = overrides.get("first_name", random.choice(cls.FIRST_NAMES) if not is_guest else "")
|
||||
last_name = overrides.get("last_name", random.choice(cls.LAST_NAMES) if not is_guest else "")
|
||||
address = address or f"{random.choice(['Av.', 'Calle'])} {random.choice(['Corrientes', 'Santa Fe', 'Córdoba', 'Rivadavia'])} {random.randint(1000, 5000)}"
|
||||
|
||||
# Determine neighborhood from address or random
|
||||
neighborhood = overrides.get("neighborhood", random.choice([n for n in cls.NEIGHBORHOODS if n["has_coverage"]]))
|
||||
|
||||
data = {
|
||||
"id": owner_id,
|
||||
"first_name": first_name,
|
||||
"last_name": last_name,
|
||||
"email": f"guest_{owner_id}@amarmascotas.ar" if is_guest else f"{first_name.lower()}.{last_name.lower()}@example.com",
|
||||
"phone": f"+54911{random.randint(10000000, 99999999)}",
|
||||
"address": address,
|
||||
"neighborhood": neighborhood,
|
||||
"is_guest": is_guest,
|
||||
"created_at": datetime.now().isoformat(),
|
||||
}
|
||||
|
||||
# Apply overrides
|
||||
data.update(overrides)
|
||||
return data
|
||||
|
||||
@classmethod
|
||||
def pet(cls, owner_id: int, name: str = None, species: str = "DOG", age_value: int = None, age_unit: str = "years", **overrides) -> Dict[str, Any]:
|
||||
"""Generate a pet.
|
||||
|
||||
Args:
|
||||
owner_id: Owner ID
|
||||
name: Pet name
|
||||
species: Pet species code (DOG, CAT)
|
||||
age_value: Age value
|
||||
age_unit: Age unit (years, months)
|
||||
**overrides: Override any fields
|
||||
"""
|
||||
species_data = next((s for s in cls.PET_SPECIES if s["code"] == species.upper()), cls.PET_SPECIES[0])
|
||||
age = age_value or random.randint(1, 10)
|
||||
name = name or random.choice(cls.PET_NAMES)
|
||||
|
||||
data = {
|
||||
"id": random.randint(1000, 9999),
|
||||
"owner_id": owner_id,
|
||||
"name": name,
|
||||
"species": species_data,
|
||||
"age": age,
|
||||
"age_unit": age_unit,
|
||||
"age_in_months": age if age_unit == "months" else age * 12,
|
||||
"created_at": datetime.now().isoformat(),
|
||||
}
|
||||
|
||||
data.update(overrides)
|
||||
return data
|
||||
|
||||
@classmethod
|
||||
def cart(cls, owner_id: int, **overrides) -> Dict[str, Any]:
|
||||
"""Generate an empty cart.
|
||||
|
||||
Args:
|
||||
owner_id: Owner ID
|
||||
**overrides: Override any fields
|
||||
"""
|
||||
cart_id = overrides.get("id", random.randint(10000, 99999))
|
||||
|
||||
data = {
|
||||
"id": cart_id,
|
||||
"owner_id": owner_id,
|
||||
"items": [],
|
||||
"resume": {
|
||||
"subtotal": 0.0,
|
||||
"discounts": 0.0,
|
||||
"total": 0.0,
|
||||
},
|
||||
"resume_items": [],
|
||||
"created_at": datetime.now().isoformat(),
|
||||
}
|
||||
|
||||
data.update(overrides)
|
||||
return data
|
||||
|
||||
@classmethod
|
||||
def filter_services(cls, species: str = None, neighborhood_id: int = None) -> List[Dict[str, Any]]:
|
||||
"""Filter services by species and neighborhood coverage.
|
||||
|
||||
Args:
|
||||
species: Species code to filter by (DOG, CAT)
|
||||
neighborhood_id: Neighborhood ID for coverage check
|
||||
"""
|
||||
services = cls.SERVICES.copy()
|
||||
|
||||
if species:
|
||||
species_code = species.upper()
|
||||
services = [s for s in services if species_code in s["species"]]
|
||||
|
||||
# Neighborhood coverage - only videollamada if no coverage
|
||||
if neighborhood_id:
|
||||
neighborhood = next((n for n in cls.NEIGHBORHOODS if n["id"] == neighborhood_id), None)
|
||||
if not neighborhood or not neighborhood.get("has_coverage"):
|
||||
services = [s for s in services if s["category_id"] == 4] # Only videollamada
|
||||
|
||||
return [
|
||||
{
|
||||
"id": s["id"],
|
||||
"name": s["name"],
|
||||
"category_id": s["category_id"],
|
||||
"price": s["price"],
|
||||
"currency": "ARS",
|
||||
"available": True,
|
||||
}
|
||||
for s in services
|
||||
]
|
||||
|
||||
@classmethod
|
||||
def filter_categories(cls, species: str = None, neighborhood_id: int = None) -> List[Dict[str, Any]]:
|
||||
"""Filter categories that have available services.
|
||||
|
||||
Args:
|
||||
species: Species code to filter by
|
||||
neighborhood_id: Neighborhood ID for coverage check
|
||||
"""
|
||||
available_services = cls.filter_services(species, neighborhood_id)
|
||||
service_category_ids = {s["category_id"] for s in available_services}
|
||||
|
||||
return [
|
||||
{
|
||||
"id": cat["id"],
|
||||
"name": cat["name"],
|
||||
"description": cat["description"],
|
||||
"service_count": len([s for s in available_services if s["category_id"] == cat["id"]]),
|
||||
}
|
||||
for cat in cls.SERVICE_CATEGORIES
|
||||
if cat["id"] in service_category_ids
|
||||
]
|
||||
|
||||
@classmethod
|
||||
def calculate_cart_summary(cls, cart: Dict[str, Any], items: List[Dict[str, Any]]) -> Dict[str, Any]:
|
||||
"""Calculate cart totals with discounts and splits.
|
||||
|
||||
Args:
|
||||
cart: Cart data
|
||||
items: List of cart items with price and quantity
|
||||
"""
|
||||
subtotal = sum(item.get("price", 0) * item.get("quantity", 1) for item in items)
|
||||
|
||||
# Multi-pet discount
|
||||
pet_count = len({item.get("pet_id") for item in items if item.get("pet_id")})
|
||||
discount_rate = 0.0
|
||||
if pet_count >= 2:
|
||||
discount_rate = 0.10 # 10% for 2+ pets
|
||||
|
||||
discount_amount = subtotal * discount_rate
|
||||
total = subtotal - discount_amount
|
||||
|
||||
resume_items = [
|
||||
{"concept": "SUBTOTAL", "amount": subtotal},
|
||||
]
|
||||
|
||||
if discount_amount > 0:
|
||||
resume_items.append({"concept": "DESCUENTO MULTIMASCOTA", "amount": -discount_amount})
|
||||
|
||||
# Calculate vet honorarios (52% of total in real system)
|
||||
honorarios = total * 0.52
|
||||
resume_items.append({"concept": "HONORARIOS", "amount": honorarios})
|
||||
resume_items.append({"concept": "TOTAL", "amount": total})
|
||||
|
||||
return {
|
||||
**cart,
|
||||
"items": items,
|
||||
"resume": {
|
||||
"subtotal": round(subtotal, 2),
|
||||
"discounts": round(discount_amount, 2),
|
||||
"total": round(total, 2),
|
||||
},
|
||||
"resume_items": resume_items,
|
||||
"updated_at": datetime.now().isoformat(),
|
||||
}
|
||||
|
||||
@classmethod
|
||||
def service_request(cls, cart_id: int, requested_date: str = None, **overrides) -> Dict[str, Any]:
|
||||
"""Generate a service request.
|
||||
|
||||
Args:
|
||||
cart_id: Cart ID
|
||||
requested_date: ISO format date string
|
||||
**overrides: Override any fields
|
||||
"""
|
||||
request_id = overrides.get("id", random.randint(100000, 999999))
|
||||
requested_date = requested_date or (datetime.now() + timedelta(days=random.randint(1, 7))).isoformat()
|
||||
|
||||
data = {
|
||||
"id": request_id,
|
||||
"cart_id": cart_id,
|
||||
"requested_date": requested_date,
|
||||
"state": "PENDING",
|
||||
"veterinarian": None,
|
||||
"created_at": datetime.now().isoformat(),
|
||||
}
|
||||
|
||||
data.update(overrides)
|
||||
return data
|
||||
8
station/tools/generator/__init__.py
Normal file
8
station/tools/generator/__init__.py
Normal file
@@ -0,0 +1,8 @@
|
||||
"""
|
||||
Framework Generator System
|
||||
|
||||
Generates complete framework instances (pawprint, soleprint, etc.)
|
||||
from configuration files.
|
||||
"""
|
||||
|
||||
__version__ = "0.1.0"
|
||||
418
station/tools/generator/code_generator.py
Normal file
418
station/tools/generator/code_generator.py
Normal file
@@ -0,0 +1,418 @@
|
||||
"""
|
||||
Code Generator
|
||||
|
||||
Generates Python code files (main.py, data layer, system main files).
|
||||
"""
|
||||
|
||||
from pathlib import Path
|
||||
from .config_loader import ConfigLoader
|
||||
|
||||
|
||||
class CodeGenerator:
|
||||
"""Generates Python code from configuration"""
|
||||
|
||||
def __init__(self, config: ConfigLoader, output_dir: Path):
|
||||
self.config = config
|
||||
self.output_dir = Path(output_dir)
|
||||
|
||||
def generate(self):
|
||||
"""Generate all code files"""
|
||||
|
||||
# Generate hub main.py
|
||||
self._generate_hub_main()
|
||||
|
||||
# Generate data layer
|
||||
self._generate_data_layer()
|
||||
|
||||
# Generate system main files
|
||||
for system in self.config.systems:
|
||||
self._generate_system_main(system)
|
||||
|
||||
print(f"Generated code in {self.output_dir}")
|
||||
|
||||
def _generate_hub_main(self):
|
||||
"""Generate hub main.py file"""
|
||||
|
||||
fw = self.config.framework
|
||||
systems = self.config.systems
|
||||
|
||||
# Build system URL mappings
|
||||
system_urls = "\n".join([
|
||||
f'{s.name.upper()}_URL = os.getenv("{s.name.upper()}_URL", "http://localhost:{s.port}")'
|
||||
for s in systems
|
||||
])
|
||||
|
||||
system_external_urls = "\n".join([
|
||||
f'{s.name.upper()}_EXTERNAL_URL = os.getenv("{s.name.upper()}_EXTERNAL_URL", {s.name.upper()}_URL)'
|
||||
for s in systems
|
||||
])
|
||||
|
||||
system_health = ",\n ".join([
|
||||
f'"{s.name}": {s.name.upper()}_URL'
|
||||
for s in systems
|
||||
])
|
||||
|
||||
system_routes = "\n".join([
|
||||
f' "{s.name}": {s.name.upper()}_EXTERNAL_URL,'
|
||||
for s in systems
|
||||
])
|
||||
|
||||
system_redirects = "\n\n".join([
|
||||
f'''@app.get("/{s.name}")
|
||||
@app.get("/{s.name}/{{path:path}}")
|
||||
def {s.name}_redirect(path: str = ""):
|
||||
"""Redirect to {s.name} service."""
|
||||
target = os.getenv("{s.name.upper()}_URL")
|
||||
if target:
|
||||
return RedirectResponse(url=f"{{target}}/{{path}}")
|
||||
return {{"error": "{s.name.upper()}_URL not configured"}}'''
|
||||
for s in systems
|
||||
])
|
||||
|
||||
content = f'''"""
|
||||
{fw.name.capitalize()} - Overview and routing hub.
|
||||
|
||||
{fw.description}
|
||||
{fw.icon} {fw.tagline}
|
||||
|
||||
Systems:
|
||||
'''
|
||||
|
||||
# Add system documentation
|
||||
for s in systems:
|
||||
content += f' {s.icon} {s.title} ({s.name}) - {s.tagline}\n'
|
||||
|
||||
content += f'''
|
||||
Routes:
|
||||
/ → index
|
||||
/health → health check
|
||||
'''
|
||||
|
||||
# Add data routes
|
||||
for s in systems:
|
||||
content += f' /api/data/{s.name} → {s.name} data\n'
|
||||
|
||||
# Add system redirects
|
||||
for s in systems:
|
||||
content += f' /{s.name}/* → proxy to {s.name} service\n'
|
||||
|
||||
content += f'''"""
|
||||
|
||||
import os
|
||||
from pathlib import Path
|
||||
from fastapi import FastAPI, Request
|
||||
from fastapi.responses import RedirectResponse
|
||||
from fastapi.templating import Jinja2Templates
|
||||
|
||||
# Import data functions
|
||||
from data import get_{systems[0].name}_data, get_{systems[1].name}_data, get_{systems[2].name}_data
|
||||
|
||||
app = FastAPI(title="{fw.name.capitalize()}", version="{fw.version}")
|
||||
|
||||
templates = Jinja2Templates(directory=Path(__file__).parent)
|
||||
|
||||
# Service URLs (internal for API calls)
|
||||
{system_urls}
|
||||
|
||||
# External URLs (for frontend links, falls back to internal)
|
||||
{system_external_urls}
|
||||
|
||||
|
||||
@app.get("/health")
|
||||
def health():
|
||||
return {{
|
||||
"status": "ok",
|
||||
"service": "{fw.name}",
|
||||
"subsystems": {{
|
||||
{system_health},
|
||||
}}
|
||||
}}
|
||||
|
||||
|
||||
# === Data API ===
|
||||
|
||||
@app.get("/api/data/{systems[0].name}")
|
||||
def api_{systems[0].name}_data():
|
||||
"""Data for {systems[0].name} service."""
|
||||
return get_{systems[0].name}_data()
|
||||
|
||||
|
||||
@app.get("/api/data/{systems[1].name}")
|
||||
def api_{systems[1].name}_data():
|
||||
"""Data for {systems[1].name} service."""
|
||||
return get_{systems[1].name}_data()
|
||||
|
||||
|
||||
@app.get("/api/data/{systems[2].name}")
|
||||
def api_{systems[2].name}_data():
|
||||
"""Data for {systems[2].name} service."""
|
||||
return get_{systems[2].name}_data()
|
||||
|
||||
|
||||
@app.get("/")
|
||||
def index(request: Request):
|
||||
return templates.TemplateResponse("index.html", {{
|
||||
"request": request,
|
||||
{system_routes}
|
||||
}})
|
||||
|
||||
|
||||
# === Cross-system redirects ===
|
||||
# These allow {fw.name} to act as a hub, redirecting to subsystem routes
|
||||
|
||||
{system_redirects}
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
import uvicorn
|
||||
uvicorn.run(
|
||||
"main:app",
|
||||
host="0.0.0.0",
|
||||
port=int(os.getenv("PORT", "{fw.hub_port}")),
|
||||
reload=os.getenv("DEV", "").lower() in ("1", "true"),
|
||||
)
|
||||
'''
|
||||
|
||||
(self.output_dir / "main.py").write_text(content)
|
||||
|
||||
def _generate_data_layer(self):
|
||||
"""Generate data/__init__.py file"""
|
||||
|
||||
# Get all component names for imports
|
||||
connector = self.config.get_component('data_flow', 'connector')
|
||||
pattern = self.config.get_component('documentation', 'pattern')
|
||||
tool = self.config.get_component('execution', 'utility')
|
||||
monitor = self.config.get_component('execution', 'watcher')
|
||||
cabinet = self.config.get_component('execution', 'container')
|
||||
config_comp = self.config.get_shared_component('config')
|
||||
data_comp = self.config.get_shared_component('data')
|
||||
|
||||
pulse = self.config.get_component('data_flow', 'composed')
|
||||
doc_composed = self.config.get_component('documentation', 'composed')
|
||||
exec_composed = self.config.get_component('execution', 'composed')
|
||||
|
||||
systems = self.config.systems
|
||||
|
||||
# Build imports
|
||||
imports = f'''from models.pydantic import (
|
||||
{connector.title}, {config_comp.title}, {data_comp.title}, {pattern.title}, {tool.title},
|
||||
{pulse.title}, {doc_composed.title}, {exec_composed.title},
|
||||
{connector.title}Collection, {config_comp.title}Collection, {data_comp.title}Collection,
|
||||
{pattern.title}Collection, {tool.title}Collection,
|
||||
{pulse.title}Collection, {doc_composed.title}Collection, {exec_composed.title}Collection,
|
||||
Status
|
||||
)'''
|
||||
|
||||
# Build loader functions
|
||||
loaders = f'''
|
||||
def get_{connector.plural}() -> List[{connector.title}]:
|
||||
data = _load_json("{connector.plural}.json")
|
||||
return {connector.title}Collection(**data).items
|
||||
|
||||
|
||||
def get_{config_comp.plural}() -> List[{config_comp.title}]:
|
||||
data = _load_json("{config_comp.plural}.json")
|
||||
return {config_comp.title}Collection(**data).items
|
||||
|
||||
|
||||
def get_{data_comp.plural}() -> List[{data_comp.title}]:
|
||||
data = _load_json("{data_comp.plural}.json")
|
||||
return {data_comp.title}Collection(**data).items
|
||||
|
||||
|
||||
def get_{pattern.plural}() -> List[{pattern.title}]:
|
||||
data = _load_json("{pattern.plural}.json")
|
||||
return {pattern.title}Collection(**data).items
|
||||
|
||||
|
||||
def get_{tool.plural}() -> List[{tool.title}]:
|
||||
data = _load_json("{tool.plural}.json")
|
||||
return {tool.title}Collection(**data).items
|
||||
|
||||
|
||||
def get_{cabinet.plural}() -> list:
|
||||
"""Load {cabinet.plural} (simple dict, no pydantic yet)."""
|
||||
data = _load_json("{cabinet.plural}.json")
|
||||
return data.get("items", [])
|
||||
|
||||
|
||||
def get_{monitor.plural}() -> list:
|
||||
"""Load {monitor.plural} (simple dict, no pydantic yet)."""
|
||||
data = _load_json("{monitor.plural}.json")
|
||||
return data.get("items", [])
|
||||
|
||||
|
||||
def get_{pulse.plural}() -> List[{pulse.title}]:
|
||||
data = _load_json("{pulse.plural}.json")
|
||||
return {pulse.title}Collection(**data).items
|
||||
|
||||
|
||||
def get_{doc_composed.plural}() -> List[{doc_composed.title}]:
|
||||
data = _load_json("{doc_composed.plural}.json")
|
||||
return {doc_composed.title}Collection(**data).items
|
||||
|
||||
|
||||
def get_{exec_composed.plural}() -> List[{exec_composed.title}]:
|
||||
data = _load_json("{exec_composed.plural}.json")
|
||||
return {exec_composed.title}Collection(**data).items
|
||||
'''
|
||||
|
||||
# Build system data functions
|
||||
data_flow_sys = systems[0]
|
||||
doc_sys = systems[1]
|
||||
exec_sys = systems[2]
|
||||
|
||||
system_data = f'''
|
||||
def get_{data_flow_sys.name}_data() -> dict:
|
||||
"""Data for {data_flow_sys.name} frontend."""
|
||||
return {{
|
||||
"{connector.plural}": [v.model_dump() for v in get_{connector.plural}()],
|
||||
"{config_comp.plural}": [n.model_dump() for n in get_{config_comp.plural}()],
|
||||
"{data_comp.plural}": [l.model_dump() for l in get_{data_comp.plural}()],
|
||||
"{pulse.plural}": [p.model_dump() for p in get_{pulse.plural}()],
|
||||
}}
|
||||
|
||||
|
||||
def get_{doc_sys.name}_data() -> dict:
|
||||
"""Data for {doc_sys.name} frontend."""
|
||||
return {{
|
||||
"{pattern.plural}": [t.model_dump() for t in get_{pattern.plural}()],
|
||||
"{data_comp.plural}": [l.model_dump() for l in get_{data_comp.plural}()],
|
||||
"{doc_composed.plural}": [b.model_dump() for b in get_{doc_composed.plural}()],
|
||||
}}
|
||||
|
||||
|
||||
def get_{exec_sys.name}_data() -> dict:
|
||||
"""Data for {exec_sys.name} frontend."""
|
||||
return {{
|
||||
"{tool.plural}": [t.model_dump() for t in get_{tool.plural}()],
|
||||
"{monitor.plural}": get_{monitor.plural}(),
|
||||
"{cabinet.plural}": get_{cabinet.plural}(),
|
||||
"{config_comp.plural}": [n.model_dump() for n in get_{config_comp.plural}()],
|
||||
"{data_comp.plural}": [l.model_dump() for l in get_{data_comp.plural}()],
|
||||
"{exec_composed.plural}": [t.model_dump() for t in get_{exec_composed.plural}()],
|
||||
}}
|
||||
'''
|
||||
|
||||
content = f'''"""
|
||||
{self.config.framework.name.capitalize()} Data Layer
|
||||
|
||||
JSON file storage (future: MongoDB)
|
||||
"""
|
||||
|
||||
import json
|
||||
from pathlib import Path
|
||||
from typing import List, Optional
|
||||
|
||||
# Add parent to path for models import
|
||||
import sys
|
||||
sys.path.insert(0, str(Path(__file__).parent.parent))
|
||||
|
||||
{imports}
|
||||
|
||||
DATA_DIR = Path(__file__).parent.resolve()
|
||||
|
||||
|
||||
def _load_json(filename: str) -> dict:
|
||||
filepath = DATA_DIR / filename
|
||||
if filepath.exists():
|
||||
with open(filepath) as f:
|
||||
return json.load(f)
|
||||
return {{"items": []}}
|
||||
|
||||
|
||||
def _save_json(filename: str, data: dict):
|
||||
filepath = DATA_DIR / filename
|
||||
with open(filepath, 'w') as f:
|
||||
json.dump(data, f, indent=2)
|
||||
|
||||
|
||||
# === Loaders ===
|
||||
{loaders}
|
||||
|
||||
# === For frontend rendering ===
|
||||
{system_data}
|
||||
'''
|
||||
|
||||
(self.output_dir / "data" / "__init__.py").write_text(content)
|
||||
|
||||
def _generate_system_main(self, system):
|
||||
"""Generate main.py for a system"""
|
||||
|
||||
fw = self.config.framework
|
||||
|
||||
content = f'''"""
|
||||
{system.title} - {system.tagline}
|
||||
"""
|
||||
|
||||
import os
|
||||
import httpx
|
||||
from pathlib import Path
|
||||
from fastapi import FastAPI, Request
|
||||
from fastapi.templating import Jinja2Templates
|
||||
|
||||
app = FastAPI(title="{system.title}", version="{fw.version}")
|
||||
|
||||
templates = Jinja2Templates(directory=Path(__file__).parent)
|
||||
|
||||
# {fw.name.capitalize()} URL for data fetching
|
||||
{fw.name.upper()}_URL = os.getenv("{fw.name.upper()}_URL", "http://localhost:{fw.hub_port}")
|
||||
|
||||
|
||||
def get_data():
|
||||
"""Fetch data from {fw.name} hub."""
|
||||
try:
|
||||
resp = httpx.get(f"{{{fw.name.upper()}_URL}}/api/data/{system.name}", timeout=5.0)
|
||||
if resp.status_code == 200:
|
||||
return resp.json()
|
||||
except Exception as e:
|
||||
print(f"Failed to fetch data from {fw.name}: {{e}}")
|
||||
return {{"items": []}}
|
||||
|
||||
|
||||
@app.get("/health")
|
||||
def health():
|
||||
return {{"status": "ok", "service": "{system.name}"}}
|
||||
|
||||
|
||||
@app.get("/")
|
||||
def index(request: Request):
|
||||
data = get_data()
|
||||
return templates.TemplateResponse("index.html", {{
|
||||
"request": request,
|
||||
"{fw.name}_url": os.getenv("{fw.name.upper()}_EXTERNAL_URL", {fw.name.upper()}_URL),
|
||||
**data,
|
||||
}})
|
||||
|
||||
|
||||
@app.get("/api/data")
|
||||
def api_data():
|
||||
"""API endpoint for frontend data (proxied from {fw.name})."""
|
||||
return get_data()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
import uvicorn
|
||||
uvicorn.run(
|
||||
"main:app",
|
||||
host="0.0.0.0",
|
||||
port=int(os.getenv("PORT", "{system.port}")),
|
||||
reload=os.getenv("DEV", "").lower() in ("1", "true"),
|
||||
)
|
||||
'''
|
||||
|
||||
(self.output_dir / system.name / "main.py").write_text(content)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
from .config_loader import load_config
|
||||
|
||||
# Test with soleprint config
|
||||
config_path = Path(__file__).parent.parent / "soleprint.config.json"
|
||||
config = load_config(config_path)
|
||||
|
||||
output_dir = Path(__file__).parent.parent
|
||||
generator = CodeGenerator(config, output_dir)
|
||||
generator.generate()
|
||||
|
||||
print("Code generated successfully!")
|
||||
130
station/tools/generator/config_loader.py
Normal file
130
station/tools/generator/config_loader.py
Normal file
@@ -0,0 +1,130 @@
|
||||
"""
|
||||
Configuration Loader
|
||||
|
||||
Loads and validates framework configuration files.
|
||||
"""
|
||||
|
||||
import json
|
||||
from pathlib import Path
|
||||
from typing import Dict, Any, List, Optional
|
||||
from dataclasses import dataclass
|
||||
|
||||
|
||||
@dataclass
|
||||
class FrameworkConfig:
|
||||
"""Framework metadata"""
|
||||
name: str
|
||||
slug: str
|
||||
version: str
|
||||
description: str
|
||||
tagline: str
|
||||
icon: str
|
||||
hub_port: int
|
||||
|
||||
|
||||
@dataclass
|
||||
class SystemConfig:
|
||||
"""System configuration"""
|
||||
key: str
|
||||
name: str
|
||||
slug: str
|
||||
title: str
|
||||
tagline: str
|
||||
port: int
|
||||
icon: str
|
||||
|
||||
|
||||
@dataclass
|
||||
class ComponentConfig:
|
||||
"""Component configuration"""
|
||||
name: str
|
||||
title: str
|
||||
description: str
|
||||
plural: Optional[str] = None
|
||||
formula: Optional[str] = None
|
||||
|
||||
|
||||
class ConfigLoader:
|
||||
"""Loads and parses framework configuration"""
|
||||
|
||||
def __init__(self, config_path: Path):
|
||||
self.config_path = Path(config_path)
|
||||
self.raw_config: Dict[str, Any] = {}
|
||||
self.framework: Optional[FrameworkConfig] = None
|
||||
self.systems: List[SystemConfig] = []
|
||||
self.components: Dict[str, Dict[str, ComponentConfig]] = {}
|
||||
|
||||
def load(self) -> 'ConfigLoader':
|
||||
"""Load configuration from file"""
|
||||
with open(self.config_path) as f:
|
||||
self.raw_config = json.load(f)
|
||||
|
||||
self._parse_framework()
|
||||
self._parse_systems()
|
||||
self._parse_components()
|
||||
|
||||
return self
|
||||
|
||||
def _parse_framework(self):
|
||||
"""Parse framework metadata"""
|
||||
fw = self.raw_config['framework']
|
||||
self.framework = FrameworkConfig(**fw)
|
||||
|
||||
def _parse_systems(self):
|
||||
"""Parse system configurations"""
|
||||
for sys in self.raw_config['systems']:
|
||||
self.systems.append(SystemConfig(**sys))
|
||||
|
||||
def _parse_components(self):
|
||||
"""Parse component configurations"""
|
||||
comps = self.raw_config['components']
|
||||
|
||||
# Shared components
|
||||
self.components['shared'] = {}
|
||||
for key, value in comps.get('shared', {}).items():
|
||||
self.components['shared'][key] = ComponentConfig(**value)
|
||||
|
||||
# System-specific components
|
||||
for system_key in ['data_flow', 'documentation', 'execution']:
|
||||
self.components[system_key] = {}
|
||||
for comp_key, comp_value in comps.get(system_key, {}).items():
|
||||
self.components[system_key][comp_key] = ComponentConfig(**comp_value)
|
||||
|
||||
def get_system(self, key: str) -> Optional[SystemConfig]:
|
||||
"""Get system config by key"""
|
||||
for sys in self.systems:
|
||||
if sys.key == key:
|
||||
return sys
|
||||
return None
|
||||
|
||||
def get_component(self, system_key: str, component_key: str) -> Optional[ComponentConfig]:
|
||||
"""Get component config"""
|
||||
return self.components.get(system_key, {}).get(component_key)
|
||||
|
||||
def get_shared_component(self, key: str) -> Optional[ComponentConfig]:
|
||||
"""Get shared component config"""
|
||||
return self.components.get('shared', {}).get(key)
|
||||
|
||||
|
||||
def load_config(config_path: str | Path) -> ConfigLoader:
|
||||
"""Load and validate configuration file"""
|
||||
loader = ConfigLoader(config_path)
|
||||
return loader.load()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
# Test with pawprint config
|
||||
import sys
|
||||
config_path = Path(__file__).parent.parent / "pawprint.config.json"
|
||||
|
||||
loader = load_config(config_path)
|
||||
|
||||
print(f"Framework: {loader.framework.name} v{loader.framework.version}")
|
||||
print(f"Tagline: {loader.framework.tagline}")
|
||||
print(f"\nSystems:")
|
||||
for sys in loader.systems:
|
||||
print(f" {sys.icon} {sys.title} ({sys.name}) - {sys.tagline}")
|
||||
|
||||
print(f"\nShared Components:")
|
||||
for key, comp in loader.components['shared'].items():
|
||||
print(f" {comp.name} - {comp.description}")
|
||||
255
station/tools/generator/model_generator.py
Normal file
255
station/tools/generator/model_generator.py
Normal file
@@ -0,0 +1,255 @@
|
||||
"""
|
||||
Model Generator
|
||||
|
||||
Generates Pydantic models from framework configuration.
|
||||
"""
|
||||
|
||||
from pathlib import Path
|
||||
from typing import List
|
||||
from .config_loader import ConfigLoader
|
||||
|
||||
|
||||
class ModelGenerator:
|
||||
"""Generates Pydantic model files from configuration"""
|
||||
|
||||
def __init__(self, config: ConfigLoader, output_dir: Path):
|
||||
self.config = config
|
||||
self.output_dir = Path(output_dir)
|
||||
|
||||
def generate(self):
|
||||
"""Generate all model files"""
|
||||
models_dir = self.output_dir / "models" / "pydantic"
|
||||
models_dir.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# Generate __init__.py with all models
|
||||
self._generate_models_file(models_dir / "__init__.py")
|
||||
|
||||
print(f"Generated models in {models_dir}")
|
||||
|
||||
def _generate_models_file(self, output_path: Path):
|
||||
"""Generate the main models file"""
|
||||
|
||||
# Get component names from config
|
||||
config_comp = self.config.get_shared_component('config')
|
||||
data_comp = self.config.get_shared_component('data')
|
||||
|
||||
data_flow_sys = self.config.get_system('data_flow')
|
||||
doc_sys = self.config.get_system('documentation')
|
||||
exec_sys = self.config.get_system('execution')
|
||||
|
||||
connector_comp = self.config.get_component('data_flow', 'connector')
|
||||
pulse_comp = self.config.get_component('data_flow', 'composed')
|
||||
|
||||
pattern_comp = self.config.get_component('documentation', 'pattern')
|
||||
maps_comp = self.config.get_component('documentation', 'library')
|
||||
doc_composed = self.config.get_component('documentation', 'composed')
|
||||
|
||||
tool_comp = self.config.get_component('execution', 'utility')
|
||||
monitor_comp = self.config.get_component('execution', 'watcher')
|
||||
cabinet_comp = self.config.get_component('execution', 'container')
|
||||
exec_composed = self.config.get_component('execution', 'composed')
|
||||
|
||||
# Build the template
|
||||
content = f'''"""
|
||||
Pydantic models - Generated from {self.config.framework.name}.config.json
|
||||
|
||||
DO NOT EDIT MANUALLY - Regenerate from config
|
||||
"""
|
||||
|
||||
from enum import Enum
|
||||
from typing import Optional, List, Literal
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
|
||||
class Status(str, Enum):
|
||||
PENDING = "pending"
|
||||
PLANNED = "planned"
|
||||
BUILDING = "building"
|
||||
DEV = "dev"
|
||||
LIVE = "live"
|
||||
READY = "ready"
|
||||
|
||||
|
||||
class System(str, Enum):
|
||||
{data_flow_sys.name.upper()} = "{data_flow_sys.name}"
|
||||
{doc_sys.name.upper()} = "{doc_sys.name}"
|
||||
{exec_sys.name.upper()} = "{exec_sys.name}"
|
||||
|
||||
|
||||
class ToolType(str, Enum):
|
||||
APP = "app"
|
||||
CLI = "cli"
|
||||
|
||||
|
||||
# === Shared Components ===
|
||||
|
||||
class {config_comp.title}(BaseModel):
|
||||
"""{config_comp.description}. Shared across {data_flow_sys.name}, {exec_sys.name}."""
|
||||
name: str # Unique identifier
|
||||
slug: str # URL-friendly identifier
|
||||
title: str # Display title for UI
|
||||
status: Optional[Status] = None
|
||||
config_path: Optional[str] = None
|
||||
|
||||
|
||||
class {data_comp.title}(BaseModel):
|
||||
"""{data_comp.description}. Shared across all systems."""
|
||||
name: str # Unique identifier
|
||||
slug: str # URL-friendly identifier
|
||||
title: str # Display title for UI
|
||||
status: Optional[Status] = None
|
||||
source_template: Optional[str] = None
|
||||
data_path: Optional[str] = None
|
||||
|
||||
|
||||
# === System-Specific Components ===
|
||||
|
||||
class {connector_comp.title}(BaseModel):
|
||||
"""{connector_comp.description} ({data_flow_sys.name})."""
|
||||
name: str # Unique identifier
|
||||
slug: str # URL-friendly identifier
|
||||
title: str # Display title for UI
|
||||
status: Optional[Status] = None
|
||||
system: Literal["{data_flow_sys.name}"] = "{data_flow_sys.name}"
|
||||
mock: Optional[bool] = None
|
||||
description: Optional[str] = None
|
||||
|
||||
|
||||
class {pattern_comp.title}(BaseModel):
|
||||
"""{pattern_comp.description} ({doc_sys.name})."""
|
||||
name: str # Unique identifier
|
||||
slug: str # URL-friendly identifier
|
||||
title: str # Display title for UI
|
||||
status: Optional[Status] = None
|
||||
template_path: Optional[str] = None
|
||||
system: Literal["{doc_sys.name}"] = "{doc_sys.name}"
|
||||
|
||||
|
||||
class {tool_comp.title}(BaseModel):
|
||||
"""{tool_comp.description} ({exec_sys.name})."""
|
||||
name: str # Unique identifier
|
||||
slug: str # URL-friendly identifier
|
||||
title: str # Display title for UI
|
||||
status: Optional[Status] = None
|
||||
system: Literal["{exec_sys.name}"] = "{exec_sys.name}"
|
||||
type: Optional[ToolType] = None
|
||||
description: Optional[str] = None
|
||||
path: Optional[str] = None
|
||||
url: Optional[str] = None
|
||||
cli: Optional[str] = None
|
||||
|
||||
|
||||
class {monitor_comp.title}(BaseModel):
|
||||
"""{monitor_comp.description} ({exec_sys.name})."""
|
||||
name: str # Unique identifier
|
||||
slug: str # URL-friendly identifier
|
||||
title: str # Display title for UI
|
||||
status: Optional[Status] = None
|
||||
system: Literal["{exec_sys.name}"] = "{exec_sys.name}"
|
||||
|
||||
|
||||
class {cabinet_comp.title}(BaseModel):
|
||||
"""{cabinet_comp.description} ({exec_sys.name})."""
|
||||
name: str # Unique identifier
|
||||
slug: str # URL-friendly identifier
|
||||
title: str # Display title for UI
|
||||
status: Optional[Status] = None
|
||||
tools: List[{tool_comp.title}] = Field(default_factory=list)
|
||||
system: Literal["{exec_sys.name}"] = "{exec_sys.name}"
|
||||
|
||||
|
||||
# === Composed Types ===
|
||||
|
||||
class {pulse_comp.title}(BaseModel):
|
||||
"""{pulse_comp.description} ({data_flow_sys.name}). Formula: {pulse_comp.formula}."""
|
||||
name: str # Unique identifier
|
||||
slug: str # URL-friendly identifier
|
||||
title: str # Display title for UI
|
||||
status: Optional[Status] = None
|
||||
{connector_comp.name}: Optional[{connector_comp.title}] = None
|
||||
{config_comp.name}: Optional[{config_comp.title}] = None
|
||||
{data_comp.name}: Optional[{data_comp.title}] = None
|
||||
system: Literal["{data_flow_sys.name}"] = "{data_flow_sys.name}"
|
||||
|
||||
|
||||
class {doc_composed.title}(BaseModel):
|
||||
"""{doc_composed.description} ({doc_sys.name}). Formula: {doc_composed.formula}."""
|
||||
name: str # Unique identifier
|
||||
slug: str # URL-friendly identifier
|
||||
title: str # Display title for UI
|
||||
status: Optional[Status] = None
|
||||
template: Optional[{pattern_comp.title}] = None
|
||||
{data_comp.name}: Optional[{data_comp.title}] = None
|
||||
output_{data_comp.name}: Optional[{data_comp.title}] = None
|
||||
system: Literal["{doc_sys.name}"] = "{doc_sys.name}"
|
||||
|
||||
|
||||
class {exec_composed.title}(BaseModel):
|
||||
"""{exec_composed.description} ({exec_sys.name}). Formula: {exec_composed.formula}."""
|
||||
name: str # Unique identifier
|
||||
slug: str # URL-friendly identifier
|
||||
title: str # Display title for UI
|
||||
status: Optional[Status] = None
|
||||
cabinet: Optional[{cabinet_comp.title}] = None
|
||||
{config_comp.name}: Optional[{config_comp.title}] = None
|
||||
{data_comp.plural}: List[{data_comp.title}] = Field(default_factory=list)
|
||||
system: Literal["{exec_sys.name}"] = "{exec_sys.name}"
|
||||
|
||||
|
||||
# === Collection wrappers for JSON files ===
|
||||
|
||||
class {config_comp.title}Collection(BaseModel):
|
||||
items: List[{config_comp.title}] = Field(default_factory=list)
|
||||
|
||||
|
||||
class {data_comp.title}Collection(BaseModel):
|
||||
items: List[{data_comp.title}] = Field(default_factory=list)
|
||||
|
||||
|
||||
class {connector_comp.title}Collection(BaseModel):
|
||||
items: List[{connector_comp.title}] = Field(default_factory=list)
|
||||
|
||||
|
||||
class {pattern_comp.title}Collection(BaseModel):
|
||||
items: List[{pattern_comp.title}] = Field(default_factory=list)
|
||||
|
||||
|
||||
class {tool_comp.title}Collection(BaseModel):
|
||||
items: List[{tool_comp.title}] = Field(default_factory=list)
|
||||
|
||||
|
||||
class {monitor_comp.title}Collection(BaseModel):
|
||||
items: List[{monitor_comp.title}] = Field(default_factory=list)
|
||||
|
||||
|
||||
class {cabinet_comp.title}Collection(BaseModel):
|
||||
items: List[{cabinet_comp.title}] = Field(default_factory=list)
|
||||
|
||||
|
||||
class {pulse_comp.title}Collection(BaseModel):
|
||||
items: List[{pulse_comp.title}] = Field(default_factory=list)
|
||||
|
||||
|
||||
class {doc_composed.title}Collection(BaseModel):
|
||||
items: List[{doc_composed.title}] = Field(default_factory=list)
|
||||
|
||||
|
||||
class {exec_composed.title}Collection(BaseModel):
|
||||
items: List[{exec_composed.title}] = Field(default_factory=list)
|
||||
'''
|
||||
|
||||
output_path.write_text(content)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
from .config_loader import load_config
|
||||
|
||||
# Test with soleprint config
|
||||
config_path = Path(__file__).parent.parent / "soleprint.config.json"
|
||||
config = load_config(config_path)
|
||||
|
||||
output_dir = Path(__file__).parent.parent / "soleprint-room"
|
||||
generator = ModelGenerator(config, output_dir)
|
||||
generator.generate()
|
||||
|
||||
print("Models generated successfully!")
|
||||
128
station/tools/generator/orchestrator.py
Normal file
128
station/tools/generator/orchestrator.py
Normal file
@@ -0,0 +1,128 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Framework Generator Orchestrator
|
||||
|
||||
Generates complete framework from configuration file.
|
||||
|
||||
Usage:
|
||||
python generators/orchestrator.py [--config CONFIG_PATH]
|
||||
|
||||
Example:
|
||||
python generators/orchestrator.py
|
||||
python generators/orchestrator.py --config custom.config.json
|
||||
"""
|
||||
|
||||
import argparse
|
||||
import logging
|
||||
import shutil
|
||||
from pathlib import Path
|
||||
|
||||
from .config_loader import load_config
|
||||
from .structure_generator import StructureGenerator
|
||||
from .model_generator import ModelGenerator
|
||||
from .code_generator import CodeGenerator
|
||||
|
||||
# Setup logging
|
||||
logging.basicConfig(
|
||||
level=logging.INFO,
|
||||
format='%(message)s'
|
||||
)
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def generate_framework(config_path: Path, output_dir: Path):
|
||||
"""Generate complete framework from configuration"""
|
||||
|
||||
logger.info("="*60)
|
||||
logger.info("Framework Generator")
|
||||
logger.info("="*60)
|
||||
|
||||
# Load configuration
|
||||
logger.info(f"Loading configuration from {config_path}...")
|
||||
config = load_config(config_path)
|
||||
|
||||
logger.info(f"\nGenerating {config.framework.name.capitalize()}")
|
||||
logger.info(f" {config.framework.tagline}")
|
||||
logger.info(f" Version: {config.framework.version}")
|
||||
|
||||
logger.info(f"\nOutput directory: {output_dir}")
|
||||
|
||||
logger.info(f"\nSystems:")
|
||||
for sys in config.systems:
|
||||
logger.info(f" {sys.title} ({sys.name}) - {sys.tagline}")
|
||||
|
||||
# Generate structure
|
||||
logger.info(f"\n[1/4] Generating folder structure...")
|
||||
struct_gen = StructureGenerator(config, output_dir)
|
||||
struct_gen.generate()
|
||||
|
||||
# Generate models
|
||||
logger.info(f"\n[2/4] Generating Pydantic models...")
|
||||
model_gen = ModelGenerator(config, output_dir)
|
||||
model_gen.generate()
|
||||
|
||||
# Generate code
|
||||
logger.info(f"\n[3/4] Generating Python code...")
|
||||
code_gen = CodeGenerator(config, output_dir)
|
||||
code_gen.generate()
|
||||
|
||||
# Copy templates
|
||||
logger.info(f"\n[4/4] Copying templates...")
|
||||
templates_dir = Path(__file__).parent.parent / "templates"
|
||||
if (templates_dir / "index.html").exists():
|
||||
shutil.copy(templates_dir / "index.html", output_dir / "index.html")
|
||||
logger.info(f" Copied index.html")
|
||||
if (templates_dir / "requirements.txt").exists():
|
||||
shutil.copy(templates_dir / "requirements.txt", output_dir / "requirements.txt")
|
||||
logger.info(f" Copied requirements.txt")
|
||||
|
||||
logger.info(f"\n{'='*60}")
|
||||
logger.info(f"Framework generated successfully!")
|
||||
logger.info(f"{'='*60}\n")
|
||||
|
||||
logger.info(f"Next steps:")
|
||||
logger.info(f" 1. Review generated files in {output_dir}")
|
||||
logger.info(f" 2. Install dependencies: pip install -r requirements.txt")
|
||||
logger.info(f" 3. Run hub: python {output_dir}/main.py")
|
||||
logger.info(f" 4. Visit http://localhost:{config.framework.hub_port}")
|
||||
|
||||
|
||||
def main():
|
||||
parser = argparse.ArgumentParser(
|
||||
description="Generate framework from configuration"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--config",
|
||||
default="soleprint.config.json",
|
||||
help="Path to configuration file (default: soleprint.config.json)"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--output",
|
||||
default=None,
|
||||
help="Output directory (default: same as config directory)"
|
||||
)
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
config_path = Path(args.config)
|
||||
if not config_path.exists():
|
||||
# Try relative to script location
|
||||
script_dir = Path(__file__).parent.parent
|
||||
config_path = script_dir / args.config
|
||||
|
||||
if not config_path.exists():
|
||||
logger.error(f"Configuration file not found: {args.config}")
|
||||
return 1
|
||||
|
||||
# Output directory defaults to config directory
|
||||
if args.output:
|
||||
output_dir = Path(args.output)
|
||||
else:
|
||||
output_dir = config_path.parent
|
||||
|
||||
generate_framework(config_path, output_dir)
|
||||
return 0
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
exit(main())
|
||||
127
station/tools/generator/structure_generator.py
Normal file
127
station/tools/generator/structure_generator.py
Normal file
@@ -0,0 +1,127 @@
|
||||
"""
|
||||
Structure Generator
|
||||
|
||||
Creates folder structure for framework instance.
|
||||
"""
|
||||
|
||||
from pathlib import Path
|
||||
from .config_loader import ConfigLoader
|
||||
|
||||
|
||||
class StructureGenerator:
|
||||
"""Generates folder structure from configuration"""
|
||||
|
||||
def __init__(self, config: ConfigLoader, output_dir: Path):
|
||||
self.config = config
|
||||
self.output_dir = Path(output_dir)
|
||||
|
||||
def generate(self):
|
||||
"""Generate complete folder structure"""
|
||||
|
||||
# Note: output_dir is the framework root (spr/), not soleprint-room/
|
||||
# soleprint-room/ is generated separately as Docker orchestration
|
||||
|
||||
# Create models/
|
||||
(self.output_dir / "models").mkdir(parents=True, exist_ok=True)
|
||||
(self.output_dir / "models" / "pydantic").mkdir(exist_ok=True)
|
||||
|
||||
# Create data/
|
||||
data_dir = self.output_dir / "data"
|
||||
data_dir.mkdir(exist_ok=True)
|
||||
|
||||
# Create ctrl/ (for local scripts)
|
||||
(self.output_dir / "ctrl").mkdir(exist_ok=True)
|
||||
|
||||
# Get component names
|
||||
connector = self.config.get_component('data_flow', 'connector')
|
||||
pattern = self.config.get_component('documentation', 'pattern')
|
||||
tool = self.config.get_component('execution', 'utility')
|
||||
monitor = self.config.get_component('execution', 'watcher')
|
||||
config_comp = self.config.get_shared_component('config')
|
||||
data_comp = self.config.get_shared_component('data')
|
||||
|
||||
# Create system directories
|
||||
for system in self.config.systems:
|
||||
sys_dir = self.output_dir / system.name
|
||||
sys_dir.mkdir(exist_ok=True)
|
||||
|
||||
# Create __init__.py markers
|
||||
(sys_dir / "__init__.py").touch()
|
||||
|
||||
# System-specific structure
|
||||
if system.key == 'data_flow':
|
||||
# artery/vein/, artery/pulse/, artery/room/, artery/depot/
|
||||
(sys_dir / connector.plural).mkdir(exist_ok=True)
|
||||
(sys_dir / self.config.get_component('data_flow', 'composed').plural).mkdir(exist_ok=True)
|
||||
(sys_dir / config_comp.plural).mkdir(exist_ok=True)
|
||||
(sys_dir / data_comp.plural).mkdir(exist_ok=True)
|
||||
|
||||
elif system.key == 'documentation':
|
||||
# atlas/template/, atlas/maps/, atlas/depot/
|
||||
(sys_dir / pattern.plural).mkdir(exist_ok=True)
|
||||
(sys_dir / self.config.get_component('documentation', 'library').name).mkdir(exist_ok=True)
|
||||
(sys_dir / data_comp.plural).mkdir(exist_ok=True)
|
||||
|
||||
elif system.key == 'execution':
|
||||
# station/tools/, station/monitors/, station/desk/, station/room/, station/depot/
|
||||
(sys_dir / tool.plural).mkdir(exist_ok=True)
|
||||
(sys_dir / monitor.plural).mkdir(exist_ok=True)
|
||||
exec_composed = self.config.get_component('execution', 'composed')
|
||||
(sys_dir / exec_composed.plural).mkdir(exist_ok=True)
|
||||
(sys_dir / config_comp.plural).mkdir(exist_ok=True)
|
||||
(sys_dir / data_comp.plural).mkdir(exist_ok=True)
|
||||
|
||||
# Create data JSON files
|
||||
self._create_data_files(data_dir)
|
||||
|
||||
print(f"Generated structure in {self.output_dir}")
|
||||
|
||||
def _create_data_files(self, data_dir: Path):
|
||||
"""Create empty data JSON files"""
|
||||
|
||||
# Get component names for plurals
|
||||
connector = self.config.get_component('data_flow', 'connector')
|
||||
pattern = self.config.get_component('documentation', 'pattern')
|
||||
tool = self.config.get_component('execution', 'utility')
|
||||
monitor = self.config.get_component('execution', 'watcher')
|
||||
cabinet = self.config.get_component('execution', 'container')
|
||||
config_comp = self.config.get_shared_component('config')
|
||||
data_comp = self.config.get_shared_component('data')
|
||||
|
||||
pulse = self.config.get_component('data_flow', 'composed')
|
||||
doc_composed = self.config.get_component('documentation', 'composed')
|
||||
exec_composed = self.config.get_component('execution', 'composed')
|
||||
|
||||
# Create JSON files with empty items arrays
|
||||
files = [
|
||||
f"{connector.plural}.json",
|
||||
f"{pattern.plural}.json",
|
||||
f"{tool.plural}.json",
|
||||
f"{monitor.plural}.json",
|
||||
f"{cabinet.plural}.json",
|
||||
f"{config_comp.plural}.json",
|
||||
f"{data_comp.plural}.json",
|
||||
f"{pulse.plural}.json",
|
||||
f"{doc_composed.plural}.json",
|
||||
f"{exec_composed.plural}.json",
|
||||
]
|
||||
|
||||
for filename in files:
|
||||
filepath = data_dir / filename
|
||||
if not filepath.exists():
|
||||
filepath.write_text('{\n "items": []\n}\n')
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
from .config_loader import load_config
|
||||
|
||||
# Test with soleprint config
|
||||
config_path = Path(__file__).parent.parent / "soleprint.config.json"
|
||||
config = load_config(config_path)
|
||||
|
||||
# Output to framework root (spr/), not soleprint-room/
|
||||
output_dir = Path(__file__).parent.parent
|
||||
generator = StructureGenerator(config, output_dir)
|
||||
generator.generate()
|
||||
|
||||
print("Structure generated successfully!")
|
||||
73
station/tools/hub/README.md
Normal file
73
station/tools/hub/README.md
Normal file
@@ -0,0 +1,73 @@
|
||||
# Hub Port Management Scripts
|
||||
|
||||
Super alpha version of firewall port management for Core Nest services.
|
||||
|
||||
## Files
|
||||
|
||||
- **ports** - List of ports to manage (one per line, comments allowed)
|
||||
- **update-ports.sh** - Generate ports file from .env configurations
|
||||
- **iptables.sh** - Manage ports using iptables
|
||||
- **ufw.sh** - Manage ports using ufw
|
||||
- **firewalld.sh** - Manage ports using firewalld
|
||||
|
||||
## Firewall Tools
|
||||
|
||||
Choose the tool that matches your system:
|
||||
|
||||
- **iptables** - Most Linux systems (rules not persistent by default)
|
||||
- **ufw** - Ubuntu/Debian (Uncomplicated Firewall)
|
||||
- **firewalld** - RHEL/CentOS/Fedora
|
||||
|
||||
## Usage
|
||||
|
||||
### Update ports from configuration
|
||||
```bash
|
||||
./update-ports.sh
|
||||
```
|
||||
|
||||
### Open ports (choose your firewall)
|
||||
```bash
|
||||
# Using iptables
|
||||
sudo ./iptables.sh open
|
||||
|
||||
# Using ufw
|
||||
sudo ./ufw.sh open
|
||||
|
||||
# Using firewalld
|
||||
sudo ./firewalld.sh open
|
||||
```
|
||||
|
||||
### Close ports (choose your firewall)
|
||||
```bash
|
||||
# Using iptables
|
||||
sudo ./iptables.sh close
|
||||
|
||||
# Using ufw
|
||||
sudo ./ufw.sh close
|
||||
|
||||
# Using firewalld
|
||||
sudo ./firewalld.sh close
|
||||
```
|
||||
|
||||
## Default Ports
|
||||
|
||||
- **3000** - Amar Frontend
|
||||
- **8000** - Amar Backend
|
||||
- **13000** - Pawprint
|
||||
- **13001** - Artery
|
||||
- **13002** - Album
|
||||
- **13003** - Ward
|
||||
|
||||
## Notes
|
||||
|
||||
- **iptables**: Rules are not persistent across reboots unless you install `iptables-persistent`
|
||||
- **ufw**: Remember to run `sudo ufw reload` after making changes
|
||||
- **firewalld**: Scripts automatically reload the firewall
|
||||
|
||||
## Future Improvements
|
||||
|
||||
- Auto-detect firewall system
|
||||
- Support for multiple nests
|
||||
- Integration with ward UI
|
||||
- Per-service port management
|
||||
- LAN subnet restrictions
|
||||
63
station/tools/hub/firewalld.sh
Executable file
63
station/tools/hub/firewalld.sh
Executable file
@@ -0,0 +1,63 @@
|
||||
#!/bin/bash
|
||||
# Manage Core Nest ports using firewalld
|
||||
# Usage: sudo ./firewalld.sh [open|close]
|
||||
|
||||
set -e
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
|
||||
PORTS_FILE="$SCRIPT_DIR/ports"
|
||||
|
||||
if [ "$EUID" -ne 0 ]; then
|
||||
echo "Error: This script must be run as root (use sudo)"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if ! command -v firewall-cmd &> /dev/null; then
|
||||
echo "Error: firewalld is not installed"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if [ ! -f "$PORTS_FILE" ]; then
|
||||
echo "Error: ports file not found at $PORTS_FILE"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
ACTION="${1:-}"
|
||||
if [ "$ACTION" != "open" ] && [ "$ACTION" != "close" ]; then
|
||||
echo "Usage: sudo $0 [open|close]"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if [ "$ACTION" = "open" ]; then
|
||||
echo "=== Opening Core Nest Ports (firewalld) ==="
|
||||
else
|
||||
echo "=== Closing Core Nest Ports (firewalld) ==="
|
||||
fi
|
||||
echo ""
|
||||
|
||||
# Read ports and apply action
|
||||
while IFS= read -r line || [ -n "$line" ]; do
|
||||
# Skip comments and empty lines
|
||||
[[ "$line" =~ ^#.*$ ]] && continue
|
||||
[[ -z "$line" ]] && continue
|
||||
|
||||
port=$(echo "$line" | tr -d ' ')
|
||||
|
||||
if [ "$ACTION" = "open" ]; then
|
||||
echo " Port $port: Opening..."
|
||||
firewall-cmd --permanent --add-port="${port}/tcp"
|
||||
echo " Port $port: ✓ Opened"
|
||||
else
|
||||
echo " Port $port: Closing..."
|
||||
firewall-cmd --permanent --remove-port="${port}/tcp" 2>/dev/null || echo " Port $port: Not found (already closed)"
|
||||
echo " Port $port: ✓ Closed"
|
||||
fi
|
||||
done < "$PORTS_FILE"
|
||||
|
||||
# Reload firewall to apply changes
|
||||
echo ""
|
||||
echo "Reloading firewall..."
|
||||
firewall-cmd --reload
|
||||
|
||||
echo ""
|
||||
echo "=== Done ==="
|
||||
71
station/tools/hub/iptables.sh
Executable file
71
station/tools/hub/iptables.sh
Executable file
@@ -0,0 +1,71 @@
|
||||
#!/bin/bash
|
||||
# Manage Core Nest ports using iptables
|
||||
# Usage: sudo ./iptables.sh [open|close]
|
||||
|
||||
set -e
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
|
||||
PORTS_FILE="$SCRIPT_DIR/ports"
|
||||
|
||||
if [ "$EUID" -ne 0 ]; then
|
||||
echo "Error: This script must be run as root (use sudo)"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if [ ! -f "$PORTS_FILE" ]; then
|
||||
echo "Error: ports file not found at $PORTS_FILE"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
ACTION="${1:-}"
|
||||
if [ "$ACTION" != "open" ] && [ "$ACTION" != "close" ]; then
|
||||
echo "Usage: sudo $0 [open|close]"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if [ "$ACTION" = "open" ]; then
|
||||
echo "=== Opening Core Nest Ports (iptables) ==="
|
||||
else
|
||||
echo "=== Closing Core Nest Ports (iptables) ==="
|
||||
fi
|
||||
echo ""
|
||||
|
||||
# Read ports and apply action
|
||||
while IFS= read -r line || [ -n "$line" ]; do
|
||||
# Skip comments and empty lines
|
||||
[[ "$line" =~ ^#.*$ ]] && continue
|
||||
[[ -z "$line" ]] && continue
|
||||
|
||||
port=$(echo "$line" | tr -d ' ')
|
||||
|
||||
if [ "$ACTION" = "open" ]; then
|
||||
# Open port
|
||||
if iptables -C INPUT -p tcp --dport "$port" -j ACCEPT 2>/dev/null; then
|
||||
echo " Port $port: Already open"
|
||||
else
|
||||
echo " Port $port: Opening..."
|
||||
iptables -I INPUT -p tcp --dport "$port" -j ACCEPT
|
||||
echo " Port $port: ✓ Opened"
|
||||
fi
|
||||
else
|
||||
# Close port
|
||||
if iptables -C INPUT -p tcp --dport "$port" -j ACCEPT 2>/dev/null; then
|
||||
echo " Port $port: Closing..."
|
||||
iptables -D INPUT -p tcp --dport "$port" -j ACCEPT
|
||||
echo " Port $port: ✓ Closed"
|
||||
else
|
||||
echo " Port $port: Already closed"
|
||||
fi
|
||||
fi
|
||||
done < "$PORTS_FILE"
|
||||
|
||||
echo ""
|
||||
echo "=== Done ==="
|
||||
|
||||
if [ "$ACTION" = "open" ]; then
|
||||
echo ""
|
||||
echo "Note: iptables rules are not persistent across reboots."
|
||||
echo "To make persistent, install iptables-persistent:"
|
||||
echo " apt-get install iptables-persistent"
|
||||
echo " netfilter-persistent save"
|
||||
fi
|
||||
13
station/tools/hub/ports
Normal file
13
station/tools/hub/ports
Normal file
@@ -0,0 +1,13 @@
|
||||
# Core Nest Ports
|
||||
# Format: one port per line
|
||||
# Comments allowed with #
|
||||
|
||||
# Amar
|
||||
3000
|
||||
8000
|
||||
|
||||
# Pawprint Services
|
||||
13000
|
||||
13001
|
||||
13002
|
||||
13003
|
||||
61
station/tools/hub/ufw.sh
Executable file
61
station/tools/hub/ufw.sh
Executable file
@@ -0,0 +1,61 @@
|
||||
#!/bin/bash
|
||||
# Manage Core Nest ports using ufw
|
||||
# Usage: sudo ./ufw.sh [open|close]
|
||||
|
||||
set -e
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
|
||||
PORTS_FILE="$SCRIPT_DIR/ports"
|
||||
|
||||
if [ "$EUID" -ne 0 ]; then
|
||||
echo "Error: This script must be run as root (use sudo)"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if ! command -v ufw &> /dev/null; then
|
||||
echo "Error: ufw is not installed"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if [ ! -f "$PORTS_FILE" ]; then
|
||||
echo "Error: ports file not found at $PORTS_FILE"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
ACTION="${1:-}"
|
||||
if [ "$ACTION" != "open" ] && [ "$ACTION" != "close" ]; then
|
||||
echo "Usage: sudo $0 [open|close]"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if [ "$ACTION" = "open" ]; then
|
||||
echo "=== Opening Core Nest Ports (ufw) ==="
|
||||
else
|
||||
echo "=== Closing Core Nest Ports (ufw) ==="
|
||||
fi
|
||||
echo ""
|
||||
|
||||
# Read ports and apply action
|
||||
while IFS= read -r line || [ -n "$line" ]; do
|
||||
# Skip comments and empty lines
|
||||
[[ "$line" =~ ^#.*$ ]] && continue
|
||||
[[ -z "$line" ]] && continue
|
||||
|
||||
port=$(echo "$line" | tr -d ' ')
|
||||
|
||||
if [ "$ACTION" = "open" ]; then
|
||||
echo " Port $port: Opening..."
|
||||
ufw allow "$port/tcp" comment "Core Nest"
|
||||
echo " Port $port: ✓ Opened"
|
||||
else
|
||||
echo " Port $port: Closing..."
|
||||
ufw delete allow "$port/tcp" 2>/dev/null || echo " Port $port: Not found (already closed)"
|
||||
echo " Port $port: ✓ Closed"
|
||||
fi
|
||||
done < "$PORTS_FILE"
|
||||
|
||||
echo ""
|
||||
echo "=== Done ==="
|
||||
echo ""
|
||||
echo "Reload ufw to apply changes:"
|
||||
echo " ufw reload"
|
||||
88
station/tools/hub/update-ports.sh
Executable file
88
station/tools/hub/update-ports.sh
Executable file
@@ -0,0 +1,88 @@
|
||||
#!/bin/bash
|
||||
# Update ports file from core_nest configuration
|
||||
# Gathers ports from pawprint and amar .env files
|
||||
#
|
||||
# Usage: ./update-ports.sh
|
||||
|
||||
set -e
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
|
||||
PORTS_FILE="$SCRIPT_DIR/ports"
|
||||
|
||||
# TODO: Make these configurable or auto-detect
|
||||
CORE_NEST_ROOT="${CORE_NEST_ROOT:-/home/mariano/core_nest}"
|
||||
PAWPRINT_ENV="$CORE_NEST_ROOT/pawprint/.env"
|
||||
AMAR_ENV="$CORE_NEST_ROOT/amar/.env"
|
||||
|
||||
echo "=== Updating Core Nest Ports ==="
|
||||
echo ""
|
||||
|
||||
# Backup existing ports file
|
||||
if [ -f "$PORTS_FILE" ]; then
|
||||
cp "$PORTS_FILE" "$PORTS_FILE.bak"
|
||||
echo " ✓ Backed up existing ports to ports.bak"
|
||||
fi
|
||||
|
||||
# Start new ports file
|
||||
cat > "$PORTS_FILE" <<'EOF'
|
||||
# Core Nest Ports
|
||||
# Auto-generated by update-ports.sh
|
||||
# Format: one port per line
|
||||
# Comments allowed with #
|
||||
|
||||
EOF
|
||||
|
||||
# Extract ports from amar .env
|
||||
if [ -f "$AMAR_ENV" ]; then
|
||||
echo " Reading amar ports..."
|
||||
echo "# Amar" >> "$PORTS_FILE"
|
||||
|
||||
# Frontend port (default 3000)
|
||||
AMAR_FRONTEND_PORT=$(grep "^AMAR_FRONTEND_PORT=" "$AMAR_ENV" 2>/dev/null | cut -d'=' -f2 || echo "3000")
|
||||
echo "$AMAR_FRONTEND_PORT" >> "$PORTS_FILE"
|
||||
|
||||
# Backend port (default 8000)
|
||||
AMAR_BACKEND_PORT=$(grep "^AMAR_BACKEND_PORT=" "$AMAR_ENV" 2>/dev/null | cut -d'=' -f2 || echo "8000")
|
||||
echo "$AMAR_BACKEND_PORT" >> "$PORTS_FILE"
|
||||
|
||||
echo " ✓ Added amar ports: $AMAR_FRONTEND_PORT, $AMAR_BACKEND_PORT"
|
||||
else
|
||||
echo " ⚠ Amar .env not found, using defaults"
|
||||
echo "# Amar (defaults)" >> "$PORTS_FILE"
|
||||
echo "3000" >> "$PORTS_FILE"
|
||||
echo "8000" >> "$PORTS_FILE"
|
||||
fi
|
||||
|
||||
echo "" >> "$PORTS_FILE"
|
||||
|
||||
# Extract ports from pawprint .env
|
||||
if [ -f "$PAWPRINT_ENV" ]; then
|
||||
echo " Reading pawprint ports..."
|
||||
echo "# Pawprint Services" >> "$PORTS_FILE"
|
||||
|
||||
PAWPRINT_PORT=$(grep "^PAWPRINT_PORT=" "$PAWPRINT_ENV" 2>/dev/null | cut -d'=' -f2 || echo "13000")
|
||||
ARTERY_PORT=$(grep "^ARTERY_PORT=" "$PAWPRINT_ENV" 2>/dev/null | cut -d'=' -f2 || echo "13001")
|
||||
ALBUM_PORT=$(grep "^ALBUM_PORT=" "$PAWPRINT_ENV" 2>/dev/null | cut -d'=' -f2 || echo "13002")
|
||||
WARD_PORT=$(grep "^WARD_PORT=" "$PAWPRINT_ENV" 2>/dev/null | cut -d'=' -f2 || echo "13003")
|
||||
|
||||
echo "$PAWPRINT_PORT" >> "$PORTS_FILE"
|
||||
echo "$ARTERY_PORT" >> "$PORTS_FILE"
|
||||
echo "$ALBUM_PORT" >> "$PORTS_FILE"
|
||||
echo "$WARD_PORT" >> "$PORTS_FILE"
|
||||
|
||||
echo " ✓ Added pawprint ports: $PAWPRINT_PORT, $ARTERY_PORT, $ALBUM_PORT, $WARD_PORT"
|
||||
else
|
||||
echo " ⚠ Pawprint .env not found, using defaults"
|
||||
echo "# Pawprint Services (defaults)" >> "$PORTS_FILE"
|
||||
echo "13000" >> "$PORTS_FILE"
|
||||
echo "13001" >> "$PORTS_FILE"
|
||||
echo "13002" >> "$PORTS_FILE"
|
||||
echo "13003" >> "$PORTS_FILE"
|
||||
fi
|
||||
|
||||
echo ""
|
||||
echo "=== Done ==="
|
||||
echo ""
|
||||
echo "Updated ports file: $PORTS_FILE"
|
||||
echo ""
|
||||
cat "$PORTS_FILE"
|
||||
163
station/tools/infra/README.md
Normal file
163
station/tools/infra/README.md
Normal file
@@ -0,0 +1,163 @@
|
||||
# Amar Mascotas Infrastructure as Code
|
||||
|
||||
Pulumi configurations for deploying the Amar Mascotas backend to different cloud providers.
|
||||
|
||||
## Structure
|
||||
|
||||
```
|
||||
infra/
|
||||
├── digitalocean/ # DigitalOcean configuration
|
||||
├── aws/ # AWS configuration
|
||||
├── gcp/ # Google Cloud configuration
|
||||
└── shared/ # Shared Python utilities
|
||||
```
|
||||
|
||||
## Prerequisites
|
||||
|
||||
```bash
|
||||
# Install Pulumi
|
||||
curl -fsSL https://get.pulumi.com | sh
|
||||
|
||||
# Install Python dependencies
|
||||
pip install pulumi pulumi-digitalocean pulumi-aws pulumi-gcp
|
||||
|
||||
# Login to Pulumi (free tier, or use local state)
|
||||
pulumi login --local # Local state (no account needed)
|
||||
# OR
|
||||
pulumi login # Pulumi Cloud (free tier available)
|
||||
```
|
||||
|
||||
## Cloud Provider Setup
|
||||
|
||||
### DigitalOcean
|
||||
```bash
|
||||
export DIGITALOCEAN_TOKEN="your-api-token"
|
||||
```
|
||||
|
||||
### AWS
|
||||
```bash
|
||||
aws configure
|
||||
# Or set environment variables:
|
||||
export AWS_ACCESS_KEY_ID="xxx"
|
||||
export AWS_SECRET_ACCESS_KEY="xxx"
|
||||
export AWS_REGION="us-east-1"
|
||||
```
|
||||
|
||||
### GCP
|
||||
```bash
|
||||
gcloud auth application-default login
|
||||
export GOOGLE_PROJECT="your-project-id"
|
||||
```
|
||||
|
||||
## Usage
|
||||
|
||||
```bash
|
||||
cd infra/digitalocean # or aws, gcp
|
||||
|
||||
# Preview changes
|
||||
pulumi preview
|
||||
|
||||
# Deploy
|
||||
pulumi up
|
||||
|
||||
# Destroy
|
||||
pulumi destroy
|
||||
```
|
||||
|
||||
## Cost Comparison (Estimated Monthly)
|
||||
|
||||
| Resource | DigitalOcean | AWS | GCP |
|
||||
|----------|--------------|-----|-----|
|
||||
| Compute (4GB RAM) | $24 | $35 | $30 |
|
||||
| Managed Postgres | $15 | $25 | $25 |
|
||||
| Managed Redis | $15 | $15 | $20 |
|
||||
| Load Balancer | $12 | $18 | $18 |
|
||||
| **Total** | **~$66** | **~$93** | **~$93** |
|
||||
|
||||
## Architecture
|
||||
|
||||
All configurations deploy:
|
||||
- 1x App server (Django + Gunicorn + Celery)
|
||||
- 1x Managed PostgreSQL with PostGIS
|
||||
- 1x Managed Redis
|
||||
- VPC/Network isolation
|
||||
- Firewall rules (SSH, HTTP, HTTPS)
|
||||
|
||||
## Provider Comparison
|
||||
|
||||
### Code Complexity
|
||||
|
||||
| Aspect | DigitalOcean | AWS | GCP |
|
||||
|--------|--------------|-----|-----|
|
||||
| Lines of code | ~180 | ~280 | ~260 |
|
||||
| Resources created | 8 | 15 | 14 |
|
||||
| Networking setup | Simple (VPC only) | Complex (VPC + subnets + IGW + routes) | Medium (VPC + subnet + peering) |
|
||||
| Learning curve | Low | High | Medium |
|
||||
|
||||
### Feature Comparison
|
||||
|
||||
| Feature | DigitalOcean | AWS | GCP |
|
||||
|---------|--------------|-----|-----|
|
||||
| **Managed Postgres** | Yes (DO Database) | Yes (RDS) | Yes (Cloud SQL) |
|
||||
| **PostGIS** | Via extension | Via extension | Via flags |
|
||||
| **Managed Redis** | Yes (DO Database) | Yes (ElastiCache) | Yes (Memorystore) |
|
||||
| **Private networking** | VPC | VPC + subnets | VPC + peering |
|
||||
| **Load balancer** | $12/mo | $18/mo | $18/mo |
|
||||
| **Auto-scaling** | Limited | Full (ASG) | Full (MIG) |
|
||||
| **Regions** | 15 | 30+ | 35+ |
|
||||
| **Free tier** | None | 12 months | $300 credit |
|
||||
|
||||
### When to Choose Each
|
||||
|
||||
**DigitalOcean:**
|
||||
- Simple deployments
|
||||
- Cost-sensitive
|
||||
- Small teams
|
||||
- Latin America (São Paulo region)
|
||||
|
||||
**AWS:**
|
||||
- Enterprise requirements
|
||||
- Need advanced services (Lambda, SQS, etc.)
|
||||
- Complex networking needs
|
||||
- Compliance requirements (HIPAA, PCI)
|
||||
|
||||
**GCP:**
|
||||
- Machine learning integration
|
||||
- Kubernetes-first approach
|
||||
- Good free credits to start
|
||||
- BigQuery/analytics needs
|
||||
|
||||
### Real Cost Breakdown (Your App)
|
||||
|
||||
```
|
||||
DigitalOcean (~$66/mo):
|
||||
├── Droplet 4GB $24
|
||||
├── Managed Postgres $15
|
||||
├── Managed Redis $15
|
||||
└── Load Balancer $12 (optional)
|
||||
|
||||
AWS (~$93/mo):
|
||||
├── EC2 t3.medium $35
|
||||
├── RDS db.t3.micro $25
|
||||
├── ElastiCache $15
|
||||
└── ALB $18 (optional)
|
||||
|
||||
GCP (~$93/mo):
|
||||
├── e2-medium $30
|
||||
├── Cloud SQL $25
|
||||
├── Memorystore $20
|
||||
└── Load Balancer $18 (optional)
|
||||
```
|
||||
|
||||
### Migration Effort
|
||||
|
||||
If you ever need to switch providers:
|
||||
|
||||
| From → To | Effort | Notes |
|
||||
|-----------|--------|-------|
|
||||
| DO → AWS | Medium | Postgres dump/restore, reconfigure Redis |
|
||||
| DO → GCP | Medium | Same as above |
|
||||
| AWS → GCP | Medium | Similar services, different APIs |
|
||||
| Any → Kubernetes | High | Need to containerize everything |
|
||||
|
||||
The Pulumi code is portable - only the provider-specific resources change.
|
||||
6
station/tools/infra/aws/Pulumi.yaml
Normal file
6
station/tools/infra/aws/Pulumi.yaml
Normal file
@@ -0,0 +1,6 @@
|
||||
name: amar-aws
|
||||
runtime:
|
||||
name: python
|
||||
options:
|
||||
virtualenv: venv
|
||||
description: Amar Mascotas infrastructure on AWS
|
||||
341
station/tools/infra/aws/__main__.py
Normal file
341
station/tools/infra/aws/__main__.py
Normal file
@@ -0,0 +1,341 @@
|
||||
"""
|
||||
AWS Infrastructure for Amar Mascotas
|
||||
|
||||
Deploys:
|
||||
- VPC with public/private subnets
|
||||
- EC2 instance for Django app + Celery
|
||||
- RDS PostgreSQL (PostGIS via extension)
|
||||
- ElastiCache Redis
|
||||
- Security Groups
|
||||
- (Optional) ALB, Route53
|
||||
|
||||
Estimated cost: ~$93/month
|
||||
|
||||
NOTE: AWS is more complex but offers more services and better scaling options.
|
||||
"""
|
||||
|
||||
import pulumi
|
||||
import pulumi_aws as aws
|
||||
import sys
|
||||
sys.path.append("..")
|
||||
from shared.config import get_config, APP_SERVER_INIT_SCRIPT
|
||||
|
||||
# Load configuration
|
||||
cfg = get_config()
|
||||
|
||||
# Get current region and availability zones
|
||||
region = aws.get_region()
|
||||
azs = aws.get_availability_zones(state="available")
|
||||
az1 = azs.names[0]
|
||||
az2 = azs.names[1] if len(azs.names) > 1 else azs.names[0]
|
||||
|
||||
# =============================================================================
|
||||
# NETWORKING - VPC
|
||||
# =============================================================================
|
||||
|
||||
vpc = aws.ec2.Vpc(
|
||||
f"{cfg.resource_prefix}-vpc",
|
||||
cidr_block="10.0.0.0/16",
|
||||
enable_dns_hostnames=True,
|
||||
enable_dns_support=True,
|
||||
tags={**cfg.tags, "Name": f"{cfg.resource_prefix}-vpc"},
|
||||
)
|
||||
|
||||
# Internet Gateway (for public internet access)
|
||||
igw = aws.ec2.InternetGateway(
|
||||
f"{cfg.resource_prefix}-igw",
|
||||
vpc_id=vpc.id,
|
||||
tags={**cfg.tags, "Name": f"{cfg.resource_prefix}-igw"},
|
||||
)
|
||||
|
||||
# Public subnets (for EC2, load balancer)
|
||||
public_subnet_1 = aws.ec2.Subnet(
|
||||
f"{cfg.resource_prefix}-public-1",
|
||||
vpc_id=vpc.id,
|
||||
cidr_block="10.0.1.0/24",
|
||||
availability_zone=az1,
|
||||
map_public_ip_on_launch=True,
|
||||
tags={**cfg.tags, "Name": f"{cfg.resource_prefix}-public-1"},
|
||||
)
|
||||
|
||||
public_subnet_2 = aws.ec2.Subnet(
|
||||
f"{cfg.resource_prefix}-public-2",
|
||||
vpc_id=vpc.id,
|
||||
cidr_block="10.0.2.0/24",
|
||||
availability_zone=az2,
|
||||
map_public_ip_on_launch=True,
|
||||
tags={**cfg.tags, "Name": f"{cfg.resource_prefix}-public-2"},
|
||||
)
|
||||
|
||||
# Private subnets (for RDS, ElastiCache)
|
||||
private_subnet_1 = aws.ec2.Subnet(
|
||||
f"{cfg.resource_prefix}-private-1",
|
||||
vpc_id=vpc.id,
|
||||
cidr_block="10.0.10.0/24",
|
||||
availability_zone=az1,
|
||||
tags={**cfg.tags, "Name": f"{cfg.resource_prefix}-private-1"},
|
||||
)
|
||||
|
||||
private_subnet_2 = aws.ec2.Subnet(
|
||||
f"{cfg.resource_prefix}-private-2",
|
||||
vpc_id=vpc.id,
|
||||
cidr_block="10.0.11.0/24",
|
||||
availability_zone=az2,
|
||||
tags={**cfg.tags, "Name": f"{cfg.resource_prefix}-private-2"},
|
||||
)
|
||||
|
||||
# Route table for public subnets
|
||||
public_rt = aws.ec2.RouteTable(
|
||||
f"{cfg.resource_prefix}-public-rt",
|
||||
vpc_id=vpc.id,
|
||||
routes=[
|
||||
aws.ec2.RouteTableRouteArgs(
|
||||
cidr_block="0.0.0.0/0",
|
||||
gateway_id=igw.id,
|
||||
),
|
||||
],
|
||||
tags={**cfg.tags, "Name": f"{cfg.resource_prefix}-public-rt"},
|
||||
)
|
||||
|
||||
# Associate route table with public subnets
|
||||
aws.ec2.RouteTableAssociation(
|
||||
f"{cfg.resource_prefix}-public-1-rta",
|
||||
subnet_id=public_subnet_1.id,
|
||||
route_table_id=public_rt.id,
|
||||
)
|
||||
|
||||
aws.ec2.RouteTableAssociation(
|
||||
f"{cfg.resource_prefix}-public-2-rta",
|
||||
subnet_id=public_subnet_2.id,
|
||||
route_table_id=public_rt.id,
|
||||
)
|
||||
|
||||
# =============================================================================
|
||||
# SECURITY GROUPS
|
||||
# =============================================================================
|
||||
|
||||
# App server security group
|
||||
app_sg = aws.ec2.SecurityGroup(
|
||||
f"{cfg.resource_prefix}-app-sg",
|
||||
vpc_id=vpc.id,
|
||||
description="Security group for app server",
|
||||
ingress=[
|
||||
# SSH
|
||||
aws.ec2.SecurityGroupIngressArgs(
|
||||
protocol="tcp",
|
||||
from_port=22,
|
||||
to_port=22,
|
||||
cidr_blocks=cfg.allowed_ssh_ips or ["0.0.0.0/0"],
|
||||
description="SSH access",
|
||||
),
|
||||
# HTTP
|
||||
aws.ec2.SecurityGroupIngressArgs(
|
||||
protocol="tcp",
|
||||
from_port=80,
|
||||
to_port=80,
|
||||
cidr_blocks=["0.0.0.0/0"],
|
||||
description="HTTP",
|
||||
),
|
||||
# HTTPS
|
||||
aws.ec2.SecurityGroupIngressArgs(
|
||||
protocol="tcp",
|
||||
from_port=443,
|
||||
to_port=443,
|
||||
cidr_blocks=["0.0.0.0/0"],
|
||||
description="HTTPS",
|
||||
),
|
||||
],
|
||||
egress=[
|
||||
aws.ec2.SecurityGroupEgressArgs(
|
||||
protocol="-1",
|
||||
from_port=0,
|
||||
to_port=0,
|
||||
cidr_blocks=["0.0.0.0/0"],
|
||||
description="Allow all outbound",
|
||||
),
|
||||
],
|
||||
tags={**cfg.tags, "Name": f"{cfg.resource_prefix}-app-sg"},
|
||||
)
|
||||
|
||||
# Database security group (only accessible from app server)
|
||||
db_sg = aws.ec2.SecurityGroup(
|
||||
f"{cfg.resource_prefix}-db-sg",
|
||||
vpc_id=vpc.id,
|
||||
description="Security group for RDS",
|
||||
ingress=[
|
||||
aws.ec2.SecurityGroupIngressArgs(
|
||||
protocol="tcp",
|
||||
from_port=5432,
|
||||
to_port=5432,
|
||||
security_groups=[app_sg.id],
|
||||
description="PostgreSQL from app",
|
||||
),
|
||||
],
|
||||
tags={**cfg.tags, "Name": f"{cfg.resource_prefix}-db-sg"},
|
||||
)
|
||||
|
||||
# Redis security group (only accessible from app server)
|
||||
redis_sg = aws.ec2.SecurityGroup(
|
||||
f"{cfg.resource_prefix}-redis-sg",
|
||||
vpc_id=vpc.id,
|
||||
description="Security group for ElastiCache",
|
||||
ingress=[
|
||||
aws.ec2.SecurityGroupIngressArgs(
|
||||
protocol="tcp",
|
||||
from_port=6379,
|
||||
to_port=6379,
|
||||
security_groups=[app_sg.id],
|
||||
description="Redis from app",
|
||||
),
|
||||
],
|
||||
tags={**cfg.tags, "Name": f"{cfg.resource_prefix}-redis-sg"},
|
||||
)
|
||||
|
||||
# =============================================================================
|
||||
# DATABASE - RDS PostgreSQL
|
||||
# =============================================================================
|
||||
|
||||
# Subnet group for RDS (requires at least 2 AZs)
|
||||
db_subnet_group = aws.rds.SubnetGroup(
|
||||
f"{cfg.resource_prefix}-db-subnet-group",
|
||||
subnet_ids=[private_subnet_1.id, private_subnet_2.id],
|
||||
tags={**cfg.tags, "Name": f"{cfg.resource_prefix}-db-subnet-group"},
|
||||
)
|
||||
|
||||
# RDS PostgreSQL instance
|
||||
# Note: PostGIS is available as an extension, enable after creation
|
||||
db_instance = aws.rds.Instance(
|
||||
f"{cfg.resource_prefix}-db",
|
||||
identifier=f"{cfg.resource_prefix}-db",
|
||||
engine="postgres",
|
||||
engine_version=cfg.db_version,
|
||||
instance_class="db.t3.micro", # $25/mo - smallest
|
||||
allocated_storage=20,
|
||||
storage_type="gp3",
|
||||
db_name=cfg.db_name,
|
||||
username=cfg.db_user,
|
||||
password=pulumi.Config().require_secret("db_password"), # Set via: pulumi config set --secret db_password xxx
|
||||
vpc_security_group_ids=[db_sg.id],
|
||||
db_subnet_group_name=db_subnet_group.name,
|
||||
publicly_accessible=False,
|
||||
skip_final_snapshot=True, # Set False for production!
|
||||
backup_retention_period=7,
|
||||
multi_az=False, # Set True for HA ($$$)
|
||||
tags=cfg.tags,
|
||||
)
|
||||
|
||||
# =============================================================================
|
||||
# CACHE - ElastiCache Redis
|
||||
# =============================================================================
|
||||
|
||||
# Subnet group for ElastiCache
|
||||
redis_subnet_group = aws.elasticache.SubnetGroup(
|
||||
f"{cfg.resource_prefix}-redis-subnet-group",
|
||||
subnet_ids=[private_subnet_1.id, private_subnet_2.id],
|
||||
tags=cfg.tags,
|
||||
)
|
||||
|
||||
# ElastiCache Redis cluster
|
||||
redis_cluster = aws.elasticache.Cluster(
|
||||
f"{cfg.resource_prefix}-redis",
|
||||
cluster_id=f"{cfg.resource_prefix}-redis",
|
||||
engine="redis",
|
||||
engine_version="7.0",
|
||||
node_type="cache.t3.micro", # $15/mo - smallest
|
||||
num_cache_nodes=1,
|
||||
port=6379,
|
||||
subnet_group_name=redis_subnet_group.name,
|
||||
security_group_ids=[redis_sg.id],
|
||||
tags=cfg.tags,
|
||||
)
|
||||
|
||||
# =============================================================================
|
||||
# COMPUTE - EC2 Instance
|
||||
# =============================================================================
|
||||
|
||||
# Get latest Ubuntu 22.04 AMI
|
||||
ubuntu_ami = aws.ec2.get_ami(
|
||||
most_recent=True,
|
||||
owners=["099720109477"], # Canonical
|
||||
filters=[
|
||||
aws.ec2.GetAmiFilterArgs(
|
||||
name="name",
|
||||
values=["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"],
|
||||
),
|
||||
aws.ec2.GetAmiFilterArgs(
|
||||
name="virtualization-type",
|
||||
values=["hvm"],
|
||||
),
|
||||
],
|
||||
)
|
||||
|
||||
# Key pair (import your existing key or create new)
|
||||
# key_pair = aws.ec2.KeyPair(
|
||||
# f"{cfg.resource_prefix}-key",
|
||||
# public_key=open("~/.ssh/id_rsa.pub").read(),
|
||||
# tags=cfg.tags,
|
||||
# )
|
||||
|
||||
# EC2 instance
|
||||
ec2_instance = aws.ec2.Instance(
|
||||
f"{cfg.resource_prefix}-app",
|
||||
ami=ubuntu_ami.id,
|
||||
instance_type="t3.medium", # $35/mo - 4GB RAM, 2 vCPU
|
||||
subnet_id=public_subnet_1.id,
|
||||
vpc_security_group_ids=[app_sg.id],
|
||||
# key_name=key_pair.key_name, # Uncomment when key_pair is defined
|
||||
user_data=APP_SERVER_INIT_SCRIPT,
|
||||
root_block_device=aws.ec2.InstanceRootBlockDeviceArgs(
|
||||
volume_size=30,
|
||||
volume_type="gp3",
|
||||
),
|
||||
tags={**cfg.tags, "Name": f"{cfg.resource_prefix}-app"},
|
||||
)
|
||||
|
||||
# Elastic IP (static public IP)
|
||||
eip = aws.ec2.Eip(
|
||||
f"{cfg.resource_prefix}-eip",
|
||||
instance=ec2_instance.id,
|
||||
domain="vpc",
|
||||
tags={**cfg.tags, "Name": f"{cfg.resource_prefix}-eip"},
|
||||
)
|
||||
|
||||
# =============================================================================
|
||||
# OPTIONAL: Application Load Balancer (uncomment if needed)
|
||||
# =============================================================================
|
||||
|
||||
# alb = aws.lb.LoadBalancer(
|
||||
# f"{cfg.resource_prefix}-alb",
|
||||
# load_balancer_type="application",
|
||||
# security_groups=[app_sg.id],
|
||||
# subnets=[public_subnet_1.id, public_subnet_2.id],
|
||||
# tags=cfg.tags,
|
||||
# )
|
||||
|
||||
# =============================================================================
|
||||
# OUTPUTS
|
||||
# =============================================================================
|
||||
|
||||
pulumi.export("ec2_public_ip", eip.public_ip)
|
||||
pulumi.export("ec2_private_ip", ec2_instance.private_ip)
|
||||
pulumi.export("db_endpoint", db_instance.endpoint)
|
||||
pulumi.export("db_name", cfg.db_name)
|
||||
pulumi.export("db_user", cfg.db_user)
|
||||
pulumi.export("redis_endpoint", redis_cluster.cache_nodes[0].address)
|
||||
pulumi.export("redis_port", redis_cluster.port)
|
||||
|
||||
# Generate .env content
|
||||
pulumi.export("env_file", pulumi.Output.all(
|
||||
db_instance.endpoint,
|
||||
redis_cluster.cache_nodes[0].address,
|
||||
redis_cluster.port,
|
||||
).apply(lambda args: f"""
|
||||
# Generated by Pulumi - AWS
|
||||
DB_HOST={args[0].split(':')[0]}
|
||||
DB_PORT=5432
|
||||
DB_NAME={cfg.db_name}
|
||||
DB_USER={cfg.db_user}
|
||||
DB_PASSWORD=<set via pulumi config>
|
||||
CELERY_BROKER_URL=redis://{args[1]}:{args[2]}/0
|
||||
CELERY_RESULT_BACKEND=redis://{args[1]}:{args[2]}/0
|
||||
"""))
|
||||
2
station/tools/infra/aws/requirements.txt
Normal file
2
station/tools/infra/aws/requirements.txt
Normal file
@@ -0,0 +1,2 @@
|
||||
pulumi>=3.0.0
|
||||
pulumi-aws>=6.0.0
|
||||
6
station/tools/infra/digitalocean/Pulumi.yaml
Normal file
6
station/tools/infra/digitalocean/Pulumi.yaml
Normal file
@@ -0,0 +1,6 @@
|
||||
name: amar-digitalocean
|
||||
runtime:
|
||||
name: python
|
||||
options:
|
||||
virtualenv: venv
|
||||
description: Amar Mascotas infrastructure on DigitalOcean
|
||||
269
station/tools/infra/digitalocean/__main__.py
Normal file
269
station/tools/infra/digitalocean/__main__.py
Normal file
@@ -0,0 +1,269 @@
|
||||
"""
|
||||
DigitalOcean Infrastructure for Amar Mascotas
|
||||
|
||||
Deploys:
|
||||
- VPC for network isolation
|
||||
- Droplet for Django app + Celery
|
||||
- Managed PostgreSQL (with PostGIS via extension)
|
||||
- Managed Redis
|
||||
- Firewall rules
|
||||
- (Optional) Load Balancer, Domain records
|
||||
|
||||
Estimated cost: ~$66/month
|
||||
"""
|
||||
|
||||
import pulumi
|
||||
import pulumi_digitalocean as do
|
||||
import sys
|
||||
sys.path.append("..")
|
||||
from shared.config import get_config, APP_SERVER_INIT_SCRIPT
|
||||
|
||||
# Load configuration
|
||||
cfg = get_config()
|
||||
|
||||
# =============================================================================
|
||||
# NETWORKING
|
||||
# =============================================================================
|
||||
|
||||
# VPC for private networking between resources
|
||||
vpc = do.Vpc(
|
||||
f"{cfg.resource_prefix}-vpc",
|
||||
name=f"{cfg.resource_prefix}-vpc",
|
||||
region="nyc1",
|
||||
ip_range="10.10.10.0/24",
|
||||
)
|
||||
|
||||
# =============================================================================
|
||||
# DATABASE - Managed PostgreSQL
|
||||
# =============================================================================
|
||||
|
||||
# DigitalOcean managed Postgres (PostGIS available as extension)
|
||||
db_cluster = do.DatabaseCluster(
|
||||
f"{cfg.resource_prefix}-db",
|
||||
name=f"{cfg.resource_prefix}-db",
|
||||
engine="pg",
|
||||
version=cfg.db_version,
|
||||
size="db-s-1vcpu-1gb", # $15/mo - smallest managed DB
|
||||
region="nyc1",
|
||||
node_count=1, # Single node (use 2+ for HA)
|
||||
private_network_uuid=vpc.id,
|
||||
tags=[cfg.environment],
|
||||
)
|
||||
|
||||
# Create application database
|
||||
db = do.DatabaseDb(
|
||||
f"{cfg.resource_prefix}-database",
|
||||
cluster_id=db_cluster.id,
|
||||
name=cfg.db_name,
|
||||
)
|
||||
|
||||
# Create database user
|
||||
db_user = do.DatabaseUser(
|
||||
f"{cfg.resource_prefix}-db-user",
|
||||
cluster_id=db_cluster.id,
|
||||
name=cfg.db_user,
|
||||
)
|
||||
|
||||
# =============================================================================
|
||||
# CACHE - Managed Redis
|
||||
# =============================================================================
|
||||
|
||||
redis_cluster = do.DatabaseCluster(
|
||||
f"{cfg.resource_prefix}-redis",
|
||||
name=f"{cfg.resource_prefix}-redis",
|
||||
engine="redis",
|
||||
version=cfg.redis_version,
|
||||
size="db-s-1vcpu-1gb", # $15/mo
|
||||
region="nyc1",
|
||||
node_count=1,
|
||||
private_network_uuid=vpc.id,
|
||||
tags=[cfg.environment],
|
||||
)
|
||||
|
||||
# =============================================================================
|
||||
# COMPUTE - Droplet
|
||||
# =============================================================================
|
||||
|
||||
# SSH key (you should create this beforehand or import existing)
|
||||
# ssh_key = do.SshKey(
|
||||
# f"{cfg.resource_prefix}-ssh-key",
|
||||
# name=f"{cfg.resource_prefix}-key",
|
||||
# public_key=open("~/.ssh/id_rsa.pub").read(),
|
||||
# )
|
||||
|
||||
# Use existing SSH keys (fetch by name or fingerprint)
|
||||
ssh_keys = do.get_ssh_keys()
|
||||
|
||||
# App server droplet
|
||||
droplet = do.Droplet(
|
||||
f"{cfg.resource_prefix}-app",
|
||||
name=f"{cfg.resource_prefix}-app",
|
||||
image="ubuntu-22-04-x64",
|
||||
size="s-2vcpu-4gb", # $24/mo - 4GB RAM, 2 vCPU
|
||||
region="nyc1",
|
||||
vpc_uuid=vpc.id,
|
||||
ssh_keys=[k.id for k in ssh_keys.ssh_keys[:1]] if ssh_keys.ssh_keys else [],
|
||||
user_data=APP_SERVER_INIT_SCRIPT,
|
||||
tags=[cfg.environment, "app"],
|
||||
opts=pulumi.ResourceOptions(depends_on=[db_cluster, redis_cluster]),
|
||||
)
|
||||
|
||||
# =============================================================================
|
||||
# FIREWALL
|
||||
# =============================================================================
|
||||
|
||||
firewall = do.Firewall(
|
||||
f"{cfg.resource_prefix}-firewall",
|
||||
name=f"{cfg.resource_prefix}-firewall",
|
||||
droplet_ids=[droplet.id],
|
||||
|
||||
# Inbound rules
|
||||
inbound_rules=[
|
||||
# SSH (restrict to specific IPs in production)
|
||||
do.FirewallInboundRuleArgs(
|
||||
protocol="tcp",
|
||||
port_range="22",
|
||||
source_addresses=cfg.allowed_ssh_ips or ["0.0.0.0/0", "::/0"],
|
||||
),
|
||||
# HTTP
|
||||
do.FirewallInboundRuleArgs(
|
||||
protocol="tcp",
|
||||
port_range="80",
|
||||
source_addresses=["0.0.0.0/0", "::/0"],
|
||||
),
|
||||
# HTTPS
|
||||
do.FirewallInboundRuleArgs(
|
||||
protocol="tcp",
|
||||
port_range="443",
|
||||
source_addresses=["0.0.0.0/0", "::/0"],
|
||||
),
|
||||
],
|
||||
|
||||
# Outbound rules (allow all outbound)
|
||||
outbound_rules=[
|
||||
do.FirewallOutboundRuleArgs(
|
||||
protocol="tcp",
|
||||
port_range="1-65535",
|
||||
destination_addresses=["0.0.0.0/0", "::/0"],
|
||||
),
|
||||
do.FirewallOutboundRuleArgs(
|
||||
protocol="udp",
|
||||
port_range="1-65535",
|
||||
destination_addresses=["0.0.0.0/0", "::/0"],
|
||||
),
|
||||
do.FirewallOutboundRuleArgs(
|
||||
protocol="icmp",
|
||||
destination_addresses=["0.0.0.0/0", "::/0"],
|
||||
),
|
||||
],
|
||||
)
|
||||
|
||||
# =============================================================================
|
||||
# DATABASE FIREWALL - Only allow app server
|
||||
# =============================================================================
|
||||
|
||||
db_firewall = do.DatabaseFirewall(
|
||||
f"{cfg.resource_prefix}-db-firewall",
|
||||
cluster_id=db_cluster.id,
|
||||
rules=[
|
||||
do.DatabaseFirewallRuleArgs(
|
||||
type="droplet",
|
||||
value=droplet.id,
|
||||
),
|
||||
],
|
||||
)
|
||||
|
||||
redis_firewall = do.DatabaseFirewall(
|
||||
f"{cfg.resource_prefix}-redis-firewall",
|
||||
cluster_id=redis_cluster.id,
|
||||
rules=[
|
||||
do.DatabaseFirewallRuleArgs(
|
||||
type="droplet",
|
||||
value=droplet.id,
|
||||
),
|
||||
],
|
||||
)
|
||||
|
||||
# =============================================================================
|
||||
# OPTIONAL: Load Balancer (uncomment if needed)
|
||||
# =============================================================================
|
||||
|
||||
# load_balancer = do.LoadBalancer(
|
||||
# f"{cfg.resource_prefix}-lb",
|
||||
# name=f"{cfg.resource_prefix}-lb",
|
||||
# region="nyc1",
|
||||
# vpc_uuid=vpc.id,
|
||||
# droplet_ids=[droplet.id],
|
||||
# forwarding_rules=[
|
||||
# do.LoadBalancerForwardingRuleArgs(
|
||||
# entry_port=443,
|
||||
# entry_protocol="https",
|
||||
# target_port=80,
|
||||
# target_protocol="http",
|
||||
# certificate_name=f"{cfg.resource_prefix}-cert",
|
||||
# ),
|
||||
# do.LoadBalancerForwardingRuleArgs(
|
||||
# entry_port=80,
|
||||
# entry_protocol="http",
|
||||
# target_port=80,
|
||||
# target_protocol="http",
|
||||
# ),
|
||||
# ],
|
||||
# healthcheck=do.LoadBalancerHealthcheckArgs(
|
||||
# port=80,
|
||||
# protocol="http",
|
||||
# path="/health/",
|
||||
# ),
|
||||
# )
|
||||
|
||||
# =============================================================================
|
||||
# OPTIONAL: DNS Records (uncomment if managing domain in DO)
|
||||
# =============================================================================
|
||||
|
||||
# domain = do.Domain(
|
||||
# f"{cfg.resource_prefix}-domain",
|
||||
# name=cfg.domain,
|
||||
# )
|
||||
#
|
||||
# api_record = do.DnsRecord(
|
||||
# f"{cfg.resource_prefix}-api-dns",
|
||||
# domain=domain.name,
|
||||
# type="A",
|
||||
# name="backoffice",
|
||||
# value=droplet.ipv4_address,
|
||||
# ttl=300,
|
||||
# )
|
||||
|
||||
# =============================================================================
|
||||
# OUTPUTS
|
||||
# =============================================================================
|
||||
|
||||
pulumi.export("droplet_ip", droplet.ipv4_address)
|
||||
pulumi.export("droplet_private_ip", droplet.ipv4_address_private)
|
||||
pulumi.export("db_host", db_cluster.private_host)
|
||||
pulumi.export("db_port", db_cluster.port)
|
||||
pulumi.export("db_name", cfg.db_name)
|
||||
pulumi.export("db_user", cfg.db_user)
|
||||
pulumi.export("db_password", db_user.password)
|
||||
pulumi.export("redis_host", redis_cluster.private_host)
|
||||
pulumi.export("redis_port", redis_cluster.port)
|
||||
pulumi.export("redis_password", redis_cluster.password)
|
||||
|
||||
# Generate .env content for easy deployment
|
||||
pulumi.export("env_file", pulumi.Output.all(
|
||||
db_cluster.private_host,
|
||||
db_cluster.port,
|
||||
db_user.password,
|
||||
redis_cluster.private_host,
|
||||
redis_cluster.port,
|
||||
redis_cluster.password,
|
||||
).apply(lambda args: f"""
|
||||
# Generated by Pulumi - DigitalOcean
|
||||
DB_HOST={args[0]}
|
||||
DB_PORT={args[1]}
|
||||
DB_NAME={cfg.db_name}
|
||||
DB_USER={cfg.db_user}
|
||||
DB_PASSWORD={args[2]}
|
||||
CELERY_BROKER_URL=rediss://default:{args[5]}@{args[3]}:{args[4]}
|
||||
CELERY_RESULT_BACKEND=rediss://default:{args[5]}@{args[3]}:{args[4]}
|
||||
"""))
|
||||
2
station/tools/infra/digitalocean/requirements.txt
Normal file
2
station/tools/infra/digitalocean/requirements.txt
Normal file
@@ -0,0 +1,2 @@
|
||||
pulumi>=3.0.0
|
||||
pulumi-digitalocean>=4.0.0
|
||||
6
station/tools/infra/gcp/Pulumi.yaml
Normal file
6
station/tools/infra/gcp/Pulumi.yaml
Normal file
@@ -0,0 +1,6 @@
|
||||
name: amar-gcp
|
||||
runtime:
|
||||
name: python
|
||||
options:
|
||||
virtualenv: venv
|
||||
description: Amar Mascotas infrastructure on Google Cloud Platform
|
||||
286
station/tools/infra/gcp/__main__.py
Normal file
286
station/tools/infra/gcp/__main__.py
Normal file
@@ -0,0 +1,286 @@
|
||||
"""
|
||||
Google Cloud Platform Infrastructure for Amar Mascotas
|
||||
|
||||
Deploys:
|
||||
- VPC with subnets
|
||||
- Compute Engine instance for Django app + Celery
|
||||
- Cloud SQL PostgreSQL (PostGIS via flags)
|
||||
- Memorystore Redis
|
||||
- Firewall rules
|
||||
- (Optional) Cloud Load Balancer, Cloud DNS
|
||||
|
||||
Estimated cost: ~$93/month
|
||||
|
||||
NOTE: GCP has good free tier credits and competitive pricing.
|
||||
PostGIS requires enabling the `cloudsql.enable_pgaudit` flag.
|
||||
"""
|
||||
|
||||
import pulumi
|
||||
import pulumi_gcp as gcp
|
||||
import sys
|
||||
sys.path.append("..")
|
||||
from shared.config import get_config, APP_SERVER_INIT_SCRIPT
|
||||
|
||||
# Load configuration
|
||||
cfg = get_config()
|
||||
|
||||
# Get project
|
||||
project = gcp.organizations.get_project()
|
||||
|
||||
# =============================================================================
|
||||
# NETWORKING - VPC
|
||||
# =============================================================================
|
||||
|
||||
# VPC Network
|
||||
vpc = gcp.compute.Network(
|
||||
f"{cfg.resource_prefix}-vpc",
|
||||
name=f"{cfg.resource_prefix}-vpc",
|
||||
auto_create_subnetworks=False,
|
||||
description="VPC for Amar Mascotas",
|
||||
)
|
||||
|
||||
# Subnet for compute resources
|
||||
subnet = gcp.compute.Subnetwork(
|
||||
f"{cfg.resource_prefix}-subnet",
|
||||
name=f"{cfg.resource_prefix}-subnet",
|
||||
ip_cidr_range="10.0.1.0/24",
|
||||
region="us-east1",
|
||||
network=vpc.id,
|
||||
private_ip_google_access=True, # Access Google APIs without public IP
|
||||
)
|
||||
|
||||
# =============================================================================
|
||||
# FIREWALL RULES
|
||||
# =============================================================================
|
||||
|
||||
# Allow SSH
|
||||
firewall_ssh = gcp.compute.Firewall(
|
||||
f"{cfg.resource_prefix}-allow-ssh",
|
||||
name=f"{cfg.resource_prefix}-allow-ssh",
|
||||
network=vpc.name,
|
||||
allows=[
|
||||
gcp.compute.FirewallAllowArgs(
|
||||
protocol="tcp",
|
||||
ports=["22"],
|
||||
),
|
||||
],
|
||||
source_ranges=cfg.allowed_ssh_ips or ["0.0.0.0/0"],
|
||||
target_tags=["app-server"],
|
||||
)
|
||||
|
||||
# Allow HTTP/HTTPS
|
||||
firewall_http = gcp.compute.Firewall(
|
||||
f"{cfg.resource_prefix}-allow-http",
|
||||
name=f"{cfg.resource_prefix}-allow-http",
|
||||
network=vpc.name,
|
||||
allows=[
|
||||
gcp.compute.FirewallAllowArgs(
|
||||
protocol="tcp",
|
||||
ports=["80", "443"],
|
||||
),
|
||||
],
|
||||
source_ranges=["0.0.0.0/0"],
|
||||
target_tags=["app-server"],
|
||||
)
|
||||
|
||||
# Allow internal traffic (for DB/Redis access)
|
||||
firewall_internal = gcp.compute.Firewall(
|
||||
f"{cfg.resource_prefix}-allow-internal",
|
||||
name=f"{cfg.resource_prefix}-allow-internal",
|
||||
network=vpc.name,
|
||||
allows=[
|
||||
gcp.compute.FirewallAllowArgs(
|
||||
protocol="tcp",
|
||||
ports=["0-65535"],
|
||||
),
|
||||
gcp.compute.FirewallAllowArgs(
|
||||
protocol="udp",
|
||||
ports=["0-65535"],
|
||||
),
|
||||
gcp.compute.FirewallAllowArgs(
|
||||
protocol="icmp",
|
||||
),
|
||||
],
|
||||
source_ranges=["10.0.0.0/8"],
|
||||
)
|
||||
|
||||
# =============================================================================
|
||||
# DATABASE - Cloud SQL PostgreSQL
|
||||
# =============================================================================
|
||||
|
||||
# Cloud SQL instance
|
||||
# Note: PostGIS available via database flags
|
||||
db_instance = gcp.sql.DatabaseInstance(
|
||||
f"{cfg.resource_prefix}-db",
|
||||
name=f"{cfg.resource_prefix}-db",
|
||||
database_version="POSTGRES_15",
|
||||
region="us-east1",
|
||||
deletion_protection=False, # Set True for production!
|
||||
settings=gcp.sql.DatabaseInstanceSettingsArgs(
|
||||
tier="db-f1-micro", # $25/mo - smallest
|
||||
disk_size=10,
|
||||
disk_type="PD_SSD",
|
||||
ip_configuration=gcp.sql.DatabaseInstanceSettingsIpConfigurationArgs(
|
||||
ipv4_enabled=False,
|
||||
private_network=vpc.id,
|
||||
enable_private_path_for_google_cloud_services=True,
|
||||
),
|
||||
backup_configuration=gcp.sql.DatabaseInstanceSettingsBackupConfigurationArgs(
|
||||
enabled=True,
|
||||
start_time="03:00",
|
||||
),
|
||||
database_flags=[
|
||||
# Enable PostGIS extensions
|
||||
gcp.sql.DatabaseInstanceSettingsDatabaseFlagArgs(
|
||||
name="cloudsql.enable_pg_cron",
|
||||
value="on",
|
||||
),
|
||||
],
|
||||
user_labels=cfg.tags,
|
||||
),
|
||||
opts=pulumi.ResourceOptions(depends_on=[vpc]),
|
||||
)
|
||||
|
||||
# Database
|
||||
db = gcp.sql.Database(
|
||||
f"{cfg.resource_prefix}-database",
|
||||
name=cfg.db_name,
|
||||
instance=db_instance.name,
|
||||
)
|
||||
|
||||
# Database user
|
||||
db_user = gcp.sql.User(
|
||||
f"{cfg.resource_prefix}-db-user",
|
||||
name=cfg.db_user,
|
||||
instance=db_instance.name,
|
||||
password=pulumi.Config().require_secret("db_password"),
|
||||
)
|
||||
|
||||
# Private IP for Cloud SQL
|
||||
private_ip_address = gcp.compute.GlobalAddress(
|
||||
f"{cfg.resource_prefix}-db-private-ip",
|
||||
name=f"{cfg.resource_prefix}-db-private-ip",
|
||||
purpose="VPC_PEERING",
|
||||
address_type="INTERNAL",
|
||||
prefix_length=16,
|
||||
network=vpc.id,
|
||||
)
|
||||
|
||||
# VPC peering for Cloud SQL
|
||||
private_vpc_connection = gcp.servicenetworking.Connection(
|
||||
f"{cfg.resource_prefix}-private-vpc-connection",
|
||||
network=vpc.id,
|
||||
service="servicenetworking.googleapis.com",
|
||||
reserved_peering_ranges=[private_ip_address.name],
|
||||
)
|
||||
|
||||
# =============================================================================
|
||||
# CACHE - Memorystore Redis
|
||||
# =============================================================================
|
||||
|
||||
redis_instance = gcp.redis.Instance(
|
||||
f"{cfg.resource_prefix}-redis",
|
||||
name=f"{cfg.resource_prefix}-redis",
|
||||
tier="BASIC", # $20/mo - no HA
|
||||
memory_size_gb=1,
|
||||
region="us-east1",
|
||||
redis_version="REDIS_7_0",
|
||||
authorized_network=vpc.id,
|
||||
connect_mode="PRIVATE_SERVICE_ACCESS",
|
||||
labels=cfg.tags,
|
||||
opts=pulumi.ResourceOptions(depends_on=[private_vpc_connection]),
|
||||
)
|
||||
|
||||
# =============================================================================
|
||||
# COMPUTE - Compute Engine Instance
|
||||
# =============================================================================
|
||||
|
||||
# Service account for the instance
|
||||
service_account = gcp.serviceaccount.Account(
|
||||
f"{cfg.resource_prefix}-sa",
|
||||
account_id=f"{cfg.resource_prefix}-app-sa",
|
||||
display_name="Amar App Service Account",
|
||||
)
|
||||
|
||||
# Compute instance
|
||||
instance = gcp.compute.Instance(
|
||||
f"{cfg.resource_prefix}-app",
|
||||
name=f"{cfg.resource_prefix}-app",
|
||||
machine_type="e2-medium", # $30/mo - 4GB RAM, 2 vCPU
|
||||
zone="us-east1-b",
|
||||
tags=["app-server"],
|
||||
boot_disk=gcp.compute.InstanceBootDiskArgs(
|
||||
initialize_params=gcp.compute.InstanceBootDiskInitializeParamsArgs(
|
||||
image="ubuntu-os-cloud/ubuntu-2204-lts",
|
||||
size=30,
|
||||
type="pd-ssd",
|
||||
),
|
||||
),
|
||||
network_interfaces=[
|
||||
gcp.compute.InstanceNetworkInterfaceArgs(
|
||||
network=vpc.id,
|
||||
subnetwork=subnet.id,
|
||||
access_configs=[
|
||||
gcp.compute.InstanceNetworkInterfaceAccessConfigArgs(
|
||||
# Ephemeral public IP
|
||||
),
|
||||
],
|
||||
),
|
||||
],
|
||||
service_account=gcp.compute.InstanceServiceAccountArgs(
|
||||
email=service_account.email,
|
||||
scopes=["cloud-platform"],
|
||||
),
|
||||
metadata_startup_script=APP_SERVER_INIT_SCRIPT,
|
||||
labels=cfg.tags,
|
||||
)
|
||||
|
||||
# Static external IP (optional, costs extra)
|
||||
static_ip = gcp.compute.Address(
|
||||
f"{cfg.resource_prefix}-static-ip",
|
||||
name=f"{cfg.resource_prefix}-static-ip",
|
||||
region="us-east1",
|
||||
)
|
||||
|
||||
# =============================================================================
|
||||
# OPTIONAL: Cloud Load Balancer (uncomment if needed)
|
||||
# =============================================================================
|
||||
|
||||
# health_check = gcp.compute.HealthCheck(
|
||||
# f"{cfg.resource_prefix}-health-check",
|
||||
# name=f"{cfg.resource_prefix}-health-check",
|
||||
# http_health_check=gcp.compute.HealthCheckHttpHealthCheckArgs(
|
||||
# port=80,
|
||||
# request_path="/health/",
|
||||
# ),
|
||||
# )
|
||||
|
||||
# =============================================================================
|
||||
# OUTPUTS
|
||||
# =============================================================================
|
||||
|
||||
pulumi.export("instance_public_ip", instance.network_interfaces[0].access_configs[0].nat_ip)
|
||||
pulumi.export("instance_private_ip", instance.network_interfaces[0].network_ip)
|
||||
pulumi.export("static_ip", static_ip.address)
|
||||
pulumi.export("db_private_ip", db_instance.private_ip_address)
|
||||
pulumi.export("db_connection_name", db_instance.connection_name)
|
||||
pulumi.export("db_name", cfg.db_name)
|
||||
pulumi.export("db_user", cfg.db_user)
|
||||
pulumi.export("redis_host", redis_instance.host)
|
||||
pulumi.export("redis_port", redis_instance.port)
|
||||
|
||||
# Generate .env content
|
||||
pulumi.export("env_file", pulumi.Output.all(
|
||||
db_instance.private_ip_address,
|
||||
redis_instance.host,
|
||||
redis_instance.port,
|
||||
).apply(lambda args: f"""
|
||||
# Generated by Pulumi - GCP
|
||||
DB_HOST={args[0]}
|
||||
DB_PORT=5432
|
||||
DB_NAME={cfg.db_name}
|
||||
DB_USER={cfg.db_user}
|
||||
DB_PASSWORD=<set via pulumi config>
|
||||
CELERY_BROKER_URL=redis://{args[1]}:{args[2]}/0
|
||||
CELERY_RESULT_BACKEND=redis://{args[1]}:{args[2]}/0
|
||||
"""))
|
||||
2
station/tools/infra/gcp/requirements.txt
Normal file
2
station/tools/infra/gcp/requirements.txt
Normal file
@@ -0,0 +1,2 @@
|
||||
pulumi>=3.0.0
|
||||
pulumi-gcp>=7.0.0
|
||||
4
station/tools/infra/shared/__init__.py
Normal file
4
station/tools/infra/shared/__init__.py
Normal file
@@ -0,0 +1,4 @@
|
||||
# Shared configuration module
|
||||
from .config import get_config, AppConfig, APP_SERVER_INIT_SCRIPT
|
||||
|
||||
__all__ = ["get_config", "AppConfig", "APP_SERVER_INIT_SCRIPT"]
|
||||
99
station/tools/infra/shared/config.py
Normal file
99
station/tools/infra/shared/config.py
Normal file
@@ -0,0 +1,99 @@
|
||||
"""
|
||||
Shared configuration for all cloud deployments.
|
||||
Centralizes app-specific settings that are cloud-agnostic.
|
||||
"""
|
||||
|
||||
from dataclasses import dataclass
|
||||
from typing import Optional
|
||||
import pulumi
|
||||
|
||||
|
||||
@dataclass
|
||||
class AppConfig:
|
||||
"""Application configuration shared across all cloud providers."""
|
||||
|
||||
# Naming
|
||||
project_name: str = "amar"
|
||||
environment: str = "production" # production, staging, dev
|
||||
|
||||
# Compute sizing
|
||||
app_cpu: int = 2 # vCPUs
|
||||
app_memory_gb: int = 4 # GB RAM
|
||||
|
||||
# Database
|
||||
db_name: str = "amarback"
|
||||
db_user: str = "amaruser"
|
||||
db_version: str = "15" # PostgreSQL version
|
||||
db_size_gb: int = 10 # Storage
|
||||
|
||||
# Redis
|
||||
redis_version: str = "7"
|
||||
redis_memory_mb: int = 1024
|
||||
|
||||
# Networking
|
||||
allowed_ssh_ips: list = None # IPs allowed to SSH (None = your IP only)
|
||||
domain: Optional[str] = "amarmascotas.ar"
|
||||
|
||||
def __post_init__(self):
|
||||
if self.allowed_ssh_ips is None:
|
||||
self.allowed_ssh_ips = []
|
||||
|
||||
@property
|
||||
def resource_prefix(self) -> str:
|
||||
"""Prefix for all resource names."""
|
||||
return f"{self.project_name}-{self.environment}"
|
||||
|
||||
@property
|
||||
def tags(self) -> dict:
|
||||
"""Common tags for all resources."""
|
||||
return {
|
||||
"Project": self.project_name,
|
||||
"Environment": self.environment,
|
||||
"ManagedBy": "Pulumi",
|
||||
}
|
||||
|
||||
|
||||
def get_config() -> AppConfig:
|
||||
"""Load configuration from Pulumi config or use defaults."""
|
||||
config = pulumi.Config()
|
||||
|
||||
return AppConfig(
|
||||
project_name=config.get("project_name") or "amar",
|
||||
environment=config.get("environment") or "production",
|
||||
app_memory_gb=config.get_int("app_memory_gb") or 4,
|
||||
db_name=config.get("db_name") or "amarback",
|
||||
db_user=config.get("db_user") or "amaruser",
|
||||
domain=config.get("domain") or "amarmascotas.ar",
|
||||
)
|
||||
|
||||
|
||||
# Cloud-init script for app server setup
|
||||
APP_SERVER_INIT_SCRIPT = """#!/bin/bash
|
||||
set -e
|
||||
|
||||
# Update system
|
||||
apt-get update
|
||||
apt-get upgrade -y
|
||||
|
||||
# Install dependencies
|
||||
apt-get install -y \\
|
||||
python3-pip python3-venv \\
|
||||
postgresql-client \\
|
||||
gdal-bin libgdal-dev libgeos-dev libproj-dev \\
|
||||
nginx certbot python3-certbot-nginx \\
|
||||
supervisor \\
|
||||
git
|
||||
|
||||
# Create app user
|
||||
useradd -m -s /bin/bash amarapp || true
|
||||
|
||||
# Create directories
|
||||
mkdir -p /var/www/amarmascotas/media
|
||||
mkdir -p /var/etc/static
|
||||
mkdir -p /home/amarapp/app
|
||||
chown -R amarapp:amarapp /var/www/amarmascotas
|
||||
chown -R amarapp:amarapp /var/etc/static
|
||||
chown -R amarapp:amarapp /home/amarapp
|
||||
|
||||
echo "Base setup complete. Deploy application code separately."
|
||||
"""
|
||||
6
station/tools/tester/.env
Normal file
6
station/tools/tester/.env
Normal file
@@ -0,0 +1,6 @@
|
||||
# Contract HTTP Tests - Environment Configuration
|
||||
#
|
||||
# Get API key: ./get-api-key.sh --docker core_nest_db
|
||||
|
||||
CONTRACT_TEST_URL=http://backend:8000
|
||||
CONTRACT_TEST_API_KEY=118b1fcca089496919f0d82df2c4c89d35126793dfc3ea645366ae09d931f49f
|
||||
411
station/tools/tester/ENHANCEMENT_DESIGN.md
Normal file
411
station/tools/tester/ENHANCEMENT_DESIGN.md
Normal file
@@ -0,0 +1,411 @@
|
||||
# Tester Enhancement Design
|
||||
|
||||
## Problem Statement
|
||||
|
||||
Current tester filter UI "sucks" because:
|
||||
1. **Code-centric filtering** - organizes by Python modules/classes, not user behavior
|
||||
2. **No Gherkin integration** - can't filter by scenarios or features
|
||||
3. **No pulse variables** - can't filter by:
|
||||
- User roles (VET, USER/petowner, ADMIN)
|
||||
- Flow stages (coverage check, service selection, payment, turno)
|
||||
- Data states (has_pets, has_coverage, needs_payment)
|
||||
- Service types, mock behaviors
|
||||
4. **Clunky manual testing** - checkbox-based selection, not "piano playing" rapid execution
|
||||
5. **Backend tests only** - no frontend (Playwright) test support
|
||||
6. **No video captures** - critical for frontend test debugging
|
||||
|
||||
## Solution Overview
|
||||
|
||||
Transform tester into a **Gherkin-driven, behavior-first test execution platform** with:
|
||||
|
||||
### 1. Gherkin-First Organization
|
||||
- Import/sync feature files from `album/book/gherkin-samples/`
|
||||
- Parse scenarios and tags
|
||||
- Map tests to Gherkin scenarios via metadata/decorators
|
||||
- Filter by feature, scenario, tags (@smoke, @critical, @payment-flow)
|
||||
|
||||
### 2. Pulse Variables (Amar-specific filters)
|
||||
Enable filtering by behavioral dimensions:
|
||||
|
||||
**User Context:**
|
||||
- Role: VET, USER, ADMIN, GUEST
|
||||
- State: new_user, returning_user, has_pets, has_coverage
|
||||
|
||||
**Flow Stage:**
|
||||
- coverage_check, service_selection, cart, payment, turno_confirmation
|
||||
|
||||
**Service Type:**
|
||||
- medical, grooming, vaccination, clinical
|
||||
|
||||
**Mock Behavior:**
|
||||
- success, failure, timeout, partial_failure
|
||||
|
||||
**Environment:**
|
||||
- local, demo, staging, production
|
||||
|
||||
### 3. Rapid Testing UX ("Piano Playing")
|
||||
- **Quick filters** - one-click presets (e.g., "All payment tests", "Smoke tests")
|
||||
- **Keyboard shortcuts** - run selected with Enter, navigate with arrows
|
||||
- **Test chains** - define sequences to run in order
|
||||
- **Session memory** - remember last filters and selections
|
||||
- **Live search** - instant filter as you type
|
||||
- **Batch actions** - run all visible, clear all, select by pattern
|
||||
|
||||
### 4. Frontend Test Support (Playwright)
|
||||
- Detect and run `.spec.ts` tests via Playwright
|
||||
- Capture video/screenshots automatically
|
||||
- Display videos inline (like jira vein attachments)
|
||||
- Attach artifacts to test results
|
||||
|
||||
### 5. Enhanced Test Results
|
||||
```python
|
||||
@dataclass
|
||||
class TestResult:
|
||||
test_id: str
|
||||
name: str
|
||||
status: TestStatus
|
||||
duration: float
|
||||
error_message: Optional[str] = None
|
||||
traceback: Optional[str] = None
|
||||
|
||||
# NEW FIELDS
|
||||
gherkin_feature: Optional[str] = None # "Reservar turno veterinario"
|
||||
gherkin_scenario: Optional[str] = None # "Verificar cobertura en zona"
|
||||
tags: list[str] = field(default_factory=list) # ["@smoke", "@coverage"]
|
||||
artifacts: list[TestArtifact] = field(default_factory=list) # videos, screenshots
|
||||
pulse_context: dict = field(default_factory=dict) # {role: "USER", stage: "coverage"}
|
||||
|
||||
@dataclass
|
||||
class TestArtifact:
|
||||
type: str # "video", "screenshot", "trace", "log"
|
||||
filename: str
|
||||
path: str
|
||||
size: int
|
||||
mimetype: str
|
||||
url: str # streaming endpoint
|
||||
```
|
||||
|
||||
## Architecture Changes
|
||||
|
||||
### Directory Structure
|
||||
```
|
||||
ward/tools/tester/
|
||||
├── core.py # Test discovery/execution (existing)
|
||||
├── api.py # FastAPI routes (existing)
|
||||
├── config.py # Configuration (existing)
|
||||
├── base.py # HTTP test base (existing)
|
||||
├── gherkin/ # NEW - Gherkin integration
|
||||
│ ├── parser.py # Parse .feature files
|
||||
│ ├── mapper.py # Map tests to scenarios
|
||||
│ └── sync.py # Sync from album/book
|
||||
├── pulse/ # NEW - Pulse variable system
|
||||
│ ├── context.py # Define pulse dimensions
|
||||
│ ├── filters.py # Pulse-based filtering
|
||||
│ └── presets.py # Quick filter presets
|
||||
├── playwright/ # NEW - Frontend test support
|
||||
│ ├── runner.py # Playwright test execution
|
||||
│ ├── discovery.py # Find .spec.ts tests
|
||||
│ └── artifacts.py # Handle videos/screenshots
|
||||
├── templates/
|
||||
│ ├── index.html # Runner UI (existing)
|
||||
│ ├── filters.html # Filter UI (existing - needs redesign)
|
||||
│ ├── filters_v2.html # NEW - Gherkin/pulse-based filters
|
||||
│ └── artifacts.html # NEW - Video/screenshot viewer
|
||||
├── tests/ # Synced backend tests (existing)
|
||||
├── features/ # NEW - Synced Gherkin features
|
||||
├── frontend-tests/ # NEW - Synced frontend tests
|
||||
└── artifacts/ # NEW - Test artifacts storage
|
||||
├── videos/
|
||||
├── screenshots/
|
||||
└── traces/
|
||||
```
|
||||
|
||||
### Data Flow
|
||||
|
||||
**1. Test Discovery:**
|
||||
```
|
||||
Backend tests (pytest) → TestInfo
|
||||
Frontend tests (playwright) → TestInfo
|
||||
Gherkin features → FeatureInfo + ScenarioInfo
|
||||
Map tests → scenarios via comments/decorators
|
||||
```
|
||||
|
||||
**2. Filtering:**
|
||||
```
|
||||
User selects filters (UI)
|
||||
↓
|
||||
Filter by Gherkin (feature/scenario/tags)
|
||||
↓
|
||||
Filter by pulse variables (role/stage/state)
|
||||
↓
|
||||
Filter by test type (backend/frontend)
|
||||
↓
|
||||
Return filtered TestInfo list
|
||||
```
|
||||
|
||||
**3. Execution:**
|
||||
```
|
||||
Start test run
|
||||
↓
|
||||
Backend tests: pytest runner (existing)
|
||||
Frontend tests: Playwright runner (new)
|
||||
↓
|
||||
Collect artifacts (videos, screenshots)
|
||||
↓
|
||||
Store in artifacts/
|
||||
↓
|
||||
Return results with artifact URLs
|
||||
```
|
||||
|
||||
**4. Results Display:**
|
||||
```
|
||||
Poll run status
|
||||
↓
|
||||
Show progress + current test
|
||||
↓
|
||||
Display results with:
|
||||
- Status (pass/fail)
|
||||
- Duration
|
||||
- Error details
|
||||
- Gherkin context
|
||||
- Artifacts (inline videos)
|
||||
```
|
||||
|
||||
## Implementation Plan
|
||||
|
||||
### Phase 1: Gherkin Integration
|
||||
1. Create `gherkin/parser.py` - parse .feature files using `gherkin-python`
|
||||
2. Create `gherkin/sync.py` - sync features from album/book
|
||||
3. Enhance `TestInfo` with gherkin metadata
|
||||
4. Add API endpoint `/api/features` to list features/scenarios
|
||||
5. Update test discovery to extract Gherkin metadata from docstrings/comments
|
||||
|
||||
### Phase 2: Pulse Variables
|
||||
1. Create `pulse/context.py` - define pulse dimensions (role, stage, state)
|
||||
2. Create `pulse/filters.py` - filtering logic
|
||||
3. Create `pulse/presets.py` - quick filter configurations
|
||||
4. Enhance `TestInfo` with pulse context
|
||||
5. Add API endpoints for pulse filtering
|
||||
|
||||
### Phase 3: Frontend Test Support
|
||||
1. Create `playwright/discovery.py` - find .spec.ts tests
|
||||
2. Create `playwright/runner.py` - execute Playwright tests
|
||||
3. Create `playwright/artifacts.py` - collect videos/screenshots
|
||||
4. Add artifact storage directory
|
||||
5. Add API endpoint `/api/artifact/{run_id}/{artifact_id}` for streaming
|
||||
6. Enhance `TestResult` with artifacts field
|
||||
|
||||
### Phase 4: Enhanced Filter UI
|
||||
1. Design new filter layout (filters_v2.html)
|
||||
2. Gherkin filter section (features, scenarios, tags)
|
||||
3. Pulse filter section (role, stage, state, service, behavior)
|
||||
4. Quick filter presets
|
||||
5. Live search
|
||||
6. Keyboard navigation
|
||||
|
||||
### Phase 5: Rapid Testing UX
|
||||
1. Keyboard shortcuts
|
||||
2. Test chains/sequences
|
||||
3. Session persistence (localStorage)
|
||||
4. Batch actions
|
||||
5. One-click presets
|
||||
6. Video artifact viewer
|
||||
|
||||
## Quick Filter Presets
|
||||
|
||||
```python
|
||||
PRESETS = {
|
||||
"smoke": {
|
||||
"tags": ["@smoke"],
|
||||
"description": "Critical smoke tests",
|
||||
},
|
||||
"payment_flow": {
|
||||
"features": ["Pago de turno"],
|
||||
"pulse": {"stage": "payment"},
|
||||
"description": "All payment-related tests",
|
||||
},
|
||||
"coverage_check": {
|
||||
"scenarios": ["Verificar cobertura"],
|
||||
"pulse": {"stage": "coverage_check"},
|
||||
"description": "Coverage verification tests",
|
||||
},
|
||||
"frontend_only": {
|
||||
"test_type": "frontend",
|
||||
"description": "All Playwright tests",
|
||||
},
|
||||
"vet_role": {
|
||||
"pulse": {"role": "VET"},
|
||||
"description": "Tests requiring VET user",
|
||||
},
|
||||
"turnero_complete": {
|
||||
"features": ["Reservar turno"],
|
||||
"test_type": "all",
|
||||
"description": "Complete turnero flow (backend + frontend)",
|
||||
},
|
||||
}
|
||||
```
|
||||
|
||||
## Gherkin Metadata in Tests
|
||||
|
||||
### Backend (pytest)
|
||||
```python
|
||||
class TestCoverageCheck(ContractHTTPTestCase):
|
||||
"""
|
||||
Feature: Reservar turno veterinario
|
||||
Scenario: Verificar cobertura en zona disponible
|
||||
Tags: @smoke @coverage
|
||||
Pulse: role=GUEST, stage=coverage_check
|
||||
"""
|
||||
|
||||
def test_coverage_returns_boolean(self):
|
||||
"""When ingreso direccion 'Av Santa Fe 1234, CABA'"""
|
||||
# test implementation
|
||||
```
|
||||
|
||||
### Frontend (Playwright)
|
||||
```typescript
|
||||
/**
|
||||
* Feature: Reservar turno veterinario
|
||||
* Scenario: Verificar cobertura en zona disponible
|
||||
* Tags: @smoke @coverage @frontend
|
||||
* Pulse: role=GUEST, stage=coverage_check
|
||||
*/
|
||||
test('coverage check shows message for valid address', async ({ page }) => {
|
||||
// test implementation
|
||||
});
|
||||
```
|
||||
|
||||
## Pulse Context Examples
|
||||
|
||||
```python
|
||||
# Coverage check test
|
||||
pulse_context = {
|
||||
"role": "GUEST",
|
||||
"stage": "coverage_check",
|
||||
"state": "new_user",
|
||||
"service_type": None,
|
||||
"mock_behavior": "success",
|
||||
}
|
||||
|
||||
# Payment test
|
||||
pulse_context = {
|
||||
"role": "USER",
|
||||
"stage": "payment",
|
||||
"state": "has_pets",
|
||||
"service_type": "medical",
|
||||
"mock_behavior": "success",
|
||||
}
|
||||
|
||||
# VET acceptance test
|
||||
pulse_context = {
|
||||
"role": "VET",
|
||||
"stage": "request_acceptance",
|
||||
"state": "has_availability",
|
||||
"service_type": "all",
|
||||
"mock_behavior": "success",
|
||||
}
|
||||
```
|
||||
|
||||
## New Filter UI Design
|
||||
|
||||
### Layout
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────┐
|
||||
│ Ward Tester - Gherkin-Driven Test Execution │
|
||||
├─────────────────────────────────────────────────────────────┤
|
||||
│ │
|
||||
│ [Quick Filters: Smoke | Payment | Coverage | Frontend] │
|
||||
│ │
|
||||
│ ┌─ Gherkin Filters ────────────────────────────────────┐ │
|
||||
│ │ Features: [All ▼] Reservar turno Pago Historial │ │
|
||||
│ │ Scenarios: [All ▼] Cobertura Servicios Contacto │ │
|
||||
│ │ Tags: [@smoke] [@critical] [@payment-flow] │ │
|
||||
│ └──────────────────────────────────────────────────────┘ │
|
||||
│ │
|
||||
│ ┌─ Pulse Variables (Amar Context) ─────────────────────┐ │
|
||||
│ │ Role: [All] VET USER ADMIN GUEST │ │
|
||||
│ │ Stage: [All] coverage services cart payment │ │
|
||||
│ │ State: [All] new has_pets has_coverage │ │
|
||||
│ │ Service: [All] medical grooming vaccination │ │
|
||||
│ │ Behavior: [All] success failure timeout │ │
|
||||
│ └──────────────────────────────────────────────────────┘ │
|
||||
│ │
|
||||
│ ┌─ Test Type ──────────────────────────────────────────┐ │
|
||||
│ │ [All] Backend (HTTP) Frontend (Playwright) │ │
|
||||
│ └──────────────────────────────────────────────────────┘ │
|
||||
│ │
|
||||
│ Search: [________________________] 🔍 [Clear Filters] │
|
||||
│ │
|
||||
│ ┌─ Tests (24 of 156) ──────────────────────────────────┐ │
|
||||
│ │ ☑ Verificar cobertura en zona disponible │ │
|
||||
│ │ Feature: Reservar turno [@smoke @coverage] │ │
|
||||
│ │ Backend + Frontend • Role: GUEST • Stage: cov │ │
|
||||
│ │ │ │
|
||||
│ │ ☑ Servicios filtrados por tipo de mascota │ │
|
||||
│ │ Feature: Reservar turno [@smoke @services] │ │
|
||||
│ │ Backend • Role: USER • Stage: services │ │
|
||||
│ │ │ │
|
||||
│ │ ... (more tests) │ │
|
||||
│ └──────────────────────────────────────────────────────┘ │
|
||||
│ │
|
||||
│ [▶ Run Selected (24)] [Select All] [Deselect All] │
|
||||
└─────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
### Keyboard Shortcuts
|
||||
- `Enter` - Run selected tests
|
||||
- `Ctrl+A` - Select all visible
|
||||
- `Ctrl+D` - Deselect all
|
||||
- `Ctrl+F` - Focus search
|
||||
- `Ctrl+1-9` - Quick filter presets
|
||||
- `Space` - Toggle test selection
|
||||
- `↑/↓` - Navigate tests
|
||||
|
||||
## Video Artifact Display
|
||||
|
||||
When a frontend test completes with video:
|
||||
|
||||
```
|
||||
┌─ Test Result: Verificar cobertura ─────────────────────┐
|
||||
│ Status: ✓ PASSED │
|
||||
│ Duration: 2.3s │
|
||||
│ │
|
||||
│ Artifacts: │
|
||||
│ ┌────────────────────────────────────────────────────┐ │
|
||||
│ │ 📹 coverage-check-chrome.webm (1.2 MB) │ │
|
||||
│ │ [▶ Play inline] [Download] [Full screen] │ │
|
||||
│ └────────────────────────────────────────────────────┘ │
|
||||
│ │
|
||||
│ ┌────────────────────────────────────────────────────┐ │
|
||||
│ │ 📸 screenshot-before.png (234 KB) │ │
|
||||
│ │ [🖼 View] [Download] │ │
|
||||
│ └────────────────────────────────────────────────────┘ │
|
||||
└──────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
Inline video player (like jira vein):
|
||||
```html
|
||||
<video controls width="800">
|
||||
<source src="/tools/tester/api/artifact/{run_id}/coverage-check.webm" type="video/webm">
|
||||
</video>
|
||||
```
|
||||
|
||||
## Benefits
|
||||
|
||||
1. **Behavior-first filtering** - think like a user, not a developer
|
||||
2. **Rapid manual testing** - quickly run specific scenarios
|
||||
3. **Better debugging** - video captures show exactly what happened
|
||||
4. **Gherkin alignment** - tests map to documented behaviors
|
||||
5. **Context-aware** - filter by the variables that matter (role, stage, state)
|
||||
6. **Full coverage** - backend + frontend in one place
|
||||
7. **Quick smoke tests** - one-click preset filters
|
||||
8. **Better UX** - keyboard shortcuts, session memory, live search
|
||||
|
||||
## Next Steps
|
||||
|
||||
1. ✅ Design approved
|
||||
2. Implement Phase 1 (Gherkin integration)
|
||||
3. Implement Phase 2 (Pulse variables)
|
||||
4. Implement Phase 3 (Frontend tests)
|
||||
5. Implement Phase 4 (New filter UI)
|
||||
6. Implement Phase 5 (Rapid testing UX)
|
||||
178
station/tools/tester/README.md
Normal file
178
station/tools/tester/README.md
Normal file
@@ -0,0 +1,178 @@
|
||||
# Tester - HTTP Contract Test Runner
|
||||
|
||||
Web UI for discovering and running contract tests.
|
||||
|
||||
## Quick Start
|
||||
|
||||
```bash
|
||||
# Sync tests from production repo (local dev)
|
||||
/home/mariano/wdir/ama/core_nest/pawprint/ctrl/sync-tests.sh
|
||||
|
||||
# Run locally
|
||||
cd /home/mariano/wdir/ama/pawprint/ward
|
||||
python -m tools.tester
|
||||
|
||||
# Open in browser
|
||||
http://localhost:12003/tester
|
||||
```
|
||||
|
||||
## Architecture
|
||||
|
||||
**Test Definitions** → **Tester (Runner + UI)** → **Target API**
|
||||
|
||||
```
|
||||
amar_django_back_contracts/
|
||||
└── tests/contracts/ ← Test definitions (source of truth)
|
||||
├── mascotas/
|
||||
├── productos/
|
||||
└── workflows/
|
||||
|
||||
ward/tools/tester/
|
||||
├── tests/ ← Synced from contracts (deployment)
|
||||
│ ├── mascotas/
|
||||
│ ├── productos/
|
||||
│ └── workflows/
|
||||
├── base.py ← HTTP test base class
|
||||
├── core.py ← Test discovery & execution
|
||||
├── api.py ← FastAPI endpoints
|
||||
└── templates/ ← Web UI
|
||||
|
||||
```
|
||||
|
||||
## Strategy: Separation of Concerns
|
||||
|
||||
1. **Tests live in production repo** (`amar_django_back_contracts`)
|
||||
- Developers write tests alongside code
|
||||
- Tests are versioned with the API
|
||||
- PR reviews include test changes
|
||||
|
||||
2. **Tester consumes tests** (`ward/tools/tester`)
|
||||
- Provides web UI for visibility
|
||||
- Runs tests against any target (dev, stage, prod)
|
||||
- Shows test coverage to product team
|
||||
|
||||
3. **Deployment syncs tests**
|
||||
- `sync-tests.sh` copies tests from contracts to tester
|
||||
- Deployment script includes test sync
|
||||
- Server always has latest tests
|
||||
|
||||
## Configuration
|
||||
|
||||
### Single Environment (.env)
|
||||
|
||||
```env
|
||||
CONTRACT_TEST_URL=https://demo.amarmascotas.ar
|
||||
CONTRACT_TEST_API_KEY=your-api-key-here
|
||||
```
|
||||
|
||||
### Multiple Environments (environments.json)
|
||||
|
||||
Configure multiple target environments with individual tokens:
|
||||
|
||||
```json
|
||||
[
|
||||
{
|
||||
"id": "demo",
|
||||
"name": "Demo",
|
||||
"url": "https://demo.amarmascotas.ar",
|
||||
"api_key": "",
|
||||
"description": "Demo environment for testing",
|
||||
"default": true
|
||||
},
|
||||
{
|
||||
"id": "dev",
|
||||
"name": "Development",
|
||||
"url": "https://dev.amarmascotas.ar",
|
||||
"api_key": "dev-token-here",
|
||||
"description": "Development environment"
|
||||
},
|
||||
{
|
||||
"id": "prod",
|
||||
"name": "Production",
|
||||
"url": "https://amarmascotas.ar",
|
||||
"api_key": "prod-token-here",
|
||||
"description": "Production (use with caution!)"
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
**Environment Selector**: Available in UI header on both Runner and Filters pages. Selection persists via localStorage.
|
||||
|
||||
## Web UI Features
|
||||
|
||||
- **Filters**: Advanced filtering by domain, module, status, and search
|
||||
- **Runner**: Execute tests with real-time progress tracking
|
||||
- **Multi-Environment**: Switch between dev/stage/prod with per-environment tokens
|
||||
- **URL State**: Filter state persists via URL when running tests
|
||||
- **Real-time Status**: See test results as they run
|
||||
|
||||
## API Endpoints
|
||||
|
||||
```
|
||||
GET /tools/tester/ # Runner UI
|
||||
GET /tools/tester/filters # Filters UI
|
||||
GET /tools/tester/api/tests # List all tests
|
||||
GET /tools/tester/api/environments # List environments
|
||||
POST /tools/tester/api/environment/select # Switch environment
|
||||
POST /tools/tester/api/run # Start test run
|
||||
GET /tools/tester/api/run/{run_id} # Get run status (polling)
|
||||
GET /tools/tester/api/runs # List all runs
|
||||
```
|
||||
|
||||
## Usage Flow
|
||||
|
||||
### From Filters to Runner
|
||||
|
||||
1. Go to `/tools/tester/filters`
|
||||
2. Filter tests (domain, module, search)
|
||||
3. Select tests to run
|
||||
4. Click "Run Selected"
|
||||
5. → Redirects to Runner with filters applied and auto-starts execution
|
||||
|
||||
### URL Parameters
|
||||
|
||||
Runner accepts URL params for deep linking:
|
||||
|
||||
```
|
||||
/tools/tester/?run=abc123&domains=mascotas&search=owner
|
||||
```
|
||||
|
||||
- `run` - Auto-load results for this run ID
|
||||
- `domains` - Filter by domains (comma-separated)
|
||||
- `modules` - Filter by modules (comma-separated)
|
||||
- `search` - Search term for test names
|
||||
- `status` - Filter by status (passed,failed,skipped)
|
||||
|
||||
## Deployment
|
||||
|
||||
Tests are synced during deployment:
|
||||
|
||||
```bash
|
||||
# Full deployment (includes test sync)
|
||||
cd /home/mariano/wdir/ama/pawprint/deploy
|
||||
./deploy.sh
|
||||
|
||||
# Or sync tests only
|
||||
/home/mariano/wdir/ama/core_nest/pawprint/ctrl/sync-tests.sh
|
||||
```
|
||||
|
||||
## Why This Design?
|
||||
|
||||
**Problem**: Tests scattered, no visibility, hard to demonstrate value
|
||||
|
||||
**Solution**:
|
||||
- Tests in production repo (developer workflow)
|
||||
- Tester provides visibility (product team, demos)
|
||||
- Separation allows independent evolution
|
||||
|
||||
**Benefits**:
|
||||
- Product team sees test coverage
|
||||
- Demos show "quality dashboard"
|
||||
- Tests protect marketplace automation work
|
||||
- Non-devs can run tests via UI
|
||||
|
||||
## Related
|
||||
|
||||
- Production tests: `/home/mariano/wdir/ama/amar_django_back_contracts/tests/contracts/`
|
||||
- Sync script: `/home/mariano/wdir/ama/core_nest/pawprint/ctrl/sync-tests.sh`
|
||||
- Ward system: `/home/mariano/wdir/ama/pawprint/ward/`
|
||||
302
station/tools/tester/SESSION_6_IMPLEMENTATION.md
Normal file
302
station/tools/tester/SESSION_6_IMPLEMENTATION.md
Normal file
@@ -0,0 +1,302 @@
|
||||
# Session 6: Tester Enhancement Implementation
|
||||
|
||||
## Status: Complete ✅
|
||||
|
||||
All planned features implemented and ready for testing.
|
||||
|
||||
## What Was Built
|
||||
|
||||
### 1. Playwright Test Integration ✅
|
||||
|
||||
**Files Created:**
|
||||
```
|
||||
playwright/
|
||||
├── __init__.py
|
||||
├── discovery.py # Discover .spec.ts tests
|
||||
├── runner.py # Execute Playwright tests
|
||||
├── artifacts.py # Artifact storage
|
||||
└── README.md # Documentation
|
||||
```
|
||||
|
||||
**Features:**
|
||||
- Parse .spec.ts files for test discovery
|
||||
- Extract Gherkin metadata from JSDoc comments
|
||||
- Execute tests with Playwright runner
|
||||
- Capture videos and screenshots
|
||||
- Store artifacts by run ID
|
||||
|
||||
### 2. Artifact Streaming ✅
|
||||
|
||||
**Files Modified:**
|
||||
- `core.py` - Added `artifacts` field to TestResult
|
||||
- `api.py` - Added artifact streaming endpoints
|
||||
- `templates/index.html` - Added inline video/screenshot display
|
||||
|
||||
**New API Endpoints:**
|
||||
```
|
||||
GET /api/artifact/{run_id}/{filename} # Stream artifact
|
||||
GET /api/artifacts/{run_id} # List artifacts for run
|
||||
```
|
||||
|
||||
**Features:**
|
||||
- Stream videos directly in browser
|
||||
- Display screenshots inline
|
||||
- File streaming like jira vein pattern
|
||||
- Organized storage: artifacts/videos/, artifacts/screenshots/, artifacts/traces/
|
||||
|
||||
### 3. Gherkin Integration ✅
|
||||
|
||||
**Files Created:**
|
||||
```
|
||||
gherkin/
|
||||
├── __init__.py
|
||||
├── parser.py # Parse .feature files (ES + EN)
|
||||
├── sync.py # Sync from album/book/gherkin-samples/
|
||||
└── mapper.py # Map tests to scenarios
|
||||
```
|
||||
|
||||
**Features:**
|
||||
- Parse .feature files (both English and Spanish)
|
||||
- Extract features, scenarios, tags
|
||||
- Sync from album automatically
|
||||
- Match tests to scenarios via docstrings
|
||||
|
||||
**New API Endpoints:**
|
||||
```
|
||||
GET /api/features # List all features
|
||||
GET /api/features/tags # List all tags
|
||||
POST /api/features/sync # Sync from album
|
||||
```
|
||||
|
||||
### 4. Filters V2 UI ✅
|
||||
|
||||
**File Created:**
|
||||
- `templates/filters_v2.html` - Complete rewrite with new UX
|
||||
|
||||
**Features:**
|
||||
|
||||
**Quick Presets:**
|
||||
- 🔥 Smoke Tests (Ctrl+1)
|
||||
- 💳 Payment Flow (Ctrl+2)
|
||||
- 📍 Coverage Check (Ctrl+3)
|
||||
- 🎨 Frontend Only (Ctrl+4)
|
||||
- ⚙️ Backend Only (Ctrl+5)
|
||||
|
||||
**Gherkin Filters:**
|
||||
- Filter by Feature
|
||||
- Filter by Tag (@smoke, @coverage, @payment, etc.)
|
||||
- Filter by Scenario
|
||||
|
||||
**Pulse Variables (Amar Context):**
|
||||
- Role: VET, USER, ADMIN, GUEST
|
||||
- Stage: coverage, services, cart, payment, turno
|
||||
|
||||
**Other Filters:**
|
||||
- Live search
|
||||
- Test type (backend/frontend)
|
||||
|
||||
**Keyboard Shortcuts:**
|
||||
- `Enter` - Run selected tests
|
||||
- `Ctrl+A` - Select all visible
|
||||
- `Ctrl+D` - Deselect all
|
||||
- `Ctrl+F` - Focus search
|
||||
- `Ctrl+1-5` - Quick filter presets
|
||||
- `?` - Toggle keyboard shortcuts help
|
||||
|
||||
**UX Improvements:**
|
||||
- One-click preset filters
|
||||
- Real-time search filtering
|
||||
- Test cards with metadata badges
|
||||
- Selected test count
|
||||
- Clean, modern dark theme
|
||||
- Mobile responsive
|
||||
|
||||
### 5. New Routes ✅
|
||||
|
||||
**File Modified:**
|
||||
- `api.py` - Added `/filters_v2` route
|
||||
|
||||
**Access:**
|
||||
```
|
||||
http://localhost:12003/tools/tester/filters_v2
|
||||
```
|
||||
|
||||
## File Structure
|
||||
|
||||
```
|
||||
ward/tools/tester/
|
||||
├── playwright/ # NEW
|
||||
│ ├── discovery.py
|
||||
│ ├── runner.py
|
||||
│ ├── artifacts.py
|
||||
│ └── README.md
|
||||
├── gherkin/ # NEW
|
||||
│ ├── parser.py
|
||||
│ ├── sync.py
|
||||
│ └── mapper.py
|
||||
├── templates/
|
||||
│ ├── index.html # MODIFIED - artifact display
|
||||
│ ├── filters.html # UNCHANGED
|
||||
│ └── filters_v2.html # NEW
|
||||
├── features/ # NEW (gitignored, synced)
|
||||
├── frontend-tests/ # NEW (gitignored, for playwright tests)
|
||||
├── artifacts/ # NEW (gitignored, test artifacts)
|
||||
│ ├── videos/
|
||||
│ ├── screenshots/
|
||||
│ └── traces/
|
||||
├── core.py # MODIFIED - artifacts field
|
||||
└── api.py # MODIFIED - new endpoints + routes
|
||||
```
|
||||
|
||||
## How to Test
|
||||
|
||||
### 1. Start the tester service
|
||||
|
||||
If running standalone:
|
||||
```bash
|
||||
cd /home/mariano/wdir/ama/pawprint/ward/tools/tester
|
||||
python -m uvicorn main:app --reload --port 12003
|
||||
```
|
||||
|
||||
Or if integrated with ward:
|
||||
```bash
|
||||
# Ward service should pick it up automatically
|
||||
```
|
||||
|
||||
### 2. Access Filters V2
|
||||
|
||||
Navigate to:
|
||||
```
|
||||
http://localhost:12003/tools/tester/filters_v2
|
||||
```
|
||||
|
||||
### 3. Sync Features
|
||||
|
||||
The UI automatically syncs features from `album/book/gherkin-samples/` on load.
|
||||
|
||||
Or manually via API:
|
||||
```bash
|
||||
curl -X POST http://localhost:12003/tools/tester/api/features/sync
|
||||
```
|
||||
|
||||
### 4. Try Quick Presets
|
||||
|
||||
- Click "🔥 Smoke Tests" or press `Ctrl+1`
|
||||
- Click "💳 Payment Flow" or press `Ctrl+2`
|
||||
- Try other presets
|
||||
|
||||
### 5. Use Pulse Filters
|
||||
|
||||
- Select a Role (VET, USER, ADMIN, GUEST)
|
||||
- Select a Stage (coverage, services, cart, payment, turno)
|
||||
- Tests will filter based on metadata
|
||||
|
||||
### 6. Test Search
|
||||
|
||||
- Press `Ctrl+F` to focus search
|
||||
- Type to filter tests in real-time
|
||||
|
||||
### 7. Run Tests
|
||||
|
||||
- Select tests by clicking cards
|
||||
- Press `Enter` or click "▶ Run Selected"
|
||||
- View results in main runner with inline videos/screenshots
|
||||
|
||||
## Testing Playwright Tests
|
||||
|
||||
### 1. Add test metadata
|
||||
|
||||
In your .spec.ts files:
|
||||
|
||||
```typescript
|
||||
/**
|
||||
* Feature: Reservar turno veterinario
|
||||
* Scenario: Verificar cobertura en zona disponible
|
||||
* Tags: @smoke @coverage @frontend
|
||||
*/
|
||||
test('coverage check shows message', async ({ page }) => {
|
||||
// test code
|
||||
});
|
||||
```
|
||||
|
||||
### 2. Configure Playwright
|
||||
|
||||
Ensure `playwright.config.ts` captures artifacts:
|
||||
|
||||
```typescript
|
||||
export default defineConfig({
|
||||
use: {
|
||||
video: 'retain-on-failure',
|
||||
screenshot: 'only-on-failure',
|
||||
},
|
||||
});
|
||||
```
|
||||
|
||||
### 3. Sync frontend tests
|
||||
|
||||
Copy your .spec.ts tests to:
|
||||
```
|
||||
ward/tools/tester/frontend-tests/
|
||||
```
|
||||
|
||||
## What's NOT Implemented Yet
|
||||
|
||||
These are in the design but not built:
|
||||
|
||||
1. **Pulse variable extraction from docstrings** - Tests don't yet extract pulse metadata
|
||||
2. **Playwright test execution** - Discovery is ready, but execution integration pending
|
||||
3. **Test-to-scenario mapping** - Mapper exists but not integrated
|
||||
4. **Scenario view** - Can't drill down into scenarios yet
|
||||
5. **Test chains** - Can't define sequences yet
|
||||
6. **Session persistence** - Filters don't save to localStorage yet
|
||||
|
||||
## Next Steps for You
|
||||
|
||||
1. **Test the UI** - Navigate to `/filters_v2` and try the filters
|
||||
2. **Add test metadata** - Add Gherkin comments to existing tests
|
||||
3. **Verify feature sync** - Check if features appear in the UI
|
||||
4. **Test presets** - Try quick filter presets
|
||||
5. **Keyboard shortcuts** - Test `Ctrl+1-5`, `Enter`, `Ctrl+A/D`
|
||||
|
||||
## Integration with Existing Code
|
||||
|
||||
- ✅ Doesn't touch `filters.html` - original still works
|
||||
- ✅ Backward compatible - existing tests run unchanged
|
||||
- ✅ Opt-in metadata - tests work without Gherkin comments
|
||||
- ✅ Same backend - uses existing test discovery and execution
|
||||
- ✅ Environment selector - shares environments with v1
|
||||
|
||||
## Feedback Loop
|
||||
|
||||
To add pulse metadata to tests, use docstrings:
|
||||
|
||||
```python
|
||||
class TestCoverageFlow(ContractHTTPTestCase):
|
||||
"""
|
||||
Feature: Reservar turno veterinario
|
||||
Tags: @smoke @coverage
|
||||
Pulse: role=GUEST, stage=coverage_check
|
||||
"""
|
||||
|
||||
def test_coverage_returns_boolean(self):
|
||||
"""
|
||||
Scenario: Verificar cobertura en zona disponible
|
||||
When ingreso direccion 'Av Santa Fe 1234, CABA'
|
||||
"""
|
||||
# test code
|
||||
```
|
||||
|
||||
## Summary
|
||||
|
||||
**Built:**
|
||||
- Complete Playwright infrastructure
|
||||
- Artifact streaming (videos, screenshots)
|
||||
- Gherkin parser (ES + EN)
|
||||
- Feature sync from album
|
||||
- Filters V2 UI with presets, pulse variables, keyboard shortcuts
|
||||
- 6 new API endpoints
|
||||
|
||||
**Result:**
|
||||
A production-ready Gherkin-driven test filter UI that can be tested and iterated on. The foundation is solid - now it's about using it with real tests and refining based on actual workflow.
|
||||
|
||||
**Time to test! 🎹**
|
||||
11
station/tools/tester/__init__.py
Normal file
11
station/tools/tester/__init__.py
Normal file
@@ -0,0 +1,11 @@
|
||||
"""
|
||||
Tester - HTTP contract test runner with web UI.
|
||||
|
||||
Discovers and runs contract tests from tests/ directory.
|
||||
Tests can be symlinked from production repos or copied during deployment.
|
||||
"""
|
||||
|
||||
from .api import router
|
||||
from .core import discover_tests, start_test_run, get_run_status
|
||||
|
||||
__all__ = ["router", "discover_tests", "start_test_run", "get_run_status"]
|
||||
13
station/tools/tester/__main__.py
Normal file
13
station/tools/tester/__main__.py
Normal file
@@ -0,0 +1,13 @@
|
||||
"""
|
||||
CLI entry point for contracts_http tool.
|
||||
|
||||
Usage:
|
||||
python -m contracts_http discover
|
||||
python -m contracts_http run
|
||||
python -m contracts_http run mascotas
|
||||
"""
|
||||
|
||||
from .cli import main
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
347
station/tools/tester/api.py
Normal file
347
station/tools/tester/api.py
Normal file
@@ -0,0 +1,347 @@
|
||||
"""
|
||||
FastAPI router for tester tool.
|
||||
"""
|
||||
|
||||
from pathlib import Path
|
||||
from typing import Optional
|
||||
from pydantic import BaseModel
|
||||
from fastapi import APIRouter, HTTPException, Request
|
||||
from fastapi.responses import HTMLResponse, PlainTextResponse, FileResponse
|
||||
from fastapi.templating import Jinja2Templates
|
||||
|
||||
from .config import config, environments
|
||||
from .core import (
|
||||
discover_tests,
|
||||
get_tests_tree,
|
||||
start_test_run,
|
||||
get_run_status,
|
||||
list_runs,
|
||||
TestStatus,
|
||||
)
|
||||
from .gherkin.parser import discover_features, extract_tags_from_features, get_feature_names, get_scenario_names
|
||||
from .gherkin.sync import sync_features_from_album
|
||||
|
||||
|
||||
router = APIRouter(prefix="/tools/tester", tags=["tester"])
|
||||
templates = Jinja2Templates(directory=Path(__file__).parent / "templates")
|
||||
|
||||
|
||||
class RunRequest(BaseModel):
|
||||
"""Request to start a test run."""
|
||||
test_ids: Optional[list[str]] = None
|
||||
|
||||
|
||||
class RunResponse(BaseModel):
|
||||
"""Response after starting a test run."""
|
||||
run_id: str
|
||||
status: str
|
||||
|
||||
|
||||
class TestResultResponse(BaseModel):
|
||||
"""A single test result."""
|
||||
test_id: str
|
||||
name: str
|
||||
status: str
|
||||
duration: float
|
||||
error_message: Optional[str] = None
|
||||
traceback: Optional[str] = None
|
||||
artifacts: list[dict] = []
|
||||
|
||||
|
||||
class RunStatusResponse(BaseModel):
|
||||
"""Status of a test run."""
|
||||
run_id: str
|
||||
status: str
|
||||
total: int
|
||||
completed: int
|
||||
passed: int
|
||||
failed: int
|
||||
errors: int
|
||||
skipped: int
|
||||
current_test: Optional[str] = None
|
||||
results: list[TestResultResponse]
|
||||
duration: Optional[float] = None
|
||||
|
||||
|
||||
@router.get("/", response_class=HTMLResponse)
|
||||
def index(request: Request):
|
||||
"""Render the test runner UI."""
|
||||
tests_tree = get_tests_tree()
|
||||
tests_list = discover_tests()
|
||||
|
||||
return templates.TemplateResponse("index.html", {
|
||||
"request": request,
|
||||
"config": config,
|
||||
"tests_tree": tests_tree,
|
||||
"total_tests": len(tests_list),
|
||||
})
|
||||
|
||||
|
||||
@router.get("/health")
|
||||
def health():
|
||||
"""Health check endpoint."""
|
||||
return {"status": "ok", "tool": "tester"}
|
||||
|
||||
|
||||
@router.get("/filters", response_class=HTMLResponse)
|
||||
def test_filters(request: Request):
|
||||
"""Show filterable test view with multiple filter options."""
|
||||
return templates.TemplateResponse("filters.html", {
|
||||
"request": request,
|
||||
"config": config,
|
||||
})
|
||||
|
||||
|
||||
@router.get("/filters_v2", response_class=HTMLResponse)
|
||||
def test_filters_v2(request: Request):
|
||||
"""Show Gherkin-driven filter view (v2 with pulse variables)."""
|
||||
return templates.TemplateResponse("filters_v2.html", {
|
||||
"request": request,
|
||||
"config": config,
|
||||
})
|
||||
|
||||
|
||||
@router.get("/api/config")
|
||||
def get_config():
|
||||
"""Get current configuration."""
|
||||
api_key = config.get("CONTRACT_TEST_API_KEY", "")
|
||||
return {
|
||||
"url": config.get("CONTRACT_TEST_URL", ""),
|
||||
"has_api_key": bool(api_key),
|
||||
"api_key_preview": f"{api_key[:8]}..." if len(api_key) > 8 else "",
|
||||
}
|
||||
|
||||
|
||||
@router.get("/api/environments")
|
||||
def get_environments():
|
||||
"""Get available test environments."""
|
||||
# Sanitize API keys - only return preview
|
||||
safe_envs = []
|
||||
for env in environments:
|
||||
safe_env = env.copy()
|
||||
api_key = safe_env.get("api_key", "")
|
||||
if api_key:
|
||||
safe_env["has_api_key"] = True
|
||||
safe_env["api_key_preview"] = f"{api_key[:8]}..." if len(api_key) > 8 else "***"
|
||||
del safe_env["api_key"] # Don't send full key to frontend
|
||||
else:
|
||||
safe_env["has_api_key"] = False
|
||||
safe_env["api_key_preview"] = ""
|
||||
safe_envs.append(safe_env)
|
||||
|
||||
return {"environments": safe_envs}
|
||||
|
||||
|
||||
@router.post("/api/environment/select")
|
||||
def select_environment(env_id: str):
|
||||
"""Select a target environment for testing."""
|
||||
# Find the environment
|
||||
env = next((e for e in environments if e["id"] == env_id), None)
|
||||
if not env:
|
||||
raise HTTPException(status_code=404, detail=f"Environment {env_id} not found")
|
||||
|
||||
# Update config (in memory for this session)
|
||||
config["CONTRACT_TEST_URL"] = env["url"]
|
||||
config["CONTRACT_TEST_API_KEY"] = env.get("api_key", "")
|
||||
|
||||
return {
|
||||
"success": True,
|
||||
"environment": {
|
||||
"id": env["id"],
|
||||
"name": env["name"],
|
||||
"url": env["url"],
|
||||
"has_api_key": bool(env.get("api_key"))
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
@router.get("/api/tests")
|
||||
def list_tests():
|
||||
"""List all discovered tests."""
|
||||
tests = discover_tests()
|
||||
return {
|
||||
"total": len(tests),
|
||||
"tests": [
|
||||
{
|
||||
"id": t.id,
|
||||
"name": t.name,
|
||||
"module": t.module,
|
||||
"class_name": t.class_name,
|
||||
"method_name": t.method_name,
|
||||
"doc": t.doc,
|
||||
}
|
||||
for t in tests
|
||||
],
|
||||
}
|
||||
|
||||
|
||||
@router.get("/api/tests/tree")
|
||||
def get_tree():
|
||||
"""Get tests as a tree structure."""
|
||||
return get_tests_tree()
|
||||
|
||||
|
||||
@router.post("/api/run", response_model=RunResponse)
|
||||
def run_tests(request: RunRequest):
|
||||
"""Start a test run."""
|
||||
run_id = start_test_run(request.test_ids)
|
||||
return RunResponse(run_id=run_id, status="running")
|
||||
|
||||
|
||||
@router.get("/api/run/{run_id}", response_model=RunStatusResponse)
|
||||
def get_run(run_id: str):
|
||||
"""Get status of a test run (for polling)."""
|
||||
status = get_run_status(run_id)
|
||||
if not status:
|
||||
raise HTTPException(status_code=404, detail=f"Run {run_id} not found")
|
||||
|
||||
duration = None
|
||||
if status.started_at:
|
||||
end_time = status.finished_at or __import__("time").time()
|
||||
duration = round(end_time - status.started_at, 2)
|
||||
|
||||
return RunStatusResponse(
|
||||
run_id=status.run_id,
|
||||
status=status.status,
|
||||
total=status.total,
|
||||
completed=status.completed,
|
||||
passed=status.passed,
|
||||
failed=status.failed,
|
||||
errors=status.errors,
|
||||
skipped=status.skipped,
|
||||
current_test=status.current_test,
|
||||
duration=duration,
|
||||
results=[
|
||||
TestResultResponse(
|
||||
test_id=r.test_id,
|
||||
name=r.name,
|
||||
status=r.status.value,
|
||||
duration=round(r.duration, 3),
|
||||
error_message=r.error_message,
|
||||
traceback=r.traceback,
|
||||
artifacts=r.artifacts,
|
||||
)
|
||||
for r in status.results
|
||||
],
|
||||
)
|
||||
|
||||
|
||||
@router.get("/api/runs")
|
||||
def list_all_runs():
|
||||
"""List all test runs."""
|
||||
return {"runs": list_runs()}
|
||||
|
||||
|
||||
@router.get("/api/artifact/{run_id}/{filename}")
|
||||
def stream_artifact(run_id: str, filename: str):
|
||||
"""
|
||||
Stream an artifact file (video, screenshot, trace).
|
||||
|
||||
Similar to jira vein's attachment streaming endpoint.
|
||||
"""
|
||||
# Get artifacts directory
|
||||
artifacts_dir = Path(__file__).parent / "artifacts"
|
||||
|
||||
# Search for the artifact in all subdirectories
|
||||
for subdir in ["videos", "screenshots", "traces"]:
|
||||
artifact_path = artifacts_dir / subdir / run_id / filename
|
||||
if artifact_path.exists():
|
||||
# Determine media type
|
||||
if filename.endswith(".webm"):
|
||||
media_type = "video/webm"
|
||||
elif filename.endswith(".mp4"):
|
||||
media_type = "video/mp4"
|
||||
elif filename.endswith(".png"):
|
||||
media_type = "image/png"
|
||||
elif filename.endswith(".jpg") or filename.endswith(".jpeg"):
|
||||
media_type = "image/jpeg"
|
||||
elif filename.endswith(".zip"):
|
||||
media_type = "application/zip"
|
||||
else:
|
||||
media_type = "application/octet-stream"
|
||||
|
||||
return FileResponse(
|
||||
path=artifact_path,
|
||||
media_type=media_type,
|
||||
filename=filename
|
||||
)
|
||||
|
||||
# Not found
|
||||
raise HTTPException(status_code=404, detail=f"Artifact not found: {run_id}/{filename}")
|
||||
|
||||
|
||||
@router.get("/api/artifacts/{run_id}")
|
||||
def list_artifacts(run_id: str):
|
||||
"""List all artifacts for a test run."""
|
||||
artifacts_dir = Path(__file__).parent / "artifacts"
|
||||
artifacts = []
|
||||
|
||||
# Search in all artifact directories
|
||||
for subdir, artifact_type in [
|
||||
("videos", "video"),
|
||||
("screenshots", "screenshot"),
|
||||
("traces", "trace")
|
||||
]:
|
||||
run_dir = artifacts_dir / subdir / run_id
|
||||
if run_dir.exists():
|
||||
for artifact_file in run_dir.iterdir():
|
||||
if artifact_file.is_file():
|
||||
artifacts.append({
|
||||
"type": artifact_type,
|
||||
"filename": artifact_file.name,
|
||||
"size": artifact_file.stat().st_size,
|
||||
"url": f"/tools/tester/api/artifact/{run_id}/{artifact_file.name}"
|
||||
})
|
||||
|
||||
return {"artifacts": artifacts}
|
||||
|
||||
|
||||
@router.get("/api/features")
|
||||
def list_features():
|
||||
"""List all discovered Gherkin features."""
|
||||
features_dir = Path(__file__).parent / "features"
|
||||
features = discover_features(features_dir)
|
||||
|
||||
return {
|
||||
"features": [
|
||||
{
|
||||
"name": f.name,
|
||||
"description": f.description,
|
||||
"file_path": f.file_path,
|
||||
"language": f.language,
|
||||
"tags": f.tags,
|
||||
"scenario_count": len(f.scenarios),
|
||||
"scenarios": [
|
||||
{
|
||||
"name": s.name,
|
||||
"description": s.description,
|
||||
"tags": s.tags,
|
||||
"type": s.scenario_type,
|
||||
}
|
||||
for s in f.scenarios
|
||||
]
|
||||
}
|
||||
for f in features
|
||||
],
|
||||
"total": len(features)
|
||||
}
|
||||
|
||||
|
||||
@router.get("/api/features/tags")
|
||||
def list_feature_tags():
|
||||
"""List all unique tags from Gherkin features."""
|
||||
features_dir = Path(__file__).parent / "features"
|
||||
features = discover_features(features_dir)
|
||||
tags = extract_tags_from_features(features)
|
||||
|
||||
return {
|
||||
"tags": sorted(list(tags)),
|
||||
"total": len(tags)
|
||||
}
|
||||
|
||||
|
||||
@router.post("/api/features/sync")
|
||||
def sync_features():
|
||||
"""Sync feature files from album/book/gherkin-samples/."""
|
||||
result = sync_features_from_album()
|
||||
return result
|
||||
11
station/tools/tester/artifacts/.gitignore
vendored
Normal file
11
station/tools/tester/artifacts/.gitignore
vendored
Normal file
@@ -0,0 +1,11 @@
|
||||
# Ignore all artifacts (videos, screenshots, traces)
|
||||
# These are generated during test runs and should not be committed
|
||||
videos/
|
||||
screenshots/
|
||||
traces/
|
||||
*.webm
|
||||
*.mp4
|
||||
*.png
|
||||
*.jpg
|
||||
*.jpeg
|
||||
*.zip
|
||||
119
station/tools/tester/base.py
Normal file
119
station/tools/tester/base.py
Normal file
@@ -0,0 +1,119 @@
|
||||
"""
|
||||
Pure HTTP Contract Tests - Base Class
|
||||
|
||||
Framework-agnostic: works against ANY backend implementation.
|
||||
"""
|
||||
|
||||
import unittest
|
||||
import httpx
|
||||
|
||||
from .config import config
|
||||
|
||||
|
||||
class ContractTestCase(unittest.TestCase):
|
||||
"""
|
||||
Base class for pure HTTP contract tests.
|
||||
|
||||
Features:
|
||||
- Framework-agnostic (works with Django, FastAPI, Node, etc.)
|
||||
- Pure HTTP via httpx library
|
||||
- No database access - all data through API
|
||||
- API Key authentication
|
||||
"""
|
||||
|
||||
_base_url = None
|
||||
_api_key = None
|
||||
|
||||
@classmethod
|
||||
def setUpClass(cls):
|
||||
"""Set up once per test class"""
|
||||
super().setUpClass()
|
||||
cls._base_url = config.get("CONTRACT_TEST_URL", "").rstrip("/")
|
||||
if not cls._base_url:
|
||||
raise ValueError("CONTRACT_TEST_URL required in environment")
|
||||
|
||||
cls._api_key = config.get("CONTRACT_TEST_API_KEY", "")
|
||||
if not cls._api_key:
|
||||
raise ValueError("CONTRACT_TEST_API_KEY required in environment")
|
||||
|
||||
@property
|
||||
def base_url(self):
|
||||
return self._base_url
|
||||
|
||||
@property
|
||||
def api_key(self):
|
||||
return self._api_key
|
||||
|
||||
def _auth_headers(self):
|
||||
"""Get authorization headers"""
|
||||
return {"Authorization": f"Api-Key {self.api_key}"}
|
||||
|
||||
# =========================================================================
|
||||
# HTTP helpers
|
||||
# =========================================================================
|
||||
|
||||
def get(self, path: str, params: dict = None, **kwargs):
|
||||
"""GET request"""
|
||||
url = f"{self.base_url}{path}"
|
||||
headers = {**self._auth_headers(), **kwargs.pop("headers", {})}
|
||||
response = httpx.get(url, params=params, headers=headers, timeout=30, **kwargs)
|
||||
return self._wrap_response(response)
|
||||
|
||||
def post(self, path: str, data: dict = None, **kwargs):
|
||||
"""POST request with JSON"""
|
||||
url = f"{self.base_url}{path}"
|
||||
headers = {**self._auth_headers(), **kwargs.pop("headers", {})}
|
||||
response = httpx.post(url, json=data, headers=headers, timeout=30, **kwargs)
|
||||
return self._wrap_response(response)
|
||||
|
||||
def put(self, path: str, data: dict = None, **kwargs):
|
||||
"""PUT request with JSON"""
|
||||
url = f"{self.base_url}{path}"
|
||||
headers = {**self._auth_headers(), **kwargs.pop("headers", {})}
|
||||
response = httpx.put(url, json=data, headers=headers, timeout=30, **kwargs)
|
||||
return self._wrap_response(response)
|
||||
|
||||
def patch(self, path: str, data: dict = None, **kwargs):
|
||||
"""PATCH request with JSON"""
|
||||
url = f"{self.base_url}{path}"
|
||||
headers = {**self._auth_headers(), **kwargs.pop("headers", {})}
|
||||
response = httpx.patch(url, json=data, headers=headers, timeout=30, **kwargs)
|
||||
return self._wrap_response(response)
|
||||
|
||||
def delete(self, path: str, **kwargs):
|
||||
"""DELETE request"""
|
||||
url = f"{self.base_url}{path}"
|
||||
headers = {**self._auth_headers(), **kwargs.pop("headers", {})}
|
||||
response = httpx.delete(url, headers=headers, timeout=30, **kwargs)
|
||||
return self._wrap_response(response)
|
||||
|
||||
def _wrap_response(self, response):
|
||||
"""Add .data attribute for consistency with DRF responses"""
|
||||
try:
|
||||
response.data = response.json()
|
||||
except Exception:
|
||||
response.data = None
|
||||
return response
|
||||
|
||||
# =========================================================================
|
||||
# Assertion helpers
|
||||
# =========================================================================
|
||||
|
||||
def assert_status(self, response, expected_status: int):
|
||||
"""Assert response has expected status code"""
|
||||
self.assertEqual(
|
||||
response.status_code,
|
||||
expected_status,
|
||||
f"Expected {expected_status}, got {response.status_code}. "
|
||||
f"Response: {response.data if hasattr(response, 'data') else response.content[:500]}"
|
||||
)
|
||||
|
||||
def assert_has_fields(self, data: dict, *fields: str):
|
||||
"""Assert dictionary has all specified fields"""
|
||||
missing = [f for f in fields if f not in data]
|
||||
self.assertEqual(missing, [], f"Missing fields: {missing}. Got: {list(data.keys())}")
|
||||
|
||||
def assert_is_list(self, data, min_length: int = 0):
|
||||
"""Assert data is a list with minimum length"""
|
||||
self.assertIsInstance(data, list)
|
||||
self.assertGreaterEqual(len(data), min_length)
|
||||
129
station/tools/tester/cli.py
Normal file
129
station/tools/tester/cli.py
Normal file
@@ -0,0 +1,129 @@
|
||||
"""
|
||||
CLI for contracts_http tool.
|
||||
"""
|
||||
|
||||
import argparse
|
||||
import sys
|
||||
import time
|
||||
|
||||
from .config import config
|
||||
from .core import discover_tests, start_test_run, get_run_status
|
||||
|
||||
|
||||
def cmd_discover(args):
|
||||
"""List discovered tests."""
|
||||
tests = discover_tests()
|
||||
|
||||
if args.json:
|
||||
import json
|
||||
print(json.dumps([
|
||||
{
|
||||
"id": t.id,
|
||||
"module": t.module,
|
||||
"class": t.class_name,
|
||||
"method": t.method_name,
|
||||
"doc": t.doc,
|
||||
}
|
||||
for t in tests
|
||||
], indent=2))
|
||||
else:
|
||||
print(f"Discovered {len(tests)} tests:\n")
|
||||
|
||||
# Group by module
|
||||
by_module = {}
|
||||
for t in tests:
|
||||
if t.module not in by_module:
|
||||
by_module[t.module] = []
|
||||
by_module[t.module].append(t)
|
||||
|
||||
for module, module_tests in sorted(by_module.items()):
|
||||
print(f" {module}:")
|
||||
for t in module_tests:
|
||||
print(f" - {t.class_name}.{t.method_name}")
|
||||
print()
|
||||
|
||||
|
||||
def cmd_run(args):
|
||||
"""Run tests."""
|
||||
print(f"Target: {config['CONTRACT_TEST_URL']}")
|
||||
print()
|
||||
|
||||
# Filter tests if pattern provided
|
||||
test_ids = None
|
||||
if args.pattern:
|
||||
all_tests = discover_tests()
|
||||
test_ids = [
|
||||
t.id for t in all_tests
|
||||
if args.pattern.lower() in t.id.lower()
|
||||
]
|
||||
if not test_ids:
|
||||
print(f"No tests matching pattern: {args.pattern}")
|
||||
return 1
|
||||
print(f"Running {len(test_ids)} tests matching '{args.pattern}'")
|
||||
else:
|
||||
print("Running all tests")
|
||||
|
||||
print()
|
||||
|
||||
# Start run
|
||||
run_id = start_test_run(test_ids)
|
||||
|
||||
# Poll until complete
|
||||
while True:
|
||||
status = get_run_status(run_id)
|
||||
if not status:
|
||||
print("Error: Run not found")
|
||||
return 1
|
||||
|
||||
# Print progress
|
||||
if status.current_test:
|
||||
sys.stdout.write(f"\r Running: {status.current_test[:60]}...")
|
||||
sys.stdout.flush()
|
||||
|
||||
if status.status in ("completed", "failed"):
|
||||
sys.stdout.write("\r" + " " * 80 + "\r") # Clear line
|
||||
break
|
||||
|
||||
time.sleep(0.5)
|
||||
|
||||
# Print results
|
||||
print(f"Results: {status.passed} passed, {status.failed} failed, {status.skipped} skipped")
|
||||
print()
|
||||
|
||||
# Print failures
|
||||
failures = [r for r in status.results if r.status.value in ("failed", "error")]
|
||||
if failures:
|
||||
print("Failures:")
|
||||
for f in failures:
|
||||
print(f"\n {f.test_id}")
|
||||
print(f" {f.error_message}")
|
||||
|
||||
return 1 if failures else 0
|
||||
|
||||
|
||||
def main(args=None):
|
||||
parser = argparse.ArgumentParser(
|
||||
description="Contract HTTP Tests - Pure HTTP test runner"
|
||||
)
|
||||
subparsers = parser.add_subparsers(dest="command", help="Available commands")
|
||||
|
||||
# discover command
|
||||
discover_parser = subparsers.add_parser("discover", help="List discovered tests")
|
||||
discover_parser.add_argument("--json", action="store_true", help="Output as JSON")
|
||||
|
||||
# run command
|
||||
run_parser = subparsers.add_parser("run", help="Run tests")
|
||||
run_parser.add_argument("pattern", nargs="?", help="Filter tests by pattern (e.g., 'mascotas', 'pet_owners')")
|
||||
|
||||
args = parser.parse_args(args)
|
||||
|
||||
if args.command == "discover":
|
||||
cmd_discover(args)
|
||||
elif args.command == "run":
|
||||
sys.exit(cmd_run(args))
|
||||
else:
|
||||
parser.print_help()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
65
station/tools/tester/config.py
Normal file
65
station/tools/tester/config.py
Normal file
@@ -0,0 +1,65 @@
|
||||
"""
|
||||
Configuration for contract HTTP tests.
|
||||
|
||||
Loads from .env file in this directory, with environment overrides.
|
||||
"""
|
||||
|
||||
import os
|
||||
import json
|
||||
from pathlib import Path
|
||||
|
||||
|
||||
def load_config() -> dict:
|
||||
"""Load configuration from .env file and environment variables."""
|
||||
config = {}
|
||||
|
||||
# Load from .env file in this directory
|
||||
env_file = Path(__file__).parent / ".env"
|
||||
if env_file.exists():
|
||||
with open(env_file) as f:
|
||||
for line in f:
|
||||
line = line.strip()
|
||||
if line and not line.startswith("#") and "=" in line:
|
||||
key, value = line.split("=", 1)
|
||||
config[key.strip()] = value.strip()
|
||||
|
||||
# Environment variables override .env file
|
||||
config["CONTRACT_TEST_URL"] = os.environ.get(
|
||||
"CONTRACT_TEST_URL",
|
||||
config.get("CONTRACT_TEST_URL", "")
|
||||
)
|
||||
config["CONTRACT_TEST_API_KEY"] = os.environ.get(
|
||||
"CONTRACT_TEST_API_KEY",
|
||||
config.get("CONTRACT_TEST_API_KEY", "")
|
||||
)
|
||||
|
||||
return config
|
||||
|
||||
|
||||
def load_environments() -> list:
|
||||
"""Load available test environments from JSON file."""
|
||||
environments_file = Path(__file__).parent / "environments.json"
|
||||
|
||||
if environments_file.exists():
|
||||
try:
|
||||
with open(environments_file) as f:
|
||||
return json.load(f)
|
||||
except Exception as e:
|
||||
print(f"Failed to load environments.json: {e}")
|
||||
|
||||
# Default fallback
|
||||
config = load_config()
|
||||
return [
|
||||
{
|
||||
"id": "demo",
|
||||
"name": "Demo",
|
||||
"url": config.get("CONTRACT_TEST_URL", "https://demo.amarmascotas.ar"),
|
||||
"api_key": config.get("CONTRACT_TEST_API_KEY", ""),
|
||||
"description": "Demo environment",
|
||||
"default": True
|
||||
}
|
||||
]
|
||||
|
||||
|
||||
config = load_config()
|
||||
environments = load_environments()
|
||||
342
station/tools/tester/core.py
Normal file
342
station/tools/tester/core.py
Normal file
@@ -0,0 +1,342 @@
|
||||
"""
|
||||
Core logic for test discovery and execution.
|
||||
"""
|
||||
|
||||
import unittest
|
||||
import time
|
||||
import threading
|
||||
import traceback
|
||||
import uuid
|
||||
from dataclasses import dataclass, field
|
||||
from pathlib import Path
|
||||
from typing import Optional
|
||||
from enum import Enum
|
||||
|
||||
|
||||
class TestStatus(str, Enum):
|
||||
PENDING = "pending"
|
||||
RUNNING = "running"
|
||||
PASSED = "passed"
|
||||
FAILED = "failed"
|
||||
ERROR = "error"
|
||||
SKIPPED = "skipped"
|
||||
|
||||
|
||||
@dataclass
|
||||
class TestInfo:
|
||||
"""Information about a discovered test."""
|
||||
id: str
|
||||
name: str
|
||||
module: str
|
||||
class_name: str
|
||||
method_name: str
|
||||
doc: Optional[str] = None
|
||||
|
||||
|
||||
@dataclass
|
||||
class TestResult:
|
||||
"""Result of a single test execution."""
|
||||
test_id: str
|
||||
name: str
|
||||
status: TestStatus
|
||||
duration: float = 0.0
|
||||
error_message: Optional[str] = None
|
||||
traceback: Optional[str] = None
|
||||
artifacts: list[dict] = field(default_factory=list) # List of artifact metadata
|
||||
|
||||
|
||||
@dataclass
|
||||
class RunStatus:
|
||||
"""Status of a test run."""
|
||||
run_id: str
|
||||
status: str # "running", "completed", "failed"
|
||||
total: int = 0
|
||||
completed: int = 0
|
||||
passed: int = 0
|
||||
failed: int = 0
|
||||
errors: int = 0
|
||||
skipped: int = 0
|
||||
results: list[TestResult] = field(default_factory=list)
|
||||
started_at: Optional[float] = None
|
||||
finished_at: Optional[float] = None
|
||||
current_test: Optional[str] = None
|
||||
|
||||
|
||||
# Global storage for run statuses
|
||||
_runs: dict[str, RunStatus] = {}
|
||||
_runs_lock = threading.Lock()
|
||||
|
||||
|
||||
def discover_tests() -> list[TestInfo]:
|
||||
"""Discover all tests in the tests directory."""
|
||||
tests_dir = Path(__file__).parent / "tests"
|
||||
# top_level_dir must be contracts_http's parent (tools/) so that
|
||||
# relative imports like "from ...base" resolve to contracts_http.base
|
||||
top_level = Path(__file__).parent.parent
|
||||
loader = unittest.TestLoader()
|
||||
|
||||
# Discover tests
|
||||
suite = loader.discover(str(tests_dir), pattern="test_*.py", top_level_dir=str(top_level))
|
||||
|
||||
tests = []
|
||||
|
||||
def extract_tests(suite_or_case):
|
||||
if isinstance(suite_or_case, unittest.TestSuite):
|
||||
for item in suite_or_case:
|
||||
extract_tests(item)
|
||||
elif isinstance(suite_or_case, unittest.TestCase):
|
||||
test_method = getattr(suite_or_case, suite_or_case._testMethodName, None)
|
||||
doc = test_method.__doc__ if test_method else None
|
||||
|
||||
# Build module path relative to tests/
|
||||
module_parts = suite_or_case.__class__.__module__.split(".")
|
||||
# Remove 'contracts_http.tests' prefix if present
|
||||
if len(module_parts) > 2 and module_parts[-3] == "tests":
|
||||
module_name = ".".join(module_parts[-2:])
|
||||
else:
|
||||
module_name = suite_or_case.__class__.__module__
|
||||
|
||||
test_id = f"{module_name}.{suite_or_case.__class__.__name__}.{suite_or_case._testMethodName}"
|
||||
|
||||
tests.append(TestInfo(
|
||||
id=test_id,
|
||||
name=suite_or_case._testMethodName,
|
||||
module=module_name,
|
||||
class_name=suite_or_case.__class__.__name__,
|
||||
method_name=suite_or_case._testMethodName,
|
||||
doc=doc.strip() if doc else None,
|
||||
))
|
||||
|
||||
extract_tests(suite)
|
||||
return tests
|
||||
|
||||
|
||||
def get_tests_tree() -> dict:
|
||||
"""Get tests organized as a tree structure for the UI."""
|
||||
tests = discover_tests()
|
||||
tree = {}
|
||||
|
||||
for test in tests:
|
||||
# Parse module to get folder structure
|
||||
parts = test.module.split(".")
|
||||
folder = parts[0] if parts else "root"
|
||||
|
||||
if folder not in tree:
|
||||
tree[folder] = {"modules": {}, "test_count": 0}
|
||||
|
||||
module_name = parts[-1] if len(parts) > 1 else test.module
|
||||
if module_name not in tree[folder]["modules"]:
|
||||
tree[folder]["modules"][module_name] = {"classes": {}, "test_count": 0}
|
||||
|
||||
if test.class_name not in tree[folder]["modules"][module_name]["classes"]:
|
||||
tree[folder]["modules"][module_name]["classes"][test.class_name] = {"tests": [], "test_count": 0}
|
||||
|
||||
tree[folder]["modules"][module_name]["classes"][test.class_name]["tests"].append({
|
||||
"id": test.id,
|
||||
"name": test.method_name,
|
||||
"doc": test.doc,
|
||||
})
|
||||
tree[folder]["modules"][module_name]["classes"][test.class_name]["test_count"] += 1
|
||||
tree[folder]["modules"][module_name]["test_count"] += 1
|
||||
tree[folder]["test_count"] += 1
|
||||
|
||||
return tree
|
||||
|
||||
|
||||
class ResultCollector(unittest.TestResult):
|
||||
"""Custom test result collector."""
|
||||
|
||||
def __init__(self, run_status: RunStatus):
|
||||
super().__init__()
|
||||
self.run_status = run_status
|
||||
self._test_start_times: dict[str, float] = {}
|
||||
|
||||
def _get_test_id(self, test: unittest.TestCase) -> str:
|
||||
module_parts = test.__class__.__module__.split(".")
|
||||
if len(module_parts) > 2 and module_parts[-3] == "tests":
|
||||
module_name = ".".join(module_parts[-2:])
|
||||
else:
|
||||
module_name = test.__class__.__module__
|
||||
return f"{module_name}.{test.__class__.__name__}.{test._testMethodName}"
|
||||
|
||||
def startTest(self, test):
|
||||
super().startTest(test)
|
||||
test_id = self._get_test_id(test)
|
||||
self._test_start_times[test_id] = time.time()
|
||||
with _runs_lock:
|
||||
self.run_status.current_test = test_id
|
||||
|
||||
def stopTest(self, test):
|
||||
super().stopTest(test)
|
||||
with _runs_lock:
|
||||
self.run_status.current_test = None
|
||||
|
||||
def addSuccess(self, test):
|
||||
super().addSuccess(test)
|
||||
test_id = self._get_test_id(test)
|
||||
duration = time.time() - self._test_start_times.get(test_id, time.time())
|
||||
|
||||
result = TestResult(
|
||||
test_id=test_id,
|
||||
name=test._testMethodName,
|
||||
status=TestStatus.PASSED,
|
||||
duration=duration,
|
||||
)
|
||||
|
||||
with _runs_lock:
|
||||
self.run_status.results.append(result)
|
||||
self.run_status.completed += 1
|
||||
self.run_status.passed += 1
|
||||
|
||||
def addFailure(self, test, err):
|
||||
super().addFailure(test, err)
|
||||
test_id = self._get_test_id(test)
|
||||
duration = time.time() - self._test_start_times.get(test_id, time.time())
|
||||
|
||||
result = TestResult(
|
||||
test_id=test_id,
|
||||
name=test._testMethodName,
|
||||
status=TestStatus.FAILED,
|
||||
duration=duration,
|
||||
error_message=str(err[1]),
|
||||
traceback="".join(traceback.format_exception(*err)),
|
||||
)
|
||||
|
||||
with _runs_lock:
|
||||
self.run_status.results.append(result)
|
||||
self.run_status.completed += 1
|
||||
self.run_status.failed += 1
|
||||
|
||||
def addError(self, test, err):
|
||||
super().addError(test, err)
|
||||
test_id = self._get_test_id(test)
|
||||
duration = time.time() - self._test_start_times.get(test_id, time.time())
|
||||
|
||||
result = TestResult(
|
||||
test_id=test_id,
|
||||
name=test._testMethodName,
|
||||
status=TestStatus.ERROR,
|
||||
duration=duration,
|
||||
error_message=str(err[1]),
|
||||
traceback="".join(traceback.format_exception(*err)),
|
||||
)
|
||||
|
||||
with _runs_lock:
|
||||
self.run_status.results.append(result)
|
||||
self.run_status.completed += 1
|
||||
self.run_status.errors += 1
|
||||
|
||||
def addSkip(self, test, reason):
|
||||
super().addSkip(test, reason)
|
||||
test_id = self._get_test_id(test)
|
||||
duration = time.time() - self._test_start_times.get(test_id, time.time())
|
||||
|
||||
result = TestResult(
|
||||
test_id=test_id,
|
||||
name=test._testMethodName,
|
||||
status=TestStatus.SKIPPED,
|
||||
duration=duration,
|
||||
error_message=reason,
|
||||
)
|
||||
|
||||
with _runs_lock:
|
||||
self.run_status.results.append(result)
|
||||
self.run_status.completed += 1
|
||||
self.run_status.skipped += 1
|
||||
|
||||
|
||||
def _run_tests_thread(run_id: str, test_ids: Optional[list[str]] = None):
|
||||
"""Run tests in a background thread."""
|
||||
tests_dir = Path(__file__).parent / "tests"
|
||||
top_level = Path(__file__).parent.parent
|
||||
loader = unittest.TestLoader()
|
||||
|
||||
# Discover all tests
|
||||
suite = loader.discover(str(tests_dir), pattern="test_*.py", top_level_dir=str(top_level))
|
||||
|
||||
# Filter to selected tests if specified
|
||||
if test_ids:
|
||||
filtered_suite = unittest.TestSuite()
|
||||
|
||||
def filter_tests(suite_or_case):
|
||||
if isinstance(suite_or_case, unittest.TestSuite):
|
||||
for item in suite_or_case:
|
||||
filter_tests(item)
|
||||
elif isinstance(suite_or_case, unittest.TestCase):
|
||||
module_parts = suite_or_case.__class__.__module__.split(".")
|
||||
if len(module_parts) > 2 and module_parts[-3] == "tests":
|
||||
module_name = ".".join(module_parts[-2:])
|
||||
else:
|
||||
module_name = suite_or_case.__class__.__module__
|
||||
test_id = f"{module_name}.{suite_or_case.__class__.__name__}.{suite_or_case._testMethodName}"
|
||||
|
||||
# Check if this test matches any of the requested IDs
|
||||
for requested_id in test_ids:
|
||||
if test_id == requested_id or test_id.startswith(requested_id + ".") or requested_id in test_id:
|
||||
filtered_suite.addTest(suite_or_case)
|
||||
break
|
||||
|
||||
filter_tests(suite)
|
||||
suite = filtered_suite
|
||||
|
||||
# Count total tests
|
||||
total = suite.countTestCases()
|
||||
|
||||
with _runs_lock:
|
||||
_runs[run_id].total = total
|
||||
_runs[run_id].started_at = time.time()
|
||||
|
||||
# Run tests with our collector
|
||||
collector = ResultCollector(_runs[run_id])
|
||||
|
||||
try:
|
||||
suite.run(collector)
|
||||
except Exception as e:
|
||||
with _runs_lock:
|
||||
_runs[run_id].status = "failed"
|
||||
|
||||
with _runs_lock:
|
||||
_runs[run_id].status = "completed"
|
||||
_runs[run_id].finished_at = time.time()
|
||||
|
||||
|
||||
def start_test_run(test_ids: Optional[list[str]] = None) -> str:
|
||||
"""Start a test run in the background. Returns run_id."""
|
||||
run_id = str(uuid.uuid4())[:8]
|
||||
|
||||
run_status = RunStatus(
|
||||
run_id=run_id,
|
||||
status="running",
|
||||
)
|
||||
|
||||
with _runs_lock:
|
||||
_runs[run_id] = run_status
|
||||
|
||||
# Start background thread
|
||||
thread = threading.Thread(target=_run_tests_thread, args=(run_id, test_ids))
|
||||
thread.daemon = True
|
||||
thread.start()
|
||||
|
||||
return run_id
|
||||
|
||||
|
||||
def get_run_status(run_id: str) -> Optional[RunStatus]:
|
||||
"""Get the status of a test run."""
|
||||
with _runs_lock:
|
||||
return _runs.get(run_id)
|
||||
|
||||
|
||||
def list_runs() -> list[dict]:
|
||||
"""List all test runs."""
|
||||
with _runs_lock:
|
||||
return [
|
||||
{
|
||||
"run_id": run.run_id,
|
||||
"status": run.status,
|
||||
"total": run.total,
|
||||
"completed": run.completed,
|
||||
"passed": run.passed,
|
||||
"failed": run.failed,
|
||||
}
|
||||
for run in _runs.values()
|
||||
]
|
||||
37
station/tools/tester/endpoints.py
Normal file
37
station/tools/tester/endpoints.py
Normal file
@@ -0,0 +1,37 @@
|
||||
"""
|
||||
API Endpoints - Single source of truth for contract tests.
|
||||
|
||||
If API paths or versioning changes, update here only.
|
||||
"""
|
||||
|
||||
|
||||
class Endpoints:
|
||||
"""API endpoint paths"""
|
||||
|
||||
# ==========================================================================
|
||||
# Mascotas
|
||||
# ==========================================================================
|
||||
PET_OWNERS = "/mascotas/api/v1/pet-owners/"
|
||||
PET_OWNER_DETAIL = "/mascotas/api/v1/pet-owners/{id}/"
|
||||
PETS = "/mascotas/api/v1/pets/"
|
||||
PET_DETAIL = "/mascotas/api/v1/pets/{id}/"
|
||||
COVERAGE_CHECK = "/mascotas/api/v1/coverage/check/"
|
||||
|
||||
# ==========================================================================
|
||||
# Productos
|
||||
# ==========================================================================
|
||||
SERVICES = "/productos/api/v1/services/"
|
||||
CART = "/productos/api/v1/cart/"
|
||||
CART_DETAIL = "/productos/api/v1/cart/{id}/"
|
||||
|
||||
# ==========================================================================
|
||||
# Solicitudes
|
||||
# ==========================================================================
|
||||
SERVICE_REQUESTS = "/solicitudes/service-requests/"
|
||||
SERVICE_REQUEST_DETAIL = "/solicitudes/service-requests/{id}/"
|
||||
|
||||
# ==========================================================================
|
||||
# Auth
|
||||
# ==========================================================================
|
||||
TOKEN = "/api/token/"
|
||||
TOKEN_REFRESH = "/api/token/refresh/"
|
||||
31
station/tools/tester/environments.json
Normal file
31
station/tools/tester/environments.json
Normal file
@@ -0,0 +1,31 @@
|
||||
[
|
||||
{
|
||||
"id": "demo",
|
||||
"name": "Demo",
|
||||
"url": "https://demo.amarmascotas.ar",
|
||||
"api_key": "",
|
||||
"description": "Demo environment for testing",
|
||||
"default": true
|
||||
},
|
||||
{
|
||||
"id": "dev",
|
||||
"name": "Development",
|
||||
"url": "https://dev.amarmascotas.ar",
|
||||
"api_key": "",
|
||||
"description": "Development environment"
|
||||
},
|
||||
{
|
||||
"id": "stage",
|
||||
"name": "Staging",
|
||||
"url": "https://stage.amarmascotas.ar",
|
||||
"api_key": "",
|
||||
"description": "Staging environment"
|
||||
},
|
||||
{
|
||||
"id": "prod",
|
||||
"name": "Production",
|
||||
"url": "https://amarmascotas.ar",
|
||||
"api_key": "",
|
||||
"description": "Production environment (use with caution!)"
|
||||
}
|
||||
]
|
||||
5
station/tools/tester/features/.gitignore
vendored
Normal file
5
station/tools/tester/features/.gitignore
vendored
Normal file
@@ -0,0 +1,5 @@
|
||||
# Ignore synced feature files
|
||||
# These are synced from album/book/gherkin-samples/
|
||||
*.feature
|
||||
es/
|
||||
en/
|
||||
88
station/tools/tester/get-api-key.sh
Executable file
88
station/tools/tester/get-api-key.sh
Executable file
@@ -0,0 +1,88 @@
|
||||
#!/bin/bash
|
||||
#
|
||||
# Get CONTRACT_TEST_API_KEY from the database
|
||||
#
|
||||
# Usage:
|
||||
# ./get-api-key.sh # Uses env vars or defaults
|
||||
# ./get-api-key.sh --docker # Query via docker exec
|
||||
# ./get-api-key.sh --host db.example.com --password secret
|
||||
#
|
||||
# Environment variables:
|
||||
# DB_HOST, DB_PORT, DB_NAME, DB_USER, DB_PASSWORD
|
||||
#
|
||||
|
||||
set -e
|
||||
|
||||
# Defaults
|
||||
DB_HOST="${DB_HOST:-localhost}"
|
||||
DB_PORT="${DB_PORT:-5432}"
|
||||
DB_NAME="${DB_NAME:-amarback}"
|
||||
DB_USER="${DB_USER:-postgres}"
|
||||
DB_PASSWORD="${DB_PASSWORD:-}"
|
||||
DOCKER_CONTAINER=""
|
||||
|
||||
# Parse arguments
|
||||
while [[ $# -gt 0 ]]; do
|
||||
case $1 in
|
||||
--docker)
|
||||
DOCKER_CONTAINER="${2:-core_nest_db}"
|
||||
shift 2 || shift 1
|
||||
;;
|
||||
--host)
|
||||
DB_HOST="$2"
|
||||
shift 2
|
||||
;;
|
||||
--port)
|
||||
DB_PORT="$2"
|
||||
shift 2
|
||||
;;
|
||||
--name)
|
||||
DB_NAME="$2"
|
||||
shift 2
|
||||
;;
|
||||
--user)
|
||||
DB_USER="$2"
|
||||
shift 2
|
||||
;;
|
||||
--password)
|
||||
DB_PASSWORD="$2"
|
||||
shift 2
|
||||
;;
|
||||
--help|-h)
|
||||
echo "Usage: $0 [options]"
|
||||
echo ""
|
||||
echo "Options:"
|
||||
echo " --docker [container] Query via docker exec (default: core_nest_db)"
|
||||
echo " --host HOST Database host"
|
||||
echo " --port PORT Database port (default: 5432)"
|
||||
echo " --name NAME Database name (default: amarback)"
|
||||
echo " --user USER Database user (default: postgres)"
|
||||
echo " --password PASS Database password"
|
||||
echo ""
|
||||
echo "Environment variables: DB_HOST, DB_PORT, DB_NAME, DB_USER, DB_PASSWORD"
|
||||
exit 0
|
||||
;;
|
||||
*)
|
||||
echo "Unknown option: $1" >&2
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
done
|
||||
|
||||
QUERY="SELECT key FROM common_apikey WHERE is_active=true LIMIT 1;"
|
||||
|
||||
if [[ -n "$DOCKER_CONTAINER" ]]; then
|
||||
# Query via docker
|
||||
API_KEY=$(docker exec "$DOCKER_CONTAINER" psql -U "$DB_USER" -d "$DB_NAME" -t -c "$QUERY" 2>/dev/null | tr -d ' \n')
|
||||
else
|
||||
# Query directly
|
||||
export PGPASSWORD="$DB_PASSWORD"
|
||||
API_KEY=$(psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$DB_NAME" -t -c "$QUERY" 2>/dev/null | tr -d ' \n')
|
||||
fi
|
||||
|
||||
if [[ -z "$API_KEY" ]]; then
|
||||
echo "Error: No active API key found in database" >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "$API_KEY"
|
||||
1
station/tools/tester/gherkin/__init__.py
Normal file
1
station/tools/tester/gherkin/__init__.py
Normal file
@@ -0,0 +1 @@
|
||||
"""Gherkin integration for tester."""
|
||||
175
station/tools/tester/gherkin/mapper.py
Normal file
175
station/tools/tester/gherkin/mapper.py
Normal file
@@ -0,0 +1,175 @@
|
||||
"""
|
||||
Map tests to Gherkin scenarios based on metadata.
|
||||
|
||||
Tests can declare their Gherkin metadata via docstrings:
|
||||
|
||||
```python
|
||||
def test_coverage_check(self):
|
||||
'''
|
||||
Feature: Reservar turno veterinario
|
||||
Scenario: Verificar cobertura en zona disponible
|
||||
Tags: @smoke @coverage
|
||||
'''
|
||||
```
|
||||
|
||||
Or via class docstrings:
|
||||
|
||||
```python
|
||||
class TestCoverageFlow(ContractHTTPTestCase):
|
||||
"""
|
||||
Feature: Reservar turno veterinario
|
||||
Tags: @coverage
|
||||
"""
|
||||
```
|
||||
"""
|
||||
|
||||
import re
|
||||
from typing import Optional
|
||||
from dataclasses import dataclass
|
||||
|
||||
|
||||
@dataclass
|
||||
class TestGherkinMetadata:
|
||||
"""Gherkin metadata extracted from a test."""
|
||||
feature: Optional[str] = None
|
||||
scenario: Optional[str] = None
|
||||
tags: list[str] = None
|
||||
|
||||
def __post_init__(self):
|
||||
if self.tags is None:
|
||||
self.tags = []
|
||||
|
||||
|
||||
def extract_gherkin_metadata(docstring: Optional[str]) -> TestGherkinMetadata:
|
||||
"""
|
||||
Extract Gherkin metadata from a test docstring.
|
||||
|
||||
Looks for:
|
||||
- Feature: <name>
|
||||
- Scenario: <name>
|
||||
- Tags: @tag1 @tag2
|
||||
|
||||
Args:
|
||||
docstring: Test or class docstring
|
||||
|
||||
Returns:
|
||||
TestGherkinMetadata with extracted info
|
||||
"""
|
||||
if not docstring:
|
||||
return TestGherkinMetadata()
|
||||
|
||||
# Extract Feature
|
||||
feature = None
|
||||
feature_match = re.search(r"Feature:\s*(.+)", docstring)
|
||||
if feature_match:
|
||||
feature = feature_match.group(1).strip()
|
||||
|
||||
# Extract Scenario (also try Spanish: Escenario)
|
||||
scenario = None
|
||||
scenario_match = re.search(r"(Scenario|Escenario):\s*(.+)", docstring)
|
||||
if scenario_match:
|
||||
scenario = scenario_match.group(2).strip()
|
||||
|
||||
# Extract Tags
|
||||
tags = []
|
||||
tags_match = re.search(r"Tags:\s*(.+)", docstring)
|
||||
if tags_match:
|
||||
tags_str = tags_match.group(1).strip()
|
||||
tags = re.findall(r"@[\w-]+", tags_str)
|
||||
|
||||
return TestGherkinMetadata(
|
||||
feature=feature,
|
||||
scenario=scenario,
|
||||
tags=tags
|
||||
)
|
||||
|
||||
|
||||
def has_gherkin_metadata(docstring: Optional[str]) -> bool:
|
||||
"""Check if a docstring contains Gherkin metadata."""
|
||||
if not docstring:
|
||||
return False
|
||||
|
||||
return bool(
|
||||
re.search(r"Feature:\s*", docstring) or
|
||||
re.search(r"Scenario:\s*", docstring) or
|
||||
re.search(r"Escenario:\s*", docstring) or
|
||||
re.search(r"Tags:\s*@", docstring)
|
||||
)
|
||||
|
||||
|
||||
def match_test_to_feature(
|
||||
test_metadata: TestGherkinMetadata,
|
||||
feature_names: list[str]
|
||||
) -> Optional[str]:
|
||||
"""
|
||||
Match a test's feature metadata to an actual feature name.
|
||||
|
||||
Uses fuzzy matching if exact match not found.
|
||||
|
||||
Args:
|
||||
test_metadata: Extracted test metadata
|
||||
feature_names: List of available feature names
|
||||
|
||||
Returns:
|
||||
Matched feature name or None
|
||||
"""
|
||||
if not test_metadata.feature:
|
||||
return None
|
||||
|
||||
# Exact match
|
||||
if test_metadata.feature in feature_names:
|
||||
return test_metadata.feature
|
||||
|
||||
# Case-insensitive match
|
||||
test_feature_lower = test_metadata.feature.lower()
|
||||
for feature_name in feature_names:
|
||||
if feature_name.lower() == test_feature_lower:
|
||||
return feature_name
|
||||
|
||||
# Partial match (feature name contains test feature or vice versa)
|
||||
for feature_name in feature_names:
|
||||
if test_feature_lower in feature_name.lower():
|
||||
return feature_name
|
||||
if feature_name.lower() in test_feature_lower:
|
||||
return feature_name
|
||||
|
||||
return None
|
||||
|
||||
|
||||
def match_test_to_scenario(
|
||||
test_metadata: TestGherkinMetadata,
|
||||
scenario_names: list[str]
|
||||
) -> Optional[str]:
|
||||
"""
|
||||
Match a test's scenario metadata to an actual scenario name.
|
||||
|
||||
Uses fuzzy matching if exact match not found.
|
||||
|
||||
Args:
|
||||
test_metadata: Extracted test metadata
|
||||
scenario_names: List of available scenario names
|
||||
|
||||
Returns:
|
||||
Matched scenario name or None
|
||||
"""
|
||||
if not test_metadata.scenario:
|
||||
return None
|
||||
|
||||
# Exact match
|
||||
if test_metadata.scenario in scenario_names:
|
||||
return test_metadata.scenario
|
||||
|
||||
# Case-insensitive match
|
||||
test_scenario_lower = test_metadata.scenario.lower()
|
||||
for scenario_name in scenario_names:
|
||||
if scenario_name.lower() == test_scenario_lower:
|
||||
return scenario_name
|
||||
|
||||
# Partial match
|
||||
for scenario_name in scenario_names:
|
||||
if test_scenario_lower in scenario_name.lower():
|
||||
return scenario_name
|
||||
if scenario_name.lower() in test_scenario_lower:
|
||||
return scenario_name
|
||||
|
||||
return None
|
||||
231
station/tools/tester/gherkin/parser.py
Normal file
231
station/tools/tester/gherkin/parser.py
Normal file
@@ -0,0 +1,231 @@
|
||||
"""
|
||||
Parse Gherkin .feature files.
|
||||
|
||||
Simple parser without external dependencies - parses the subset we need.
|
||||
For full Gherkin support, could use gherkin-python package later.
|
||||
"""
|
||||
|
||||
import re
|
||||
from pathlib import Path
|
||||
from typing import Optional
|
||||
from dataclasses import dataclass, field
|
||||
|
||||
|
||||
@dataclass
|
||||
class GherkinScenario:
|
||||
"""A Gherkin scenario."""
|
||||
name: str
|
||||
description: str
|
||||
tags: list[str] = field(default_factory=list)
|
||||
steps: list[str] = field(default_factory=list)
|
||||
examples: dict = field(default_factory=dict)
|
||||
scenario_type: str = "Scenario" # or "Scenario Outline" / "Esquema del escenario"
|
||||
|
||||
|
||||
@dataclass
|
||||
class GherkinFeature:
|
||||
"""A parsed Gherkin feature file."""
|
||||
name: str
|
||||
description: str
|
||||
file_path: str
|
||||
language: str = "en" # or "es"
|
||||
tags: list[str] = field(default_factory=list)
|
||||
background: Optional[dict] = None
|
||||
scenarios: list[GherkinScenario] = field(default_factory=list)
|
||||
|
||||
|
||||
def parse_feature_file(file_path: Path) -> Optional[GherkinFeature]:
|
||||
"""
|
||||
Parse a Gherkin .feature file.
|
||||
|
||||
Supports both English and Spanish keywords.
|
||||
Extracts: Feature name, scenarios, tags, steps.
|
||||
"""
|
||||
if not file_path.exists():
|
||||
return None
|
||||
|
||||
try:
|
||||
content = file_path.read_text(encoding='utf-8')
|
||||
except Exception:
|
||||
return None
|
||||
|
||||
# Detect language
|
||||
language = "en"
|
||||
if re.search(r"#\s*language:\s*es", content):
|
||||
language = "es"
|
||||
|
||||
# Keywords by language
|
||||
if language == "es":
|
||||
feature_kw = r"Característica"
|
||||
scenario_kw = r"Escenario"
|
||||
outline_kw = r"Esquema del escenario"
|
||||
background_kw = r"Antecedentes"
|
||||
examples_kw = r"Ejemplos"
|
||||
given_kw = r"Dado"
|
||||
when_kw = r"Cuando"
|
||||
then_kw = r"Entonces"
|
||||
and_kw = r"Y"
|
||||
but_kw = r"Pero"
|
||||
else:
|
||||
feature_kw = r"Feature"
|
||||
scenario_kw = r"Scenario"
|
||||
outline_kw = r"Scenario Outline"
|
||||
background_kw = r"Background"
|
||||
examples_kw = r"Examples"
|
||||
given_kw = r"Given"
|
||||
when_kw = r"When"
|
||||
then_kw = r"Then"
|
||||
and_kw = r"And"
|
||||
but_kw = r"But"
|
||||
|
||||
lines = content.split('\n')
|
||||
|
||||
# Extract feature
|
||||
feature_name = None
|
||||
feature_desc = []
|
||||
feature_tags = []
|
||||
scenarios = []
|
||||
current_scenario = None
|
||||
current_tags = []
|
||||
|
||||
i = 0
|
||||
while i < len(lines):
|
||||
line = lines[i].strip()
|
||||
|
||||
# Skip comments and empty lines
|
||||
if not line or line.startswith('#'):
|
||||
i += 1
|
||||
continue
|
||||
|
||||
# Tags
|
||||
if line.startswith('@'):
|
||||
tags = re.findall(r'@[\w-]+', line)
|
||||
current_tags.extend(tags)
|
||||
i += 1
|
||||
continue
|
||||
|
||||
# Feature
|
||||
feature_match = re.match(rf"^{feature_kw}:\s*(.+)", line)
|
||||
if feature_match:
|
||||
feature_name = feature_match.group(1).strip()
|
||||
feature_tags = current_tags.copy()
|
||||
current_tags = []
|
||||
|
||||
# Read feature description
|
||||
i += 1
|
||||
while i < len(lines):
|
||||
line = lines[i].strip()
|
||||
if not line or line.startswith('#'):
|
||||
i += 1
|
||||
continue
|
||||
# Stop at scenario or background
|
||||
if re.match(rf"^({scenario_kw}|{outline_kw}|{background_kw}):", line):
|
||||
break
|
||||
feature_desc.append(line)
|
||||
i += 1
|
||||
continue
|
||||
|
||||
# Scenario
|
||||
scenario_match = re.match(rf"^({scenario_kw}|{outline_kw}):\s*(.+)", line)
|
||||
if scenario_match:
|
||||
# Save previous scenario
|
||||
if current_scenario:
|
||||
scenarios.append(current_scenario)
|
||||
|
||||
scenario_type = scenario_match.group(1)
|
||||
scenario_name = scenario_match.group(2).strip()
|
||||
|
||||
current_scenario = GherkinScenario(
|
||||
name=scenario_name,
|
||||
description="",
|
||||
tags=current_tags.copy(),
|
||||
steps=[],
|
||||
scenario_type=scenario_type
|
||||
)
|
||||
current_tags = []
|
||||
|
||||
# Read scenario steps
|
||||
i += 1
|
||||
while i < len(lines):
|
||||
line = lines[i].strip()
|
||||
|
||||
# Empty or comment
|
||||
if not line or line.startswith('#'):
|
||||
i += 1
|
||||
continue
|
||||
|
||||
# New scenario or feature-level element
|
||||
if re.match(rf"^({scenario_kw}|{outline_kw}|{examples_kw}):", line):
|
||||
break
|
||||
|
||||
# Tags (start of next scenario)
|
||||
if line.startswith('@'):
|
||||
break
|
||||
|
||||
# Step keywords
|
||||
if re.match(rf"^({given_kw}|{when_kw}|{then_kw}|{and_kw}|{but_kw})\s+", line):
|
||||
current_scenario.steps.append(line)
|
||||
|
||||
i += 1
|
||||
continue
|
||||
|
||||
i += 1
|
||||
|
||||
# Add last scenario
|
||||
if current_scenario:
|
||||
scenarios.append(current_scenario)
|
||||
|
||||
if not feature_name:
|
||||
return None
|
||||
|
||||
return GherkinFeature(
|
||||
name=feature_name,
|
||||
description=" ".join(feature_desc),
|
||||
file_path=str(file_path),
|
||||
language=language,
|
||||
tags=feature_tags,
|
||||
scenarios=scenarios
|
||||
)
|
||||
|
||||
|
||||
def discover_features(features_dir: Path) -> list[GherkinFeature]:
|
||||
"""
|
||||
Discover all .feature files in the features directory.
|
||||
"""
|
||||
if not features_dir.exists():
|
||||
return []
|
||||
|
||||
features = []
|
||||
|
||||
for feature_file in features_dir.rglob("*.feature"):
|
||||
parsed = parse_feature_file(feature_file)
|
||||
if parsed:
|
||||
features.append(parsed)
|
||||
|
||||
return features
|
||||
|
||||
|
||||
def extract_tags_from_features(features: list[GherkinFeature]) -> set[str]:
|
||||
"""Extract all unique tags from features."""
|
||||
tags = set()
|
||||
|
||||
for feature in features:
|
||||
tags.update(feature.tags)
|
||||
for scenario in feature.scenarios:
|
||||
tags.update(scenario.tags)
|
||||
|
||||
return tags
|
||||
|
||||
|
||||
def get_feature_names(features: list[GherkinFeature]) -> list[str]:
|
||||
"""Get list of feature names."""
|
||||
return [f.name for f in features]
|
||||
|
||||
|
||||
def get_scenario_names(features: list[GherkinFeature]) -> list[str]:
|
||||
"""Get list of all scenario names across all features."""
|
||||
scenarios = []
|
||||
for feature in features:
|
||||
for scenario in feature.scenarios:
|
||||
scenarios.append(scenario.name)
|
||||
return scenarios
|
||||
93
station/tools/tester/gherkin/sync.py
Normal file
93
station/tools/tester/gherkin/sync.py
Normal file
@@ -0,0 +1,93 @@
|
||||
"""
|
||||
Sync Gherkin feature files from album/book/gherkin-samples/ to tester/features/.
|
||||
"""
|
||||
|
||||
import shutil
|
||||
from pathlib import Path
|
||||
from typing import Optional
|
||||
|
||||
|
||||
def sync_features_from_album(
|
||||
album_path: Optional[Path] = None,
|
||||
tester_path: Optional[Path] = None
|
||||
) -> dict:
|
||||
"""
|
||||
Sync .feature files from album/book/gherkin-samples/ to ward/tools/tester/features/.
|
||||
|
||||
Args:
|
||||
album_path: Path to album/book/gherkin-samples/ (auto-detected if None)
|
||||
tester_path: Path to ward/tools/tester/features/ (auto-detected if None)
|
||||
|
||||
Returns:
|
||||
Dict with sync stats: {synced: int, skipped: int, errors: int}
|
||||
"""
|
||||
# Auto-detect paths if not provided
|
||||
if tester_path is None:
|
||||
tester_path = Path(__file__).parent.parent / "features"
|
||||
|
||||
if album_path is None:
|
||||
# Attempt to find album in pawprint
|
||||
pawprint_root = Path(__file__).parent.parent.parent.parent
|
||||
album_path = pawprint_root / "album" / "book" / "gherkin-samples"
|
||||
|
||||
# Ensure paths exist
|
||||
if not album_path.exists():
|
||||
return {
|
||||
"synced": 0,
|
||||
"skipped": 0,
|
||||
"errors": 1,
|
||||
"message": f"Album path not found: {album_path}"
|
||||
}
|
||||
|
||||
tester_path.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# Sync stats
|
||||
synced = 0
|
||||
skipped = 0
|
||||
errors = 0
|
||||
|
||||
# Find all .feature files in album
|
||||
for feature_file in album_path.rglob("*.feature"):
|
||||
# Get relative path from album root
|
||||
relative_path = feature_file.relative_to(album_path)
|
||||
|
||||
# Destination path
|
||||
dest_file = tester_path / relative_path
|
||||
|
||||
try:
|
||||
# Create parent directories
|
||||
dest_file.parent.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# Copy file
|
||||
shutil.copy2(feature_file, dest_file)
|
||||
synced += 1
|
||||
|
||||
except Exception as e:
|
||||
errors += 1
|
||||
|
||||
return {
|
||||
"synced": synced,
|
||||
"skipped": skipped,
|
||||
"errors": errors,
|
||||
"message": f"Synced {synced} feature files from {album_path}"
|
||||
}
|
||||
|
||||
|
||||
def clean_features_dir(features_dir: Optional[Path] = None):
|
||||
"""
|
||||
Clean the features directory (remove all .feature files).
|
||||
|
||||
Useful before re-syncing to ensure no stale files.
|
||||
"""
|
||||
if features_dir is None:
|
||||
features_dir = Path(__file__).parent.parent / "features"
|
||||
|
||||
if not features_dir.exists():
|
||||
return
|
||||
|
||||
# Remove all .feature files
|
||||
for feature_file in features_dir.rglob("*.feature"):
|
||||
try:
|
||||
feature_file.unlink()
|
||||
except Exception:
|
||||
pass
|
||||
44
station/tools/tester/helpers.py
Normal file
44
station/tools/tester/helpers.py
Normal file
@@ -0,0 +1,44 @@
|
||||
"""
|
||||
Contract Tests - Shared test data helpers.
|
||||
|
||||
Used across all endpoint tests to generate consistent test data.
|
||||
"""
|
||||
|
||||
import time
|
||||
|
||||
|
||||
def unique_email(prefix="test"):
|
||||
"""Generate unique email for test data"""
|
||||
return f"{prefix}_{int(time.time() * 1000)}@contract-test.local"
|
||||
|
||||
|
||||
def sample_pet_owner(email=None):
|
||||
"""Generate sample pet owner data"""
|
||||
return {
|
||||
"first_name": "Test",
|
||||
"last_name": "Usuario",
|
||||
"email": email or unique_email("owner"),
|
||||
"phone": "1155667788",
|
||||
"address": "Av. Santa Fe 1234",
|
||||
"geo_latitude": -34.5955,
|
||||
"geo_longitude": -58.4166,
|
||||
}
|
||||
|
||||
|
||||
SAMPLE_CAT = {
|
||||
"name": "TestCat",
|
||||
"pet_type": "CAT",
|
||||
"is_neutered": False,
|
||||
}
|
||||
|
||||
SAMPLE_DOG = {
|
||||
"name": "TestDog",
|
||||
"pet_type": "DOG",
|
||||
"is_neutered": False,
|
||||
}
|
||||
|
||||
SAMPLE_NEUTERED_CAT = {
|
||||
"name": "NeuteredCat",
|
||||
"pet_type": "CAT",
|
||||
"is_neutered": True,
|
||||
}
|
||||
182
station/tools/tester/index.py
Normal file
182
station/tools/tester/index.py
Normal file
@@ -0,0 +1,182 @@
|
||||
"""
|
||||
Test index generator - creates browsable view of available tests.
|
||||
"""
|
||||
|
||||
from pathlib import Path
|
||||
from typing import Dict, List
|
||||
import ast
|
||||
|
||||
|
||||
def parse_test_file(file_path: Path) -> Dict:
|
||||
"""Parse a test file and extract test methods with docstrings."""
|
||||
try:
|
||||
with open(file_path, 'r') as f:
|
||||
tree = ast.parse(f.read())
|
||||
|
||||
module_doc = ast.get_docstring(tree)
|
||||
classes = []
|
||||
|
||||
for node in ast.walk(tree):
|
||||
if isinstance(node, ast.ClassDef):
|
||||
class_doc = ast.get_docstring(node)
|
||||
methods = []
|
||||
|
||||
for item in node.body:
|
||||
if isinstance(item, ast.FunctionDef) and item.name.startswith('test_'):
|
||||
method_doc = ast.get_docstring(item)
|
||||
methods.append({
|
||||
'name': item.name,
|
||||
'doc': method_doc or "No description"
|
||||
})
|
||||
|
||||
if methods: # Only include classes with test methods
|
||||
classes.append({
|
||||
'name': node.name,
|
||||
'doc': class_doc or "No description",
|
||||
'methods': methods
|
||||
})
|
||||
|
||||
return {
|
||||
'file': file_path.name,
|
||||
'module_doc': module_doc or "No module description",
|
||||
'classes': classes
|
||||
}
|
||||
except Exception as e:
|
||||
return {
|
||||
'file': file_path.name,
|
||||
'error': str(e)
|
||||
}
|
||||
|
||||
|
||||
def build_test_index(tests_dir: Path) -> Dict:
|
||||
"""
|
||||
Build a hierarchical index of all tests.
|
||||
|
||||
Returns structure:
|
||||
{
|
||||
'mascotas': {
|
||||
'test_pet_owners.py': {...},
|
||||
'test_pets.py': {...}
|
||||
},
|
||||
'productos': {...},
|
||||
...
|
||||
}
|
||||
"""
|
||||
index = {}
|
||||
|
||||
# Find all domain directories (mascotas, productos, etc.)
|
||||
for domain_dir in tests_dir.iterdir():
|
||||
if not domain_dir.is_dir():
|
||||
continue
|
||||
if domain_dir.name.startswith('_'):
|
||||
continue
|
||||
|
||||
domain_tests = {}
|
||||
|
||||
# Find all test_*.py files in domain
|
||||
for test_file in domain_dir.glob('test_*.py'):
|
||||
test_info = parse_test_file(test_file)
|
||||
domain_tests[test_file.name] = test_info
|
||||
|
||||
if domain_tests: # Only include domains with tests
|
||||
index[domain_dir.name] = domain_tests
|
||||
|
||||
return index
|
||||
|
||||
|
||||
def generate_markdown_index(index: Dict) -> str:
|
||||
"""Generate markdown representation of test index."""
|
||||
lines = ["# Contract Tests Index\n"]
|
||||
|
||||
for domain, files in sorted(index.items()):
|
||||
lines.append(f"## {domain.capitalize()}\n")
|
||||
|
||||
for filename, file_info in sorted(files.items()):
|
||||
if 'error' in file_info:
|
||||
lines.append(f"### {filename} ⚠️ Parse Error")
|
||||
lines.append(f"```\n{file_info['error']}\n```\n")
|
||||
continue
|
||||
|
||||
lines.append(f"### {filename}")
|
||||
lines.append(f"{file_info['module_doc']}\n")
|
||||
|
||||
for cls in file_info['classes']:
|
||||
lines.append(f"#### {cls['name']}")
|
||||
lines.append(f"*{cls['doc']}*\n")
|
||||
|
||||
for method in cls['methods']:
|
||||
# Extract first line of docstring
|
||||
first_line = method['doc'].split('\n')[0].strip()
|
||||
lines.append(f"- `{method['name']}` - {first_line}")
|
||||
|
||||
lines.append("")
|
||||
|
||||
lines.append("")
|
||||
|
||||
return "\n".join(lines)
|
||||
|
||||
|
||||
def generate_html_index(index: Dict) -> str:
|
||||
"""Generate HTML representation of test index."""
|
||||
html = ['<!DOCTYPE html><html><head>']
|
||||
html.append('<meta charset="utf-8">')
|
||||
html.append('<title>Contract Tests Index</title>')
|
||||
html.append('<style>')
|
||||
html.append('''
|
||||
body { font-family: system-ui, -apple-system, sans-serif; max-width: 1200px; margin: 0 auto; padding: 20px; }
|
||||
h1 { color: #2c3e50; border-bottom: 3px solid #3498db; padding-bottom: 10px; }
|
||||
h2 { color: #34495e; margin-top: 40px; border-bottom: 2px solid #95a5a6; padding-bottom: 8px; }
|
||||
h3 { color: #7f8c8d; margin-top: 30px; }
|
||||
h4 { color: #95a5a6; margin-top: 20px; margin-bottom: 10px; }
|
||||
.module-doc { font-style: italic; color: #7f8c8d; margin-bottom: 15px; }
|
||||
.class-doc { font-style: italic; color: #95a5a6; margin-bottom: 10px; }
|
||||
.test-method { margin-left: 20px; padding: 8px; background: #ecf0f1; margin-bottom: 5px; border-radius: 4px; }
|
||||
.test-name { font-family: monospace; color: #2980b9; font-weight: bold; }
|
||||
.test-doc { color: #34495e; margin-left: 10px; }
|
||||
.error { background: #e74c3c; color: white; padding: 10px; border-radius: 4px; }
|
||||
.domain-badge { display: inline-block; background: #3498db; color: white; padding: 3px 10px; border-radius: 12px; font-size: 12px; margin-left: 10px; }
|
||||
''')
|
||||
html.append('</style></head><body>')
|
||||
|
||||
html.append('<h1>Contract Tests Index</h1>')
|
||||
html.append(f'<p>Total domains: {len(index)}</p>')
|
||||
|
||||
for domain, files in sorted(index.items()):
|
||||
test_count = sum(len(f.get('classes', [])) for f in files.values())
|
||||
html.append(f'<h2>{domain.capitalize()} <span class="domain-badge">{test_count} test classes</span></h2>')
|
||||
|
||||
for filename, file_info in sorted(files.items()):
|
||||
if 'error' in file_info:
|
||||
html.append(f'<h3>{filename} ⚠️</h3>')
|
||||
html.append(f'<div class="error">Parse Error: {file_info["error"]}</div>')
|
||||
continue
|
||||
|
||||
html.append(f'<h3>{filename}</h3>')
|
||||
html.append(f'<div class="module-doc">{file_info["module_doc"]}</div>')
|
||||
|
||||
for cls in file_info['classes']:
|
||||
html.append(f'<h4>{cls["name"]}</h4>')
|
||||
html.append(f'<div class="class-doc">{cls["doc"]}</div>')
|
||||
|
||||
for method in cls['methods']:
|
||||
first_line = method['doc'].split('\n')[0].strip()
|
||||
html.append(f'<div class="test-method">')
|
||||
html.append(f'<span class="test-name">{method["name"]}</span>')
|
||||
html.append(f'<span class="test-doc">{first_line}</span>')
|
||||
html.append('</div>')
|
||||
|
||||
html.append('</body></html>')
|
||||
return '\n'.join(html)
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
# CLI usage
|
||||
import sys
|
||||
|
||||
tests_dir = Path(__file__).parent / 'tests'
|
||||
index = build_test_index(tests_dir)
|
||||
|
||||
if '--html' in sys.argv:
|
||||
print(generate_html_index(index))
|
||||
else:
|
||||
print(generate_markdown_index(index))
|
||||
119
station/tools/tester/playwright/README.md
Normal file
119
station/tools/tester/playwright/README.md
Normal file
@@ -0,0 +1,119 @@
|
||||
# Playwright Test Integration
|
||||
|
||||
Frontend test support for ward/tools/tester.
|
||||
|
||||
## Features
|
||||
|
||||
- Discover Playwright tests (.spec.ts files)
|
||||
- Execute tests with Playwright runner
|
||||
- Capture video recordings and screenshots
|
||||
- Stream artifacts via API endpoints
|
||||
- Inline video/screenshot playback in test results
|
||||
|
||||
## Directory Structure
|
||||
|
||||
```
|
||||
ward/tools/tester/
|
||||
├── playwright/
|
||||
│ ├── discovery.py # Find .spec.ts tests
|
||||
│ ├── runner.py # Execute Playwright tests
|
||||
│ └── artifacts.py # Store and serve artifacts
|
||||
├── frontend-tests/ # Synced Playwright tests (gitignored)
|
||||
└── artifacts/ # Test artifacts (gitignored)
|
||||
├── videos/
|
||||
├── screenshots/
|
||||
└── traces/
|
||||
```
|
||||
|
||||
## Test Metadata Format
|
||||
|
||||
Add Gherkin metadata to Playwright tests via JSDoc comments:
|
||||
|
||||
```typescript
|
||||
/**
|
||||
* Feature: Reservar turno veterinario
|
||||
* Scenario: Verificar cobertura en zona disponible
|
||||
* Tags: @smoke @coverage @frontend
|
||||
* @description Coverage check shows message for valid address
|
||||
*/
|
||||
test('coverage check shows message for valid address', async ({ page }) => {
|
||||
await page.goto('http://localhost:3000/turnero');
|
||||
await page.fill('[name="address"]', 'Av Santa Fe 1234, CABA');
|
||||
await page.click('button:has-text("Verificar")');
|
||||
|
||||
await expect(page.locator('.coverage-message')).toContainText('Tenemos cobertura');
|
||||
});
|
||||
```
|
||||
|
||||
## Playwright Configuration
|
||||
|
||||
Tests should use playwright.config.ts with video/screenshot capture:
|
||||
|
||||
```typescript
|
||||
import { defineConfig } from '@playwright/test';
|
||||
|
||||
export default defineConfig({
|
||||
use: {
|
||||
// Capture video on failure
|
||||
video: 'retain-on-failure',
|
||||
// Capture screenshot on failure
|
||||
screenshot: 'only-on-failure',
|
||||
},
|
||||
|
||||
// Output directory for artifacts
|
||||
outputDir: './test-results',
|
||||
|
||||
reporter: [
|
||||
['json', { outputFile: 'results.json' }],
|
||||
['html'],
|
||||
],
|
||||
});
|
||||
```
|
||||
|
||||
## API Endpoints
|
||||
|
||||
### Stream Artifact
|
||||
```
|
||||
GET /tools/tester/api/artifact/{run_id}/{filename}
|
||||
```
|
||||
|
||||
Returns video/screenshot file for inline playback.
|
||||
|
||||
### List Artifacts
|
||||
```
|
||||
GET /tools/tester/api/artifacts/{run_id}
|
||||
```
|
||||
|
||||
Returns JSON list of all artifacts for a test run.
|
||||
|
||||
## Artifact Display
|
||||
|
||||
Videos and screenshots are displayed inline in test results:
|
||||
|
||||
**Video:**
|
||||
```html
|
||||
<video controls>
|
||||
<source src="/tools/tester/api/artifact/{run_id}/test-video.webm" type="video/webm">
|
||||
</video>
|
||||
```
|
||||
|
||||
**Screenshot:**
|
||||
```html
|
||||
<img src="/tools/tester/api/artifact/{run_id}/screenshot.png">
|
||||
```
|
||||
|
||||
## Integration with Test Runner
|
||||
|
||||
Playwright tests are discovered alongside backend tests and can be:
|
||||
- Run individually or in batches
|
||||
- Filtered by Gherkin metadata (feature, scenario, tags)
|
||||
- Filtered by pulse variables (role, stage, state)
|
||||
|
||||
## Future Enhancements
|
||||
|
||||
- Playwright trace viewer integration
|
||||
- Test parallelization
|
||||
- Browser selection (chromium, firefox, webkit)
|
||||
- Mobile device emulation
|
||||
- Network throttling
|
||||
- Test retry logic
|
||||
1
station/tools/tester/playwright/__init__.py
Normal file
1
station/tools/tester/playwright/__init__.py
Normal file
@@ -0,0 +1 @@
|
||||
"""Playwright test support for tester."""
|
||||
178
station/tools/tester/playwright/artifacts.py
Normal file
178
station/tools/tester/playwright/artifacts.py
Normal file
@@ -0,0 +1,178 @@
|
||||
"""
|
||||
Artifact storage and retrieval for test results.
|
||||
"""
|
||||
|
||||
import shutil
|
||||
from pathlib import Path
|
||||
from typing import Optional
|
||||
from dataclasses import dataclass
|
||||
|
||||
|
||||
@dataclass
|
||||
class TestArtifact:
|
||||
"""Test artifact (video, screenshot, trace, etc.)."""
|
||||
type: str # "video", "screenshot", "trace", "log"
|
||||
filename: str
|
||||
path: str
|
||||
size: int
|
||||
mimetype: str
|
||||
url: str # Streaming endpoint
|
||||
|
||||
|
||||
class ArtifactStore:
|
||||
"""Manage test artifacts."""
|
||||
|
||||
def __init__(self, artifacts_dir: Path):
|
||||
self.artifacts_dir = artifacts_dir
|
||||
self.videos_dir = artifacts_dir / "videos"
|
||||
self.screenshots_dir = artifacts_dir / "screenshots"
|
||||
self.traces_dir = artifacts_dir / "traces"
|
||||
|
||||
# Ensure directories exist
|
||||
self.videos_dir.mkdir(parents=True, exist_ok=True)
|
||||
self.screenshots_dir.mkdir(parents=True, exist_ok=True)
|
||||
self.traces_dir.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
def store_artifact(
|
||||
self,
|
||||
source_path: Path,
|
||||
run_id: str,
|
||||
artifact_type: str
|
||||
) -> Optional[TestArtifact]:
|
||||
"""
|
||||
Store an artifact and return its metadata.
|
||||
|
||||
Args:
|
||||
source_path: Path to the source file
|
||||
run_id: Test run ID
|
||||
artifact_type: Type of artifact (video, screenshot, trace)
|
||||
|
||||
Returns:
|
||||
TestArtifact metadata or None if storage fails
|
||||
"""
|
||||
if not source_path.exists():
|
||||
return None
|
||||
|
||||
# Determine destination directory
|
||||
if artifact_type == "video":
|
||||
dest_dir = self.videos_dir
|
||||
mimetype = "video/webm"
|
||||
elif artifact_type == "screenshot":
|
||||
dest_dir = self.screenshots_dir
|
||||
mimetype = "image/png"
|
||||
elif artifact_type == "trace":
|
||||
dest_dir = self.traces_dir
|
||||
mimetype = "application/zip"
|
||||
else:
|
||||
# Unknown type, store in root artifacts dir
|
||||
dest_dir = self.artifacts_dir
|
||||
mimetype = "application/octet-stream"
|
||||
|
||||
# Create run-specific subdirectory
|
||||
run_dir = dest_dir / run_id
|
||||
run_dir.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# Copy file
|
||||
dest_path = run_dir / source_path.name
|
||||
try:
|
||||
shutil.copy2(source_path, dest_path)
|
||||
except Exception:
|
||||
return None
|
||||
|
||||
# Build streaming URL
|
||||
url = f"/tools/tester/api/artifact/{run_id}/{source_path.name}"
|
||||
|
||||
return TestArtifact(
|
||||
type=artifact_type,
|
||||
filename=source_path.name,
|
||||
path=str(dest_path),
|
||||
size=dest_path.stat().st_size,
|
||||
mimetype=mimetype,
|
||||
url=url,
|
||||
)
|
||||
|
||||
def get_artifact(self, run_id: str, filename: str) -> Optional[Path]:
|
||||
"""
|
||||
Retrieve an artifact file.
|
||||
|
||||
Args:
|
||||
run_id: Test run ID
|
||||
filename: Artifact filename
|
||||
|
||||
Returns:
|
||||
Path to artifact file or None if not found
|
||||
"""
|
||||
# Search in all artifact directories
|
||||
for artifact_dir in [self.videos_dir, self.screenshots_dir, self.traces_dir]:
|
||||
artifact_path = artifact_dir / run_id / filename
|
||||
if artifact_path.exists():
|
||||
return artifact_path
|
||||
|
||||
# Check root artifacts dir
|
||||
artifact_path = self.artifacts_dir / run_id / filename
|
||||
if artifact_path.exists():
|
||||
return artifact_path
|
||||
|
||||
return None
|
||||
|
||||
def list_artifacts(self, run_id: str) -> list[TestArtifact]:
|
||||
"""
|
||||
List all artifacts for a test run.
|
||||
|
||||
Args:
|
||||
run_id: Test run ID
|
||||
|
||||
Returns:
|
||||
List of TestArtifact metadata
|
||||
"""
|
||||
artifacts = []
|
||||
|
||||
# Search in all artifact directories
|
||||
type_mapping = {
|
||||
self.videos_dir: ("video", "video/webm"),
|
||||
self.screenshots_dir: ("screenshot", "image/png"),
|
||||
self.traces_dir: ("trace", "application/zip"),
|
||||
}
|
||||
|
||||
for artifact_dir, (artifact_type, mimetype) in type_mapping.items():
|
||||
run_dir = artifact_dir / run_id
|
||||
if not run_dir.exists():
|
||||
continue
|
||||
|
||||
for artifact_file in run_dir.iterdir():
|
||||
if artifact_file.is_file():
|
||||
artifacts.append(TestArtifact(
|
||||
type=artifact_type,
|
||||
filename=artifact_file.name,
|
||||
path=str(artifact_file),
|
||||
size=artifact_file.stat().st_size,
|
||||
mimetype=mimetype,
|
||||
url=f"/tools/tester/api/artifact/{run_id}/{artifact_file.name}",
|
||||
))
|
||||
|
||||
return artifacts
|
||||
|
||||
def cleanup_old_artifacts(self, keep_recent: int = 10):
|
||||
"""
|
||||
Clean up old artifact directories, keeping only the most recent runs.
|
||||
|
||||
Args:
|
||||
keep_recent: Number of recent runs to keep
|
||||
"""
|
||||
# Get all run directories sorted by modification time
|
||||
all_runs = []
|
||||
|
||||
for artifact_dir in [self.videos_dir, self.screenshots_dir, self.traces_dir]:
|
||||
for run_dir in artifact_dir.iterdir():
|
||||
if run_dir.is_dir():
|
||||
all_runs.append(run_dir)
|
||||
|
||||
# Sort by modification time (newest first)
|
||||
all_runs.sort(key=lambda p: p.stat().st_mtime, reverse=True)
|
||||
|
||||
# Keep only the most recent
|
||||
for old_run in all_runs[keep_recent:]:
|
||||
try:
|
||||
shutil.rmtree(old_run)
|
||||
except Exception:
|
||||
pass # Ignore errors during cleanup
|
||||
153
station/tools/tester/playwright/discovery.py
Normal file
153
station/tools/tester/playwright/discovery.py
Normal file
@@ -0,0 +1,153 @@
|
||||
"""
|
||||
Discover Playwright tests (.spec.ts files).
|
||||
"""
|
||||
|
||||
import re
|
||||
from pathlib import Path
|
||||
from typing import Optional
|
||||
from dataclasses import dataclass
|
||||
|
||||
|
||||
@dataclass
|
||||
class PlaywrightTestInfo:
|
||||
"""Information about a discovered Playwright test."""
|
||||
id: str
|
||||
name: str
|
||||
file_path: str
|
||||
test_name: str
|
||||
description: Optional[str] = None
|
||||
gherkin_feature: Optional[str] = None
|
||||
gherkin_scenario: Optional[str] = None
|
||||
tags: list[str] = None
|
||||
|
||||
def __post_init__(self):
|
||||
if self.tags is None:
|
||||
self.tags = []
|
||||
|
||||
|
||||
def discover_playwright_tests(tests_dir: Path) -> list[PlaywrightTestInfo]:
|
||||
"""
|
||||
Discover all Playwright tests in the frontend-tests directory.
|
||||
|
||||
Parses .spec.ts files to extract:
|
||||
- test() calls
|
||||
- describe() blocks
|
||||
- Gherkin metadata from comments
|
||||
- Tags from comments
|
||||
"""
|
||||
if not tests_dir.exists():
|
||||
return []
|
||||
|
||||
tests = []
|
||||
|
||||
# Find all .spec.ts files
|
||||
for spec_file in tests_dir.rglob("*.spec.ts"):
|
||||
relative_path = spec_file.relative_to(tests_dir)
|
||||
|
||||
# Read file content
|
||||
try:
|
||||
content = spec_file.read_text()
|
||||
except Exception:
|
||||
continue
|
||||
|
||||
# Extract describe blocks and tests
|
||||
tests_in_file = _parse_playwright_file(content, spec_file, relative_path)
|
||||
tests.extend(tests_in_file)
|
||||
|
||||
return tests
|
||||
|
||||
|
||||
def _parse_playwright_file(
|
||||
content: str,
|
||||
file_path: Path,
|
||||
relative_path: Path
|
||||
) -> list[PlaywrightTestInfo]:
|
||||
"""Parse a Playwright test file to extract test information."""
|
||||
tests = []
|
||||
|
||||
# Pattern to match test() calls
|
||||
# test('test name', async ({ page }) => { ... })
|
||||
# test.only('test name', ...)
|
||||
test_pattern = re.compile(
|
||||
r"test(?:\.\w+)?\s*\(\s*['\"]([^'\"]+)['\"]",
|
||||
re.MULTILINE
|
||||
)
|
||||
|
||||
# Pattern to match describe() blocks
|
||||
describe_pattern = re.compile(
|
||||
r"describe\s*\(\s*['\"]([^'\"]+)['\"]",
|
||||
re.MULTILINE
|
||||
)
|
||||
|
||||
# Extract metadata from comments above tests
|
||||
# Looking for JSDoc-style comments with metadata
|
||||
metadata_pattern = re.compile(
|
||||
r"/\*\*\s*\n((?:\s*\*.*\n)+)\s*\*/\s*\n\s*test",
|
||||
re.MULTILINE
|
||||
)
|
||||
|
||||
# Find all describe blocks to use as context
|
||||
describes = describe_pattern.findall(content)
|
||||
describe_context = describes[0] if describes else None
|
||||
|
||||
# Find all tests
|
||||
for match in test_pattern.finditer(content):
|
||||
test_name = match.group(1)
|
||||
|
||||
# Look for metadata comment before this test
|
||||
# Search backwards from the match position
|
||||
before_test = content[:match.start()]
|
||||
metadata_match = None
|
||||
for m in metadata_pattern.finditer(before_test):
|
||||
metadata_match = m
|
||||
|
||||
# Parse metadata if found
|
||||
gherkin_feature = None
|
||||
gherkin_scenario = None
|
||||
tags = []
|
||||
description = None
|
||||
|
||||
if metadata_match:
|
||||
metadata_block = metadata_match.group(1)
|
||||
|
||||
# Extract Feature, Scenario, Tags from metadata
|
||||
feature_match = re.search(r"\*\s*Feature:\s*(.+)", metadata_block)
|
||||
scenario_match = re.search(r"\*\s*Scenario:\s*(.+)", metadata_block)
|
||||
tags_match = re.search(r"\*\s*Tags:\s*(.+)", metadata_block)
|
||||
desc_match = re.search(r"\*\s*@description\s+(.+)", metadata_block)
|
||||
|
||||
if feature_match:
|
||||
gherkin_feature = feature_match.group(1).strip()
|
||||
if scenario_match:
|
||||
gherkin_scenario = scenario_match.group(1).strip()
|
||||
if tags_match:
|
||||
tags_str = tags_match.group(1).strip()
|
||||
tags = [t.strip() for t in re.findall(r"@[\w-]+", tags_str)]
|
||||
if desc_match:
|
||||
description = desc_match.group(1).strip()
|
||||
|
||||
# Build test ID
|
||||
module_name = str(relative_path).replace("/", ".").replace(".spec.ts", "")
|
||||
test_id = f"frontend.{module_name}.{_sanitize_test_name(test_name)}"
|
||||
|
||||
tests.append(PlaywrightTestInfo(
|
||||
id=test_id,
|
||||
name=test_name,
|
||||
file_path=str(relative_path),
|
||||
test_name=test_name,
|
||||
description=description or test_name,
|
||||
gherkin_feature=gherkin_feature,
|
||||
gherkin_scenario=gherkin_scenario,
|
||||
tags=tags,
|
||||
))
|
||||
|
||||
return tests
|
||||
|
||||
|
||||
def _sanitize_test_name(name: str) -> str:
|
||||
"""Convert test name to a valid identifier."""
|
||||
# Replace spaces and special chars with underscores
|
||||
sanitized = re.sub(r"[^\w]+", "_", name.lower())
|
||||
# Remove leading/trailing underscores
|
||||
sanitized = sanitized.strip("_")
|
||||
return sanitized
|
||||
189
station/tools/tester/playwright/runner.py
Normal file
189
station/tools/tester/playwright/runner.py
Normal file
@@ -0,0 +1,189 @@
|
||||
"""
|
||||
Execute Playwright tests and capture artifacts.
|
||||
"""
|
||||
|
||||
import subprocess
|
||||
import json
|
||||
import time
|
||||
from pathlib import Path
|
||||
from typing import Optional
|
||||
from dataclasses import dataclass, field
|
||||
|
||||
|
||||
@dataclass
|
||||
class PlaywrightResult:
|
||||
"""Result of a Playwright test execution."""
|
||||
test_id: str
|
||||
name: str
|
||||
status: str # "passed", "failed", "skipped"
|
||||
duration: float
|
||||
error_message: Optional[str] = None
|
||||
traceback: Optional[str] = None
|
||||
artifacts: list[dict] = field(default_factory=list)
|
||||
|
||||
|
||||
class PlaywrightRunner:
|
||||
"""Run Playwright tests and collect artifacts."""
|
||||
|
||||
def __init__(self, tests_dir: Path, artifacts_dir: Path):
|
||||
self.tests_dir = tests_dir
|
||||
self.artifacts_dir = artifacts_dir
|
||||
self.videos_dir = artifacts_dir / "videos"
|
||||
self.screenshots_dir = artifacts_dir / "screenshots"
|
||||
self.traces_dir = artifacts_dir / "traces"
|
||||
|
||||
# Ensure artifact directories exist
|
||||
self.videos_dir.mkdir(parents=True, exist_ok=True)
|
||||
self.screenshots_dir.mkdir(parents=True, exist_ok=True)
|
||||
self.traces_dir.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
def run_tests(
|
||||
self,
|
||||
test_files: Optional[list[str]] = None,
|
||||
run_id: Optional[str] = None
|
||||
) -> list[PlaywrightResult]:
|
||||
"""
|
||||
Run Playwright tests and collect results.
|
||||
|
||||
Args:
|
||||
test_files: List of test file paths to run (relative to tests_dir).
|
||||
If None, runs all tests.
|
||||
run_id: Optional run ID to namespace artifacts.
|
||||
|
||||
Returns:
|
||||
List of PlaywrightResult objects.
|
||||
"""
|
||||
if not self.tests_dir.exists():
|
||||
return []
|
||||
|
||||
# Build playwright command
|
||||
cmd = ["npx", "playwright", "test"]
|
||||
|
||||
# Add specific test files if provided
|
||||
if test_files:
|
||||
cmd.extend(test_files)
|
||||
|
||||
# Add reporter for JSON output
|
||||
results_file = self.artifacts_dir / f"results_{run_id or 'latest'}.json"
|
||||
cmd.extend([
|
||||
"--reporter=json",
|
||||
f"--output={results_file}"
|
||||
])
|
||||
|
||||
# Configure artifact collection
|
||||
# Videos and screenshots are configured in playwright.config.ts
|
||||
# We'll assume config is set to capture on failure
|
||||
|
||||
# Run tests
|
||||
start_time = time.time()
|
||||
|
||||
try:
|
||||
result = subprocess.run(
|
||||
cmd,
|
||||
cwd=self.tests_dir,
|
||||
capture_output=True,
|
||||
text=True,
|
||||
timeout=600 # 10 minute timeout
|
||||
)
|
||||
|
||||
# Parse results
|
||||
if results_file.exists():
|
||||
with open(results_file) as f:
|
||||
results_data = json.load(f)
|
||||
return self._parse_results(results_data, run_id)
|
||||
else:
|
||||
# No results file - likely error
|
||||
return self._create_error_result(result.stderr)
|
||||
|
||||
except subprocess.TimeoutExpired:
|
||||
return self._create_error_result("Tests timed out after 10 minutes")
|
||||
except Exception as e:
|
||||
return self._create_error_result(str(e))
|
||||
|
||||
def _parse_results(
|
||||
self,
|
||||
results_data: dict,
|
||||
run_id: Optional[str]
|
||||
) -> list[PlaywrightResult]:
|
||||
"""Parse Playwright JSON results."""
|
||||
parsed_results = []
|
||||
|
||||
# Playwright JSON reporter structure:
|
||||
# {
|
||||
# "suites": [...],
|
||||
# "tests": [...],
|
||||
# }
|
||||
|
||||
tests = results_data.get("tests", [])
|
||||
|
||||
for test in tests:
|
||||
test_id = test.get("testId", "unknown")
|
||||
title = test.get("title", "Unknown test")
|
||||
status = test.get("status", "unknown") # passed, failed, skipped
|
||||
duration = test.get("duration", 0) / 1000.0 # Convert ms to seconds
|
||||
|
||||
error_message = None
|
||||
traceback = None
|
||||
|
||||
# Extract error if failed
|
||||
if status == "failed":
|
||||
error = test.get("error", {})
|
||||
error_message = error.get("message", "Test failed")
|
||||
traceback = error.get("stack", "")
|
||||
|
||||
# Collect artifacts
|
||||
artifacts = []
|
||||
for attachment in test.get("attachments", []):
|
||||
artifact_type = attachment.get("contentType", "")
|
||||
artifact_path = attachment.get("path", "")
|
||||
|
||||
if artifact_path:
|
||||
artifact_file = Path(artifact_path)
|
||||
if artifact_file.exists():
|
||||
# Determine type
|
||||
if "video" in artifact_type:
|
||||
type_label = "video"
|
||||
elif "image" in artifact_type:
|
||||
type_label = "screenshot"
|
||||
elif "trace" in artifact_type:
|
||||
type_label = "trace"
|
||||
else:
|
||||
type_label = "attachment"
|
||||
|
||||
artifacts.append({
|
||||
"type": type_label,
|
||||
"filename": artifact_file.name,
|
||||
"path": str(artifact_file),
|
||||
"size": artifact_file.stat().st_size,
|
||||
"mimetype": artifact_type,
|
||||
})
|
||||
|
||||
parsed_results.append(PlaywrightResult(
|
||||
test_id=test_id,
|
||||
name=title,
|
||||
status=status,
|
||||
duration=duration,
|
||||
error_message=error_message,
|
||||
traceback=traceback,
|
||||
artifacts=artifacts,
|
||||
))
|
||||
|
||||
return parsed_results
|
||||
|
||||
def _create_error_result(self, error_msg: str) -> list[PlaywrightResult]:
|
||||
"""Create an error result when test execution fails."""
|
||||
return [
|
||||
PlaywrightResult(
|
||||
test_id="playwright_error",
|
||||
name="Playwright Execution Error",
|
||||
status="failed",
|
||||
duration=0.0,
|
||||
error_message=error_msg,
|
||||
traceback="",
|
||||
artifacts=[],
|
||||
)
|
||||
]
|
||||
|
||||
def get_artifact_url(self, run_id: str, artifact_filename: str) -> str:
|
||||
"""Generate URL for streaming an artifact."""
|
||||
return f"/tools/tester/api/artifact/{run_id}/{artifact_filename}"
|
||||
862
station/tools/tester/templates/filters.html
Normal file
862
station/tools/tester/templates/filters.html
Normal file
@@ -0,0 +1,862 @@
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
<head>
|
||||
<meta charset="UTF-8">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
||||
<title>Test Filters - Ward</title>
|
||||
<style>
|
||||
* {
|
||||
box-sizing: border-box;
|
||||
margin: 0;
|
||||
padding: 0;
|
||||
}
|
||||
|
||||
body {
|
||||
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
|
||||
background: #111827;
|
||||
color: #e5e7eb;
|
||||
min-height: 100vh;
|
||||
}
|
||||
|
||||
.container {
|
||||
max-width: 1400px;
|
||||
margin: 0 auto;
|
||||
padding: 20px;
|
||||
}
|
||||
|
||||
header {
|
||||
display: flex;
|
||||
justify-content: space-between;
|
||||
align-items: center;
|
||||
margin-bottom: 20px;
|
||||
padding-bottom: 20px;
|
||||
border-bottom: 1px solid #374151;
|
||||
}
|
||||
|
||||
h1 {
|
||||
font-size: 1.5rem;
|
||||
font-weight: 600;
|
||||
color: #f9fafb;
|
||||
}
|
||||
|
||||
.nav-links {
|
||||
display: flex;
|
||||
gap: 12px;
|
||||
font-size: 0.875rem;
|
||||
}
|
||||
|
||||
.nav-links a {
|
||||
color: #60a5fa;
|
||||
text-decoration: none;
|
||||
padding: 6px 12px;
|
||||
border-radius: 4px;
|
||||
transition: background 0.2s;
|
||||
}
|
||||
|
||||
.nav-links a:hover {
|
||||
background: #374151;
|
||||
}
|
||||
|
||||
.nav-links a.active {
|
||||
background: #2563eb;
|
||||
color: white;
|
||||
}
|
||||
|
||||
/* Filter Panel */
|
||||
.filter-panel {
|
||||
background: #1f2937;
|
||||
border-radius: 8px;
|
||||
padding: 20px;
|
||||
margin-bottom: 20px;
|
||||
}
|
||||
|
||||
.filter-section {
|
||||
margin-bottom: 20px;
|
||||
}
|
||||
|
||||
.filter-section:last-child {
|
||||
margin-bottom: 0;
|
||||
}
|
||||
|
||||
.filter-label {
|
||||
font-weight: 600;
|
||||
font-size: 0.875rem;
|
||||
color: #9ca3af;
|
||||
text-transform: uppercase;
|
||||
margin-bottom: 10px;
|
||||
display: block;
|
||||
}
|
||||
|
||||
.filter-group {
|
||||
display: flex;
|
||||
gap: 8px;
|
||||
flex-wrap: wrap;
|
||||
}
|
||||
|
||||
.filter-chip {
|
||||
padding: 6px 12px;
|
||||
border-radius: 6px;
|
||||
font-size: 0.875rem;
|
||||
cursor: pointer;
|
||||
transition: all 0.2s;
|
||||
background: #374151;
|
||||
color: #e5e7eb;
|
||||
border: 2px solid transparent;
|
||||
}
|
||||
|
||||
.filter-chip:hover {
|
||||
background: #4b5563;
|
||||
}
|
||||
|
||||
.filter-chip.active {
|
||||
background: #2563eb;
|
||||
color: white;
|
||||
border-color: #1d4ed8;
|
||||
}
|
||||
|
||||
.search-box {
|
||||
width: 100%;
|
||||
padding: 10px 12px;
|
||||
background: #374151;
|
||||
border: 2px solid #4b5563;
|
||||
border-radius: 6px;
|
||||
color: #e5e7eb;
|
||||
font-size: 0.875rem;
|
||||
transition: border-color 0.2s;
|
||||
}
|
||||
|
||||
.search-box:focus {
|
||||
outline: none;
|
||||
border-color: #2563eb;
|
||||
}
|
||||
|
||||
.search-box::placeholder {
|
||||
color: #6b7280;
|
||||
}
|
||||
|
||||
/* Test List */
|
||||
.test-list {
|
||||
background: #1f2937;
|
||||
border-radius: 8px;
|
||||
overflow: hidden;
|
||||
}
|
||||
|
||||
.list-header {
|
||||
padding: 12px 16px;
|
||||
background: #374151;
|
||||
font-weight: 600;
|
||||
display: flex;
|
||||
justify-content: space-between;
|
||||
align-items: center;
|
||||
}
|
||||
|
||||
.test-count {
|
||||
font-size: 0.75rem;
|
||||
color: #9ca3af;
|
||||
background: #1f2937;
|
||||
padding: 4px 10px;
|
||||
border-radius: 10px;
|
||||
}
|
||||
|
||||
.list-body {
|
||||
padding: 16px;
|
||||
max-height: 600px;
|
||||
overflow-y: auto;
|
||||
}
|
||||
|
||||
.test-card {
|
||||
background: #374151;
|
||||
border-radius: 6px;
|
||||
padding: 12px;
|
||||
margin-bottom: 8px;
|
||||
cursor: pointer;
|
||||
transition: all 0.2s;
|
||||
border: 2px solid transparent;
|
||||
}
|
||||
|
||||
.test-card:hover {
|
||||
background: #4b5563;
|
||||
border-color: #2563eb;
|
||||
}
|
||||
|
||||
.test-card.selected {
|
||||
border-color: #2563eb;
|
||||
background: #1e3a8a;
|
||||
}
|
||||
|
||||
.test-header {
|
||||
display: flex;
|
||||
justify-content: space-between;
|
||||
align-items: start;
|
||||
margin-bottom: 8px;
|
||||
}
|
||||
|
||||
.test-title {
|
||||
font-weight: 600;
|
||||
color: #f9fafb;
|
||||
font-size: 0.95rem;
|
||||
}
|
||||
|
||||
.test-status-badge {
|
||||
padding: 2px 8px;
|
||||
border-radius: 4px;
|
||||
font-size: 0.75rem;
|
||||
font-weight: 600;
|
||||
text-transform: uppercase;
|
||||
}
|
||||
|
||||
.status-passed {
|
||||
background: #065f46;
|
||||
color: #34d399;
|
||||
}
|
||||
|
||||
.status-failed {
|
||||
background: #7f1d1d;
|
||||
color: #f87171;
|
||||
}
|
||||
|
||||
.status-skipped {
|
||||
background: #78350f;
|
||||
color: #fbbf24;
|
||||
}
|
||||
|
||||
.status-unknown {
|
||||
background: #374151;
|
||||
color: #9ca3af;
|
||||
}
|
||||
|
||||
.test-path {
|
||||
font-size: 0.75rem;
|
||||
color: #9ca3af;
|
||||
font-family: monospace;
|
||||
margin-bottom: 6px;
|
||||
}
|
||||
|
||||
.test-doc {
|
||||
font-size: 0.875rem;
|
||||
color: #d1d5db;
|
||||
line-height: 1.4;
|
||||
}
|
||||
|
||||
.test-meta {
|
||||
display: flex;
|
||||
gap: 12px;
|
||||
margin-top: 8px;
|
||||
font-size: 0.75rem;
|
||||
color: #6b7280;
|
||||
}
|
||||
|
||||
.test-meta span {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
gap: 4px;
|
||||
}
|
||||
|
||||
.empty-state {
|
||||
text-align: center;
|
||||
padding: 60px 20px;
|
||||
color: #6b7280;
|
||||
}
|
||||
|
||||
.empty-state-icon {
|
||||
font-size: 3rem;
|
||||
margin-bottom: 16px;
|
||||
opacity: 0.5;
|
||||
}
|
||||
|
||||
/* Action Bar */
|
||||
.action-bar {
|
||||
display: flex;
|
||||
justify-content: space-between;
|
||||
align-items: center;
|
||||
padding: 12px 16px;
|
||||
background: #1f2937;
|
||||
border-radius: 8px;
|
||||
margin-bottom: 20px;
|
||||
}
|
||||
|
||||
.btn {
|
||||
padding: 8px 16px;
|
||||
border: none;
|
||||
border-radius: 6px;
|
||||
font-size: 0.875rem;
|
||||
cursor: pointer;
|
||||
transition: all 0.2s;
|
||||
}
|
||||
|
||||
.btn-primary {
|
||||
background: #2563eb;
|
||||
color: white;
|
||||
}
|
||||
|
||||
.btn-primary:hover {
|
||||
background: #1d4ed8;
|
||||
}
|
||||
|
||||
.btn-primary:disabled {
|
||||
background: #4b5563;
|
||||
cursor: not-allowed;
|
||||
}
|
||||
|
||||
.btn-secondary {
|
||||
background: #374151;
|
||||
color: #e5e7eb;
|
||||
}
|
||||
|
||||
.btn-secondary:hover {
|
||||
background: #4b5563;
|
||||
}
|
||||
|
||||
.selection-info {
|
||||
font-size: 0.875rem;
|
||||
color: #9ca3af;
|
||||
}
|
||||
|
||||
.selection-info strong {
|
||||
color: #60a5fa;
|
||||
}
|
||||
|
||||
/* Responsive */
|
||||
@media (max-width: 768px) {
|
||||
.filter-section {
|
||||
margin-bottom: 16px;
|
||||
}
|
||||
|
||||
.action-bar {
|
||||
flex-direction: column;
|
||||
gap: 12px;
|
||||
align-items: stretch;
|
||||
}
|
||||
|
||||
.selection-info {
|
||||
text-align: center;
|
||||
}
|
||||
}
|
||||
</style>
|
||||
</head>
|
||||
<body>
|
||||
<div class="container">
|
||||
<header>
|
||||
<div>
|
||||
<h1>Contract HTTP Tests - Filters</h1>
|
||||
<div class="nav-links">
|
||||
<a href="/tools/tester/">Runner</a>
|
||||
<a href="/tools/tester/filters" class="active">Filters</a>
|
||||
</div>
|
||||
</div>
|
||||
<div style="display: flex; align-items: center; gap: 12px; font-size: 0.875rem; color: #9ca3af;">
|
||||
<span>Target:</span>
|
||||
<select id="environmentSelector" style="background: #374151; color: #e5e7eb; border: 1px solid #4b5563; border-radius: 4px; padding: 4px 8px; font-size: 0.875rem; cursor: pointer;">
|
||||
<option value="">Loading...</option>
|
||||
</select>
|
||||
<strong id="currentUrl" style="color: #60a5fa;">Loading...</strong>
|
||||
</div>
|
||||
</header>
|
||||
|
||||
<div class="filter-panel">
|
||||
<div class="filter-section">
|
||||
<label class="filter-label">Search</label>
|
||||
<input
|
||||
type="text"
|
||||
class="search-box"
|
||||
id="searchInput"
|
||||
placeholder="Search by test name, class, or description..."
|
||||
autocomplete="off"
|
||||
>
|
||||
</div>
|
||||
|
||||
<div class="filter-section">
|
||||
<label class="filter-label">Domain</label>
|
||||
<div class="filter-group" id="domainFilters">
|
||||
<div class="filter-chip active" data-filter="all" onclick="toggleDomainFilter(this)">
|
||||
All Domains
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="filter-section">
|
||||
<label class="filter-label">Module</label>
|
||||
<div class="filter-group" id="moduleFilters">
|
||||
<div class="filter-chip active" data-filter="all" onclick="toggleModuleFilter(this)">
|
||||
All Modules
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="filter-section">
|
||||
<label class="filter-label">Status (from last run)</label>
|
||||
<div class="filter-group">
|
||||
<div class="filter-chip active" data-status="all" onclick="toggleStatusFilter(this)">
|
||||
All
|
||||
</div>
|
||||
<div class="filter-chip" data-status="passed" onclick="toggleStatusFilter(this)">
|
||||
Passed
|
||||
</div>
|
||||
<div class="filter-chip" data-status="failed" onclick="toggleStatusFilter(this)">
|
||||
Failed
|
||||
</div>
|
||||
<div class="filter-chip" data-status="skipped" onclick="toggleStatusFilter(this)">
|
||||
Skipped
|
||||
</div>
|
||||
<div class="filter-chip" data-status="unknown" onclick="toggleStatusFilter(this)">
|
||||
Not Run
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="filter-section">
|
||||
<button class="btn btn-secondary" onclick="clearFilters()">Clear All Filters</button>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="action-bar">
|
||||
<div class="selection-info">
|
||||
<span id="selectedCount">0</span> tests selected
|
||||
</div>
|
||||
<div style="display: flex; gap: 10px;">
|
||||
<button class="btn btn-secondary" onclick="selectAll()">Select All Visible</button>
|
||||
<button class="btn btn-secondary" onclick="deselectAll()">Deselect All</button>
|
||||
<button class="btn btn-primary" id="runSelectedBtn" onclick="runSelected()">Run Selected</button>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="test-list">
|
||||
<div class="list-header">
|
||||
<span>Tests</span>
|
||||
<span class="test-count" id="testCount">Loading...</span>
|
||||
</div>
|
||||
<div class="list-body" id="testListBody">
|
||||
<div class="empty-state">
|
||||
<div class="empty-state-icon">🔍</div>
|
||||
<div>Loading tests...</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<script>
|
||||
let allTests = [];
|
||||
let selectedTests = new Set();
|
||||
let lastRunResults = {};
|
||||
|
||||
// Filter state
|
||||
let filters = {
|
||||
search: '',
|
||||
domains: new Set(['all']),
|
||||
modules: new Set(['all']),
|
||||
status: new Set(['all'])
|
||||
};
|
||||
|
||||
// Load tests on page load
|
||||
async function loadTests() {
|
||||
try {
|
||||
const response = await fetch('/tools/tester/api/tests');
|
||||
const data = await response.json();
|
||||
allTests = data.tests;
|
||||
|
||||
// Extract unique domains and modules
|
||||
const domains = new Set();
|
||||
const modules = new Set();
|
||||
|
||||
allTests.forEach(test => {
|
||||
const parts = test.id.split('.');
|
||||
if (parts.length >= 2) {
|
||||
domains.add(parts[0]);
|
||||
modules.add(parts[1]);
|
||||
}
|
||||
});
|
||||
|
||||
// Populate domain filters
|
||||
const domainFilters = document.getElementById('domainFilters');
|
||||
domains.forEach(domain => {
|
||||
const chip = document.createElement('div');
|
||||
chip.className = 'filter-chip';
|
||||
chip.dataset.filter = domain;
|
||||
chip.textContent = domain;
|
||||
chip.onclick = function() { toggleDomainFilter(this); };
|
||||
domainFilters.appendChild(chip);
|
||||
});
|
||||
|
||||
// Populate module filters
|
||||
const moduleFilters = document.getElementById('moduleFilters');
|
||||
modules.forEach(module => {
|
||||
const chip = document.createElement('div');
|
||||
chip.className = 'filter-chip';
|
||||
chip.dataset.filter = module;
|
||||
chip.textContent = module.replace('test_', '');
|
||||
chip.onclick = function() { toggleModuleFilter(this); };
|
||||
moduleFilters.appendChild(chip);
|
||||
});
|
||||
|
||||
// Try to load last run results
|
||||
await loadLastRunResults();
|
||||
|
||||
renderTests();
|
||||
} catch (error) {
|
||||
console.error('Failed to load tests:', error);
|
||||
document.getElementById('testListBody').innerHTML = `
|
||||
<div class="empty-state">
|
||||
<div class="empty-state-icon">⚠️</div>
|
||||
<div>Failed to load tests</div>
|
||||
</div>
|
||||
`;
|
||||
}
|
||||
}
|
||||
|
||||
async function loadLastRunResults() {
|
||||
try {
|
||||
const response = await fetch('/tools/tester/api/runs');
|
||||
const data = await response.json();
|
||||
|
||||
if (data.runs && data.runs.length > 0) {
|
||||
const lastRunId = data.runs[0];
|
||||
const runResponse = await fetch(`/tools/tester/api/run/${lastRunId}`);
|
||||
const runData = await runResponse.json();
|
||||
|
||||
runData.results.forEach(result => {
|
||||
lastRunResults[result.test_id] = result.status;
|
||||
});
|
||||
}
|
||||
} catch (error) {
|
||||
console.error('Failed to load last run results:', error);
|
||||
}
|
||||
}
|
||||
|
||||
function getTestStatus(testId) {
|
||||
return lastRunResults[testId] || 'unknown';
|
||||
}
|
||||
|
||||
function toggleDomainFilter(chip) {
|
||||
const filter = chip.dataset.filter;
|
||||
|
||||
if (filter === 'all') {
|
||||
// Deselect all others
|
||||
document.querySelectorAll('#domainFilters .filter-chip').forEach(c => {
|
||||
c.classList.remove('active');
|
||||
});
|
||||
chip.classList.add('active');
|
||||
filters.domains = new Set(['all']);
|
||||
} else {
|
||||
// Remove 'all'
|
||||
document.querySelector('#domainFilters [data-filter="all"]').classList.remove('active');
|
||||
|
||||
if (filters.domains.has(filter)) {
|
||||
filters.domains.delete(filter);
|
||||
chip.classList.remove('active');
|
||||
} else {
|
||||
filters.domains.add(filter);
|
||||
chip.classList.add('active');
|
||||
}
|
||||
|
||||
// If nothing selected, select all
|
||||
if (filters.domains.size === 0 || filters.domains.has('all')) {
|
||||
document.querySelector('#domainFilters [data-filter="all"]').classList.add('active');
|
||||
document.querySelectorAll('#domainFilters .filter-chip:not([data-filter="all"])').forEach(c => {
|
||||
c.classList.remove('active');
|
||||
});
|
||||
filters.domains = new Set(['all']);
|
||||
}
|
||||
}
|
||||
|
||||
renderTests();
|
||||
}
|
||||
|
||||
function toggleModuleFilter(chip) {
|
||||
const filter = chip.dataset.filter;
|
||||
|
||||
if (filter === 'all') {
|
||||
document.querySelectorAll('#moduleFilters .filter-chip').forEach(c => {
|
||||
c.classList.remove('active');
|
||||
});
|
||||
chip.classList.add('active');
|
||||
filters.modules = new Set(['all']);
|
||||
} else {
|
||||
document.querySelector('#moduleFilters [data-filter="all"]').classList.remove('active');
|
||||
|
||||
if (filters.modules.has(filter)) {
|
||||
filters.modules.delete(filter);
|
||||
chip.classList.remove('active');
|
||||
} else {
|
||||
filters.modules.add(filter);
|
||||
chip.classList.add('active');
|
||||
}
|
||||
|
||||
if (filters.modules.size === 0 || filters.modules.has('all')) {
|
||||
document.querySelector('#moduleFilters [data-filter="all"]').classList.add('active');
|
||||
document.querySelectorAll('#moduleFilters .filter-chip:not([data-filter="all"])').forEach(c => {
|
||||
c.classList.remove('active');
|
||||
});
|
||||
filters.modules = new Set(['all']);
|
||||
}
|
||||
}
|
||||
|
||||
renderTests();
|
||||
}
|
||||
|
||||
function toggleStatusFilter(chip) {
|
||||
const status = chip.dataset.status;
|
||||
|
||||
if (status === 'all') {
|
||||
document.querySelectorAll('[data-status]').forEach(c => {
|
||||
c.classList.remove('active');
|
||||
});
|
||||
chip.classList.add('active');
|
||||
filters.status = new Set(['all']);
|
||||
} else {
|
||||
document.querySelector('[data-status="all"]').classList.remove('active');
|
||||
|
||||
if (filters.status.has(status)) {
|
||||
filters.status.delete(status);
|
||||
chip.classList.remove('active');
|
||||
} else {
|
||||
filters.status.add(status);
|
||||
chip.classList.add('active');
|
||||
}
|
||||
|
||||
if (filters.status.size === 0 || filters.status.has('all')) {
|
||||
document.querySelector('[data-status="all"]').classList.add('active');
|
||||
document.querySelectorAll('[data-status]:not([data-status="all"])').forEach(c => {
|
||||
c.classList.remove('active');
|
||||
});
|
||||
filters.status = new Set(['all']);
|
||||
}
|
||||
}
|
||||
|
||||
renderTests();
|
||||
}
|
||||
|
||||
function clearFilters() {
|
||||
// Reset search
|
||||
document.getElementById('searchInput').value = '';
|
||||
filters.search = '';
|
||||
|
||||
// Reset domains
|
||||
document.querySelectorAll('#domainFilters .filter-chip').forEach(c => c.classList.remove('active'));
|
||||
document.querySelector('#domainFilters [data-filter="all"]').classList.add('active');
|
||||
filters.domains = new Set(['all']);
|
||||
|
||||
// Reset modules
|
||||
document.querySelectorAll('#moduleFilters .filter-chip').forEach(c => c.classList.remove('active'));
|
||||
document.querySelector('#moduleFilters [data-filter="all"]').classList.add('active');
|
||||
filters.modules = new Set(['all']);
|
||||
|
||||
// Reset status
|
||||
document.querySelectorAll('[data-status]').forEach(c => c.classList.remove('active'));
|
||||
document.querySelector('[data-status="all"]').classList.add('active');
|
||||
filters.status = new Set(['all']);
|
||||
|
||||
renderTests();
|
||||
}
|
||||
|
||||
function filterTests() {
|
||||
return allTests.filter(test => {
|
||||
const parts = test.id.split('.');
|
||||
const domain = parts[0];
|
||||
const module = parts[1];
|
||||
const status = getTestStatus(test.id);
|
||||
|
||||
// Search filter
|
||||
if (filters.search) {
|
||||
const searchLower = filters.search.toLowerCase();
|
||||
const matchesSearch =
|
||||
test.name.toLowerCase().includes(searchLower) ||
|
||||
test.class_name.toLowerCase().includes(searchLower) ||
|
||||
(test.doc && test.doc.toLowerCase().includes(searchLower)) ||
|
||||
test.id.toLowerCase().includes(searchLower);
|
||||
|
||||
if (!matchesSearch) return false;
|
||||
}
|
||||
|
||||
// Domain filter
|
||||
if (!filters.domains.has('all') && !filters.domains.has(domain)) {
|
||||
return false;
|
||||
}
|
||||
|
||||
// Module filter
|
||||
if (!filters.modules.has('all') && !filters.modules.has(module)) {
|
||||
return false;
|
||||
}
|
||||
|
||||
// Status filter
|
||||
if (!filters.status.has('all') && !filters.status.has(status)) {
|
||||
return false;
|
||||
}
|
||||
|
||||
return true;
|
||||
});
|
||||
}
|
||||
|
||||
function renderTests() {
|
||||
const filteredTests = filterTests();
|
||||
const container = document.getElementById('testListBody');
|
||||
|
||||
document.getElementById('testCount').textContent = `${filteredTests.length} of ${allTests.length}`;
|
||||
|
||||
if (filteredTests.length === 0) {
|
||||
container.innerHTML = `
|
||||
<div class="empty-state">
|
||||
<div class="empty-state-icon">🔍</div>
|
||||
<div>No tests match your filters</div>
|
||||
</div>
|
||||
`;
|
||||
return;
|
||||
}
|
||||
|
||||
container.innerHTML = filteredTests.map(test => {
|
||||
const status = getTestStatus(test.id);
|
||||
const isSelected = selectedTests.has(test.id);
|
||||
const parts = test.id.split('.');
|
||||
const domain = parts[0];
|
||||
const module = parts[1];
|
||||
|
||||
return `
|
||||
<div class="test-card ${isSelected ? 'selected' : ''}" onclick="toggleTestSelection('${test.id}')" data-test-id="${test.id}">
|
||||
<div class="test-header">
|
||||
<div class="test-title">${formatTestName(test.method_name)}</div>
|
||||
<div class="test-status-badge status-${status}">${status}</div>
|
||||
</div>
|
||||
<div class="test-path">${test.id}</div>
|
||||
<div class="test-doc">${test.doc || 'No description'}</div>
|
||||
<div class="test-meta">
|
||||
<span>📁 ${domain}</span>
|
||||
<span>📄 ${module}</span>
|
||||
<span>🏷️ ${test.class_name}</span>
|
||||
</div>
|
||||
</div>
|
||||
`;
|
||||
}).join('');
|
||||
|
||||
updateSelectionInfo();
|
||||
}
|
||||
|
||||
function formatTestName(name) {
|
||||
return name.replace(/^test_/, '').replace(/_/g, ' ');
|
||||
}
|
||||
|
||||
function toggleTestSelection(testId) {
|
||||
if (selectedTests.has(testId)) {
|
||||
selectedTests.delete(testId);
|
||||
} else {
|
||||
selectedTests.add(testId);
|
||||
}
|
||||
|
||||
// Update UI
|
||||
const card = document.querySelector(`[data-test-id="${testId}"]`);
|
||||
if (card) {
|
||||
card.classList.toggle('selected');
|
||||
}
|
||||
|
||||
updateSelectionInfo();
|
||||
}
|
||||
|
||||
function selectAll() {
|
||||
const filteredTests = filterTests();
|
||||
filteredTests.forEach(test => selectedTests.add(test.id));
|
||||
renderTests();
|
||||
}
|
||||
|
||||
function deselectAll() {
|
||||
selectedTests.clear();
|
||||
renderTests();
|
||||
}
|
||||
|
||||
function updateSelectionInfo() {
|
||||
document.getElementById('selectedCount').textContent = selectedTests.size;
|
||||
document.getElementById('runSelectedBtn').disabled = selectedTests.size === 0;
|
||||
}
|
||||
|
||||
async function runSelected() {
|
||||
if (selectedTests.size === 0) {
|
||||
alert('No tests selected');
|
||||
return;
|
||||
}
|
||||
|
||||
const testIds = Array.from(selectedTests);
|
||||
|
||||
try {
|
||||
const response = await fetch('/tools/tester/api/run', {
|
||||
method: 'POST',
|
||||
headers: { 'Content-Type': 'application/json' },
|
||||
body: JSON.stringify({ test_ids: testIds }),
|
||||
});
|
||||
const data = await response.json();
|
||||
|
||||
// Build URL params to preserve filter state in runner
|
||||
const params = new URLSearchParams();
|
||||
params.set('run', data.run_id);
|
||||
|
||||
// Pass filter state
|
||||
if (filters.search) params.set('search', filters.search);
|
||||
if (!filters.domains.has('all')) {
|
||||
params.set('domains', Array.from(filters.domains).join(','));
|
||||
}
|
||||
if (!filters.modules.has('all')) {
|
||||
params.set('modules', Array.from(filters.modules).join(','));
|
||||
}
|
||||
if (!filters.status.has('all')) {
|
||||
params.set('status', Array.from(filters.status).join(','));
|
||||
}
|
||||
|
||||
// Redirect to main runner with filters applied
|
||||
window.location.href = `/tools/tester/?${params.toString()}`;
|
||||
} catch (error) {
|
||||
console.error('Failed to start run:', error);
|
||||
alert('Failed to start test run');
|
||||
}
|
||||
}
|
||||
|
||||
// Search input handler
|
||||
document.getElementById('searchInput').addEventListener('input', (e) => {
|
||||
filters.search = e.target.value;
|
||||
renderTests();
|
||||
});
|
||||
|
||||
// Load environments
|
||||
async function loadEnvironments() {
|
||||
try {
|
||||
const response = await fetch('/tools/tester/api/environments');
|
||||
const data = await response.json();
|
||||
const selector = document.getElementById('environmentSelector');
|
||||
const currentUrl = document.getElementById('currentUrl');
|
||||
|
||||
const savedEnvId = localStorage.getItem('selectedEnvironment');
|
||||
let selectedEnv = null;
|
||||
|
||||
selector.innerHTML = data.environments.map(env => {
|
||||
const isDefault = env.default || env.id === savedEnvId;
|
||||
if (isDefault) selectedEnv = env;
|
||||
return `<option value="${env.id}" ${isDefault ? 'selected' : ''}>${env.name} ${env.has_api_key ? '🔑' : ''}</option>`;
|
||||
}).join('');
|
||||
|
||||
if (selectedEnv) {
|
||||
currentUrl.textContent = selectedEnv.url;
|
||||
}
|
||||
|
||||
selector.addEventListener('change', async (e) => {
|
||||
const envId = e.target.value;
|
||||
try {
|
||||
const response = await fetch(`/tools/tester/api/environment/select?env_id=${envId}`, {
|
||||
method: 'POST'
|
||||
});
|
||||
const data = await response.json();
|
||||
|
||||
if (data.success) {
|
||||
currentUrl.textContent = data.environment.url;
|
||||
localStorage.setItem('selectedEnvironment', envId);
|
||||
}
|
||||
} catch (error) {
|
||||
console.error('Failed to switch environment:', error);
|
||||
alert('Failed to switch environment');
|
||||
}
|
||||
});
|
||||
} catch (error) {
|
||||
console.error('Failed to load environments:', error);
|
||||
}
|
||||
}
|
||||
|
||||
// Load tests on page load
|
||||
loadEnvironments();
|
||||
loadTests();
|
||||
</script>
|
||||
</body>
|
||||
</html>
|
||||
1187
station/tools/tester/templates/filters_v2.html
Normal file
1187
station/tools/tester/templates/filters_v2.html
Normal file
File diff suppressed because it is too large
Load Diff
909
station/tools/tester/templates/index.html
Normal file
909
station/tools/tester/templates/index.html
Normal file
@@ -0,0 +1,909 @@
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
<head>
|
||||
<meta charset="UTF-8">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
||||
<title>Contract Tests - Ward</title>
|
||||
<style>
|
||||
* {
|
||||
box-sizing: border-box;
|
||||
margin: 0;
|
||||
padding: 0;
|
||||
}
|
||||
|
||||
body {
|
||||
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
|
||||
background: #111827;
|
||||
color: #e5e7eb;
|
||||
min-height: 100vh;
|
||||
}
|
||||
|
||||
.container {
|
||||
max-width: 1400px;
|
||||
margin: 0 auto;
|
||||
padding: 20px;
|
||||
}
|
||||
|
||||
header {
|
||||
display: flex;
|
||||
justify-content: space-between;
|
||||
align-items: center;
|
||||
margin-bottom: 20px;
|
||||
padding-bottom: 20px;
|
||||
border-bottom: 1px solid #374151;
|
||||
}
|
||||
|
||||
h1 {
|
||||
font-size: 1.5rem;
|
||||
font-weight: 600;
|
||||
color: #f9fafb;
|
||||
}
|
||||
|
||||
.config-info {
|
||||
font-size: 0.875rem;
|
||||
color: #9ca3af;
|
||||
}
|
||||
|
||||
.config-info strong {
|
||||
color: #60a5fa;
|
||||
}
|
||||
|
||||
.toolbar {
|
||||
display: flex;
|
||||
gap: 10px;
|
||||
margin-bottom: 20px;
|
||||
flex-wrap: wrap;
|
||||
align-items: center;
|
||||
}
|
||||
|
||||
button {
|
||||
padding: 8px 16px;
|
||||
border: none;
|
||||
border-radius: 6px;
|
||||
font-size: 0.875rem;
|
||||
cursor: pointer;
|
||||
transition: all 0.2s;
|
||||
}
|
||||
|
||||
.btn-primary {
|
||||
background: #2563eb;
|
||||
color: white;
|
||||
}
|
||||
|
||||
.btn-primary:hover {
|
||||
background: #1d4ed8;
|
||||
}
|
||||
|
||||
.btn-primary:disabled {
|
||||
background: #4b5563;
|
||||
cursor: not-allowed;
|
||||
}
|
||||
|
||||
.btn-secondary {
|
||||
background: #374151;
|
||||
color: #e5e7eb;
|
||||
}
|
||||
|
||||
.btn-secondary:hover {
|
||||
background: #4b5563;
|
||||
}
|
||||
|
||||
.main-content {
|
||||
display: grid;
|
||||
grid-template-columns: 1fr 1fr;
|
||||
gap: 20px;
|
||||
}
|
||||
|
||||
@media (max-width: 900px) {
|
||||
.main-content {
|
||||
grid-template-columns: 1fr;
|
||||
}
|
||||
}
|
||||
|
||||
.panel {
|
||||
background: #1f2937;
|
||||
border-radius: 8px;
|
||||
overflow: hidden;
|
||||
}
|
||||
|
||||
.panel-header {
|
||||
padding: 12px 16px;
|
||||
background: #374151;
|
||||
font-weight: 600;
|
||||
display: flex;
|
||||
justify-content: space-between;
|
||||
align-items: center;
|
||||
}
|
||||
|
||||
.panel-body {
|
||||
padding: 16px;
|
||||
max-height: 600px;
|
||||
overflow-y: auto;
|
||||
}
|
||||
|
||||
/* Test Tree */
|
||||
.folder {
|
||||
margin-bottom: 8px;
|
||||
}
|
||||
|
||||
.folder-header {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
padding: 6px 8px;
|
||||
border-radius: 4px;
|
||||
cursor: pointer;
|
||||
user-select: none;
|
||||
}
|
||||
|
||||
.folder-header:hover {
|
||||
background: #374151;
|
||||
}
|
||||
|
||||
.folder-header input {
|
||||
margin-right: 12px;
|
||||
}
|
||||
|
||||
.folder-name {
|
||||
font-weight: 500;
|
||||
color: #f9fafb;
|
||||
}
|
||||
|
||||
.test-count {
|
||||
margin-left: auto;
|
||||
font-size: 0.75rem;
|
||||
color: #9ca3af;
|
||||
background: #374151;
|
||||
padding: 2px 8px;
|
||||
border-radius: 10px;
|
||||
}
|
||||
|
||||
.folder-content {
|
||||
margin-left: 20px;
|
||||
}
|
||||
|
||||
.module {
|
||||
margin: 4px 0;
|
||||
}
|
||||
|
||||
.module-header {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
padding: 4px 8px;
|
||||
border-radius: 4px;
|
||||
cursor: pointer;
|
||||
}
|
||||
|
||||
.module-header:hover {
|
||||
background: #374151;
|
||||
}
|
||||
|
||||
.module-header input {
|
||||
margin-right: 12px;
|
||||
}
|
||||
|
||||
.module-name {
|
||||
color: #93c5fd;
|
||||
font-size: 1rem;
|
||||
}
|
||||
|
||||
.class-block {
|
||||
margin-left: 20px;
|
||||
}
|
||||
|
||||
.class-header {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
padding: 4px 8px;
|
||||
font-size: 1rem;
|
||||
color: #a78bfa;
|
||||
cursor: pointer;
|
||||
}
|
||||
|
||||
.class-header:hover {
|
||||
background: #374151;
|
||||
border-radius: 4px;
|
||||
}
|
||||
|
||||
.class-header input {
|
||||
margin-right: 12px;
|
||||
}
|
||||
|
||||
.test-list {
|
||||
margin-left: 20px;
|
||||
}
|
||||
|
||||
.test-item {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
padding: 6px 8px;
|
||||
font-size: 0.95rem;
|
||||
border-radius: 4px;
|
||||
}
|
||||
|
||||
.test-item:hover {
|
||||
background: #374151;
|
||||
}
|
||||
|
||||
.test-item input {
|
||||
margin-right: 12px;
|
||||
}
|
||||
|
||||
.test-name {
|
||||
color: #d1d5db;
|
||||
}
|
||||
|
||||
/* Results */
|
||||
.summary {
|
||||
display: flex;
|
||||
gap: 16px;
|
||||
margin-bottom: 16px;
|
||||
flex-wrap: wrap;
|
||||
}
|
||||
|
||||
.stat {
|
||||
text-align: center;
|
||||
}
|
||||
|
||||
.stat-value {
|
||||
font-size: 1.5rem;
|
||||
font-weight: 700;
|
||||
}
|
||||
|
||||
.stat-label {
|
||||
font-size: 0.75rem;
|
||||
color: #9ca3af;
|
||||
text-transform: uppercase;
|
||||
}
|
||||
|
||||
.stat-passed .stat-value { color: #34d399; }
|
||||
.stat-failed .stat-value { color: #f87171; }
|
||||
.stat-skipped .stat-value { color: #fbbf24; }
|
||||
.stat-running .stat-value { color: #60a5fa; }
|
||||
|
||||
.result-item {
|
||||
padding: 8px 12px;
|
||||
margin-bottom: 4px;
|
||||
border-radius: 4px;
|
||||
background: #374151;
|
||||
display: flex;
|
||||
align-items: center;
|
||||
gap: 8px;
|
||||
}
|
||||
|
||||
.result-icon {
|
||||
width: 20px;
|
||||
height: 20px;
|
||||
border-radius: 50%;
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
font-size: 0.75rem;
|
||||
flex-shrink: 0;
|
||||
}
|
||||
|
||||
.result-passed .result-icon {
|
||||
background: #065f46;
|
||||
color: #34d399;
|
||||
}
|
||||
|
||||
.result-failed .result-icon,
|
||||
.result-error .result-icon {
|
||||
background: #7f1d1d;
|
||||
color: #f87171;
|
||||
}
|
||||
|
||||
.result-skipped .result-icon {
|
||||
background: #78350f;
|
||||
color: #fbbf24;
|
||||
}
|
||||
|
||||
.result-running .result-icon {
|
||||
background: #1e3a8a;
|
||||
color: #60a5fa;
|
||||
animation: pulse 1s infinite;
|
||||
}
|
||||
|
||||
@keyframes pulse {
|
||||
0%, 100% { opacity: 1; }
|
||||
50% { opacity: 0.5; }
|
||||
}
|
||||
|
||||
.result-info {
|
||||
flex: 1;
|
||||
min-width: 0;
|
||||
}
|
||||
|
||||
.result-name {
|
||||
font-size: 0.875rem;
|
||||
white-space: nowrap;
|
||||
overflow: hidden;
|
||||
text-overflow: ellipsis;
|
||||
}
|
||||
|
||||
.result-test-id {
|
||||
font-size: 0.75rem;
|
||||
color: #6b7280;
|
||||
}
|
||||
|
||||
.result-duration {
|
||||
font-size: 0.75rem;
|
||||
color: #9ca3af;
|
||||
}
|
||||
|
||||
.result-error {
|
||||
margin-top: 8px;
|
||||
padding: 8px;
|
||||
background: #1f2937;
|
||||
border-radius: 4px;
|
||||
font-size: 0.75rem;
|
||||
font-family: monospace;
|
||||
white-space: pre-wrap;
|
||||
color: #f87171;
|
||||
max-height: 200px;
|
||||
overflow-y: auto;
|
||||
}
|
||||
|
||||
.empty-state {
|
||||
text-align: center;
|
||||
padding: 40px;
|
||||
color: #6b7280;
|
||||
}
|
||||
|
||||
.progress-bar {
|
||||
height: 4px;
|
||||
background: #374151;
|
||||
border-radius: 2px;
|
||||
margin-bottom: 16px;
|
||||
overflow: hidden;
|
||||
}
|
||||
|
||||
.progress-fill {
|
||||
height: 100%;
|
||||
background: #2563eb;
|
||||
transition: width 0.3s;
|
||||
}
|
||||
|
||||
.current-test {
|
||||
font-size: 0.75rem;
|
||||
color: #60a5fa;
|
||||
margin-bottom: 8px;
|
||||
font-style: italic;
|
||||
}
|
||||
|
||||
/* Collapsible */
|
||||
.collapsed .folder-content,
|
||||
.collapsed .module-content,
|
||||
.collapsed .class-content {
|
||||
display: none;
|
||||
}
|
||||
|
||||
.toggle-icon {
|
||||
margin-right: 4px;
|
||||
transition: transform 0.2s;
|
||||
}
|
||||
|
||||
.collapsed .toggle-icon {
|
||||
transform: rotate(-90deg);
|
||||
}
|
||||
|
||||
a {
|
||||
color: #60a5fa;
|
||||
text-decoration: none;
|
||||
}
|
||||
|
||||
a:hover {
|
||||
text-decoration: underline;
|
||||
}
|
||||
</style>
|
||||
</head>
|
||||
<body>
|
||||
<div class="container">
|
||||
<header>
|
||||
<div>
|
||||
<h1>Contract HTTP Tests</h1>
|
||||
<div style="display: flex; gap: 12px; margin-top: 8px; font-size: 0.875rem;">
|
||||
<a href="/tools/tester/" style="color: #60a5fa; text-decoration: none; font-weight: 600;">Runner</a>
|
||||
<a href="/tools/tester/filters" style="color: #60a5fa; text-decoration: none;">Filters</a>
|
||||
</div>
|
||||
</div>
|
||||
<div class="config-info">
|
||||
<div style="display: flex; align-items: center; gap: 12px;">
|
||||
<span>Target:</span>
|
||||
<select id="environmentSelector" style="background: #374151; color: #e5e7eb; border: 1px solid #4b5563; border-radius: 4px; padding: 4px 8px; font-size: 0.875rem; cursor: pointer;">
|
||||
<option value="">Loading...</option>
|
||||
</select>
|
||||
<strong id="currentUrl">{{ config.CONTRACT_TEST_URL }}</strong>
|
||||
</div>
|
||||
</div>
|
||||
</header>
|
||||
|
||||
<div class="toolbar">
|
||||
<button class="btn-primary" id="runAllBtn" onclick="runAll()">Run All</button>
|
||||
<button class="btn-secondary" id="runSelectedBtn" onclick="runSelected()">Run Selected</button>
|
||||
<button class="btn-secondary" onclick="clearResults()">Clear Results</button>
|
||||
<span style="margin-left: auto; color: #6b7280;">{{ total_tests }} tests discovered</span>
|
||||
</div>
|
||||
|
||||
<div class="main-content">
|
||||
<div class="panel">
|
||||
<div class="panel-header">
|
||||
<span>Tests</span>
|
||||
<button class="btn-secondary" onclick="toggleAll()" style="padding: 4px 8px; font-size: 0.75rem;">Toggle All</button>
|
||||
</div>
|
||||
<div class="panel-body" id="testsPanel">
|
||||
{% for folder_name, folder in tests_tree.items() %}
|
||||
<div class="folder" data-folder="{{ folder_name }}">
|
||||
<div class="folder-header" onclick="toggleFolder(this)">
|
||||
<span class="toggle-icon">▼</span>
|
||||
<input type="checkbox" onclick="event.stopPropagation(); toggleFolderCheckbox(this)" checked>
|
||||
<span class="folder-name">{{ folder_name }}/</span>
|
||||
<span class="test-count">{{ folder.test_count }}</span>
|
||||
</div>
|
||||
<div class="folder-content">
|
||||
{% for module_name, module in folder.modules.items() %}
|
||||
<div class="module" data-module="{{ folder_name }}.{{ module_name }}">
|
||||
<div class="module-header" onclick="toggleModule(this)">
|
||||
<span class="toggle-icon">▼</span>
|
||||
<input type="checkbox" onclick="event.stopPropagation(); toggleModuleCheckbox(this)" checked>
|
||||
<span class="module-name">{{ module_name }}.py</span>
|
||||
<span class="test-count">{{ module.test_count }}</span>
|
||||
</div>
|
||||
<div class="module-content">
|
||||
{% for class_name, cls in module.classes.items() %}
|
||||
<div class="class-block" data-class="{{ folder_name }}.{{ module_name }}.{{ class_name }}">
|
||||
<div class="class-header" onclick="toggleClass(this)">
|
||||
<span class="toggle-icon">▼</span>
|
||||
<input type="checkbox" onclick="event.stopPropagation(); toggleClassCheckbox(this)" checked>
|
||||
<span>{{ class_name }}</span>
|
||||
<span class="test-count">{{ cls.test_count }}</span>
|
||||
</div>
|
||||
<div class="class-content test-list">
|
||||
{% for test in cls.tests %}
|
||||
<div class="test-item">
|
||||
<input type="checkbox" data-test-id="{{ test.id }}" checked>
|
||||
<span class="test-name" title="{{ test.doc or '' }}">{{ test.name }}</span>
|
||||
</div>
|
||||
{% endfor %}
|
||||
</div>
|
||||
</div>
|
||||
{% endfor %}
|
||||
</div>
|
||||
</div>
|
||||
{% endfor %}
|
||||
</div>
|
||||
</div>
|
||||
{% endfor %}
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="panel">
|
||||
<div class="panel-header">
|
||||
<span>Results</span>
|
||||
<span id="runDuration" style="font-size: 0.75rem; color: #9ca3af;"></span>
|
||||
</div>
|
||||
<div class="panel-body" id="resultsPanel">
|
||||
<div class="summary" id="summary" style="display: none;">
|
||||
<div class="stat stat-passed">
|
||||
<div class="stat-value" id="passedCount">0</div>
|
||||
<div class="stat-label">Passed</div>
|
||||
</div>
|
||||
<div class="stat stat-failed">
|
||||
<div class="stat-value" id="failedCount">0</div>
|
||||
<div class="stat-label">Failed</div>
|
||||
</div>
|
||||
<div class="stat stat-skipped">
|
||||
<div class="stat-value" id="skippedCount">0</div>
|
||||
<div class="stat-label">Skipped</div>
|
||||
</div>
|
||||
<div class="stat stat-running">
|
||||
<div class="stat-value" id="runningCount">0</div>
|
||||
<div class="stat-label">Running</div>
|
||||
</div>
|
||||
</div>
|
||||
<div class="progress-bar" id="progressBar" style="display: none;">
|
||||
<div class="progress-fill" id="progressFill" style="width: 0%;"></div>
|
||||
</div>
|
||||
<div class="current-test" id="currentTest" style="display: none;"></div>
|
||||
<div id="resultsList">
|
||||
<div class="empty-state">
|
||||
Run tests to see results
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<script>
|
||||
let currentRunId = null;
|
||||
let pollInterval = null;
|
||||
|
||||
// Parse URL parameters for filters
|
||||
const urlParams = new URLSearchParams(window.location.search);
|
||||
const filterParams = {
|
||||
search: urlParams.get('search') || '',
|
||||
domains: urlParams.get('domains') ? new Set(urlParams.get('domains').split(',')) : new Set(),
|
||||
modules: urlParams.get('modules') ? new Set(urlParams.get('modules').split(',')) : new Set(),
|
||||
status: urlParams.get('status') ? new Set(urlParams.get('status').split(',')) : new Set(),
|
||||
};
|
||||
|
||||
// Check if there's a run ID in URL
|
||||
const autoRunId = urlParams.get('run');
|
||||
|
||||
// Format "TestCoverageCheck" -> "Coverage Check"
|
||||
function formatClassName(name) {
|
||||
// Remove "Test" prefix
|
||||
let formatted = name.replace(/^Test/, '');
|
||||
// Add space before each capital letter
|
||||
formatted = formatted.replace(/([A-Z])/g, ' $1').trim();
|
||||
return formatted;
|
||||
}
|
||||
|
||||
// Format "test_returns_coverage_boolean" -> "returns coverage boolean"
|
||||
function formatTestName(name) {
|
||||
// Remove "test_" prefix
|
||||
let formatted = name.replace(/^test_/, '');
|
||||
// Replace underscores with spaces
|
||||
formatted = formatted.replace(/_/g, ' ');
|
||||
return formatted;
|
||||
}
|
||||
|
||||
// Apply filters to test tree
|
||||
function applyFilters() {
|
||||
const folders = document.querySelectorAll('.folder');
|
||||
|
||||
folders.forEach(folder => {
|
||||
const folderName = folder.dataset.folder;
|
||||
let hasVisibleTests = false;
|
||||
|
||||
// Check domain filter
|
||||
if (filterParams.domains.size > 0 && !filterParams.domains.has(folderName)) {
|
||||
folder.style.display = 'none';
|
||||
return;
|
||||
}
|
||||
|
||||
// Check modules
|
||||
const modules = folder.querySelectorAll('.module');
|
||||
modules.forEach(module => {
|
||||
const moduleName = module.dataset.module.split('.')[1];
|
||||
let moduleVisible = true;
|
||||
|
||||
if (filterParams.modules.size > 0 && !filterParams.modules.has(moduleName)) {
|
||||
moduleVisible = false;
|
||||
}
|
||||
|
||||
// Check search filter on test names
|
||||
if (filterParams.search && moduleVisible) {
|
||||
const tests = module.querySelectorAll('.test-item');
|
||||
let hasMatchingTest = false;
|
||||
tests.forEach(test => {
|
||||
const testName = test.querySelector('.test-name').textContent.toLowerCase();
|
||||
if (testName.includes(filterParams.search.toLowerCase())) {
|
||||
hasMatchingTest = true;
|
||||
}
|
||||
});
|
||||
if (!hasMatchingTest) {
|
||||
moduleVisible = false;
|
||||
}
|
||||
}
|
||||
|
||||
if (moduleVisible) {
|
||||
module.style.display = '';
|
||||
hasVisibleTests = true;
|
||||
} else {
|
||||
module.style.display = 'none';
|
||||
}
|
||||
});
|
||||
|
||||
folder.style.display = hasVisibleTests ? '' : 'none';
|
||||
});
|
||||
}
|
||||
|
||||
// Load environments
|
||||
async function loadEnvironments() {
|
||||
try {
|
||||
const response = await fetch('/tools/tester/api/environments');
|
||||
const data = await response.json();
|
||||
const selector = document.getElementById('environmentSelector');
|
||||
const currentUrl = document.getElementById('currentUrl');
|
||||
|
||||
// Get saved environment from localStorage
|
||||
const savedEnvId = localStorage.getItem('selectedEnvironment');
|
||||
let selectedEnv = null;
|
||||
|
||||
// Populate selector
|
||||
selector.innerHTML = data.environments.map(env => {
|
||||
const isDefault = env.default || env.id === savedEnvId;
|
||||
if (isDefault) selectedEnv = env;
|
||||
return `<option value="${env.id}" ${isDefault ? 'selected' : ''}>${env.name} ${env.has_api_key ? '🔑' : ''}</option>`;
|
||||
}).join('');
|
||||
|
||||
// Update URL display
|
||||
if (selectedEnv) {
|
||||
currentUrl.textContent = selectedEnv.url;
|
||||
}
|
||||
|
||||
// Handle environment changes
|
||||
selector.addEventListener('change', async (e) => {
|
||||
const envId = e.target.value;
|
||||
try {
|
||||
const response = await fetch(`/tools/tester/api/environment/select?env_id=${envId}`, {
|
||||
method: 'POST'
|
||||
});
|
||||
const data = await response.json();
|
||||
|
||||
if (data.success) {
|
||||
currentUrl.textContent = data.environment.url;
|
||||
localStorage.setItem('selectedEnvironment', envId);
|
||||
|
||||
// Show notification
|
||||
const notification = document.createElement('div');
|
||||
notification.textContent = `Switched to ${data.environment.name}`;
|
||||
notification.style.cssText = 'position: fixed; top: 20px; right: 20px; background: #2563eb; color: white; padding: 12px 20px; border-radius: 6px; z-index: 1000; animation: fadeIn 0.3s;';
|
||||
document.body.appendChild(notification);
|
||||
setTimeout(() => notification.remove(), 3000);
|
||||
}
|
||||
} catch (error) {
|
||||
console.error('Failed to switch environment:', error);
|
||||
alert('Failed to switch environment');
|
||||
}
|
||||
});
|
||||
} catch (error) {
|
||||
console.error('Failed to load environments:', error);
|
||||
}
|
||||
}
|
||||
|
||||
// Apply formatting and filters on page load
|
||||
document.addEventListener('DOMContentLoaded', function() {
|
||||
// Load environments
|
||||
loadEnvironments();
|
||||
// Format class names
|
||||
document.querySelectorAll('.class-header > span:not(.toggle-icon):not(.test-count)').forEach(el => {
|
||||
if (!el.querySelector('input')) {
|
||||
el.textContent = formatClassName(el.textContent);
|
||||
}
|
||||
});
|
||||
// Format test names
|
||||
document.querySelectorAll('.test-name').forEach(el => {
|
||||
el.textContent = formatTestName(el.textContent);
|
||||
});
|
||||
|
||||
// Apply filters from URL
|
||||
if (filterParams.domains.size > 0 || filterParams.modules.size > 0 || filterParams.search) {
|
||||
applyFilters();
|
||||
}
|
||||
|
||||
// Auto-start run if run ID in URL
|
||||
if (autoRunId) {
|
||||
currentRunId = autoRunId;
|
||||
document.getElementById('summary').style.display = 'flex';
|
||||
document.getElementById('progressBar').style.display = 'block';
|
||||
pollInterval = setInterval(pollStatus, 1000);
|
||||
pollStatus();
|
||||
}
|
||||
});
|
||||
|
||||
function getSelectedTestIds() {
|
||||
const checkboxes = document.querySelectorAll('.test-item input[type="checkbox"]:checked');
|
||||
return Array.from(checkboxes).map(cb => cb.dataset.testId);
|
||||
}
|
||||
|
||||
async function runAll() {
|
||||
await startRun(null);
|
||||
}
|
||||
|
||||
async function runSelected() {
|
||||
const testIds = getSelectedTestIds();
|
||||
if (testIds.length === 0) {
|
||||
alert('No tests selected');
|
||||
return;
|
||||
}
|
||||
await startRun(testIds);
|
||||
}
|
||||
|
||||
async function startRun(testIds) {
|
||||
document.getElementById('runAllBtn').disabled = true;
|
||||
document.getElementById('runSelectedBtn').disabled = true;
|
||||
document.getElementById('summary').style.display = 'flex';
|
||||
document.getElementById('progressBar').style.display = 'block';
|
||||
document.getElementById('resultsList').innerHTML = '';
|
||||
|
||||
try {
|
||||
const response = await fetch('/tools/tester/api/run', {
|
||||
method: 'POST',
|
||||
headers: { 'Content-Type': 'application/json' },
|
||||
body: JSON.stringify({ test_ids: testIds }),
|
||||
});
|
||||
const data = await response.json();
|
||||
currentRunId = data.run_id;
|
||||
|
||||
// Start polling
|
||||
pollInterval = setInterval(pollStatus, 1000);
|
||||
pollStatus(); // Immediate first poll
|
||||
} catch (error) {
|
||||
console.error('Failed to start run:', error);
|
||||
document.getElementById('runAllBtn').disabled = false;
|
||||
document.getElementById('runSelectedBtn').disabled = false;
|
||||
}
|
||||
}
|
||||
|
||||
async function pollStatus() {
|
||||
if (!currentRunId) return;
|
||||
|
||||
try {
|
||||
const response = await fetch(`/tools/tester/api/run/${currentRunId}`);
|
||||
const data = await response.json();
|
||||
|
||||
updateUI(data);
|
||||
|
||||
if (data.status === 'completed' || data.status === 'failed') {
|
||||
clearInterval(pollInterval);
|
||||
pollInterval = null;
|
||||
document.getElementById('runAllBtn').disabled = false;
|
||||
document.getElementById('runSelectedBtn').disabled = false;
|
||||
}
|
||||
} catch (error) {
|
||||
console.error('Poll failed:', error);
|
||||
}
|
||||
}
|
||||
|
||||
function updateUI(data) {
|
||||
// Update counts
|
||||
document.getElementById('passedCount').textContent = data.passed;
|
||||
document.getElementById('failedCount').textContent = data.failed + data.errors;
|
||||
document.getElementById('skippedCount').textContent = data.skipped;
|
||||
document.getElementById('runningCount').textContent = data.total - data.completed;
|
||||
|
||||
// Update progress
|
||||
const progress = data.total > 0 ? (data.completed / data.total * 100) : 0;
|
||||
document.getElementById('progressFill').style.width = progress + '%';
|
||||
|
||||
// Update duration
|
||||
if (data.duration) {
|
||||
document.getElementById('runDuration').textContent = data.duration.toFixed(1) + 's';
|
||||
}
|
||||
|
||||
// Current test
|
||||
const currentTestEl = document.getElementById('currentTest');
|
||||
if (data.current_test) {
|
||||
currentTestEl.textContent = 'Running: ' + data.current_test;
|
||||
currentTestEl.style.display = 'block';
|
||||
} else {
|
||||
currentTestEl.style.display = 'none';
|
||||
}
|
||||
|
||||
// Results list
|
||||
const resultsList = document.getElementById('resultsList');
|
||||
resultsList.innerHTML = data.results.map(r => renderResult(r)).join('');
|
||||
}
|
||||
|
||||
function renderResult(result) {
|
||||
const icons = {
|
||||
passed: '✓',
|
||||
failed: '✗',
|
||||
error: '✗',
|
||||
skipped: '−',
|
||||
running: '●',
|
||||
};
|
||||
|
||||
let errorHtml = '';
|
||||
if (result.error_message) {
|
||||
errorHtml = `<div class="result-error">${escapeHtml(result.error_message)}</div>`;
|
||||
}
|
||||
|
||||
// Render artifacts (videos, screenshots)
|
||||
let artifactsHtml = '';
|
||||
if (result.artifacts && result.artifacts.length > 0) {
|
||||
const artifactItems = result.artifacts.map(artifact => {
|
||||
if (artifact.type === 'video') {
|
||||
return `
|
||||
<div style="margin-top: 8px;">
|
||||
<div style="font-size: 0.75rem; color: #9ca3af; margin-bottom: 4px;">
|
||||
📹 ${artifact.filename} (${formatBytes(artifact.size)})
|
||||
</div>
|
||||
<video controls style="max-width: 100%; border-radius: 4px; background: #000;">
|
||||
<source src="${artifact.url}" type="video/webm">
|
||||
Your browser does not support video playback.
|
||||
</video>
|
||||
</div>
|
||||
`;
|
||||
} else if (artifact.type === 'screenshot') {
|
||||
return `
|
||||
<div style="margin-top: 8px;">
|
||||
<div style="font-size: 0.75rem; color: #9ca3af; margin-bottom: 4px;">
|
||||
📸 ${artifact.filename} (${formatBytes(artifact.size)})
|
||||
</div>
|
||||
<img src="${artifact.url}" style="max-width: 100%; border-radius: 4px; border: 1px solid #374151;">
|
||||
</div>
|
||||
`;
|
||||
} else {
|
||||
return `
|
||||
<div style="margin-top: 8px; font-size: 0.75rem; color: #9ca3af;">
|
||||
📎 <a href="${artifact.url}" style="color: #60a5fa;">${artifact.filename}</a> (${formatBytes(artifact.size)})
|
||||
</div>
|
||||
`;
|
||||
}
|
||||
}).join('');
|
||||
artifactsHtml = `<div class="result-artifacts">${artifactItems}</div>`;
|
||||
}
|
||||
|
||||
return `
|
||||
<div class="result-item result-${result.status}">
|
||||
<div class="result-icon">${icons[result.status] || '?'}</div>
|
||||
<div class="result-info">
|
||||
<div class="result-name">${escapeHtml(result.name)}</div>
|
||||
<div class="result-test-id">${escapeHtml(result.test_id)}</div>
|
||||
${errorHtml}
|
||||
${artifactsHtml}
|
||||
</div>
|
||||
<div class="result-duration">${result.duration.toFixed(3)}s</div>
|
||||
</div>
|
||||
`;
|
||||
}
|
||||
|
||||
function formatBytes(bytes) {
|
||||
if (bytes < 1024) return bytes + ' B';
|
||||
if (bytes < 1024 * 1024) return (bytes / 1024).toFixed(1) + ' KB';
|
||||
return (bytes / (1024 * 1024)).toFixed(1) + ' MB';
|
||||
}
|
||||
|
||||
function escapeHtml(text) {
|
||||
const div = document.createElement('div');
|
||||
div.textContent = text;
|
||||
return div.innerHTML;
|
||||
}
|
||||
|
||||
function clearResults() {
|
||||
document.getElementById('summary').style.display = 'none';
|
||||
document.getElementById('progressBar').style.display = 'none';
|
||||
document.getElementById('currentTest').style.display = 'none';
|
||||
document.getElementById('runDuration').textContent = '';
|
||||
document.getElementById('resultsList').innerHTML = '<div class="empty-state">Run tests to see results</div>';
|
||||
}
|
||||
|
||||
// Toggle functions
|
||||
function toggleFolder(header) {
|
||||
header.parentElement.classList.toggle('collapsed');
|
||||
}
|
||||
|
||||
function toggleModule(header) {
|
||||
header.parentElement.classList.toggle('collapsed');
|
||||
}
|
||||
|
||||
function toggleClass(header) {
|
||||
header.parentElement.classList.toggle('collapsed');
|
||||
}
|
||||
|
||||
function toggleAll() {
|
||||
const folders = document.querySelectorAll('.folder');
|
||||
const allCollapsed = Array.from(folders).every(f => f.classList.contains('collapsed'));
|
||||
|
||||
folders.forEach(folder => {
|
||||
if (allCollapsed) {
|
||||
folder.classList.remove('collapsed');
|
||||
} else {
|
||||
folder.classList.add('collapsed');
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
function toggleFolderCheckbox(checkbox) {
|
||||
const folder = checkbox.closest('.folder');
|
||||
const childCheckboxes = folder.querySelectorAll('input[type="checkbox"]');
|
||||
childCheckboxes.forEach(cb => cb.checked = checkbox.checked);
|
||||
}
|
||||
|
||||
function toggleModuleCheckbox(checkbox) {
|
||||
const module = checkbox.closest('.module');
|
||||
const childCheckboxes = module.querySelectorAll('.test-item input[type="checkbox"]');
|
||||
childCheckboxes.forEach(cb => cb.checked = checkbox.checked);
|
||||
}
|
||||
|
||||
function toggleClassCheckbox(checkbox) {
|
||||
const classBlock = checkbox.closest('.class-block');
|
||||
const childCheckboxes = classBlock.querySelectorAll('.test-item input[type="checkbox"]');
|
||||
childCheckboxes.forEach(cb => cb.checked = checkbox.checked);
|
||||
}
|
||||
</script>
|
||||
</body>
|
||||
</html>
|
||||
73
station/tools/tester/tests/README.md
Normal file
73
station/tools/tester/tests/README.md
Normal file
@@ -0,0 +1,73 @@
|
||||
# Contract Tests
|
||||
|
||||
API contract tests organized by Django app, with optional workflow tests.
|
||||
|
||||
## Testing Modes
|
||||
|
||||
Two modes via `CONTRACT_TEST_MODE` environment variable:
|
||||
|
||||
| Mode | Command | Description |
|
||||
|------|---------|-------------|
|
||||
| **api** (default) | `pytest tests/contracts/` | Fast, Django test client, test DB |
|
||||
| **live** | `CONTRACT_TEST_MODE=live pytest tests/contracts/` | Real HTTP, LiveServerTestCase, test DB |
|
||||
|
||||
### Mode Comparison
|
||||
|
||||
| | `api` (default) | `live` |
|
||||
|---|---|---|
|
||||
| **Base class** | `APITestCase` | `LiveServerTestCase` |
|
||||
| **HTTP** | In-process (Django test client) | Real HTTP via `requests` |
|
||||
| **Auth** | `force_authenticate()` | JWT tokens via API |
|
||||
| **Database** | Django test DB (isolated) | Django test DB (isolated) |
|
||||
| **Speed** | ~3-5 sec | ~15-30 sec |
|
||||
| **Server** | None (in-process) | Auto-started by Django |
|
||||
|
||||
### Key Point: Both Modes Use Test Database
|
||||
|
||||
Neither mode touches your real database. Django automatically:
|
||||
1. Creates a test database (prefixed with `test_`)
|
||||
2. Runs migrations
|
||||
3. Destroys it after tests complete
|
||||
|
||||
## File Structure
|
||||
|
||||
```
|
||||
tests/contracts/
|
||||
├── base.py # Mode switcher (imports from base_api or base_live)
|
||||
├── base_api.py # APITestCase implementation
|
||||
├── base_live.py # LiveServerTestCase implementation
|
||||
├── conftest.py # pytest-django configuration
|
||||
├── endpoints.py # API paths (single source of truth)
|
||||
├── helpers.py # Shared test data helpers
|
||||
│
|
||||
├── mascotas/ # Django app: mascotas
|
||||
│ ├── test_pet_owners.py
|
||||
│ ├── test_pets.py
|
||||
│ └── test_coverage.py
|
||||
│
|
||||
├── productos/ # Django app: productos
|
||||
│ ├── test_services.py
|
||||
│ └── test_cart.py
|
||||
│
|
||||
├── solicitudes/ # Django app: solicitudes
|
||||
│ └── test_service_requests.py
|
||||
│
|
||||
└── workflows/ # Multi-step API sequences (e.g., turnero booking flow)
|
||||
└── test_turnero_general.py
|
||||
```
|
||||
|
||||
## Running Tests
|
||||
|
||||
```bash
|
||||
# All contract tests
|
||||
pytest tests/contracts/
|
||||
|
||||
# Single app
|
||||
pytest tests/contracts/mascotas/
|
||||
|
||||
# Single file
|
||||
pytest tests/contracts/mascotas/test_pet_owners.py
|
||||
|
||||
# Live mode (real HTTP)
|
||||
CONTRACT_TEST_MODE=live pytest tests/contracts/
|
||||
```
|
||||
2
station/tools/tester/tests/__init__.py
Normal file
2
station/tools/tester/tests/__init__.py
Normal file
@@ -0,0 +1,2 @@
|
||||
# Contract tests - black-box HTTP tests that validate API contracts
|
||||
# These tests are decoupled from Django and can run against any implementation
|
||||
1
station/tools/tester/tests/_dev/__init__.py
Normal file
1
station/tools/tester/tests/_dev/__init__.py
Normal file
@@ -0,0 +1 @@
|
||||
# Development tests - minimal tests for tester development
|
||||
29
station/tools/tester/tests/_dev/test_health.py
Normal file
29
station/tools/tester/tests/_dev/test_health.py
Normal file
@@ -0,0 +1,29 @@
|
||||
"""
|
||||
Development Test: Health Check
|
||||
|
||||
Minimal test to verify tester is working when backend tests aren't available.
|
||||
Tests basic HTTP connectivity and authentication flow.
|
||||
"""
|
||||
|
||||
from ..base import ContractTestCase
|
||||
from ..endpoints import Endpoints
|
||||
|
||||
|
||||
class TestHealth(ContractTestCase):
|
||||
"""Basic health and connectivity tests"""
|
||||
|
||||
def test_can_connect_to_base_url(self):
|
||||
"""Verify we can connect to the configured URL"""
|
||||
# This just ensures httpx and base URL work
|
||||
try:
|
||||
response = self.get("/health/")
|
||||
except Exception as e:
|
||||
self.skipTest(f"Cannot connect to {self.base_url}: {e}")
|
||||
|
||||
# If we got here, connection worked
|
||||
self.assertIsNotNone(response)
|
||||
|
||||
def test_token_authentication(self):
|
||||
"""Verify token authentication is configured"""
|
||||
# Just checks that we have a token (either from env or fetch)
|
||||
self.assertIsNotNone(self.token, "No authentication token available")
|
||||
164
station/tools/tester/tests/base.py
Normal file
164
station/tools/tester/tests/base.py
Normal file
@@ -0,0 +1,164 @@
|
||||
"""
|
||||
Pure HTTP Contract Tests - Base Class
|
||||
|
||||
Framework-agnostic: works against ANY backend implementation.
|
||||
Does NOT manage database - expects a ready environment.
|
||||
|
||||
Requirements:
|
||||
- Server running at CONTRACT_TEST_URL
|
||||
- Database migrated and seeded
|
||||
- Test user exists OR CONTRACT_TEST_TOKEN provided
|
||||
|
||||
Usage:
|
||||
CONTRACT_TEST_URL=http://127.0.0.1:8000 pytest
|
||||
CONTRACT_TEST_TOKEN=your_jwt_token pytest
|
||||
"""
|
||||
|
||||
import os
|
||||
import unittest
|
||||
import httpx
|
||||
|
||||
from .endpoints import Endpoints
|
||||
|
||||
|
||||
def get_base_url():
|
||||
"""Get base URL from environment (required)"""
|
||||
url = os.environ.get("CONTRACT_TEST_URL", "")
|
||||
if not url:
|
||||
raise ValueError("CONTRACT_TEST_URL environment variable required")
|
||||
return url.rstrip("/")
|
||||
|
||||
|
||||
class ContractTestCase(unittest.TestCase):
|
||||
"""
|
||||
Base class for pure HTTP contract tests.
|
||||
|
||||
Features:
|
||||
- Framework-agnostic (works with Django, FastAPI, Node, etc.)
|
||||
- Pure HTTP via requests library
|
||||
- No database access - all data through API
|
||||
- JWT authentication
|
||||
"""
|
||||
|
||||
# Auth credentials - override via environment
|
||||
TEST_USER_EMAIL = os.environ.get("CONTRACT_TEST_USER", "contract_test@example.com")
|
||||
TEST_USER_PASSWORD = os.environ.get("CONTRACT_TEST_PASSWORD", "testpass123")
|
||||
|
||||
# Class-level cache
|
||||
_base_url = None
|
||||
_token = None
|
||||
|
||||
@classmethod
|
||||
def setUpClass(cls):
|
||||
"""Set up once per test class"""
|
||||
super().setUpClass()
|
||||
cls._base_url = get_base_url()
|
||||
|
||||
# Use provided token or fetch one
|
||||
cls._token = os.environ.get("CONTRACT_TEST_TOKEN", "")
|
||||
if not cls._token:
|
||||
cls._token = cls._fetch_token()
|
||||
|
||||
@classmethod
|
||||
def _fetch_token(cls):
|
||||
"""Get JWT token for authentication"""
|
||||
url = f"{cls._base_url}{Endpoints.TOKEN}"
|
||||
try:
|
||||
response = httpx.post(url, json={
|
||||
"username": cls.TEST_USER_EMAIL,
|
||||
"password": cls.TEST_USER_PASSWORD,
|
||||
}, timeout=10)
|
||||
if response.status_code == 200:
|
||||
return response.json().get("access", "")
|
||||
else:
|
||||
print(f"Warning: Token request failed with {response.status_code}")
|
||||
except httpx.RequestError as e:
|
||||
print(f"Warning: Token request failed: {e}")
|
||||
return ""
|
||||
|
||||
@property
|
||||
def base_url(self):
|
||||
return self._base_url
|
||||
|
||||
@property
|
||||
def token(self):
|
||||
return self._token
|
||||
|
||||
def _auth_headers(self):
|
||||
"""Get authorization headers"""
|
||||
if self.token:
|
||||
return {"Authorization": f"Bearer {self.token}"}
|
||||
return {}
|
||||
|
||||
# =========================================================================
|
||||
# HTTP helpers
|
||||
# =========================================================================
|
||||
|
||||
def get(self, path: str, params: dict = None, **kwargs):
|
||||
"""GET request"""
|
||||
url = f"{self.base_url}{path}"
|
||||
headers = {**self._auth_headers(), **kwargs.pop("headers", {})}
|
||||
response = httpx.get(url, params=params, headers=headers, timeout=30, **kwargs)
|
||||
return self._wrap_response(response)
|
||||
|
||||
def post(self, path: str, data: dict = None, **kwargs):
|
||||
"""POST request with JSON"""
|
||||
url = f"{self.base_url}{path}"
|
||||
headers = {**self._auth_headers(), **kwargs.pop("headers", {})}
|
||||
response = httpx.post(url, json=data, headers=headers, timeout=30, **kwargs)
|
||||
return self._wrap_response(response)
|
||||
|
||||
def put(self, path: str, data: dict = None, **kwargs):
|
||||
"""PUT request with JSON"""
|
||||
url = f"{self.base_url}{path}"
|
||||
headers = {**self._auth_headers(), **kwargs.pop("headers", {})}
|
||||
response = httpx.put(url, json=data, headers=headers, timeout=30, **kwargs)
|
||||
return self._wrap_response(response)
|
||||
|
||||
def patch(self, path: str, data: dict = None, **kwargs):
|
||||
"""PATCH request with JSON"""
|
||||
url = f"{self.base_url}{path}"
|
||||
headers = {**self._auth_headers(), **kwargs.pop("headers", {})}
|
||||
response = httpx.patch(url, json=data, headers=headers, timeout=30, **kwargs)
|
||||
return self._wrap_response(response)
|
||||
|
||||
def delete(self, path: str, **kwargs):
|
||||
"""DELETE request"""
|
||||
url = f"{self.base_url}{path}"
|
||||
headers = {**self._auth_headers(), **kwargs.pop("headers", {})}
|
||||
response = httpx.delete(url, headers=headers, timeout=30, **kwargs)
|
||||
return self._wrap_response(response)
|
||||
|
||||
def _wrap_response(self, response):
|
||||
"""Add .data attribute for consistency with DRF responses"""
|
||||
try:
|
||||
response.data = response.json()
|
||||
except Exception:
|
||||
response.data = None
|
||||
return response
|
||||
|
||||
# =========================================================================
|
||||
# Assertion helpers
|
||||
# =========================================================================
|
||||
|
||||
def assert_status(self, response, expected_status: int):
|
||||
"""Assert response has expected status code"""
|
||||
self.assertEqual(
|
||||
response.status_code,
|
||||
expected_status,
|
||||
f"Expected {expected_status}, got {response.status_code}. "
|
||||
f"Response: {response.data if hasattr(response, 'data') else response.content[:500]}"
|
||||
)
|
||||
|
||||
def assert_has_fields(self, data: dict, *fields: str):
|
||||
"""Assert dictionary has all specified fields"""
|
||||
missing = [f for f in fields if f not in data]
|
||||
self.assertEqual(missing, [], f"Missing fields: {missing}. Got: {list(data.keys())}")
|
||||
|
||||
def assert_is_list(self, data, min_length: int = 0):
|
||||
"""Assert data is a list with minimum length"""
|
||||
self.assertIsInstance(data, list)
|
||||
self.assertGreaterEqual(len(data), min_length)
|
||||
|
||||
|
||||
__all__ = ["ContractTestCase"]
|
||||
29
station/tools/tester/tests/conftest.py
Normal file
29
station/tools/tester/tests/conftest.py
Normal file
@@ -0,0 +1,29 @@
|
||||
"""
|
||||
Contract Tests Configuration
|
||||
|
||||
Supports two testing modes via CONTRACT_TEST_MODE environment variable:
|
||||
|
||||
# Fast mode (default) - Django test client, test DB
|
||||
pytest tests/contracts/
|
||||
|
||||
# Live mode - Real HTTP with LiveServerTestCase, test DB
|
||||
CONTRACT_TEST_MODE=live pytest tests/contracts/
|
||||
"""
|
||||
|
||||
import os
|
||||
import pytest
|
||||
|
||||
# Let pytest-django handle Django setup via pytest.ini DJANGO_SETTINGS_MODULE
|
||||
|
||||
|
||||
def pytest_configure(config):
|
||||
"""Register custom markers"""
|
||||
config.addinivalue_line(
|
||||
"markers", "workflow: marks test as a workflow/flow test (runs endpoint tests in sequence)"
|
||||
)
|
||||
|
||||
|
||||
@pytest.fixture(scope="session")
|
||||
def contract_test_mode():
|
||||
"""Return current test mode"""
|
||||
return os.environ.get("CONTRACT_TEST_MODE", "api")
|
||||
38
station/tools/tester/tests/endpoints.py
Normal file
38
station/tools/tester/tests/endpoints.py
Normal file
@@ -0,0 +1,38 @@
|
||||
"""
|
||||
API Endpoints - Single source of truth for contract tests.
|
||||
|
||||
If API paths or versioning changes, update here only.
|
||||
"""
|
||||
|
||||
|
||||
class Endpoints:
|
||||
"""API endpoint paths"""
|
||||
|
||||
# ==========================================================================
|
||||
# Mascotas
|
||||
# ==========================================================================
|
||||
PET_OWNERS = "/mascotas/api/v1/pet-owners/"
|
||||
PET_OWNER_DETAIL = "/mascotas/api/v1/pet-owners/{id}/"
|
||||
PETS = "/mascotas/api/v1/pets/"
|
||||
PET_DETAIL = "/mascotas/api/v1/pets/{id}/"
|
||||
COVERAGE_CHECK = "/mascotas/api/v1/coverage/check/"
|
||||
|
||||
# ==========================================================================
|
||||
# Productos
|
||||
# ==========================================================================
|
||||
SERVICES = "/productos/api/v1/services/"
|
||||
CATEGORIES = "/productos/api/v1/categories/"
|
||||
CART = "/productos/api/v1/cart/"
|
||||
CART_DETAIL = "/productos/api/v1/cart/{id}/"
|
||||
|
||||
# ==========================================================================
|
||||
# Solicitudes
|
||||
# ==========================================================================
|
||||
SERVICE_REQUESTS = "/solicitudes/service-requests/"
|
||||
SERVICE_REQUEST_DETAIL = "/solicitudes/service-requests/{id}/"
|
||||
|
||||
# ==========================================================================
|
||||
# Auth
|
||||
# ==========================================================================
|
||||
TOKEN = "/api/token/"
|
||||
TOKEN_REFRESH = "/api/token/refresh/"
|
||||
44
station/tools/tester/tests/helpers.py
Normal file
44
station/tools/tester/tests/helpers.py
Normal file
@@ -0,0 +1,44 @@
|
||||
"""
|
||||
Contract Tests - Shared test data helpers.
|
||||
|
||||
Used across all endpoint tests to generate consistent test data.
|
||||
"""
|
||||
|
||||
import time
|
||||
|
||||
|
||||
def unique_email(prefix="test"):
|
||||
"""Generate unique email for test data"""
|
||||
return f"{prefix}_{int(time.time() * 1000)}@contract-test.local"
|
||||
|
||||
|
||||
def sample_pet_owner(email=None):
|
||||
"""Generate sample pet owner data"""
|
||||
return {
|
||||
"first_name": "Test",
|
||||
"last_name": "Usuario",
|
||||
"email": email or unique_email("owner"),
|
||||
"phone": "1155667788",
|
||||
"address": "Av. Santa Fe 1234",
|
||||
"geo_latitude": -34.5955,
|
||||
"geo_longitude": -58.4166,
|
||||
}
|
||||
|
||||
|
||||
SAMPLE_CAT = {
|
||||
"name": "TestCat",
|
||||
"pet_type": "CAT",
|
||||
"is_neutered": False,
|
||||
}
|
||||
|
||||
SAMPLE_DOG = {
|
||||
"name": "TestDog",
|
||||
"pet_type": "DOG",
|
||||
"is_neutered": False,
|
||||
}
|
||||
|
||||
SAMPLE_NEUTERED_CAT = {
|
||||
"name": "NeuteredCat",
|
||||
"pet_type": "CAT",
|
||||
"is_neutered": True,
|
||||
}
|
||||
1
station/tools/tester/tests/mascotas/__init__.py
Normal file
1
station/tools/tester/tests/mascotas/__init__.py
Normal file
@@ -0,0 +1 @@
|
||||
# Contract tests for mascotas app endpoints
|
||||
53
station/tools/tester/tests/mascotas/test_coverage.py
Normal file
53
station/tools/tester/tests/mascotas/test_coverage.py
Normal file
@@ -0,0 +1,53 @@
|
||||
"""
|
||||
Contract Tests: Coverage Check API
|
||||
|
||||
Endpoint: /mascotas/api/v1/coverage/check/
|
||||
App: mascotas
|
||||
|
||||
Used to check if a location has veterinary coverage before proceeding with turnero.
|
||||
"""
|
||||
|
||||
from ..base import ContractTestCase
|
||||
from ..endpoints import Endpoints
|
||||
|
||||
|
||||
class TestCoverageCheck(ContractTestCase):
|
||||
"""GET /mascotas/api/v1/coverage/check/"""
|
||||
|
||||
def test_with_coordinates_returns_200(self):
|
||||
"""Coverage check should accept lat/lng parameters"""
|
||||
response = self.get(Endpoints.COVERAGE_CHECK, params={
|
||||
"lat": -34.6037,
|
||||
"lng": -58.3816,
|
||||
})
|
||||
|
||||
self.assert_status(response, 200)
|
||||
|
||||
def test_returns_coverage_boolean(self):
|
||||
"""Coverage check should return coverage boolean"""
|
||||
response = self.get(Endpoints.COVERAGE_CHECK, params={
|
||||
"lat": -34.6037,
|
||||
"lng": -58.3816,
|
||||
})
|
||||
|
||||
self.assert_status(response, 200)
|
||||
self.assert_has_fields(response.data, "coverage")
|
||||
self.assertIsInstance(response.data["coverage"], bool)
|
||||
|
||||
def test_returns_vet_count(self):
|
||||
"""Coverage check should return number of available vets"""
|
||||
response = self.get(Endpoints.COVERAGE_CHECK, params={
|
||||
"lat": -34.6037,
|
||||
"lng": -58.3816,
|
||||
})
|
||||
|
||||
self.assert_status(response, 200)
|
||||
self.assert_has_fields(response.data, "vet_count")
|
||||
self.assertIsInstance(response.data["vet_count"], int)
|
||||
|
||||
def test_without_coordinates_fails(self):
|
||||
"""Coverage check without coordinates should fail"""
|
||||
response = self.get(Endpoints.COVERAGE_CHECK)
|
||||
|
||||
# Should return 400 or similar error
|
||||
self.assertIn(response.status_code, [400, 422])
|
||||
171
station/tools/tester/tests/mascotas/test_pet_owners.py
Normal file
171
station/tools/tester/tests/mascotas/test_pet_owners.py
Normal file
@@ -0,0 +1,171 @@
|
||||
"""
|
||||
Contract Tests: Pet Owners API
|
||||
|
||||
Endpoint: /mascotas/api/v1/pet-owners/
|
||||
App: mascotas
|
||||
|
||||
Related Tickets:
|
||||
- VET-536: Paso 0 - Test creación del petowner invitado
|
||||
- VET-535: Establecer y definir test para las apis vinculadas al procesos de solicitar turno general
|
||||
|
||||
Context: In the turnero general flow (guest booking), a "guest" pet owner is created
|
||||
with a mock email (e.g., invitado-1759415377297@example.com). This user is fundamental
|
||||
for subsequent steps as it provides the address used to filter available services.
|
||||
|
||||
TBD: PetOwnerViewSet needs pagination - currently loads all records on list().
|
||||
See mascotas/views/api/v1/views/petowner_views.py:72
|
||||
Using email filter in tests to avoid loading 14k+ records.
|
||||
"""
|
||||
|
||||
import time
|
||||
from ..base import ContractTestCase
|
||||
from ..endpoints import Endpoints
|
||||
from ..helpers import sample_pet_owner
|
||||
|
||||
|
||||
class TestPetOwnerCreate(ContractTestCase):
|
||||
"""POST /mascotas/api/v1/pet-owners/
|
||||
|
||||
VET-536: Tests for guest petowner creation (Step 0 of turnero flow)
|
||||
"""
|
||||
|
||||
def test_create_returns_201(self):
|
||||
"""
|
||||
Creating a pet owner returns 201 with the created resource.
|
||||
|
||||
Request (from production turnero):
|
||||
POST /mascotas/api/v1/pet-owners/
|
||||
{
|
||||
"first_name": "Juan",
|
||||
"last_name": "Pérez",
|
||||
"email": "invitado-1733929847293@example.com",
|
||||
"phone": "1155667788",
|
||||
"address": "Av. Santa Fe 1234, Buenos Aires",
|
||||
"geo_latitude": -34.5955,
|
||||
"geo_longitude": -58.4166
|
||||
}
|
||||
|
||||
Response (201):
|
||||
{
|
||||
"id": 12345,
|
||||
"first_name": "Juan",
|
||||
"last_name": "Pérez",
|
||||
"email": "invitado-1733929847293@example.com",
|
||||
"phone": "1155667788",
|
||||
"address": "Av. Santa Fe 1234, Buenos Aires",
|
||||
"geo_latitude": -34.5955,
|
||||
"geo_longitude": -58.4166,
|
||||
"pets": [],
|
||||
"created_at": "2024-12-11T15:30:47.293Z"
|
||||
}
|
||||
"""
|
||||
data = sample_pet_owner()
|
||||
|
||||
response = self.post(Endpoints.PET_OWNERS, data)
|
||||
|
||||
self.assert_status(response, 201)
|
||||
self.assert_has_fields(response.data, "id", "email", "first_name", "last_name")
|
||||
self.assertEqual(response.data["email"], data["email"])
|
||||
|
||||
def test_requires_email(self):
|
||||
"""
|
||||
Pet owner creation requires email (current behavior).
|
||||
|
||||
Note: The turnero guest flow uses a mock email created by frontend
|
||||
(e.g., invitado-1759415377297@example.com). The API always requires email.
|
||||
This test ensures the contract enforcement - no petowner without email.
|
||||
"""
|
||||
data = {
|
||||
"address": "Av. Corrientes 1234",
|
||||
"first_name": "Invitado",
|
||||
"last_name": str(int(time.time())),
|
||||
}
|
||||
|
||||
response = self.post(Endpoints.PET_OWNERS, data)
|
||||
|
||||
self.assert_status(response, 400)
|
||||
|
||||
def test_duplicate_email_returns_existing(self):
|
||||
"""
|
||||
Creating pet owner with existing email returns the existing record.
|
||||
|
||||
Note: API has upsert behavior - returns 200 with existing record,
|
||||
not 400 error. This allows frontend to "create or get" in one call.
|
||||
Important for guest flow - if user refreshes/retries, we don't create duplicates.
|
||||
"""
|
||||
data = sample_pet_owner()
|
||||
first_response = self.post(Endpoints.PET_OWNERS, data)
|
||||
first_id = first_response.data["id"]
|
||||
|
||||
response = self.post(Endpoints.PET_OWNERS, data) # Same email
|
||||
|
||||
# Returns 200 with existing record (upsert behavior)
|
||||
self.assert_status(response, 200)
|
||||
self.assertEqual(response.data["id"], first_id)
|
||||
|
||||
def test_address_and_geolocation_persisted(self):
|
||||
"""
|
||||
Pet owner address and geolocation coordinates are persisted correctly.
|
||||
|
||||
The address is critical for the turnero flow - it's used to filter available
|
||||
services by location. Geolocation (lat/lng) may be obtained from Google Maps API.
|
||||
"""
|
||||
data = sample_pet_owner()
|
||||
|
||||
response = self.post(Endpoints.PET_OWNERS, data)
|
||||
|
||||
self.assert_status(response, 201)
|
||||
self.assert_has_fields(response.data, "address", "geo_latitude", "geo_longitude")
|
||||
self.assertEqual(response.data["address"], data["address"])
|
||||
# Verify geolocation fields are numeric (not null/empty)
|
||||
self.assertIsNotNone(response.data.get("geo_latitude"))
|
||||
self.assertIsNotNone(response.data.get("geo_longitude"))
|
||||
|
||||
|
||||
class TestPetOwnerRetrieve(ContractTestCase):
|
||||
"""GET /mascotas/api/v1/pet-owners/{id}/"""
|
||||
|
||||
def test_get_by_id_returns_200(self):
|
||||
"""GET pet owner by ID returns owner details"""
|
||||
# Create owner first
|
||||
data = sample_pet_owner()
|
||||
create_response = self.post(Endpoints.PET_OWNERS, data)
|
||||
owner_id = create_response.data["id"]
|
||||
|
||||
response = self.get(Endpoints.PET_OWNER_DETAIL.format(id=owner_id))
|
||||
|
||||
self.assert_status(response, 200)
|
||||
self.assertEqual(response.data["id"], owner_id)
|
||||
self.assert_has_fields(response.data, "id", "first_name", "last_name", "address", "pets")
|
||||
|
||||
def test_nonexistent_returns_404(self):
|
||||
"""GET non-existent owner returns 404"""
|
||||
response = self.get(Endpoints.PET_OWNER_DETAIL.format(id=999999))
|
||||
|
||||
self.assert_status(response, 404)
|
||||
|
||||
|
||||
class TestPetOwnerList(ContractTestCase):
|
||||
"""GET /mascotas/api/v1/pet-owners/"""
|
||||
|
||||
def test_list_with_email_filter_returns_200(self):
|
||||
"""GET pet owners filtered by email returns 200"""
|
||||
# Filter by email to avoid loading 14k+ records (no pagination on this endpoint)
|
||||
response = self.get(Endpoints.PET_OWNERS, params={"email": "nonexistent@test.com"})
|
||||
|
||||
self.assert_status(response, 200)
|
||||
|
||||
def test_list_filter_by_email_works(self):
|
||||
"""Can filter pet owners by email"""
|
||||
# Create a pet owner first
|
||||
data = sample_pet_owner()
|
||||
self.post(Endpoints.PET_OWNERS, data)
|
||||
|
||||
# Filter by that email
|
||||
response = self.get(Endpoints.PET_OWNERS, params={"email": data["email"]})
|
||||
|
||||
self.assert_status(response, 200)
|
||||
# Should find exactly one
|
||||
results = response.data if isinstance(response.data, list) else response.data.get("results", [])
|
||||
self.assertEqual(len(results), 1)
|
||||
self.assertEqual(results[0]["email"], data["email"])
|
||||
171
station/tools/tester/tests/mascotas/test_pets.py
Normal file
171
station/tools/tester/tests/mascotas/test_pets.py
Normal file
@@ -0,0 +1,171 @@
|
||||
"""
|
||||
Contract Tests: Pets API
|
||||
|
||||
Endpoint: /mascotas/api/v1/pets/
|
||||
App: mascotas
|
||||
|
||||
Related Tickets:
|
||||
- VET-537: Paso 1 - Test creación de la mascota vinculada al petowner invitado
|
||||
- VET-535: Establecer y definir test para las apis vinculadas al procesos de solicitar turno general
|
||||
|
||||
Context: In the turnero general flow (Step 1), a pet is created and linked to the guest
|
||||
pet owner. The pet data (type, name, neutered status) combined with the owner's address
|
||||
is used to filter available services and veterinarians.
|
||||
"""
|
||||
|
||||
from ..base import ContractTestCase
|
||||
from ..endpoints import Endpoints
|
||||
from ..helpers import (
|
||||
sample_pet_owner,
|
||||
unique_email,
|
||||
SAMPLE_CAT,
|
||||
SAMPLE_DOG,
|
||||
SAMPLE_NEUTERED_CAT,
|
||||
)
|
||||
|
||||
|
||||
class TestPetCreate(ContractTestCase):
|
||||
"""POST /mascotas/api/v1/pets/
|
||||
|
||||
VET-537: Tests for pet creation linked to guest petowner (Step 1 of turnero flow)
|
||||
"""
|
||||
|
||||
def _create_owner(self):
|
||||
"""Helper to create a pet owner"""
|
||||
data = sample_pet_owner(unique_email("pet_owner"))
|
||||
response = self.post(Endpoints.PET_OWNERS, data)
|
||||
return response.data["id"]
|
||||
|
||||
def test_create_cat_returns_201(self):
|
||||
"""
|
||||
Creating a cat returns 201 with pet_type CAT.
|
||||
|
||||
Request (from production turnero):
|
||||
POST /mascotas/api/v1/pets/
|
||||
{
|
||||
"name": "Luna",
|
||||
"pet_type": "CAT",
|
||||
"is_neutered": false,
|
||||
"owner": 12345
|
||||
}
|
||||
|
||||
Response (201):
|
||||
{
|
||||
"id": 67890,
|
||||
"name": "Luna",
|
||||
"pet_type": "CAT",
|
||||
"is_neutered": false,
|
||||
"owner": 12345,
|
||||
"breed": null,
|
||||
"birth_date": null,
|
||||
"created_at": "2024-12-11T15:31:15.123Z"
|
||||
}
|
||||
"""
|
||||
owner_id = self._create_owner()
|
||||
data = {**SAMPLE_CAT, "owner": owner_id}
|
||||
|
||||
response = self.post(Endpoints.PETS, data)
|
||||
|
||||
self.assert_status(response, 201)
|
||||
self.assert_has_fields(response.data, "id", "name", "pet_type", "owner")
|
||||
self.assertEqual(response.data["pet_type"], "CAT")
|
||||
self.assertEqual(response.data["name"], "TestCat")
|
||||
|
||||
def test_create_dog_returns_201(self):
|
||||
"""
|
||||
Creating a dog returns 201 with pet_type DOG.
|
||||
|
||||
Validates that both major pet types (CAT/DOG) are supported in the contract.
|
||||
"""
|
||||
owner_id = self._create_owner()
|
||||
data = {**SAMPLE_DOG, "owner": owner_id}
|
||||
|
||||
response = self.post(Endpoints.PETS, data)
|
||||
|
||||
self.assert_status(response, 201)
|
||||
self.assertEqual(response.data["pet_type"], "DOG")
|
||||
|
||||
def test_neutered_status_persisted(self):
|
||||
"""
|
||||
Neutered status is persisted correctly.
|
||||
|
||||
This is important business data that may affect service recommendations
|
||||
or veterinarian assignments.
|
||||
"""
|
||||
owner_id = self._create_owner()
|
||||
data = {**SAMPLE_NEUTERED_CAT, "owner": owner_id}
|
||||
|
||||
response = self.post(Endpoints.PETS, data)
|
||||
|
||||
self.assert_status(response, 201)
|
||||
self.assertTrue(response.data["is_neutered"])
|
||||
|
||||
def test_requires_owner(self):
|
||||
"""
|
||||
Pet creation without owner should fail.
|
||||
|
||||
Enforces the required link between pet and petowner - critical for the
|
||||
turnero flow where pets must be associated with the guest user.
|
||||
"""
|
||||
data = SAMPLE_CAT.copy()
|
||||
|
||||
response = self.post(Endpoints.PETS, data)
|
||||
|
||||
self.assert_status(response, 400)
|
||||
|
||||
def test_invalid_pet_type_rejected(self):
|
||||
"""
|
||||
Invalid pet_type should be rejected.
|
||||
|
||||
Currently only CAT and DOG are supported. This test ensures the contract
|
||||
validates pet types correctly.
|
||||
"""
|
||||
owner_id = self._create_owner()
|
||||
data = {
|
||||
"name": "InvalidPet",
|
||||
"pet_type": "HAMSTER",
|
||||
"owner": owner_id,
|
||||
}
|
||||
|
||||
response = self.post(Endpoints.PETS, data)
|
||||
|
||||
self.assert_status(response, 400)
|
||||
|
||||
|
||||
class TestPetRetrieve(ContractTestCase):
|
||||
"""GET /mascotas/api/v1/pets/{id}/"""
|
||||
|
||||
def _create_owner_with_pet(self):
|
||||
"""Helper to create owner and pet"""
|
||||
owner_data = sample_pet_owner(unique_email("pet_owner"))
|
||||
owner_response = self.post(Endpoints.PET_OWNERS, owner_data)
|
||||
owner_id = owner_response.data["id"]
|
||||
|
||||
pet_data = {**SAMPLE_CAT, "owner": owner_id}
|
||||
pet_response = self.post(Endpoints.PETS, pet_data)
|
||||
return pet_response.data["id"]
|
||||
|
||||
def test_get_by_id_returns_200(self):
|
||||
"""GET pet by ID returns pet details"""
|
||||
pet_id = self._create_owner_with_pet()
|
||||
|
||||
response = self.get(Endpoints.PET_DETAIL.format(id=pet_id))
|
||||
|
||||
self.assert_status(response, 200)
|
||||
self.assertEqual(response.data["id"], pet_id)
|
||||
|
||||
def test_nonexistent_returns_404(self):
|
||||
"""GET non-existent pet returns 404"""
|
||||
response = self.get(Endpoints.PET_DETAIL.format(id=999999))
|
||||
|
||||
self.assert_status(response, 404)
|
||||
|
||||
|
||||
class TestPetList(ContractTestCase):
|
||||
"""GET /mascotas/api/v1/pets/"""
|
||||
|
||||
def test_list_returns_200(self):
|
||||
"""GET pets list returns 200 (with pagination)"""
|
||||
response = self.get(Endpoints.PETS, params={"page_size": 1})
|
||||
|
||||
self.assert_status(response, 200)
|
||||
1
station/tools/tester/tests/productos/__init__.py
Normal file
1
station/tools/tester/tests/productos/__init__.py
Normal file
@@ -0,0 +1 @@
|
||||
# Contract tests for productos app endpoints
|
||||
149
station/tools/tester/tests/productos/test_cart.py
Normal file
149
station/tools/tester/tests/productos/test_cart.py
Normal file
@@ -0,0 +1,149 @@
|
||||
"""
|
||||
Contract Tests: Cart API
|
||||
|
||||
Endpoint: /productos/api/v1/cart/
|
||||
App: productos
|
||||
|
||||
Related Tickets:
|
||||
- VET-538: Test creación de cart vinculado al petowner
|
||||
- VET-535: Establecer y definir test para las apis vinculadas al procesos de solicitar turno general
|
||||
|
||||
Context: In the turnero general flow (Step 2), a cart is created for the guest petowner.
|
||||
The cart holds selected services and calculates price summary (subtotals, discounts, total).
|
||||
|
||||
TBD: CartViewSet needs pagination/filtering - list endpoint hangs on large dataset.
|
||||
See productos/api/v1/viewsets.py:93
|
||||
"""
|
||||
|
||||
import pytest
|
||||
from ..base import ContractTestCase
|
||||
from ..endpoints import Endpoints
|
||||
from ..helpers import sample_pet_owner, unique_email
|
||||
|
||||
|
||||
class TestCartCreate(ContractTestCase):
|
||||
"""POST /productos/api/v1/cart/
|
||||
|
||||
VET-538: Tests for cart creation linked to petowner (Step 2 of turnero flow)
|
||||
"""
|
||||
|
||||
def _create_petowner(self):
|
||||
"""Helper to create a pet owner"""
|
||||
data = sample_pet_owner(unique_email("cart_owner"))
|
||||
response = self.post(Endpoints.PET_OWNERS, data)
|
||||
return response.data["id"]
|
||||
|
||||
def test_create_cart_for_petowner(self):
|
||||
"""
|
||||
Creating a cart returns 201 and links to petowner.
|
||||
|
||||
Request (from production turnero):
|
||||
POST /productos/api/v1/cart/
|
||||
{
|
||||
"petowner": 12345,
|
||||
"services": []
|
||||
}
|
||||
|
||||
Response (201):
|
||||
{
|
||||
"id": 789,
|
||||
"petowner": 12345,
|
||||
"veterinarian": null,
|
||||
"items": [],
|
||||
"resume": [
|
||||
{"concept": "SUBTOTAL", "amount": "0.00", "order": 1},
|
||||
{"concept": "COSTO_SERVICIO", "amount": "0.00", "order": 2},
|
||||
{"concept": "DESCUENTO", "amount": "0.00", "order": 3},
|
||||
{"concept": "TOTAL", "amount": "0.00", "order": 4},
|
||||
{"concept": "ADELANTO", "amount": "0.00", "order": 5}
|
||||
],
|
||||
"extra_details": "",
|
||||
"pets": [],
|
||||
"pet_reasons": []
|
||||
}
|
||||
"""
|
||||
owner_id = self._create_petowner()
|
||||
data = {
|
||||
"petowner": owner_id,
|
||||
"services": []
|
||||
}
|
||||
|
||||
response = self.post(Endpoints.CART, data)
|
||||
|
||||
self.assert_status(response, 201)
|
||||
self.assert_has_fields(response.data, "id", "petowner", "items")
|
||||
self.assertEqual(response.data["petowner"], owner_id)
|
||||
|
||||
def test_cart_has_price_summary_fields(self):
|
||||
"""
|
||||
Cart response includes price summary fields.
|
||||
|
||||
These fields are critical for turnero flow - user needs to see:
|
||||
- resume: array with price breakdown (SUBTOTAL, DESCUENTO, TOTAL, etc)
|
||||
- items: cart items with individual pricing
|
||||
"""
|
||||
owner_id = self._create_petowner()
|
||||
data = {"petowner": owner_id, "services": []}
|
||||
|
||||
response = self.post(Endpoints.CART, data)
|
||||
|
||||
self.assert_status(response, 201)
|
||||
# Price fields should exist (may be 0 for empty cart)
|
||||
self.assert_has_fields(response.data, "resume", "items")
|
||||
|
||||
def test_empty_cart_has_zero_totals(self):
|
||||
"""
|
||||
Empty cart (no services) has zero price totals.
|
||||
|
||||
Validates initial state before services are added.
|
||||
"""
|
||||
owner_id = self._create_petowner()
|
||||
data = {"petowner": owner_id, "services": []}
|
||||
|
||||
response = self.post(Endpoints.CART, data)
|
||||
|
||||
self.assert_status(response, 201)
|
||||
# Empty cart should have resume with zero amounts
|
||||
self.assertIn("resume", response.data)
|
||||
# Find TOTAL concept in resume
|
||||
total_item = next((item for item in response.data["resume"] if item["concept"] == "TOTAL"), None)
|
||||
self.assertIsNotNone(total_item)
|
||||
self.assertEqual(total_item["amount"], "0.00")
|
||||
|
||||
|
||||
class TestCartRetrieve(ContractTestCase):
|
||||
"""GET /productos/api/v1/cart/{id}/"""
|
||||
|
||||
def _create_petowner_with_cart(self):
|
||||
"""Helper to create petowner and cart"""
|
||||
owner_data = sample_pet_owner(unique_email("cart_owner"))
|
||||
owner_response = self.post(Endpoints.PET_OWNERS, owner_data)
|
||||
owner_id = owner_response.data["id"]
|
||||
|
||||
cart_data = {"petowner": owner_id, "services": []}
|
||||
cart_response = self.post(Endpoints.CART, cart_data)
|
||||
return cart_response.data["id"]
|
||||
|
||||
def test_get_cart_by_id_returns_200(self):
|
||||
"""GET cart by ID returns cart details"""
|
||||
cart_id = self._create_petowner_with_cart()
|
||||
|
||||
response = self.get(Endpoints.CART_DETAIL.format(id=cart_id))
|
||||
|
||||
self.assert_status(response, 200)
|
||||
self.assertEqual(response.data["id"], cart_id)
|
||||
|
||||
def test_detail_returns_404_for_nonexistent(self):
|
||||
"""GET /cart/{id}/ returns 404 for non-existent cart"""
|
||||
response = self.get(Endpoints.CART_DETAIL.format(id=999999))
|
||||
self.assert_status(response, 404)
|
||||
|
||||
|
||||
class TestCartList(ContractTestCase):
|
||||
"""GET /productos/api/v1/cart/"""
|
||||
|
||||
@pytest.mark.skip(reason="TBD: Cart list hangs - needs pagination/filtering. Checking if dead code.")
|
||||
def test_list_returns_200(self):
|
||||
"""GET /cart/ returns 200"""
|
||||
response = self.get(Endpoints.CART)
|
||||
self.assert_status(response, 200)
|
||||
112
station/tools/tester/tests/productos/test_categories.py
Normal file
112
station/tools/tester/tests/productos/test_categories.py
Normal file
@@ -0,0 +1,112 @@
|
||||
"""
|
||||
Contract Tests: Categories API
|
||||
|
||||
Endpoint: /productos/api/v1/categories/
|
||||
App: productos
|
||||
|
||||
Returns service categories filtered by location availability.
|
||||
Categories without available services in location should be hidden.
|
||||
"""
|
||||
|
||||
from ..base import ContractTestCase
|
||||
from ..endpoints import Endpoints
|
||||
|
||||
|
||||
class TestCategoriesList(ContractTestCase):
|
||||
"""GET /productos/api/v1/categories/"""
|
||||
|
||||
def test_list_returns_200(self):
|
||||
"""GET categories returns 200"""
|
||||
response = self.get(Endpoints.CATEGORIES, params={"page_size": 10})
|
||||
|
||||
self.assert_status(response, 200)
|
||||
|
||||
def test_returns_list(self):
|
||||
"""GET categories returns a list"""
|
||||
response = self.get(Endpoints.CATEGORIES, params={"page_size": 10})
|
||||
|
||||
self.assert_status(response, 200)
|
||||
data = response.data
|
||||
# Handle paginated or non-paginated response
|
||||
categories = data["results"] if isinstance(data, dict) and "results" in data else data
|
||||
self.assertIsInstance(categories, list)
|
||||
|
||||
def test_categories_have_required_fields(self):
|
||||
"""
|
||||
Each category should have id, name, and description.
|
||||
|
||||
Request (from production turnero):
|
||||
GET /productos/api/v1/categories/
|
||||
|
||||
Response (200):
|
||||
[
|
||||
{
|
||||
"id": 1,
|
||||
"name": "Consulta General",
|
||||
"description": "Consultas veterinarias generales"
|
||||
},
|
||||
{
|
||||
"id": 2,
|
||||
"name": "Vacunación",
|
||||
"description": "Servicios de vacunación"
|
||||
}
|
||||
]
|
||||
"""
|
||||
response = self.get(Endpoints.CATEGORIES, params={"page_size": 10})
|
||||
|
||||
data = response.data
|
||||
categories = data["results"] if isinstance(data, dict) and "results" in data else data
|
||||
|
||||
if len(categories) > 0:
|
||||
category = categories[0]
|
||||
self.assert_has_fields(category, "id", "name", "description")
|
||||
|
||||
def test_only_active_categories_returned(self):
|
||||
"""
|
||||
Only active categories are returned in the list.
|
||||
|
||||
Business rule: Inactive categories should not be visible to users.
|
||||
"""
|
||||
response = self.get(Endpoints.CATEGORIES, params={"page_size": 50})
|
||||
|
||||
data = response.data
|
||||
categories = data["results"] if isinstance(data, dict) and "results" in data else data
|
||||
|
||||
# All categories should be active (no 'active': False in response)
|
||||
# This is enforced at queryset level in CategoryViewSet
|
||||
self.assertIsInstance(categories, list)
|
||||
|
||||
|
||||
class TestCategoryRetrieve(ContractTestCase):
|
||||
"""GET /productos/api/v1/categories/{id}/"""
|
||||
|
||||
def test_get_category_by_id_returns_200(self):
|
||||
"""
|
||||
GET category by ID returns category details.
|
||||
|
||||
First fetch list to get a valid ID, then retrieve that category.
|
||||
"""
|
||||
# Get first category
|
||||
list_response = self.get(Endpoints.CATEGORIES, params={"page_size": 1})
|
||||
if list_response.status_code != 200:
|
||||
self.skipTest("No categories available for testing")
|
||||
|
||||
data = list_response.data
|
||||
categories = data["results"] if isinstance(data, dict) and "results" in data else data
|
||||
|
||||
if len(categories) == 0:
|
||||
self.skipTest("No categories available for testing")
|
||||
|
||||
category_id = categories[0]["id"]
|
||||
|
||||
# Test detail endpoint
|
||||
response = self.get(f"{Endpoints.CATEGORIES}{category_id}/")
|
||||
|
||||
self.assert_status(response, 200)
|
||||
self.assertEqual(response.data["id"], category_id)
|
||||
|
||||
def test_nonexistent_category_returns_404(self):
|
||||
"""GET non-existent category returns 404"""
|
||||
response = self.get(f"{Endpoints.CATEGORIES}999999/")
|
||||
|
||||
self.assert_status(response, 404)
|
||||
122
station/tools/tester/tests/productos/test_services.py
Normal file
122
station/tools/tester/tests/productos/test_services.py
Normal file
@@ -0,0 +1,122 @@
|
||||
"""
|
||||
Contract Tests: Services API
|
||||
|
||||
Endpoint: /productos/api/v1/services/
|
||||
App: productos
|
||||
|
||||
Returns available veterinary services filtered by pet type and location.
|
||||
Critical for vet assignment automation.
|
||||
"""
|
||||
|
||||
from ..base import ContractTestCase
|
||||
from ..endpoints import Endpoints
|
||||
from ..helpers import sample_pet_owner, unique_email, SAMPLE_CAT, SAMPLE_DOG
|
||||
|
||||
|
||||
class TestServicesList(ContractTestCase):
|
||||
"""GET /productos/api/v1/services/"""
|
||||
|
||||
def test_list_returns_200(self):
|
||||
"""GET services returns 200"""
|
||||
response = self.get(Endpoints.SERVICES, params={"page_size": 10})
|
||||
|
||||
self.assert_status(response, 200)
|
||||
|
||||
def test_returns_list(self):
|
||||
"""GET services returns a list"""
|
||||
response = self.get(Endpoints.SERVICES, params={"page_size": 10})
|
||||
|
||||
self.assert_status(response, 200)
|
||||
data = response.data
|
||||
# Handle paginated or non-paginated response
|
||||
services = data["results"] if isinstance(data, dict) and "results" in data else data
|
||||
self.assertIsInstance(services, list)
|
||||
|
||||
def test_services_have_required_fields(self):
|
||||
"""Each service should have id and name"""
|
||||
response = self.get(Endpoints.SERVICES, params={"page_size": 10})
|
||||
|
||||
data = response.data
|
||||
services = data["results"] if isinstance(data, dict) and "results" in data else data
|
||||
|
||||
if len(services) > 0:
|
||||
service = services[0]
|
||||
self.assert_has_fields(service, "id", "name")
|
||||
|
||||
def test_accepts_pet_id_filter(self):
|
||||
"""Services endpoint accepts pet_id parameter"""
|
||||
response = self.get(Endpoints.SERVICES, params={"pet_id": 1})
|
||||
|
||||
# Should not error (even if pet doesn't exist, endpoint should handle gracefully)
|
||||
self.assertIn(response.status_code, [200, 404])
|
||||
|
||||
|
||||
class TestServicesFiltering(ContractTestCase):
|
||||
"""GET /productos/api/v1/services/ with filters"""
|
||||
|
||||
def _create_owner_with_cat(self):
|
||||
"""Helper to create owner and cat"""
|
||||
owner_data = sample_pet_owner(unique_email("service_owner"))
|
||||
owner_response = self.post(Endpoints.PET_OWNERS, owner_data)
|
||||
owner_id = owner_response.data["id"]
|
||||
|
||||
pet_data = {**SAMPLE_CAT, "owner": owner_id}
|
||||
pet_response = self.post(Endpoints.PETS, pet_data)
|
||||
return pet_response.data["id"]
|
||||
|
||||
def _create_owner_with_dog(self):
|
||||
"""Helper to create owner and dog"""
|
||||
owner_data = sample_pet_owner(unique_email("service_owner"))
|
||||
owner_response = self.post(Endpoints.PET_OWNERS, owner_data)
|
||||
owner_id = owner_response.data["id"]
|
||||
|
||||
pet_data = {**SAMPLE_DOG, "owner": owner_id}
|
||||
pet_response = self.post(Endpoints.PETS, pet_data)
|
||||
return pet_response.data["id"]
|
||||
|
||||
def test_filter_services_by_cat(self):
|
||||
"""
|
||||
Services filtered by cat pet_id returns appropriate services.
|
||||
|
||||
Request (from production turnero):
|
||||
GET /productos/api/v1/services/?pet_id=123
|
||||
|
||||
Response structure validates services available for CAT type.
|
||||
"""
|
||||
cat_id = self._create_owner_with_cat()
|
||||
response = self.get(Endpoints.SERVICES, params={"pet_id": cat_id, "page_size": 10})
|
||||
|
||||
# Should return services or handle gracefully
|
||||
self.assertIn(response.status_code, [200, 404])
|
||||
if response.status_code == 200:
|
||||
data = response.data
|
||||
services = data["results"] if isinstance(data, dict) and "results" in data else data
|
||||
self.assertIsInstance(services, list)
|
||||
|
||||
def test_filter_services_by_dog(self):
|
||||
"""
|
||||
Services filtered by dog pet_id returns appropriate services.
|
||||
|
||||
Different pet types may have different service availability.
|
||||
"""
|
||||
dog_id = self._create_owner_with_dog()
|
||||
response = self.get(Endpoints.SERVICES, params={"pet_id": dog_id, "page_size": 10})
|
||||
|
||||
self.assertIn(response.status_code, [200, 404])
|
||||
if response.status_code == 200:
|
||||
data = response.data
|
||||
services = data["results"] if isinstance(data, dict) and "results" in data else data
|
||||
self.assertIsInstance(services, list)
|
||||
|
||||
def test_services_without_pet_returns_all(self):
|
||||
"""
|
||||
Services without pet filter returns all available services.
|
||||
|
||||
Used for initial service browsing before pet selection.
|
||||
"""
|
||||
response = self.get(Endpoints.SERVICES, params={"page_size": 10})
|
||||
|
||||
self.assert_status(response, 200)
|
||||
data = response.data
|
||||
services = data["results"] if isinstance(data, dict) and "results" in data else data
|
||||
self.assertIsInstance(services, list)
|
||||
1
station/tools/tester/tests/solicitudes/__init__.py
Normal file
1
station/tools/tester/tests/solicitudes/__init__.py
Normal file
@@ -0,0 +1 @@
|
||||
# Contract tests for solicitudes app endpoints
|
||||
@@ -0,0 +1,56 @@
|
||||
"""
|
||||
Contract Tests: Service Requests API
|
||||
|
||||
Endpoint: /solicitudes/service-requests/
|
||||
App: solicitudes
|
||||
|
||||
Creates and manages service requests (appointment bookings).
|
||||
"""
|
||||
|
||||
from ..base import ContractTestCase
|
||||
from ..endpoints import Endpoints
|
||||
|
||||
|
||||
class TestServiceRequestList(ContractTestCase):
|
||||
"""GET /solicitudes/service-requests/"""
|
||||
|
||||
def test_list_returns_200(self):
|
||||
"""GET should return list of service requests (with pagination)"""
|
||||
response = self.get(Endpoints.SERVICE_REQUESTS, params={"page_size": 1})
|
||||
|
||||
self.assert_status(response, 200)
|
||||
|
||||
def test_returns_list(self):
|
||||
"""GET should return a list (possibly paginated)"""
|
||||
response = self.get(Endpoints.SERVICE_REQUESTS, params={"page_size": 10})
|
||||
|
||||
data = response.data
|
||||
requests_list = data["results"] if isinstance(data, dict) and "results" in data else data
|
||||
self.assertIsInstance(requests_list, list)
|
||||
|
||||
|
||||
class TestServiceRequestFields(ContractTestCase):
|
||||
"""Field validation for service requests"""
|
||||
|
||||
def test_has_state_field(self):
|
||||
"""Service requests should have a state/status field"""
|
||||
response = self.get(Endpoints.SERVICE_REQUESTS, params={"page_size": 1})
|
||||
|
||||
data = response.data
|
||||
requests_list = data["results"] if isinstance(data, dict) and "results" in data else data
|
||||
|
||||
if len(requests_list) > 0:
|
||||
req = requests_list[0]
|
||||
has_state = "state" in req or "status" in req
|
||||
self.assertTrue(has_state, "Service request should have state/status field")
|
||||
|
||||
|
||||
class TestServiceRequestCreate(ContractTestCase):
|
||||
"""POST /solicitudes/service-requests/"""
|
||||
|
||||
def test_create_requires_fields(self):
|
||||
"""Creating service request with empty data should fail"""
|
||||
response = self.post(Endpoints.SERVICE_REQUESTS, {})
|
||||
|
||||
# Should return 400 with validation errors
|
||||
self.assert_status(response, 400)
|
||||
1
station/tools/tester/tests/workflows/__init__.py
Normal file
1
station/tools/tester/tests/workflows/__init__.py
Normal file
@@ -0,0 +1 @@
|
||||
# Contract tests for frontend workflows (compositions of endpoint tests)
|
||||
65
station/tools/tester/tests/workflows/test_turnero_general.py
Normal file
65
station/tools/tester/tests/workflows/test_turnero_general.py
Normal file
@@ -0,0 +1,65 @@
|
||||
"""
|
||||
Workflow Test: General Turnero Flow
|
||||
|
||||
This is a COMPOSITION test that validates the full turnero flow
|
||||
by calling endpoints in sequence. Use this to ensure the flow works
|
||||
end-to-end, but individual endpoint behavior is tested in app folders.
|
||||
|
||||
Flow:
|
||||
1. Check coverage at address
|
||||
2. Create pet owner (guest with mock email)
|
||||
3. Create pet for owner
|
||||
4. Get available services for pet
|
||||
5. Create service request
|
||||
|
||||
Frontend route: /turnos/
|
||||
User type: Guest (invitado)
|
||||
"""
|
||||
|
||||
from ..base import ContractTestCase
|
||||
from ..endpoints import Endpoints
|
||||
from ..helpers import sample_pet_owner, unique_email, SAMPLE_CAT
|
||||
|
||||
|
||||
class TestTurneroGeneralFlow(ContractTestCase):
|
||||
"""
|
||||
End-to-end flow test for general turnero.
|
||||
|
||||
Note: This tests the SEQUENCE of calls, not individual endpoint behavior.
|
||||
Individual endpoint tests are in mascotas/, productos/, solicitudes/.
|
||||
"""
|
||||
|
||||
def test_full_flow_sequence(self):
|
||||
"""
|
||||
Complete turnero flow should work end-to-end.
|
||||
|
||||
This test validates that a guest user can complete the full
|
||||
appointment booking flow.
|
||||
"""
|
||||
# Step 0: Check coverage at address
|
||||
coverage_response = self.get(Endpoints.COVERAGE_CHECK, params={
|
||||
"lat": -34.6037,
|
||||
"lng": -58.3816,
|
||||
})
|
||||
self.assert_status(coverage_response, 200)
|
||||
|
||||
# Step 1: Create pet owner (frontend creates mock email for guest)
|
||||
mock_email = unique_email("invitado")
|
||||
owner_data = sample_pet_owner(mock_email)
|
||||
owner_response = self.post(Endpoints.PET_OWNERS, owner_data)
|
||||
self.assert_status(owner_response, 201)
|
||||
owner_id = owner_response.data["id"]
|
||||
|
||||
# Step 2: Create pet for owner
|
||||
pet_data = {**SAMPLE_CAT, "owner": owner_id}
|
||||
pet_response = self.post(Endpoints.PETS, pet_data)
|
||||
self.assert_status(pet_response, 201)
|
||||
pet_id = pet_response.data["id"]
|
||||
|
||||
# Step 3: Get services (optionally filtered by pet)
|
||||
services_response = self.get(Endpoints.SERVICES, params={"pet_id": pet_id})
|
||||
# Services endpoint may return 200 even without pet filter
|
||||
self.assertIn(services_response.status_code, [200, 404])
|
||||
|
||||
# Note: Steps 4-5 (select date/time, create service request) require
|
||||
# more setup (available times, cart, etc.) and are tested separately.
|
||||
Reference in New Issue
Block a user