
Tester - HTTP Contract Test Runner

Web UI for discovering and running contract tests.

Quick Start

# Sync tests from production repo (local dev)
/home/mariano/wdir/ama/core_nest/pawprint/ctrl/sync-tests.sh

# Run locally
cd /home/mariano/wdir/ama/pawprint/ward
python -m tools.tester

# Open in browser
http://localhost:12003/tester

Architecture

Test Definitions → Tester (Runner + UI) → Target API

amar_django_back_contracts/
└── tests/contracts/          ← Test definitions (source of truth)
    ├── mascotas/
    ├── productos/
    └── workflows/

ward/tools/tester/
├── tests/                    ← Synced from contracts (deployment)
│   ├── mascotas/
│   ├── productos/
│   └── workflows/
├── base.py                   ← HTTP test base class
├── core.py                   ← Test discovery & execution
├── api.py                    ← FastAPI endpoints
└── templates/                ← Web UI
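A test definition pairs a request spec with its expectations, and base.py provides the shared base class. The actual base.py API is not shown here; the following is a minimal sketch in which the class name and fields are hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a contract test definition;
# the real base.py API may differ.
@dataclass
class ContractTest:
    name: str                  # e.g. "mascotas/list_owner_pets"
    method: str                # HTTP method
    path: str                  # path relative to the target URL
    expected_status: int = 200
    # keys the JSON body must contain
    expected_keys: list = field(default_factory=list)

    def check(self, status: int, body: dict) -> bool:
        """Return True when the response matches the contract."""
        if status != self.expected_status:
            return False
        return all(key in body for key in self.expected_keys)
```

Discovery (core.py) can then collect such definitions per domain directory and hand them to the runner.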

Strategy: Separation of Concerns

  1. Tests live in production repo (amar_django_back_contracts)

    • Developers write tests alongside code
    • Tests are versioned with the API
    • PR reviews include test changes
  2. Tester consumes tests (ward/tools/tester)

    • Provides web UI for visibility
    • Runs tests against any target (dev, stage, prod)
    • Shows test coverage to product team
  3. Deployment syncs tests

    • sync-tests.sh copies tests from contracts to tester
    • Deployment script includes test sync
    • Server always has latest tests

Configuration

Single Environment (.env)

CONTRACT_TEST_URL=https://demo.amarmascotas.ar
CONTRACT_TEST_API_KEY=your-api-key-here

Multiple Environments (environments.json)

Configure multiple target environments with individual tokens:

[
  {
    "id": "demo",
    "name": "Demo",
    "url": "https://demo.amarmascotas.ar",
    "api_key": "",
    "description": "Demo environment for testing",
    "default": true
  },
  {
    "id": "dev",
    "name": "Development",
    "url": "https://dev.amarmascotas.ar",
    "api_key": "dev-token-here",
    "description": "Development environment"
  },
  {
    "id": "prod",
    "name": "Production",
    "url": "https://amarmascotas.ar",
    "api_key": "prod-token-here",
    "description": "Production (use with caution!)"
  }
]
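A loader for this file might look like the sketch below. The function name and the fallback rule (first entry wins when nothing is marked `"default"`) are assumptions; only the JSON shape comes from the example above:

```python
import json

def load_environments(path: str) -> tuple[list, dict]:
    """Parse environments.json and return (all_envs, default_env).

    Falls back to the first entry when no environment is marked
    "default". (Hypothetical helper; the tester's actual loader
    may differ.)
    """
    with open(path, encoding="utf-8") as fh:
        envs = json.load(fh)
    default = next((e for e in envs if e.get("default")), envs[0])
    return envs, default
```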

Environment Selector: Available in UI header on both Runner and Filters pages. Selection persists via localStorage.

Web UI Features

  • Filters: Advanced filtering by domain, module, status, and search
  • Runner: Execute tests with real-time progress tracking
  • Multi-Environment: Switch between dev/stage/prod with per-environment tokens
  • URL State: Filter state persists via URL when running tests
  • Real-time Status: See test results as they run

API Endpoints

GET  /tools/tester/                        # Runner UI
GET  /tools/tester/filters                 # Filters UI
GET  /tools/tester/api/tests               # List all tests
GET  /tools/tester/api/environments        # List environments
POST /tools/tester/api/environment/select  # Switch environment
POST /tools/tester/api/run                 # Start test run
GET  /tools/tester/api/run/{run_id}        # Get run status (polling)
GET  /tools/tester/api/runs                # List all runs
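Since run status is retrieved by polling, a client loops on the run endpoint until the run leaves a running state. A sketch with an injectable `fetch` function (the endpoint path is from the list above; the `"status"` values and response shape are assumptions):

```python
import time

def poll_run(run_id: str, fetch, interval: float = 1.0,
             timeout: float = 300.0) -> dict:
    """Poll GET /tools/tester/api/run/{run_id} until the run finishes.

    `fetch(path)` must return the decoded JSON status payload. The
    "pending"/"running" status values checked here are assumptions
    about the API's response shape.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        payload = fetch(f"/tools/tester/api/run/{run_id}")
        if payload.get("status") not in ("pending", "running"):
            return payload
        time.sleep(interval)
    raise TimeoutError(f"run {run_id} did not finish within {timeout}s")
```

In production `fetch` would wrap an HTTP GET; in tests it can be a stub.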

Usage Flow

From Filters to Runner

  1. Go to /tools/tester/filters
  2. Filter tests (domain, module, search)
  3. Select tests to run
  4. Click "Run Selected"
  5. → Redirects to Runner with filters applied and auto-starts execution
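The redirect in step 5 carries the filter state as query parameters. A sketch of how such a deep link could be assembled (the helper name is hypothetical; the parameter names are the documented ones):

```python
from urllib.parse import urlencode

def runner_url(run_id=None, domains=(), modules=(),
               search=None, status=()):
    """Build a Runner deep link from filter state (hypothetical helper).

    Multi-value filters are comma-separated, matching the
    documented URL parameters.
    """
    params = {}
    if run_id:
        params["run"] = run_id
    if domains:
        params["domains"] = ",".join(domains)
    if modules:
        params["modules"] = ",".join(modules)
    if search:
        params["search"] = search
    if status:
        params["status"] = ",".join(status)
    query = urlencode(params)
    return "/tools/tester/" + (f"?{query}" if query else "")
```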

URL Parameters

Runner accepts URL params for deep linking:

/tools/tester/?run=abc123&domains=mascotas&search=owner

  • run - Auto-load results for this run ID
  • domains - Filter by domains (comma-separated)
  • modules - Filter by modules (comma-separated)
  • search - Search term for test names
  • status - Filter by status (passed,failed,skipped)
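On the Runner side these parameters are split back into filter criteria. A parsing sketch (the function name is hypothetical; the parameter names come from the list above):

```python
from urllib.parse import urlparse, parse_qs

def parse_runner_params(url: str) -> dict:
    """Split a Runner deep link into filter criteria (hypothetical helper).

    Comma-separated params become lists; `run` and `search`
    stay scalar.
    """
    qs = parse_qs(urlparse(url).query)

    def first(key):
        return qs.get(key, [None])[0]

    def csv(key):
        raw = first(key)
        return raw.split(",") if raw else []

    return {
        "run": first("run"),
        "domains": csv("domains"),
        "modules": csv("modules"),
        "search": first("search"),
        "status": csv("status"),
    }
```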

Deployment

Tests are synced during deployment:

# Full deployment (includes test sync)
cd /home/mariano/wdir/ama/pawprint/deploy
./deploy.sh

# Or sync tests only
/home/mariano/wdir/ama/core_nest/pawprint/ctrl/sync-tests.sh

Why This Design?

Problem: Tests scattered, no visibility, hard to demonstrate value

Solution:

  • Tests in production repo (developer workflow)
  • Tester provides visibility (product team, demos)
  • Separation allows independent evolution

Benefits:

  • Product team sees test coverage
  • Demos show "quality dashboard"
  • Tests protect marketplace automation work
  • Non-devs can run tests via UI

Paths

  • Production tests: /home/mariano/wdir/ama/amar_django_back_contracts/tests/contracts/
  • Sync script: /home/mariano/wdir/ama/core_nest/pawprint/ctrl/sync-tests.sh
  • Ward system: /home/mariano/wdir/ama/pawprint/ward/