updated docs

commit a80b72a9b1
parent 2e5a304181
2026-04-14 10:32:05 -03:00
67 changed files with 3260 additions and 5005 deletions


@@ -0,0 +1,55 @@
# Google Vein
Status: **building**
Connects soleprint to Google services. OAuth2 flow is implemented. Sheets read access works. Calendar and Drive are next.
---
## What Works
**OAuth2 authentication** -- full flow with authorization URL generation, code exchange, token refresh, and user identity extraction. Supports both identity (OpenID Connect) and API access scopes.
**Google Sheets** -- read-only access to spreadsheet data.
## Configuration
Create a `.env` file in the vein directory:
```env
GOOGLE_CLIENT_ID=your-client-id
GOOGLE_CLIENT_SECRET=your-client-secret
GOOGLE_REDIRECT_URI=http://localhost:12000/artery/google/oauth/callback
API_PORT=8003
```
You need a Google Cloud project with OAuth consent screen configured and credentials created.
## OAuth Scopes
Identity scopes (default):
- `openid`
- `userinfo.email`
- `userinfo.profile`
API scopes (added when needed):
- `spreadsheets.readonly`
- `drive.readonly`
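When building the authorization URL, the identity scopes are combined with any API scopes the request needs. A minimal sketch using the standard OAuth2 authorization-code flow, with the scopes written in their full-URL form; the helper name and defaults are illustrative, not the vein's actual code:

```python
from urllib.parse import urlencode

# Default identity scopes, spelled out as full Google scope URLs.
IDENTITY_SCOPES = [
    "openid",
    "https://www.googleapis.com/auth/userinfo.email",
    "https://www.googleapis.com/auth/userinfo.profile",
]

def build_auth_url(client_id, redirect_uri, extra_scopes=()):
    """Build a Google OAuth2 authorization URL (illustrative sketch)."""
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        # Identity scopes always; API scopes only when the caller asks.
        "scope": " ".join(IDENTITY_SCOPES + list(extra_scopes)),
        "access_type": "offline",  # request a refresh token
    }
    return "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)
```

The user is sent to this URL, and the code returned to the redirect URI is then exchanged for tokens.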
## What's Coming
- Calendar integration
- Drive file browsing and download
- Full Sheets write support
Configuration details will be added as these integrations mature.
## Running Standalone
```bash
cd soleprint/artery/veins/google
python run.py
# Runs on port 8003
```

docs/data/en/artery-ia.md Normal file

@@ -0,0 +1,77 @@
# IA Vein
Status: **live**
Connects soleprint to AI/LLM providers. Uses an OpenAI-compatible API interface, so it works with OpenAI, Anthropic (via proxy), local models, or any compatible endpoint.
---
## What It Does
- Generic chat completion endpoint
- Health check against the configured provider
- Use-case-specific routers mounted as sub-routes
- JSON extraction from AI responses
## Configuration
Create a `.env` file in the vein directory:
```env
AI_API_URL=https://api.openai.com/v1
AI_API_KEY=your-api-key
AI_MODEL=gpt-4o
API_PORT=8005
```
`AI_API_URL` defaults to OpenAI. Point it at any OpenAI-compatible endpoint.
## Endpoints
| Method | Path | Description |
|--------|------|-------------|
| GET | `/ia/health` | Test API connection, returns provider and model info |
| POST | `/ia/chat` | Generic chat completion |
| | `/ia/practice/*` | Practice use-case routes |
## Authentication
API key resolves in order:
1. `X-AI-Token` HTTP header
2. `.env` file value
This lets soleprint tools pass their own keys per-request.
## Chat Request
```json
{
"messages": [
{"role": "system", "content": "You are helpful."},
{"role": "user", "content": "Hello"}
],
"temperature": 0.7,
"max_tokens": 1024
}
```
The response includes `content` (raw text) and `parsed` (first valid JSON object extracted from the response, if any).
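The `parsed` field can be produced by scanning the raw text for the first decodable JSON object. A sketch of one way to do it, not necessarily the vein's implementation:

```python
import json

def extract_first_json(text: str):
    """Return the first valid JSON object found in text, or None."""
    decoder = json.JSONDecoder()
    for i, ch in enumerate(text):
        if ch == "{":
            try:
                # raw_decode tolerates trailing text after the object.
                obj, _end = decoder.raw_decode(text, i)
                return obj
            except json.JSONDecodeError:
                continue  # not a valid object here; keep scanning
    return None
```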
## Use Cases
The IA vein supports use-case-specific routers mounted under `/ia/{usecase}/`. Each use case has its own prompts, models, and routes.
Currently implemented:
- **Practice** -- AI-powered practice sessions with structured prompts and formatted output
## Running Standalone
```bash
cd soleprint/artery/veins/ia
python run.py
# Runs on port 8005
```
Or through soleprint, where it is mounted under `/artery/ia/`.


@@ -0,0 +1,71 @@
# Jira Vein
Status: **live**
Connects soleprint to Jira Cloud. Fetches tickets, sprints, backlogs, and epics. Supports text and JSON output formats.
---
## What It Does
- Query your assigned tickets, project backlogs, and current sprints
- Fetch full ticket details with comments, attachments, and child work items
- Process entire epics: fetch the epic and all children, save to local storage
- Search with raw JQL
- Stream attachment content directly from Jira
## Configuration
Create a `.env` file in the vein directory or set environment variables:
```env
JIRA_URL=https://your-org.atlassian.net
JIRA_EMAIL=you@example.com
JIRA_API_TOKEN=your-api-token
API_PORT=8001
```
`JIRA_EMAIL` and `JIRA_API_TOKEN` are optional in config. They can be provided per-request via HTTP headers instead, allowing one vein instance to serve multiple users.
Generate an API token at [id.atlassian.net/manage-profile/security/api-tokens](https://id.atlassian.net/manage-profile/security/api-tokens).
## Endpoints
| Method | Path | Description |
|--------|------|-------------|
| GET | `/health` | Test connection, returns authenticated user |
| GET | `/mine` | Your assigned open tickets |
| GET | `/backlog?project=KEY` | Project backlog |
| GET | `/sprint?project=KEY` | Current sprint tickets |
| GET | `/ticket/{key}` | Full ticket detail with comments and children |
| POST | `/search` | Raw JQL query |
| POST | `/epic/{key}/process` | Fetch entire epic tree, save to files (streaming) |
| GET | `/epic/{key}/status` | Check if epic has been processed |
| GET | `/attachment/{id}` | Stream attachment content |
All list endpoints support `page`, `page_size`, and `text=true` for plain-text output.
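A client might centralize those shared parameters in a small helper. The defaults here are illustrative, not the vein's actual defaults:

```python
def list_params(page=1, page_size=50, text=False):
    """Build the query parameters shared by the list endpoints."""
    params = {"page": page, "page_size": page_size}
    if text:
        params["text"] = "true"  # request plain-text output
    return params
```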
## Authentication
Credentials resolve in order:
1. HTTP headers (`X-Jira-Email`, `X-Jira-Token`)
2. `.env` file values
## Text Mode
Pass `?text=true` to any query endpoint. Returns formatted plain text instead of JSON. Useful for piping into other tools or AI prompts.
## Epic Processing
The `/epic/{key}/process` endpoint streams progress as NDJSON. It fetches the epic, then each child ticket with a short delay between requests. All tickets are saved as JSON files to `artery/larder/jira_epics/{key}/`.
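A client can consume that NDJSON stream one line at a time. A minimal sketch; the event field names below are illustrative:

```python
import json

def iter_ndjson(lines):
    """Yield one parsed event per non-empty NDJSON line."""
    for line in lines:
        line = line.strip()
        if line:  # skip keep-alive blanks
            yield json.loads(line)
```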
## Running Standalone
```bash
cd soleprint/artery/veins/jira
python run.py
# Runs on port 8001
```
Or through soleprint, where it is mounted under `/artery/jira/`.


@@ -0,0 +1,94 @@
# Shunt Pattern
Shunts are fake connectors for testing. They mirror real API structures but return configurable mock data. No external service needed.
---
## Why Shunts
You need to test payment flows without charging real cards. You need to develop against an API that requires VPN access. You need predictable responses for BDD tests. Shunts solve all of these.
A shunt runs as a FastAPI app, same as a vein. Your code talks to it exactly as it would talk to the real service, so switching between mock and production is just a base-URL change.
## Structure
```
shunts/
└── {name}/
├── __init__.py
├── main.py # FastAPI app
├── run.py # Standalone runner
├── core/
│ └── config.py # Mock behavior settings
├── api/
│ └── routes.py # Mock endpoints
└── templates/ # Optional config UI
```
## Creating a Shunt
Start from the example template:
```bash
cp -r soleprint/artery/shunts/example soleprint/artery/shunts/yourservice
```
The example shunt provides:
- Health check endpoint
- Config UI at root path
- Catch-all GET/POST handlers that return responses from a `responses.json` file
- `SHUNT_DEPOT` environment variable for external response storage
Edit `responses.json` to define your fake responses:
```json
{
"GET /users": [{"id": 1, "name": "Test User"}],
"POST /orders": {"id": 999, "status": "created"}
}
```
For more complex behavior, replace the catch-all handlers with specific route implementations.
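The catch-all lookup reduces to a dictionary keyed by `METHOD /path` strings, matching the `responses.json` shape above. A sketch of the idea, not the example shunt's actual code:

```python
def lookup_mock(responses: dict, method: str, path: str):
    """Resolve a mock response from a responses.json-style mapping."""
    # Keys look like "GET /users"; normalize the method's case.
    return responses.get(f"{method.upper()} {path}")
```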
## MercadoPago Shunt
The MercadoPago shunt is a full mock of the MercadoPago payment API. It demonstrates what a production-grade shunt looks like.
**What it mocks:**
- Checkout Pro preferences (`POST /v1/preferences`)
- Payments (`POST /v1/payments`, `GET /v1/payments/{id}`)
- Merchant orders (`GET /v1/merchant_orders/{id}`)
- OAuth token exchange (`POST /oauth/token`)
- Webhook simulation (`POST /mock/webhook`)
**Testing controls:**
| Endpoint | Purpose |
|----------|---------|
| `POST /mock/config` | Set default payment status, error rate |
| `GET /mock/reset` | Clear all mock data |
| `GET /mock/stats` | Count of stored mock objects |
**Configurable behavior:**
- `default_payment_status` -- force all payments to approved, pending, or rejected
- `error_rate` -- probability (0-1) of random 500 errors
- `enable_random_delays` -- simulate real API latency
- `min_delay_ms` / `max_delay_ms` -- delay range in milliseconds
Data is generated using `datagen`'s MercadoPago generator. Stored in memory. Reset between test runs with `/mock/reset`.
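The `error_rate` behavior comes down to a single random draw against the configured probability. A sketch with an illustrative helper name; in the real shunt a failing draw becomes an HTTP 500:

```python
import random

def pick_outcome(error_rate: float, rng: random.Random) -> str:
    """Decide whether a mock call should fail, given error_rate in [0, 1]."""
    # rng.random() is uniform in [0, 1); compare against the configured rate.
    return "error" if rng.random() < error_rate else "ok"
```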
## Running a Shunt
```bash
cd soleprint/artery/shunts/mercadopago
python run.py
```
Or configure it in your room's shunt registry so soleprint manages it.
## Room-Specific Shunts
Rooms can define their own shunts under `cfg/{room}/artery/shunts/`. These are merged into the build output alongside the core shunts.


@@ -0,0 +1,33 @@
# Slack Vein
Status: **building**
Will connect soleprint to Slack for channel messaging and notifications.
---
## What Exists
The vein structure is in place: config, client, auth, models, routes. Bot token and user token configuration is ready.
```env
SLACK_BOT_TOKEN=xoxb-...
SLACK_USER_TOKEN=xoxp-...
API_PORT=8002
```
## What's Coming
- Send messages to channels
- Read channel history
- Notification integration with other soleprint tools
Configuration details will be added as the integration matures.
## Running Standalone
```bash
cd soleprint/artery/veins/slack
python run.py
# Runs on port 8002
```

docs/data/en/artery.md Normal file

@@ -0,0 +1,110 @@
# Artery
Artery is the connector system. It bridges soleprint to external services -- APIs, messaging platforms, payment processors, AI providers.
**Todo lo vital** -- everything vital flows through here.
---
## Hierarchy
Connectors scale from simple to full:
```
Vein ──────► Pulse ──────► Plexus
│ │ │
│ │ └── Full app: backend + frontend + DB
│ │
│ └── Composed: Vein + Room + Depot
└── Stateless API connector
Shunt ─── Fake connector for testing
```
![Artery Hierarchy](../graphs/artery_hierarchy.svg)
**Vein** -- a stateless wrapper around an external API. Handles auth, exposes endpoints, runs standalone or through soleprint. Each vein follows the same structure: `core/` for the isolated client, `api/` for FastAPI routes, `models/` for data types.
**Shunt** -- a mock vein. Returns configurable fake responses so you can test without hitting real APIs. Shunts mirror real API structures but store everything in memory.
**Pulse** -- a vein composed with a room and a depot. Adds persistent storage and room-specific configuration on top of a raw vein.
**Plexus** -- a full application. Backend, frontend, database. The highest level of the hierarchy.
## Veins
| Vein | Status | Description |
|------|--------|-------------|
| [Jira](artery-jira.md) | live | Issue tracker integration |
| [Google](artery-google.md) | building | OAuth, Sheets, Calendar, Drive |
| [Slack](artery-slack.md) | building | Channel messaging |
| [IA](artery-ia.md) | live | AI/LLM connector |
| Maps | planned | Location services |
| WhatsApp | planned | Messaging |
| GNUCash | planned | Accounting |
| VPN | planned | Network access |
## Shunts
| Shunt | Status | Description |
|-------|--------|-------------|
| [MercadoPago](artery-shunts.md) | ready | Mock payment processing API |
| [Example](artery-shunts.md) | ready | Template for creating new shunts |
See [Shunt Pattern](artery-shunts.md) for how shunts work and how to create one.
## Vein Structure
Every vein follows the same layout:
```
veins/
└── {name}/
├── __init__.py
├── main.py # FastAPI app entry point
├── run.py # Standalone runner
├── .env # Credentials (not committed)
├── core/
│ ├── config.py # Pydantic settings from .env
│ ├── client.py # Isolated API client
│ └── auth.py # Auth handling
├── api/
│ └── routes.py # FastAPI router
└── models/
└── ... # Data models, formatters
```
The `core/` module is isolated. It can run without FastAPI. The `api/` module wraps it in HTTP routes.
## Base Class
All veins extend `BaseVein`:
```python
class BaseVein(ABC):
    name: str  # e.g., 'jira', 'slack'

    @abstractmethod
    def get_client(self, creds): ...           # Create API client

    @abstractmethod
    def health_check(self, creds) -> dict: ... # Test connection

    @abstractmethod
    def create_router(self) -> APIRouter: ...  # HTTP routes
```
## Configuration
Veins load credentials from `.env` files using Pydantic settings. Credentials can also be passed per-request via HTTP headers, so one vein instance can serve multiple users.
Vein configurations are registered in `veins.json`:
```json
{
"items": [
{
"name": "jira",
"slug": "jira",
"title": "Jira",
"status": "live",
"system": "artery"
}
]
}
```
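Because every registry file shares this `{"items": [...]}` shape, a single generic loader can index any of them by name. A hypothetical sketch, not soleprint's actual loader:

```python
import json

def load_registry(path):
    """Index a registry file (veins.json, books.json, ...) by item name."""
    with open(path) as f:
        data = json.load(f)
    return {item["name"]: item for item in data["items"]}
```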


@@ -0,0 +1,86 @@
# Atlas Books
A book is a documentation library served through Atlas. It lives in a directory under `books/` and is served at `/book/{slug}/`.
---
## Two Types
**Standalone book** -- static HTML files. No template, no depot. Atlas serves them directly. The book and its content are the same thing.
**Templated book** -- a template defines the structure, a depot (called a larder) provides the data. Atlas renders a landing page with links to the template definition and the data browser.
## Directory Structure
### Standalone Book
```
books/
└── feature-flow/
├── index-en.html # English version
├── index-es.html # Spanish version
└── CLAUDE.md # Dev notes
```
Routes:
- `/book/feature-flow/` -- language picker
- `/book/feature-flow/en` -- English
- `/book/feature-flow/es` -- Spanish
A standalone book is just HTML. Put it in the directory, register it in `books.json`, add a route in `main.py`.
### Templated Book
```
books/
└── feature-form-samples/
├── template/ # Template definition
│ └── plantilla-flujo.md
├── feature-form/ # Larder (data)
│ ├── .larder # Marker file
│ ├── pet-owner/
│ ├── veterinarian/
│ └── backoffice/
├── index.html # Larder browser
└── detail.html # Detail renderer
```
Routes:
- `/book/feature-form-samples/` -- landing page (links to template + larder)
- `/book/feature-form-samples/template/` -- template definition
- `/book/feature-form-samples/larder/` -- data browser
- `/book/feature-form-samples/larder/{user_type}/{file}` -- specific entry
The `.larder` marker identifies which subdirectory holds the data.
## The Feature Flow Example
Feature Flow is the standalone reference. It documents the BDD standardization pipeline -- how features go from idea to Gherkin to test. Two languages, no template, pure HTML.
## Registration
Books are registered in `books.json`:
```json
{
"items": [
{
"name": "feature-flow",
"slug": "feature-flow",
"title": "Feature Flow Pipeline",
"status": "ready",
"system": "atlas"
}
]
}
```
## Adding a Room-Specific Book
1. Create the book directory in `cfg/<room>/atlas/books/{slug}/`
2. Add the HTML files
3. Register it in the room's `books.json` (in `cfg/<room>/data/`)
4. Add routes in `main.py` if it needs custom handling
5. Build: `python build.py --cfg <room>`
The build merges room books into the output alongside core books. Room books can also override core books by using the same slug.


@@ -0,0 +1,75 @@
# Atlas Templates
A template is a documentation pattern. It defines the structure that content must follow. Templates turn unstructured knowledge into consistent, browsable documentation.
---
## How Templates Work
A template defines fields, layout, and validation rules. Content that follows the template gets rendered through Atlas with consistent formatting and navigation.
Templates live in `soleprint/atlas/templates/` (core) or inside a book's `template/` subdirectory.
Data templates -- the schema files that define structure -- live in `cfg/<room>/data/template/`.
## The Feature Form Example
The feature form template captures user flows in a structured format:
| Field | Purpose |
|-------|---------|
| User Type | Who performs this flow |
| Entry Point | Where the flow starts |
| User Goal | One-sentence objective |
| Steps | Numbered sequence of actions |
| Expected Result | What success looks like |
| Common Problems | Known failure modes |
| Special Cases | Edge cases and exceptions |
| Related Flows | Connected user flows |
| Technical Notes | Developer-facing details |
This template is served at `/book/feature-form-samples/template/` as a styled HTML form with placeholder fields. Non-technical team members can understand the structure without reading code.
## Depots and Larders
A **depot** is data storage connected to a template. In Atlas, the depot pattern is called a **larder** -- a directory that holds content conforming to a template.
The connection works like this:
```
Template (structure) + Larder (data) = Templated Book
```
A larder directory contains a `.larder` marker file and organizes content in subdirectories. For feature forms, the larder groups entries by user type:
```
feature-form/
├── .larder
├── pet-owner/
│ ├── register-pet.html
│ └── book-appointment.html
├── veterinarian/
│ └── review-history.html
└── backoffice/
└── manage-users.html
```
Each file in the larder follows the template's field structure. Atlas renders them through a detail view that reads the content and applies consistent styling.
## The Larder Pattern
Larders enforce a constraint: all content in a larder must match the connected template. This keeps documentation consistent even when multiple people contribute.
The landing page of a templated book links to both:
- The **template** -- so you can see the pattern
- The **larder** -- so you can browse the actual content
This separation means the template can evolve independently from the data. Update the template, and all larder entries get the new rendering.
## Adding a Template
1. Define the template structure (HTML or markdown) in `soleprint/atlas/templates/` or inside a book's `template/` directory
2. Create a larder directory with a `.larder` marker
3. Add content files that follow the template structure
4. Register the book in `books.json` with `template` metadata
5. Add routes in `main.py` for the landing page, template view, and larder browser

docs/data/en/atlas.md Normal file

@@ -0,0 +1,54 @@
# Atlas
Atlas is the documentation system. It turns structured data into browsable documentation.
**Mapeando el recorrido** -- mapping the journey.
---
## Components
Atlas has three components:
| Component | Purpose |
|-----------|---------|
| **Books** | Documentation libraries. Standalone HTML or template-generated. |
| **Templates** | Documentation patterns. Define how content is structured and rendered. |
| **Depots** | Data storage. Connect templates to actual content. |
A book can be standalone -- just HTML files served directly. Or it can be template-backed, where a template defines the structure and a depot provides the data.
## How It Works
Atlas runs as a FastAPI app inside soleprint. It serves books at `/book/{slug}/`, fetches data from the soleprint hub, and renders content using Jinja2 templates or static HTML.
Books are registered in `books.json`. Templates and depots connect through the book's directory structure.
## Room-Specific Books
Core books live in `soleprint/atlas/books/`. Room-specific books live in `cfg/<room>/atlas/books/`.
At build time, room-specific books are merged into the output:
```
soleprint/atlas/books/ # Core books (all rooms)
cfg/amar/atlas/books/ # Amar-specific books
↓ build.py
gen/amar/soleprint/atlas/books/ # Merged output
```
Core books ship with every room. Room books add to or override them. The build copies the core first, then overlays the room-specific content.
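The copy-then-overlay step can be sketched with two `copytree` calls. Illustrative only, not build.py's actual code; a room book with the same slug simply overwrites the core copy:

```python
import shutil
from pathlib import Path

def overlay_books(core: Path, room: Path, out: Path) -> None:
    """Copy core books into out, then overlay room books on top."""
    shutil.copytree(core, out, dirs_exist_ok=True)   # core first
    if room.is_dir():
        shutil.copytree(room, out, dirs_exist_ok=True)  # room overrides
```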
## Data Templates
Template data files live in `cfg/<room>/data/template/`. These define the structure that depot content must follow.
## Current State
| Component | Item | Status |
|-----------|------|--------|
| Book | Feature Flow Pipeline | ready |
| Template | Feature Form | ready |
| Depot | Feature Forms | ready |
See [Books](atlas-books.md) and [Templates](atlas-templates.md) for details on each pattern.


@@ -0,0 +1,54 @@
# Shared Components
Soleprint has two distributable components. They live inside the spr repo and get published into consuming projects.
Management is handled by `ctrl/spr.py`.
## soleprint-ui
Vue component library. Includes GraphRenderer, sidebar components, and shared UI pieces.
Source: `soleprint/common/ui/`
Publish to a consuming project:
```bash
python ctrl/spr.py publish soleprint-ui <target>
```
The target project receives the built files and commits them directly. No npm dependency on spr.
## soleprint-modelgen
Model generator. Reads schema definitions and produces model files.
Source: `soleprint/station/tools/modelgen/`
Has a `pyproject.toml` for local editable install:
```bash
pip install -e soleprint/station/tools/modelgen/
```
## Registry
`registry.json` at the repo root defines all components. Each entry has:
- **name** — component identifier
- **type** — `npm` or `python`
- **source** — path within the repo
- **version** — current version string
## Commands
```bash
python ctrl/spr.py list # Show all components
python ctrl/spr.py sync <component> <target> # One-time copy
python ctrl/spr.py watch <component> <target> # Live sync (ctrl+c to stop)
python ctrl/spr.py publish <component> <target> # Versioned publish
python ctrl/spr.py diff <component> <target> # Show differences
```
`sync` copies files once. `watch` keeps them in sync while you develop. `publish` stamps a version and copies.
The consuming project has no awareness of spr. It just commits whatever lands in the target folder.

docs/data/en/concepts.md Normal file

@@ -0,0 +1,75 @@
# Concepts
The mental model behind soleprint.
---
## Rooms
A room is an isolated configuration. Each room lives in `cfg/<room>/` and contains everything soleprint needs to build and run a specific instance.
```
cfg/
standalone/ # Soleprint only, no managed app
amar/ # Soleprint wrapping the Amar application
myroom/ # Your room
```
Rooms are independent. They don't share state. You can run multiple rooms simultaneously on different ports.
## Systems
Three systems plug into the soleprint core:
- **Artery** — data flow. Connectors to external services. Veins talk to real APIs. Shunts fake them for testing. Pulses compose a vein with a room and depot. Plexus is a full app stack.
- **Atlas** — documentation. Books, templates, depots. Documentation that lives next to what it describes and stays actionable.
- **Station** — execution. Tools like tester, datagen, modelgen, graphgen. Monitors like databrowse. Desks for task orchestration.
## The cfg to gen Flow
`cfg/` is hand-authored. `gen/` is machine-built. Never edit `gen/` directly.
```
cfg/myroom/ ──┐
├── build.py ──► gen/myroom/
soleprint/ ──┘
```
`build.py` merges the core framework (`soleprint/`) with your room config (`cfg/myroom/`). Room-specific files override or extend core defaults. The output in `gen/myroom/` is what actually runs.
![Build Flow](../graphs/cfg_gen_flow.svg)
## Layers
Room initialization uses layers 0 through 6. Each layer adds a capability:
| Layer | What |
|-------|------|
| 0 | Config — `config.json`, branding, terminology |
| 1 | Docker — container setup, compose files |
| 2 | Managed app — the application soleprint wraps |
| 3 | Link — bridge adapters to the managed app's database |
| 4 | Scripts — build and run scripts in `ctrl/` |
| 5 | Systems — artery, atlas, station configs |
| 6 | Nginx — reverse proxy, sidebar injection |
![Room Layers](../graphs/room_layers.svg)
## Standalone vs Managed
**Standalone** — soleprint runs by itself. No managed app. Useful for tooling, documentation, or connector development.
**Managed** — soleprint wraps an existing application. Nginx sits in front of the app, injecting the sidebar into every HTML response. Link adapters bridge soleprint into the app's database.
## The Wrapping Concept
Soleprint never touches your app's source code. The injection works like this:
1. Nginx receives the app's HTML response
2. `sub_filter` injects soleprint's CSS and JS before `</head>`
3. The sidebar renders on top of the app's UI
4. Link adapters connect to the app's database for data browsing and test data generation
The app doesn't know soleprint exists. No SDK. No middleware. No build step changes.
> Your app runs exactly as it would without soleprint. The sidebar is a layer on top.
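Steps 1-3 can be sketched as an nginx fragment. The `sub_filter` directives are standard nginx (`ngx_http_sub_module`); the upstream name, port, and asset paths are placeholders, not soleprint's actual config:

```nginx
location / {
    proxy_pass http://app:8080;          # illustrative upstream
    proxy_set_header Host $host;

    # Ask the app for uncompressed HTML so sub_filter can rewrite it.
    proxy_set_header Accept-Encoding "";

    # Inject sidebar assets just before the closing </head>.
    sub_filter '</head>'
        '<link rel="stylesheet" href="/spr/sidebar.css"><script src="/spr/sidebar.js"></script></head>';
    sub_filter_once on;
}
```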


@@ -0,0 +1,63 @@
# Deployment
## Docker Compose
The default. Every built room gets its own compose stack.
```bash
cd gen/<room>/soleprint
docker compose up
```
Environment variables in `.env` control the stack:
- `DEPLOYMENT_NAME` — identifies the deployment
- `NETWORK_NAME` — Docker network name
- `SOLEPRINT_PORT` — port to expose
Each room runs independently. No shared state between rooms.
## Local Development with Caddy
For multi-room local dev, Caddy acts as a reverse proxy. It routes by hostname.
Caddyfile location: `~/wdir/ppl/local/Caddyfile`
Add entries to `/etc/hosts` for each room:
```
127.0.0.1 myroom.local.ar
127.0.0.1 myroom.spr.local.ar
```
Routing pattern:
- `myroom.local.ar` — app direct
- `myroom.spr.local.ar` — app with soleprint sidebar
Caddy reads the hostname and proxies to the right container.
## AWS Deployment
Production runs on EC2. Domain: `soleprint.mcrn.ar`.
Deploy standalone with:
```bash
./ctrl/deploy.sh
```
Services sit on a shared Docker network. Nginx handles routing by subdomain.
## The Gateway Pattern
All rooms share one Nginx entry point. One server, one IP, many rooms.
How it works:
1. Wildcard DNS: `*.mcrn.ar` points to the EC2 instance.
2. Nginx receives the request and reads the hostname.
3. Hostname maps to a container on the shared Docker network.
4. Nginx proxies to that container.
No per-room DNS config. Add a room, add an Nginx block, reload. The wildcard handles the rest.

docs/data/en/intro.md Normal file

@@ -0,0 +1,52 @@
# Introduction
Soleprint is a development workflow platform. It wraps existing applications without touching their source code.
You get a sidebar injected into your app, connectors to external services, a test runner, data generators, and documentation tools. All running alongside your app, not inside it.
**Cada paso deja huella** — each step leaves a mark.
---
## Why It Exists
Born from freelance friction. Testing required PRs. Documentation was scattered across wikis and READMEs. Quick API connectors took too long to set up. Every project reinvented the same plumbing.
Soleprint is the control room that sits next to your app. You build once, reuse everywhere.
## How It Works
Soleprint runs as a Docker composition alongside your application. Nginx intercepts the app's HTML response and injects a sidebar via `sub_filter`. The sidebar connects to soleprint's backend, which bridges into the app's database through link adapters.
Your app stays untouched. No SDK. No middleware. No source changes.
## The Four Systems
| System | Purpose |
|--------|---------|
| **Artery** | Connectors to external services — Jira, Slack, Google, or your own APIs |
| **Atlas** | Actionable documentation — books, templates, depots |
| **Station** | Tools and execution — tester, datagen, modelgen, graphgen, databrowse |
| **Soleprint** | Core coordinator — ties the systems together |
![System Overview](../graphs/system_overview.svg)
## Artery Hierarchy
Connectors scale from simple to full:
- **Vein** — stateless API connector
- **Shunt** — fake connector for testing
- **Pulse** — composed: vein + room + depot
- **Plexus** — full app: backend + frontend + database
## What You Get
- Sidebar injection into any web app
- BDD test runner with Gherkin support
- Test data generation with faker
- Model generation from schema
- Interactive data browser
- Navigable model graphs
- Reusable API connectors with mock equivalents
- Documentation that lives next to the code it describes

docs/data/en/managed.md Normal file

@@ -0,0 +1,40 @@
# Managed Room
A managed room wraps an existing application. Soleprint runs alongside it, providing tools and a sidebar without touching the app's source code.
## Structure
```
gen/myroom/
myapp/ # The app (cloned repos + Docker)
link/ # DB bridge (FastAPI adapter)
soleprint/ # Soleprint instance
```
## Wrapping mechanism
Nginx `sub_filter` injects sidebar CSS and JS just before the app's closing `</head>` tag. The app serves normally — soleprint attaches from the outside.
![Wrapping](../graphs/wrapping.svg)
## config.json
A managed room adds these fields to `config.json`:
- `managed.name` — the app name
- `managed.repos.backend` — backend repository
- `managed.repos.frontend` — frontend repository
## Sidebar
Loads from `/spr/sidebar.js` and `/spr/sidebar.css`, configured via `/api/sidebar/config`.
The sidebar is generic at runtime. All customization comes from `config.json`.
## Hosts
Add to `/etc/hosts`:
```
127.0.0.1 myroom.spr.local.ar myroom.local.ar
```


@@ -0,0 +1,87 @@
# Quick Start
Zero to running in 2 minutes.
## Prerequisites
- Python 3.12+
- Docker
## Setup
**1. Clone the repo**
```bash
git clone <repo-url> spr
cd spr
```
**2. Initialize a room**
```bash
python -m init.cli myroom
```
The interactive wizard walks you through layers 0-6. Accept the defaults for a standalone setup.
**3. Build**
```bash
python build.py --cfg myroom
```
This merges core framework files with your room config into `gen/myroom/`.
**4. Run**
```bash
cd gen/myroom && docker compose up
```
**5. Open**
Visit [http://localhost:12000](http://localhost:12000).
---
## Alternative: Browser Wizard
Prefer a UI? Run the web initializer:
```bash
python -m init.web
```
Open [http://localhost:9000](http://localhost:9000) and configure your room from the browser.
## Clone an Existing Room
Start from a sample configuration:
```bash
python -m init.cli myroom --from sample
```
This copies the sample room's config as your starting point.
## Rebuild
After changing anything in `cfg/myroom/`, rebuild:
```bash
python build.py --cfg myroom
```
Then restart the containers.
## Build All Rooms
```bash
python build.py --all
```
## Ports
| Service | Port |
|---------|------|
| Soleprint | 12000 |


@@ -0,0 +1,42 @@
# Room Setup
Two ways to create a room: CLI wizard or web wizard.
```bash
# CLI wizard
python -m init.cli myroom
# Web wizard (opens browser on :9000)
python -m init.web
# Clone from existing room
python -m init.cli myroom --from sample
```
## Layers
Each layer is optional beyond layer 0.
| Layer | What | Files |
|-------|------|-------|
| 0 | Config + Data | `config.json`, `data/*.json` |
| 1 | Docker | `soleprint/docker-compose.yml`, `.env` |
| 2 | Managed App | `docker-compose.yml`, Dockerfiles, `.env` |
| 3 | Link | `link/main.py`, `adapters/`, Dockerfile |
| 4 | Scripts | `ctrl/start.sh`, `stop.sh`, `status.sh`, `logs.sh` |
| 5 | Systems | tester environments, test scaffolds |
| 6 | Nginx | `nginx/local.conf`, `docker-compose.nginx.yml` |
![Room Layers](../graphs/room_layers.svg)
The wizard adds layers incrementally. Layer 0 is always created. Each subsequent layer builds on the previous ones but none are required.
## Build
After the wizard finishes, build the room:
```bash
python build.py --cfg myroom
```
Output goes to `gen/myroom/`. The build merges room-specific configs from `cfg/myroom/` into the base framework.


@@ -0,0 +1,46 @@
# Standalone Room
A standalone room is soleprint by itself — no managed app. Good for documentation, tool development, or running the platform independently.
## Structure
```
cfg/standalone/
config.json
data/*.json # 12 registry files
soleprint/
docker-compose.yml
```
Build output is flat — all systems merged into one directory:
```
gen/standalone/
```
## config.json
Three sections:
- **framework** — name, port
- **systems** — artery, atlas, station toggles and settings
- **components** — the naming scheme used across the room
## Data registries
The `data/*.json` files are registries: `veins.json`, `tools.json`, `books.json`, etc.
Each follows the same shape:
```json
{
"items": [
{
"name": "...",
"slug": "...",
"title": "...",
"status": "..."
}
]
}
```


@@ -0,0 +1,80 @@
# Databrowse
SQL data browser. Connects to a database, reads its schema from config, and provides a navigation and query interface.
**Status:** ready
---
## What It Does
Databrowse is a monitor, not a tool. It runs continuously and lets you browse database contents through a web interface. Navigate tables, inspect rows, run saved queries.
The core is generic SQL. Room configuration defines the schema and saved views.
## Structure
```
soleprint/station/monitors/databrowse/ # Core (generic SQL browser)
cfg/<room>/soleprint/station/monitors/databrowse/depot/ # Room config
```
## Configuration
Room-specific config lives in the depot directory.
### schema.json
Defines tables, fields, and relationships:
```json
{
"tables": [
{
"name": "users",
"fields": [
{"name": "id", "type": "integer", "primary": true},
{"name": "email", "type": "string"},
{"name": "created_at", "type": "datetime"}
],
"relationships": [
{"field": "id", "target": "orders.user_id", "type": "one_to_many"}
]
}
]
}
```
### views.json
Defines saved queries:
```json
{
"views": [
{
"name": "Recent Users",
"query": "SELECT * FROM users ORDER BY created_at DESC LIMIT 50"
},
{
"name": "Active Orders",
"query": "SELECT * FROM orders WHERE status = 'active'"
}
]
}
```
## Separation
Core provides:
- SQL connection handling
- Table navigation UI
- Query execution engine
- Relationship traversal
Room provides:
- `depot/schema.json` -- table definitions
- `depot/views.json` -- saved queries
- Database connection credentials
The core knows nothing about your domain. The depot tells it what to show.
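The saved-view mechanism reduces to a name lookup plus a query run. A minimal sketch using `sqlite3` -- the function name and signature are illustrative; the real engine also handles connections, drivers, and pagination:

```python
# Illustrative sketch of executing one entry from views.json.
import sqlite3

def run_view(conn: sqlite3.Connection, views: dict, name: str) -> list:
    """Find a saved query by name and execute it."""
    query = next(v["query"] for v in views["views"] if v["name"] == name)
    return conn.execute(query).fetchall()
```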

# Datagen
Test data generator using faker. Produces realistic, domain-specific data for testing and development.
**Status:** live
---
## What It Does
Datagen generates fake but realistic data. Names, emails, addresses, transactions -- whatever your domain needs. It uses Python's faker library under the hood.
Core datagen is a placeholder. The real work happens in room-specific generators.
## Structure
```
soleprint/station/tools/datagen/ # Core (base classes, placeholder)
cfg/<room>/soleprint/station/tools/datagen/ # Room-specific generators
```
After build, both merge into `gen/<room>/station/tools/datagen/`.
## Pattern
Rooms subclass a base generator and provide domain-specific data factories:
```python
from station.tools.datagen.base import BaseGenerator


class AmarDataGenerator(BaseGenerator):
    def generate_customers(self, count=10):
        return [self.fake_customer() for _ in range(count)]

    def fake_customer(self):
        return {
            "name": self.faker.name(),
            "email": self.faker.email(),
            "phone": self.faker.phone_number(),
        }
```
Each room defines what data it needs. Core provides the faker instance and base class. Rooms provide the factories.
## Room Configuration
Room generators live in `cfg/<room>/soleprint/station/tools/datagen/`. They are fully self-contained -- they define their own models, factories, and output formats.
The core module provides:
- Base generator class with faker instance
- CLI entry point
- Output formatting (JSON, CSV)
Rooms provide:
- Domain-specific generator subclasses
- Field definitions and relationships
- Volume and distribution configuration
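The JSON/CSV output formatting the core provides can be sketched with the stdlib alone -- function names here are illustrative, and the real module also carries the faker instance:

```python
# Hypothetical output formatters (illustrative, not datagen's own code).
import csv
import io
import json

def to_json(records: list) -> str:
    """Pretty-print generated records as JSON."""
    return json.dumps(records, indent=2, default=str)

def to_csv(records: list) -> str:
    """Render records as CSV, taking headers from the first record."""
    if not records:
        return ""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(records[0]))
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()
```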

# Graphgen
Interactive database schema visualization. Supabase-style graph of your data model, rendered in the browser.
**Status:** planned
---
## What It Will Do
Graphgen will take a schema definition and render it as an interactive, navigable graph. Tables as nodes, relationships as edges. Click to explore, zoom to navigate.
Think Supabase's schema visualizer, but fed by soleprint's modelgen extractors and rendered with soleprint-ui's GraphRenderer.
## Architecture
```
Extract (modelgen) ──► Serve (station API) ──► Render (GraphRenderer)
```
**Extract** -- modelgen extractors read the codebase (Django, SQLAlchemy, Prisma) and produce a normalized schema.
**Serve** -- station exposes the schema as a JSON API endpoint.
**Render** -- GraphRenderer (Vue Flow) draws the interactive graph in the browser.
## Dependencies
- **modelgen extractors** -- must be functional before graphgen can extract schemas
- **soleprint-ui GraphRenderer** -- Vue Flow-based component for rendering
Graphgen depends on modelgen extractors being complete. That is the blocking dependency.
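The normalized schema the station API would serve has not been specified yet; one plausible node/edge shape, purely speculative:

```json
{
  "nodes": [
    {"id": "users", "fields": [{"name": "id", "type": "integer", "primary": true}]},
    {"id": "orders", "fields": [{"name": "user_id", "type": "integer"}]}
  ],
  "edges": [
    {"source": "users", "target": "orders", "type": "one_to_many"}
  ]
}
```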
## Location
```
soleprint/station/tools/graphgen/ # Core (not yet built)
```

# Modelgen
Generates platform-specific models from JSON Schema. Reads schema once, writes models for multiple targets.
**Status:** dev
---
## What It Does
Modelgen takes a JSON Schema definition and produces model code for different platforms:
- **Pydantic** -- Python data validation models
- **Django ORM** -- Django model classes
- **Prisma** -- Prisma schema definitions
One schema, multiple outputs.
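The generation step can be pictured as a small schema-to-source translator. A hedged sketch for the Pydantic target -- the type mapping and function name are assumptions, not modelgen's actual implementation:

```python
# Illustrative JSON Schema -> Pydantic translator (assumed type mapping).
TYPE_MAP = {"string": "str", "integer": "int", "number": "float", "boolean": "bool"}

def to_pydantic(name: str, schema: dict) -> str:
    """Emit Pydantic model source for one JSON Schema object."""
    required = set(schema.get("required", []))
    lines = ["from pydantic import BaseModel", "", f"class {name}(BaseModel):"]
    for field, spec in schema["properties"].items():
        py_type = TYPE_MAP.get(spec.get("type", "string"), "str")
        if field not in required:
            py_type = f"{py_type} | None = None"
        lines.append(f"    {field}: {py_type}")
    return "\n".join(lines)
```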
## Extractors
Modelgen also works in reverse. Extractors read existing codebases and produce a normalized schema representation:
- **Django extractor** -- reads Django model files
- **SQLAlchemy extractor** -- reads SQLAlchemy model files
- **Prisma extractor** -- reads Prisma schema files
Extractors feed into graphgen for visualization.
## Output
Generated models are written to `gen/<room>/models/`.
```
gen/<room>/models/
├── pydantic/
├── django/
└── prisma/
```
## CLI
```bash
python -m modelgen
```
Reads from `schema.json` (the project source of truth) and writes to the configured output directory.
## Shared Distribution
Modelgen is also distributed as a shared component via `ctrl/spr.py`. This allows other projects to use model generation without running full soleprint.
## Schema Source
The source of truth is `schema.json` at the project root. All model generation starts from this file. Room-specific schema extensions live in `cfg/<room>/models/`.

View File

@@ -0,0 +1,99 @@
# Tester
HTTP contract test runner with web UI and CLI. Tests API endpoints against configurable environments with support for multiple auth types.
**Status:** live
---
## What It Does
Tester runs HTTP requests against real endpoints and validates responses. It is not a unit test runner. It tests contracts -- does this endpoint return what the spec says it should?
Tests are written as Python classes extending `ContractTestCase`. They run via pytest or through the soleprint web UI.
## Configuration
### Environments
Defined in `environments.json`:
```json
{
"environments": [
{
"name": "local",
"url": "http://localhost:8000",
"auth_type": "none"
},
{
"name": "staging",
"url": "https://staging.example.com",
"auth_type": "bearer",
"token_endpoint": "/api/token/"
},
{
"name": "external",
"url": "https://api.example.com",
"auth_type": "api-key"
}
]
}
```
Auth types: `bearer`, `api-key`, `none`.
## Base Class
All tests extend `ContractTestCase`:
```python
from station.tools.tester.base import ContractTestCase


class TestUserEndpoints(ContractTestCase):
    AUTH_TYPE = "bearer"
    TOKEN_ENDPOINT = "/api/token/"

    def test_list_users(self):
        response = self.get("/api/users/")
        self.assertEqual(response.status_code, 200)
```
`ContractTestCase` handles auth negotiation, environment selection, and base URL resolution.
## Test Discovery
Tests live in two places:
- **Core tests:** `soleprint/station/tools/tester/tests/` -- shared across all rooms
- **Room tests:** `cfg/<room>/soleprint/station/tools/tester/tests/` -- room-specific
Room tests import from core:
```python
from station.tools.tester.base import ContractTestCase
```
After build, both are merged into `gen/<room>/station/tools/tester/tests/`.
## Helpers
Built-in test helpers:
- `unique_email()` -- generates a unique email address
- `unique_id()` -- generates a unique identifier
These are generic helpers available to all rooms. Room-specific helpers stay in `cfg/<room>/` and are not part of core.
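Stdlib implementations of these helpers might look like this -- hypothetical; the real ones may differ:

```python
# Hypothetical implementations of the built-in tester helpers.
import uuid

def unique_id() -> str:
    """Return an identifier that will not collide across test runs."""
    return uuid.uuid4().hex

def unique_email(domain: str = "example.com") -> str:
    """Return an email address unique to this call."""
    return f"test-{unique_id()[:12]}@{domain}"
```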
## Running
**Web UI:** navigate to `/station/tools/tester/` in soleprint.
**CLI:**
```bash
cd gen/<room>
pytest station/tools/tester/tests/
```
Environment selection is handled via config or CLI flags.

# Station
Station is the execution system. It runs tools, monitors environments, and bundles them into composable desks.
**Centro de control** -- the control center.
---
## Hierarchy
Station components scale from single-purpose to composed:
```
Tool ──────► Desk
│            │
│            └── Composed: Cabinet + Room + Depots
│
└── Standalone executable (tester, datagen, modelgen, graphgen)

Monitor ─── Long-running observer (databrowse)
```
**Tool** -- a standalone executable that does one job. Generates data, runs tests, produces models. Each tool works independently and can be invoked via web UI or CLI.
**Monitor** -- a long-running observer. Connects to a data source and provides a browsing or query interface. Always on, not invoked per-task.
**Desk** -- a composed execution bundle. Combines a cabinet (tool configuration), a room, and depots (data sources) into a ready-to-run environment.
## Tools
| Tool | Status | Description |
|------|--------|-------------|
| [Tester](station-tester.md) | live | HTTP contract test runner, multi-environment, BDD/Gherkin |
| [Datagen](station-datagen.md) | live | Test data generator using faker |
| [Modelgen](station-modelgen.md) | dev | Generate models from JSON Schema (Pydantic, Django, Prisma) |
| [Graphgen](station-graphgen.md) | planned | Supabase-style interactive DB schema visualization |
## Monitors
| Monitor | Status | Description |
|---------|--------|-------------|
| [Databrowse](station-databrowse.md) | ready | SQL data browser |
## Structure
```
station/
├── tools/
│ ├── tester/ # HTTP contract testing
│ ├── datagen/ # Test data generation
│ ├── modelgen/ # Model generation from schema
│ └── graphgen/ # Schema visualization (planned)
├── monitors/
│ └── databrowse/ # SQL data browser
└── desks/ # Composed execution bundles
```
## Room Configuration
Tools and monitors follow the same split as all soleprint systems. Core logic lives in `soleprint/station/`. Room-specific configuration lives in `cfg/<room>/soleprint/station/`.
```
cfg/<room>/soleprint/station/
├── tools/
│ ├── tester/tests/ # Room-specific test cases
│ └── datagen/ # Room-specific data generators
└── monitors/
└── databrowse/depot/ # Room-specific schema and views
```
The build merges both into `gen/<room>/`.