updated docs

2026-04-14 10:32:05 -03:00
parent 2e5a304181
commit a80b72a9b1
67 changed files with 3260 additions and 5005 deletions

docs/data/en/artery-ia.md Normal file

@@ -0,0 +1,77 @@
# IA Vein
Status: **live**
Connects soleprint to AI/LLM providers. Uses an OpenAI-compatible API interface, so it works with OpenAI, Anthropic (via proxy), local models, or any compatible endpoint.

---
## What It Does
- Generic chat completion endpoint
- Health check against the configured provider
- Use-case-specific routers mounted as sub-routes
- JSON extraction from AI responses
## Configuration
Create a `.env` file in the vein directory:
```env
AI_API_URL=https://api.openai.com/v1
AI_API_KEY=your-api-key
AI_MODEL=gpt-4o
API_PORT=8005
```
`AI_API_URL` defaults to OpenAI. Point it at any OpenAI-compatible endpoint.
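For illustration, the `.env` format above is plain `KEY=VALUE` lines. The sketch below is a minimal stdlib parser for that format; the vein itself presumably loads the file with a library such as python-dotenv, so treat `parse_env` as a hypothetical helper, not the actual loader:

```python
def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments.
    Sketch only; does not handle quoting or `export` prefixes."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config
```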
## Endpoints
| Method | Path | Description |
|--------|------|-------------|
| GET | `/ia/health` | Test API connection, returns provider and model info |
| POST | `/ia/chat` | Generic chat completion |
| various | `/ia/practice/*` | Practice use-case sub-routes |
## Authentication
API key resolves in order:
1. `X-AI-Token` HTTP header
2. `.env` file value
This lets soleprint tools supply their own API key per request.
## Chat Request
```json
{
"messages": [
{"role": "system", "content": "You are helpful."},
{"role": "user", "content": "Hello"}
],
"temperature": 0.7,
"max_tokens": 1024
}
```
The response includes `content` (raw text) and `parsed` (first valid JSON object extracted from the response, if any).
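The `parsed` field behaves roughly like the sketch below, which scans the raw text for the first decodable JSON object. This is an assumption about the extraction logic for illustration, not the vein's actual code:

```python
import json

def extract_first_json(text: str):
    """Return the first valid JSON object found in `text`, or None.
    Sketch of the documented `parsed` behavior."""
    decoder = json.JSONDecoder()
    for i, ch in enumerate(text):
        if ch == "{":
            try:
                obj, _ = decoder.raw_decode(text, i)
                return obj
            except json.JSONDecodeError:
                continue  # not a valid object here; keep scanning
    return None
```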
## Use Cases
The IA vein supports use-case-specific routers mounted under `/ia/{usecase}/`. Each use case has its own prompts, models, and routes.
Currently implemented:
- **Practice** -- AI-powered practice sessions with structured prompts and formatted output
## Running Standalone
```bash
cd soleprint/artery/veins/ia
python run.py
# Runs on port 8005
```
Or through soleprint, where it is mounted under `/artery/ia/`.
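With the vein running standalone, a request can be sent using only the standard library. In this sketch, `BASE` and the `chat`/`build_chat_body` helpers are assumptions made for the example; only the `/ia/chat` path, the `X-AI-Token` header, and the request fields come from this document:

```python
import json
import urllib.request

BASE = "http://localhost:8005"  # assumed standalone address (API_PORT=8005)

def build_chat_body(messages, temperature=0.7, max_tokens=1024):
    """Serialize a chat request matching the shape documented above."""
    return json.dumps({
        "messages": messages,
        "temperature": temperature,
        "max_tokens": max_tokens,
    })

def chat(messages, token=None, **kwargs):
    """POST to /ia/chat; `token` optionally overrides the .env key via X-AI-Token."""
    headers = {"Content-Type": "application/json"}
    if token:
        headers["X-AI-Token"] = token
    req = urllib.request.Request(
        f"{BASE}/ia/chat",
        data=build_chat_body(messages, **kwargs).encode(),
        headers=headers,
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # response carries "content" and "parsed"
```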