IA Vein
Status: live
Connects soleprint to AI/LLM providers. Uses an OpenAI-compatible API interface, so it works with OpenAI, Anthropic (via proxy), local models, or any compatible endpoint.
What It Does
- Generic chat completion endpoint
- Health check against the configured provider
- Use-case-specific routers mounted as sub-routes
- JSON extraction from AI responses
Configuration
Create a .env file in the vein directory:
```env
AI_API_URL=https://api.openai.com/v1
AI_API_KEY=your-api-key
AI_MODEL=gpt-4o
API_PORT=8005
```
AI_API_URL defaults to the OpenAI endpoint. Point it at any OpenAI-compatible endpoint to use a different provider.
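A sketch of how this configuration can be read with the standard library (the variable names match the .env above; the fallback defaults are taken from the text, the loading code itself is illustrative):

```python
import os

# Illustrative config loading. Variable names come from the .env example;
# the defaults mirror the documented ones (OpenAI URL, port 8005).
AI_API_URL = os.getenv("AI_API_URL", "https://api.openai.com/v1")
AI_API_KEY = os.getenv("AI_API_KEY")  # no default: must be supplied
AI_MODEL = os.getenv("AI_MODEL", "gpt-4o")
API_PORT = int(os.getenv("API_PORT", "8005"))
```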
Endpoints
| Method | Path | Description |
|---|---|---|
| GET | `/ia/health` | Test the API connection; returns provider and model info |
| POST | `/ia/chat` | Generic chat completion |
| * | `/ia/practice/*` | Practice use-case routes |
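A minimal client call against the health endpoint might look like this (the host and port assume the standalone defaults; the header name is from the Authentication section below, and the request is only built here, not sent):

```python
import urllib.request

# Illustrative health-check request against a standalone vein on port 8005.
# The X-AI-Token header carries a per-request key (see Authentication).
req = urllib.request.Request(
    "http://localhost:8005/ia/health",
    headers={"X-AI-Token": "your-api-key"},
)
# with urllib.request.urlopen(req) as resp:  # requires the vein to be running
#     print(resp.read())                     # provider and model info
```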
Authentication
API key resolves in order:
1. `X-AI-Token` HTTP header
2. `.env` file value
This lets soleprint tools pass their own keys per-request.
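That resolution order can be sketched as follows (the header and variable names are from the text above; the function itself is illustrative, not the vein's actual code):

```python
import os
from typing import Optional

def resolve_api_key(headers: dict) -> Optional[str]:
    """Resolve the AI API key: per-request header first, then the .env value."""
    # 1. X-AI-Token HTTP header, if the caller supplied one
    header_key = headers.get("X-AI-Token")
    if header_key:
        return header_key
    # 2. Fall back to the AI_API_KEY value loaded from .env
    return os.getenv("AI_API_KEY")
```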
Chat Request
```json
{
  "messages": [
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hello"}
  ],
  "temperature": 0.7,
  "max_tokens": 1024
}
```
The response includes `content` (the raw text) and `parsed` (the first valid JSON object extracted from the response, if any).
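The `parsed` field can be approximated by scanning the raw text for the first balanced JSON object; a sketch (the function name is illustrative, not the vein's actual implementation, and braces inside string values can throw off the simple depth counter):

```python
import json
from typing import Optional

def extract_first_json(text: str) -> Optional[dict]:
    """Return the first valid JSON object embedded in text, or None."""
    start = text.find("{")
    while start != -1:
        depth = 0
        for end in range(start, len(text)):
            if text[end] == "{":
                depth += 1
            elif text[end] == "}":
                depth -= 1
                if depth == 0:
                    try:
                        # Validate the candidate slice; reject and move on if invalid
                        return json.loads(text[start:end + 1])
                    except json.JSONDecodeError:
                        break
        start = text.find("{", start + 1)
    return None
```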
Use Cases
The IA vein supports use-case-specific routers mounted under /ia/{usecase}/. Each use case has its own prompts, models, and routes.
Currently implemented:
- Practice -- AI-powered practice sessions with structured prompts and formatted output
Running Standalone
```shell
cd soleprint/artery/veins/ia
python run.py
# Runs on port 8005
```
Or through soleprint, where it is mounted under /artery/ia/.