# Reason Service
AI reasoning and LLM integration for Egregore.
## Purpose
Processes conversations with Claude, handles tool execution, and manages the AI reasoning loop. This is where the "thinking" happens.
## Endpoints

| Endpoint | Method | Description |
|---|---|---|
| `/process` | POST | Process a conversation with the tool loop |
| `/tool` | POST | Execute a single tool directly |
| `/tools` | GET | List available tool definitions |
| `/prompt` | GET | Get the current system prompt |
| `/health` | GET | Health check |
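A `/process` call might be shaped like the sketch below. The payload schema here is an assumption (an Anthropic-style `messages` list); check `main.py` for the actual contract, and the URL and port are placeholders.

```python
import json

# Hypothetical /process request payload; the real schema is defined by
# main.py and may differ.
payload = {
    "messages": [
        {"role": "user", "content": "What is 2 + 2?"}
    ]
}

# Serialize as the request body you would POST to the service, e.g. with
# requests.post("http://localhost:8000/process", json=payload).
body = json.dumps(payload)
print(body)
```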
## Configuration

Environment variables (from `~/.env`):

| Variable | Description | Default |
|---|---|---|
| `ANTHROPIC_API_KEY` | Claude API key | Required |
| `RATE_LIMIT` | Requests per minute | `10/minute` |
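One way the service might read this configuration is sketched below. The variable names come from the table above; the actual loading logic in `main.py` may differ (for instance, it may load `~/.env` with python-dotenv first).

```python
import os

def load_config(env=os.environ):
    """Read service configuration from an environment mapping.

    Variable names follow the table above; this is a sketch, not the
    service's actual loader.
    """
    api_key = env.get("ANTHROPIC_API_KEY")  # required; no default
    rate_limit = env.get("RATE_LIMIT", "10/minute")  # default from the table
    if api_key is None:
        raise RuntimeError("ANTHROPIC_API_KEY is required")
    return {"api_key": api_key, "rate_limit": rate_limit}

# Example with an explicit mapping instead of the real environment:
cfg = load_config({"ANTHROPIC_API_KEY": "sk-test"})
print(cfg["rate_limit"])  # falls back to the default: 10/minute
```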
## Running

```bash
# Activate the virtualenv
source ~/.venv/bin/activate

# Run directly
python main.py

# Or via systemd
sudo systemctl start reason
sudo systemctl status reason
```
## Rate Limiting

Built-in rate limiting (10 requests per minute per IP) protects against API abuse. Configure the limit via the `RATE_LIMIT` environment variable.
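The idea behind per-IP limiting can be illustrated with a minimal sliding-window counter. This is a simplified sketch of the concept only, not the service's actual slowapi-based implementation; the class and parameter names are made up for illustration.

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Toy per-IP rate limiter: allow at most `limit` requests per
    `window` seconds. Illustrative only; the service uses slowapi."""

    def __init__(self, limit=10, window=60.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        # Evict timestamps that have fallen out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit: reject
        q.append(now)
        return True

limiter = SlidingWindowLimiter(limit=10, window=60.0)
# 11 requests from one IP inside a single 60-second window:
results = [limiter.allow("203.0.113.5", now=float(i)) for i in range(11)]
print(results.count(True))  # first 10 allowed, the 11th rejected
```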
## Tool System

The service implements a tool-use loop:

1. Receive conversation history
2. Call the Claude API
3. If Claude requests tool use, execute the tools
4. Feed the results back to Claude
5. Repeat until Claude produces a final response

Tools are defined in `tools.py` and expose system operations to the AI.
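The loop above can be sketched as follows. `call_model` stands in for the real Claude API call and `TOOLS` for the definitions in `tools.py`; both are hypothetical simplifications, and the message/response shapes are assumptions rather than the service's actual types.

```python
# Placeholder tool registry standing in for tools.py.
TOOLS = {
    "add": lambda a, b: a + b,
}

def call_model(messages):
    """Stub model: requests the `add` tool once, then gives a final answer.
    The real service calls the Claude API here."""
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_use", "name": "add", "input": {"a": 2, "b": 3}}
    return {"type": "text", "text": "The sum is 5."}

def process(messages):
    """Run the tool loop until the model produces a final text response."""
    while True:
        reply = call_model(messages)
        if reply["type"] != "tool_use":
            return reply["text"]  # final response: exit the loop
        result = TOOLS[reply["name"]](**reply["input"])  # execute the tool
        messages.append({"role": "tool", "content": str(result)})  # feed back

print(process([{"role": "user", "content": "What is 2 + 3?"}]))
```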
## Dependencies

- FastAPI
- `anthropic` (Claude API client)
- `slowapi` (rate limiting)