# Reason Service

AI reasoning and LLM integration for Egregore.

## Purpose

Processes conversations with Claude, handles tool execution, and manages the AI reasoning loop. This is where the "thinking" happens.

## Endpoints

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/process` | POST | Process a conversation with the tool loop |
| `/tool` | POST | Execute a single tool directly |
| `/tools` | GET | List available tool definitions |
| `/prompt` | GET | Get the current system prompt |
| `/health` | GET | Health check |

## Configuration

Environment variables (from `~/.env`):

| Variable | Description | Default |
|----------|-------------|---------|
| `ANTHROPIC_API_KEY` | Claude API key | Required |
| `RATE_LIMIT` | Requests per minute | `10/minute` |

## Running

```bash
# Activate venv
source ~/.venv/bin/activate

# Run directly
python main.py

# Or via systemd
sudo systemctl start reason
sudo systemctl status reason
```

## Rate Limiting

Built-in rate limiting (10 requests per minute per IP) protects against API abuse. Configure it via the `RATE_LIMIT` environment variable.

## Tool System

The service supports a tool-loop pattern:

1. Receive conversation history
2. Call the Claude API
3. If Claude requests tool use, execute the tools
4. Feed the results back to Claude
5. Repeat until Claude produces a final response

Tools are defined in `tools.py` and include system operations available to the AI.

## Dependencies

- FastAPI
- anthropic (Claude API client)
- slowapi (rate limiting)
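The per-IP rate limit is enforced by slowapi in the service itself; the idea behind a `10/minute` limit can be illustrated with a minimal sliding-window counter. This is a sketch of the concept only, not slowapi's implementation, and the class and key names are invented for the example:

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds per key
    (here, per client IP) -- the idea behind a 10/minute limit."""

    def __init__(self, limit=10, window=60.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # key -> timestamps of recent requests

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[key]
        while q and now - q[0] >= self.window:  # drop expired timestamps
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit -> the service would answer HTTP 429
        q.append(now)
        return True

limiter = SlidingWindowLimiter(limit=10, window=60.0)
# Twelve requests from one IP within the same 60 s window:
results = [limiter.allow("203.0.113.7", now=float(i)) for i in range(12)]
# first 10 allowed, the last 2 rejected
```

In the real service, slowapi keys requests by remote address and parses limit strings such as `10/minute`, which is why `RATE_LIMIT` uses that format.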
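The tool definitions served by `/tools` live in `tools.py`, which is not shown here. For orientation, Claude's Messages API expects each tool as a dict with a `name`, a `description`, and a JSON-Schema `input_schema`; the `read_file` tool below is a hypothetical example in that shape, not one of the service's actual tools:

```python
# Hypothetical tool definition in the shape the Claude Messages API
# expects; the real tools in tools.py are not shown in this README.
READ_FILE_TOOL = {
    "name": "read_file",
    "description": "Read a text file from the host and return its contents.",
    "input_schema": {
        "type": "object",
        "properties": {
            "path": {"type": "string", "description": "Absolute file path"},
        },
        "required": ["path"],
    },
}
```

A list of such dicts is passed as the `tools` parameter on each API call, and Claude refers to a tool by its `name` when it requests a tool use.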
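The five-step tool loop can be sketched generically. This is a minimal sketch, not the service's actual code: `run_tool_loop`, the message shapes, and the stub model standing in for the Claude API are all illustrative assumptions.

```python
def run_tool_loop(call_model, tools, messages, max_turns=10):
    """Repeatedly call the model, executing any tools it requests,
    until it produces a final text response (steps 2-5 of the loop)."""
    for _ in range(max_turns):
        reply = call_model(messages)               # step 2: call the model
        if reply["stop_reason"] != "tool_use":     # step 5: final response
            return reply["text"]
        # step 3: execute each requested tool from the registry
        results = [
            {"tool": t["name"], "result": tools[t["name"]](**t["input"])}
            for t in reply["tool_calls"]
        ]
        # step 4: feed the results back as a new message
        messages = messages + [{"role": "user", "tool_results": results}]
    raise RuntimeError("tool loop did not converge")

# Stub model: requests one tool call, then answers once results arrive.
def fake_model(messages):
    if not any("tool_results" in m for m in messages):
        return {"stop_reason": "tool_use",
                "tool_calls": [{"name": "add", "input": {"a": 2, "b": 3}}]}
    return {"stop_reason": "end_turn", "text": "The sum is 5."}

answer = run_tool_loop(fake_model, {"add": lambda a, b: a + b},
                       [{"role": "user", "content": "What is 2 + 3?"}])
# -> "The sum is 5."
```

Capping the iterations (`max_turns`) matters in practice: a model that keeps requesting tools would otherwise loop forever and burn API quota.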