Migrate all LLM consumers from direct Ollama calls to the centralized
mana-llm service with an OpenAI-compatible API.
Migrated services:
- matrix-ollama-bot
- telegram-ollama-bot
- chat-backend
- telegram-project-doc-bot
New env vars: MANA_LLM_URL, LLM_MODEL, LLM_TIMEOUT
These replace OLLAMA_URL, OLLAMA_MODEL, and OLLAMA_TIMEOUT.
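A minimal sketch of the new call path, assuming mana-llm exposes the standard OpenAI /v1/chat/completions route (the summary only promises "OpenAI-compatible"); the fallback URL and model below are placeholders, not the service's real defaults:

```typescript
// Hypothetical migrated consumer; the env var names come from this change,
// the request/response shape follows the OpenAI chat-completions API.
const MANA_LLM_URL = process.env.MANA_LLM_URL ?? 'http://localhost:8080'; // placeholder default
const LLM_MODEL = process.env.LLM_MODEL ?? 'gemma3:4b';                   // placeholder default
const LLM_TIMEOUT = Number(process.env.LLM_TIMEOUT ?? 120000);            // ms

async function complete(prompt: string): Promise<string> {
  const res = await fetch(`${MANA_LLM_URL}/v1/chat/completions`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: LLM_MODEL,
      messages: [{ role: 'user', content: prompt }],
    }),
    signal: AbortSignal.timeout(LLM_TIMEOUT),
  });
  if (!res.ok) throw new Error(`mana-llm request failed: ${res.status}`);
  const data = (await res.json()) as { choices: { message: { content: string } }[] };
  return data.choices[0].message.content;
}
```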
- Add OllamaService for local model inference via Ollama API
- Update ChatService to route requests by model provider (sketched below)
- Support both 'ollama' (local) and 'openrouter' (cloud) providers
- Add Gemma 3 4B as the default model (free, runs on a Mac Mini)
- Add SQL migration script for existing databases
- Update CLAUDE.md with Ollama configuration docs
Environment variables:
- OLLAMA_URL: Ollama server URL (default: http://localhost:11434)
- OLLAMA_TIMEOUT: Request timeout in ms (default: 120000)
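A hedged sketch of the routing described above; the class and method names are illustrative, not the actual ChatService API. Only the env vars and Ollama's public /api/generate endpoint are given:

```typescript
const OLLAMA_URL = process.env.OLLAMA_URL ?? 'http://localhost:11434';
const OLLAMA_TIMEOUT = Number(process.env.OLLAMA_TIMEOUT ?? 120000);

// Local inference via Ollama's generate endpoint.
class OllamaService {
  async generate(model: string, prompt: string): Promise<string> {
    const res = await fetch(`${OLLAMA_URL}/api/generate`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ model, prompt, stream: false }),
      signal: AbortSignal.timeout(OLLAMA_TIMEOUT),
    });
    if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);
    return ((await res.json()) as { response: string }).response;
  }
}

// Any client with the same generate() shape can stand in for OpenRouter here.
interface CloudLlmClient {
  generate(model: string, prompt: string): Promise<string>;
}

class ChatService {
  constructor(
    private readonly ollama: OllamaService,
    private readonly cloud: CloudLlmClient,
  ) {}

  // Route by provider: 'ollama' stays local, 'openrouter' goes to the cloud.
  chat(provider: 'ollama' | 'openrouter', model: string, prompt: string) {
    return provider === 'ollama'
      ? this.ollama.generate(model, prompt)
      : this.cloud.generate(model, prompt);
  }
}
```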
Chat:
- Add OpenRouter as primary AI provider with multiple models
- Update chat service with new model configurations
- Add model seed data for Llama, DeepSeek, Mistral, Claude, GPT-4o
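The exact records aren't shown in the change, but the seed data presumably maps display names to OpenRouter model slugs, roughly like this (the slugs below are real OpenRouter identifiers, while the specific model versions and the record shape are assumptions):

```typescript
// Hypothetical seed rows; adjust slugs and fields to the actual schema.
const MODEL_SEED = [
  { provider: 'openrouter', slug: 'meta-llama/llama-3.1-8b-instruct', label: 'Llama 3.1 8B' },
  { provider: 'openrouter', slug: 'deepseek/deepseek-chat',           label: 'DeepSeek Chat' },
  { provider: 'openrouter', slug: 'mistralai/mistral-7b-instruct',    label: 'Mistral 7B' },
  { provider: 'openrouter', slug: 'anthropic/claude-3.5-sonnet',      label: 'Claude 3.5 Sonnet' },
  { provider: 'openrouter', slug: 'openai/gpt-4o',                    label: 'GPT-4o' },
];
```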
Picture:
- Integrate @mana-core/nestjs-integration for credit system
- Implement a freemium model (3 free generations, then 10 credits per generation; see the sketch after this list)
- Migrate storage to @manacore/shared-storage
- Add comprehensive project documentation
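The freemium rule reduces to a small guard before each generation. A sketch under the assumption that @mana-core/nestjs-integration exposes some debit call; the helper below is hypothetical:

```typescript
const FREE_GENERATIONS = 3;
const CREDITS_PER_GENERATION = 10;

// Hypothetical debit helper; the real call goes through @mana-core/nestjs-integration.
declare function debitCredits(userId: string, amount: number): Promise<void>;

async function chargeForGeneration(user: { id: string; generationCount: number }): Promise<void> {
  if (user.generationCount < FREE_GENERATIONS) {
    return; // still inside the free tier
  }
  // Assumed to throw when the balance is insufficient.
  await debitCredits(user.id, CREDITS_PER_GENERATION);
}
```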
CI:
- Add a build:packages step to all test.yml jobs (fixes "@manacore/shared-nestjs-auth not found"; sketched below)
- Handle missing coverage artifacts gracefully in test-coverage.yml
- Update .prettierignore to exclude apps-archived/ and other problematic files
- Format all source files to pass CI checks
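The shape of the test.yml fix, sketched; the job and step names here are assumptions, only the build:packages script name is given:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      # New step: compile workspace packages so @manacore/shared-nestjs-auth resolves.
      - run: npm run build:packages
      - run: npm test
```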
- Change AI provider from Azure OpenAI to Google Gemini
- Update model list with Gemini 2.5 Flash (default), 2.0 Flash-Lite, and 2.5 Pro
- Update backend environment variables for the Gemini API (sketched below)
- Change default port from 3001 to 3002
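A sketch of the new call path using Google's official @google/generative-ai SDK; the GEMINI_API_KEY env var name is an assumption (the change only says the backend variables moved to Gemini), while the model ids match the list above:

```typescript
import { GoogleGenerativeAI } from '@google/generative-ai';

// Assumed env var name for the Gemini API key.
const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);

async function generate(prompt: string, modelId = 'gemini-2.5-flash'): Promise<string> {
  const model = genAI.getGenerativeModel({ model: modelId });
  const result = await model.generateContent(prompt);
  return result.response.text();
}
```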