managarten/services/mana-llm
Till JS da373491b8 chore(mana-llm): thread GOOGLE_API_KEY + default model into local compose
Matches the macmini compose: Google Gemini was already wired into the
provider adapter (commit 2 of the function-calling migration), but the
local dev stack's compose never passed the env vars through, so the
container booted without the provider and every tool-calling request
fell back to Ollama, which is unreachable in local dev (it lives on the
LAN-only GPU box).

With this in place, the local mana-llm healthcheck reports both
`google` and `openrouter` as healthy, and the webapp planner hits
Gemini Flash for real.
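The change described above amounts to a small compose fragment along these lines. This is a hedged sketch, not the actual diff: the service and env-var names other than `GOOGLE_API_KEY` (`MANA_LLM_DEFAULT_MODEL` and the `gemini-2.0-flash` default shown here) are illustrative assumptions, since the commit only names "GOOGLE_API_KEY + default model":

```yaml
# Sketch of the local docker-compose.yml change; variable names below
# (other than GOOGLE_API_KEY) are hypothetical.
services:
  mana-llm:
    environment:
      # Pass the host's key into the container; without it the Google
      # provider never initialises and tool calls fall back to Ollama.
      - GOOGLE_API_KEY=${GOOGLE_API_KEY}
      # Hypothetical name for the default-model setting, with a
      # compose-style fallback value.
      - MANA_LLM_DEFAULT_MODEL=${MANA_LLM_DEFAULT_MODEL:-gemini-2.0-flash}
```

The `${VAR}` interpolation reads from the shell or a local `.env` file, which is why the macmini stack worked while the local stack silently booted without the provider.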

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 20:42:21 +02:00
src feat(mana-llm): add OpenAI-style tools + tool_calls passthrough 2026-04-20 15:22:48 +02:00
tests feat(mana-llm): add central LLM abstraction service 2026-01-29 22:01:00 +01:00
.env.example chore(ai-services): adopt Windows GPU as source of truth for llm/stt/tts 2026-04-08 12:46:03 +02:00
.gitignore feat(mana-llm): add central LLM abstraction service 2026-01-29 22:01:00 +01:00
CLAUDE.md chore(matrix): final scrub of stale matrix references 2026-04-08 16:47:54 +02:00
docker-compose.dev.yml feat(mana-llm): add central LLM abstraction service 2026-01-29 22:01:00 +01:00
docker-compose.yml chore(mana-llm): thread GOOGLE_API_KEY + default model into local compose 2026-04-20 20:42:21 +02:00
Dockerfile feat(mana-llm): add central LLM abstraction service 2026-01-29 22:01:00 +01:00
pyproject.toml feat(mana-llm): add Google Gemini fallback provider with auto-routing 2026-03-23 22:44:09 +01:00
requirements.txt fix(mana-llm): add google-genai to requirements.txt for Docker builds 2026-04-16 12:40:30 +02:00
service.pyw chore(ai-services): adopt Windows GPU as source of truth for llm/stt/tts 2026-04-08 12:46:03 +02:00
start.sh feat(llm-playground): add model comparison feature 2026-01-31 23:30:16 +01:00