managarten/apps/chat
Till JS fa16f1fe38 feat(apps): add GPU server fallback to all LLM-using apps
Configure all apps to fall back to gpu-llm.mana.how when MANA_LLM_URL
is not set. This lets apps use the GPU server's local LLM models
(Ollama gemma3, qwen2.5-coder) instead of cloud providers.
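
As a rough illustration only (the function name and the https scheme are
assumptions, not taken from this commit), the fallback amounts to:

    // Resolve the LLM base URL, falling back to the GPU server when
    // MANA_LLM_URL is not set or is empty.
    const GPU_LLM_FALLBACK_URL = "https://gpu-llm.mana.how"; // scheme assumed

    export function resolveLlmUrl(env: NodeJS.ProcessEnv = process.env): string {
      return env.MANA_LLM_URL?.trim() || GPU_LLM_FALLBACK_URL;
    }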

Apps updated:
- Chat: LLM fallback to GPU server
- Context: LLM fallback (replaces Azure OpenAI dependency)
- NutriPhi: LLM + Vision fallback (replaces Google Gemini for food analysis)
- Planta: LLM + Vision fallback (replaces Google Gemini for plant analysis)
- ManaDeck: LLM + Vision fallback for card generation
- Traces: LLM fallback for AI city guides

Vision model default: ollama/gemma3:12b (multimodal, runs on RTX 3090)
Added VISION_MODEL to .env.development
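
A similarly hedged sketch for the vision default (resolveVisionModel is a
hypothetical name, not part of this commit):

    // Use VISION_MODEL from .env.development when present, otherwise default
    // to the local multimodal model running on the RTX 3090.
    export function resolveVisionModel(env: NodeJS.ProcessEnv = process.env): string {
      return env.VISION_MODEL?.trim() || "ollama/gemma3:12b";
    }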

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 22:21:20 +01:00
apps feat(apps): add GPU server fallback to all LLM-using apps 2026-03-27 22:21:20 +01:00
packages/chat-types feat(versioning): add semantic versioning and changesets to all apps 2026-03-19 16:20:18 +01:00
CLAUDE.md feat(chat): add all Mac Mini Ollama models to playground 2026-01-30 17:48:40 +01:00
INTEGRATION_COMPLETE.md style: auto-format codebase with Prettier 2025-11-27 18:33:16 +01:00
MANA_CORE_AUTH_INTEGRATION.md style: auto-format codebase with Prettier 2025-11-27 18:33:16 +01:00
package.json feat(versioning): add semantic versioning and changesets to all apps 2026-03-19 16:20:18 +01:00
TESTING_GUIDE.md 🔒 security(auth): migrate to EdDSA JWT and add automated monitoring 2025-12-18 21:42:47 +01:00