Mirror of https://github.com/Memo-2023/mana-monorepo.git (synced 2026-05-14 22:21:10 +02:00)
Configure all apps with gpu-llm.mana.how as the fallback when MANA_LLM_URL is not set. This ensures apps can use the GPU server's local LLM models (Ollama gemma3, qwen2.5-coder) instead of cloud providers.

Apps updated:
- Chat: LLM fallback to GPU server
- Context: LLM fallback (replaces Azure OpenAI dependency)
- NutriPhi: LLM + Vision fallback (replaces Google Gemini for food analysis)
- Planta: LLM + Vision fallback (replaces Google Gemini for plant analysis)
- ManaDeck: LLM + Vision fallback for card generation
- Traces: LLM fallback for AI city guides

Vision model default: ollama/gemma3:12b (multimodal, runs on RTX 3090)
Added VISION_MODEL to .env.development

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
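The fallback behavior described in the commit amounts to an environment lookup with defaults. The sketch below is illustrative only: the helper name `resolveLlmConfig` and the `LlmConfig` type are hypothetical and not taken from the repo, and the `https://` scheme on the GPU host is assumed; the variable names MANA_LLM_URL and VISION_MODEL and the default values come from the commit message.

```typescript
// Hypothetical sketch of the fallback resolution described in the commit message.
// MANA_LLM_URL, VISION_MODEL, gpu-llm.mana.how and ollama/gemma3:12b are from the
// commit; the helper and type names are illustrative, and the https:// scheme on
// the fallback host is an assumption.

export interface LlmConfig {
  llmUrl: string;      // base URL of the LLM endpoint
  visionModel: string; // multimodal model used for image analysis
}

export function resolveLlmConfig(
  env: Record<string, string | undefined> = process.env,
): LlmConfig {
  return {
    // Use the GPU server's local models when no explicit LLM URL is configured.
    llmUrl: env.MANA_LLM_URL ?? "https://gpu-llm.mana.how",
    // Default to the multimodal gemma3 model served by Ollama on the RTX 3090.
    visionModel: env.VISION_MODEL ?? "ollama/gemma3:12b",
  };
}
```

In .env.development the same default would appear as a plain assignment, e.g. `VISION_MODEL=ollama/gemma3:12b`, which is the line the commit notes was added.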
Repository contents:
- apps
- packages/chat-types
- CLAUDE.md
- INTEGRATION_COMPLETE.md
- MANA_CORE_AUTH_INTEGRATION.md
- package.json
- TESTING_GUIDE.md