Mirror of https://github.com/Memo-2023/mana-monorepo.git, synced 2026-05-14 18:01:09 +02:00
feat(apps): add GPU server fallback to all LLM-using apps
Configure all apps with gpu-llm.mana.how as the fallback when MANA_LLM_URL is not set. This ensures apps can use the GPU server's local LLM models (Ollama gemma3, qwen2.5-coder) instead of cloud providers.

Apps updated:
- Chat: LLM fallback to GPU server
- Context: LLM fallback (replaces Azure OpenAI dependency)
- NutriPhi: LLM + Vision fallback (replaces Google Gemini for food analysis)
- Planta: LLM + Vision fallback (replaces Google Gemini for plant analysis)
- ManaDeck: LLM + Vision fallback for card generation
- Traces: LLM fallback for AI city guides

Vision model default: ollama/gemma3:12b (multimodal, runs on RTX 3090)

Added VISION_MODEL to .env.development

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
parent c07987138e
commit fa16f1fe38

7 changed files with 13 additions and 7 deletions
.env.development

@@ -24,7 +24,7 @@ PUBLIC_GLITCHTIP_DSN=
 # Mana Core Auth Service
 MANA_CORE_AUTH_URL=http://localhost:3001
 # Mana Credits Service
-MANA_CREDITS_URL=http://localhost:3060
+MANA_CREDITS_URL=http://localhost:3061
 # Service key for service-to-service communication
 MANA_CORE_SERVICE_KEY=dev-service-key-for-bot-sso-2024
 
@@ -424,3 +424,6 @@ CITYCORNERS_WEB_PORT=5196
 GPU_API_KEY=sk-gpu-cf483ede1e05e28fba5e56c94cd3c24e7c245e57816d3e86
 GPU_SERVER_URL=https://gpu.mana.how
 GPU_SERVER_LAN_URL=http://192.168.178.11
+
+# Vision Model for NutriPhi + Planta (local, replaces Google Gemini)
+VISION_MODEL=ollama/gemma3:12b
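The fallback behaviour the commit message describes ("gpu-llm.mana.how when MANA_LLM_URL is not set", plus the VISION_MODEL default) can be sketched roughly as follows. This is an illustrative sketch only: the helper names (`resolveLlmUrl`, `resolveVisionModel`) and the exact env handling are assumptions, not the repo's actual code.

```typescript
// Hypothetical sketch of the env-var fallback described in the commit message.
// Helper names are illustrative assumptions; URLs/defaults come from the commit.
const GPU_LLM_FALLBACK_URL = "https://gpu-llm.mana.how";
const DEFAULT_VISION_MODEL = "ollama/gemma3:12b"; // default added in this commit

type Env = Record<string, string | undefined>;

// An explicitly set MANA_LLM_URL wins; otherwise fall back to the GPU server.
function resolveLlmUrl(env: Env): string {
  const url = env.MANA_LLM_URL?.trim();
  return url ? url : GPU_LLM_FALLBACK_URL;
}

// VISION_MODEL from the environment, else the multimodal local default.
function resolveVisionModel(env: Env): string {
  const model = env.VISION_MODEL?.trim();
  return model ? model : DEFAULT_VISION_MODEL;
}
```

With no env vars set, both resolvers return the GPU-server defaults; setting MANA_LLM_URL (e.g. to a cloud provider endpoint) overrides the fallback per app.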