managarten/apps/context/apps/backend
commit fa16f1fe38 by Till JS — feat(apps): add GPU server fallback to all LLM-using apps
Configure all apps to fall back to gpu-llm.mana.how when MANA_LLM_URL
is not set. This ensures apps use the GPU server's local LLM models
(Ollama gemma3, qwen2.5-coder) instead of cloud providers.

Apps updated:
- Chat: LLM fallback to GPU server
- Context: LLM fallback (replaces Azure OpenAI dependency)
- NutriPhi: LLM + Vision fallback (replaces Google Gemini for food analysis)
- Planta: LLM + Vision fallback (replaces Google Gemini for plant analysis)
- ManaDeck: LLM + Vision fallback for card generation
- Traces: LLM fallback for AI city guides

Vision model default: ollama/gemma3:12b (multimodal, runs on RTX 3090)
Added VISION_MODEL to .env.development
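The fallback behavior described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual code: the function names, the URL scheme prepended to gpu-llm.mana.how, and the shape of the env lookup are all assumptions; only MANA_LLM_URL, VISION_MODEL, and the gemma3 default come from the commit message.

```typescript
// Hypothetical sketch of the env-var fallback described in the commit.
// The https:// scheme on the GPU server URL is an assumption.
const GPU_SERVER_FALLBACK = "https://gpu-llm.mana.how";
const VISION_MODEL_FALLBACK = "ollama/gemma3:12b"; // multimodal, runs on RTX 3090

type Env = Record<string, string | undefined>;

// Prefer an explicit MANA_LLM_URL; otherwise use the GPU server.
function resolveLlmUrl(env: Env): string {
  return env.MANA_LLM_URL ?? GPU_SERVER_FALLBACK;
}

// Prefer an explicit VISION_MODEL; otherwise use the gemma3 default.
function resolveVisionModel(env: Env): string {
  return env.VISION_MODEL ?? VISION_MODEL_FALLBACK;
}
```

With this shape, an app pointed at a cloud provider keeps working unchanged (its MANA_LLM_URL wins), while an app with no configuration silently routes to the local GPU server.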

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 22:21:20 +01:00
src feat(apps): add GPU server fallback to all LLM-using apps 2026-03-27 22:21:20 +01:00
drizzle.config.ts feat(context): add NestJS backend, PostgreSQL database, and migrate web app from Supabase to API 2026-03-19 09:28:01 +01:00
jest.config.js feat(context): add NestJS backend, PostgreSQL database, and migrate web app from Supabase to API 2026-03-19 09:28:01 +01:00
nest-cli.json feat(context): add NestJS backend, PostgreSQL database, and migrate web app from Supabase to API 2026-03-19 09:28:01 +01:00
package.json feat: add unified @manacore/shared-llm package and migrate all backends 2026-03-23 22:06:30 +01:00
tsconfig.json feat(context): add NestJS backend, PostgreSQL database, and migrate web app from Supabase to API 2026-03-19 09:28:01 +01:00