fix(api): default vision model to ollama/gemma3:4b

mana-llm on the live Mac Mini does not have GOOGLE_API_KEY configured —
only the Ollama provider is registered. The previous default
'google/gemini-2.0-flash' would error with 'Provider google not
available' on every photo analysis.

Switch to ollama/gemma3:4b, which is locally available via the
gpu-proxy bridge to the Windows GPU box (192.168.178.11). Gemma 3 is
multimodal and verified end-to-end with the new mana-llm structured-
output passthrough — see commit 5520f1385, which lands the
response_format plumbing on the Pydantic side and the Ollama
provider's translation to its native format field.
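
For context, a minimal sketch of the caller side, assuming the Vercel
AI SDK's generateObject with a zod schema (the schema, prompt, and
photoBytes below are illustrative placeholders, not the actual
nutriphi route code):

    import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
    import { generateObject } from 'ai';
    import { z } from 'zod';

    const llm = createOpenAICompatible({
      name: 'mana-llm',
      baseURL: `${process.env.MANA_LLM_URL || 'http://localhost:3025'}/v1`,
    });

    // Illustrative shape only; the real route defines its own schema.
    const MealEstimate = z.object({
      name: z.string(),
      calories: z.number(),
    });

    const { object } = await generateObject({
      model: llm(process.env.VISION_MODEL || 'ollama/gemma3:4b'),
      schema: MealEstimate, // travels to mana-llm as response_format
      messages: [{
        role: 'user',
        content: [
          { type: 'text', text: 'Estimate the nutrition in this photo.' },
          { type: 'image', image: photoBytes }, // Uint8Array | base64 string
        ],
      }],
    });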

VISION_MODEL env var still wins, so prod can flip to
google/gemini-2.0-flash later by adding GOOGLE_API_KEY to mana-llm's
docker-compose env block.
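
A rough sketch of that compose change (the 'api' service name and the
exact layout are assumptions about the stack, not verified):

    services:
      mana-llm:
        environment:
          - GOOGLE_API_KEY=${GOOGLE_API_KEY}  # registers the google provider
      api:
        environment:
          - VISION_MODEL=google/gemini-2.0-flash  # overrides the default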

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Till JS 2026-04-09 19:34:32 +02:00
parent 3b035e930f
commit 958819f06a
2 changed files with 8 additions and 6 deletions


@@ -33,9 +33,11 @@ import { logger, type AuthVariables } from '@mana/shared-hono';
 const LLM_URL = process.env.MANA_LLM_URL || 'http://localhost:3025';
-// mana-llm parses model strings as `provider/model` (router.py:_parse_model).
-// Without a prefix, it defaults to ollama/ which then falls back to Google
-// only if auto_fallback_enabled + google_api_key are set. Be explicit.
-const VISION_MODEL = process.env.VISION_MODEL || 'google/gemini-2.0-flash';
+// Default to Gemma 3 (4B, multimodal) on the local Ollama instance — it
+// runs on the GPU server (192.168.178.11) via the gpu-proxy bridge and
+// supports vision out of the box. Override with VISION_MODEL=google/gemini-2.0-flash
+// (or similar) once mana-llm has GOOGLE_API_KEY configured.
+const VISION_MODEL = process.env.VISION_MODEL || 'ollama/gemma3:4b';
 const llm = createOpenAICompatible({
   name: 'mana-llm',


@@ -21,9 +21,9 @@ import {
 import { logger, type AuthVariables } from '@mana/shared-hono';
 const LLM_URL = process.env.MANA_LLM_URL || 'http://localhost:3025';
-// See nutriphi/routes.ts for the explanation of the model prefix and
-// the /v1 suffix on the base URL.
-const VISION_MODEL = process.env.VISION_MODEL || 'google/gemini-2.0-flash';
+// See nutriphi/routes.ts for the rationale on the default model and
+// the /v1 base URL.
+const VISION_MODEL = process.env.VISION_MODEL || 'ollama/gemma3:4b';
 const llm = createOpenAICompatible({
   name: 'mana-llm',