Found while smoke-testing the AI SDK refactor: both nutriphi and planta
were calling `${MANA_LLM_URL}/api/v1/chat/completions` and passing
`gemini-2.0-flash` as the model name. Both were wrong:
1. mana-llm exposes routes under /v1/, not /api/v1/. The original
pre-refactor code had the same bug; it predates this commit and
apparently went unnoticed because the photo workflow was not wired
into the unified app's UI until last week. /api/v1 returned 404
against the live mana-llm container; we now hit /v1 (corrected call
sketched after this list).
2. mana-llm's router parses model strings as `provider/model`
(services/mana-llm/src/providers/router.py:_parse_model). Without
a prefix, `gemini-2.0-flash` was being routed as
`ollama/gemini-2.0-flash` and only worked via the auto-fallback
to Google when ollama failed. Be explicit: `google/gemini-2.0-flash`
hits the Google provider directly and skips the failed-ollama
round-trip.
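
For reference, a minimal sketch of the corrected call shape, assuming a
plain fetch client and an OpenAI-compatible request body (the real
services go through the AI SDK, so the actual call sites look different):

    // Sketch only; MANA_LLM_URL is the same env var the services use.
    const MANA_LLM_URL = process.env.MANA_LLM_URL!;

    const res = await fetch(`${MANA_LLM_URL}/v1/chat/completions`, {
      // was ${MANA_LLM_URL}/api/v1/... -> 404 against the live container
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        // Explicit provider prefix: the router parses "provider/model",
        // so a bare "gemini-2.0-flash" becomes "ollama/gemini-2.0-flash"
        // and only succeeds via the Google auto-fallback.
        model: "google/gemini-2.0-flash",
        messages: [{ role: "user", content: "Describe this photo." }],
      }),
    });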
The VISION_MODEL env var still wins over the default, so prod overrides
remain possible.
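
Resolution sketch (the variable name is hypothetical; each service may
structure the lookup differently):

    // VISION_MODEL, when set, takes precedence over the new explicit
    // default, so prod can still swap models without a code change.
    const visionModel =
      process.env.VISION_MODEL ?? "google/gemini-2.0-flash";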
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>