Replaces hand-rolled fetch + JSON.parse + cast-to-any with generateObject
from the AI SDK. The model is constrained to the shared Zod schemas in
@mana/shared-types, so the response is validated at the boundary instead
of trusting Gemini to emit the right shape.
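A minimal sketch of the new pattern, assuming a hypothetical
MealAnalysis schema (the real ones live in @mana/shared-types) and a
local provider module as sketched at the end of this message:

```ts
import { generateObject } from 'ai';
import { z } from 'zod';
import { model } from './provider'; // hypothetical module, sketched below

// Illustrative schema; the real ones are exported from @mana/shared-types.
// The .describe() hints carry the structural contract for the model.
const MealAnalysis = z.object({
  name: z.string().describe('Short name of the meal'),
  calories: z.number().describe('Estimated total kcal'),
  items: z.array(z.string()).describe('Detected ingredients'),
});

export async function analyseText(description: string) {
  // generateObject validates the model output against the schema
  // and throws on mismatch, so malformed payloads never escape.
  const { object } = await generateObject({
    model,
    schema: MealAnalysis,
    system: 'You are a nutrition analyst.',
    prompt: description,
  });
  return object; // typed as z.infer<typeof MealAnalysis>
}
```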
Routes refactored:
- nutriphi/analysis/photo (image_url → multimodal `image:` content;
see the sketch after this list)
- nutriphi/analysis/text (free-text meal description)
- planta/analysis/identify (plant photo identification)
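For the photo routes, the image moves out of a custom request body and
into the AI SDK's multimodal message content. A sketch under the same
assumptions as above:

```ts
import { generateObject } from 'ai';
import { model } from './provider'; // hypothetical module, sketched below
import { MealAnalysis } from '@mana/shared-types'; // hypothetical export name

export async function analysePhoto(imageUrl: string) {
  const { object } = await generateObject({
    model,
    schema: MealAnalysis,
    messages: [
      {
        role: 'user',
        content: [
          { type: 'text', text: 'Analyse the meal in this photo.' },
          // AI SDK image content part; accepts a URL, base64 string, or bytes.
          { type: 'image', image: new URL(imageUrl) },
        ],
      },
    ],
  });
  return object;
}
```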
Why this is materially better than the old code:
- Runtime validation: if Gemini drifts, the AI SDK throws before the
response leaves the route, so the frontend never sees malformed
payloads.
- Provider-portable: createOpenAICompatible({ baseURL: MANA_LLM_URL })
keeps mana-llm as the central routing/auth/observability point. The
AI SDK speaks the OpenAI dialect to mana-llm. If we ever swap the
backend (e.g. claude-sonnet-4-6 for plant ID), it's a one-line model
name change (see the provider sketch at the end of this message).
- System prompts shrank from a multi-line, example-laden string to a
short instruction. The schema itself (with .describe() field hints,
as in the first sketch above) now carries the structural contract
that the JSON-by-example paragraph used to encode. Token cost goes
down, accuracy goes up.
- Drops manual fetch error handling (status checks, JSON.parse, casts)
in favour of try/catch around generateObject. Errors are typed; see
the sketch after this list.
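A sketch of the error path, assuming the `ai` package's
NoObjectGeneratedError as the typed failure case:

```ts
import { generateObject, NoObjectGeneratedError } from 'ai';
import { model } from './provider'; // hypothetical module, sketched below
import { MealAnalysis } from '@mana/shared-types'; // hypothetical export name

export async function safeAnalyse(description: string) {
  try {
    const { object } = await generateObject({
      model,
      schema: MealAnalysis,
      prompt: description,
    });
    return { ok: true as const, data: object };
  } catch (err) {
    // Typed check replaces hand-rolled status/parse inspection.
    if (NoObjectGeneratedError.isInstance(err)) {
      return { ok: false as const, error: 'model output failed validation' };
    }
    throw err; // network/auth failures still bubble up to the route handler
  }
}
```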
mana-llm itself is unchanged — it's still the OpenAI-compatible proxy
in front of Gemini Vision. The AI SDK just gives us a typed client and
a schema-aware decoder on top of it.
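Provider setup, a minimal sketch assuming MANA_LLM_URL is set in the
environment; the provider name and model id are illustrative:

```ts
// provider.ts — hypothetical module name used in the sketches above.
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';

const manaLlm = createOpenAICompatible({
  name: 'mana-llm',
  baseURL: process.env.MANA_LLM_URL!, // the OpenAI-compatible proxy
});

// Swapping the backend is a one-line model id change here.
export const model = manaLlm('gemini-vision'); // illustrative model id
```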
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>