The Ollama provider was completely ignoring `response_format` from the
incoming OpenAI-compatible request. Two consequences:
1. Clients that asked for `{"type":"json_object"}` or
`{"type":"json_schema",...}` got back JSON wrapped in
```json ... ``` markdown fences, because Ollama defaults to
conversational output.
2. Strict downstream parsers (Vercel AI SDK `generateObject`,
manual `JSON.parse`) failed to decode the response and threw,
even though the JSON inside the fences was valid (see the repro
sketch after this list).
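For illustration, the same failure reproduced in Python; `json.loads`
rejects the fenced payload exactly the way `JSON.parse` does, and the
payload here is made up:

```python
import json

# Hypothetical payload: valid JSON, but wrapped in the markdown fences
# Ollama emits by default when `format` is not set.
fenced = '```json\n{"ok": true}\n```'

try:
    json.loads(fenced)
except json.JSONDecodeError as err:
    print(err)  # Expecting value: line 1 column 1 (char 0)
```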
Fix: when `response_format` is set, translate it to Ollama's native
`format` field (see the sketch after this mapping):
- `{"type":"json_object"}` → `format: "json"`
- `{"type":"json_schema","json_schema":{"schema":{...}}}`
→ `format: <the schema dict>` (Ollama 0.5+ supports full JSON
schemas in the format field)
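A minimal sketch of that translation, assuming the provider builds the
Ollama request as a plain dict; the function name and surrounding
plumbing are illustrative, not the committed code:

```python
def translate_response_format(response_format: dict | None) -> str | dict | None:
    """Map an OpenAI-style response_format to Ollama's `format` field.

    Illustrative sketch; the real provider's names may differ.
    """
    if not response_format:
        return None
    if response_format.get("type") == "json_object":
        return "json"  # Ollama's plain JSON mode
    if response_format.get("type") == "json_schema":
        # Ollama 0.5+ accepts a full JSON schema dict in `format`.
        return response_format.get("json_schema", {}).get("schema")
    return None
```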
Defensive belt-and-suspenders: a small `_strip_json_fences` helper
runs after the Ollama response is decoded and removes any leftover
```json ... ``` wrapping. Some older vision models still wrap
output in fences even when `format` is set; this catches them.
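A plausible shape for the helper, assuming a single fence wraps the
whole payload; the regex and exact behaviour are an illustration, not
necessarily the committed implementation:

```python
import re

# Matches a ```json ... ``` (or bare ``` ... ```) fence around the
# entire payload and captures only the inner text.
_FENCE_RE = re.compile(r"^\s*```(?:json)?\s*\n?(.*?)\n?\s*```\s*$", re.DOTALL)


def _strip_json_fences(text: str) -> str:
    """Remove a markdown code fence wrapping the model output, if present."""
    match = _FENCE_RE.match(text)
    return match.group(1) if match else text
```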
Streaming path is unchanged because the nutriphi/planta refactor uses
non-streaming `generateObject`. Streaming structured output with
Ollama deserves its own pass when someone actually needs it.
Discovered during the AI SDK + Zod refactor smoke test: neither the
old nor the new vision routes ever returned validated JSON locally
because of this bug. Production uses Google Gemini via the fallback
path, so the issue was masked there.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>