mirror of
https://github.com/Memo-2023/mana-monorepo.git
synced 2026-05-14 22:41:09 +02:00
Reasoning-style models (Gemma 4 E4B is the first one we use, but DeepSeek R1, Gemini 2.5 thinking, etc. behave the same way) split their output into two fields:

- `message.content`: the final answer
- `message.reasoning`: the chain-of-thought leading up to it

When the model is given too few `max_tokens` to finish reasoning AND emit content, the response comes back with `content=""` and `reasoning` populated with the half-finished thought. Verified empirically with gemma4:e4b and `max_tokens: 10` on a "Sage Hi auf Deutsch in einem Wort" ("say hi in German in one word") prompt: content was `""` while reasoning held "Here's a thinking process to..." (cut off mid-thought).

For the title task this rarely matters because the system prompt is directive enough to skip the thinking phase (verified: the same gemma4:e4b returns clean 7-token titles like "Sonnenstrahlen genießen heute" with the standard system prompt + `max_tokens: 32`). But it is a real failure mode for any future task that uses a less directive prompt or hits a longer reasoning chain.

Defensive fix: prefer `message.content` first, fall back to `message.reasoning` if content is empty. The fallback is a string-or-nothing operation with no semantic interpretation. If the reasoning field happens to contain a usable answer fragment, the caller's cleanup chain (e.g. generateTitleTask's strip-quotes-and-dots pipeline) will normalize it; if it is a truly half-finished thought, the caller's runRules fallback still kicks in via the existing empty-result detection.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
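A minimal sketch of the fallback described above. The `ChatMessage` shape and the `extractText` helper name are illustrative assumptions, not the repo's actual API; only the field names `content` and `reasoning` come from the observed response format.

```typescript
// Hypothetical response shape; field names match the reasoning-model
// output described above (content = final answer, reasoning = CoT).
interface ChatMessage {
  content?: string | null;
  reasoning?: string | null;
}

// Prefer message.content; fall back to message.reasoning only when
// content is empty. Pure string-or-nothing: no attempt to interpret
// the chain-of-thought -- downstream cleanup/fallback handles that.
function extractText(message: ChatMessage): string {
  const content = (message.content ?? "").trim();
  if (content.length > 0) {
    return content;
  }
  return (message.reasoning ?? "").trim();
}

// Example: a truncated response where only reasoning was populated.
const truncated: ChatMessage = {
  content: "",
  reasoning: "Here's a thinking process to...",
};
console.log(extractText(truncated)); // the half-finished thought
```

If both fields are empty the helper returns `""`, so the existing empty-result detection (and the runRules fallback behind it) still triggers unchanged.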
Repository contents:

- src
- package.json
- tsconfig.json