managarten/packages/shared-llm/src/backends
Till JS be81d11dc3 feat(ai): SSE streaming for foreground Mission Runner
Enable real-time token streaming during the planner "calling-llm" phase
so the user sees live progress ("receiving plan… 128 tokens") instead of
a static spinner. The plan parser still receives the full text only once
streaming completes, so there is no risk of parsing partial JSON.
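The stream-then-parse-complete behavior can be sketched as below. Only the `onToken` callback name comes from this change; the helper name `consumeSseText` and the wire format (OpenAI-style `delta.content` frames with a `[DONE]` sentinel) are illustrative assumptions, not the shared parser's actual API:

```typescript
// Sketch: consume an SSE buffer, emitting each token as it arrives while
// still returning the complete text for the (non-streaming) plan parser.
// Frame shape is assumed OpenAI-style; the real @mana/shared-llm/sse-parser
// may handle a different event format.
type OnToken = (token: string, totalSoFar: string) => void;

function consumeSseText(raw: string, onToken?: OnToken): string {
  let fullText = "";
  // SSE events are separated by a blank line; payload lines start with "data:".
  for (const event of raw.split("\n\n")) {
    for (const line of event.split("\n")) {
      if (!line.startsWith("data:")) continue;
      const payload = line.slice(5).trim();
      if (payload === "[DONE]") return fullText; // end-of-stream sentinel
      const token = JSON.parse(payload)?.choices?.[0]?.delta?.content ?? "";
      if (token) {
        fullText += token;
        onToken?.(token, fullText); // live progress for the UI
      }
    }
  }
  return fullText;
}
```

The caller that needs valid JSON (the plan parser) only ever sees the returned `fullText`; the UI-facing `onToken` path is purely additive.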

Changes:
- Extract shared SSE parser from playground into @mana/shared-llm/sse-parser
- remote.ts: use stream: true when an onToken callback is provided
- AiPlanInput: add optional onToken field (shared-ai)
- ai-plan task: pass onToken through to backend.generate()
- runner.ts: throttled (500ms) phaseDetail updates during streaming
- Playground: refactored to use shared SSE parser
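The throttled phaseDetail updates can be sketched as follows. Only the 500 ms interval and the onToken/phaseDetail names come from the change list; `makeThrottledOnToken`, the injected clock, and the message text are hypothetical:

```typescript
// Sketch: wrap a UI update in a time-based throttle so each streamed token
// does not force a re-render; at most one phaseDetail write per interval.
// The clock is injectable only to make the sketch testable.
function makeThrottledOnToken(
  setPhaseDetail: (detail: string) => void, // hypothetical runner.ts setter
  intervalMs = 500,
  now: () => number = Date.now,
): (token: string, totalSoFar: string) => void {
  let lastEmit = -Infinity; // ensure the very first token emits immediately
  let tokenCount = 0;
  return (_token, _totalSoFar) => {
    tokenCount += 1;
    const t = now();
    if (t - lastEmit >= intervalMs) {
      lastEmit = t;
      setPhaseDetail(`receiving plan… ${tokenCount} tokens`);
    }
  };
}
```

A trailing-edge flush (one final update when the stream closes) would keep the displayed count accurate at the end; the sketch omits it for brevity.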

Also includes: AI agent architecture comparison report (docs/reports/)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 12:32:43 +02:00
byok-providers feat(llm): add BYOK tier + 4 provider adapters (OpenAI, Anthropic, Gemini, Mistral) 2026-04-14 15:06:48 +02:00
browser.ts fix(mana/web): unwrap $state proxy in workbench-scenes Dexie writes 2026-04-09 00:44:00 +02:00
byok.test.ts test(byok): add 35 unit tests + update docs to as-built status 2026-04-14 15:23:03 +02:00
byok.ts feat(llm): add BYOK tier + 4 provider adapters (OpenAI, Anthropic, Gemini, Mistral) 2026-04-14 15:06:48 +02:00
cloud.ts chore(cloud-tier): upgrade default model gemini-2.0-flash → gemini-2.5-flash 2026-04-16 12:32:03 +02:00
mana-server.ts docs(shared-llm): correct the mana-server tier topology in code + CLAUDE.md 2026-04-09 16:40:34 +02:00
remote.ts feat(ai): SSE streaming for foreground Mission Runner 2026-04-16 12:32:43 +02:00