mirror of
https://github.com/Memo-2023/mana-monorepo.git
synced 2026-05-15 01:01:09 +02:00
Enable real-time token streaming during the planner "calling-llm" phase
so the user sees live progress ("receiving plan… 128 tokens") instead of
a static spinner. The parser still receives the full text once complete —
no partial-JSON risk.
Changes:
- Extract shared SSE parser from playground into @mana/shared-llm/sse-parser
- remote.ts: use stream:true when onToken callback is provided
- AiPlanInput: add optional onToken field (shared-ai)
- ai-plan task: pass onToken through to backend.generate()
- runner.ts: throttled (500ms) phaseDetail updates during streaming
- Playground: refactored to use shared SSE parser
Also includes: AI agent architecture comparison report (docs/reports/)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
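A minimal sketch of the two mechanisms this commit describes: parsing individual SSE `data:` lines into token payloads, and throttling a progress callback so `phaseDetail` updates fire at most once per interval (500 ms in `runner.ts`). The names `parseSseLine` and `throttle` are illustrative, not the actual `@mana/shared-llm/sse-parser` API.

```typescript
/** Extract the payload of a single SSE line, or null for non-data lines
 *  and the terminal "[DONE]" sentinel. */
export function parseSseLine(line: string): string | null {
  if (!line.startsWith("data:")) return null;
  const payload = line.slice("data:".length).trim();
  return payload === "[DONE]" ? null : payload;
}

/** Wrap a callback so it fires at most once per `ms` milliseconds;
 *  calls arriving inside the window are dropped. */
export function throttle<T>(fn: (arg: T) => void, ms: number): (arg: T) => void {
  let last = 0;
  return (arg: T) => {
    const now = Date.now();
    if (now - last >= ms) {
      last = now;
      fn(arg);
    }
  };
}
```

A caller would wrap its `onToken` handler with `throttle(..., 500)` before passing it down, so the UI counter updates smoothly while the raw token stream still accumulates in full for the parser.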
29 lines
708 B
JSON
{
  "name": "@mana/shared-llm",
  "version": "2.0.0",
  "private": true,
  "sideEffects": false,
  "description": "Tiered LLM orchestrator for Mana — routes tasks across rules / browser-edge / mana-server / cloud backends with explicit user-controlled privacy tiers",
  "main": "./src/index.ts",
  "types": "./src/index.ts",
  "exports": {
    ".": "./src/index.ts",
    "./sse-parser": "./src/sse-parser.ts"
  },
  "scripts": {
    "type-check": "tsc --noEmit",
    "clean": "rm -rf dist"
  },
  "dependencies": {
    "@mana/local-llm": "workspace:*"
  },
  "devDependencies": {
    "@types/node": "^24.10.1",
    "svelte": "^5.0.0",
    "typescript": "^5.9.3"
  },
  "peerDependencies": {
    "dexie": "^4.0.0",
    "svelte": "^5.0.0"
  }
}
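The `exports` map above exposes the SSE parser as a dedicated subpath, so workspace consumers can import it without pulling in the whole orchestrator entry point. A hypothetical consumer might look like this (the imported name is an assumption, not the package's documented API):

```typescript
// Subpath import resolved via the "./sse-parser" entry in "exports";
// root imports go through "." -> ./src/index.ts instead.
import { parseSseLine } from "@mana/shared-llm/sse-parser";

const token = parseSseLine('data: {"delta":"Hi"}');
```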