New Bun/Hono service on port 3068 that bundles multiple web-research
providers behind a unified interface for side-by-side comparison. All eval
runs are persisted in research.* (mana_platform) so result quality can be
reviewed later.
Providers (Phase 1+2):
search: searxng, duckduckgo, brave, tavily, exa, serper
extract: readability (via mana-search), jina-reader, firecrawl
Endpoints:
POST /v1/search, /v1/search/compare — single + fan-out
POST /v1/extract, /v1/extract/compare — single + fan-out
GET /v1/runs, /v1/runs/:id — history
POST /v1/runs/:run/results/:id/rate — manual eval
GET /v1/providers, /v1/providers/health — catalog + readiness
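The route surface above can be sketched as a dependency-free dispatch table. This is an illustration only: the real service registers these as Hono handlers, and the handler bodies and return shapes here are hypothetical stubs.

```typescript
// Hypothetical sketch of the /v1 route catalog; handler bodies are stubs.
type Handler = (req: { params?: Record<string, string> }) => unknown;

const routes: Record<string, Handler> = {
  "POST /v1/search":                     () => ({ kind: "single-provider search" }),
  "POST /v1/search/compare":             () => ({ kind: "fan-out search" }),
  "POST /v1/extract":                    () => ({ kind: "single-provider extract" }),
  "POST /v1/extract/compare":            () => ({ kind: "fan-out extract" }),
  "GET /v1/runs":                        () => ({ kind: "run history" }),
  "GET /v1/runs/:id":                    (r) => ({ kind: "run detail", id: r.params?.id }),
  "POST /v1/runs/:run/results/:id/rate": () => ({ kind: "manual rating" }),
  "GET /v1/providers":                   () => ({ kind: "provider catalog" }),
  "GET /v1/providers/health":            () => ({ kind: "provider readiness" }),
};

// Match a concrete request against the catalog, binding :params.
function dispatch(method: string, path: string): unknown {
  for (const [key, handler] of Object.entries(routes)) {
    const [m, pattern] = key.split(" ");
    if (m !== method) continue;
    const patParts = pattern.split("/");
    const reqParts = path.split("/");
    if (patParts.length !== reqParts.length) continue;
    const params: Record<string, string> = {};
    let ok = true;
    for (let i = 0; i < patParts.length; i++) {
      if (patParts[i].startsWith(":")) params[patParts[i].slice(1)] = reqParts[i];
      else if (patParts[i] !== reqParts[i]) { ok = false; break; }
    }
    if (ok) return handler({ params });
  }
  return null; // unknown route
}
```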
Auto-routing: when `provider` is omitted, queries are classified via regex
(fast path, 0ms) with optional mana-llm fallback, then routed to the first
available provider for that query type (news → tavily, academic → exa,
semantic → exa, etc.).
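The regex fast path plus routing table might look like the sketch below. The patterns, query types beyond those named above, and fallback provider lists are assumptions for illustration; the real classifier and the mana-llm fallback live in the service.

```typescript
// Sketch of the regex fast path. Patterns are illustrative assumptions.
type QueryType = "news" | "academic" | "semantic" | "general";

const rules: Array<[RegExp, QueryType]> = [
  [/\b(news|latest|today|breaking)\b/i, "news"],
  [/\b(paper|study|arxiv|doi)\b/i, "academic"],
  [/\b(similar to|related to)\b/i, "semantic"],
];

function classify(query: string): QueryType {
  for (const [re, type] of rules) if (re.test(query)) return type;
  return "general"; // regex missed; real service may optionally ask mana-llm
}

// Preferred providers per query type, in order (fallbacks are assumed).
const routing: Record<QueryType, string[]> = {
  news: ["tavily", "brave"],
  academic: ["exa", "serper"],
  semantic: ["exa"],
  general: ["searxng", "duckduckgo"],
};

// Route to the first available provider for the classified query type.
function route(query: string, available: Set<string>): string | undefined {
  return routing[classify(query)].find((p) => available.has(p));
}
```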
Credits: server-key calls go through mana-credits reserve → commit/refund
so failed provider calls don't charge the user. BYO-keys supported via
research.provider_configs (UI arrives in Phase 4).
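The reserve → commit/refund flow can be sketched as a wrapper around the provider call. The `CreditsClient` interface below is a hypothetical stand-in for mana-credits; only the shape of the pattern is taken from the text.

```typescript
// Hypothetical stand-in for the mana-credits client.
interface CreditsClient {
  reserve(user: string, amount: number): Promise<string>; // returns reservation id
  commit(reservationId: string): Promise<void>;
  refund(reservationId: string): Promise<void>;
}

// Reserve credits, run the provider call, then commit on success or
// refund on failure so a failed call never charges the user.
async function withCredits<T>(
  credits: CreditsClient,
  user: string,
  cost: number,
  call: () => Promise<T>,
): Promise<T> {
  const reservation = await credits.reserve(user, cost);
  try {
    const result = await call();
    await credits.commit(reservation);
    return result;
  } catch (err) {
    await credits.refund(reservation);
    throw err;
  }
}
```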
Cache: Redis with graceful degradation (1h TTL for search, 24h for
extract). Pay-per-use APIs only — no subscription-gated providers.
Docs: docs/plans/mana-research-service.md + docs/reports/web-research-capabilities.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Fix missing strikethroughs in §6 table (#1, #2, #6) and update the Fazit
(conclusion) section to reflect the final state: 7 of 10 items done.
Document the remaining 3 long-term items with context on their
dependencies and priorities.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Complete tool handler coverage for the MCP server:
Todo: complete_tasks_by_title
Calendar: create_event (with timeBlock)
Notes: update_note, append_to_note, add_tag_to_note
Places: create_place, visit_place, get_places
Drink: log_drink, get_drink_progress, undo_drink
Food: log_meal, nutrition_summary
Journal: create_journal_entry
Habits: create_habit, log_habit (get_habits improved)
News: save_news_article
27 of 29 tools now have real implementations. Remaining 2
(research_news, get_current_location) need external service
calls that aren't available in the API server context.
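A dispatch pattern of this kind might separate implemented handlers from the two stubs that need external services, so unavailable tools return a structured error rather than failing silently. Everything below (result shape, handler bodies) is a hypothetical sketch, not the server's actual code.

```typescript
// Hypothetical tool-dispatch sketch for the MCP server.
type ToolResult = { ok: boolean; data?: unknown; error?: string };
type ToolHandler = (args: Record<string, unknown>) => ToolResult;

const handlers: Record<string, ToolHandler> = {
  complete_tasks_by_title: (args) => ({ ok: true, data: { completed: args.titles } }),
  log_drink: (args) => ({ ok: true, data: { logged: args.amount } }),
  // ...remaining implemented tools elided...
};

// Tools whose implementations need services unreachable from the API server.
const needsExternalService = new Set(["research_news", "get_current_location"]);

function callTool(name: string, args: Record<string, unknown>): ToolResult {
  if (needsExternalService.has(name)) {
    return { ok: false, error: `${name} requires an external service not available here` };
  }
  const handler = handlers[name];
  return handler ? handler(args) : { ok: false, error: `unknown tool: ${name}` };
}
```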
Also updates architecture comparison report to mark MCP as done.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Enable real-time token streaming during the planner "calling-llm" phase
so the user sees live progress ("empfange Plan… 128 tokens", i.e.
"receiving plan… 128 tokens") instead of a static spinner. The parser
still receives the full text once streaming completes, so there is no
partial-JSON risk.
Changes:
- Extract shared SSE parser from playground into @mana/shared-llm/sse-parser
- remote.ts: use stream:true when onToken callback is provided
- AiPlanInput: add optional onToken field (shared-ai)
- ai-plan task: pass onToken through to backend.generate()
- runner.ts: throttled (500ms) phaseDetail updates during streaming
- Playground: refactored to use shared SSE parser
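The shared SSE parser in the list above might look like the sketch below: it buffers partial lines across chunk boundaries and invokes onToken per extracted delta. The OpenAI-style `data: <json>` event shape, `[DONE]` sentinel, and `choices[0].delta.content` field path are assumptions; the actual extraction logic lives in @mana/shared-llm/sse-parser.

```typescript
// Sketch of an SSE token parser (assumed OpenAI-style event format).
function createSseParser(onToken: (token: string) => void) {
  let buffer = "";
  return (chunk: string): void => {
    buffer += chunk;
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep the incomplete last line for the next chunk
    for (const line of lines) {
      if (!line.startsWith("data:")) continue;
      const payload = line.slice(5).trim();
      if (payload === "[DONE]") return; // end-of-stream sentinel
      try {
        const token = JSON.parse(payload)?.choices?.[0]?.delta?.content;
        if (typeof token === "string") onToken(token);
      } catch {
        // ignore malformed events
      }
    }
  };
}
```

The runner's 500ms-throttled phaseDetail updates would sit on top of a callback like this, counting tokens rather than re-rendering on every delta.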
Also includes: AI agent architecture comparison report (docs/reports/)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>