mirror of
https://github.com/Memo-2023/mana-monorepo.git
synced 2026-05-17 17:09:40 +02:00
feat(research): Phase 3a — 4 sync research agents
Adds Perplexity Sonar, Claude web_search, OpenAI Responses, and Gemini
Grounding as ResearchAgents behind the same comparison interface as the
search and extract providers.
New endpoints:
- POST /v1/research — single-agent (or auto-routed to the first
  provider with a configured key)
- POST /v1/research/compare — fan-out across N agents, persist all
  answers + citations in research.eval_*
Each agent normalizes its native response into a common AgentAnswer shape
(answer text + citations[] + tokenUsage), storing the provider's raw
response alongside for later inspection. Implementations use direct HTTP
against each vendor's public API — no SDK deps added.
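The common shape might look roughly like the sketch below. This is illustrative only: the field names (`AgentAnswer`, `Citation`, `tokenUsage`, `raw`) follow the commit message's description, but the actual types in mana-research may differ, and `fromPerplexity` is a hypothetical normalizer for a Perplexity-style chat-completions response.

```typescript
// Hypothetical sketch of the common answer shape; real field names
// in mana-research may differ.
interface Citation {
  url: string;
  title?: string;
}

interface AgentAnswer {
  provider: string;                           // e.g. "perplexity-sonar"
  answer: string;                             // normalized answer text
  citations: Citation[];
  tokenUsage: { input: number; output: number };
  raw: unknown;                               // provider's unmodified response, kept for inspection
}

// Example normalizer for a Perplexity-style chat response (assumed shape).
function fromPerplexity(raw: any): AgentAnswer {
  return {
    provider: "perplexity-sonar",
    answer: raw.choices?.[0]?.message?.content ?? "",
    citations: (raw.citations ?? []).map((url: string) => ({ url })),
    tokenUsage: {
      input: raw.usage?.prompt_tokens ?? 0,
      output: raw.usage?.completion_tokens ?? 0,
    },
    raw,
  };
}
```

Keeping `raw` alongside the normalized fields is what lets the compare endpoint persist everything while later inspection can still reach vendor-specific details.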
Auto-routing preference: perplexity-sonar → gemini-grounding →
openai-responses → claude-web-search → (openai-deep-research stubbed for
Phase 3b). Credits orchestration reuses the search/extract executor
pattern (reserve → call → commit/refund).
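The reserve → call → commit/refund flow can be sketched as below. This is a minimal illustration of the pattern, not the actual executor code: the `CreditsLedger` interface and `withCredits` helper are assumed names, invented here to show the shape of the orchestration.

```typescript
// Sketch of the reserve → call → commit/refund pattern the research
// executor reuses from search/extract. Interface and function names
// are assumptions, not the real mana-research API.
interface CreditsLedger {
  reserve(userId: string, amount: number): Promise<string>; // returns a reservation id
  commit(reservationId: string, actualCost: number): Promise<void>;
  refund(reservationId: string): Promise<void>;
}

async function withCredits<T>(
  ledger: CreditsLedger,
  userId: string,
  estimatedCost: number,
  call: () => Promise<{ result: T; actualCost: number }>,
): Promise<T> {
  // Reserve up front so a slow provider call can't overdraw the account.
  const reservationId = await ledger.reserve(userId, estimatedCost);
  try {
    const { result, actualCost } = await call();
    await ledger.commit(reservationId, actualCost); // charge what was actually used
    return result;
  } catch (err) {
    await ledger.refund(reservationId); // provider failed: release the reservation
    throw err;
  }
}
```

Committing the actual cost (rather than the estimate) matters for agents whose token usage varies widely per query.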
Deferred to Phase 3b: openai-deep-research (async job queue), migration
of mana-ai + mana-api news-research to call this service directly.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
parent 928f036033
commit 49f315f6be
12 changed files with 879 additions and 15 deletions
@@ -430,17 +430,21 @@ Classification is optional and falls back to `'general'` on LLM timeout.
 - [x] Run-list endpoints already delivered in Phase 1
 - [x] ~~Nightly job~~: live aggregation in the `addResult()` path via `onConflictDoUpdate` is sufficient for Phase 2.
 
 ### Phase 3 — Research Agents + mana-ai Migration (≈ 1–2 weeks)
 
+### Phase 3a — Sync Research Agents ✅ (2026-04-17)
+
-- [ ] Provider adapters:
+- [x] Provider adapters (via direct HTTP, no SDK deps):
   - `PerplexitySonarProvider` (4 models: sonar, sonar-pro, sonar-reasoning, sonar-deep-research)
-  - `ClaudeWebSearchProvider` (via Anthropic SDK + tool use)
-  - `OpenAIResponsesProvider` (via OpenAI SDK + `web_search_preview` tool)
-  - `GeminiGroundingProvider` (via google-genai SDK with search grounding)
-  - `OpenAIDeepResearchProvider` — **async**, via BullMQ/inline job queue, response endpoint `GET /v1/research/tasks/:id`
-- [ ] `POST /v1/research` + `POST /v1/research/compare`
-- [ ] Auto-router for `conversational` queries → agent mode
-- [ ] Extend `mana-llm` with Anthropic and OpenAI providers (only for the Claude/OpenAI agents; the rest of the LLM workflow stays Ollama-first)
+  - `ClaudeWebSearchProvider` (Anthropic Messages API with the `web_search_20250305` tool)
+  - `OpenAIResponsesProvider` (OpenAI Responses API with the `web_search_preview` tool)
+  - `GeminiGroundingProvider` (Google GenAI v1beta with Google Search grounding)
+- [x] `POST /v1/research` + `POST /v1/research/compare`
+- [x] Agent auto-router (`pickAgent` selects the first provider with a key: perplexity → gemini → openai → claude → deep-research)
+- [x] Agents integrated into `/v1/providers` + `/v1/providers/health`
+
+### Phase 3b — Async + Migrations (open)
+
+- [ ] `OpenAIDeepResearchProvider` — async, via job queue, `GET /v1/research/tasks/:id` polling endpoint
+- [ ] Auto-router for `conversational` queries → agent mode in `/v1/search` (currently separate endpoints)
+- [ ] **Migration:** `apps/api/src/modules/news-research/routes.ts` becomes a thin adapter over `mana-research`
+- [ ] **Migration:** `services/mana-ai/src/planner/news-research-client.ts` now calls `mana-research` directly instead of `mana-api`
+- [ ] **Migration:** the `research_news` tool gets a `depth: 'shallow' | 'deep'` option; `deep` invokes agent mode
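The agent auto-router mentioned in the diff (first provider with a configured key, in the stated preference order) could be sketched like this. The `pickAgent` name appears in the roadmap, but the table layout and environment-variable names below are assumptions.

```typescript
// Sketch of first-provider-with-a-key auto-routing, following the
// preference order from the commit message. Env-var names are assumptions.
const AGENT_PREFERENCE = [
  { id: "perplexity-sonar",  envKey: "PERPLEXITY_API_KEY" },
  { id: "gemini-grounding",  envKey: "GEMINI_API_KEY" },
  { id: "openai-responses",  envKey: "OPENAI_API_KEY" },
  { id: "claude-web-search", envKey: "ANTHROPIC_API_KEY" },
] as const;

function pickAgent(env: Record<string, string | undefined>): string | null {
  for (const agent of AGENT_PREFERENCE) {
    if (env[agent.envKey]) return agent.id; // first configured provider wins
  }
  return null; // no research provider configured
}
```

Because routing only inspects configured keys, a deployment with just one vendor key still gets a working `/v1/research` endpoint with no extra configuration.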