managarten/packages/shared-ai/src/planner
Till JS 0d613e1846 feat(ai): thread TokenUsage through runPlannerLoop → mana-ai budget
Carries per-round token counts from the mana-llm response body
(prompt_tokens + completion_tokens) back through LlmCompletionResponse
→ PlannerLoopResult. The loop sums across rounds and exposes a single
aggregate on result.usage.
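The per-round summing described above might be sketched as follows; `TokenUsage` field names and the `sumUsage` helper are assumptions for illustration, not the actual shapes in `@mana/shared-ai`:

```typescript
// Hypothetical shape inferred from the commit message: per-round counts
// coming back from the mana-llm response body (prompt_tokens + completion_tokens).
interface TokenUsage {
  promptTokens: number;
  completionTokens: number;
}

// Sum usage across rounds into the single aggregate the loop would
// expose on result.usage (helper name is illustrative).
function sumUsage(rounds: TokenUsage[]): TokenUsage {
  return rounds.reduce(
    (acc, r) => ({
      promptTokens: acc.promptTokens + r.promptTokens,
      completionTokens: acc.completionTokens + r.completionTokens,
    }),
    { promptTokens: 0, completionTokens: 0 },
  );
}
```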

Lets mana-ai's tick re-activate per-agent daily-token budget tracking
— tokensUsed was stubbed to 0 in the migration commit (6) because the
loop didn't surface usage yet. Now recordTokenUsage + agentTokenUsage24h
get real numbers again, and the mana_ai_tokens_used_total Prometheus
counter is accurate.

Additive only: consumers that don't need usage can ignore the new field,
and providers that don't return usage produce zeros (not undefined —
the loop still exposes the object, so downstream branches stay trivial).
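The zeros-not-undefined default could look like this minimal sketch; the `usageFromProvider` helper and the raw snake_case field names are assumptions based on the message, not the real `LlmCompletionResponse` mapping:

```typescript
interface TokenUsage {
  promptTokens: number;
  completionTokens: number;
}

// A provider response may omit usage entirely; map missing fields to 0 so
// the loop always exposes a TokenUsage object and downstream branches
// never have to check for undefined (names are illustrative).
function usageFromProvider(
  raw?: { prompt_tokens?: number; completion_tokens?: number },
): TokenUsage {
  return {
    promptTokens: raw?.prompt_tokens ?? 0,
    completionTokens: raw?.completion_tokens ?? 0,
  };
}
```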

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 18:21:34 +02:00
index.ts feat(ai): thread TokenUsage through runPlannerLoop → mana-ai budget 2026-04-20 18:21:34 +02:00
loop.test.ts test(ai): promote MockLlmClient to a shared @mana/shared-ai export 2026-04-20 18:05:46 +02:00
loop.ts feat(ai): thread TokenUsage through runPlannerLoop → mana-ai budget 2026-04-20 18:21:34 +02:00
mock-llm.ts test(ai): promote MockLlmClient to a shared @mana/shared-ai export 2026-04-20 18:05:46 +02:00
parser.test.ts feat(shared-ai): extract planner + mission types to @mana/shared-ai 2026-04-15 00:01:57 +02:00
parser.ts feat(shared-ai): extract planner + mission types to @mana/shared-ai 2026-04-15 00:01:57 +02:00
prompt.ts chore(ai): P2 batch — prompt sync, perf, dedup, scope unification 2026-04-16 16:33:52 +02:00
system-prompt.ts feat(shared-ai): runPlannerLoop + compact system prompt for function calling 2026-04-20 15:31:01 +02:00
types.ts feat(ai): SSE streaming for foreground Mission Runner 2026-04-16 12:32:43 +02:00