managarten/services/mana-ai/package.json
Till JS 76577869e1 feat(mana-ai): OpenTelemetry tracing + Grafana Tempo backend
Add distributed tracing to the mana-ai background runner so mission
execution can be visualized end-to-end in Grafana.

Instrumentation (services/mana-ai/):
- tracing.ts: OTel provider setup with OTLP/HTTP exporter, withSpan() helper
- tick.ts: tick.planMission span with mission/agent/user attributes
- client.ts: planner.complete span with LLM model, tokens, latency
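The withSpan() helper mentioned for tracing.ts can be sketched as follows. This is a dependency-free illustration of the lifecycle contract only: the real tracing.ts would obtain a tracer from @opentelemetry/api (trace.getTracer()) backed by a BasicTracerProvider and the OTLP/HTTP exporter, and the toy Span/Tracer interfaces here are stand-ins, not the library's types.

```typescript
// Toy stand-ins for the OTel Span/Tracer interfaces (illustrative only).
interface Span {
  setAttribute(key: string, value: string | number): void;
  recordException(err: unknown): void;
  setStatus(ok: boolean): void;
  end(): void;
}

interface Tracer {
  startSpan(name: string): Span;
}

// Run fn inside a named span: set the given attributes up front, mark the
// span ok/error depending on the outcome, and always end it in finally so
// a throwing mission step cannot leak an open span.
async function withSpan<T>(
  tracer: Tracer,
  name: string,
  attributes: Record<string, string | number>,
  fn: (span: Span) => Promise<T>,
): Promise<T> {
  const span = tracer.startSpan(name);
  for (const [k, v] of Object.entries(attributes)) span.setAttribute(k, v);
  try {
    const result = await fn(span);
    span.setStatus(true);
    return result;
  } catch (err) {
    span.recordException(err);
    span.setStatus(false);
    throw err;
  } finally {
    span.end();
  }
}
```

tick.ts and client.ts would then wrap their work in withSpan("tick.planMission", ...) and withSpan("planner.complete", ...) respectively.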

Infrastructure:
- docker/tempo/tempo.yaml: Grafana Tempo config (OTLP HTTP on 4318)
- docker-compose: tempo service + tempo_data volume + mana-ai env var
- docker/grafana/provisioning/datasources/tempo.yml: auto-provisioned
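A minimal single-binary Tempo config along these lines would accept OTLP over HTTP on 4318 and store blocks locally; the values below are illustrative, and the committed docker/tempo/tempo.yaml is authoritative.

```yaml
# Illustrative Tempo config (monolithic mode, local storage).
server:
  http_listen_port: 3200        # query API; Grafana datasource points here

distributor:
  receivers:
    otlp:
      protocols:
        http:
          endpoint: 0.0.0.0:4318   # matches OTEL_EXPORTER_OTLP_ENDPOINT

storage:
  trace:
    backend: local
    local:
      path: /var/tempo/blocks    # persisted via the tempo_data volume
    wal:
      path: /var/tempo/wal
```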

Trace flow:
  tick.planMission (root span)
    └── planner.complete (child span)
        ├── llm.model = "gpt-4o-mini"
        ├── llm.tokens.total = 1234
        └── llm.response.length = 567
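The llm.* leaves in the tree are span attributes that client.ts derives from the completion response. A sketch of that mapping, assuming an OpenAI-style response shape (the CompletionResponse interface and llmSpanAttributes helper are hypothetical names, not the actual client.ts code):

```typescript
// Hypothetical response shape; mirrors OpenAI-style chat completions.
interface CompletionResponse {
  model: string;
  usage: { prompt_tokens: number; completion_tokens: number; total_tokens: number };
  content: string;
}

// Map a completion response to the planner.complete span attributes
// shown in the trace-flow diagram above.
function llmSpanAttributes(res: CompletionResponse): Record<string, string | number> {
  return {
    "llm.model": res.model,
    "llm.tokens.total": res.usage.total_tokens,
    "llm.response.length": res.content.length,
  };
}
```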

Enable: set OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
View: Grafana → Explore → Tempo datasource
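The compose wiring described above could look roughly like this; service and volume names come from the commit message, but image tags, ports, and paths are illustrative (inside the compose network the endpoint would be http://tempo:4318 rather than localhost).

```yaml
# Illustrative docker-compose fragment for the tempo service + env wiring.
services:
  tempo:
    image: grafana/tempo:latest
    command: ["-config.file=/etc/tempo/tempo.yaml"]
    volumes:
      - ./docker/tempo/tempo.yaml:/etc/tempo/tempo.yaml
      - tempo_data:/var/tempo
    ports:
      - "4318:4318"   # OTLP/HTTP ingest
      - "3200:3200"   # query API (Grafana Tempo datasource)

  mana-ai:
    environment:
      OTEL_EXPORTER_OTLP_ENDPOINT: http://tempo:4318

volumes:
  tempo_data:
```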

Also fixed: removed a broken @mana/subscriptions workspace ref from arcade.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 15:21:23 +02:00

27 lines
679 B
JSON

{
  "name": "@mana/ai-service",
  "version": "0.1.0",
  "private": true,
  "type": "module",
  "scripts": {
    "dev": "bun run --watch src/index.ts",
    "start": "bun run src/index.ts",
    "test": "bun test"
  },
  "dependencies": {
    "@mana/shared-ai": "workspace:*",
    "@mana/shared-hono": "workspace:*",
    "@opentelemetry/api": "^1.9.0",
    "@opentelemetry/exporter-trace-otlp-http": "^0.57.0",
    "@opentelemetry/resources": "^1.30.0",
    "@opentelemetry/sdk-trace-base": "^1.30.0",
    "@opentelemetry/semantic-conventions": "^1.28.0",
    "hono": "^4.7.0",
    "postgres": "^3.4.5",
    "prom-client": "^15.1.3"
  },
  "devDependencies": {
    "typescript": "^5.9.3",
    "@types/bun": "latest"
  }
}