managarten/docker/prometheus/prometheus.yml
commit 8e8b6ac65f (Till JS)
fix(mana-auth) + chore: rewrite /api/v1/auth/login JWT mint, remove Matrix stack
This commit bundles two unrelated changes that were swept together by an
accidental `git add -A` in another working session. Documented here so the
history reflects what's actually inside.

═══════════════════════════════════════════════════════════════════════
1. fix(mana-auth): /api/v1/auth/login mints JWT via auth.handler instead
   of api.signInEmail
═══════════════════════════════════════════════════════════════════════

Previous attempt (commit 55cc75e7d) tried to fix the broken JWT mint in
/api/v1/auth/login by switching the cookie name from `mana.session_token`
to `__Secure-mana.session_token` for production. That was necessary but
not sufficient: Better Auth's session cookie value isn't just the raw
session token, it's `<token>.<HMAC>` where the HMAC is derived from the
better-auth secret. Reconstructing the cookie from auth.api.signInEmail's
JSON response only gave us the raw token, so /api/auth/token's
get-session middleware still couldn't validate it and the JWT mint kept
silently failing.
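
For illustration only, the envelope that get-session expects looks roughly
like this. The digest algorithm and encoding are Better Auth internals and
are assumed here (HMAC-SHA256 + base64); the variable names are made up:

  import { createHmac } from "node:crypto";

  const secret = process.env.BETTER_AUTH_SECRET!;              // better-auth secret
  const rawToken = "<session token from signInEmail's JSON>";  // all we had before

  // Assumed shape of the signed cookie value: <token>.<HMAC(secret, token)>
  const signature = createHmac("sha256", secret).update(rawToken).digest("base64");
  const signedCookieValue = `${rawToken}.${signature}`;

  // Sending only rawToken as the cookie value is what kept failing silently;
  // validation needs the full signed value.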

Real fix: do the sign-in via auth.handler (the HTTP path) rather than
auth.api.signInEmail (the SDK path). The handler returns a real fetch
Response with a Set-Cookie header containing the fully signed cookie
envelope. We capture that header verbatim and forward it as the cookie
on the /api/auth/token request, which now passes validation and mints
the JWT correctly.
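
A minimal sketch of the new flow, assuming Better Auth's default base path
(/api/auth) and its fetch-style handler signature; the import path and the
input variables (email, password, clientIp, parsed from the incoming
/api/v1/auth/login request) are illustrative, not the actual mana-auth code:

  import { auth } from "./auth"; // the configured Better Auth instance (path illustrative)

  // Inputs from the incoming /api/v1/auth/login request (illustrative):
  declare const email: string, password: string, clientIp: string;

  // 1. Sign in through the HTTP path so Better Auth produces the real cookie.
  const signInResponse = await auth.handler(
    new Request("http://localhost/api/auth/sign-in/email", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "X-Forwarded-For": clientIp, // real client IP for rate limiter + security log
      },
      body: JSON.stringify({ email, password }),
    }),
  );

  // 2. Capture the fully signed cookie envelope verbatim.
  const sessionCookie = signInResponse.headers.get("set-cookie") ?? "";

  // 3. Forward it on the token request so get-session validates the session
  //    and the JWT is minted.
  const tokenResponse = await auth.handler(
    new Request("http://localhost/api/auth/token", {
      headers: { cookie: sessionCookie },
    }),
  );
  const { token: accessToken } = await tokenResponse.json();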

Verified end-to-end on auth.mana.how:

  $ curl -X POST https://auth.mana.how/api/v1/auth/login \
      -H 'Content-Type: application/json' \
      -d '{"email":"...","password":"..."}'
  {
    "user": {...},
    "token": "<session token>",
    "accessToken": "eyJhbGciOiJFZERTQSI...",   ← real JWT now
    "refreshToken": "<session token>"
  }

Side benefits:
- The email-not-verified path is now handled by checking
  signInResponse.status === 403 directly, instead of catching APIError
  and working around the async-stream footgun the old comment warned
  about (see the sketch after this list).
- X-Forwarded-For is forwarded explicitly so Better Auth's rate limiter
  and our security log see the real client IP.
- The leftover catch block now only handles unexpected exceptions
  (network errors, etc.); the FORBIDDEN-checking logic inside it is dead
  but harmless and left in for defense in depth.
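
Continuing the sketch above, the email-not-verified branch now looks
roughly like this (Hono-style handler context c; the error payload is
illustrative, not the actual mana-auth response shape):

  if (signInResponse.status === 403) {
    // Better Auth signals "email not verified" on sign-in with a 403,
    // so we branch on the status directly instead of catching APIError.
    return c.json({ error: "EMAIL_NOT_VERIFIED" }, 403);
  }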

═══════════════════════════════════════════════════════════════════════
2. chore: remove the entire self-hosted Matrix stack (Synapse, Element,
   Manalink, mana-matrix-bot)
═══════════════════════════════════════════════════════════════════════

The Matrix subsystem ran parallel to the main Mana product without any
load-bearing integration: the unified web app never imported matrix-js-sdk,
the chat module uses mana-sync (local-first), and mana-matrix-bot's
plugins duplicated features the unified app already ships natively.
Keeping it alive cost a Synapse + Element + matrix-web + bot container
quartet, three Cloudflare routes, an OIDC provider plugin in mana-auth,
and a steady drip of devlog/dependency churn.

Removed:
- apps/matrix (Manalink web + mobile, ~150 files)
- services/mana-matrix-bot (Go bot with ~20 plugins)
- docker/matrix configs (Synapse + Element)
- synapse/element-web/matrix-web/mana-matrix-bot services in
  docker-compose.macmini.yml
- matrix.mana.how/element.mana.how/link.mana.how Cloudflare tunnel routes
- OIDC provider plugin + matrix-synapse trustedClient + matrixUserLinks
  table from mana-auth (oauth_* schema definitions also removed)
- MatrixService import path in mana-media (importFromMatrix endpoint)
- Matrix notification channel in mana-notify (worker, metrics, config,
  channel_type enum, MatrixOptions handler)
- Matrix entries from shared-branding (mana-apps + app-icons),
  notify-client, the i18n bundle, the observatory map, the credits
  app-label list, the landing footer/apps page, the prometheus + alerts
  + promtail tier mappings, and the matrix-related deploy paths in
  cd-macmini.yml + ci.yml

Devlog/manascore/blueprint entries that mention Matrix are left intact
as historical record. The oauth_* + matrix_user_links Postgres tables
stay on existing prod databases — code can no longer write to them;
drop them in a follow-up migration if you want them gone for real.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 16:32:13 +02:00

# Mana Prometheus Configuration
# Scrapes metrics from all services
global:
  scrape_interval: 15s
  evaluation_interval: 15s

# Load alerting rules
rule_files:
  - /etc/prometheus/alerts.yml

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets: ['alertmanager:9093']
scrape_configs:
  # Prometheus self-monitoring
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  # Host system metrics via node-exporter
  - job_name: 'node'
    static_configs:
      - targets: ['node-exporter:9100']
    relabel_configs:
      - source_labels: [__address__]
        target_label: instance
        replacement: 'mac-mini'

  # Docker container metrics via cAdvisor
  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']

  # PostgreSQL metrics
  - job_name: 'postgres'
    static_configs:
      - targets: ['postgres-exporter:9187']

  # Redis metrics
  - job_name: 'redis'
    static_configs:
      - targets: ['redis-exporter:9121']
  # ============================================
  # Core Services (Hono/Bun + Go)
  # ============================================

  # Auth Service
  - job_name: 'mana-auth'
    static_configs:
      - targets: ['mana-auth:3001']
    metrics_path: '/metrics'
    scrape_interval: 30s

  # Credits Service
  - job_name: 'mana-credits'
    static_configs:
      - targets: ['mana-credits:3002']
    metrics_path: '/metrics'
    scrape_interval: 30s

  # User Service
  - job_name: 'mana-user'
    static_configs:
      - targets: ['mana-user:3062']
    metrics_path: '/metrics'
    scrape_interval: 30s

  # Subscriptions Service
  - job_name: 'mana-subscriptions'
    static_configs:
      - targets: ['mana-subscriptions:3063']
    metrics_path: '/metrics'
    scrape_interval: 30s

  # Analytics Service
  - job_name: 'mana-analytics'
    static_configs:
      - targets: ['mana-analytics:3064']
    metrics_path: '/metrics'
    scrape_interval: 30s

  # ULoad Server
  - job_name: 'uload-server'
    static_configs:
      - targets: ['mana-app-uload-server:3070']
    metrics_path: '/metrics'
    scrape_interval: 30s

  # Memoro Server
  - job_name: 'memoro-server'
    static_configs:
      - targets: ['mana-app-memoro-server:3015']
    metrics_path: '/metrics'
    scrape_interval: 30s

  # NOTE: Individual app backends (chat, todo, calendar, contacts, storage,
  # nutriphi, music, planta, picture) have been REMOVED — all migrated to
  # local-first architecture. Only uload-server and memoro-server remain.
  # Mana LLM Gateway (Ollama + Google Fallback)
  - job_name: 'mana-llm'
    static_configs:
      - targets: ['mana-llm:3020']
    metrics_path: '/metrics'
    scrape_interval: 15s

  # Mana Search Service
  - job_name: 'mana-search'
    static_configs:
      - targets: ['mana-search:3012']
    metrics_path: '/metrics'
    scrape_interval: 30s

  # Mana Media Service
  - job_name: 'mana-media'
    static_configs:
      - targets: ['mana-media:3011']
    metrics_path: '/metrics'
    scrape_interval: 30s
  # ============================================
  # GPU Server (Windows PC, LAN: 192.168.178.11)
  # ============================================

  # GPU: LLM Gateway
  - job_name: 'gpu-llm'
    static_configs:
      - targets: ['192.168.178.11:3025']
        labels:
          instance: 'gpu-server'
    metrics_path: '/metrics'
    scrape_interval: 15s

  # GPU: Speech-to-Text (WhisperX)
  - job_name: 'gpu-stt'
    static_configs:
      - targets: ['192.168.178.11:3020']
        labels:
          instance: 'gpu-server'
    metrics_path: '/health'
    scrape_interval: 30s

  # GPU: Text-to-Speech
  - job_name: 'gpu-tts'
    static_configs:
      - targets: ['192.168.178.11:3022']
        labels:
          instance: 'gpu-server'
    metrics_path: '/health'
    scrape_interval: 30s

  # GPU: Image Generation (FLUX.2)
  - job_name: 'gpu-image-gen'
    static_configs:
      - targets: ['192.168.178.11:3023']
        labels:
          instance: 'gpu-server'
    metrics_path: '/health'
    scrape_interval: 30s

  # GPU: Video Generation (LTX-Video)
  - job_name: 'gpu-video-gen'
    static_configs:
      - targets: ['192.168.178.11:3026']
        labels:
          instance: 'gpu-server'
    metrics_path: '/health'
    scrape_interval: 30s
  # ============================================
  # Go Infrastructure Services
  # ============================================

  # API Gateway (Go)
  - job_name: 'mana-api-gateway'
    static_configs:
      - targets: ['mana-api-gateway:3016']
    metrics_path: '/metrics'
    scrape_interval: 15s

  # Sync Server (Go) — local-first data sync
  - job_name: 'mana-sync'
    static_configs:
      - targets: ['mana-core-sync:3051']
    metrics_path: '/metrics'
    scrape_interval: 30s

  # Notification Service (Go) — email, push, webhook
  - job_name: 'mana-notify'
    static_configs:
      - targets: ['mana-core-notify:3013']
    metrics_path: '/metrics'
    scrape_interval: 30s

  # Crawler Service (Go)
  - job_name: 'mana-crawler'
    static_configs:
      - targets: ['mana-crawler:3014']
    metrics_path: '/metrics'
    scrape_interval: 30s
  # ============================================
  # Blackbox Exporter — HTTP Uptime Probes
  # ============================================

  # Web Apps (Unified Mana app at mana.how + standalone games)
  - job_name: 'blackbox-web'
    metrics_path: /probe
    params:
      module: [http_2xx]
    static_configs:
      - targets:
          # Unified Mana app (all modules as routes)
          - https://mana.how
          - https://mana.how/chat
          - https://mana.how/todo
          - https://mana.how/calendar
          - https://mana.how/contacts
          - https://mana.how/times
          - https://mana.how/photos
          - https://mana.how/picture
          - https://mana.how/storage
          - https://mana.how/presi
          - https://mana.how/nutriphi
          - https://mana.how/planta
          - https://mana.how/calc
          - https://mana.how/zitare
          - https://mana.how/cards
          - https://mana.how/skilltree
          - https://mana.how/music
          - https://mana.how/citycorners
          - https://mana.how/memoro
          - https://mana.how/moodlit
          - https://mana.how/context
          - https://mana.how/questions
          - https://mana.how/uload
          - https://mana.how/notes
          - https://mana.how/habits
          - https://mana.how/guides
          - https://mana.how/inventar
          - https://mana.how/playground
          # Standalone games (separate containers)
          - https://whopxl.mana.how
          - https://arcade.mana.how
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: blackbox-exporter:9115
  # API Health Endpoints (only services with running containers)
  - job_name: 'blackbox-api'
    metrics_path: /probe
    params:
      module: [http_health]
    static_configs:
      - targets:
          - https://auth.mana.how/health
          - https://api.mana.how/health
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: blackbox-exporter:9115
  # Infrastructure & Monitoring Tools
  - job_name: 'blackbox-infra'
    metrics_path: /probe
    params:
      module: [http_2xx]
    static_configs:
      - targets:
          - https://git.mana.how
          - https://grafana.mana.how
          - https://stats.mana.how
          - https://glitchtip.mana.how
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: blackbox-exporter:9115
  # GPU Server Services — probe /health, not /
  # The GPU services (whisper STT, TTS, FLUX image gen) only return 2xx
  # on /health; their root path returns 401/403/404 by design (auth or
  # API-only). Ollama is the exception — its / returns 200, but it has
  # no /health endpoint, so we keep it on / via a separate target.
  - job_name: 'blackbox-gpu'
    metrics_path: /probe
    params:
      module: [http_health]
    static_configs:
      - targets:
          - https://gpu-stt.mana.how/health
          - https://gpu-tts.mana.how/health
          - https://gpu-img.mana.how/health
          - https://gpu-video.mana.how/health
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: blackbox-exporter:9115

  - job_name: 'blackbox-gpu-root'
    metrics_path: /probe
    params:
      module: [http_2xx]
    static_configs:
      - targets:
          - https://gpu-ollama.mana.how
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: blackbox-exporter:9115
  # ============================================
  # Pushgateway (deploy metrics, batch jobs)
  # ============================================
  - job_name: 'pushgateway'
    honor_labels: true
    static_configs:
      - targets: ['pushgateway:9091']