managarten/docker/promtail/config.yaml
commit 8e8b6ac65f (Till JS)
fix(mana-auth) + chore: rewrite /api/v1/auth/login JWT mint, remove Matrix stack
This commit bundles two unrelated changes that were swept together by an
accidental `git add -A` in another working session. Documented here so the
history reflects what's actually inside.

═══════════════════════════════════════════════════════════════════════
1. fix(mana-auth): /api/v1/auth/login mints JWT via auth.handler instead
   of api.signInEmail
═══════════════════════════════════════════════════════════════════════

Previous attempt (commit 55cc75e7d) tried to fix the broken JWT mint in
/api/v1/auth/login by switching the cookie name from `mana.session_token`
to `__Secure-mana.session_token` for production. That was necessary but
not sufficient: Better Auth's session cookie value isn't just the raw
session token; it's `<token>.<HMAC>`, where the HMAC is derived from the
better-auth secret. Reconstructing the cookie from auth.api.signInEmail's
JSON response only gave us the raw token, so /api/auth/token's
get-session middleware still couldn't validate it and the JWT mint kept
silently failing.
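
For illustration, the cookie-envelope scheme that bit us looks roughly like
this. This is a sketch of the generic `<token>.<HMAC>` pattern, not Better
Auth's actual implementation; the function names and the sha256/base64url
choices are assumptions:

```typescript
// Sketch of a <token>.<HMAC> cookie envelope (NOT Better Auth internals):
// the cookie value is the raw token plus an HMAC over it, so a cookie
// reconstructed from the raw token alone can never pass validation.
import { createHmac, timingSafeEqual } from "node:crypto";

function signSessionCookie(token: string, secret: string): string {
  const mac = createHmac("sha256", secret).update(token).digest("base64url");
  return `${token}.${mac}`;
}

function verifySessionCookie(value: string, secret: string): string | null {
  const dot = value.lastIndexOf(".");
  if (dot < 0) return null; // a bare token carries no HMAC half at all
  const token = value.slice(0, dot);
  const got = Buffer.from(value.slice(dot + 1));
  const want = Buffer.from(
    createHmac("sha256", secret).update(token).digest("base64url"),
  );
  // constant-time compare; length check first, timingSafeEqual throws otherwise
  if (got.length !== want.length || !timingSafeEqual(got, want)) return null;
  return token;
}
```

A cookie rebuilt from signInEmail's JSON body is just the `token` half, so a
verifier like this rejects it; only the handler's Set-Cookie value carries the
signed second half.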

Real fix: do the sign-in via auth.handler (the HTTP path) rather than
auth.api.signInEmail (the SDK path). The handler returns a real fetch
Response with a Set-Cookie header containing the fully signed cookie
envelope. We capture that header verbatim and forward it as the cookie
on the /api/auth/token request, which now passes validation and mints
the JWT correctly.

Verified end-to-end on auth.mana.how:

  $ curl -X POST https://auth.mana.how/api/v1/auth/login \
      -H 'Content-Type: application/json' \
      -d '{"email":"...","password":"..."}'
  {
    "user": {...},
    "token": "<session token>",
    "accessToken": "eyJhbGciOiJFZERTQSI...",   ← real JWT now
    "refreshToken": "<session token>"
  }

Side benefits:
- The email-not-verified path is now handled by checking
  signInResponse.status === 403 directly; no more catching APIError and
  its comment-noted async-stream footgun.
- X-Forwarded-For is forwarded explicitly so Better Auth's rate limiter
  and our security log see the real client IP.
- The leftover catch block now only handles unexpected exceptions
  (network errors, etc.); the FORBIDDEN-checking logic inside it is dead
  but harmless, left in as defense in depth.
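
Condensed, the new flow looks roughly like this. It's a sketch with the
handler abstracted to a plain `(Request) => Promise<Response>` function so the
cookie-replay logic is visible on its own; the route paths, response field
names, and `loginAndMintJwt` itself are illustrative assumptions, not the
actual mana-auth code:

```typescript
// Sketch of the login flow: sign in via the HTTP handler, capture the
// signed Set-Cookie verbatim, replay it on the token-mint request.
type Handler = (req: Request) => Promise<Response>;

async function loginAndMintJwt(
  handler: Handler,
  base: string,
  email: string,
  password: string,
  clientIp: string,
): Promise<{ status: number; accessToken?: string }> {
  // 1. Sign in through the HTTP path; this Response carries the fully
  //    signed <token>.<HMAC> cookie envelope, unlike the SDK's JSON body.
  const signIn = await handler(
    new Request(`${base}/api/auth/sign-in/email`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "X-Forwarded-For": clientIp, // real client IP for the rate limiter
      },
      body: JSON.stringify({ email, password }),
    }),
  );

  // 2. Email-not-verified is a plain 403 status on this path.
  if (signIn.status === 403) return { status: 403 };
  if (!signIn.ok) return { status: 401 };

  // 3. Forward the Set-Cookie header verbatim as the Cookie header, so
  //    get-session middleware can validate the signed value.
  const cookie = signIn.headers.get("set-cookie") ?? "";
  const tokenRes = await handler(
    new Request(`${base}/api/auth/token`, { headers: { cookie } }),
  );
  const { token } = await tokenRes.json();
  return { status: 200, accessToken: token };
}
```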

═══════════════════════════════════════════════════════════════════════
2. chore: remove the entire self-hosted Matrix stack (Synapse, Element,
   Manalink, mana-matrix-bot)
═══════════════════════════════════════════════════════════════════════

The Matrix subsystem ran parallel to the main Mana product without any
load-bearing integration: the unified web app never imported matrix-js-sdk,
the chat module uses mana-sync (local-first), and mana-matrix-bot's
plugins duplicated features the unified app already ships natively.
Keeping it alive cost a Synapse + Element + matrix-web + bot container
quartet, three Cloudflare routes, an OIDC provider plugin in mana-auth,
and a steady drip of devlog/dependency churn.

Removed:
- apps/matrix (Manalink web + mobile, ~150 files)
- services/mana-matrix-bot (Go bot with ~20 plugins)
- docker/matrix configs (Synapse + Element)
- synapse/element-web/matrix-web/mana-matrix-bot services in
  docker-compose.macmini.yml
- matrix.mana.how/element.mana.how/link.mana.how Cloudflare tunnel routes
- OIDC provider plugin + matrix-synapse trustedClient + matrixUserLinks
  table from mana-auth (oauth_* schema definitions also removed)
- MatrixService import path in mana-media (importFromMatrix endpoint)
- Matrix notification channel in mana-notify (worker, metrics, config,
  channel_type enum, MatrixOptions handler)
- Matrix entries from shared-branding (mana-apps + app-icons),
  notify-client, the i18n bundle, the observatory map, the credits
  app-label list, the landing footer/apps page, the prometheus + alerts
  + promtail tier mappings, and the matrix-related deploy paths in
  cd-macmini.yml + ci.yml

Devlog/manascore/blueprint entries that mention Matrix are left intact
as historical record. The oauth_* + matrix_user_links Postgres tables
stay on existing prod databases — code can no longer write to them; drop
them in a follow-up migration if you want them gone for real.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 16:32:13 +02:00

server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push
    batchwait: 3s
    batchsize: 1048576 # 1 MB

scrape_configs:
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 30s
        filters:
          # Only collect from our compose project
          - name: label
            values: ["com.docker.compose.project"]
    relabel_configs:
      # Extract compose service name → label "service"
      - source_labels: ["__meta_docker_container_label_com_docker_compose_service"]
        target_label: "service"
      # Extract container name → label "container"
      - source_labels: ["__meta_docker_container_name"]
        regex: "/(.*)"
        target_label: "container"
      # Extract compose project → label "project"
      - source_labels: ["__meta_docker_container_label_com_docker_compose_project"]
        target_label: "project"
      # Tier labels based on container name prefix for easy filtering
      # mana-infra-* → tier=infra
      - source_labels: ["container"]
        regex: "mana-infra-.*"
        target_label: "tier"
        replacement: "infra"
      # mana-core-* → tier=core
      - source_labels: ["container"]
        regex: "mana-core-.*"
        target_label: "tier"
        replacement: "core"
      # mana-auth/credits/user/subscriptions/analytics → tier=auth
      - source_labels: ["container"]
        regex: "mana-(auth|credits|user|subscriptions|analytics)"
        target_label: "tier"
        replacement: "auth"
      # mana-app-* → tier=app
      - source_labels: ["container"]
        regex: "mana-app-.*"
        target_label: "tier"
        replacement: "app"
      # mana-mon-* → tier=monitoring
      - source_labels: ["container"]
        regex: "mana-mon-.*"
        target_label: "tier"
        replacement: "monitoring"
      # mana-game-* → tier=games
      - source_labels: ["container"]
        regex: "mana-game-.*"
        target_label: "tier"
        replacement: "games"
      # mana-service-* → tier=service
      - source_labels: ["container"]
        regex: "mana-service-.*"
        target_label: "tier"
        replacement: "service"
      # Default tier for anything unmatched
      - source_labels: ["tier"]
        regex: "^$"
        target_label: "tier"
        replacement: "other"
    pipeline_stages:
      # Drop monitoring container logs to save space (they're noisy)
      - match:
          selector: '{tier="monitoring"}'
          action: drop
      # Try to parse JSON logs (Go services, Hono services)
      - json:
          expressions:
            level: level
            msg: msg
            error: error
            status: status
            method: method
            path: path
            duration: duration
            request_id: requestId
            service_name: service
            ts: ts # extracted so the timestamp stage below can read it
      # Fall back: extract level from common log patterns
      - regex:
          expression: '(?i)(?P<level>error|warn|info|debug|fatal|panic)'
      # Promote extracted level/service_name to labels
      - labels:
          level:
          service_name:
      # Add timestamp from log if available
      - timestamp:
          source: ts
          format: RFC3339Nano
          fallback_formats:
            - "2006-01-02T15:04:05.000Z"
            - "2006-01-02 15:04:05"
          action_on_failure: fudge