mirror of
https://github.com/Memo-2023/mana-monorepo.git
synced 2026-05-14 21:21:10 +02:00
This commit bundles two unrelated changes that were swept together by an
accidental `git add -A` in another working session. Documented here so the
history reflects what's actually inside.
═══════════════════════════════════════════════════════════════════════
1. fix(mana-auth): /api/v1/auth/login mints JWT via auth.handler instead
of api.signInEmail
═══════════════════════════════════════════════════════════════════════
Previous attempt (commit 55cc75e7d) tried to fix the broken JWT mint in
/api/v1/auth/login by switching the cookie name from `mana.session_token`
to `__Secure-mana.session_token` for production. That was necessary but
not sufficient: Better Auth's session cookie value isn't just the raw
session token; it's `<token>.<HMAC>`, where the HMAC is derived from the
better-auth secret. Reconstructing the cookie from auth.api.signInEmail's
JSON response only gave us the raw token, so /api/auth/token's
get-session middleware still couldn't validate it and the JWT mint kept
silently failing.
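For intuition, the envelope behaviour can be sketched like this. The exact
algorithm and encoding are assumptions (the real Better Auth internals may
differ in detail); the point is only that a bare token can never pass an
HMAC check:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch of the cookie envelope. Assumed scheme: HMAC-SHA256 of the raw
// session token, keyed by the better-auth secret, base64url-encoded.
// The cookie value is <token>.<HMAC>, never the raw token alone.
function signSessionCookie(token: string, secret: string): string {
  const hmac = createHmac("sha256", secret).update(token).digest("base64url");
  return `${token}.${hmac}`;
}

function verifySessionCookie(cookieValue: string, secret: string): boolean {
  const dot = cookieValue.lastIndexOf(".");
  if (dot === -1) return false; // raw token: no HMAC segment, always rejected
  const token = cookieValue.slice(0, dot);
  const given = cookieValue.slice(dot + 1);
  const expected = createHmac("sha256", secret).update(token).digest("base64url");
  if (given.length !== expected.length) return false;
  return timingSafeEqual(Buffer.from(given), Buffer.from(expected));
}
```

This is exactly why rebuilding the cookie from signInEmail's JSON response
(which only contains the raw token) could never validate.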
Real fix: do the sign-in via auth.handler (the HTTP path) rather than
auth.api.signInEmail (the SDK path). The handler returns a real fetch
Response with a Set-Cookie header containing the fully signed cookie
envelope. We capture that header verbatim and forward it as the cookie
on the /api/auth/token request, which now passes validation and mints
the JWT correctly.
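The shape of the fix, sketched with a stubbed handler. The
/api/auth/sign-in/email route matches Better Auth's defaults, but treat the
route, the helper name, and the stub as illustrative assumptions, not the
literal code in this commit:

```typescript
// Sketch: sign in through the HTTP handler so we get a real fetch Response
// carrying the fully signed Set-Cookie, then capture that cookie verbatim.
// `authHandler` stands in for Better Auth's auth.handler.
type Handler = (req: Request) => Promise<Response>;

async function loginAndCaptureCookie(
  authHandler: Handler,
  email: string,
  password: string,
  clientIp: string,
): Promise<{ signIn: Response; cookie: string | null }> {
  const signIn = await authHandler(
    new Request("http://internal/api/auth/sign-in/email", {
      method: "POST",
      headers: {
        "content-type": "application/json",
        // forward the real client IP for rate limiting / security logging
        "x-forwarded-for": clientIp,
      },
      body: JSON.stringify({ email, password }),
    }),
  );
  // Capture the signed cookie envelope verbatim; do NOT rebuild it from
  // the JSON body, which only contains the raw session token.
  const setCookie = signIn.headers.get("set-cookie");
  const cookie = setCookie ? setCookie.split(";")[0] : null;
  return { signIn, cookie };
}
```

The captured `cookie` is then sent as the Cookie header on the internal
/api/auth/token request, which is what finally passes get-session
validation.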
Verified end-to-end on auth.mana.how:

  $ curl -X POST https://auth.mana.how/api/v1/auth/login \
      -d '{"email":"...","password":"..."}'
  {
    "user": {...},
    "token": "<session token>",
    "accessToken": "eyJhbGciOiJFZERTQSI...",   ← real JWT now
    "refreshToken": "<session token>"
  }
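A quick sanity check that accessToken is now a real JWT rather than another
opaque session token (illustrative helper, not part of the codebase):

```typescript
// Sketch: a real JWT has three base64url segments with a JSON header;
// an opaque session token has neither. Returns the JWT's alg, or null.
function jwtAlg(token: string): string | null {
  const parts = token.split(".");
  if (parts.length !== 3) return null; // opaque session tokens fail here
  try {
    const header = JSON.parse(Buffer.from(parts[0], "base64url").toString("utf8"));
    return typeof header.alg === "string" ? header.alg : null;
  } catch {
    return null; // header segment isn't valid JSON
  }
}
```

Before this fix, accessToken failed this check exactly like refreshToken
does; now it decodes with "alg": "EdDSA".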
Side benefits:
- Email-not-verified path is now handled by checking
signInResponse.status === 403 directly, no more catching APIError
with the comment-noted async-stream footgun.
- X-Forwarded-For is forwarded explicitly so Better Auth's rate limiter
and our security log see the real client IP.
- The leftover catch block now only handles unexpected exceptions
  (network errors, etc.); the FORBIDDEN-checking logic inside it is dead
  but harmless, left in for defense in depth.
═══════════════════════════════════════════════════════════════════════
2. chore: remove the entire self-hosted Matrix stack (Synapse, Element,
Manalink, mana-matrix-bot)
═══════════════════════════════════════════════════════════════════════
The Matrix subsystem ran parallel to the main Mana product without any
load-bearing integration: the unified web app never imported matrix-js-sdk,
the chat module uses mana-sync (local-first), and mana-matrix-bot's
plugins duplicated features the unified app already ships natively.
Keeping it alive cost a Synapse + Element + matrix-web + bot container
quartet, three Cloudflare routes, an OIDC provider plugin in mana-auth,
and a steady drip of devlog/dependency churn.
Removed:
- apps/matrix (Manalink web + mobile, ~150 files)
- services/mana-matrix-bot (Go bot with ~20 plugins)
- docker/matrix configs (Synapse + Element)
- synapse/element-web/matrix-web/mana-matrix-bot services in
docker-compose.macmini.yml
- matrix.mana.how/element.mana.how/link.mana.how Cloudflare tunnel routes
- OIDC provider plugin + matrix-synapse trustedClient + matrixUserLinks
table from mana-auth (oauth_* schema definitions also removed)
- MatrixService import path in mana-media (importFromMatrix endpoint)
- Matrix notification channel in mana-notify (worker, metrics, config,
channel_type enum, MatrixOptions handler)
- Matrix entries from shared-branding (mana-apps + app-icons),
notify-client, the i18n bundle, the observatory map, the credits
app-label list, the landing footer/apps page, the prometheus + alerts
+ promtail tier mappings, and the matrix-related deploy paths in
cd-macmini.yml + ci.yml
Devlog/manascore/blueprint entries that mention Matrix are left intact
as historical record. The oauth_* + matrix_user_links Postgres tables
stay on existing prod databases; code can no longer write to them. Drop
them in a follow-up migration if you want them gone for real.
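That follow-up migration is only a couple of drops. A hedged sketch: the
oauth_* table names are whatever mana-auth's schema actually defined, so
enumerate them against the live database before running anything:

```sql
-- Follow-up migration sketch. Only run after confirming no code path
-- still reads these tables.
BEGIN;
DROP TABLE IF EXISTS matrix_user_links;
-- DROP TABLE IF EXISTS oauth_...;  -- enumerate the real oauth_* tables here
COMMIT;
```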
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
81 lines · 2.4 KiB · YAML
tunnel: bb0ea86d-8253-4a54-838b-107bb7945be9
credentials-file: /Users/mana/.cloudflared/bb0ea86d-8253-4a54-838b-107bb7945be9.json

ingress:
  # SSH Access (requires cloudflared on client)
  - hostname: ssh.mana.how
    service: ssh://localhost:22

  # Mana Dashboard (Main)
  - hostname: mana.how
    service: http://localhost:5000

  # Auth Service
  - hostname: auth.mana.how
    service: http://localhost:3001

  # API Gateway (Go)
  - hostname: api.mana.how
    service: http://localhost:3016

  # Forgejo (Git + CI/CD)
  - hostname: git.mana.how
    service: http://localhost:3041

  # NOTE: Individual app backends (chat, todo, calendar, contacts, storage,
  # nutriphi, music, planta, picture, etc.) have been REMOVED — all migrated
  # to local-first architecture. Web apps run as routes under mana.how.
  # Only uload-server and memoro-server remain as app-specific backends.

  # Games
  - hostname: whopxl.mana.how
    service: http://localhost:5100
  - hostname: arcade.mana.how
    service: http://localhost:5210

  # Public Status Page (generated every 60s by mana-status-gen container)
  - hostname: status.mana.how
    service: http://localhost:4400

  # Monitoring & Tools
  - hostname: grafana.mana.how
    service: http://localhost:8000
  - hostname: stats.mana.how
    service: http://localhost:8010

  # GlitchTip Error Tracking
  - hostname: glitchtip.mana.how
    service: http://localhost:8020

  # Self-Hosted Landing Pages (via Nginx on port 4400)
  - hostname: it.mana.how
    service: http://localhost:4400
  - hostname: chats.mana.how
    service: http://localhost:4400
  - hostname: pics.mana.how
    service: http://localhost:4400
  - hostname: zitares.mana.how
    service: http://localhost:4400
  - hostname: presis.mana.how
    service: http://localhost:4400
  - hostname: clocks.mana.how
    service: http://localhost:4400
  - hostname: docs.mana.how
    service: http://localhost:4400

  # GPU Server (Windows PC, LAN: 192.168.178.11)
  - hostname: gpu-llm.mana.how
    service: http://192.168.178.11:3025
  - hostname: gpu-stt.mana.how
    service: http://192.168.178.11:3020
  - hostname: gpu-tts.mana.how
    service: http://192.168.178.11:3022
  - hostname: gpu-img.mana.how
    service: http://192.168.178.11:3023
  - hostname: gpu-video.mana.how
    service: http://192.168.178.11:3026
  - hostname: gpu-ollama.mana.how
    service: http://192.168.178.11:11434

  # Catch-all
  - service: http_status:404
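After editing the ingress list (e.g. removing the matrix.mana.how routes),
cloudflared can check the file itself; both subcommands below are stock
cloudflared, and the hostnames are the ones from this config:

```shell
# Validate the ingress rules in the default config
# (~/.cloudflared/config.yml on this machine).
cloudflared tunnel ingress validate

# Show which ingress rule a given URL would match.
cloudflared tunnel ingress rule https://auth.mana.how
cloudflared tunnel ingress rule https://matrix.mana.how   # now falls through to the http_status:404 catch-all
```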