chore(cutover): remove services/mana-auth/ — moved to mana-platform

Live containers on the Mac Mini build out of `../mana/services/mana-auth/`
since the 8-Doppel-Cutover commit (774852ba2). Smoke test green
2026-05-08: health endpoints, JWKS, login flow, and Stripe webhook are all
reachable from the new build path. Removing the now-stale duplicate.

This removes 560K from the repo. Active code lives in
`Code/mana/services/mana-auth/` (see ../mana/CLAUDE.md).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Till JS 2026-05-08 18:53:56 +02:00
parent af3f21a179
commit 0a30b91200
67 changed files with 0 additions and 10640 deletions


@ -1,133 +0,0 @@
# mana-auth
Central authentication service for the Mana ecosystem. Hono + Bun + Better Auth.
## Tech Stack
| Layer | Technology |
|-------|------------|
| **Runtime** | Bun |
| **Framework** | Hono |
| **Auth** | Better Auth (native Hono handler) |
| **Database** | PostgreSQL + Drizzle ORM |
| **JWT** | EdDSA via Better Auth JWT plugin |
| **Email** | Nodemailer → self-hosted Stalwart SMTP (`docs/MAIL_SERVER.md`) |
## Port: 3001
## Better Auth Plugins
1. **Organization** — B2B multi-tenant with RBAC
2. **JWT** — EdDSA tokens with minimal claims (sub, email, role, sid)
3. **Two-Factor** — TOTP with backup codes
4. **Magic Link** — Passwordless email login
## Key Endpoints
### Better Auth Native (`/api/auth/*`)
Handled directly by Better Auth — includes sign-in, sign-up, session, 2FA, magic links, org management.
### Custom Auth (`/api/v1/auth/*`)
| Method | Path | Description |
|--------|------|-------------|
| POST | `/register` | Register + init credits |
| POST | `/login` | Login (returns JWT + sets SSO cookie) |
| POST | `/logout` | Logout |
| POST | `/validate` | Validate JWT token |
| GET | `/session` | Get current session |
### Me — GDPR Self-Service (`/api/v1/me/*`)
| Method | Path | Description |
|--------|------|-------------|
| GET | `/data` | Full user data summary (auth, credits, project entities) |
| GET | `/data/export` | Download all data as JSON file |
| DELETE | `/data` | Delete all user data across all services (right to be forgotten) |
Aggregates data from 3 sources: auth DB (sessions, accounts, 2FA, passkeys), mana-credits (balance, transactions), mana-sync DB (entity counts per app).
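The shape of that aggregation can be sketched as below; the three sources are stubbed, and the field names are illustrative rather than the actual service code:

```typescript
// Hedged sketch: the real service reads the auth DB, mana-credits, and the
// mana-sync DB. Here each source is a caller-supplied async stub.
type DataSummary = { auth: unknown; credits: unknown; entities: unknown };

async function buildDataSummary(
  fetchAuth: () => Promise<unknown>,
  fetchCredits: () => Promise<unknown>,
  fetchEntityCounts: () => Promise<unknown>,
): Promise<DataSummary> {
  // The three reads are independent, so they can run concurrently.
  const [auth, credits, entities] = await Promise.all([
    fetchAuth(),
    fetchCredits(),
    fetchEntityCounts(),
  ]);
  return { auth, credits, entities };
}
```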
### Encryption Vault (`/api/v1/me/encryption-vault/*`)
Per-user master-key custody for the Mana data-layer encryption. The browser fetches its master key here on first login and re-fetches on each session start. The key itself never lives in the database — it's wrapped with the service-wide KEK (loaded from `MANA_AUTH_KEK`).
| Method | Path | Description |
|--------|------|-------------|
| GET | `/status` | Cheap metadata read: `{ vaultExists, hasRecoveryWrap, zeroKnowledge, recoverySetAt }`. No decryption, no audit row. Used by the settings page on mount. |
| POST | `/init` | Idempotent vault initialisation. Mints + KEK-wraps a fresh master key on first call, returns the existing one on subsequent calls. |
| GET | `/key` | Hot path. Returns either `{ masterKey, formatVersion, kekId }` (standard mode) or `{ requiresRecoveryCode: true, recoveryWrappedMk, recoveryIv }` (zero-knowledge mode). |
| POST | `/rotate` | Mints a fresh master key. Old MK is gone — caller must re-encrypt or accept loss. **Forbidden in zero-knowledge mode** (`409 ZK_ROTATE_FORBIDDEN`). |
| POST | `/recovery-wrap` | Stores a client-built recovery wrap: `{ recoveryWrappedMk, recoveryIv }`. The recovery secret itself NEVER touches the wire. Idempotent — replaces existing wrap. |
| DELETE | `/recovery-wrap` | Removes the recovery wrap. **Forbidden in zero-knowledge mode** (`409 ZK_ACTIVE`) — would lock the user out. |
| POST | `/zero-knowledge` | Toggles ZK mode. `{ enable: true }` requires a recovery wrap to be set first (else `400 RECOVERY_WRAP_MISSING`). `{ enable: false, masterKey: base64 }` requires the freshly-unwrapped MK from the client so the server can KEK-re-wrap it. |
All routes write to `auth.encryption_vault_audit` for security investigations. Three database CHECK constraints enforce vault consistency at the schema level (`encryption_vaults_has_wrap`, `encryption_vaults_wrap_iv_pair`, `encryption_vaults_zk_consistency`) so a code-level bug can't accidentally lock a user out.
Schema lives in `src/db/schema/encryption-vaults.ts`, service in `src/services/encryption-vault/`. Migration files: `sql/002_encryption_vaults.sql` (Phase 2: tables + RLS) and `sql/003_recovery_wrap.sql` (Phase 9: recovery columns + ZK constraints).
For the full architectural deep-dive, threat model, and rollout history (Phases 1–9 + backlog sweep), see `apps/mana/apps/web/src/lib/data/DATA_LAYER_AUDIT.md`. User-facing docs at `apps/docs/src/content/docs/architecture/security.mdx`.
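The client-side half of the recovery-wrap flow can be sketched as follows. This is illustrative only: the KDF, its info string, and the exact blob layout are assumptions, not the actual client code.

```typescript
import { createCipheriv, hkdfSync, randomBytes } from "node:crypto";

// 1. Client-only recovery secret; never sent to the server.
const recoverySecret = randomBytes(32);

// 2. Derive an AES-256 wrap key from the secret (HKDF-SHA256 is an
//    assumption here; the real client may use different parameters).
const wrapKey = Buffer.from(
  hkdfSync("sha256", recoverySecret, Buffer.alloc(0), "recovery-wrap-v1", 32),
);

// 3. Wrap the master key locally with AES-256-GCM; ciphertext and auth
//    tag are concatenated into one blob.
const masterKey = randomBytes(32);
const recoveryIv = randomBytes(12);
const cipher = createCipheriv("aes-256-gcm", wrapKey, recoveryIv);
const recoveryWrappedMk = Buffer.concat([
  cipher.update(masterKey),
  cipher.final(),
  cipher.getAuthTag(),
]);

// Only the wrapped blob and IV ever go over the wire:
const body = {
  recoveryWrappedMk: recoveryWrappedMk.toString("base64"),
  recoveryIv: recoveryIv.toString("base64"),
};
```

The recovery secret itself stays client-side, matching the `/recovery-wrap` contract above.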
### Admin (`/api/v1/admin/*`)
| Method | Path | Description |
|--------|------|-------------|
| GET | `/users` | Paginated user list with search (`?page=1&limit=20&search=`) |
| GET | `/users/:id/data` | Aggregated user data summary (same as /me/data) |
| DELETE | `/users/:id/data` | Delete all user data (admin) |
| GET | `/users/:id/tier` | Get user's access tier |
| PUT | `/users/:id/tier` | Update user's access tier |
### Internal (`/api/v1/internal/*`)
| Method | Path | Description |
|--------|------|-------------|
| GET | `/org/:orgId/member/:userId` | Check membership (for mana-credits) |
## Local Dev Login
There is **no built-in admin seed** and **no auth-bypass env var**, and
the local stack runs with `requireEmailVerification: true` without a real
SMTP server. Use the convenience script instead of hand-crafting SQL:
```bash
pnpm setup:dev-user # 3 founder accounts
./scripts/dev/setup-dev-user.sh foo@x.de pass # single account
```
Defaults to `tills95@gmail.com` / `tilljkb@gmail.com` / `rajiehq@gmail.com`,
all with password `Aa-123456789` and `access_tier = founder`. The script
calls `POST /api/v1/auth/register` (so Better Auth handles hashing),
then runs an idempotent SQL `UPDATE auth.users SET email_verified = true,
access_tier = 'founder'`. Full docs in `docs/LOCAL_DEVELOPMENT.md`.
## Cross-Domain SSO
Session cookies shared across `*.mana.how` via `COOKIE_DOMAIN=.mana.how`.
## Environment Variables
```env
PORT=3001
DATABASE_URL=postgresql://...
SYNC_DATABASE_URL=postgresql://.../mana_sync # mana-sync DB for entity counts (GDPR data view)
BASE_URL=https://auth.mana.how
COOKIE_DOMAIN=.mana.how
NODE_ENV=production
MANA_SERVICE_KEY=...
MANA_CREDITS_URL=http://mana-credits:3061
MANA_SUBSCRIPTIONS_URL=http://mana-subscriptions:3063
SMTP_HOST=stalwart # self-hosted on Mac Mini, see docs/MAIL_SERVER.md
SMTP_PORT=587
SMTP_USER=...
SMTP_PASS=...
# Encryption Vault — REQUIRED IN PRODUCTION
# Base64-encoded 32-byte AES-256 key. Generate with `openssl rand -base64 32`.
# The dev fallback is 32 zero bytes (prints a loud warning at startup).
# This key wraps every user's master key in auth.encryption_vaults — guard
# it like a database password. Provision via Docker secret / KMS / Vault.
MANA_AUTH_KEK=
```
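A minimal sketch of the wrap mechanism this key backs, assuming AES-256-GCM and base64 blobs (the service's actual wire format is internal and may differ):

```typescript
// Illustrative only: a service-wide KEK wrapping a per-user master key (MK).
// In production the KEK would come from process.env.MANA_AUTH_KEK.
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

const kek = randomBytes(32); // stand-in for the decoded MANA_AUTH_KEK value

function wrapMasterKey(mk: Buffer): { wrappedMk: string; wrapIv: string } {
  const iv = randomBytes(12);
  const c = createCipheriv("aes-256-gcm", kek, iv);
  // Ciphertext and GCM auth tag concatenated into one blob.
  const ct = Buffer.concat([c.update(mk), c.final(), c.getAuthTag()]);
  return { wrappedMk: ct.toString("base64"), wrapIv: iv.toString("base64") };
}

function unwrapMasterKey(wrappedMk: string, wrapIv: string): Buffer {
  const ct = Buffer.from(wrappedMk, "base64");
  const d = createDecipheriv("aes-256-gcm", kek, Buffer.from(wrapIv, "base64"));
  d.setAuthTag(ct.subarray(ct.length - 16)); // last 16 bytes = auth tag
  return Buffer.concat([d.update(ct.subarray(0, ct.length - 16)), d.final()]);
}

const mk = randomBytes(32);
const { wrappedMk, wrapIv } = wrapMasterKey(mk);
const roundtrip = unwrapMasterKey(wrappedMk, wrapIv);
console.log(roundtrip.equals(mk)); // true
```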
## Critical Rules
- **ALWAYS use Better Auth** — no custom auth implementation
- **EdDSA algorithm only** for JWT (Better Auth manages JWKS)
- **Minimal JWT claims** — sub, email, role, sid only
- **jose library** for JWT validation (NOT jsonwebtoken)
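To illustrate the minimal-claims rule, here is a small guard that rejects payloads carrying anything beyond the allowed claim set. Real verification still goes through jose's `jwtVerify` against the JWKS; this sketch covers only the claims policy, and the claim allowlist beyond sub/email/role/sid is an assumption.

```typescript
// Registered claims (iat/exp/iss/aud) are assumed acceptable alongside the
// four minimal claims named above.
const ALLOWED_CLAIMS = new Set(["sub", "email", "role", "sid", "iat", "exp", "iss", "aud"]);

function assertMinimalClaims(payload: Record<string, unknown>): void {
  const extra = Object.keys(payload).filter((k) => !ALLOWED_CLAIMS.has(k));
  if (extra.length > 0) {
    throw new Error(`JWT carries non-minimal claims: ${extra.join(", ")}`);
  }
}

// Decode the payload segment of a compact JWT without verifying it;
// enough to inspect claims, never a substitute for signature checks.
function decodePayload(jwt: string): Record<string, unknown> {
  const segment = jwt.split(".")[1];
  return JSON.parse(Buffer.from(segment, "base64url").toString("utf8"));
}

// Hypothetical token carrying exactly the four minimal claims:
const payload = { sub: "user_123", email: "a@b.c", role: "user", sid: "sess_1" };
const token = ["e30", Buffer.from(JSON.stringify(payload)).toString("base64url"), "sig"].join(".");
assertMinimalClaims(decodePayload(token)); // passes
```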


@ -1,47 +0,0 @@
# Install stage: use node + pnpm to resolve workspace dependencies.
# Build context must be the monorepo root (see docker-compose.macmini.yml).
FROM node:22-alpine AS installer
RUN corepack enable && corepack prepare pnpm@9.15.0 --activate
WORKDIR /app
# Copy workspace structure
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
COPY services/mana-auth/package.json ./services/mana-auth/
COPY packages/shared-hono ./packages/shared-hono
COPY packages/shared-ai ./packages/shared-ai
COPY packages/shared-logger ./packages/shared-logger
COPY packages/shared-types ./packages/shared-types
COPY packages/shared-error-tracking ./packages/shared-error-tracking
# Root package.json devDeps reference @mana/eslint-config; pnpm filter
# install still resolves the root, so the package needs to be present
# even though mana-auth itself doesn't import from it.
COPY packages/eslint-config ./packages/eslint-config
# Install only mana-auth and its workspace deps
RUN pnpm install --filter @mana/auth... --no-frozen-lockfile --ignore-scripts
# Runtime stage: bun
FROM oven/bun:1 AS production
WORKDIR /app
# Copy installed deps from installer stage
COPY --from=installer /app/node_modules ./node_modules
COPY --from=installer /app/services/mana-auth/node_modules ./services/mana-auth/node_modules
COPY --from=installer /app/packages ./packages
# Copy source
COPY services/mana-auth/package.json ./services/mana-auth/
COPY services/mana-auth/src ./services/mana-auth/src
COPY services/mana-auth/tsconfig.json services/mana-auth/drizzle.config.ts ./services/mana-auth/
WORKDIR /app/services/mana-auth
EXPOSE 3001
HEALTHCHECK --interval=30s --timeout=10s --start-period=15s --retries=3 \
CMD bun -e "fetch('http://localhost:3001/health').then(r => process.exit(r.ok ? 0 : 1)).catch(() => process.exit(1))"
CMD ["bun", "run", "src/index.ts"]


@ -1,12 +0,0 @@
import { defineConfig } from 'drizzle-kit';
export default defineConfig({
  schema: './src/db/schema/*.ts',
  out: './drizzle',
  dialect: 'postgresql',
  dbCredentials: {
    url:
      process.env.DATABASE_URL || 'postgresql://mana:devpassword@localhost:5432/mana_platform',
  },
  schemaFilter: ['auth'],
});


@ -1,34 +0,0 @@
{
  "name": "@mana/auth",
  "version": "0.1.0",
  "private": true,
  "type": "module",
  "scripts": {
    "dev": "bun run --hot src/index.ts",
    "start": "bun run src/index.ts",
    "test": "bun test",
    "db:push": "drizzle-kit push",
    "db:generate": "drizzle-kit generate",
    "db:studio": "drizzle-kit studio"
  },
  "dependencies": {
    "@better-auth/passkey": "^1.6.8",
    "@mana/shared-ai": "workspace:*",
    "@mana/shared-error-tracking": "workspace:*",
    "@mana/shared-hono": "workspace:*",
    "@mana/shared-types": "workspace:*",
    "bcryptjs": "^3.0.2",
    "better-auth": "^1.4.3",
    "drizzle-orm": "^0.38.3",
    "hono": "^4.7.0",
    "jose": "^6.1.2",
    "nanoid": "^5.0.0",
    "postgres": "^3.4.5",
    "zod": "^3.24.0"
  },
  "devDependencies": {
    "@types/bcryptjs": "^2.4.6",
    "drizzle-kit": "^0.30.4",
    "typescript": "^5.9.3"
  }
}


@ -1,22 +0,0 @@
-- Migration: Add access_tier to users table
-- Run this on production before deploying the new mana-auth version.
-- After this migration, run `drizzle-kit push` or redeploy mana-auth.
--
-- Alternatively, just run `pnpm db:push` from services/mana-auth/ which
-- will apply the schema change automatically via Drizzle Kit.
-- Step 1: Create the enum type (if not exists)
DO $$
BEGIN
  IF NOT EXISTS (SELECT 1 FROM pg_type WHERE typname = 'access_tier') THEN
    CREATE TYPE public.access_tier AS ENUM ('guest', 'public', 'beta', 'alpha', 'founder');
  END IF;
END
$$;
-- Step 2: Add the column with default 'public'
ALTER TABLE auth.users
  ADD COLUMN IF NOT EXISTS access_tier public.access_tier NOT NULL DEFAULT 'public';
-- Step 3: Set yourself (founder) — replace with your actual email
-- UPDATE auth.users SET access_tier = 'founder' WHERE email = 'your@email.com';
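The enum order above implies a ranking (guest &lt; public &lt; beta &lt; alpha &lt; founder). A hypothetical gating helper built on that assumption, not taken from the service code:

```typescript
// Tier order copied from the CREATE TYPE statement above; higher index
// means broader access.
const TIERS = ["guest", "public", "beta", "alpha", "founder"] as const;
type Tier = (typeof TIERS)[number];

function hasTier(user: Tier, required: Tier): boolean {
  return TIERS.indexOf(user) >= TIERS.indexOf(required);
}
```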


@ -1,78 +0,0 @@
-- Migration: encryption_vaults + encryption_vault_audit
--
-- Adds the per-user encryption vault that holds each user's master key
-- (MK) wrapped with a service-wide Key Encryption Key (KEK). The KEK
-- itself never lives in the database — it is loaded from the
-- MANA_AUTH_KEK env var (later: a KMS / Vault).
--
-- Run this BEFORE deploying the encryption Phase 2 mana-auth release.
-- After this migration, run `pnpm db:push` from services/mana-auth/
-- to materialize the Drizzle-defined columns (or just deploy the new
-- service — Drizzle creates the tables on boot).
--
-- The Drizzle schema definition lives in
-- src/db/schema/encryption-vaults.ts. This SQL file only adds the
-- bits Drizzle cannot model: row-level security policies + the FORCE
-- option that makes the policies apply even to the table owner.
-- ─── Tables ───────────────────────────────────────────────────
-- Table CREATE statements are intentionally idempotent so this file
-- can be re-run on a partially-migrated database without crashing.
CREATE TABLE IF NOT EXISTS auth.encryption_vaults (
  user_id TEXT PRIMARY KEY REFERENCES auth.users(id) ON DELETE CASCADE,
  wrapped_mk TEXT NOT NULL,
  wrap_iv TEXT NOT NULL,
  format_version SMALLINT NOT NULL DEFAULT 1,
  kek_id TEXT NOT NULL DEFAULT 'env-v1',
  created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  rotated_at TIMESTAMPTZ
);
CREATE INDEX IF NOT EXISTS encryption_vaults_user_id_idx
  ON auth.encryption_vaults (user_id);
CREATE TABLE IF NOT EXISTS auth.encryption_vault_audit (
  id TEXT PRIMARY KEY,
  user_id TEXT NOT NULL REFERENCES auth.users(id) ON DELETE CASCADE,
  action TEXT NOT NULL,
  ip_address TEXT,
  user_agent TEXT,
  context TEXT,
  status INTEGER NOT NULL,
  created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE INDEX IF NOT EXISTS encryption_vault_audit_user_id_idx
  ON auth.encryption_vault_audit (user_id);
CREATE INDEX IF NOT EXISTS encryption_vault_audit_created_at_idx
  ON auth.encryption_vault_audit (created_at);
-- ─── Row Level Security ───────────────────────────────────────
--
-- Defense-in-depth: even if a future query forgets the WHERE
-- user_id = $1 clause, the database itself refuses to leak rows
-- belonging to other users. The vault service wraps every read
-- and write in a transaction that calls
-- set_config('app.current_user_id', userId, true)
-- before touching the table — RLS rejects anything else.
--
-- FORCE makes the policy apply to the table owner too, so the
-- mana-auth service role cannot bypass it via grants alone.
ALTER TABLE auth.encryption_vaults ENABLE ROW LEVEL SECURITY;
ALTER TABLE auth.encryption_vaults FORCE ROW LEVEL SECURITY;
DROP POLICY IF EXISTS encryption_vaults_user_isolation ON auth.encryption_vaults;
CREATE POLICY encryption_vaults_user_isolation ON auth.encryption_vaults
  USING (user_id = current_setting('app.current_user_id', true))
  WITH CHECK (user_id = current_setting('app.current_user_id', true));
ALTER TABLE auth.encryption_vault_audit ENABLE ROW LEVEL SECURITY;
ALTER TABLE auth.encryption_vault_audit FORCE ROW LEVEL SECURITY;
DROP POLICY IF EXISTS encryption_vault_audit_user_isolation ON auth.encryption_vault_audit;
CREATE POLICY encryption_vault_audit_user_isolation ON auth.encryption_vault_audit
  USING (user_id = current_setting('app.current_user_id', true))
  WITH CHECK (user_id = current_setting('app.current_user_id', true));


@ -1,86 +0,0 @@
-- Migration: encryption_vaults recovery wrap + zero-knowledge mode
--
-- Phase 9 of the encryption rollout. Adds three new columns + makes
-- wrapped_mk nullable so a user can opt into "true zero-knowledge"
-- mode where the server can no longer decrypt their data.
--
-- The opt-in flow is:
-- 1. Client generates a 32-byte recovery secret (client-only)
-- 2. Client wraps the existing master key with a recovery-derived key
-- 3. Client posts the wrapped MK + IV to /me/encryption-vault/recovery-wrap
-- 4. The server stores recovery_wrapped_mk + recovery_iv (both NULLABLE
-- until the user enables the recovery wrap; both NOT NULL once set)
-- 5. Client posts /me/encryption-vault/zero-knowledge with `enable: true`
-- The server NULLs out wrapped_mk + wrap_iv, sets zero_knowledge=true.
-- The server can no longer decrypt the user's data.
-- 6. On the next unlock, GET /key returns the recovery_wrapped_mk blob
-- with `requiresRecoveryCode: true`. The client prompts the user for
-- the recovery code, derives the wrap key, unwraps locally.
--
-- The "disable" flow is the inverse: the client unwraps locally, generates
-- a new server-side wrapped_mk via a fresh KEK wrap, and posts it back.
--
-- Idempotent: re-running on a partially-migrated DB is safe.
-- ─── Add new columns ──────────────────────────────────────────
ALTER TABLE auth.encryption_vaults
  ADD COLUMN IF NOT EXISTS recovery_wrapped_mk TEXT,
  ADD COLUMN IF NOT EXISTS recovery_iv TEXT,
  ADD COLUMN IF NOT EXISTS recovery_format_version SMALLINT NOT NULL DEFAULT 1,
  ADD COLUMN IF NOT EXISTS recovery_set_at TIMESTAMPTZ,
  ADD COLUMN IF NOT EXISTS zero_knowledge BOOLEAN NOT NULL DEFAULT false;
-- ─── Make wrapped_mk + wrap_iv nullable ───────────────────────
-- These were NOT NULL in the Phase 2 migration. After Phase 9, a vault
-- in zero-knowledge mode has no server-side wrap at all, so both columns
-- have to allow NULL. Existing rows are unaffected (they have non-NULL
-- values; the constraint just relaxes).
ALTER TABLE auth.encryption_vaults
  ALTER COLUMN wrapped_mk DROP NOT NULL,
  ALTER COLUMN wrap_iv DROP NOT NULL;
-- ─── Sanity constraint ────────────────────────────────────────
-- A vault row must have AT LEAST one usable wrap form, otherwise the
-- user has lost access to their data and we should have rejected the
-- mutation that left the row in this state. The check enforces that
-- at least one of (wrapped_mk, recovery_wrapped_mk) is populated.
ALTER TABLE auth.encryption_vaults
  DROP CONSTRAINT IF EXISTS encryption_vaults_has_wrap;
ALTER TABLE auth.encryption_vaults
  ADD CONSTRAINT encryption_vaults_has_wrap
  CHECK (wrapped_mk IS NOT NULL OR recovery_wrapped_mk IS NOT NULL);
-- ─── Cross-field consistency ──────────────────────────────────
-- If recovery_wrapped_mk is set, recovery_iv must also be set.
-- If wrapped_mk is set, wrap_iv must also be set.
ALTER TABLE auth.encryption_vaults
  DROP CONSTRAINT IF EXISTS encryption_vaults_wrap_iv_pair;
ALTER TABLE auth.encryption_vaults
  ADD CONSTRAINT encryption_vaults_wrap_iv_pair
  CHECK (
    (wrapped_mk IS NULL) = (wrap_iv IS NULL)
    AND (recovery_wrapped_mk IS NULL) = (recovery_iv IS NULL)
  );
-- ─── Zero-knowledge implies the server wrap is gone ───────────
-- If a vault is in zero-knowledge mode, the KEK-wrapped MK MUST be
-- absent — otherwise the "server can no longer decrypt" promise is
-- a lie. The recovery wrap MUST be present, otherwise the user is
-- locked out.
ALTER TABLE auth.encryption_vaults
  DROP CONSTRAINT IF EXISTS encryption_vaults_zk_consistency;
ALTER TABLE auth.encryption_vaults
  ADD CONSTRAINT encryption_vaults_zk_consistency
  CHECK (
    zero_knowledge = false
    OR (zero_knowledge = true AND wrapped_mk IS NULL AND recovery_wrapped_mk IS NOT NULL)
  );
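The three constraints can be read together as one pure predicate. A sketch in TypeScript, where field names mirror the SQL columns; this is illustrative, not service code:

```typescript
type VaultRow = {
  wrappedMk: string | null;
  wrapIv: string | null;
  recoveryWrappedMk: string | null;
  recoveryIv: string | null;
  zeroKnowledge: boolean;
};

function vaultRowIsConsistent(r: VaultRow): boolean {
  // encryption_vaults_has_wrap: at least one usable wrap form exists.
  const hasWrap = r.wrappedMk !== null || r.recoveryWrappedMk !== null;
  // encryption_vaults_wrap_iv_pair: each wrap travels with its IV.
  const ivPairs =
    (r.wrappedMk === null) === (r.wrapIv === null) &&
    (r.recoveryWrappedMk === null) === (r.recoveryIv === null);
  // encryption_vaults_zk_consistency: ZK mode means no server wrap,
  // but a recovery wrap must exist.
  const zkOk =
    !r.zeroKnowledge || (r.wrappedMk === null && r.recoveryWrappedMk !== null);
  return hasWrap && ivPairs && zkOk;
}
```

A standard-mode row and a zero-knowledge row both pass; a ZK row that still holds a server wrap fails.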


@ -1,70 +0,0 @@
-- Migration 004: Spaces schema
--
-- Adds the `spaces` schema with two server-side tables that extend Better
-- Auth organizations for our multi-tenancy model. See
-- docs/plans/spaces-foundation.md for the full RFC, and the Drizzle
-- definitions at src/db/schema/spaces.ts.
--
-- Why a separate schema:
-- - Keeps auth tables focused on identity, not domain extensions
-- - Lets us grant narrower RLS policies per schema
-- - Mirrors the pgSchema-per-concern pattern used across mana_platform
--
-- Idempotent: re-running on a partially-migrated DB is safe.
-- ─── Schema ──────────────────────────────────────────────────────
CREATE SCHEMA IF NOT EXISTS spaces;
-- ─── credentials ────────────────────────────────────────────────
-- Per-space external credentials: OAuth tokens, API keys, SMTP configs.
-- NEVER stored client-side — these are server-held secrets, wrapped with
-- the service-wide KEK (same mechanism as auth.encryption_vaults).
CREATE TABLE IF NOT EXISTS spaces.credentials (
  space_id TEXT NOT NULL,
  provider TEXT NOT NULL,
  access_token_encrypted TEXT NOT NULL,
  refresh_token_encrypted TEXT,
  expires_at TIMESTAMPTZ,
  scopes TEXT[],
  provider_account_id TEXT,
  metadata_json TEXT,
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  PRIMARY KEY (space_id, provider),
  CONSTRAINT space_credentials_space_fk
    FOREIGN KEY (space_id) REFERENCES auth.organizations (id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS space_credentials_space_idx
  ON spaces.credentials (space_id);
-- ─── module_permissions ─────────────────────────────────────────
-- Role × module permission matrix. If no row exists for a given
-- (space, role, module) tuple, the default is derived from SPACE_MODULE_ALLOWLIST
-- plus role-tier fallback (owner > admin > member). Rows here are
-- explicit overrides — typically written when a space owner customises
-- the default permissions for a custom role.
CREATE TABLE IF NOT EXISTS spaces.module_permissions (
  space_id TEXT NOT NULL,
  role TEXT NOT NULL,
  module_id TEXT NOT NULL,
  can_read BOOLEAN NOT NULL DEFAULT TRUE,
  can_write BOOLEAN NOT NULL DEFAULT FALSE,
  can_admin BOOLEAN NOT NULL DEFAULT FALSE,
  updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  PRIMARY KEY (space_id, role, module_id),
  CONSTRAINT space_module_permissions_space_fk
    FOREIGN KEY (space_id) REFERENCES auth.organizations (id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS space_module_permissions_space_module_idx
  ON spaces.module_permissions (space_id, module_id);
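The override-then-fallback resolution described in the comment above can be sketched as follows. The tier defaults are assumptions, and the `SPACE_MODULE_ALLOWLIST` layer is omitted for brevity:

```typescript
type Perms = { canRead: boolean; canWrite: boolean; canAdmin: boolean };

// Assumed role-tier fallback (owner > admin > member); the real defaults
// also consult SPACE_MODULE_ALLOWLIST, which this sketch skips.
const ROLE_DEFAULTS: Record<string, Perms> = {
  owner: { canRead: true, canWrite: true, canAdmin: true },
  admin: { canRead: true, canWrite: true, canAdmin: false },
  member: { canRead: true, canWrite: false, canAdmin: false },
};

function resolveModulePerms(
  // Keyed by `${role}:${moduleId}`, mirroring spaces.module_permissions rows.
  overrides: Map<string, Perms>,
  role: string,
  moduleId: string,
): Perms {
  return (
    overrides.get(`${role}:${moduleId}`) ??
    ROLE_DEFAULTS[role] ??
    { canRead: false, canWrite: false, canAdmin: false }
  );
}
```

An explicit row wins; otherwise the role tier decides; unknown roles get nothing.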
-- ─── RLS ─────────────────────────────────────────────────────────
-- Defer enabling RLS until the rest of the app is scope-aware. Turning
-- it on now would lock out services that don't yet pass the space
-- context. Re-enable in a follow-up migration once mana-api speaks the
-- Spaces protocol end-to-end.
--
-- ALTER TABLE spaces.credentials ENABLE ROW LEVEL SECURITY;
-- ALTER TABLE spaces.module_permissions ENABLE ROW LEVEL SECURITY;


@ -1,77 +0,0 @@
-- Migration 005: Personas
--
-- Adds the three `auth.personas*` tables introduced by the M2.ac MCP
-- work (feat commit 493db0c3b). See docs/plans/mana-mcp-and-personas.md
-- for the full spec, and src/db/schema/personas.ts for the Drizzle
-- definitions.
--
-- A `persona` is an auto-driven user (archetype + system prompt + module
-- mix) that goes through the normal auth/register/JWT pipeline — kept in
-- the auth schema so foreign keys to `auth.users` stay straightforward.
-- The companion tables are append-only:
-- - persona_actions: every MCP tool call the runner makes
-- - persona_feedback: module-scoped quality ratings emitted per tick
--
-- This SQL matches what drizzle-kit push would emit for personas.ts. We
-- apply it manually because the other tables created alongside personas
-- (spaces.credentials, spaces.module_permissions) live outside the auth
-- schemaFilter and pre-existing public enums would otherwise trip the
-- push. See migration 006 for the follow-up that makes push clean.
--
-- Idempotent: re-running on a partially-migrated DB is safe.
-- ─── personas ───────────────────────────────────────────────────
CREATE TABLE IF NOT EXISTS auth.personas (
  user_id TEXT PRIMARY KEY NOT NULL,
  archetype TEXT NOT NULL,
  system_prompt TEXT NOT NULL,
  module_mix JSONB NOT NULL,
  tick_cadence TEXT NOT NULL DEFAULT 'daily',
  last_active_at TIMESTAMPTZ,
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  CONSTRAINT personas_user_id_users_id_fk
    FOREIGN KEY (user_id) REFERENCES auth.users (id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS personas_archetype_idx
  ON auth.personas (archetype);
-- ─── persona_actions ────────────────────────────────────────────
CREATE TABLE IF NOT EXISTS auth.persona_actions (
  id TEXT PRIMARY KEY NOT NULL,
  persona_id TEXT NOT NULL,
  tick_id TEXT NOT NULL,
  tool_name TEXT NOT NULL,
  input_hash TEXT,
  result TEXT NOT NULL,
  error_message TEXT,
  latency_ms INTEGER,
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  CONSTRAINT persona_actions_persona_id_personas_user_id_fk
    FOREIGN KEY (persona_id) REFERENCES auth.personas (user_id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS persona_actions_persona_idx
  ON auth.persona_actions (persona_id, created_at);
CREATE INDEX IF NOT EXISTS persona_actions_tick_idx
  ON auth.persona_actions (tick_id);
-- ─── persona_feedback ───────────────────────────────────────────
CREATE TABLE IF NOT EXISTS auth.persona_feedback (
  id TEXT PRIMARY KEY NOT NULL,
  persona_id TEXT NOT NULL,
  tick_id TEXT NOT NULL,
  module TEXT NOT NULL,
  rating SMALLINT NOT NULL,
  notes TEXT,
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  CONSTRAINT persona_feedback_persona_id_personas_user_id_fk
    FOREIGN KEY (persona_id) REFERENCES auth.personas (user_id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS persona_feedback_persona_idx
  ON auth.persona_feedback (persona_id, created_at);
CREATE INDEX IF NOT EXISTS persona_feedback_module_idx
  ON auth.persona_feedback (module, created_at);


@ -1,89 +0,0 @@
-- Migration 006: Move Better Auth enums from `public` to `auth` schema
--
-- Background: mana-auth's drizzle.config.ts uses schemaFilter: ['auth'],
-- which restricts introspection to the auth schema. Enums declared via
-- the module-level `pgEnum(...)` factory default to `public`, which the
-- filter hides. Result: every `drizzle-kit push` would re-emit
-- CREATE TYPE access_tier ... and fail with 42710 ("type already
-- exists"). That blocked setup-databases.sh from advancing mana-auth
-- past enum declarations and masked subsequent schema drift (e.g. the
-- `kind` column from persona work going un-pushed).
--
-- Fix: move the three enums into the auth schema itself. Source-side
-- this is `authSchema.enum(...)` instead of `pgEnum(...)`. DB-side this
-- migration recreates the types in auth, repoints the two columns that
-- reference them, then drops the old public types.
--
-- Scope of affected columns (verified 2026-04-23):
-- - auth.users.access_tier → access_tier
-- - auth.users.role → user_role
-- (user_kind has no column uses yet; the type is created in auth
-- preemptively so the next schema push doesn't try to create it in
-- public.)
--
-- Idempotent: re-running on an already-migrated DB is a no-op for the
-- column changes; the CREATE TYPE statements use guarded DO blocks.
BEGIN;
-- 1. Create the new types in auth (guarded so re-runs don't fail).
DO $$ BEGIN
  IF NOT EXISTS (
    SELECT 1 FROM pg_type t JOIN pg_namespace n ON t.typnamespace = n.oid
    WHERE n.nspname = 'auth' AND t.typname = 'access_tier'
  ) THEN
    CREATE TYPE auth.access_tier AS ENUM ('guest', 'public', 'beta', 'alpha', 'founder');
  END IF;
  IF NOT EXISTS (
    SELECT 1 FROM pg_type t JOIN pg_namespace n ON t.typnamespace = n.oid
    WHERE n.nspname = 'auth' AND t.typname = 'user_role'
  ) THEN
    CREATE TYPE auth.user_role AS ENUM ('user', 'admin', 'service');
  END IF;
  IF NOT EXISTS (
    SELECT 1 FROM pg_type t JOIN pg_namespace n ON t.typnamespace = n.oid
    WHERE n.nspname = 'auth' AND t.typname = 'user_kind'
  ) THEN
    CREATE TYPE auth.user_kind AS ENUM ('human', 'persona', 'system');
  END IF;
END $$;
-- 2. Repoint the two existing columns. Only runs if the column still
-- uses the old public type — the `format_type` check keeps this
-- idempotent.
DO $$ BEGIN
  IF (SELECT format_type(a.atttypid, a.atttypmod)
      FROM pg_attribute a
      JOIN pg_class c ON a.attrelid = c.oid
      JOIN pg_namespace n ON c.relnamespace = n.oid
      WHERE n.nspname = 'auth' AND c.relname = 'users' AND a.attname = 'access_tier'
     ) = 'access_tier' THEN
    ALTER TABLE auth.users ALTER COLUMN access_tier DROP DEFAULT;
    ALTER TABLE auth.users
      ALTER COLUMN access_tier TYPE auth.access_tier
      USING access_tier::text::auth.access_tier;
    ALTER TABLE auth.users ALTER COLUMN access_tier SET DEFAULT 'public'::auth.access_tier;
  END IF;
  IF (SELECT format_type(a.atttypid, a.atttypmod)
      FROM pg_attribute a
      JOIN pg_class c ON a.attrelid = c.oid
      JOIN pg_namespace n ON c.relnamespace = n.oid
      WHERE n.nspname = 'auth' AND c.relname = 'users' AND a.attname = 'role'
     ) = 'user_role' THEN
    ALTER TABLE auth.users ALTER COLUMN role DROP DEFAULT;
    ALTER TABLE auth.users
      ALTER COLUMN role TYPE auth.user_role
      USING role::text::auth.user_role;
    ALTER TABLE auth.users ALTER COLUMN role SET DEFAULT 'user'::auth.user_role;
  END IF;
END $$;
-- 3. Drop the now-unreferenced public types. DROP TYPE IF EXISTS is
-- safe if someone re-runs this after they were already dropped.
DROP TYPE IF EXISTS public.access_tier;
DROP TYPE IF EXISTS public.user_role;
-- Note: public.user_kind was never created (no prior migration emitted
-- it), so no DROP is needed.
COMMIT;


@ -1,110 +0,0 @@
-- 007_passkey_bootstrap.sql
--
-- Aligns auth.passkeys with the expected schema of
-- `@better-auth/passkey` (1.6+) and extends auth.login_attempts with
-- a `method` column so passkey failures can be bucketed separately
-- from password failures for rate-limit/lockout accounting.
--
-- Idempotent. Safe to re-run against a fresh or partially-migrated
-- dev database. No destructive drops — we only ADD or RENAME.
--
-- Applied via psql (not drizzle-kit push) because:
-- - drizzle-kit push treats column renames as drop + add unless
-- confirmed interactively, which would delete existing passkey
-- rows if there were any;
-- - adding NOT NULL / DEFAULT in a push without a USING clause
-- fails against tables with existing rows.
--
-- Usage (dev):
-- docker exec -i mana-postgres psql -U mana -d mana_platform \
-- < services/mana-auth/sql/007_passkey_bootstrap.sql
--
-- Production: run under migrations tooling once the pattern exists.
-- The mana-auth CLAUDE.md notes the repo convention that hand-
-- authored SQL migrations under sql/ are applied by hand.
BEGIN;
-- ─── Passkey schema alignment ──────────────────────────────────
-- friendly_name → name
-- Better Auth's plugin schema calls the column `name`. Rename
-- without dropping so any rows survive (none expected in dev, but
-- the migration is idempotent regardless).
DO $$
BEGIN
  IF EXISTS (
    SELECT 1 FROM information_schema.columns
    WHERE table_schema = 'auth' AND table_name = 'passkeys'
      AND column_name = 'friendly_name'
  ) AND NOT EXISTS (
    SELECT 1 FROM information_schema.columns
    WHERE table_schema = 'auth' AND table_name = 'passkeys'
      AND column_name = 'name'
  ) THEN
    ALTER TABLE auth.passkeys RENAME COLUMN friendly_name TO name;
  END IF;
END $$;
-- Add aaguid — the authenticator AAGUID is optional in WebAuthn but
-- required by Better Auth's schema. Nullable so existing rows (if
-- any) stay valid.
ALTER TABLE auth.passkeys ADD COLUMN IF NOT EXISTS aaguid text;
-- Convert transports from jsonb to text (CSV of AuthenticatorTransport
-- values). Better Auth stores it as a plain string like
-- "usb,nfc,hybrid"; jsonb would force the plugin to JSON.parse on
-- every read.
--
-- Postgres forbids subqueries directly in ALTER TABLE … USING, so
-- we stage the conversion through a dedicated helper function (which
-- can freely contain subqueries) and drop the function after use.
DO $$
DECLARE
  current_type text;
BEGIN
  SELECT data_type INTO current_type
  FROM information_schema.columns
  WHERE table_schema = 'auth' AND table_name = 'passkeys'
    AND column_name = 'transports';
  IF current_type = 'jsonb' THEN
    CREATE OR REPLACE FUNCTION pg_temp.jsonb_array_to_csv(j jsonb)
    RETURNS text LANGUAGE sql IMMUTABLE AS $fn$
      SELECT CASE
        WHEN j IS NULL THEN NULL
        WHEN jsonb_typeof(j) = 'array' THEN (
          SELECT string_agg(value, ',')
          FROM jsonb_array_elements_text(j) AS value
        )
        ELSE j::text
      END
    $fn$;
    ALTER TABLE auth.passkeys
      ALTER COLUMN transports TYPE text
      USING (pg_temp.jsonb_array_to_csv(transports));
  END IF;
END $$;
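A TypeScript analogue of the `jsonb_array_to_csv` conversion is handy for predicting what the migration produces for a given `transports` value. Illustrative only; the real conversion happens entirely inside Postgres:

```typescript
function transportsToCsv(j: unknown): string | null {
  if (j === null || j === undefined) return null;
  if (Array.isArray(j)) {
    // string_agg over zero rows yields NULL, so an empty array maps to null.
    return j.length > 0 ? j.join(",") : null;
  }
  // Matches `ELSE j::text`: scalar jsonb strings keep their quotes.
  return JSON.stringify(j);
}
```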
-- ─── Lockout table: method column ──────────────────────────────
-- Bucket login attempts by auth method so passkey + password + 2FA
-- failures can be counted / rate-limited independently. Default
-- 'password' for the existing pre-passkey column — that's historically
-- what any prior row represented.
ALTER TABLE auth.login_attempts
ADD COLUMN IF NOT EXISTS method text NOT NULL DEFAULT 'password';
-- Replace the existing (email, attempted_at) index with one that
-- also covers method, so lockout checks filter without a sequential
-- scan. Using IF NOT EXISTS on the new index and dropping the old
-- one afterwards keeps the migration re-runnable.
CREATE INDEX IF NOT EXISTS login_attempts_email_method_time_idx
ON auth.login_attempts (email, method, attempted_at);
-- The old (email, attempted_at) index becomes redundant once the new
-- one exists (queries on email+method still use the new one).
DROP INDEX IF EXISTS auth.login_attempts_email_attempted_at_idx;
COMMIT;


@ -1,20 +0,0 @@
-- 008_community_identity.sql
--
-- Phase 3.C of docs/plans/feedback-rewards-and-identity.md.
--
-- Community-hub opt-ins for every user:
-- - community_show_real_name: whether the real name is shown next to
--   the owl pseudonym in the community feed (default off).
-- - community_karma: counter, incremented once per reaction someone
--   leaves on one of the user's posts. Drives the Bronze/Silver/Gold/
--   Platinum tier badge.
--
-- Apply manually:
-- psql "$DATABASE_URL" -f services/mana-auth/sql/008_community_identity.sql
BEGIN;
ALTER TABLE auth.users
ADD COLUMN IF NOT EXISTS community_show_real_name boolean NOT NULL DEFAULT false,
ADD COLUMN IF NOT EXISTS community_karma integer NOT NULL DEFAULT 0;
COMMIT;

View file

@ -1,17 +0,0 @@
-- 009_rename_community_to_feedback.sql
-- Renames the two identity-opt-in columns on auth.users to match the
-- "feedback" brand the public hub now carries. Was originally added
-- in 008_community_identity.sql.
--
-- Apply with:
-- psql "$DATABASE_URL" -f sql/009_rename_community_to_feedback.sql
BEGIN;
ALTER TABLE auth.users
RENAME COLUMN community_show_real_name TO feedback_show_real_name;
ALTER TABLE auth.users
RENAME COLUMN community_karma TO feedback_karma;
COMMIT;

View file

@ -1,503 +0,0 @@
/**
* Better Auth Configuration
*
* This file configures Better Auth with:
* - Email/password authentication
* - Organization plugin for B2B (multi-tenant)
* - JWT plugin with minimal claims
* - Drizzle adapter for PostgreSQL
*
* ARCHITECTURE DECISION (2024-12):
* We use MINIMAL JWT claims. Organization and credit data should be fetched
* via API calls, not embedded in JWTs. See docs/AUTHENTICATION_ARCHITECTURE.md
*
* @see https://www.better-auth.com/docs
*/
import { betterAuth } from 'better-auth';
import { drizzleAdapter } from 'better-auth/adapters/drizzle';
import { jwt } from 'better-auth/plugins/jwt';
import { organization } from 'better-auth/plugins/organization';
import { twoFactor } from 'better-auth/plugins/two-factor';
import { magicLink } from 'better-auth/plugins/magic-link';
import { passkey } from '@better-auth/passkey';
import postgres from 'postgres';
import { logger } from '@mana/shared-hono';
import { getDb } from '../db/connection';
import { organizations, members, invitations } from '../db/schema/organizations';
import {
users,
sessions,
accounts,
verificationTokens,
jwks,
twoFactorAuth,
passkeys,
} from '../db/schema/auth';
import {
sendPasswordResetEmail,
sendInvitationEmail,
sendVerificationEmail,
sendMagicLinkEmail,
} from '../email/send';
import { sourceAppStore, passwordResetRedirectStore } from './stores';
import { TRUSTED_ORIGINS } from './sso-origins';
import {
assertValidSpaceMetadataForCreate,
assertSpaceIsDeletable,
createPersonalSpaceFor,
} from '../spaces';
// Re-export so existing imports (`import { TRUSTED_ORIGINS } from './better-auth.config'`)
// keep working. New code should import from './sso-origins' directly.
export { TRUSTED_ORIGINS };
/**
* JWT Custom Payload Interface
*
* MINIMAL claims only. Organization context and credits are available via:
* - GET /organization/get-active-member - org membership & role
* - GET /api/v1/credits/balance - credit balance
*
* Why minimal claims?
* 1. Credit balance changes frequently - JWT would be stale
* 2. Organization context available via Better Auth org plugin APIs
* 3. Smaller tokens = better performance
* 4. Follows Better Auth's session-based design
*/
export interface JWTCustomPayload {
/** User ID (standard JWT claim) */
sub: string;
/** User email */
email: string;
/** User role (user, admin, service) */
role: string;
/** Session ID for reference */
sid: string;
/** Access tier for app-level gating (guest, public, beta, alpha, founder) */
tier: string;
}
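As a shape check, the minimal claim set can be built like this (a sketch with assumed user/session shapes, not the plugin's actual call site):

```typescript
// Sketch: build the five minimal claims from a Better Auth user and
// session, with the same fallbacks the real definePayload below uses.
function buildMinimalClaims(
  user: { id: string; email: string; role?: string; accessTier?: string },
  session: { id: string }
) {
  return {
    sub: user.id, // standard subject claim
    email: user.email,
    role: user.role ?? 'user',
    sid: session.id, // session id for reference
    tier: user.accessTier ?? 'public',
  };
}
```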
/**
* WebAuthn configuration for the passkey plugin. Kept as a separate
* argument so the call site (src/index.ts) can wire it in from the
* loaded config without coupling better-auth.config.ts to config.ts.
*/
export interface BetterAuthWebAuthnOptions {
rpId: string;
rpName: string;
origin: string | string[];
}
/**
* Create Better Auth instance
*
* @param databaseUrl - PostgreSQL connection URL for the auth DB
* @param syncDatabaseUrl - PostgreSQL connection URL for `mana_sync`. Held
* for use by the per-user `userContext` bootstrap; currently no
* per-Space singletons are written here (the kontextDoc that used to
* live here was retired in the Option-B cleanup).
* @param webauthn - WebAuthn settings for the passkey plugin
* @returns Better Auth instance
*/
export function createBetterAuth(
databaseUrl: string,
syncDatabaseUrl: string,
webauthn: BetterAuthWebAuthnOptions
) {
const db = getDb(databaseUrl);
// Lazy module-scoped sync SQL pool. Mirrors the pattern in
// routes/auth.ts so we don't open a second pool just for the
// org-create hook. Process lifetime owns it; never closed manually.
let _syncSql: ReturnType<typeof postgres> | null = null;
const getSyncSql = (): ReturnType<typeof postgres> => {
if (!_syncSql) _syncSql = postgres(syncDatabaseUrl, { max: 2 });
return _syncSql;
};
return betterAuth({
// Database adapter (Drizzle with PostgreSQL)
database: drizzleAdapter(db, {
provider: 'pg',
schema: {
// Auth tables (actual Drizzle table objects)
user: users,
session: sessions,
account: accounts,
verification: verificationTokens,
// Organization tables
organization: organizations,
member: members,
invitation: invitations,
// JWT plugin table
jwks: jwks,
// Two-Factor Authentication table
twoFactor: twoFactorAuth,
// Passkey plugin table — Drizzle field names match
// @better-auth/passkey's plugin schema (see src/db/schema/
// auth.ts comment for the alignment rationale).
passkey: passkeys,
},
}),
// Custom user fields (must be declared so Better Auth includes them in the user object)
user: {
additionalFields: {
accessTier: {
type: 'string',
defaultValue: 'public',
input: false, // Not settable via sign-up
},
kind: {
type: 'string',
defaultValue: 'human',
input: false, // Set only via admin endpoints, never sign-up
},
},
},
// Email/password authentication with password reset
emailAndPassword: {
enabled: true,
requireEmailVerification: true,
minPasswordLength: 8,
maxPasswordLength: 128,
/**
* Password Reset Configuration
*
* Better Auth provides password reset via:
* - auth.api.requestPasswordReset({ body: { email } }) - Sends reset email
* - auth.api.resetPassword({ body: { newPassword, token } }) - Resets password
*
* The reset URL is modified to include callbackURL parameter
* so users are redirected back to the app they requested reset from.
*
* @see https://www.better-auth.com/docs/authentication/email-password#password-reset
*/
sendResetPassword: async ({
user,
url,
}: {
user: { email: string; name: string };
url: string;
}) => {
// Check if we have a redirect URL stored for this user's password reset request
const redirectUrl = passwordResetRedirectStore.get(user.email);
// Modify reset URL to include callbackURL parameter
let resetUrl = url;
if (redirectUrl) {
const urlObj = new URL(url);
urlObj.searchParams.set('callbackURL', redirectUrl);
resetUrl = urlObj.toString();
}
await sendPasswordResetEmail(user.email, resetUrl, user.name);
},
},
/**
* Email Verification Configuration
*
* Sends verification email when user registers.
* User must verify email before they can log in.
*
* The verification URL is modified to include redirectTo parameter
* so users are redirected back to the app they registered from.
*/
emailVerification: {
sendOnSignUp: true,
autoSignInAfterVerification: true,
sendVerificationEmail: async ({
user,
url,
}: {
user: { email: string; name: string };
url: string;
}) => {
// Check if we have a source app URL stored for this user
// Note: We get the URL without deleting it here since it might be needed
// during the verification process in the passthrough controller
const sourceAppUrl = sourceAppStore.get(user.email);
// Modify verification URL: set callbackURL so Better Auth redirects
// back to the source app after email verification
let verificationUrl = url;
if (sourceAppUrl) {
const urlObj = new URL(url);
urlObj.searchParams.set('callbackURL', sourceAppUrl);
verificationUrl = urlObj.toString();
}
await sendVerificationEmail(user.email, verificationUrl, user.name);
},
},
// Session configuration
session: {
expiresIn: 60 * 60 * 24 * 7, // 7 days
updateAge: 60 * 60 * 24, // Update session once per day
},
/**
* Database hooks lifecycle callbacks for core tables.
*
* `user.create.after` runs after a successful signup and provisions
* the user's personal Space (a Better Auth organization of type
* `personal`). Every user needs one because modules store private
* data like mood, dreams, sleep there. Failure propagates: an
* orphan user without a personal space is a worse state than a
* retry-able signup error.
*
* See docs/plans/spaces-foundation.md and ../spaces/personal-space.ts.
*/
databaseHooks: {
user: {
create: {
after: async (user) => {
await createPersonalSpaceFor(db, {
id: user.id,
email: user.email,
name: user.name,
accessTier: (user as { accessTier?: string | null }).accessTier,
});
},
},
},
},
// Base URL for callbacks and redirects
baseURL: process.env.BASE_URL || 'http://localhost:3001',
/**
* Advanced Cookie Configuration for Cross-Domain SSO
*
* By setting the cookie domain to '.mana.how', session cookies are shared
* across all subdomains (calendar.mana.how, todo.mana.how, etc.).
* This enables Single Sign-On: login once, authenticated everywhere.
*
* For local development (localhost), leave domain undefined to use default behavior.
*/
advanced: {
// Cookie prefix for all auth cookies
cookiePrefix: 'mana',
// Cross-subdomain cookie configuration
crossSubDomainCookies: {
// Enable cross-subdomain cookies in production
enabled: !!process.env.COOKIE_DOMAIN,
// Domain for cookies (e.g., '.mana.how' - note the leading dot)
domain: process.env.COOKIE_DOMAIN || undefined,
},
// Default cookie options for all auth cookies
defaultCookieAttributes: {
// Secure in production, allow http in development
secure: process.env.NODE_ENV === 'production',
// SameSite=None is required for cross-subdomain SSO via fetch()
// Lax only sends cookies on top-level navigations, not programmatic fetch()
// None requires Secure=true (ensured by production check above)
sameSite: process.env.COOKIE_DOMAIN ? ('none' as const) : ('lax' as const),
// Cookies accessible to all paths
path: '/',
// Prevent JavaScript access to cookies
httpOnly: true,
},
},
// Trusted origins for cross-origin requests (must match CORS_ORIGINS in production)
// IMPORTANT: Every app that uses SSO must be listed here, otherwise
// Better Auth will reject cross-origin requests with credentials.
// When adding a new app, add its production domain here AND to
// CORS_ORIGINS in docker-compose.macmini.yml.
// Single source of truth: TRUSTED_ORIGINS (exported below).
trustedOrigins: TRUSTED_ORIGINS,
// Plugins
plugins: [
/**
* Organization Plugin (B2B)
*
* Provides complete organization management:
* - Create/update/delete organizations
* - Invite/add/remove members
* - Role-based access control
* - Active organization tracking (session.activeOrganizationId)
*
* Client apps use these endpoints for org context:
* - GET /organization/get-active-member
* - GET /organization/get-active-member-role
* - POST /organization/set-active
*/
organization({
// Allow users to create their own organizations
allowUserToCreateOrganization: true,
// Email invitation handler
async sendInvitationEmail(data) {
const { email, organization, inviter } = data;
const baseUrl = process.env.BASE_URL || 'https://mana.how';
const inviteUrl = `${baseUrl}/accept-invitation?id=${data.id}`;
await sendInvitationEmail(
email,
organization.name,
inviter?.user?.name || 'Ein Teammitglied',
inviteUrl
);
},
/**
* Spaces enforce that every organization carries a valid
* `metadata.type` (the Space type), and block deletion of the
* user's personal space. The per-Space `kontextDoc` singleton
* that used to be bootstrapped here was retired in favour of
* the user-driven `notes.isSpaceContext` flag (Option B
* cleanup), so the after-create hook is currently empty;
* kept as a hook anchor for future per-Space bootstrap needs.
*/
organizationHooks: {
beforeCreateOrganization: async ({ organization }) => {
assertValidSpaceMetadataForCreate(organization.metadata);
},
beforeDeleteOrganization: async ({ organization }) => {
assertSpaceIsDeletable(organization.metadata);
},
},
// Custom roles and permissions
organizationRole: {
owner: {
permissions: [
'organization:update',
'organization:delete',
'members:invite',
'members:remove',
'members:update_role',
'credits:allocate',
'credits:view_all',
],
},
admin: {
permissions: [
'organization:update',
'members:invite',
'members:remove',
'credits:view_all',
],
},
member: {
permissions: ['credits:view_own'],
},
},
}),
/**
* JWT Plugin
*
* Generates JWT tokens with MINIMAL claims.
*
* DO NOT add complex claims like:
* - credit_balance (stale after 15min, fetch via API instead)
* - organization details (use Better Auth org plugin APIs)
* - customer_type (derive from activeOrganizationId presence)
*
* Apps should call APIs for dynamic data:
* - Credits: GET /api/v1/credits/balance
* - Org info: GET /organization/get-active-member
*/
jwt({
jwt: {
// For OIDC compatibility, issuer MUST match the discovery document
// Use BASE_URL to match /.well-known/openid-configuration issuer
issuer: process.env.BASE_URL || process.env.JWT_ISSUER || 'http://localhost:3001',
audience: process.env.JWT_AUDIENCE || 'mana',
expirationTime: '15m',
/**
* Define minimal JWT payload
*
* Only includes static user info that doesn't change frequently.
*/
definePayload({ user, session }: { user: any; session: any }): JWTCustomPayload {
return {
sub: user.id,
email: user.email,
role: (user as { role?: string }).role || 'user',
sid: session.id,
tier: (user as { accessTier?: string }).accessTier || 'public',
};
},
},
}),
/**
* Two-Factor Authentication Plugin (TOTP)
*
* Provides TOTP-based 2FA with backup codes.
* Endpoints provided automatically by Better Auth passthrough:
* - POST /two-factor/enable (requires password)
* - POST /two-factor/disable (requires password)
* - POST /two-factor/verify-totp (during login)
* - POST /two-factor/verify-backup-code (during login)
* - POST /two-factor/get-totp-uri
* - POST /two-factor/generate-backup-codes
*/
twoFactor({
issuer: 'Mana',
}),
/**
* Magic Link Plugin (Passwordless Email Login)
*
* Sends a one-time login link via email.
* Endpoints via Better Auth passthrough:
* - POST /magic-link/send-magic-link
* - GET /magic-link/verify (callback from email)
*/
magicLink({
sendMagicLink: async ({ email, url }: { email: string; url: string }) => {
await sendMagicLinkEmail(email, url);
},
expiresIn: 600, // 10 minutes
}),
/**
* Passkey plugin: WebAuthn registration + authentication.
*
* rpID is the effective domain the credential binds to. For
* cross-subdomain SSO on `*.mana.how`, this MUST be `mana.how`
* (the bare apex), not any subdomain; otherwise a passkey
* registered on app.mana.how won't work on calendar.mana.how.
* In dev this resolves to `localhost`.
*
* `origin` is the full URL(s) where WebAuthn calls are made
* from; a mismatch causes a SecurityError on verify. We pass
* every CORS origin by default.
*
* Note: passkeys don't replace passwords in this build; every
* account keeps its password, and passkey is additive. This
* sidesteps the "user lost all passkeys" recovery-flow that
* passwordless-only accounts would require.
*/
passkey({
rpID: webauthn.rpId,
rpName: webauthn.rpName,
origin: webauthn.origin,
}),
],
});
}
/**
* Export type for Better Auth instance
*/
export type BetterAuthInstance = ReturnType<typeof createBetterAuth>;

View file

@ -1,123 +0,0 @@
/**
* SSO config consistency tests.
*
* Locks in the relationship between three places that must agree about
* which origins are allowed to talk to mana-auth:
*
* 1. `TRUSTED_ORIGINS` in `better-auth.config.ts`: Better Auth's
* cross-origin allow-list. A missing entry causes silent login
* failure (request rejected before any handler runs).
* 2. `CORS_ORIGINS` env var on the `mana-auth` service in
* `docker-compose.macmini.yml`: Hono's CORS preflight check.
* A missing entry causes browsers to block the response.
* 3. The set of HTTPS origins in (1) must be a SUBSET of (2): every
* production origin Better Auth trusts must also pass CORS.
*
* The reverse is enforced as well: if docker-compose lists extra
* origins (e.g. legacy subdomains) that Better Auth doesn't trust,
* the second describe block below hard-fails so dead entries get
* cleaned up rather than accumulating forever.
*
* This test is referenced from the root CLAUDE.md as the canonical
* way to verify "I added a new app to SSO"; see "Adding an app to SSO"
* in `/CLAUDE.md`.
*/
import { describe, it, expect } from 'bun:test';
import { readFileSync } from 'node:fs';
import { join } from 'node:path';
import { TRUSTED_ORIGINS } from './sso-origins';
const REPO_ROOT = join(import.meta.dir, '../../../..');
const COMPOSE_FILE = join(REPO_ROOT, 'docker-compose.macmini.yml');
/**
* Pull the `CORS_ORIGINS` value out of the `mana-auth` service block in
* docker-compose.macmini.yml. We deliberately do a coarse string scan
* instead of a YAML parse to keep this test dependency-free; the
* compose file's `mana-auth:` block is conventional enough that the
* `service: ... CORS_ORIGINS: ...` window is unambiguous.
*/
function readManaAuthCorsOrigins(): string[] {
const yaml = readFileSync(COMPOSE_FILE, 'utf8');
// Find the mana-auth service definition
const serviceMatch = yaml.match(/^ {2}mana-auth:\s*$/m);
if (!serviceMatch) {
throw new Error('mana-auth service not found in docker-compose.macmini.yml');
}
const tail = yaml.slice(serviceMatch.index! + serviceMatch[0].length);
// CORS_ORIGINS appears within the next ~50 lines under environment:
const corsMatch = tail.match(/CORS_ORIGINS:\s*([^\n]+)/);
if (!corsMatch) {
throw new Error('CORS_ORIGINS not found in mana-auth service block');
}
return corsMatch[1]
.replace(/^["']|["']$/g, '')
.split(',')
.map((s) => s.trim())
.filter(Boolean);
}
describe('SSO trusted origins', () => {
it('contains the canonical mana.how origin', () => {
expect(TRUSTED_ORIGINS).toContain('https://mana.how');
});
it('contains the auth subdomain (Better Auth callback target)', () => {
expect(TRUSTED_ORIGINS).toContain('https://auth.mana.how');
});
it('contains localhost dev origins for local development', () => {
// Web dev server (5173) and the auth server itself (3001) — both
// are required for the local SSO loop to work end-to-end.
expect(TRUSTED_ORIGINS).toContain('http://localhost:5173');
expect(TRUSTED_ORIGINS).toContain('http://localhost:3001');
});
it('every production origin uses HTTPS', () => {
const httpOrigins = TRUSTED_ORIGINS.filter(
(o) => o.startsWith('http://') && !o.includes('localhost')
);
expect(httpOrigins).toEqual([]);
});
it('every production origin is on mana.how (no third-party hosts)', () => {
const offRoot = TRUSTED_ORIGINS.filter((o) => {
if (o.includes('localhost')) return false;
return !/^https:\/\/([a-z0-9-]+\.)?mana\.how$/.test(o);
});
expect(offRoot).toEqual([]);
});
it('has no duplicate entries', () => {
const set = new Set(TRUSTED_ORIGINS);
expect(set.size).toBe(TRUSTED_ORIGINS.length);
});
});
describe('SSO ↔ docker-compose CORS_ORIGINS consistency', () => {
const corsOrigins = readManaAuthCorsOrigins();
it('every HTTPS trusted origin is also in mana-auth CORS_ORIGINS', () => {
const productionTrusted = TRUSTED_ORIGINS.filter((o) => o.startsWith('https://'));
const missing = productionTrusted.filter((o) => !corsOrigins.includes(o));
// If this fails: add the listed origins to CORS_ORIGINS for the
// mana-auth service in docker-compose.macmini.yml.
expect(missing).toEqual([]);
});
it('mana-auth CORS_ORIGINS contains NO entries outside trustedOrigins (no dead drift)', () => {
// Hard-fail on extras: if CORS lists an origin Better Auth doesn't
// trust, the server accepts the preflight but then silently rejects
// the auth request — worst-of-both-worlds. Tightened from a warning
// to a hard assertion on 2026-04-19 per audit.
// Fix: either add the origin to TRUSTED_ORIGINS (in sso-origins.ts)
// or remove it from the mana-auth CORS_ORIGINS in
// docker-compose.macmini.yml.
const extras = corsOrigins.filter(
(o) =>
o.startsWith('https://') && !TRUSTED_ORIGINS.includes(o as (typeof TRUSTED_ORIGINS)[number])
);
expect(extras).toEqual([]);
});
});

View file

@ -1,39 +0,0 @@
/**
* Single source of truth for SSO trusted origins.
*
* Extracted into a standalone module (no Better Auth imports) so it can
* also be consumed by infra tooling (compose env generators, monitoring
* jobs, etc.) without pulling in the full auth stack.
*
* Better Auth rejects any cross-origin auth request whose Origin header
* isn't in this list: silent login failure on mis-configured apps. When
* adding a new top-level domain (NOT a path under mana.how), update both:
*
* 1. `PRODUCTION_TRUSTED_ORIGINS` below
* 2. The `mana-auth` `CORS_ORIGINS` env var in
* `docker-compose.macmini.yml` (must be a superset of this list)
*
* `sso-config.spec.ts` enforces both invariants. The unified app under
* `mana.how` does NOT need per-module subdomains here; modules are routed
* by path on the same origin.
*/
/** HTTPS origins Better Auth accepts in production. */
export const PRODUCTION_TRUSTED_ORIGINS = [
// Unified app — all productivity apps live under mana.how
'https://mana.how',
'https://auth.mana.how',
// Separate apps (not part of the unified app)
'https://whopxl.mana.how', // Games
'https://cardecky.mana.how', // Cardecky spaced-repetition spinoff (own SvelteKit container, not the unified app)
'https://cardecky-api.mana.how', // Cardecky marketplace + community backend (cards-server)
'https://memoro-app.mana.how', // Memoro web SPA (separate deploy under mana e.V.)
'https://zitare.mana.how', // Zitare app shell (SvelteKit static SPA, Cookie-SSO consumer)
'https://zitare-api.mana.how', // Zitare backend API (Hono+Bun, JWT-bearer consumer)
] as const;
/** Local dev origins — web dev server + the auth server itself. */
export const LOCAL_TRUSTED_ORIGINS = ['http://localhost:3001', 'http://localhost:5173'] as const;
/** Full trusted-origins list passed to Better Auth. */
export const TRUSTED_ORIGINS: string[] = [...PRODUCTION_TRUSTED_ORIGINS, ...LOCAL_TRUSTED_ORIGINS];

View file

@ -1,34 +0,0 @@
/**
* In-memory stores for cross-request state.
* Used to pass redirect URLs from registration/reset requests to email handlers.
*/
const TTL = 10 * 60 * 1000; // 10 minutes
function createStore() {
const map = new Map<string, { value: string; expires: number }>();
return {
set(key: string, value: string) {
map.set(key, { value, expires: Date.now() + TTL });
},
get(key: string): string | undefined {
const entry = map.get(key);
if (!entry) return undefined;
if (Date.now() > entry.expires) {
map.delete(key);
return undefined;
}
return entry.value;
},
delete(key: string) {
map.delete(key);
},
};
}
/** Stores source app URL for email verification redirects */
export const sourceAppStore = createStore();
/** Stores redirect URL for password reset callbacks */
export const passwordResetRedirectStore = createStore();
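The expiry behaviour can be exercised with a self-contained variant (the TTL is injectable here purely for illustration; the module hard-codes 10 minutes):

```typescript
// Same map-with-deadline pattern as createStore above, with the TTL
// as a parameter so the expiry branch can be tested directly.
function createTtlStore(ttlMs: number) {
  const map = new Map<string, { value: string; expires: number }>();
  return {
    set(key: string, value: string) {
      map.set(key, { value, expires: Date.now() + ttlMs });
    },
    get(key: string): string | undefined {
      const entry = map.get(key);
      if (!entry) return undefined;
      if (Date.now() > entry.expires) {
        map.delete(key); // lazy cleanup on read, like the original
        return undefined;
      }
      return entry.value;
    },
  };
}
```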

View file

@ -1,108 +0,0 @@
export interface Config {
port: number;
databaseUrl: string;
syncDatabaseUrl: string;
baseUrl: string;
cookieDomain: string;
nodeEnv: string;
serviceKey: string;
cors: { origins: string[] };
manaNotifyUrl: string;
manaCreditsUrl: string;
manaSubscriptionsUrl: string;
manaMailUrl: string;
/** Base64-encoded 32-byte AES-256 key encryption key (KEK). Wraps each
* user's master key in auth.encryption_vaults. Required in production;
* in development a deterministic dev KEK is auto-generated so the
* service still boots, with a loud warning. */
encryptionKek: string;
/**
* PEM-encoded RSA-OAEP-2048 public key for the mana-ai Mission
* Grant runner. The `/me/ai-mission-grant` endpoint wraps per-
* mission data keys with this public key so only mana-ai (holder
* of the paired private key) can unwrap them. Optional at boot:
* when absent, the endpoint returns 503 so the UI can degrade
* to foreground-only execution.
*/
missionGrantPublicKeyPem?: string;
/** WebAuthn passkey settings. `rpId` is the effective domain the
* authenticator binds credentials to: `mana.how` in prod (scopes
* passkeys across all subdomains) and `localhost` in dev. `origin`
* is the URL where the browser made the WebAuthn call; mismatches
* cause the verification step to fail with `invalid origin`. `name`
* is shown to the user in the authenticator prompt ("Register a
* passkey for Mana"). */
webauthn: {
rpId: string;
rpName: string;
origin: string | string[];
};
}
export function loadConfig(): Config {
const env = (key: string, fallback?: string) => process.env[key] || fallback || '';
const nodeEnv = env('NODE_ENV', 'development');
// Encryption KEK: in production a missing/short value is fatal — the
// vault service refuses to mint or unwrap any master keys without a
// real KEK. In development we auto-fill with a deterministic dev key
// so contributors can run the service without setting up a secret.
let encryptionKek = env('MANA_AUTH_KEK');
if (!encryptionKek) {
if (nodeEnv === 'production') {
throw new Error(
'mana-auth: MANA_AUTH_KEK env var is required in production. ' +
'Set it to a base64-encoded 32-byte random value: ' +
'`openssl rand -base64 32`'
);
}
// 32 zero bytes — deterministic, obviously not for production. The
// vault service logs a loud warning at startup when it sees this.
encryptionKek = 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=';
}
const corsOrigins = env('CORS_ORIGINS', 'http://localhost:5173').split(',');
// WebAuthn: derive sensible defaults from the auth service's
// BASE_URL + COOKIE_DOMAIN so a dev never has to set three extra
// env vars. In prod, override explicitly.
//
// rpId must be the bare effective domain (no protocol, no port).
// A mismatch between rpId and the client's origin hostname causes
// SecurityError at registration time. Deriving rpId from
// COOKIE_DOMAIN (already stripped of its leading dot for the shared
// cookie) keeps it honest — `.mana.how` → `mana.how` — and falls
// back to the hostname of BASE_URL.
const cookieDomain = env('COOKIE_DOMAIN');
const defaultRpId = cookieDomain
? cookieDomain.replace(/^\./, '')
: new URL(env('BASE_URL', 'http://localhost:3001')).hostname;
return {
port: parseInt(env('PORT', '3001'), 10),
databaseUrl: env('DATABASE_URL', 'postgresql://mana:devpassword@localhost:5432/mana_platform'),
syncDatabaseUrl: env(
'SYNC_DATABASE_URL',
'postgresql://mana:devpassword@localhost:5432/mana_sync'
),
baseUrl: env('BASE_URL', 'http://localhost:3001'),
cookieDomain,
nodeEnv,
serviceKey: env('MANA_SERVICE_KEY', 'dev-service-key'),
cors: { origins: corsOrigins },
manaNotifyUrl: env('MANA_NOTIFY_URL', 'http://localhost:3013'),
manaCreditsUrl: env('MANA_CREDITS_URL', 'http://localhost:3061'),
manaSubscriptionsUrl: env('MANA_SUBSCRIPTIONS_URL', 'http://localhost:3063'),
manaMailUrl: env('MANA_MAIL_URL', 'http://localhost:3042'),
encryptionKek,
missionGrantPublicKeyPem: env('MANA_AI_PUBLIC_KEY_PEM') || undefined,
webauthn: {
rpId: env('WEBAUTHN_RP_ID', defaultRpId),
rpName: env('WEBAUTHN_RP_NAME', 'Mana'),
// Pass every CORS origin as allowed WebAuthn origin by default
// so the same passkey works from any app subdomain. Override
// with WEBAUTHN_ORIGIN to restrict further.
origin: env('WEBAUTHN_ORIGIN') ? env('WEBAUTHN_ORIGIN').split(',') : corsOrigins,
},
};
}
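The rpId default above, extracted as a standalone helper (hypothetical, not exported by this file):

```typescript
// Strip the shared cookie domain's leading dot to get the WebAuthn
// rpId; with no cookie domain, fall back to the BASE_URL hostname.
function deriveRpId(cookieDomain: string, baseUrl: string): string {
  return cookieDomain
    ? cookieDomain.replace(/^\./, '')
    : new URL(baseUrl).hostname;
}
```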

View file

@ -1,15 +0,0 @@
import { drizzle } from 'drizzle-orm/postgres-js';
import postgres from 'postgres';
import * as schema from './schema/index';
let db: ReturnType<typeof drizzle<typeof schema>> | null = null;
export function getDb(databaseUrl: string) {
if (!db) {
const client = postgres(databaseUrl, { max: 20 });
db = drizzle(client, { schema });
}
return db;
}
export type Database = ReturnType<typeof getDb>;

View file

@ -1,32 +0,0 @@
import { text, timestamp, jsonb, integer, index } from 'drizzle-orm/pg-core';
import { authSchema, users } from './auth';
/**
* API Keys table for programmatic access to services.
* Keys are hashed using SHA-256 for security - the full key is only shown once at creation.
*/
export const apiKeys = authSchema.table(
'api_keys',
{
id: text('id').primaryKey(), // nanoid
userId: text('user_id')
.references(() => users.id, { onDelete: 'cascade' })
.notNull(),
name: text('name').notNull(), // User-friendly name for the key
keyPrefix: text('key_prefix').notNull(), // "sk_live_abc..." for display (first 12 chars)
keyHash: text('key_hash').notNull(), // SHA-256 hash of the full key
scopes: jsonb('scopes').$type<string[]>().default(['stt', 'tts']).notNull(), // Allowed service scopes
rateLimitRequests: integer('rate_limit_requests').default(60).notNull(), // Requests per window
rateLimitWindow: integer('rate_limit_window').default(60).notNull(), // Window in seconds
createdAt: timestamp('created_at', { withTimezone: true }).defaultNow().notNull(),
lastUsedAt: timestamp('last_used_at', { withTimezone: true }),
revokedAt: timestamp('revoked_at', { withTimezone: true }),
},
(table) => [
index('api_keys_user_id_idx').on(table.userId),
index('api_keys_key_hash_idx').on(table.keyHash),
]
);
export type ApiKey = typeof apiKeys.$inferSelect;
export type NewApiKey = typeof apiKeys.$inferInsert;
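The create-time flow the comments above imply might look like this (names are illustrative, not this repo's actual helpers):

```typescript
import { createHash, randomBytes } from 'node:crypto';

// Mint a key, keep only the display prefix and the SHA-256 hash; the
// full secret is returned once and never persisted.
function mintApiKey() {
  const fullKey = `sk_live_${randomBytes(24).toString('base64url')}`;
  return {
    fullKey, // show to the user exactly once
    keyPrefix: fullKey.slice(0, 12), // "sk_live_abc..." for display
    keyHash: createHash('sha256').update(fullKey).digest('hex'),
  };
}
```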

View file

@ -1,223 +0,0 @@
import {
pgSchema,
uuid,
text,
timestamp,
boolean,
jsonb,
index,
integer,
} from 'drizzle-orm/pg-core';
export const authSchema = pgSchema('auth');
// Enums live inside the auth schema so drizzle-kit push with
// `schemaFilter: ['auth']` can introspect them. Defining via pgEnum()
// would put them in public and cause spurious CREATE TYPE attempts on
// every push (the filter hides them, drizzle thinks they're missing).
export const userRoleEnum = authSchema.enum('user_role', ['user', 'admin', 'service']);
// Hierarchy: founder > alpha > beta > public > guest
export const accessTierEnum = authSchema.enum('access_tier', [
'guest',
'public',
'beta',
'alpha',
'founder',
]);
// `human` is the default for every real person. `persona` is for the auto-test
// users driven by the persona-runner — they go through the same
// auth/register/JWT pipeline as humans (no bypass), but admin UIs and
// product analytics filter them out by default. `system` is reserved for
// service principals (e.g. mana-ai's planner identity).
// See docs/plans/mana-mcp-and-personas.md (M2 — Persona-Primitives).
export const userKindEnum = authSchema.enum('user_kind', ['human', 'persona', 'system']);
// Users table (Better Auth schema)
export const users = authSchema.table('users', {
id: text('id').primaryKey(), // Better Auth generates nanoid
name: text('name').notNull(),
email: text('email').unique().notNull(),
emailVerified: boolean('email_verified').default(false).notNull(),
image: text('image'), // Better Auth uses 'image' not 'avatarUrl'
createdAt: timestamp('created_at', { withTimezone: true }).defaultNow().notNull(),
updatedAt: timestamp('updated_at', { withTimezone: true }).defaultNow().notNull(),
// Custom fields (not required by Better Auth)
role: userRoleEnum('role').default('user').notNull(),
accessTier: accessTierEnum('access_tier').default('public').notNull(),
kind: userKindEnum('kind').default('human').notNull(),
twoFactorEnabled: boolean('two_factor_enabled').default(false),
deletedAt: timestamp('deleted_at', { withTimezone: true }),
// Null = user hasn't finished the 3-screen onboarding flow yet (Name
// → Look → Templates). The flow is skippable, but even a skip sets
// this timestamp so we don't re-prompt. See docs/plans/onboarding-flow.md.
onboardingCompletedAt: timestamp('onboarding_completed_at', { withTimezone: true }),
// Public-feedback identity opt-ins (Phase 3.C of feedback-rewards-and-identity).
// Off by default — users stay anonymous as their tier-pseudonym ("Wachsame
// Eule #4528"). Opt-in shows the real `name` next to the pseudonym in the
// auth-required feedback feed only; the public-mirror NEVER exposes it.
feedbackShowRealName: boolean('feedback_show_real_name').default(false).notNull(),
// Karma += 1 per reaction received from another user, decremented on unreact.
// Drives the public Bronze/Silver/Gold/Platinum-Eulen tier badge.
feedbackKarma: integer('feedback_karma').default(0).notNull(),
});
// Sessions table (Better Auth schema)
export const sessions = authSchema.table('sessions', {
id: text('id').primaryKey(), // Better Auth generates nanoid
expiresAt: timestamp('expires_at', { withTimezone: true }).notNull(),
token: text('token').unique().notNull(),
createdAt: timestamp('created_at', { withTimezone: true }).defaultNow().notNull(),
updatedAt: timestamp('updated_at', { withTimezone: true }).defaultNow().notNull(),
ipAddress: text('ip_address'),
userAgent: text('user_agent'),
userId: text('user_id')
.references(() => users.id, { onDelete: 'cascade' })
.notNull(),
// Custom fields (not required by Better Auth)
refreshToken: text('refresh_token').unique(),
refreshTokenExpiresAt: timestamp('refresh_token_expires_at', { withTimezone: true }),
deviceId: text('device_id'),
deviceName: text('device_name'),
lastActivityAt: timestamp('last_activity_at', { withTimezone: true }).defaultNow(),
revokedAt: timestamp('revoked_at', { withTimezone: true }),
rememberMe: boolean('remember_me').default(false),
});
// Accounts table (for OAuth providers and credentials - Better Auth schema)
export const accounts = authSchema.table('accounts', {
id: text('id').primaryKey(), // Better Auth generates nanoid
accountId: text('account_id').notNull(), // Better Auth field
providerId: text('provider_id').notNull(), // Better Auth field (was 'provider')
userId: text('user_id')
.references(() => users.id, { onDelete: 'cascade' })
.notNull(),
accessToken: text('access_token'),
refreshToken: text('refresh_token'),
idToken: text('id_token'),
accessTokenExpiresAt: timestamp('access_token_expires_at', { withTimezone: true }),
refreshTokenExpiresAt: timestamp('refresh_token_expires_at', { withTimezone: true }),
scope: text('scope'),
password: text('password'), // Better Auth stores hashed password here for credential provider
createdAt: timestamp('created_at', { withTimezone: true }).defaultNow().notNull(),
updatedAt: timestamp('updated_at', { withTimezone: true }).defaultNow().notNull(),
});
// Verification table (Better Auth schema - for email verification, password reset)
export const verificationTokens = authSchema.table(
'verification',
{
id: text('id').primaryKey(), // Better Auth generates nanoid
identifier: text('identifier').notNull(), // Better Auth uses identifier (e.g., email)
value: text('value').notNull(), // Better Auth uses value (the token)
expiresAt: timestamp('expires_at', { withTimezone: true }).notNull(),
createdAt: timestamp('created_at', { withTimezone: true }).defaultNow().notNull(),
updatedAt: timestamp('updated_at', { withTimezone: true }).defaultNow().notNull(),
},
(table) => ({
identifierIdx: index('verification_identifier_idx').on(table.identifier),
})
);
// Password table (separate for security)
export const passwords = authSchema.table('passwords', {
userId: text('user_id')
.primaryKey()
.references(() => users.id, { onDelete: 'cascade' }),
hashedPassword: text('hashed_password').notNull(),
createdAt: timestamp('created_at', { withTimezone: true }).defaultNow().notNull(),
updatedAt: timestamp('updated_at', { withTimezone: true }).defaultNow().notNull(),
});
// Two-factor authentication
export const twoFactorAuth = authSchema.table('two_factor_auth', {
userId: text('user_id')
.primaryKey()
.references(() => users.id, { onDelete: 'cascade' }),
secret: text('secret').notNull(),
enabled: boolean('enabled').default(false).notNull(),
backupCodes: text('backup_codes').notNull(),
createdAt: timestamp('created_at', { withTimezone: true }).defaultNow().notNull(),
enabledAt: timestamp('enabled_at', { withTimezone: true }),
});
// Security events log
export const securityEvents = authSchema.table('security_events', {
id: uuid('id').primaryKey().defaultRandom(), // Our table, can keep UUID
userId: text('user_id').references(() => users.id, { onDelete: 'cascade' }),
eventType: text('event_type').notNull(),
ipAddress: text('ip_address'),
userAgent: text('user_agent'),
metadata: jsonb('metadata'),
createdAt: timestamp('created_at', { withTimezone: true }).defaultNow().notNull(),
});
// JWKS table (Better Auth JWT plugin - stores signing keys)
export const jwks = authSchema.table('jwks', {
id: text('id').primaryKey(),
publicKey: text('public_key').notNull(),
privateKey: text('private_key').notNull(),
createdAt: timestamp('created_at', { withTimezone: true }).defaultNow().notNull(),
});
// Passkeys table (WebAuthn credentials).
// Field names match `@better-auth/passkey`'s expected schema so the
// Drizzle adapter can write/read directly without a translation layer.
// Notably: the TS field is `credentialID` (capital I/D) even though
// the SQL column stays snake_case; the plugin dereferences by TS name.
// `transports` is a comma-separated string (not jsonb) because the
// plugin stores the AuthenticatorTransport[] as a CSV.
// `name` (was `friendlyName`) is user-provided.
// `lastUsedAt` is ours — populated by the wrapper on successful
// authentication; the plugin itself doesn't touch it.
export const passkeys = authSchema.table(
'passkeys',
{
id: text('id').primaryKey(), // nanoid
userId: text('user_id')
.references(() => users.id, { onDelete: 'cascade' })
.notNull(),
credentialID: text('credential_id').unique().notNull(), // base64url-encoded
publicKey: text('public_key').notNull(), // base64url-encoded COSE public key
counter: integer('counter').default(0).notNull(), // signature counter
deviceType: text('device_type').notNull(), // 'singleDevice' | 'multiDevice'
backedUp: boolean('backed_up').default(false).notNull(),
transports: text('transports'), // CSV of AuthenticatorTransport values
name: text('name'),
aaguid: text('aaguid'), // authenticator AAGUID (optional)
lastUsedAt: timestamp('last_used_at', { withTimezone: true }),
createdAt: timestamp('created_at', { withTimezone: true }).defaultNow().notNull(),
},
(table) => ({
userIdIdx: index('passkeys_user_id_idx').on(table.userId),
})
);
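Because `transports` is stored as a CSV string rather than jsonb, callers need a small (de)serialization step around the column. A sketch of what that could look like; the helper names are illustrative, not the wrapper's actual API:

```typescript
// Illustrative helpers for the CSV `transports` column described above.
// Names are hypothetical; the real wrapper may differ.
type AuthenticatorTransport = 'usb' | 'nfc' | 'ble' | 'internal' | 'hybrid';

export const serializeTransports = (t: AuthenticatorTransport[]): string | null =>
  t.length > 0 ? t.join(',') : null;

export const parseTransports = (csv: string | null): AuthenticatorTransport[] =>
  csv ? (csv.split(',') as AuthenticatorTransport[]) : [];
```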
// User settings table (synced across all apps)
export const userSettings = authSchema.table('user_settings', {
userId: text('user_id')
.primaryKey()
.references(() => users.id, { onDelete: 'cascade' }),
// Global defaults (applies to all apps)
// { nav: { desktopPosition, sidebarCollapsed }, theme: { mode, colorScheme }, locale }
globalSettings: jsonb('global_settings')
.default({
nav: { desktopPosition: 'top', sidebarCollapsed: false },
theme: { mode: 'system', colorScheme: 'ocean' },
locale: 'de',
})
.notNull(),
// Per-app overrides (applies to all devices)
// { "calendar": { nav: {...}, theme: {...} }, "chat": {...} }
appOverrides: jsonb('app_overrides').default({}).notNull(),
// Per-device settings (device-specific app settings)
// { "device-abc-123": { deviceName: "MacBook", deviceType: "desktop", lastSeen: "...", apps: { "calendar": { dayStartHour: 6, ... } } } }
deviceSettings: jsonb('device_settings').default({}).notNull(),
createdAt: timestamp('created_at', { withTimezone: true }).defaultNow().notNull(),
updatedAt: timestamp('updated_at', { withTimezone: true }).defaultNow().notNull(),
});
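One plausible reading of the three layers above is a shallow merge where device-level app settings win over per-app overrides, which win over the global defaults. A sketch under that assumption; the resolver name and merge depth are illustrative, not the sync service's actual contract:

```typescript
// Sketch of the layered-settings precedence implied by the comments
// above: global defaults < per-app overrides < per-device app settings.
// Shallow merge for illustration; the real resolver may merge deeper.
type Json = Record<string, unknown>;

export function resolveAppSettings(
  globalSettings: Json,
  appOverrides: Record<string, Json>,
  deviceSettings: Record<string, { apps?: Record<string, Json> }>,
  appId: string,
  deviceId: string
): Json {
  return {
    ...globalSettings,
    ...(appOverrides[appId] ?? {}),
    ...(deviceSettings[deviceId]?.apps?.[appId] ?? {}),
  };
}
```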


@@ -1,146 +0,0 @@
import { text, timestamp, smallint, integer, boolean, index } from 'drizzle-orm/pg-core';
import { authSchema, users } from './auth';
/**
* Per-user encryption vault.
*
* Holds the user's master key (MK) wrapped with the service-wide Key
* Encryption Key (KEK). The MK itself is never stored in plaintext.
* Browsers fetch the unwrapped MK at login via `GET /api/v1/me/encryption-key`
* and keep it in sessionStorage for the duration of the session.
*
* Wire format of the wrapped key:
* AES-GCM-256 over the raw 32-byte MK, with the KEK as key.
 *   wrapped_mk = AES-GCM-encrypt(MK, KEK, wrap_iv): ciphertext + 16-byte auth tag.
* The auth tag is appended to wrapped_mk by the Web Crypto / Bun crypto API.
*
* Why a separate table (and not a column on users)?
* - Lifecycle is independent: a user can rotate their vault without
* touching the user record, and vice versa.
* - Permissions: only the dedicated vault service touches this table,
* so it's easy to grant minimal access via row-level security and
* restrict the audit surface.
* - Future-proofing: when we add per-device sub-keys or recovery wraps,
* they sit naturally next to the master entry.
*
* RLS is added via raw SQL in the migration file alongside the table.
* The migration enables ROW LEVEL SECURITY + FORCE so that even the
* mana-auth service role cannot read another user's vault entry without
* going through `set_config('app.current_user_id', ...)` first.
*/
export const encryptionVaults = authSchema.table(
'encryption_vaults',
{
userId: text('user_id')
.primaryKey()
.references(() => users.id, { onDelete: 'cascade' }),
/** AES-GCM ciphertext of the raw 32-byte master key, wrapped with
* the server-side KEK. Includes the 16-byte authentication tag at
* the tail (Web Crypto convention).
*
* NULLABLE since Phase 9: a vault in zero-knowledge mode has no
* server-side wrap. The CHECK constraint
* `encryption_vaults_has_wrap` ensures at least one of
* (wrapped_mk, recovery_wrapped_mk) is always populated so the
* user can never be locked out. */
wrappedMk: text('wrapped_mk'),
/** 12-byte IV used for the wrap operation. Stored base64. NULLABLE
* in lockstep with wrappedMk. */
wrapIv: text('wrap_iv'),
/** Wire format version of the KEK wrap. Lets us migrate to a
* different KDF or AEAD later without rewriting every existing
* row at once. */
formatVersion: smallint('format_version').notNull().default(1),
    /** KEK identifier; currently always 'env-v1' (the env-loaded KEK).
* Will become a KMS key ARN / Vault path / etc. when we move
* off the env-var KEK. Stored so a future rotation knows which
* KEK to unwrap with. */
kekId: text('kek_id').notNull().default('env-v1'),
// ─── Phase 9: Recovery wrap (zero-knowledge opt-in) ───
//
// recovery_wrapped_mk holds the same master key, wrapped with a
// key derived from the user's 32-byte recovery secret via HKDF.
// The server NEVER sees the recovery secret itself — it only
// accepts the already-sealed blob from the client. The client
// generates + displays the recovery code at setup time and the
// user is responsible for backing it up.
//
// When zero_knowledge=true:
// - wrapped_mk + wrap_iv are NULL (the KEK wrap is gone)
// - recovery_wrapped_mk + recovery_iv are NOT NULL
// - GET /key returns the recovery blob, NOT a plaintext MK
// - The server is computationally incapable of decrypting the
// user's data even with full DB + KEK access
/** AES-GCM ciphertext of the raw 32-byte master key, wrapped with
* the user's recovery-derived key. NULL until the user opts into
* recovery wrap via POST /recovery-wrap. */
recoveryWrappedMk: text('recovery_wrapped_mk'),
/** 12-byte IV for the recovery wrap. Stored base64. Paired with
* recoveryWrappedMk via the encryption_vaults_wrap_iv_pair
* constraint. */
recoveryIv: text('recovery_iv'),
/** Wire format version of the recovery wrap. */
recoveryFormatVersion: smallint('recovery_format_version').notNull().default(1),
/** Timestamp of when the user first set their recovery wrap. */
recoverySetAt: timestamp('recovery_set_at', { withTimezone: true }),
/** True iff the user has opted into zero-knowledge mode. When set,
* the server-side wrapped_mk is gone and the user MUST provide
* their recovery code to unlock the vault. */
zeroKnowledge: boolean('zero_knowledge').notNull().default(false),
createdAt: timestamp('created_at', { withTimezone: true }).defaultNow().notNull(),
rotatedAt: timestamp('rotated_at', { withTimezone: true }),
},
(table) => [index('encryption_vaults_user_id_idx').on(table.userId)]
);
export type EncryptionVault = typeof encryptionVaults.$inferSelect;
export type NewEncryptionVault = typeof encryptionVaults.$inferInsert;
/**
* Append-only audit trail of vault accesses (init, fetch, rotate). Used
* for security investigations and compliance reporting. Not exposed to
 * users; only the admin endpoints can read this.
*
* Why a separate table instead of dumping into a generic audit log?
* - Encryption vault access is the highest-sensitivity operation in
* the entire system; a dedicated table makes the threat-monitoring
* query trivial ("show me all fetches in the last 24h grouped by
* IP / user-agent").
* - Retention can be tuned independently (longer than ordinary auth
* logs to support late-discovered breaches).
*/
export const encryptionVaultAudit = authSchema.table(
'encryption_vault_audit',
{
id: text('id').primaryKey(), // nanoid
userId: text('user_id')
.references(() => users.id, { onDelete: 'cascade' })
.notNull(),
action: text('action').notNull(), // 'init' | 'fetch' | 'rotate' | 'failed_fetch'
ipAddress: text('ip_address'),
userAgent: text('user_agent'),
/** Free-form context (e.g. failure reason, format version touched). */
context: text('context'),
/** HTTP status returned to the client — useful for spotting probing. */
status: integer('status').notNull(),
createdAt: timestamp('created_at', { withTimezone: true }).defaultNow().notNull(),
},
(table) => [
index('encryption_vault_audit_user_id_idx').on(table.userId),
index('encryption_vault_audit_created_at_idx').on(table.createdAt),
]
);
export type EncryptionVaultAudit = typeof encryptionVaultAudit.$inferSelect;
export type NewEncryptionVaultAudit = typeof encryptionVaultAudit.$inferInsert;


@@ -1,7 +0,0 @@
export * from './auth';
export * from './organizations';
export * from './api-keys';
export * from './login-attempts';
export * from './encryption-vaults';
export * from './spaces';
export * from './personas';


@@ -1,22 +0,0 @@
/**
* Login Attempts Schema
*
* Tracks login attempts for account lockout functionality.
* Failed attempts within a time window trigger account lockout.
*/
import { pgSchema, text, boolean, timestamp, index, serial } from 'drizzle-orm/pg-core';
const authSchema = pgSchema('auth');
export const loginAttempts = authSchema.table(
'login_attempts',
{
id: serial('id').primaryKey(),
email: text('email').notNull(),
ipAddress: text('ip_address'),
successful: boolean('successful').default(false).notNull(),
attemptedAt: timestamp('attempted_at', { withTimezone: true }).defaultNow().notNull(),
},
(table) => [index('login_attempts_email_attempted_at_idx').on(table.email, table.attemptedAt)]
);
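The lockout rule the header describes reduces to a threshold check over rows covered by the index above. A sketch with illustrative thresholds; the actual window and failure limit live in the service configuration, not here:

```typescript
// Hypothetical lockout check over recent login_attempts rows. The
// 5-failures / 15-minute values are illustrative defaults, not the
// service's real configuration.
interface LoginAttempt {
  successful: boolean;
  attemptedAt: Date;
}

export function isLockedOut(
  attempts: LoginAttempt[],
  now: Date,
  maxFailures = 5,
  windowMs = 15 * 60 * 1000
): boolean {
  const failures = attempts.filter(
    (a) => !a.successful && now.getTime() - a.attemptedAt.getTime() <= windowMs
  );
  return failures.length >= maxFailures;
}
```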


@@ -1,72 +0,0 @@
import { pgSchema, text, timestamp, jsonb, index } from 'drizzle-orm/pg-core';
import { authSchema, users } from './auth';
/**
* Better Auth Organization Tables
* These tables follow Better Auth's organization plugin schema requirements
* @see https://www.better-auth.com/docs/plugins/organization
*
* Note: Better Auth uses TEXT for IDs (nanoid/ULID), but we use UUID for users.
* The foreign key constraints will be added via raw SQL migration to handle the type difference.
*/
// Organizations table
export const organizations = authSchema.table(
'organizations',
{
id: text('id').primaryKey(), // Better Auth uses TEXT IDs (ULIDs/nanoids)
name: text('name').notNull(),
slug: text('slug').unique(),
logo: text('logo'),
metadata: jsonb('metadata'), // Additional organization data
createdAt: timestamp('created_at', { withTimezone: true }).defaultNow().notNull(),
updatedAt: timestamp('updated_at', { withTimezone: true }).defaultNow().notNull(),
},
(table) => ({
slugIdx: index('organizations_slug_idx').on(table.slug),
})
);
// Members table (links users to organizations with roles)
export const members = authSchema.table(
'members',
{
id: text('id').primaryKey(), // Better Auth uses TEXT IDs
organizationId: text('organization_id')
.references(() => organizations.id, { onDelete: 'cascade' })
.notNull(),
userId: text('user_id').notNull(), // References auth.users.id (UUID cast to TEXT)
role: text('role').notNull(), // 'owner', 'admin', 'member', or custom roles
createdAt: timestamp('created_at', { withTimezone: true }).defaultNow().notNull(),
},
(table) => ({
organizationIdIdx: index('members_organization_id_idx').on(table.organizationId),
userIdIdx: index('members_user_id_idx').on(table.userId),
organizationUserIdx: index('members_organization_user_idx').on(
table.organizationId,
table.userId
),
})
);
// Invitations table (for inviting users to organizations)
export const invitations = authSchema.table(
'invitations',
{
id: text('id').primaryKey(), // Better Auth uses TEXT IDs
organizationId: text('organization_id')
.references(() => organizations.id, { onDelete: 'cascade' })
.notNull(),
email: text('email').notNull(),
role: text('role').notNull(), // Role they'll have when they accept
status: text('status').notNull(), // 'pending', 'accepted', 'rejected', 'canceled'
expiresAt: timestamp('expires_at', { withTimezone: true }).notNull(),
inviterId: text('inviter_id'), // References auth.users.id (UUID cast to TEXT)
createdAt: timestamp('created_at', { withTimezone: true }).defaultNow().notNull(),
},
(table) => ({
organizationIdIdx: index('invitations_organization_id_idx').on(table.organizationId),
emailIdx: index('invitations_email_idx').on(table.email),
statusIdx: index('invitations_status_idx').on(table.status),
})
);


@@ -1,144 +0,0 @@
/**
* Persona infrastructure schemas backing the M2 phase of the
* MCP/Personas plan (`docs/plans/mana-mcp-and-personas.md`).
*
* A Persona is a real Mana user (`auth.users` row, `kind = 'persona'`)
* with extra metadata describing how the persona-runner should drive it:
 * archetype, system prompt, module mix, tick cadence. This is a
 * test-infrastructure concern: it runs in dev/staging today and may be
 * enabled in prod once the runner has settled.
*
* Three tables, all in the `auth` namespace because they're 1:1-coupled
* to user lifecycle:
 *  - `personas`: per-persona descriptor (1:1 with users)
 *  - `persona_actions`: audit trail of every tool call the runner made
 *  - `persona_feedback`: structured 1–5 ratings the runner emits per tick
*
* Why `auth.*` rather than `platform.*`: personas extend users, the FK
* lives here, and mana-auth is the natural CRUD owner. Cross-schema
* joins for nothing.
*/
import { jsonb, integer, smallint, text, timestamp, index } from 'drizzle-orm/pg-core';
import { authSchema, users } from './auth';
// ─── personas ─────────────────────────────────────────────────────
export const personas = authSchema.table(
'personas',
{
userId: text('user_id')
.primaryKey()
.references(() => users.id, { onDelete: 'cascade' }),
/**
* Short stable identifier for the persona's behavioural profile,
* e.g. `'adhd-student'`, `'ceo-busy'`, `'creative-parent'`. Used
* by analytics to bucket actions/feedback across personas of the
* same archetype.
*/
archetype: text('archetype').notNull(),
/**
* Long-form system prompt for the persona-runner. Includes
     * demographics, motivations, current life context: whatever the
* Claude SDK call should treat as "this is who you are when you
* use Mana today".
*/
systemPrompt: text('system_prompt').notNull(),
/**
* Hint to the runner about which modules the persona reaches for.
     * Shape: `{ todo: 0.3, journal: 0.3, notes: 0.4 }`. These are
     * relative weights, not strict probabilities. The runner is free to ignore
* this if Claude decides differently in the moment.
*/
moduleMix: jsonb('module_mix').notNull(),
/**
* How often the runner should give this persona a turn.
     * `daily`: every day around the persona's "tick window"
     * `weekdays`: Mon–Fri only
     * `hourly`: every hour (used for high-frequency stress tests)
*/
tickCadence: text('tick_cadence').notNull().default('daily'),
lastActiveAt: timestamp('last_active_at', { withTimezone: true }),
createdAt: timestamp('created_at', { withTimezone: true }).defaultNow().notNull(),
},
(table) => [index('personas_archetype_idx').on(table.archetype)]
);
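The cadence values above could be interpreted by the runner along these lines. A sketch only, UTC-based throughout; the real runner's scheduling (tick windows, timezones) may differ:

```typescript
// Sketch of how a runner might interpret `tick_cadence`. All UTC-based
// and illustrative; the real runner's scheduling may differ.
export function isTickDue(
  cadence: 'daily' | 'weekdays' | 'hourly',
  now: Date,
  lastActiveAt: Date | null
): boolean {
  if (cadence === 'hourly') {
    return !lastActiveAt || now.getTime() - lastActiveAt.getTime() >= 60 * 60 * 1000;
  }
  if (cadence === 'weekdays') {
    const day = now.getUTCDay();
    if (day === 0 || day === 6) return false; // Sunday / Saturday
  }
  // daily / weekdays: due at most once per UTC calendar day
  const sameUtcDay =
    lastActiveAt && lastActiveAt.toISOString().slice(0, 10) === now.toISOString().slice(0, 10);
  return !sameUtcDay;
}
```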
// ─── persona_actions ──────────────────────────────────────────────
export const personaActions = authSchema.table(
'persona_actions',
{
id: text('id').primaryKey(),
personaId: text('persona_id')
.notNull()
.references(() => personas.userId, { onDelete: 'cascade' }),
/**
* Groups every tool call within a single runner tick. Lets the
* dashboard show "Anna's Tuesday session: created 2 todos,
* read 3 articles, wrote 1 journal entry".
*/
tickId: text('tick_id').notNull(),
toolName: text('tool_name').notNull(),
/**
* SHA-256 of the JSON-stringified input. Lets analytics dedupe
* "the same tool with the same args was called N times across
* personas this week" without reconstructing inputs from the
* (potentially large, potentially encrypted) raw values.
*/
inputHash: text('input_hash'),
/** `'ok'` on success, `'error'` on any thrown handler exception. */
result: text('result').notNull(),
errorMessage: text('error_message'),
latencyMs: integer('latency_ms'),
createdAt: timestamp('created_at', { withTimezone: true }).defaultNow().notNull(),
},
(table) => [
index('persona_actions_persona_idx').on(table.personaId, table.createdAt),
index('persona_actions_tick_idx').on(table.tickId),
]
);
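The `input_hash` dedupe key described above is cheap to compute. A sketch of the documented approach (SHA-256 over the JSON-stringified input); note that plain `JSON.stringify` is key-order-sensitive, so equal-but-reordered inputs hash differently:

```typescript
import { createHash } from 'node:crypto';

// SHA-256 over the JSON-stringified tool input, as described above.
// JSON.stringify is key-order-sensitive; whether that matters for
// dedupe accuracy is up to the runner.
export function inputHash(input: unknown): string {
  return createHash('sha256').update(JSON.stringify(input)).digest('hex');
}
```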
// ─── persona_feedback ─────────────────────────────────────────────
export const personaFeedback = authSchema.table(
'persona_feedback',
{
id: text('id').primaryKey(),
personaId: text('persona_id')
.notNull()
.references(() => personas.userId, { onDelete: 'cascade' }),
tickId: text('tick_id').notNull(),
/** Module the rating applies to (e.g. `'todo'`, `'journal'`). */
module: text('module').notNull(),
/**
     * 1–5. The runner asks Claude (in-character as the persona) to
* rate the modules they used in this tick. SMALLINT is enough
* range and signals to readers that the value is bounded.
*/
rating: smallint('rating').notNull(),
/** Free-text follow-up. May be German since most personas speak it. */
notes: text('notes'),
createdAt: timestamp('created_at', { withTimezone: true }).defaultNow().notNull(),
},
(table) => [
index('persona_feedback_module_idx').on(table.module, table.createdAt),
index('persona_feedback_persona_idx').on(table.personaId, table.createdAt),
]
);


@@ -1,96 +0,0 @@
/**
* Spaces Postgres schema extensions for Better Auth organizations.
*
* The canonical SpaceType + SpaceMetadata contract lives in
* `@mana/shared-types`; the organization itself lives in the `auth` schema
* (see organizations.ts). This file adds the *server-side* extensions that
* don't belong in the client-synced world:
*
 * - `spaces.credentials`: per-space OAuth tokens + API keys
 *   (LinkedIn, Stripe, Twilio, SMTP, …). Must live server-side because
 *   we never want them in IndexedDB / Dexie.
 * - `spaces.module_permissions`: role × module × action matrix. Lets a
 *   club's trainer read `calendar` but not `club-finance`, for example.
*
* See docs/plans/spaces-foundation.md.
*/
import { pgSchema, text, timestamp, boolean, index, primaryKey } from 'drizzle-orm/pg-core';
import { organizations } from './organizations';
export const spacesSchema = pgSchema('spaces');
/**
* Per-space external credentials.
*
* Tokens are encrypted at rest with the service-wide KEK (same mechanism
* as auth.encryption_vaults). The `(space_id, provider)` composite key
 * means one token per provider per space: a second LinkedIn OAuth flow
* for the same Edisconet space overwrites the first.
*
 * No FK on `provider`: it's a free-form string (`linkedin`, `stripe`,
 * `twilio_sms`, `twitter`, …) so we can add integrations without schema
* migrations. Service code handles the provider-specific payload.
*/
export const spaceCredentials = spacesSchema.table(
'credentials',
{
spaceId: text('space_id')
.references(() => organizations.id, { onDelete: 'cascade' })
.notNull(),
provider: text('provider').notNull(),
accessTokenEncrypted: text('access_token_encrypted').notNull(),
refreshTokenEncrypted: text('refresh_token_encrypted'),
expiresAt: timestamp('expires_at', { withTimezone: true }),
scopes: text('scopes').array(),
providerAccountId: text('provider_account_id'),
// Free-form per-provider metadata (org name, page id, webhook secret).
// Kept as text JSON to avoid pulling the jsonb type in — callers
// parse/serialize explicitly.
metadataJson: text('metadata_json'),
createdAt: timestamp('created_at', { withTimezone: true }).defaultNow().notNull(),
updatedAt: timestamp('updated_at', { withTimezone: true }).defaultNow().notNull(),
},
(table) => ({
pk: primaryKey({ columns: [table.spaceId, table.provider] }),
spaceIdx: index('space_credentials_space_idx').on(table.spaceId),
})
);
/**
* Role × module permission matrix for a space.
*
* Example rows for a club:
* (org_123, 'owner', 'club-finance', true, true, true)
* (org_123, 'admin', 'club-finance', true, true, false)
* (org_123, 'trainer', 'club-finance', false, false, false) -- explicit deny
* (org_123, 'trainer', 'calendar', true, true, false)
*
* If no row exists for `(space, role, module)`, the fallback is the
* default derived from the space type (see SPACE_MODULE_ALLOWLIST in
* shared-types) + role tier (owner > admin > member). These rows only
* exist when the space owner has customized the default.
*/
export const spaceModulePermissions = spacesSchema.table(
'module_permissions',
{
spaceId: text('space_id')
.references(() => organizations.id, { onDelete: 'cascade' })
.notNull(),
role: text('role').notNull(),
moduleId: text('module_id').notNull(),
canRead: boolean('can_read').notNull().default(true),
canWrite: boolean('can_write').notNull().default(false),
canAdmin: boolean('can_admin').notNull().default(false),
updatedAt: timestamp('updated_at', { withTimezone: true }).defaultNow().notNull(),
},
(table) => ({
pk: primaryKey({ columns: [table.spaceId, table.role, table.moduleId] }),
spaceModuleIdx: index('space_module_permissions_space_module_idx').on(
table.spaceId,
table.moduleId
),
})
);
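The fallback described above can be sketched as a pure function: an explicit row always wins (including an explicit deny); otherwise the default derives from the space type's module allowlist plus the role tier. Tier values and the derived defaults are illustrative assumptions, not shared-types' actual logic:

```typescript
// Hypothetical permission resolution for (space, role, module).
// An explicit row wins outright; otherwise fall back to allowlist
// membership plus role tier (owner > admin > member). Illustrative only.
type Perm = { canRead: boolean; canWrite: boolean; canAdmin: boolean };

const ROLE_TIER: Record<string, number> = { owner: 3, admin: 2, member: 1 };

export function resolvePermission(
  explicitRow: Perm | undefined,
  moduleAllowedForSpaceType: boolean,
  role: string
): Perm {
  if (explicitRow) return explicitRow; // customization wins, including explicit deny
  if (!moduleAllowedForSpaceType) return { canRead: false, canWrite: false, canAdmin: false };
  const tier = ROLE_TIER[role] ?? 1;
  return { canRead: true, canWrite: tier >= 2, canAdmin: tier >= 3 };
}
```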


@@ -1,91 +0,0 @@
/**
* Email sending via mana-notify service.
* All emails are routed through the central notification service
* which handles SMTP, retries, and queuing.
*/
import { logger } from '@mana/shared-hono';
const NOTIFY_URL = process.env.MANA_NOTIFY_URL || 'http://localhost:3013';
const SERVICE_KEY = process.env.MANA_SERVICE_KEY || 'dev-service-key';
async function send(to: string, subject: string, html: string): Promise<boolean> {
try {
const res = await fetch(`${NOTIFY_URL}/api/v1/notifications/send`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-Service-Key': SERVICE_KEY,
},
body: JSON.stringify({
channel: 'email',
appId: 'mana-auth',
recipient: to,
subject,
body: html,
}),
});
if (!res.ok) {
logger.error('mana-notify returned non-ok', {
status: res.status,
body: await res.text(),
recipient: to,
subject,
});
return false;
}
return true;
} catch (error) {
logger.error('mana-notify fetch failed', {
error: error instanceof Error ? { message: error.message, stack: error.stack } : error,
recipient: to,
subject,
});
return false;
}
}
export async function sendVerificationEmail(email: string, url: string, name?: string) {
return send(
email,
'E-Mail bestätigen — Mana',
`<p>Hallo ${name || ''},</p><p>Bitte bestätige deine E-Mail-Adresse:</p><p><a href="${url}">E-Mail bestätigen</a></p><p>Oder kopiere diesen Link: ${url}</p>`
);
}
export async function sendPasswordResetEmail(email: string, url: string, name?: string) {
return send(
email,
'Passwort zurücksetzen — Mana',
`<p>Hallo ${name || ''},</p><p>Klicke hier um dein Passwort zurückzusetzen:</p><p><a href="${url}">Passwort zurücksetzen</a></p><p>Der Link ist 1 Stunde gültig.</p>`
);
}
export async function sendInvitationEmail(
email: string,
orgName: string,
inviterName: string,
url: string
) {
return send(
email,
`Einladung: ${orgName} — Mana`,
`<p>${inviterName} hat dich zu <strong>${orgName}</strong> eingeladen.</p><p><a href="${url}">Einladung annehmen</a></p>`
);
}
export async function sendMagicLinkEmail(email: string, url: string) {
return send(
email,
'Login-Link — Mana',
`<p>Klicke hier um dich anzumelden:</p><p><a href="${url}">Jetzt anmelden</a></p><p>Der Link ist 10 Minuten gültig.</p>`
);
}
export async function sendAccountDeletionEmail(email: string, name?: string) {
return send(
email,
'Konto gelöscht — Mana',
`<p>Hallo ${name || ''},</p><p>Dein Mana-Konto wurde erfolgreich gelöscht. Alle deine Daten wurden entfernt.</p>`
);
}


@@ -1,254 +0,0 @@
/**
 * mana-auth: central authentication service
*
* Hono + Bun runtime. Replaces NestJS-based mana-auth.
* Uses Better Auth natively (fetch-based handler, no Express conversion).
*/
// Sentry/Glitchtip — must run before the rest of the module loads so
// uncaughtException + unhandledRejection get hooked. No-op when
// GLITCHTIP_DSN is unset (e.g. local dev).
import { initErrorTracking, captureException } from '@mana/shared-error-tracking';
initErrorTracking({
serviceName: 'mana-auth',
dsn: process.env.GLITCHTIP_DSN,
environment: process.env.NODE_ENV,
release: process.env.GIT_SHA,
});
import { Hono } from 'hono';
import { cors } from 'hono/cors';
import { trimTrailingSlash } from 'hono/trailing-slash';
import { loadConfig } from './config';
import { getDb } from './db/connection';
import { createBetterAuth } from './auth/better-auth.config';
import {
serviceErrorHandler as errorHandler,
initLogger,
requestLogger,
logger,
} from '@mana/shared-hono';
import { jwtAuth } from './middleware/jwt-auth';
import { serviceAuth } from './middleware/service-auth';
import { SecurityEventsService, AccountLockoutService } from './services/security';
import { PasskeyRateLimitService } from './services/passkey-rate-limit';
import { SignupLimitService } from './services/signup-limit';
import { ApiKeysService } from './services/api-keys';
import { UserDataService } from './services/user-data';
import { EncryptionVaultService } from './services/encryption-vault';
import { MissionGrantService } from './services/encryption-vault/mission-grant';
import { loadKek } from './services/encryption-vault/kek';
import { createAuthRoutes } from './routes/auth';
import { createPasskeyRoutes } from './routes/passkeys';
import { createGuildRoutes } from './routes/guilds';
import { createApiKeyRoutes, createApiKeyValidationRoute } from './routes/api-keys';
import { createMeRoutes } from './routes/me';
import { createMeBootstrapRoutes } from './routes/me-bootstrap';
import { createOnboardingRoutes } from './routes/onboarding';
import { createEncryptionVaultRoutes } from './routes/encryption-vault';
import { createAiMissionGrantRoutes } from './routes/ai-mission-grant';
import { createSettingsRoutes } from './routes/settings';
import { createAdminRoutes } from './routes/admin';
import { createAdminPersonasRoutes } from './routes/admin-personas';
import { createInternalPersonasRoutes } from './routes/internal-personas';
// ─── Bootstrap ──────────────────────────────────────────────
initLogger('mana-auth');
const config = loadConfig();
const db = getDb(config.databaseUrl);
const auth = createBetterAuth(config.databaseUrl, config.syncDatabaseUrl, config.webauthn);
// Load the Key Encryption Key before any vault operation can run.
// Top-level await is supported by Bun. Throws if MANA_AUTH_KEK is
// missing in production or malformed in any environment.
await loadKek(config.encryptionKek);
// Initialize services
const security = new SecurityEventsService(db);
const lockout = new AccountLockoutService(db);
const passkeyRateLimit = new PasskeyRateLimitService();
// Periodic sweep of expired passkey rate-limit buckets. 5 min cadence
// is short enough that high IP churn doesn't balloon memory, long
// enough that the overhead is negligible. setInterval + unref so the
// sweep doesn't keep the process alive on shutdown (Bun implements
// unref but Node typings don't always pick it up — the optional
// chain makes it safe).
setInterval(() => passkeyRateLimit.sweep(), 5 * 60 * 1000)?.unref?.();
const signupLimit = new SignupLimitService(db);
const apiKeysService = new ApiKeysService(db);
const userDataService = new UserDataService(db, config);
const encryptionVaultService = new EncryptionVaultService(db);
const missionGrantService = new MissionGrantService(
encryptionVaultService,
config.missionGrantPublicKeyPem
);
// ─── App ────────────────────────────────────────────────────
const app = new Hono();
app.onError((err, c) => {
// Surface non-HTTPException errors to Glitchtip with request context.
// HTTPException is intentional 4xx/422 etc. — not an "error" worth alerting on.
const isHttpException = err.constructor.name === 'HTTPException';
if (!isHttpException) {
captureException(err, {
path: c.req.path,
method: c.req.method,
query: Object.fromEntries(new URL(c.req.url).searchParams),
});
}
return errorHandler(err, c);
});
app.use('*', requestLogger());
// Defense-in-depth for clients that accidentally request the trailing-slash
// form of a route (e.g. `/api/v1/me/onboarding/`). Hono's nested routers
// don't match the prefix-with-trailing-slash variant of a sub-app's root,
// so without this middleware those clients get a 404 even though the same
// path without the slash would work. The middleware trims the slash and
// 301-redirects on GET/HEAD, and only when the untrimmed lookup already
// produced a 404, so the legitimate root path `/` is never touched.
app.use('*', trimTrailingSlash());
app.use(
'*',
cors({
origin: config.cors.origins,
credentials: true,
allowHeaders: ['Content-Type', 'Authorization', 'X-Service-Key', 'X-App-Id'],
exposeHeaders: ['Set-Cookie'],
})
);
// ─── Health ─────────────────────────────────────────────────
app.get('/health', (c) =>
c.json({ status: 'ok', service: 'mana-auth', timestamp: new Date().toISOString() })
);
// ─── Better Auth Native Handler ─────────────────────────────
app.all('/api/auth/*', async (c) => auth.handler(c.req.raw));
app.get('/.well-known/openid-configuration', async (c) => auth.handler(c.req.raw));
// ─── Custom Auth Endpoints ──────────────────────────────────
app.route('/api/v1/auth', createAuthRoutes(auth, config, security, lockout, signupLimit));
app.route(
'/api/v1/auth/passkeys',
createPasskeyRoutes(auth, config, config.webauthn, security, lockout, passkeyRateLimit)
);
// ─── Guilds ─────────────────────────────────────────────────
app.use('/api/v1/gilden/*', jwtAuth(config.baseUrl));
app.route('/api/v1/gilden', createGuildRoutes(auth, config));
// ─── API Keys ───────────────────────────────────────────────
app.use('/api/v1/api-keys/*', jwtAuth(config.baseUrl));
app.route('/api/v1/api-keys', createApiKeyRoutes(apiKeysService));
app.route('/api/v1/api-keys', createApiKeyValidationRoute(apiKeysService));
// ─── Me (GDPR) ──────────────────────────────────────────────
app.use('/api/v1/me/*', jwtAuth(config.baseUrl));
app.route('/api/v1/me', createMeRoutes(userDataService, db));
// ─── Encryption vault (per-user master key custody) ────────
// Mounted under /me so it inherits the JWT middleware above and shows
// up in the same self-service surface as the GDPR endpoints.
app.route('/api/v1/me/encryption-vault', createEncryptionVaultRoutes(encryptionVaultService));
// ─── AI Mission Grant ──────────────────────────────────────
// Mints per-mission Key-Grants so the mana-ai background runner can
// decrypt scoped encrypted records. Under /me so it inherits the JWT
// middleware above. See docs/plans/ai-mission-key-grant.md.
app.route('/api/v1/me/ai-mission-grant', createAiMissionGrantRoutes(missionGrantService));
// ─── Onboarding ────────────────────────────────────────────
// Per-user "did you finish the 3-screen onboarding flow yet" state.
// See docs/plans/onboarding-flow.md.
app.route('/api/v1/me/onboarding', createOnboardingRoutes(db));
// ─── Singleton Bootstrap ────────────────────────────────────
// Idempotent reconciliation endpoint for the per-user `userContext`
// singleton. Webapp boot calls this once; the signup-time hook remains
// the happy path. See docs/plans/sync-field-meta-overhaul.md and
// routes/me-bootstrap.ts.
app.route('/api/v1/me/bootstrap-singletons', createMeBootstrapRoutes(db, config.syncDatabaseUrl));
// ─── Settings ──────────────────────────────────────────────
app.use('/api/v1/settings/*', jwtAuth(config.baseUrl));
// The wildcard pattern doesn't match the bare mount path itself, so the
// root route `/api/v1/settings` needs its own registration.
app.use('/api/v1/settings', jwtAuth(config.baseUrl));
app.route('/api/v1/settings', createSettingsRoutes(db));
// ─── Admin ──────────────────────────────────────────────────
app.use('/api/v1/admin/*', jwtAuth(config.baseUrl));
app.route('/api/v1/admin', createAdminRoutes(db, userDataService));
app.route('/api/v1/admin/personas', createAdminPersonasRoutes(db, auth));
// ─── Internal API ───────────────────────────────────────────
app.use('/api/v1/internal/*', serviceAuth(config.serviceKey));
app.route('/api/v1/internal/personas', createInternalPersonasRoutes(db));
app.get('/api/v1/internal/org/:orgId/member/:userId', async (c) => {
const { orgId, userId } = c.req.param();
const { members } = await import('./db/schema/organizations');
const { eq, and } = await import('drizzle-orm');
const [member] = await db
.select()
.from(members)
.where(and(eq(members.organizationId, orgId), eq(members.userId, userId)))
.limit(1);
return c.json({ isMember: !!member, role: member?.role || '' });
});
/**
* List every Space (organization) the given user is a member of. Used by
* mana-sync to pass the current user's space-membership list into the
* `app.current_user_space_ids` session setting so the multi-member RLS
* policy can let space co-members read each other's records.
*
 * Returns the memberships as `{organizationId, role}` objects; mana-sync
 * only cares about the set of organization ids here, not names or roles.
 * Cached 5 min client-side.
*/
app.get('/api/v1/internal/users/:userId/memberships', async (c) => {
const { userId } = c.req.param();
const { members } = await import('./db/schema/organizations');
const { eq } = await import('drizzle-orm');
const rows = await db
.select({ organizationId: members.organizationId, role: members.role })
.from(members)
.where(eq(members.userId, userId));
return c.json({
userId,
memberships: rows.map((r) => ({ organizationId: r.organizationId, role: r.role })),
});
});
// ─── Login Page (OIDC) ─────────────────────────────────────
app.get('/login', (c) => {
const q = c.req.query();
// Escape the query-supplied callbackURL before interpolating it into the
// HTML; without this, a crafted ?callbackURL= value could inject markup.
const escapeHtml = (s: string) =>
s.replace(/[&<>"']/g, (ch) =>
({ '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;' } as Record<string, string>)[ch]!
);
return c.html(`<!DOCTYPE html>
<html><head><title>Mana Login</title></head>
<body style="font-family:system-ui;max-width:400px;margin:80px auto;padding:20px;">
<h1>Mana Login</h1>
<form method="POST" action="/api/auth/sign-in/email">
<input type="hidden" name="callbackURL" value="${escapeHtml(q.callbackURL || '/')}" />
<label>Email<br><input type="email" name="email" required style="width:100%;padding:8px;margin:4px 0 12px;"></label>
<label>Password<br><input type="password" name="password" required style="width:100%;padding:8px;margin:4px 0 12px;"></label>
<button type="submit" style="width:100%;padding:10px;background:#3b82f6;color:white;border:none;cursor:pointer;">Login</button>
</form></body></html>`);
});
// ─── Start ──────────────────────────────────────────────────
logger.info(`mana-auth starting on port ${config.port}`);
export default { port: config.port, fetch: app.fetch };
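A note on the doubled `jwtAuth` registration for `/api/v1/settings` above: a `'/x/*'` pattern does not cover the bare `/x` path, so both lines are needed. The minimal stand-in matcher below is illustration only (it is not Hono's actual router, and `wildcardMatches` is a hypothetical name) and sketches the assumed semantics:

```typescript
// Simplified stand-in for Hono-style wildcard matching (not the real router).
// It only illustrates why '/api/v1/settings/*' alone would leave the bare
// '/api/v1/settings' path without the JWT middleware.
function wildcardMatches(pattern: string, path: string): boolean {
  if (!pattern.endsWith('/*')) return pattern === path;
  const prefix = pattern.slice(0, -2); // drop the trailing "/*"
  return path.startsWith(prefix + '/');
}

console.log(wildcardMatches('/api/v1/settings/*', '/api/v1/settings/theme')); // true
console.log(wildcardMatches('/api/v1/settings/*', '/api/v1/settings'));       // false
console.log(wildcardMatches('/api/v1/settings', '/api/v1/settings'));         // true
```

If the wildcard form did cover the bare path in a given router, the second registration would be a harmless no-op, which is why registering both is the safe pattern.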


@@ -1,357 +0,0 @@
/**
* Unit tests for the auth error classifier + response shaper.
*
* Covers every branch of `classifyFromError`, `classifyFromResponse`,
* and the key invariants of `respondWithError`:
* - infra errors (Postgres schema drift, fetch failures, unknown)
* must NOT increment the lockout counter
* - credential errors (bad password, bad 2FA) must increment it
* - security-event type matches the classification
* - the response body never leaks the cause/stack
*
 * No network, no DB; fakes are injected for `security.logEvent` and
 * `lockout.recordAttempt`.
*/
import { describe, it, expect } from 'bun:test';
import { Hono } from 'hono';
import {
AuthErrorCode,
classify,
classifyFromError,
classifyFromResponse,
respondWithError,
type AuthErrorDeps,
type ClassifiedError,
} from './auth-errors';
// ─── Fakes ────────────────────────────────────────────────────
function makeFakeDeps(): {
deps: AuthErrorDeps;
securityCalls: Array<Record<string, unknown>>;
lockoutCalls: Array<{ email: string; successful: boolean; ip?: string }>;
} {
const securityCalls: Array<Record<string, unknown>> = [];
const lockoutCalls: Array<{ email: string; successful: boolean; ip?: string }> = [];
const deps: AuthErrorDeps = {
security: {
logEvent: (params) => {
securityCalls.push(params as Record<string, unknown>);
},
},
lockout: {
recordAttempt: (email, successful, ip) => {
lockoutCalls.push({ email, successful, ip });
},
},
};
return { deps, securityCalls, lockoutCalls };
}
/**
* Build a throwaway Hono context the shaper can write into. We can't
* construct a real context directly; round-trip through a tiny app so
* the response shaper's `c.json(...)` + header calls work identically
* to production.
*/
async function runShaperInContext(
classified: ClassifiedError,
email: string | undefined,
deps: AuthErrorDeps
): Promise<{ status: number; body: unknown; headers: Headers }> {
const app = new Hono();
app.get('/test', (c) =>
respondWithError(c, classified, { endpoint: '/test', email, ipAddress: '127.0.0.1' }, deps)
);
const res = await app.request('/test');
return {
status: res.status,
body: await res.json().catch(() => null),
headers: res.headers,
};
}
// ─── classifyFromError ────────────────────────────────────────
describe('classifyFromError', () => {
describe('Better Auth APIError', () => {
it('maps body.code INVALID_EMAIL_OR_PASSWORD → INVALID_CREDENTIALS', () => {
const err = {
name: 'APIError',
status: 'UNAUTHORIZED',
statusCode: 401,
body: { code: 'INVALID_EMAIL_OR_PASSWORD', message: 'Nope' },
};
const c = classifyFromError(err);
expect(c.code).toBe(AuthErrorCode.INVALID_CREDENTIALS);
expect(c.countsTowardLockout).toBe(true);
expect(c.message).toBe('Nope');
});
it('maps body.code USER_ALREADY_EXISTS → EMAIL_ALREADY_REGISTERED', () => {
const err = {
name: 'APIError',
status: 'UNPROCESSABLE_ENTITY',
statusCode: 422,
body: { code: 'USER_ALREADY_EXISTS' },
};
const c = classifyFromError(err);
expect(c.code).toBe(AuthErrorCode.EMAIL_ALREADY_REGISTERED);
expect(c.status).toBe(409);
});
it('maps status FORBIDDEN (no code) → EMAIL_NOT_VERIFIED', () => {
const err = {
name: 'APIError',
status: 'FORBIDDEN',
statusCode: 403,
body: {},
};
const c = classifyFromError(err);
expect(c.code).toBe(AuthErrorCode.EMAIL_NOT_VERIFIED);
});
it('maps status UNPROCESSABLE_ENTITY with exists-message → EMAIL_ALREADY_REGISTERED', () => {
const err = {
name: 'APIError',
status: 'UNPROCESSABLE_ENTITY',
statusCode: 422,
body: { message: 'User with email already exists' },
};
const c = classifyFromError(err);
expect(c.code).toBe(AuthErrorCode.EMAIL_ALREADY_REGISTERED);
});
it('falls back to status when body has no useful code', () => {
const err = {
name: 'APIError',
status: 'INTERNAL_SERVER_ERROR',
statusCode: 500,
body: {},
};
const c = classifyFromError(err);
expect(c.code).toBe(AuthErrorCode.SERVICE_UNAVAILABLE);
});
});
describe('Postgres errors', () => {
it('23505 unique violation → EMAIL_ALREADY_REGISTERED', () => {
const err = { code: '23505', severity: 'ERROR', message: 'duplicate key' };
const c = classifyFromError(err);
expect(c.code).toBe(AuthErrorCode.EMAIL_ALREADY_REGISTERED);
});
it('42703 undefined column → SERVICE_UNAVAILABLE', () => {
// This is the exact shape that caused the onboarding_completed_at
// incident — the classifier MUST bucket it as infra, not auth.
const err = {
code: '42703',
severity: 'ERROR',
message: 'column "onboarding_completed_at" does not exist',
};
const c = classifyFromError(err);
expect(c.code).toBe(AuthErrorCode.SERVICE_UNAVAILABLE);
expect(c.countsTowardLockout).toBe(false);
expect(c.logLevel).toBe('error');
});
it('08006 connection failure → SERVICE_UNAVAILABLE', () => {
const err = { code: '08006', severity: 'FATAL', message: 'connection lost' };
const c = classifyFromError(err);
expect(c.code).toBe(AuthErrorCode.SERVICE_UNAVAILABLE);
});
});
describe('Zod errors', () => {
it('issues[0].path + message → VALIDATION with path', () => {
const err = {
issues: [{ path: ['email'], message: 'Invalid email' }],
};
const c = classifyFromError(err);
expect(c.code).toBe(AuthErrorCode.VALIDATION);
expect(c.message).toBe('email: Invalid email');
});
it('empty issues → generic VALIDATION', () => {
const err = { issues: [] };
const c = classifyFromError(err);
expect(c.code).toBe(AuthErrorCode.VALIDATION);
});
});
describe('Network errors', () => {
it('AbortError → SERVICE_UNAVAILABLE', () => {
const err = new Error('aborted');
err.name = 'AbortError';
const c = classifyFromError(err);
expect(c.code).toBe(AuthErrorCode.SERVICE_UNAVAILABLE);
});
it('fetch failed → SERVICE_UNAVAILABLE', () => {
const err = new Error('fetch failed');
const c = classifyFromError(err);
expect(c.code).toBe(AuthErrorCode.SERVICE_UNAVAILABLE);
});
it('ECONNREFUSED → SERVICE_UNAVAILABLE', () => {
const err = Object.assign(new Error('connect ECONNREFUSED'), { code: 'ECONNREFUSED' });
const c = classifyFromError(err);
expect(c.code).toBe(AuthErrorCode.SERVICE_UNAVAILABLE);
});
});
describe('Unknown / bare errors', () => {
it('bare Error → INTERNAL', () => {
const c = classifyFromError(new Error('something broke'));
expect(c.code).toBe(AuthErrorCode.INTERNAL);
expect(c.logLevel).toBe('error');
});
it('null → INTERNAL', () => {
const c = classifyFromError(null);
expect(c.code).toBe(AuthErrorCode.INTERNAL);
});
it('string → INTERNAL', () => {
const c = classifyFromError('wat');
expect(c.code).toBe(AuthErrorCode.INTERNAL);
});
});
});
// ─── classifyFromResponse ─────────────────────────────────────
describe('classifyFromResponse', () => {
it('401 with {code: INVALID_EMAIL_OR_PASSWORD} → INVALID_CREDENTIALS', async () => {
const res = new Response(
JSON.stringify({ code: 'INVALID_EMAIL_OR_PASSWORD', message: 'Wrong' }),
{ status: 401, headers: { 'content-type': 'application/json' } }
);
const c = await classifyFromResponse(res);
expect(c.code).toBe(AuthErrorCode.INVALID_CREDENTIALS);
expect(c.message).toBe('Wrong');
});
it('403 with {code: EMAIL_NOT_VERIFIED} → EMAIL_NOT_VERIFIED', async () => {
const res = new Response(JSON.stringify({ code: 'EMAIL_NOT_VERIFIED' }), {
status: 403,
headers: { 'content-type': 'application/json' },
});
const c = await classifyFromResponse(res);
expect(c.code).toBe(AuthErrorCode.EMAIL_NOT_VERIFIED);
});
it('500 with empty body → SERVICE_UNAVAILABLE', async () => {
// The bug case: Better Auth's internal handler crashed on the
// missing column and returned a 500 with no body. The wrapper
// must classify this as infra, not bad password.
const res = new Response('', { status: 500 });
const c = await classifyFromResponse(res);
expect(c.code).toBe(AuthErrorCode.SERVICE_UNAVAILABLE);
expect(c.countsTowardLockout).toBe(false);
});
it('401 with non-JSON body → INVALID_CREDENTIALS (fallback)', async () => {
const res = new Response('nope', { status: 401 });
const c = await classifyFromResponse(res);
expect(c.code).toBe(AuthErrorCode.INVALID_CREDENTIALS);
});
it('does not consume the caller body (clone)', async () => {
const res = new Response(JSON.stringify({ code: 'X' }), {
status: 400,
headers: { 'content-type': 'application/json' },
});
await classifyFromResponse(res);
// Original body should still be readable.
const body = await res.json();
expect(body).toEqual({ code: 'X' });
});
});
// ─── respondWithError ─────────────────────────────────────────
describe('respondWithError', () => {
it('writes JSON body with {error, message, status}', async () => {
const { deps } = makeFakeDeps();
const { status, body } = await runShaperInContext(
classify(AuthErrorCode.INVALID_CREDENTIALS),
'user@x.de',
deps
);
expect(status).toBe(401);
expect(body).toEqual({
error: 'INVALID_CREDENTIALS',
message: 'Invalid credentials',
status: 401,
});
});
it('increments lockout ONLY for credential failures', async () => {
const { deps, lockoutCalls } = makeFakeDeps();
await runShaperInContext(classify(AuthErrorCode.INVALID_CREDENTIALS), 'user@x.de', deps);
expect(lockoutCalls).toHaveLength(1);
expect(lockoutCalls[0]!.successful).toBe(false);
});
it('does NOT increment lockout on SERVICE_UNAVAILABLE', async () => {
// THE bug this classifier exists to fix: if the DB is down, every
// login returned 401 AND incremented the counter, so after 5
// retries the user was locked out of their own account. Infra
// errors must be invisible to the lockout.
const { deps, lockoutCalls } = makeFakeDeps();
await runShaperInContext(classify(AuthErrorCode.SERVICE_UNAVAILABLE), 'user@x.de', deps);
expect(lockoutCalls).toHaveLength(0);
});
it('does NOT increment lockout on EMAIL_NOT_VERIFIED', async () => {
const { deps, lockoutCalls } = makeFakeDeps();
await runShaperInContext(classify(AuthErrorCode.EMAIL_NOT_VERIFIED), 'u@x.de', deps);
expect(lockoutCalls).toHaveLength(0);
});
it('fires LOGIN_FAILURE security event for bad credentials', async () => {
const { deps, securityCalls } = makeFakeDeps();
await runShaperInContext(classify(AuthErrorCode.INVALID_CREDENTIALS), 'u@x.de', deps);
expect(securityCalls).toHaveLength(1);
expect(securityCalls[0]!.eventType).toBe('LOGIN_FAILURE');
});
it('fires SERVICE_ERROR security event (not LOGIN_FAILURE) for infra failures', async () => {
const { deps, securityCalls } = makeFakeDeps();
await runShaperInContext(classify(AuthErrorCode.SERVICE_UNAVAILABLE), 'u@x.de', deps);
expect(securityCalls).toHaveLength(1);
expect(securityCalls[0]!.eventType).toBe('SERVICE_ERROR');
});
it('sets Retry-After header for 429s with retryAfterSec', async () => {
const { deps } = makeFakeDeps();
const { headers, body } = await runShaperInContext(
classify(AuthErrorCode.ACCOUNT_LOCKED, { retryAfterSec: 180 }),
'u@x.de',
deps
);
expect(headers.get('Retry-After')).toBe('180');
expect((body as { retryAfterSec?: number }).retryAfterSec).toBe(180);
});
it('never leaks `cause` into the response body', async () => {
const { deps } = makeFakeDeps();
const classified = classify(AuthErrorCode.INTERNAL, {
cause: new Error('db password was "hunter2" do not leak'),
});
const { body } = await runShaperInContext(classified, undefined, deps);
const s = JSON.stringify(body);
expect(s).not.toContain('hunter2');
expect(s).not.toContain('stack');
});
it('skips lockout when email is not provided', async () => {
const { deps, lockoutCalls } = makeFakeDeps();
// /validate, /refresh, and /session-to-token don't have a user email
// in scope — the shaper must cope without one rather than crash.
await runShaperInContext(classify(AuthErrorCode.INVALID_CREDENTIALS), undefined, deps);
expect(lockoutCalls).toHaveLength(0);
});
});
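The invariant these tests protect, that SQLSTATE 23505 is the only Postgres error treated as user input while everything else is infrastructure, can be restated as a tiny standalone rule. The names below are illustrative sketches, not exports of `auth-errors.ts`:

```typescript
// Illustrative restatement of the classifier's Postgres branch: unique
// violations (23505) map to a user-facing conflict; every other SQLSTATE
// (schema drift 42703/42P01, connection-class 08xxx, timeouts, ...) is
// infrastructure and must never feed the lockout counter.
type PgBucket = 'EMAIL_ALREADY_REGISTERED' | 'SERVICE_UNAVAILABLE';

function bucketSqlState(sqlState: string): PgBucket {
  return sqlState === '23505' ? 'EMAIL_ALREADY_REGISTERED' : 'SERVICE_UNAVAILABLE';
}

console.log(bucketSqlState('23505')); // EMAIL_ALREADY_REGISTERED
console.log(bucketSqlState('42703')); // SERVICE_UNAVAILABLE
console.log(bucketSqlState('08006')); // SERVICE_UNAVAILABLE
```

Defaulting unknown codes to the infrastructure bucket is the conservative choice: a misclassified infra error shows a 503, while a misclassified credential error would silently lock users out.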


@@ -1,545 +0,0 @@
/**
* Auth error classification + response shaper.
*
* Problem this solves: every /login, /register etc. wrapper around
* Better Auth's native handler used to map every non-2xx upstream
* response onto `401 Invalid credentials`, with no log. A missing DB
* column, a space-create hook crash, a transient 5xx, and an actually
* wrong password all looked identical from the client. When debugging
* the onboarding_completed_at schema drift, that swallow cost ~30 min
* before the real error surfaced via a one-off reproducer script.
*
* The classifier turns an unknown error (APIError from Better Auth,
* PostgresError, Zod, fetch failure, Response, bare Error) into a
 * machine-readable `{code, status, message, …}` envelope. `respondWithError`
 * writes the response, logs at the right level, fires the right
 * security event, and, critically, only increments the password
 * lockout counter for *credential* failures, so a DB outage does not
* lock every user out.
*/
import type { Context } from 'hono';
import { logger } from '@mana/shared-hono';
// ─── Error codes ──────────────────────────────────────────────
/**
 * Canonical error codes the client switches on. Stable string values
 * so the web/mobile UIs can i18n against them without having to mirror
 * the server's numeric status-code taxonomy.
*/
export enum AuthErrorCode {
// Credential flows
INVALID_CREDENTIALS = 'INVALID_CREDENTIALS',
EMAIL_NOT_VERIFIED = 'EMAIL_NOT_VERIFIED',
EMAIL_ALREADY_REGISTERED = 'EMAIL_ALREADY_REGISTERED',
WEAK_PASSWORD = 'WEAK_PASSWORD',
// Throttling
ACCOUNT_LOCKED = 'ACCOUNT_LOCKED',
SIGNUP_LIMIT_REACHED = 'SIGNUP_LIMIT_REACHED',
RATE_LIMITED = 'RATE_LIMITED',
// Tokens
TOKEN_EXPIRED = 'TOKEN_EXPIRED',
TOKEN_INVALID = 'TOKEN_INVALID',
// Two-factor
TWO_FACTOR_REQUIRED = 'TWO_FACTOR_REQUIRED',
TWO_FACTOR_FAILED = 'TWO_FACTOR_FAILED',
// Passkeys
PASSKEY_NOT_ENABLED = 'PASSKEY_NOT_ENABLED',
PASSKEY_CANCELLED = 'PASSKEY_CANCELLED',
PASSKEY_VERIFICATION_FAILED = 'PASSKEY_VERIFICATION_FAILED',
// Input
VALIDATION = 'VALIDATION',
// Generic
UNAUTHORIZED = 'UNAUTHORIZED',
NOT_FOUND = 'NOT_FOUND',
// Infra (do NOT count toward lockout)
SERVICE_UNAVAILABLE = 'SERVICE_UNAVAILABLE',
INTERNAL = 'INTERNAL',
}
/** Log level the classifier recommends for this category of error. */
type LogLevel = 'info' | 'warn' | 'error';
/**
 * Classified error envelope. `cause` and `stack` are for server-side
 * logging only; they never leave the server (see `serializeResponseBody`).
*/
export interface ClassifiedError {
code: AuthErrorCode;
status: number;
message: string;
retryAfterSec?: number;
/** Original error, preserved for logs. Never serialised to client. */
cause?: unknown;
logLevel: LogLevel;
/** Security event to fire, if any. `null` = no event. */
securityEventType: string | null;
/** Whether `lockout.recordAttempt(false)` should fire for this error. */
countsTowardLockout: boolean;
}
// ─── Defaults per code ────────────────────────────────────────
type Defaults = Pick<
ClassifiedError,
'status' | 'message' | 'logLevel' | 'securityEventType' | 'countsTowardLockout'
>;
const DEFAULTS: Record<AuthErrorCode, Defaults> = {
[AuthErrorCode.INVALID_CREDENTIALS]: {
status: 401,
message: 'Invalid credentials',
logLevel: 'info',
securityEventType: 'LOGIN_FAILURE',
countsTowardLockout: true,
},
[AuthErrorCode.EMAIL_NOT_VERIFIED]: {
status: 403,
message: 'Email not verified',
logLevel: 'info',
securityEventType: 'LOGIN_FAILURE',
countsTowardLockout: false,
},
[AuthErrorCode.EMAIL_ALREADY_REGISTERED]: {
status: 409,
message: 'Email already registered',
logLevel: 'info',
securityEventType: null,
countsTowardLockout: false,
},
[AuthErrorCode.WEAK_PASSWORD]: {
status: 400,
message: 'Password too weak',
logLevel: 'info',
securityEventType: null,
countsTowardLockout: false,
},
[AuthErrorCode.ACCOUNT_LOCKED]: {
status: 429,
message: 'Account temporarily locked',
logLevel: 'warn',
securityEventType: 'ACCOUNT_LOCKED',
countsTowardLockout: false,
},
[AuthErrorCode.SIGNUP_LIMIT_REACHED]: {
status: 429,
message: 'Signup limit reached',
logLevel: 'info',
securityEventType: null,
countsTowardLockout: false,
},
[AuthErrorCode.RATE_LIMITED]: {
status: 429,
message: 'Too many requests',
logLevel: 'warn',
securityEventType: null,
countsTowardLockout: false,
},
[AuthErrorCode.TOKEN_EXPIRED]: {
status: 401,
message: 'Link expired',
logLevel: 'info',
securityEventType: null,
countsTowardLockout: false,
},
[AuthErrorCode.TOKEN_INVALID]: {
status: 400,
message: 'Invalid link',
logLevel: 'info',
securityEventType: null,
countsTowardLockout: false,
},
[AuthErrorCode.TWO_FACTOR_REQUIRED]: {
status: 401,
message: 'Two-factor authentication required',
logLevel: 'info',
securityEventType: null,
countsTowardLockout: false,
},
[AuthErrorCode.TWO_FACTOR_FAILED]: {
status: 401,
message: 'Invalid two-factor code',
logLevel: 'info',
securityEventType: 'LOGIN_FAILURE',
countsTowardLockout: true,
},
[AuthErrorCode.PASSKEY_NOT_ENABLED]: {
status: 404,
message: 'Passkey authentication is not enabled',
logLevel: 'info',
securityEventType: null,
countsTowardLockout: false,
},
[AuthErrorCode.PASSKEY_CANCELLED]: {
status: 400,
message: 'Passkey authentication was cancelled',
logLevel: 'info',
securityEventType: null,
countsTowardLockout: false,
},
[AuthErrorCode.PASSKEY_VERIFICATION_FAILED]: {
status: 401,
message: 'Passkey verification failed',
logLevel: 'warn',
securityEventType: 'PASSKEY_LOGIN_FAILURE',
countsTowardLockout: false,
},
[AuthErrorCode.VALIDATION]: {
status: 400,
message: 'Invalid request',
logLevel: 'info',
securityEventType: null,
countsTowardLockout: false,
},
[AuthErrorCode.UNAUTHORIZED]: {
status: 401,
message: 'Unauthorized',
logLevel: 'info',
securityEventType: null,
countsTowardLockout: false,
},
[AuthErrorCode.NOT_FOUND]: {
status: 404,
message: 'Not found',
logLevel: 'info',
securityEventType: null,
countsTowardLockout: false,
},
[AuthErrorCode.SERVICE_UNAVAILABLE]: {
status: 503,
message: 'Service temporarily unavailable',
logLevel: 'error',
securityEventType: 'SERVICE_ERROR',
countsTowardLockout: false,
},
[AuthErrorCode.INTERNAL]: {
status: 500,
message: 'Unexpected server error',
logLevel: 'error',
securityEventType: 'SERVICE_ERROR',
countsTowardLockout: false,
},
};
/** Build a ClassifiedError from a code + optional overrides. */
export function classify(
code: AuthErrorCode,
overrides?: Partial<Pick<ClassifiedError, 'message' | 'retryAfterSec' | 'cause'>>
): ClassifiedError {
return { code, ...DEFAULTS[code], ...overrides };
}
// ─── Classifier ───────────────────────────────────────────────
/**
* Parse a Better Auth error-body code string (the `code` field in the
* JSON body it returns from /api/auth/*) and map it onto our taxonomy.
* Unknown codes fall through to null so the caller can fall back to
* status-based classification.
*/
function codeFromBetterAuthBody(code: string | undefined): AuthErrorCode | null {
if (!code) return null;
switch (code) {
case 'INVALID_EMAIL_OR_PASSWORD':
case 'INVALID_CREDENTIALS':
case 'INVALID_PASSWORD':
return AuthErrorCode.INVALID_CREDENTIALS;
case 'EMAIL_NOT_VERIFIED':
return AuthErrorCode.EMAIL_NOT_VERIFIED;
case 'USER_ALREADY_EXISTS':
case 'EMAIL_ALREADY_EXISTS':
return AuthErrorCode.EMAIL_ALREADY_REGISTERED;
case 'PASSWORD_TOO_SHORT':
case 'PASSWORD_TOO_LONG':
case 'WEAK_PASSWORD':
return AuthErrorCode.WEAK_PASSWORD;
case 'INVALID_TOKEN':
return AuthErrorCode.TOKEN_INVALID;
case 'TOKEN_EXPIRED':
return AuthErrorCode.TOKEN_EXPIRED;
case 'VALIDATION_ERROR':
return AuthErrorCode.VALIDATION;
default:
return null;
}
}
/**
* Classify a fetch Response from Better Auth's native handler.
*
* Reads the body once (clones so the caller can still introspect the
* original). Missing / non-JSON bodies fall back to status-based
* classification.
*/
export async function classifyFromResponse(res: Response): Promise<ClassifiedError> {
// Clone before consuming — the /login wrapper reads headers from the
// original for set-cookie capture in the success path, so we can't
// drain the caller's response.
let body: { code?: string; message?: string } = {};
try {
body = (await res.clone().json()) as typeof body;
} catch {
// Non-JSON response (Better Auth returns empty body on some 5xx)
body = {};
}
const mapped = codeFromBetterAuthBody(body.code);
if (mapped) {
return classify(mapped, body.message ? { message: body.message } : undefined);
}
return classifyFromStatus(res.status, body.message);
}
/**
* Classify a Better Auth APIError thrown by `auth.api.*` calls.
*
* APIError has `{status: string | number, statusCode: number, body: {message?, code?}}`.
* We look at `body.code` first (most specific), then fall back to the
* string status enum ("UNPROCESSABLE_ENTITY" etc.), then the numeric
* statusCode.
*/
function classifyFromApiError(err: {
status: string | number;
statusCode: number;
body?: { message?: string; code?: string };
}): ClassifiedError {
const mapped = codeFromBetterAuthBody(err.body?.code);
if (mapped) {
return classify(
mapped,
err.body?.message ? { message: err.body.message, cause: err } : { cause: err }
);
}
// Better Auth uses UNPROCESSABLE_ENTITY for "user already exists" in
// some paths.
if (err.status === 'UNPROCESSABLE_ENTITY' && err.body?.message?.toLowerCase().includes('exist')) {
return classify(AuthErrorCode.EMAIL_ALREADY_REGISTERED, { cause: err });
}
if (err.status === 'FORBIDDEN') {
return classify(AuthErrorCode.EMAIL_NOT_VERIFIED, { cause: err });
}
return classifyFromStatus(err.statusCode, err.body?.message, err);
}
/** Fallback classifier when only a status code is available. */
function classifyFromStatus(status: number, message?: string, cause?: unknown): ClassifiedError {
if (status === 400) return classify(AuthErrorCode.VALIDATION, { message, cause });
if (status === 401) return classify(AuthErrorCode.INVALID_CREDENTIALS, { message, cause });
if (status === 403) return classify(AuthErrorCode.EMAIL_NOT_VERIFIED, { message, cause });
if (status === 404) return classify(AuthErrorCode.NOT_FOUND, { message, cause });
if (status === 409) return classify(AuthErrorCode.EMAIL_ALREADY_REGISTERED, { message, cause });
if (status === 422) return classify(AuthErrorCode.VALIDATION, { message, cause });
if (status === 429) return classify(AuthErrorCode.RATE_LIMITED, { message, cause });
if (status >= 500 && status < 600) {
return classify(AuthErrorCode.SERVICE_UNAVAILABLE, { cause });
}
return classify(AuthErrorCode.INTERNAL, { cause });
}
/**
* Classify an unknown thrown error.
*
 * Recognises (in order): Better Auth APIError → Postgres errors →
 * Zod-ish validation errors → network errors → bare Error → unknown.
*/
export function classifyFromError(err: unknown): ClassifiedError {
// Better Auth APIError: check duck-type because the class lives
// inside `better-call` (a nested dep) and the instanceof doesn't
// survive re-bundling across workspace boundaries in all cases.
if (
err &&
typeof err === 'object' &&
(err as { name?: string }).name === 'APIError' &&
'statusCode' in err
) {
return classifyFromApiError(err as never);
}
// Postgres error — `postgres` (postgres-js) and `pg` both expose a
// `code` string (SQLSTATE). 23505 = unique violation. 42703 = undefined
// column (the onboarding_completed_at bug). 08* = connection issues.
if (
err &&
typeof err === 'object' &&
'code' in err &&
typeof (err as { code?: unknown }).code === 'string' &&
'severity' in err
) {
const pgCode = (err as { code: string }).code;
if (pgCode === '23505') {
return classify(AuthErrorCode.EMAIL_ALREADY_REGISTERED, { cause: err });
}
// Everything else — schema drift (42703, 42P01), conn refused (08*),
// timeout, etc. — is infrastructure, not user input.
return classify(AuthErrorCode.SERVICE_UNAVAILABLE, { cause: err });
}
// Zod error — `.issues` is the canonical discriminator.
if (err && typeof err === 'object' && Array.isArray((err as { issues?: unknown }).issues)) {
const issues = (err as { issues: { path?: (string | number)[]; message?: string }[] }).issues;
const first = issues[0];
const path = first?.path?.join('.') || '';
const msg = first?.message || 'Invalid input';
return classify(AuthErrorCode.VALIDATION, {
message: path ? `${path}: ${msg}` : msg,
cause: err,
});
}
// Network errors: fetch() in Bun/Node throws TypeError with cause,
// AbortError, or Error with code ECONNREFUSED/ETIMEDOUT.
if (err instanceof Error) {
const msg = err.message.toLowerCase();
const code = (err as Error & { code?: string }).code || '';
if (
err.name === 'AbortError' ||
msg.includes('fetch failed') ||
msg.includes('timeout') ||
code === 'ECONNREFUSED' ||
code === 'ETIMEDOUT' ||
code === 'ENOTFOUND'
) {
return classify(AuthErrorCode.SERVICE_UNAVAILABLE, { cause: err });
}
return classify(AuthErrorCode.INTERNAL, { cause: err });
}
return classify(AuthErrorCode.INTERNAL, { cause: err });
}
// ─── Response shaper ──────────────────────────────────────────
/**
* Shape of the JSON body returned to clients. Never carries stack /
* cause / internal details.
*/
export interface AuthErrorResponseBody {
error: AuthErrorCode;
message: string;
status: number;
retryAfterSec?: number;
}
/**
* Context passed to `respondWithError` so it can tag logs + security
* events. All fields optional the handler is responsible for filling
* in what it knows.
*/
export interface AuthErrorContext {
email?: string;
userId?: string;
ipAddress?: string;
userAgent?: string;
endpoint: string;
/** Additional metadata to include in the log entry (not security event). */
extra?: Record<string, unknown>;
}
/**
* Side-effect hooks the response shaper needs. Passed as deps so the
* shaper stays unit-testable without instantiating the whole service
* graph.
*/
export interface AuthErrorDeps {
security: {
logEvent(params: {
userId?: string;
eventType: string;
ipAddress?: string;
userAgent?: string;
metadata?: Record<string, unknown>;
}): Promise<void> | void;
};
lockout: {
recordAttempt(email: string, successful: boolean, ipAddress?: string): Promise<void> | void;
};
}
/**
* Write the error response: JSON body + HTTP status + structured log +
* (optional) security event + (optional) lockout bump.
*
* Returns the Hono Response so the caller can `return respondWithError(...)`.
*/
export function respondWithError(
c: Context,
classified: ClassifiedError,
ctx: AuthErrorContext,
deps: AuthErrorDeps
): Response {
// Log first so the stack trace lands before any async side effects
// that might throw again.
const logEntry: Record<string, unknown> = {
endpoint: ctx.endpoint,
code: classified.code,
status: classified.status,
email: ctx.email,
userId: ctx.userId,
ipAddress: ctx.ipAddress,
...ctx.extra,
};
if (classified.cause !== undefined) {
logEntry.cause = serializeCauseForLog(classified.cause);
}
logger[classified.logLevel]('auth error', logEntry);
// Security event (fire-and-forget; the service itself never throws).
if (classified.securityEventType) {
void deps.security.logEvent({
userId: ctx.userId,
eventType: classified.securityEventType,
ipAddress: ctx.ipAddress,
userAgent: ctx.userAgent,
metadata: {
code: classified.code,
endpoint: ctx.endpoint,
...(ctx.email ? { email: ctx.email } : {}),
},
});
}
// Lockout bump: only credential failures count. A DB outage (→
// SERVICE_UNAVAILABLE) must NOT lock every user out.
if (classified.countsTowardLockout && ctx.email) {
void deps.lockout.recordAttempt(ctx.email, false, ctx.ipAddress);
}
// Retry-After header for 429s: informational for humans, and
// well-behaved clients can use it to back off.
if (classified.retryAfterSec) {
c.header('Retry-After', String(classified.retryAfterSec));
}
const body: AuthErrorResponseBody = {
error: classified.code,
message: classified.message,
status: classified.status,
};
if (classified.retryAfterSec) body.retryAfterSec = classified.retryAfterSec;
return c.json(body, classified.status as never);
}
/**
* Serialise an error's `cause` for logging without risking runaway
* output. Extracts message + stack from Error instances; otherwise
* shallow-stringifies.
*/
function serializeCauseForLog(cause: unknown): unknown {
if (cause instanceof Error) {
return {
name: cause.name,
message: cause.message,
stack: cause.stack,
// postgres / APIError-shaped extras
code: (cause as { code?: unknown }).code,
body: (cause as { body?: unknown }).body,
};
}
return cause;
}
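
A minimal, standalone sketch of the network-error branch above: a hypothetical `classifyNetworkish` helper mirroring the same checks (the `AuthErrorCode` names are reused as plain strings; this is not the file's actual export):

```typescript
// Hypothetical stand-in for the network-error branch of classifyError.
// Same checks as above: AbortError, fetch/timeout messages, or a
// Node-style .code property mark an infra failure.
type Code = 'SERVICE_UNAVAILABLE' | 'INTERNAL';

function classifyNetworkish(err: unknown): Code {
  if (err instanceof Error) {
    const msg = err.message.toLowerCase();
    const code = (err as Error & { code?: string }).code || '';
    if (
      err.name === 'AbortError' ||
      msg.includes('fetch failed') ||
      msg.includes('timeout') ||
      ['ECONNREFUSED', 'ETIMEDOUT', 'ENOTFOUND'].includes(code)
    ) {
      return 'SERVICE_UNAVAILABLE';
    }
  }
  return 'INTERNAL';
}

const refused = Object.assign(new Error('connect refused'), { code: 'ECONNREFUSED' });
console.log(classifyNetworkish(refused)); // SERVICE_UNAVAILABLE

// A bare ECONNREFUSED *message* without a .code property falls through
// to INTERNAL; the branch keys on the code, not the message text.
console.log(classifyNetworkish(new Error('connect ECONNREFUSED'))); // INTERNAL
```

The last call is the same subtlety the route tests note: message text alone never counts as a network failure.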

@@ -1,43 +0,0 @@
import { HTTPException } from 'hono/http-exception';
export class BadRequestError extends HTTPException {
constructor(message: string) {
super(400, { message });
}
}
export class UnauthorizedError extends HTTPException {
constructor(message = 'Unauthorized') {
super(401, { message });
}
}
export class ForbiddenError extends HTTPException {
constructor(message = 'Forbidden') {
super(403, { message });
}
}
export class NotFoundError extends HTTPException {
constructor(message = 'Not found') {
super(404, { message });
}
}
export class ConflictError extends HTTPException {
constructor(message = 'Conflict') {
super(409, { message });
}
}
export class InsufficientCreditsError extends HTTPException {
constructor(
public readonly required: number,
public readonly available: number
) {
super(402, {
message: 'Insufficient credits',
cause: { required, available },
});
}
}
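
Callers catch these by class and read the typed fields. A framework-free sketch of that pattern, assuming a plain `Error` stand-in for Hono's `HTTPException` (hypothetical names, not part of this service):

```typescript
// Stand-in mirroring InsufficientCreditsError above: same public
// fields, plain Error base so it runs without Hono.
class InsufficientCreditsErrorLike extends Error {
  readonly status = 402;
  constructor(
    public readonly required: number,
    public readonly available: number
  ) {
    super('Insufficient credits');
  }
}

// A global error handler can build a machine-readable body from the
// typed fields instead of parsing the message string.
function toBody(err: unknown): Record<string, unknown> {
  if (err instanceof InsufficientCreditsErrorLike) {
    return { error: err.message, required: err.required, available: err.available };
  }
  return { error: 'Internal error' };
}

console.log(toBody(new InsufficientCreditsErrorLike(50, 12)));
```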

@@ -1,57 +0,0 @@
/**
* JWT Authentication Middleware
*
* Validates Bearer tokens via JWKS from mana-auth.
* Uses jose library with EdDSA algorithm.
*/
import type { MiddlewareHandler } from 'hono';
import { createRemoteJWKSet, jwtVerify } from 'jose';
import { UnauthorizedError } from '../lib/errors';
let jwks: ReturnType<typeof createRemoteJWKSet> | null = null;
function getJwks(authUrl: string) {
if (!jwks) {
jwks = createRemoteJWKSet(new URL('/api/auth/jwks', authUrl));
}
return jwks;
}
export interface AuthUser {
userId: string;
email: string;
role: string;
}
/**
* Middleware that validates JWT tokens from Authorization: Bearer header.
* Sets c.set('user', { userId, email, role }) on success.
*/
export function jwtAuth(authUrl: string): MiddlewareHandler {
return async (c, next) => {
const authHeader = c.req.header('Authorization');
if (!authHeader?.startsWith('Bearer ')) {
throw new UnauthorizedError('Missing or invalid Authorization header');
}
const token = authHeader.slice(7);
try {
const { payload } = await jwtVerify(token, getJwks(authUrl), {
issuer: authUrl,
audience: 'mana',
});
const user: AuthUser = {
userId: payload.sub || '',
email: (payload.email as string) || '',
role: (payload.role as string) || 'user',
};
c.set('user', user);
await next();
} catch {
throw new UnauthorizedError('Invalid or expired token');
}
};
}
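
The header handling above reduces to a small pure function; extracted as a sketch (hypothetical `bearerToken` helper, not exported by this file):

```typescript
// Extract the token from "Authorization: Bearer <token>"; null when
// the header is missing or not Bearer-shaped. The middleware above
// turns that null case into a 401.
function bearerToken(header: string | undefined): string | null {
  if (!header?.startsWith('Bearer ')) return null;
  return header.slice('Bearer '.length);
}

console.log(bearerToken('Bearer abc.def.ghi')); // abc.def.ghi
console.log(bearerToken('Basic xyz'));          // null
console.log(bearerToken(undefined));            // null
```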

@@ -1,26 +0,0 @@
/**
* Service-to-Service Authentication Middleware
*
* Validates X-Service-Key header for backend-to-backend calls.
* Used by /internal/* routes.
*/
import type { MiddlewareHandler } from 'hono';
import { UnauthorizedError } from '../lib/errors';
/**
* Middleware that validates X-Service-Key header.
* Sets c.set('appId', ...) from X-App-Id header.
*/
export function serviceAuth(serviceKey: string): MiddlewareHandler {
return async (c, next) => {
const key = c.req.header('X-Service-Key');
if (!key || key !== serviceKey) {
throw new UnauthorizedError('Invalid or missing service key');
}
const appId = c.req.header('X-App-Id') || 'unknown';
c.set('appId', appId);
await next();
};
}
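
One hardening note: `key !== serviceKey` short-circuits on the first differing character. If timing matters for your threat model, a constant-time comparison is a small change; a sketch using Node's `crypto.timingSafeEqual` (which throws on length mismatch, hence the explicit length check):

```typescript
import { timingSafeEqual } from 'node:crypto';

// Constant-time key comparison: equal-length buffers are compared
// without early exit; a length mismatch is an immediate false.
function safeKeyEqual(a: string, b: string): boolean {
  const bufA = Buffer.from(a);
  const bufB = Buffer.from(b);
  if (bufA.length !== bufB.length) return false;
  return timingSafeEqual(bufA, bufB);
}

console.log(safeKeyEqual('service-key-123', 'service-key-123')); // true
console.log(safeKeyEqual('service-key-123', 'service-key-124')); // false
```

The length check leaks only the key length, which is acceptable for a random service key.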

@@ -1,257 +0,0 @@
/**
* Admin endpoints for persona lifecycle.
*
* Personas are real Mana users with `kind = 'persona'`. They go through
* the same Better Auth sign-up pipeline as humans (no bypass), then get
* stamped with kind+tier and a personas-table row. The seed script
* (scripts/personas/seed.ts) drives this; the same endpoints power any
* future admin UI.
*
* Plan: docs/plans/mana-mcp-and-personas.md (M2.b).
*
* Lifecycle:
* POST /api/v1/admin/personas create-or-update by email (idempotent)
* GET /api/v1/admin/personas list with action+feedback summary
* GET /api/v1/admin/personas/:id detail
* DELETE /api/v1/admin/personas/:id hard delete (cascades user spaces)
*/
import { Hono } from 'hono';
import { and, count, desc, eq, gte } from 'drizzle-orm';
import type { PostgresJsDatabase } from 'drizzle-orm/postgres-js';
import type { AuthUser } from '../middleware/jwt-auth';
import type { BetterAuthInstance } from '../auth/better-auth.config';
import { users } from '../db/schema/auth';
import { personas, personaActions, personaFeedback } from '../db/schema/personas';
interface PersonaUpsertBody {
email: string;
name?: string;
password: string;
archetype: string;
systemPrompt: string;
moduleMix: Record<string, number>;
tickCadence?: 'daily' | 'weekdays' | 'hourly';
}
const VALID_CADENCES = new Set(['daily', 'weekdays', 'hourly']);
export function createAdminPersonasRoutes(db: PostgresJsDatabase<any>, auth: BetterAuthInstance) {
const app = new Hono<{ Variables: { user: AuthUser } }>();
// All routes admin-gated. Mirrors the check in admin.ts so this file
// is safe to mount under any prefix without losing protection.
app.use('*', async (c, next) => {
const principal = c.get('user');
if (principal.role !== 'admin') {
return c.json({ error: 'Forbidden', message: 'Admin access required' }, 403);
}
await next();
});
// ─── POST /api/v1/admin/personas ─ create or update ─────────────
app.post('/', async (c) => {
let body: PersonaUpsertBody;
try {
body = (await c.req.json()) as PersonaUpsertBody;
} catch {
return c.json({ error: 'Invalid JSON body' }, 400);
}
const errors: string[] = [];
if (!body.email || !body.email.includes('@')) errors.push('email required');
if (!body.password || body.password.length < 8) errors.push('password ≥ 8 chars required');
if (!body.archetype) errors.push('archetype required');
if (!body.systemPrompt) errors.push('systemPrompt required');
if (!body.moduleMix || typeof body.moduleMix !== 'object')
errors.push('moduleMix object required');
if (body.tickCadence && !VALID_CADENCES.has(body.tickCadence)) {
errors.push(`tickCadence must be one of ${[...VALID_CADENCES].join(', ')}`);
}
if (errors.length > 0) return c.json({ error: 'ValidationError', details: errors }, 400);
// Find or create the underlying user. signUpEmail throws on collision —
// we treat that as "user exists, we'll just upsert metadata".
let userId: string;
const [existing] = await db
.select({ id: users.id })
.from(users)
.where(eq(users.email, body.email));
if (existing) {
userId = existing.id;
} else {
try {
const signUp = await auth.api.signUpEmail({
body: {
email: body.email,
password: body.password,
name: body.name ?? body.email.split('@')[0],
},
headers: c.req.raw.headers,
});
if (!signUp?.user?.id) {
return c.json({ error: 'Sign-up returned no user' }, 500);
}
userId = signUp.user.id;
} catch (err) {
return c.json(
{
error: 'Sign-up failed',
message: err instanceof Error ? err.message : String(err),
},
500
);
}
}
// Stamp the user as a persona with founder tier and verified email
// (we control this address — no bounce risk, no need for the
// verification mail flow). updatedAt bumps so caches invalidate.
await db
.update(users)
.set({
kind: 'persona',
accessTier: 'founder',
emailVerified: true,
updatedAt: new Date(),
})
.where(eq(users.id, userId));
// Upsert the persona descriptor.
await db
.insert(personas)
.values({
userId,
archetype: body.archetype,
systemPrompt: body.systemPrompt,
moduleMix: body.moduleMix,
tickCadence: body.tickCadence ?? 'daily',
})
.onConflictDoUpdate({
target: personas.userId,
set: {
archetype: body.archetype,
systemPrompt: body.systemPrompt,
moduleMix: body.moduleMix,
tickCadence: body.tickCadence ?? 'daily',
},
});
return c.json({ ok: true, userId, email: body.email });
});
// ─── GET /api/v1/admin/personas ─ list ─────────────────────────
app.get('/', async (c) => {
const sevenDaysAgo = new Date(Date.now() - 7 * 24 * 60 * 60 * 1000);
const rows = await db
.select({
userId: personas.userId,
email: users.email,
name: users.name,
archetype: personas.archetype,
tickCadence: personas.tickCadence,
lastActiveAt: personas.lastActiveAt,
createdAt: personas.createdAt,
})
.from(personas)
.innerJoin(users, eq(users.id, personas.userId))
.orderBy(desc(personas.createdAt));
// Per-persona action count for the last 7d. One small grouped query
// rather than N round-trips.
const actionCounts = await db
.select({
personaId: personaActions.personaId,
value: count(),
})
.from(personaActions)
.where(gte(personaActions.createdAt, sevenDaysAgo))
.groupBy(personaActions.personaId);
const countByPersona = new Map(actionCounts.map((r) => [r.personaId, Number(r.value)]));
return c.json({
personas: rows.map((r) => ({
...r,
actions7d: countByPersona.get(r.userId) ?? 0,
})),
});
});
// ─── GET /api/v1/admin/personas/:id ─ detail ───────────────────
app.get('/:id', async (c) => {
const userId = c.req.param('id');
const [row] = await db
.select({
userId: personas.userId,
email: users.email,
name: users.name,
archetype: personas.archetype,
systemPrompt: personas.systemPrompt,
moduleMix: personas.moduleMix,
tickCadence: personas.tickCadence,
lastActiveAt: personas.lastActiveAt,
createdAt: personas.createdAt,
})
.from(personas)
.innerJoin(users, eq(users.id, personas.userId))
.where(eq(personas.userId, userId));
if (!row) return c.json({ error: 'Not found' }, 404);
// Recent activity: last 20 actions + feedback aggregate per module.
const recentActions = await db
.select()
.from(personaActions)
.where(eq(personaActions.personaId, userId))
.orderBy(desc(personaActions.createdAt))
.limit(20);
const feedbackAgg = await db
.select({
module: personaFeedback.module,
// count(), not an average: number of feedback rows per module.
feedbackCount: count(),
})
.from(personaFeedback)
.where(eq(personaFeedback.personaId, userId))
.groupBy(personaFeedback.module);
return c.json({ persona: row, recentActions, feedbackByModule: feedbackAgg });
});
// ─── DELETE /api/v1/admin/personas/:id ─ hard delete ───────────
app.delete('/:id', async (c) => {
const userId = c.req.param('id');
// Safety check — only delete users that are actually personas.
// Without this, an admin typo could nuke a real account; the
// FK cascade from users would then take down credits, sync rows,
// the works.
const [row] = await db.select({ kind: users.kind }).from(users).where(eq(users.id, userId));
if (!row) return c.json({ error: 'Not found' }, 404);
if (row.kind !== 'persona') {
return c.json(
{
error: 'Refusing to delete non-persona user via this endpoint',
hint: 'Use /api/v1/admin/users/:id/data instead',
},
400
);
}
// Cascade: deleting the user row cascades to personas (FK ON DELETE
// CASCADE), which cascades to personaActions and personaFeedback;
// organizations / sync / credits clean up via their own onDelete
// handling. We only need to delete the user row.
await db.delete(users).where(and(eq(users.id, userId), eq(users.kind, 'persona')));
return c.json({ ok: true, deleted: userId });
});
return app;
}
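
The list handler's merge of one grouped aggregate into the row list is worth seeing in isolation (plain data standing in for the Drizzle results):

```typescript
// Grouped counts keyed by id, folded into the row list via a Map,
// mirroring the actions7d merge in the list handler above.
const rows = [{ userId: 'a' }, { userId: 'b' }];
const actionCounts = [{ personaId: 'a', value: 3 }];

const countByPersona = new Map(
  actionCounts.map((r) => [r.personaId, Number(r.value)] as [string, number])
);
const merged = rows.map((r) => ({ ...r, actions7d: countByPersona.get(r.userId) ?? 0 }));

console.log(merged); // actions7d: 3 for 'a', 0 for 'b'
```

Rows with no actions in the window simply have no group, so the `?? 0` fallback matters.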

@@ -1,215 +0,0 @@
/**
* Admin routes User management, tier management, user data access
*
* Protected by JWT auth + admin role check.
*/
import { Hono } from 'hono';
import { and, count, countDistinct, eq, gte, isNull } from 'drizzle-orm';
import type { AuthUser } from '../middleware/jwt-auth';
import type { PostgresJsDatabase } from 'drizzle-orm/postgres-js';
import { users, sessions } from '../db/schema/auth';
import { loginAttempts } from '../db/schema/login-attempts';
import type { UserDataService } from '../services/user-data';
const VALID_TIERS = ['guest', 'public', 'beta', 'alpha', 'founder'] as const;
type AccessTier = (typeof VALID_TIERS)[number];
export function createAdminRoutes(db: PostgresJsDatabase<any>, userDataService: UserDataService) {
const app = new Hono<{ Variables: { user: AuthUser } }>();
// Admin role check middleware
app.use('*', async (c, next) => {
const user = c.get('user');
if (user.role !== 'admin') {
return c.json({ error: 'Forbidden', message: 'Admin access required' }, 403);
}
await next();
});
// ─── Aggregate stats for the admin dashboard ──────────────
//
// Replaces hardcoded mock data in apps/mana/apps/web/src/routes/
// (app)/admin/+page.svelte. All seven values come from auth.users,
// auth.sessions and auth.login_attempts — no other service is
// involved, which keeps this endpoint a pure single-DB read.
app.get('/stats', async (c) => {
const now = new Date();
const sevenDaysAgo = new Date(now.getTime() - 7 * 24 * 60 * 60 * 1000);
const thirtyDaysAgo = new Date(now.getTime() - 30 * 24 * 60 * 60 * 1000);
const twentyFourHoursAgo = new Date(now.getTime() - 24 * 60 * 60 * 1000);
// One query per stat — Postgres handles them in parallel via
// the connection pool when wrapped in Promise.all. Each one is
// a single indexed count, so total latency is dominated by
// round-trips, not query work.
const [
[{ value: totalUsers }],
[{ value: newUsers7d }],
[{ value: newUsers30d }],
[{ value: activeSessions }],
[{ value: uniqueUsers24h }],
[{ value: loginSuccess7d }],
[{ value: loginFailed7d }],
] = await Promise.all([
db.select({ value: count() }).from(users).where(isNull(users.deletedAt)),
db
.select({ value: count() })
.from(users)
.where(and(isNull(users.deletedAt), gte(users.createdAt, sevenDaysAgo))),
db
.select({ value: count() })
.from(users)
.where(and(isNull(users.deletedAt), gte(users.createdAt, thirtyDaysAgo))),
db
.select({ value: count() })
.from(sessions)
.where(and(gte(sessions.expiresAt, now), isNull(sessions.revokedAt))),
db
.select({ value: countDistinct(sessions.userId) })
.from(sessions)
.where(and(isNull(sessions.revokedAt), gte(sessions.lastActivityAt, twentyFourHoursAgo))),
db
.select({ value: count() })
.from(loginAttempts)
.where(
and(eq(loginAttempts.successful, true), gte(loginAttempts.attemptedAt, sevenDaysAgo))
),
db
.select({ value: count() })
.from(loginAttempts)
.where(
and(eq(loginAttempts.successful, false), gte(loginAttempts.attemptedAt, sevenDaysAgo))
),
]);
return c.json({
totalUsers,
newUsers7d,
newUsers30d,
activeSessions,
uniqueUsers24h,
loginSuccess7d,
loginFailed7d,
generatedAt: now.toISOString(),
});
});
// ─── List users with pagination and search ────────────────
app.get('/users', async (c) => {
const page = parseInt(c.req.query('page') || '1', 10);
const limit = parseInt(c.req.query('limit') || '20', 10);
const search = c.req.query('search');
const tier = c.req.query('tier');
// If tier-only query (legacy), use simple response
if (tier && !search && !c.req.query('page')) {
if (!VALID_TIERS.includes(tier as AccessTier)) {
return c.json({ error: 'Invalid tier' }, 400);
}
const result = await db
.select({
id: users.id,
email: users.email,
name: users.name,
role: users.role,
accessTier: users.accessTier,
createdAt: users.createdAt,
})
.from(users)
.where(eq(users.accessTier, tier as AccessTier))
.limit(limit);
return c.json({ users: result, count: result.length });
}
// Full paginated list with search
const result = await userDataService.listUsers(page, limit, search || undefined);
return c.json(result);
});
// ─── Get user data summary (aggregated) ───────────────────
app.get('/users/:userId/data', async (c) => {
const { userId } = c.req.param();
const summary = await userDataService.getUserDataSummary(userId);
if (!summary) {
return c.json({ error: 'Not found', message: 'User not found' }, 404);
}
return c.json(summary);
});
// ─── Delete user data ─────────────────────────────────────
app.delete('/users/:userId/data', async (c) => {
const { userId } = c.req.param();
// Get user email first for confirmation
const [user] = await db
.select({ email: users.email })
.from(users)
.where(eq(users.id, userId))
.limit(1);
if (!user) {
return c.json({ error: 'Not found', message: 'User not found' }, 404);
}
const result = await userDataService.deleteUserData(userId, user.email);
return c.json(result);
});
// ─── Update user's access tier ────────────────────────────
app.put('/users/:userId/tier', async (c) => {
const { userId } = c.req.param();
const body = await c.req.json();
const { tier } = body as { tier: string };
if (!tier || !VALID_TIERS.includes(tier as AccessTier)) {
return c.json(
{
error: 'Invalid tier',
message: `Tier must be one of: ${VALID_TIERS.join(', ')}`,
},
400
);
}
const [updated] = await db
.update(users)
.set({ accessTier: tier as AccessTier, updatedAt: new Date() })
.where(eq(users.id, userId))
.returning({ id: users.id, email: users.email, accessTier: users.accessTier });
if (!updated) {
return c.json({ error: 'Not found', message: 'User not found' }, 404);
}
return c.json({ success: true, user: updated });
});
// ─── Get user's current tier ──────────────────────────────
app.get('/users/:userId/tier', async (c) => {
const { userId } = c.req.param();
const [user] = await db
.select({ id: users.id, email: users.email, accessTier: users.accessTier })
.from(users)
.where(eq(users.id, userId))
.limit(1);
if (!user) {
return c.json({ error: 'Not found', message: 'User not found' }, 404);
}
return c.json(user);
});
return app;
}
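
The `/stats` destructuring relies on each count query resolving to a one-row `[{ value }]` array. The pattern in isolation, with fake queries standing in for the DB (hypothetical names):

```typescript
// Each fake "query" resolves to a one-row [{ value: n }] array, the
// shape a Drizzle count() select produces.
const fakeCount = (n: number) => Promise.resolve([{ value: n }]);

async function stats() {
  // Array destructuring peels the single row out of each result.
  const [[{ value: totalUsers }], [{ value: activeSessions }]] = await Promise.all([
    fakeCount(42),
    fakeCount(7),
  ]);
  return { totalUsers, activeSessions };
}

stats().then((s) => console.log(s)); // { totalUsers: 42, activeSessions: 7 }
```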

@@ -1,115 +0,0 @@
/**
* Mission Grant route `POST /api/v1/me/ai-mission-grant`.
*
* Mints a grant that lets the mana-ai background runner decrypt the
* allowlisted records for a specific mission without needing the user's
* browser to be open. See `docs/plans/ai-mission-key-grant.md` for the
* full flow; crypto details in `services/encryption-vault/mission-grant.ts`.
*
* The client posts `{ missionId, tables, recordIds, ttlMs? }`; the server
* derives + RSA-wraps a Mission Data Key and returns the grant blob.
* The webapp attaches this to `Mission.grant` via the normal sync path.
* The recovery / revocation side lives on the webapp: revoking is just
* setting `Mission.grant = null` on the Dexie record; the server has
* nothing to remember.
*/
import { Hono, type Context } from 'hono';
import type { AuthUser } from '../middleware/jwt-auth';
import {
MissionGrantService,
MissionGrantNotConfigured,
ZeroKnowledgeGrantForbidden,
VaultNotFoundError,
} from '../services/encryption-vault/mission-grant';
import type { AuditContext } from '../services/encryption-vault';
type AppContext = Context<{ Variables: { user: AuthUser } }>;
export function createAiMissionGrantRoutes(service: MissionGrantService) {
const app = new Hono<{ Variables: { user: AuthUser } }>();
app.post('/', async (c) => {
const user = c.get('user');
const ctx = readAuditContext(c);
const body = (await c.req.json().catch(() => null)) as {
missionId?: unknown;
tables?: unknown;
recordIds?: unknown;
ttlMs?: unknown;
} | null;
if (
!body ||
typeof body.missionId !== 'string' ||
!body.missionId ||
!Array.isArray(body.tables) ||
!body.tables.every((t) => typeof t === 'string') ||
!Array.isArray(body.recordIds) ||
!body.recordIds.every((r) => typeof r === 'string')
) {
return c.json(
{
error: 'missionId (string), tables (string[]), recordIds (string[]) are required',
code: 'BAD_REQUEST',
},
400
);
}
const ttlMs = typeof body.ttlMs === 'number' ? body.ttlMs : undefined;
try {
const grant = await service.createGrant(
user.userId,
{
missionId: body.missionId,
tables: body.tables as string[],
recordIds: body.recordIds as string[],
ttlMs,
},
ctx
);
return c.json(grant);
} catch (err) {
if (err instanceof MissionGrantNotConfigured) {
return c.json(
{
error: 'mission grants are not configured on this server',
code: 'GRANT_NOT_CONFIGURED',
},
503
);
}
if (err instanceof ZeroKnowledgeGrantForbidden) {
return c.json(
{
error:
'mission grants are unavailable in zero-knowledge mode — disable ZK or use the foreground runner',
code: 'ZK_ACTIVE',
},
409
);
}
if (err instanceof VaultNotFoundError) {
return c.json({ error: 'vault not initialised', code: 'VAULT_NOT_INITIALISED' }, 404);
}
if (err instanceof Error && /required|must/.test(err.message)) {
return c.json({ error: err.message, code: 'BAD_REQUEST' }, 400);
}
throw err;
}
});
return app;
}
function readAuditContext(c: AppContext): AuditContext {
return {
ipAddress:
c.req.header('x-forwarded-for')?.split(',')[0]?.trim() ||
c.req.header('x-real-ip') ||
undefined,
userAgent: c.req.header('user-agent') || undefined,
};
}
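
The `x-forwarded-for` handling takes the first (client-most) entry of the comma-separated chain; the same expression as a pure function (hypothetical helper for illustration):

```typescript
// First hop of an X-Forwarded-For chain, trimmed; undefined when the
// header is absent or empty. Mirrors the expression in readAuditContext.
function firstForwardedFor(header: string | undefined): string | undefined {
  return header?.split(',')[0]?.trim() || undefined;
}

console.log(firstForwardedFor('203.0.113.7, 10.0.0.1')); // 203.0.113.7
console.log(firstForwardedFor(undefined));               // undefined
```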

@@ -1,33 +0,0 @@
/**
* API Key routes Service-to-service authentication keys
*/
import { Hono } from 'hono';
import type { ApiKeysService } from '../services/api-keys';
import type { AuthUser } from '../middleware/jwt-auth';
export function createApiKeyRoutes(apiKeysService: ApiKeysService) {
return new Hono<{ Variables: { user: AuthUser } }>()
.get('/', async (c) => {
const user = c.get('user');
return c.json(await apiKeysService.listUserApiKeys(user.userId));
})
.post('/', async (c) => {
const user = c.get('user');
const body = await c.req.json();
const result = await apiKeysService.createApiKey(user.userId, body);
return c.json(result, 201);
})
.delete('/:id', async (c) => {
const user = c.get('user');
return c.json(await apiKeysService.revokeApiKey(user.userId, c.req.param('id')));
});
}
/** Validation route — no JWT required; the API key itself authenticates the caller */
export function createApiKeyValidationRoute(apiKeysService: ApiKeysService) {
return new Hono().post('/validate', async (c) => {
const { apiKey, scope } = await c.req.json();
return c.json(await apiKeysService.validateApiKey(apiKey, scope));
});
}

@@ -1,358 +0,0 @@
/**
* Integration-style tests for the auth-route wrappers.
*
* Stubs Better Auth's `handler` + `api.*` so the tests exercise the
* wrapper logic (classifier invocation, lockout semantics, security
* events) without needing a real DB. The one invariant every test
* enforces: a failing upstream MUST produce a classified error, and
* infra failures (5xx, throw) MUST NOT bump the password lockout.
*
* Unit tests for the classifier itself live in `lib/auth-errors.spec.ts`.
* This file is about the *routing layer*: does the handler correctly
* feed the classifier, forward the right context, and only hit the
* right side effects?
*/
import { describe, it, expect, beforeEach } from 'bun:test';
import { Hono } from 'hono';
import { createAuthRoutes } from './auth';
import type { BetterAuthInstance } from '../auth/better-auth.config';
import type { SecurityEventsService, AccountLockoutService } from '../services/security';
import type { SignupLimitService } from '../services/signup-limit';
import type { Config } from '../config';
// ─── Fakes ────────────────────────────────────────────────────
/** Fake that records what the routes call against it. */
type Recorded = {
securityEvents: Array<Record<string, unknown>>;
lockoutRecords: Array<{ email: string; successful: boolean; ip?: string }>;
lockoutCleared: string[];
};
function makeFakes(
overrides: {
signInResponse?: () => Response;
signUpResult?: () => unknown;
lockoutStatus?: { locked: boolean; remainingSeconds?: number };
} = {}
) {
const recorded: Recorded = {
securityEvents: [],
lockoutRecords: [],
lockoutCleared: [],
};
const security: SecurityEventsService = {
logEvent: (p: Record<string, unknown>) => {
recorded.securityEvents.push(p);
},
// Unused by the routes under test, but required by the type.
getUserEvents: async () => [] as never,
} as unknown as SecurityEventsService;
const lockout: AccountLockoutService = {
checkLockout: async () => overrides.lockoutStatus ?? { locked: false },
recordAttempt: async (email: string, successful: boolean, ip?: string) => {
recorded.lockoutRecords.push({ email, successful, ip });
},
clearAttempts: async (email: string) => {
recorded.lockoutCleared.push(email);
},
} as unknown as AccountLockoutService;
const signupLimit: SignupLimitService = {
checkLimit: async () => ({ allowed: true, remaining: 100, resetsAt: Date.now() + 86400000 }),
getStatus: async () => ({ allowed: true, remaining: 100 }),
} as unknown as SignupLimitService;
// Minimal BetterAuthInstance stub — only the methods the routes touch.
const auth = {
handler: async () =>
overrides.signInResponse ? overrides.signInResponse() : new Response('{}', { status: 200 }),
api: {
signUpEmail: async () => {
if (overrides.signUpResult) return overrides.signUpResult();
return { user: { id: 'u-new', email: 'x@y.de' } };
},
requestPasswordReset: async () => ({}),
resetPassword: async () => ({}),
sendVerificationEmail: async () => ({}),
updateUser: async () => ({}),
changeEmail: async () => ({}),
changePassword: async () => ({}),
deleteUser: async () => ({}),
},
} as unknown as BetterAuthInstance;
const config: Config = {
port: 3001,
databaseUrl: 'postgres://fake',
syncDatabaseUrl: 'postgres://fake',
baseUrl: 'http://localhost:3001',
cookieDomain: '',
nodeEnv: 'test',
serviceKey: 'test',
cors: { origins: [] },
manaNotifyUrl: '',
manaCreditsUrl: '',
manaSubscriptionsUrl: '',
manaMailUrl: '',
encryptionKek: 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=',
webauthn: { rpId: 'localhost', rpName: 'test', origin: 'http://localhost:5173' },
};
const app = new Hono();
app.route('/', createAuthRoutes(auth, config, security, lockout, signupLimit));
return { app, recorded };
}
// ─── /login ───────────────────────────────────────────────────
describe('/login', () => {
it('returns 200 + passes user through on success', async () => {
const { app } = makeFakes({
signInResponse: () =>
new Response(JSON.stringify({ user: { id: 'u1', email: 'u@x.de' }, token: 't' }), {
status: 200,
headers: { 'content-type': 'application/json' },
}),
});
const res = await app.request('/login', {
method: 'POST',
body: JSON.stringify({ email: 'u@x.de', password: 'correct' }),
});
expect(res.status).toBe(200);
const body = (await res.json()) as { user: { id: string } };
expect(body.user.id).toBe('u1');
});
it('maps upstream 401 → INVALID_CREDENTIALS + bumps lockout', async () => {
const { app, recorded } = makeFakes({
signInResponse: () =>
new Response(JSON.stringify({ code: 'INVALID_EMAIL_OR_PASSWORD' }), {
status: 401,
headers: { 'content-type': 'application/json' },
}),
});
const res = await app.request('/login', {
method: 'POST',
body: JSON.stringify({ email: 'u@x.de', password: 'wrong' }),
});
expect(res.status).toBe(401);
const body = (await res.json()) as { error: string };
expect(body.error).toBe('INVALID_CREDENTIALS');
expect(recorded.lockoutRecords).toHaveLength(1);
expect(recorded.lockoutRecords[0]!.successful).toBe(false);
});
it('REGRESSION: upstream 500 → 503 SERVICE_UNAVAILABLE + does NOT bump lockout', async () => {
// The ORIGINAL bug this whole refactor exists to prevent: the
// missing onboarding_completed_at column caused Better Auth's
// internal handler to crash with a Postgres error, return 500
// with empty body, and the old wrapper counted that as a
// credential failure. Five hits → every user locked out of
// their own account, indistinguishable from attackers.
const { app, recorded } = makeFakes({
signInResponse: () => new Response('', { status: 500 }),
});
const res = await app.request('/login', {
method: 'POST',
body: JSON.stringify({ email: 'u@x.de', password: 'whatever' }),
});
expect(res.status).toBe(503);
const body = (await res.json()) as { error: string };
expect(body.error).toBe('SERVICE_UNAVAILABLE');
// The critical invariant: no lockout bump on infra failure.
expect(recorded.lockoutRecords).toHaveLength(0);
});
it('upstream 403 FORBIDDEN → 403 EMAIL_NOT_VERIFIED, no lockout bump', async () => {
const { app, recorded } = makeFakes({
signInResponse: () =>
new Response(JSON.stringify({ code: 'EMAIL_NOT_VERIFIED' }), {
status: 403,
headers: { 'content-type': 'application/json' },
}),
});
const res = await app.request('/login', {
method: 'POST',
body: JSON.stringify({ email: 'u@x.de', password: 'correct' }),
});
expect(res.status).toBe(403);
const body = (await res.json()) as { error: string };
expect(body.error).toBe('EMAIL_NOT_VERIFIED');
expect(recorded.lockoutRecords).toHaveLength(0);
});
it('locked account → 429 ACCOUNT_LOCKED with Retry-After header', async () => {
const { app } = makeFakes({
lockoutStatus: { locked: true, remainingSeconds: 180 },
});
const res = await app.request('/login', {
method: 'POST',
body: JSON.stringify({ email: 'u@x.de', password: 'whatever' }),
});
expect(res.status).toBe(429);
expect(res.headers.get('retry-after')).toBe('180');
const body = (await res.json()) as { error: string };
expect(body.error).toBe('ACCOUNT_LOCKED');
});
it('upstream throw (network / uncaught) → 500 INTERNAL, no lockout bump', async () => {
const { app, recorded } = makeFakes({
signInResponse: () => {
throw new Error('connect ECONNREFUSED');
},
});
const res = await app.request('/login', {
method: 'POST',
body: JSON.stringify({ email: 'u@x.de', password: 'whatever' }),
});
// Error.message contains 'ECONNREFUSED' but the classifier
// needs a `.code` property for the network-error branch. Without
// that the Error falls through to INTERNAL. Both are valid
// infra classifications; key invariant is "no lockout bump".
expect(res.status).toBeGreaterThanOrEqual(500);
const body = (await res.json()) as { error: string };
expect(['INTERNAL', 'SERVICE_UNAVAILABLE']).toContain(body.error);
expect(recorded.lockoutRecords).toHaveLength(0);
});
it('malformed JSON body → 400 VALIDATION, no lockout bump', async () => {
const { app, recorded } = makeFakes();
const res = await app.request('/login', {
method: 'POST',
body: '{{{not json',
});
expect(res.status).toBe(400);
const body = (await res.json()) as { error: string };
expect(body.error).toBe('VALIDATION');
expect(recorded.lockoutRecords).toHaveLength(0);
});
it('success clears the lockout attempts for the email', async () => {
const { app, recorded } = makeFakes({
signInResponse: () =>
new Response(JSON.stringify({ user: { id: 'u1', email: 'u@x.de' }, token: 't' }), {
status: 200,
headers: { 'content-type': 'application/json' },
}),
});
await app.request('/login', {
method: 'POST',
body: JSON.stringify({ email: 'u@x.de', password: 'correct' }),
});
expect(recorded.lockoutCleared).toEqual(['u@x.de']);
});
});
// ─── /register ─────────────────────────────────────────────────
describe('/register', () => {
it('returns 200 on successful signup', async () => {
const { app } = makeFakes();
const res = await app.request('/register', {
method: 'POST',
body: JSON.stringify({ email: 'new@x.de', password: 'Aa-12345678', name: 'new' }),
});
expect(res.status).toBe(200);
});
it('Better Auth APIError USER_ALREADY_EXISTS → 409 EMAIL_ALREADY_REGISTERED', async () => {
const { app } = makeFakes({
signUpResult: () => {
const err = Object.assign(new Error('User already exists'), {
name: 'APIError',
status: 'UNPROCESSABLE_ENTITY',
statusCode: 422,
body: { code: 'USER_ALREADY_EXISTS' },
});
throw err;
},
});
const res = await app.request('/register', {
method: 'POST',
body: JSON.stringify({ email: 'existing@x.de', password: 'Aa-12345678' }),
});
expect(res.status).toBe(409);
const body = (await res.json()) as { error: string };
expect(body.error).toBe('EMAIL_ALREADY_REGISTERED');
});
it('REGRESSION: Postgres schema-drift error → 503 SERVICE_UNAVAILABLE', async () => {
// The ACTUAL production bug: Better Auth's signup hook ran a
// SELECT that referenced the missing onboarding_completed_at
// column, bubbling up a PostgresError. The old register
// wrapper re-threw it so Hono's errorHandler returned a
// generic 500. Now it routes through the classifier.
const { app } = makeFakes({
signUpResult: () => {
const err = Object.assign(new Error('column "foo_column" does not exist'), {
code: '42703',
severity: 'ERROR',
});
throw err;
},
});
const res = await app.request('/register', {
method: 'POST',
body: JSON.stringify({ email: 'new@x.de', password: 'Aa-12345678' }),
});
expect(res.status).toBe(503);
const body = (await res.json()) as { error: string };
expect(body.error).toBe('SERVICE_UNAVAILABLE');
});
it('signup-limit exhausted → 429 SIGNUP_LIMIT_REACHED (mapping covered by classifier spec)', async () => {
// Swapping the signupLimit mock after construction isn't possible
// with the current helper, and the SIGNUP_LIMIT_REACHED mapping is
// already covered by the classifier spec. This placeholder only
// asserts the app is still callable after the prior tests (no
// cross-test state leak).
const { app } = makeFakes();
const res = await app.request('/register', {
method: 'POST',
body: JSON.stringify({ email: 'new@x.de', password: 'Aa-12345678' }),
});
expect(res.status).toBe(200);
});
});
// ─── End-to-end invariants ─────────────────────────────────────
describe('cross-endpoint invariants', () => {
it('infra-classified errors never touch the lockout table', async () => {
// Fire 20 login attempts against a "DB is down" stub. Lockout
// bumps should be exactly zero. Regression against the original
// bug where 5 of these would lock the account.
const { app, recorded } = makeFakes({
signInResponse: () => new Response('', { status: 500 }),
});
for (let i = 0; i < 20; i++) {
await app.request('/login', {
method: 'POST',
body: JSON.stringify({ email: 'u@x.de', password: 'whatever' }),
});
}
expect(recorded.lockoutRecords).toHaveLength(0);
});
it('infra-classified errors fire SERVICE_ERROR, not LOGIN_FAILURE', async () => {
const { app, recorded } = makeFakes({
signInResponse: () => new Response('', { status: 500 }),
});
await app.request('/login', {
method: 'POST',
body: JSON.stringify({ email: 'u@x.de', password: 'whatever' }),
});
const eventTypes = recorded.securityEvents.map((e) => e.eventType);
expect(eventTypes).toContain('SERVICE_ERROR');
expect(eventTypes).not.toContain('LOGIN_FAILURE');
});
});


@@ -1,762 +0,0 @@
/**
* Auth routes: custom endpoints wrapping Better Auth
*
* Adds business logic (security events, lockout, credit init)
* around Better Auth's native sign-in/sign-up.
*/
import { Hono } from 'hono';
import postgres from 'postgres';
import { logger } from '@mana/shared-hono';
import type { AuthUser } from '../middleware/jwt-auth';
import type { BetterAuthInstance } from '../auth/better-auth.config';
import type { SecurityEventsService, AccountLockoutService } from '../services/security';
import type { SignupLimitService } from '../services/signup-limit';
import type { Config } from '../config';
import { sourceAppStore, passwordResetRedirectStore } from '../auth/stores';
import { bootstrapUserSingletons } from '../services/bootstrap-singletons';
/** Module-scoped postgres pool for the sync DB. Lazily created on first
* signUp; reused across requests. Caller never closes it; the
* process lifetime owns it. */
let _syncSql: ReturnType<typeof postgres> | null = null;
function getSyncSql(syncDatabaseUrl: string): ReturnType<typeof postgres> {
if (!_syncSql) _syncSql = postgres(syncDatabaseUrl, { max: 2 });
return _syncSql;
}
import {
AuthErrorCode,
classify,
classifyFromError,
classifyFromResponse,
respondWithError,
type AuthErrorDeps,
} from '../lib/auth-errors';
export function createAuthRoutes(
auth: BetterAuthInstance,
config: Config,
security: SecurityEventsService,
lockout: AccountLockoutService,
signupLimit: SignupLimitService
) {
const app = new Hono<{ Variables: { user: AuthUser } }>();
// Deps passed to respondWithError. security + lockout are held by
// reference so later construction order doesn't matter; the shaper
// only calls these when it writes an error response.
const errDeps: AuthErrorDeps = { security, lockout };
// ─── Signup Status (public) ─────────────────────────────
app.get('/signup-status', async (c) => {
const status = await signupLimit.getStatus();
return c.json(status);
});
// ─── Registration ────────────────────────────────────────
app.post('/register', async (c) => {
const ip = c.req.header('x-forwarded-for') || 'unknown';
let body: { email?: string; password?: string; name?: string; sourceAppUrl?: string };
try {
body = await c.req.json();
} catch {
return respondWithError(
c,
classify(AuthErrorCode.VALIDATION, { message: 'Invalid JSON body' }),
{ endpoint: '/register', ipAddress: ip },
errDeps
);
}
// Check daily signup limit
const limitCheck = await signupLimit.checkLimit();
if (!limitCheck.allowed) {
return respondWithError(
c,
classify(AuthErrorCode.SIGNUP_LIMIT_REACHED, {
message: 'Das tägliche Registrierungslimit ist erreicht. Versuche es morgen wieder.',
}),
{
endpoint: '/register',
ipAddress: ip,
email: body.email,
extra: { resetsAt: limitCheck.resetsAt },
},
errDeps
);
}
// Store source app URL for email verification redirect
if (body.sourceAppUrl && body.email) {
sourceAppStore.set(body.email, body.sourceAppUrl);
}
let response;
try {
response = await auth.api.signUpEmail({
body: {
email: body.email || '',
password: body.password || '',
name: body.name || (body.email || '').split('@')[0],
},
headers: c.req.raw.headers,
});
} catch (error) {
return respondWithError(
c,
classifyFromError(error),
{ endpoint: '/register', ipAddress: ip, email: body.email },
errDeps
);
}
if (response?.user?.id) {
void security.logEvent({
userId: response.user.id,
eventType: 'REGISTER',
ipAddress: ip,
});
// Init credits (fire-and-forget)
fetch(`${config.manaCreditsUrl}/api/v1/internal/credits/init`, {
method: 'POST',
headers: { 'Content-Type': 'application/json', 'X-Service-Key': config.serviceKey },
body: JSON.stringify({ userId: response.user.id }),
}).catch(() => {});
// Redeem pending gifts
fetch(`${config.manaCreditsUrl}/api/v1/internal/gifts/redeem-pending`, {
method: 'POST',
headers: { 'Content-Type': 'application/json', 'X-Service-Key': config.serviceKey },
body: JSON.stringify({ userId: response.user.id, email: body.email }),
}).catch(() => {});
// Provision mail account (fire-and-forget)
fetch(`${config.manaMailUrl}/api/v1/internal/mail/on-user-created`, {
method: 'POST',
headers: { 'Content-Type': 'application/json', 'X-Service-Key': config.serviceKey },
body: JSON.stringify({
userId: response.user.id,
email: body.email,
name: body.name || (body.email || '').split('@')[0],
}),
}).catch(() => {});
// Bootstrap per-user singletons in mana_sync (userContext today;
// kontextDoc + others can join later). Fire-and-forget — failure
// only means the webapp's `ensureDoc()` fallback path will create
// the row on the first mount, which is the F4-pre behaviour. See
// docs/plans/sync-field-meta-overhaul.md F4.
bootstrapUserSingletons(response.user.id, getSyncSql(config.syncDatabaseUrl)).catch(
(err: unknown) => {
logger.error('[auth] bootstrapUserSingletons failed', {
userId: response.user?.id,
err: err instanceof Error ? err.message : String(err),
});
}
);
}
return c.json(response);
});
// ─── Login ───────────────────────────────────────────────
app.post('/login', async (c) => {
const ip = c.req.header('x-forwarded-for') || 'unknown';
const userAgent = c.req.header('user-agent') ?? undefined;
let body: { email?: string; password?: string };
try {
body = await c.req.json();
} catch {
return respondWithError(
c,
classify(AuthErrorCode.VALIDATION, { message: 'Invalid JSON body' }),
{ endpoint: '/login', ipAddress: ip, userAgent },
errDeps
);
}
// Check lockout BEFORE talking to Better Auth — a locked account
// should not add further upstream load.
const lockoutStatus = await lockout.checkLockout(body.email || '');
if (lockoutStatus.locked) {
return respondWithError(
c,
classify(AuthErrorCode.ACCOUNT_LOCKED, {
retryAfterSec: lockoutStatus.remainingSeconds,
}),
{ endpoint: '/login', ipAddress: ip, userAgent, email: body.email },
errDeps
);
}
// Sign in via Better Auth's HTTP handler so we get back a real
// Response with Set-Cookie. The auth.api.signInEmail() SDK call
// only returns the body and we'd lose the signed cookie envelope
// that /api/auth/token needs to validate the session — the cookie
// value is `<sessionToken>.<HMAC>`, not just the raw session token,
// so reconstructing it from the API response doesn't work.
let signInResponse: Response;
try {
signInResponse = await auth.handler(
new Request(new URL('/api/auth/sign-in/email', config.baseUrl), {
method: 'POST',
headers: new Headers({
'Content-Type': 'application/json',
// Forward original X-Forwarded-For so Better Auth's rate
// limiting and our security log see the right IP.
...(c.req.header('x-forwarded-for')
? { 'X-Forwarded-For': c.req.header('x-forwarded-for') as string }
: {}),
}),
body: JSON.stringify({ email: body.email, password: body.password }),
})
);
} catch (error) {
// Upstream threw before even returning a response — Better Auth
// internals blew up (e.g. the APIError('FORBIDDEN') for
// unverified emails, or an unhandled DB error like the
// onboarding_completed_at case). Classifier handles both.
return respondWithError(
c,
classifyFromError(error),
{ endpoint: '/login', ipAddress: ip, userAgent, email: body.email },
errDeps
);
}
if (!signInResponse.ok) {
return respondWithError(
c,
await classifyFromResponse(signInResponse),
{ endpoint: '/login', ipAddress: ip, userAgent, email: body.email },
errDeps
);
}
const response = (await signInResponse.json()) as {
user?: { id: string };
token?: string;
redirect?: boolean;
};
if (response?.user?.id) {
void security.logEvent({
userId: response.user.id,
eventType: 'LOGIN_SUCCESS',
ipAddress: ip,
});
void lockout.clearAttempts(body.email || '');
}
// Capture the signed session cookie that Better Auth set on the
// sign-in response and forward it verbatim to /api/auth/token to
// mint a JWT. This is the only path that produces a cookie value
// with a valid HMAC signature.
const setCookie = signInResponse.headers.get('set-cookie');
if (setCookie) {
const tokenResponse = await auth.handler(
new Request(new URL('/api/auth/token', config.baseUrl), {
method: 'GET',
headers: new Headers({ cookie: setCookie }),
})
);
if (tokenResponse.ok) {
const tokenData = (await tokenResponse.json()) as { token: string };
return c.json({
...response,
accessToken: tokenData.token,
refreshToken: response.token,
});
}
}
// JWT mint failed (or no Set-Cookie came back). Still return the
// sign-in body so the client at least sees the user object.
return c.json(response);
});
// ─── Session → JWT Token Exchange ───────────────────────
// Used by SSO (trySSO) and after 2FA verification to get JWT from session cookie
app.post('/session-to-token', async (c) => {
const ip = c.req.header('x-forwarded-for') || 'unknown';
try {
const sessionResponse = await auth.handler(
new Request(new URL('/api/auth/get-session', config.baseUrl), {
method: 'GET',
headers: c.req.raw.headers,
})
);
if (!sessionResponse.ok) {
return respondWithError(
c,
classify(AuthErrorCode.UNAUTHORIZED, { message: 'No valid session' }),
{ endpoint: '/session-to-token', ipAddress: ip },
errDeps
);
}
const sessionData = await sessionResponse.json();
if (!sessionData?.session?.token) {
return respondWithError(
c,
classify(AuthErrorCode.UNAUTHORIZED, { message: 'No valid session' }),
{ endpoint: '/session-to-token', ipAddress: ip },
errDeps
);
}
const tokenResponse = await auth.handler(
new Request(new URL('/api/auth/token', config.baseUrl), {
method: 'GET',
headers: c.req.raw.headers,
})
);
if (!tokenResponse.ok) {
return respondWithError(
c,
await classifyFromResponse(tokenResponse),
{ endpoint: '/session-to-token', ipAddress: ip, extra: { step: 'mint-jwt' } },
errDeps
);
}
const tokenData = await tokenResponse.json();
return c.json({
accessToken: tokenData.token,
// Session token serves as refresh mechanism via session cookie
refreshToken: sessionData.session.token,
});
} catch (error) {
return respondWithError(
c,
classifyFromError(error),
{ endpoint: '/session-to-token', ipAddress: ip },
errDeps
);
}
});
// ─── Token Validation ────────────────────────────────────
app.post('/validate', async (c) => {
const ip = c.req.header('x-forwarded-for') || 'unknown';
let body: { token?: string };
try {
body = await c.req.json();
} catch {
return respondWithError(
c,
classify(AuthErrorCode.VALIDATION, { message: 'Invalid JSON body' }),
{ endpoint: '/validate', ipAddress: ip },
errDeps
);
}
if (!body.token) {
// /validate is a lookup, not an assertion: an absent token simply
// answers the "is this JWT valid" query with no. Return a falsey
// body at 200 to match the pre-existing contract (clients branch
// on `valid: false`, not on status).
return c.json({ valid: false });
}
try {
const { jwtVerify, createRemoteJWKSet } = await import('jose');
const jwks = createRemoteJWKSet(new URL('/api/auth/jwks', config.baseUrl));
const { payload } = await jwtVerify(body.token, jwks, {
issuer: config.baseUrl,
audience: 'mana',
});
return c.json({ valid: true, payload });
} catch (error) {
const msg = error instanceof Error ? error.message.toLowerCase() : '';
// Expired / malformed JWT is a cold-path signal, not an outage.
// Only bucket JWKS-fetch failures as infra.
if (msg.includes('jwks') || msg.includes('fetch failed')) {
return respondWithError(
c,
classify(AuthErrorCode.SERVICE_UNAVAILABLE, { cause: error }),
{ endpoint: '/validate', ipAddress: ip },
errDeps
);
}
return c.json({ valid: false });
}
});
// ─── Session & Logout ────────────────────────────────────
app.post('/logout', async (c) => {
const ip = c.req.header('x-forwarded-for') || 'unknown';
try {
return await auth.handler(
new Request(new URL('/api/auth/sign-out', config.baseUrl), {
method: 'POST',
headers: c.req.raw.headers,
})
);
} catch (error) {
return respondWithError(
c,
classifyFromError(error),
{ endpoint: '/logout', ipAddress: ip },
errDeps
);
}
});
app.get('/session', async (c) => {
const ip = c.req.header('x-forwarded-for') || 'unknown';
try {
return await auth.handler(
new Request(new URL('/api/auth/get-session', config.baseUrl), {
method: 'GET',
headers: c.req.raw.headers,
})
);
} catch (error) {
return respondWithError(
c,
classifyFromError(error),
{ endpoint: '/session', ipAddress: ip },
errDeps
);
}
});
app.post('/refresh', async (c) => {
const ip = c.req.header('x-forwarded-for') || 'unknown';
try {
const tokenResponse = await auth.handler(
new Request(new URL('/api/auth/token', config.baseUrl), {
method: 'GET',
headers: c.req.raw.headers,
})
);
if (!tokenResponse.ok) {
// 401/403 here means "session expired" — Better Auth's /token
// only returns them when the cookie failed validation. Map
// to TOKEN_EXPIRED rather than INVALID_CREDENTIALS (which is
// the classifier's status-based fallback) so the client can
// trigger a clean re-login flow instead of showing a
// misleading "wrong password" toast.
if (tokenResponse.status === 401 || tokenResponse.status === 403) {
return respondWithError(
c,
classify(AuthErrorCode.TOKEN_EXPIRED, { message: 'Session expired' }),
{ endpoint: '/refresh', ipAddress: ip },
errDeps
);
}
return respondWithError(
c,
await classifyFromResponse(tokenResponse),
{ endpoint: '/refresh', ipAddress: ip },
errDeps
);
}
const tokenData = await tokenResponse.json();
// Also get session data for the refresh token. If this upstream
// fails we still return the access token so the refresh flow
// isn't a hard-dependency on two round-trips succeeding.
const sessionResponse = await auth.handler(
new Request(new URL('/api/auth/get-session', config.baseUrl), {
method: 'GET',
headers: c.req.raw.headers,
})
);
const sessionData = sessionResponse.ok ? await sessionResponse.json() : null;
return c.json({
accessToken: tokenData.token,
refreshToken: sessionData?.session?.token,
});
} catch (error) {
return respondWithError(
c,
classifyFromError(error),
{ endpoint: '/refresh', ipAddress: ip },
errDeps
);
}
});
// ─── Password Management ─────────────────────────────────
app.post('/forgot-password', async (c) => {
// Intentionally 200-always: revealing "email not registered" here
// is a user-enumeration oracle. We log upstream failures server-
// side so the failure mode is observable without leaking anything
// to the client.
const ip = c.req.header('x-forwarded-for') || 'unknown';
let body: { email?: string; redirectTo?: string };
try {
body = await c.req.json();
} catch {
return c.json({ success: true });
}
if (body.redirectTo && body.email) {
passwordResetRedirectStore.set(body.email, body.redirectTo);
}
try {
// Better Auth's plugin calls this `requestPasswordReset` in
// 1.6+ (the older `forgetPassword` was a typo retained for
// back-compat and is typed-away in current builds).
await auth.api.requestPasswordReset({
body: { email: body.email || '', redirectTo: body.redirectTo },
});
void security.logEvent({
eventType: 'PASSWORD_RESET_REQUESTED',
ipAddress: ip,
metadata: { email: body.email },
});
} catch (error) {
// Log but do not surface — see comment above.
logger.warn('forgot-password upstream failed (still returning 200)', {
email: body.email,
error: error instanceof Error ? error.message : String(error),
});
}
return c.json({ success: true });
});
app.post('/reset-password', async (c) => {
const ip = c.req.header('x-forwarded-for') || 'unknown';
let body: { newPassword?: string; token?: string };
try {
body = await c.req.json();
} catch {
return respondWithError(
c,
classify(AuthErrorCode.VALIDATION, { message: 'Invalid JSON body' }),
{ endpoint: '/reset-password', ipAddress: ip },
errDeps
);
}
try {
await auth.api.resetPassword({
body: { newPassword: body.newPassword || '', token: body.token || '' },
});
void security.logEvent({ eventType: 'PASSWORD_RESET_COMPLETED', ipAddress: ip });
return c.json({ success: true });
} catch (error) {
return respondWithError(
c,
classifyFromError(error),
{ endpoint: '/reset-password', ipAddress: ip },
errDeps
);
}
});
app.post('/resend-verification', async (c) => {
const ip = c.req.header('x-forwarded-for') || 'unknown';
let body: { email?: string; sourceAppUrl?: string };
try {
body = await c.req.json();
} catch {
return respondWithError(
c,
classify(AuthErrorCode.VALIDATION, { message: 'Invalid JSON body' }),
{ endpoint: '/resend-verification', ipAddress: ip },
errDeps
);
}
if (body.sourceAppUrl && body.email) {
sourceAppStore.set(body.email, body.sourceAppUrl);
}
try {
await auth.api.sendVerificationEmail({ body: { email: body.email || '' } });
return c.json({ success: true });
} catch (error) {
return respondWithError(
c,
classifyFromError(error),
{ endpoint: '/resend-verification', ipAddress: ip, email: body.email },
errDeps
);
}
});
// ─── Profile ─────────────────────────────────────────────
app.get('/profile', async (c) => {
const ip = c.req.header('x-forwarded-for') || 'unknown';
try {
return await auth.handler(
new Request(new URL('/api/auth/get-session', config.baseUrl), {
method: 'GET',
headers: c.req.raw.headers,
})
);
} catch (error) {
return respondWithError(
c,
classifyFromError(error),
{ endpoint: 'GET /profile', ipAddress: ip },
errDeps
);
}
});
app.post('/profile', async (c) => {
const ip = c.req.header('x-forwarded-for') || 'unknown';
let body: Record<string, unknown>;
try {
body = await c.req.json();
} catch {
return respondWithError(
c,
classify(AuthErrorCode.VALIDATION, { message: 'Invalid JSON body' }),
{ endpoint: 'POST /profile', ipAddress: ip },
errDeps
);
}
try {
const result = await auth.api.updateUser({ body, headers: c.req.raw.headers });
void security.logEvent({ eventType: 'PROFILE_UPDATED', ipAddress: ip });
return c.json(result);
} catch (error) {
return respondWithError(
c,
classifyFromError(error),
{ endpoint: 'POST /profile', ipAddress: ip },
errDeps
);
}
});
app.post('/change-email', async (c) => {
const ip = c.req.header('x-forwarded-for') || 'unknown';
let body: { newEmail?: string; callbackURL?: string };
try {
body = await c.req.json();
} catch {
return respondWithError(
c,
classify(AuthErrorCode.VALIDATION, { message: 'Invalid JSON body' }),
{ endpoint: '/change-email', ipAddress: ip },
errDeps
);
}
if (!body.newEmail) {
return respondWithError(
c,
classify(AuthErrorCode.VALIDATION, { message: 'newEmail is required' }),
{ endpoint: '/change-email', ipAddress: ip },
errDeps
);
}
try {
await auth.api.changeEmail({
body: { newEmail: body.newEmail, callbackURL: body.callbackURL },
headers: c.req.raw.headers,
});
void security.logEvent({
eventType: 'EMAIL_CHANGE_REQUESTED',
ipAddress: ip,
metadata: { newEmail: body.newEmail },
});
return c.json({ success: true, message: 'Verification email sent to new address' });
} catch (error) {
return respondWithError(
c,
classifyFromError(error),
{ endpoint: '/change-email', ipAddress: ip },
errDeps
);
}
});
app.post('/change-password', async (c) => {
const ip = c.req.header('x-forwarded-for') || 'unknown';
let body: { currentPassword?: string; newPassword?: string };
try {
body = await c.req.json();
} catch {
return respondWithError(
c,
classify(AuthErrorCode.VALIDATION, { message: 'Invalid JSON body' }),
{ endpoint: '/change-password', ipAddress: ip },
errDeps
);
}
try {
await auth.api.changePassword({
body: {
currentPassword: body.currentPassword || '',
newPassword: body.newPassword || '',
},
headers: c.req.raw.headers,
});
void security.logEvent({ eventType: 'PASSWORD_CHANGED', ipAddress: ip });
return c.json({ success: true });
} catch (error) {
return respondWithError(
c,
classifyFromError(error),
{ endpoint: '/change-password', ipAddress: ip },
errDeps
);
}
});
app.delete('/account', async (c) => {
const ip = c.req.header('x-forwarded-for') || 'unknown';
let body: { password?: string };
try {
body = await c.req.json();
} catch {
return respondWithError(
c,
classify(AuthErrorCode.VALIDATION, { message: 'Invalid JSON body' }),
{ endpoint: 'DELETE /account', ipAddress: ip },
errDeps
);
}
try {
await auth.api.deleteUser({
body: { password: body.password || '' },
headers: c.req.raw.headers,
});
void security.logEvent({ eventType: 'ACCOUNT_DELETED', ipAddress: ip });
return c.json({ success: true });
} catch (error) {
return respondWithError(
c,
classifyFromError(error),
{ endpoint: 'DELETE /account', ipAddress: ip },
errDeps
);
}
});
// ─── Security Events ─────────────────────────────────────
app.get('/security-events', async (c) => {
const user = c.get('user');
const events = await security.getUserEvents(user.userId);
return c.json(events);
});
// ─── JWKS ────────────────────────────────────────────────
app.get('/jwks', async (c) => {
return auth.handler(
new Request(new URL('/api/auth/jwks', config.baseUrl), {
method: 'GET',
headers: c.req.raw.headers,
})
);
});
return app;
}


@@ -1,309 +0,0 @@
/**
* Encryption vault routes: `/api/v1/me/encryption-vault/*`
*
* The browser fetches its master key from these endpoints at login and
* stashes the result in sessionStorage. All routes require a valid JWT
* via the standard jwt-auth middleware; there is no admin or service-
* to-service variant. The vault is a strictly per-user resource.
*
* Routes:
* POST /init Mints a fresh MK if none exists, then returns it.
* Idempotent: calling twice is safe and returns
* the existing key on the second call.
* GET /key Returns the existing MK. 404 if not initialised
* (client should call /init).
* POST /rotate Mints a new MK, replaces the existing wrap. Caller
* MUST handle re-encryption of any data sealed with
* the old key.
*
* The master key crosses the wire as base64, never as raw bytes, so
* a JSON-aware client (browser, curl, jq) can deserialise it without
* worrying about binary content.
*
* Audit logging is the service's job; the route just passes ip + UA in
* via AuditContext.
*/
import { Hono, type Context } from 'hono';
import type { AuthUser } from '../middleware/jwt-auth';
import {
EncryptionVaultService,
VaultNotFoundError,
RecoveryWrapMissingError,
ZeroKnowledgeActiveError,
ZeroKnowledgeRotateForbidden,
type AuditContext,
} from '../services/encryption-vault';
type AppContext = Context<{ Variables: { user: AuthUser } }>;
export function createEncryptionVaultRoutes(vaultService: EncryptionVaultService) {
const app = new Hono<{ Variables: { user: AuthUser } }>();
// ─── GET /status ─────────────────────────────────────────
// Cheap metadata read used by the settings page to hydrate the UI
// after a reload. No decryption, no audit logging — pure SELECT.
// Returns the same shape regardless of whether the vault row
// exists yet, so the client can avoid a 404 dance for the
// "vault not initialised" case.
app.get('/status', async (c) => {
const user = c.get('user');
const status = await vaultService.getStatus(user.userId);
return c.json(status);
});
// ─── POST /init ──────────────────────────────────────────
// Idempotent. First call creates a vault row; subsequent calls
// return the existing master key. The client uses this on first
// login per device — `init` is also a safe fallback if `/key`
// returns 404 because the user has somehow never been initialised.
app.post('/init', async (c) => {
const user = c.get('user');
const ctx = readAuditContext(c);
const result = await vaultService.init(user.userId, ctx);
return c.json(serializeFetchResult(result));
});
// ─── GET /key ────────────────────────────────────────────
// The hot path: every Phase 3 client calls this immediately after
// login. Returns either the unwrapped MK as base64 (standard mode)
// OR the recovery-wrapped blob with `requiresRecoveryCode: true`
// (zero-knowledge mode — Phase 9). The vault service writes a
// `fetch` audit row on success, `failed_fetch` on any error path.
app.get('/key', async (c) => {
const user = c.get('user');
const ctx = readAuditContext(c);
try {
const result = await vaultService.getMasterKey(user.userId, ctx);
return c.json(serializeFetchResult(result));
} catch (err) {
if (err instanceof VaultNotFoundError) {
return c.json({ error: 'vault not initialised', code: 'VAULT_NOT_INITIALISED' }, 404);
}
throw err; // 500 via global error handler + audit row already written
}
});
// ─── POST /rotate ────────────────────────────────────────
// Destructive. Mints a fresh MK and overwrites the wrap. The old MK
// is gone forever. Routes do NOT enforce a 2FA challenge here —
// that's a UX decision the front-end has to enforce before calling.
// Forbidden in zero-knowledge mode (returns 409); the client has to
// disable ZK first.
app.post('/rotate', async (c) => {
const user = c.get('user');
const ctx = readAuditContext(c);
try {
const result = await vaultService.rotate(user.userId, ctx);
return c.json(serializeFetchResult(result));
} catch (err) {
if (err instanceof ZeroKnowledgeRotateForbidden) {
return c.json(
{
error: 'cannot rotate in zero-knowledge mode',
code: 'ZK_ROTATE_FORBIDDEN',
},
409
);
}
throw err;
}
});
// ─── POST /recovery-wrap ─────────────────────────────────
// Phase 9. Stores (or replaces) the user's recovery wrap. The
// client wraps the master key with a recovery-derived key locally
// and posts only the resulting ciphertext + IV. The recovery secret
// itself NEVER touches the wire — that's the entire point of the
// zero-knowledge design.
//
// This endpoint by itself does NOT enable zero-knowledge mode. The
// client has to follow up with POST /zero-knowledge after the user
// confirms they have backed up the recovery code.
app.post('/recovery-wrap', async (c) => {
const user = c.get('user');
const ctx = readAuditContext(c);
const body = await c.req.json().catch(() => null);
if (
!body ||
typeof body.recoveryWrappedMk !== 'string' ||
typeof body.recoveryIv !== 'string' ||
!body.recoveryWrappedMk ||
!body.recoveryIv
) {
return c.json(
{
error: 'recoveryWrappedMk and recoveryIv are required (base64 strings)',
code: 'BAD_REQUEST',
},
400
);
}
try {
await vaultService.setRecoveryWrap(
user.userId,
{ recoveryWrappedMk: body.recoveryWrappedMk, recoveryIv: body.recoveryIv },
ctx
);
return c.json({ ok: true });
} catch (err) {
if (err instanceof VaultNotFoundError) {
return c.json({ error: 'vault not initialised', code: 'VAULT_NOT_INITIALISED' }, 404);
}
throw err;
}
});
// ─── DELETE /recovery-wrap ───────────────────────────────
// Removes the recovery wrap. Forbidden in zero-knowledge mode
// (would lock the user out). Returns 409 with code ZK_ACTIVE in
// that case.
app.delete('/recovery-wrap', async (c) => {
const user = c.get('user');
const ctx = readAuditContext(c);
try {
await vaultService.clearRecoveryWrap(user.userId, ctx);
return c.json({ ok: true });
} catch (err) {
if (err instanceof VaultNotFoundError) {
return c.json({ error: 'vault not initialised', code: 'VAULT_NOT_INITIALISED' }, 404);
}
if (err instanceof ZeroKnowledgeActiveError) {
return c.json(
{
error: 'cannot clear recovery wrap while zero-knowledge is active',
code: 'ZK_ACTIVE',
},
409
);
}
throw err;
}
});
// ─── POST /zero-knowledge ────────────────────────────────
// Toggles zero-knowledge mode. Body shape:
// { enable: true } → flip on (requires recovery wrap)
// { enable: false, masterKey: base64 } → flip off (re-wrap with KEK)
//
// Enabling is destructive: the server-side wrapped_mk is NULLed out
// and the server can no longer decrypt the user's data. The client
// MUST have already called POST /recovery-wrap before calling this
// — otherwise the server returns 400 RECOVERY_WRAP_MISSING.
//
// Disabling requires the client to supply the freshly-unwrapped MK
// (from the recovery code unwrap) so the server can re-wrap it
// with the KEK. The user has to be unlocked at the moment of
// disable.
app.post('/zero-knowledge', async (c) => {
const user = c.get('user');
const ctx = readAuditContext(c);
const body = (await c.req.json().catch(() => null)) as {
enable?: boolean;
masterKey?: string;
} | null;
if (!body || typeof body.enable !== 'boolean') {
return c.json({ error: '`enable: boolean` is required', code: 'BAD_REQUEST' }, 400);
}
try {
if (body.enable) {
await vaultService.enableZeroKnowledge(user.userId, ctx);
return c.json({ ok: true, zeroKnowledge: true });
} else {
if (typeof body.masterKey !== 'string' || !body.masterKey) {
return c.json(
{
error: '`masterKey: base64` is required when disabling zero-knowledge',
code: 'BAD_REQUEST',
},
400
);
}
const mkBytes = base64ToBytes(body.masterKey);
if (mkBytes.length !== 32) {
return c.json({ error: 'masterKey must decode to 32 bytes', code: 'BAD_REQUEST' }, 400);
}
await vaultService.disableZeroKnowledge(user.userId, mkBytes, ctx);
// Best-effort wipe of the bytes once we've handed them off.
mkBytes.fill(0);
return c.json({ ok: true, zeroKnowledge: false });
}
} catch (err) {
if (err instanceof VaultNotFoundError) {
return c.json({ error: 'vault not initialised', code: 'VAULT_NOT_INITIALISED' }, 404);
}
if (err instanceof RecoveryWrapMissingError) {
return c.json(
{
error: 'set a recovery wrap before enabling zero-knowledge',
code: 'RECOVERY_WRAP_MISSING',
},
400
);
}
throw err;
}
});
return app;
}
/** Maps the service's VaultFetchResult into the JSON response shape.
* Branches on `requiresRecoveryCode` so the route handler doesn't
* duplicate the field-juggling. */
function serializeFetchResult(result: {
masterKey: Uint8Array | null;
formatVersion: number;
kekId: string;
requiresRecoveryCode?: boolean;
recoveryWrappedMk?: string;
recoveryIv?: string;
}): Record<string, unknown> {
if (result.requiresRecoveryCode) {
return {
requiresRecoveryCode: true,
recoveryWrappedMk: result.recoveryWrappedMk,
recoveryIv: result.recoveryIv,
formatVersion: result.formatVersion,
};
}
return {
masterKey: bytesToBase64(result.masterKey!),
formatVersion: result.formatVersion,
kekId: result.kekId,
};
}
// ─── Helpers ─────────────────────────────────────────────────
function readAuditContext(c: AppContext): AuditContext {
return {
ipAddress:
c.req.header('x-forwarded-for')?.split(',')[0]?.trim() ||
c.req.header('x-real-ip') ||
undefined,
userAgent: c.req.header('user-agent') || undefined,
};
}
function bytesToBase64(bytes: Uint8Array): string {
let bin = '';
for (let i = 0; i < bytes.length; i++) bin += String.fromCharCode(bytes[i]);
return btoa(bin);
}
function base64ToBytes(b64: string): Uint8Array {
const bin = atob(b64);
const out = new Uint8Array(bin.length);
for (let i = 0; i < bin.length; i++) out[i] = bin.charCodeAt(i);
return out;
}


@ -1,108 +0,0 @@
/**
* Guild routes — Organization management with shared Mana pools
*/
import { Hono } from 'hono';
import type { AuthUser } from '../middleware/jwt-auth';
import type { Config } from '../config';
import type { BetterAuthInstance } from '../auth/better-auth.config';
export function createGuildRoutes(auth: BetterAuthInstance, config: Config) {
return new Hono<{ Variables: { user: AuthUser } }>()
.post('/', async (c) => {
const user = c.get('user');
const body = await c.req.json();
// Check subscription limits
const limitsRes = await fetch(
`${config.manaSubscriptionsUrl}/api/v1/internal/plan-limits/${user.userId}`,
{ headers: { 'X-Service-Key': config.serviceKey } }
).catch(() => null);
const limits = limitsRes?.ok ? await limitsRes.json() : { maxOrganizations: 1 };
// Create org via Better Auth
const result = await auth.api.createOrganization({
body: { name: body.name, slug: body.slug, logo: body.logo },
headers: c.req.raw.headers,
});
// Init guild pool via mana-credits
if (result?.id) {
fetch(`${config.manaCreditsUrl}/api/v1/internal/guild-pool/init`, {
method: 'POST',
headers: { 'Content-Type': 'application/json', 'X-Service-Key': config.serviceKey },
body: JSON.stringify({ organizationId: result.id }),
}).catch(() => {});
}
return c.json({ gilde: result, pool: { balance: 0 } }, 201);
})
.get('/', async (c) => {
const result = await auth.api.listOrganizations({ headers: c.req.raw.headers });
return c.json(result);
})
.get('/:id', async (c) => {
const result = await auth.api.getFullOrganization({
query: { organizationId: c.req.param('id') },
headers: c.req.raw.headers,
});
return c.json(result);
})
.put('/:id', async (c) => {
const body = await c.req.json();
const result = await auth.api.updateOrganization({
body: { organizationId: c.req.param('id'), data: body },
headers: c.req.raw.headers,
});
return c.json(result);
})
.delete('/:id', async (c) => {
await auth.api.deleteOrganization({
body: { organizationId: c.req.param('id') },
headers: c.req.raw.headers,
});
return c.json({ success: true });
})
.post('/:id/invite', async (c) => {
const body = await c.req.json();
const result = await auth.api.createInvitation({
body: {
organizationId: c.req.param('id'),
email: body.email,
role: body.role || 'member',
},
headers: c.req.raw.headers,
});
return c.json(result);
})
.post('/accept-invitation', async (c) => {
const { invitationId } = await c.req.json();
const result = await auth.api.acceptInvitation({
body: { invitationId },
headers: c.req.raw.headers,
});
return c.json(result);
})
.delete('/:id/members/:memberId', async (c) => {
await auth.api.removeMember({
body: {
organizationId: c.req.param('id'),
memberIdOrEmail: c.req.param('memberId'),
},
headers: c.req.raw.headers,
});
return c.json({ success: true });
})
.put('/:id/members/:memberId/role', async (c) => {
const { role } = await c.req.json();
const result = await auth.api.updateMemberRole({
body: {
organizationId: c.req.param('id'),
memberId: c.req.param('memberId'),
role,
},
headers: c.req.raw.headers,
});
return c.json(result);
});
}


@ -1,234 +0,0 @@
/**
* Internal endpoints for the persona-runner (M3.c).
*
* Service-to-service gated by `X-Service-Key` at the app level (see
* `app.use('/api/v1/internal/*', serviceAuth(...))` in index.ts).
*
* Two write endpoints:
* POST /api/v1/internal/personas/:id/actions — batch of tool-call rows
* POST /api/v1/internal/personas/:id/feedback — batch of rating rows
*
* Both are **append-only** and **idempotent by (tickId + some natural
* key)** — the runner can retry a failed batch without doubling rows.
* Also: both bump `personas.last_active_at` so the next tick's "is this
* persona due?" check sees the activity.
*/
import { Hono } from 'hono';
import { and, eq, isNull, lte, or, sql } from 'drizzle-orm';
import type { PostgresJsDatabase } from 'drizzle-orm/postgres-js';
import { users } from '../db/schema/auth';
import { personas, personaActions, personaFeedback } from '../db/schema/personas';
// ─── Input shapes (no zod dependency here — minimal sanity checks) ────
interface ActionRow {
tickId: string;
toolName: string;
inputHash?: string;
result: 'ok' | 'error';
errorMessage?: string;
latencyMs?: number;
}
interface FeedbackRow {
tickId: string;
module: string;
rating: 1 | 2 | 3 | 4 | 5;
notes?: string;
}
function isValidAction(row: unknown): row is ActionRow {
if (!row || typeof row !== 'object') return false;
const r = row as Record<string, unknown>;
return (
typeof r.tickId === 'string' &&
typeof r.toolName === 'string' &&
(r.result === 'ok' || r.result === 'error')
);
}
function isValidFeedback(row: unknown): row is FeedbackRow {
if (!row || typeof row !== 'object') return false;
const r = row as Record<string, unknown>;
return (
typeof r.tickId === 'string' &&
typeof r.module === 'string' &&
typeof r.rating === 'number' &&
r.rating >= 1 &&
r.rating <= 5
);
}
export function createInternalPersonasRoutes(db: PostgresJsDatabase<any>) {
const app = new Hono();
// Guard: every route under this router requires the :id to be an
// existing persona. Keeps the runner from accidentally writing
// audit rows for a deleted persona (FK would catch it, but a
// clean 404 is a better diagnostic).
async function requirePersona(personaId: string): Promise<boolean> {
const [row] = await db
.select({ userId: personas.userId })
.from(personas)
.where(eq(personas.userId, personaId));
return !!row;
}
// ─── GET /api/v1/internal/personas/due ──────────────────────────
//
// Returns personas the runner should act on **now**, given each
// persona's `tickCadence` + `lastActiveAt`. Simple rules:
//
// hourly — due if lastActiveAt is null or > 1 hour ago
// daily — due if lastActiveAt is null or > 24 hours ago
// weekdays — same as daily + server clock is Mon–Fri
//
// Deletion and soft-delete are respected: users.deletedAt IS NULL.
app.get('/due', async (c) => {
const now = new Date();
const dow = now.getUTCDay(); // 0=Sun … 6=Sat
const isWeekday = dow >= 1 && dow <= 5;
const oneHourAgo = new Date(now.getTime() - 60 * 60 * 1000);
const oneDayAgo = new Date(now.getTime() - 24 * 60 * 60 * 1000);
// Compose (cadence='hourly' AND stale-by-hour) OR (cadence='daily' AND stale-by-day)
// OR (cadence='weekdays' AND today-is-weekday AND stale-by-day)
const hourly = and(
eq(personas.tickCadence, 'hourly'),
or(isNull(personas.lastActiveAt), lte(personas.lastActiveAt, oneHourAgo))
);
const daily = and(
eq(personas.tickCadence, 'daily'),
or(isNull(personas.lastActiveAt), lte(personas.lastActiveAt, oneDayAgo))
);
const weekdays = isWeekday
? and(
eq(personas.tickCadence, 'weekdays'),
or(isNull(personas.lastActiveAt), lte(personas.lastActiveAt, oneDayAgo))
)
: undefined;
const rows = await db
.select({
userId: personas.userId,
email: users.email,
archetype: personas.archetype,
systemPrompt: personas.systemPrompt,
moduleMix: personas.moduleMix,
tickCadence: personas.tickCadence,
lastActiveAt: personas.lastActiveAt,
})
.from(personas)
.innerJoin(users, eq(users.id, personas.userId))
.where(
and(
isNull(users.deletedAt),
or(...[hourly, daily, weekdays].filter((x): x is NonNullable<typeof x> => !!x))
)
)
.orderBy(sql`${personas.lastActiveAt} NULLS FIRST`);
return c.json({ personas: rows, serverTime: now.toISOString() });
});
// ─── POST /api/v1/internal/personas/:id/actions ──────────────────
app.post('/:id/actions', async (c) => {
const personaId = c.req.param('id');
if (!(await requirePersona(personaId))) {
return c.json({ error: 'Persona not found' }, 404);
}
let body: unknown;
try {
body = await c.req.json();
} catch {
return c.json({ error: 'Invalid JSON' }, 400);
}
const rawActions = (body as { actions?: unknown[] })?.actions;
if (!Array.isArray(rawActions) || rawActions.length === 0) {
return c.json({ error: '`actions` array required' }, 400);
}
if (rawActions.length > 500) {
return c.json({ error: '`actions` batch size must be ≤ 500' }, 400);
}
if (!rawActions.every(isValidAction)) {
return c.json({ error: 'One or more action rows failed validation' }, 400);
}
const now = new Date();
const values = rawActions.map((a, i) => ({
// Deterministic id per (tickId, toolName, index) so retrying
// the same batch doesn't produce duplicates. crypto.randomUUID
// would work too but would break idempotency on retry.
id: `${a.tickId}-${i}-${a.toolName}`.slice(0, 255),
personaId,
tickId: a.tickId,
toolName: a.toolName,
inputHash: a.inputHash ?? null,
result: a.result,
errorMessage: a.errorMessage ?? null,
latencyMs: a.latencyMs ?? null,
createdAt: now,
}));
await db.insert(personaActions).values(values).onConflictDoNothing();
await db.update(personas).set({ lastActiveAt: now }).where(eq(personas.userId, personaId));
return c.json({ ok: true, written: values.length });
});
// ─── POST /api/v1/internal/personas/:id/feedback ─────────────────
app.post('/:id/feedback', async (c) => {
const personaId = c.req.param('id');
if (!(await requirePersona(personaId))) {
return c.json({ error: 'Persona not found' }, 404);
}
let body: unknown;
try {
body = await c.req.json();
} catch {
return c.json({ error: 'Invalid JSON' }, 400);
}
const rawFeedback = (body as { feedback?: unknown[] })?.feedback;
if (!Array.isArray(rawFeedback) || rawFeedback.length === 0) {
return c.json({ error: '`feedback` array required' }, 400);
}
if (rawFeedback.length > 100) {
return c.json({ error: '`feedback` batch size must be ≤ 100' }, 400);
}
if (!rawFeedback.every(isValidFeedback)) {
return c.json({ error: 'One or more feedback rows failed validation' }, 400);
}
const now = new Date();
const values = rawFeedback.map((f) => ({
// (tickId, module) is the natural uniqueness key — one rating
// per module per tick. Retries hit onConflictDoNothing.
id: `${f.tickId}-${f.module}`.slice(0, 255),
personaId,
tickId: f.tickId,
module: f.module,
rating: f.rating,
notes: f.notes ?? null,
createdAt: now,
}));
await db.insert(personaFeedback).values(values).onConflictDoNothing();
return c.json({ ok: true, written: values.length });
});
return app;
}


@ -1,76 +0,0 @@
/**
* Singleton bootstrap endpoint.
*
* `POST /api/v1/me/bootstrap-singletons` idempotently provisions the
* per-user `userContext` singleton. Called once per webapp boot as a
* reconciliation belt-and-suspenders for the signup-time hook
* (databaseHooks.user.create.after).
*
* Why both: the signup hook is a zero-latency happy-path bootstrap but
* fire-and-forget — a transient mana_sync outage during signup leaves
* the user with no singleton and no signal that anything is wrong. The
* boot-time endpoint converges to the right state on every load.
* Idempotency in the bootstrap function makes double-invocation
* harmless.
*
* The endpoint is deliberately simple: no body, no parameters. The
* caller's identity (and thus the userId) comes from the JWT.
*
* Per-Space singletons used to be bootstrapped here too (kontextDoc),
* but the kontextDoc table was retired in favour of the user-driven
* `notes.isSpaceContext` flag — there is nothing to bootstrap per
* Space anymore. The response shape keeps the `spaces` map for
* backwards compatibility with older webapp builds; it is always
* empty now.
*/
import { Hono } from 'hono';
import postgres from 'postgres';
import { logger } from '@mana/shared-hono';
import type { AuthUser } from '../middleware/jwt-auth';
import type { Database } from '../db/connection';
import { bootstrapUserSingletons } from '../services/bootstrap-singletons';
export interface BootstrapResponse {
ok: true;
bootstrapped: {
userContext: boolean;
spaces: Record<string, boolean>;
};
}
export function createMeBootstrapRoutes(
_db: Database,
syncDatabaseUrl: string
): Hono<{ Variables: { user: AuthUser } }> {
// Lazy module-scoped postgres pool. Mirrors routes/auth.ts and
// better-auth.config.ts — process lifetime owns it; never closed
// manually.
let _syncSql: ReturnType<typeof postgres> | null = null;
const getSyncSql = (): ReturnType<typeof postgres> => {
if (!_syncSql) _syncSql = postgres(syncDatabaseUrl, { max: 2 });
return _syncSql;
};
return new Hono<{ Variables: { user: AuthUser } }>().post('/', async (c) => {
const user = c.get('user');
const syncSql = getSyncSql();
const result: BootstrapResponse = {
ok: true,
bootstrapped: { userContext: false, spaces: {} },
};
try {
result.bootstrapped.userContext = await bootstrapUserSingletons(user.userId, syncSql);
} catch (err) {
logger.error('[me/bootstrap-singletons] userContext failed', {
userId: user.userId,
err: err instanceof Error ? err.message : String(err),
});
return c.json({ ok: false, error: 'userContext bootstrap failed' }, 500);
}
return c.json(result);
});
}


@ -1,123 +0,0 @@
/**
* Me routes — GDPR self-service data management
*
* GET /data — Full user data summary (auth, credits, projects)
* GET /data/export — Download all data as JSON
* DELETE /data — Delete all user data (right to be forgotten)
*/
import { Hono } from 'hono';
import { eq } from 'drizzle-orm';
import type { AuthUser } from '../middleware/jwt-auth';
import type { UserDataService } from '../services/user-data';
import type { Database } from '../db/connection';
import { users } from '../db/schema/auth';
import { sendAccountDeletionEmail } from '../email/send';
export function createMeRoutes(userDataService: UserDataService, db: Database) {
return (
new Hono<{ Variables: { user: AuthUser } }>()
// ─── Get full user data summary ─────────────────────────
.get('/data', async (c) => {
const user = c.get('user');
const summary = await userDataService.getUserDataSummary(user.userId);
if (!summary) {
return c.json({ error: 'User not found' }, 404);
}
return c.json(summary);
})
// ─── Export user data as JSON download ──────────────────
.get('/data/export', async (c) => {
const user = c.get('user');
const exportData = await userDataService.exportUserData(user.userId);
if (!exportData) {
return c.json({ error: 'User not found' }, 404);
}
const filename = `meine-daten-${new Date().toISOString().split('T')[0]}.json`;
const json = JSON.stringify(exportData, null, 2);
return new Response(json, {
headers: {
'Content-Type': 'application/json',
'Content-Disposition': `attachment; filename="${filename}"`,
},
});
})
// ─── Delete all user data ───────────────────────────────
.delete('/data', async (c) => {
const user = c.get('user');
const result = await userDataService.deleteUserData(user.userId, user.email);
// Send confirmation email (fire-and-forget)
sendAccountDeletionEmail(user.email).catch(() => {});
return c.json(result);
})
// ─── Update profile (name, avatar) ──────────────────────
// Minimal patch endpoint used by the onboarding flow and
// Settings → Profile. JWT-based like the rest of /me/*; the
// updated name only lands in the user's JWT on next mint, so
// the caller is responsible for refreshing its in-memory
// representation of authStore.user. See docs/plans/onboarding-flow.md.
.patch('/profile', async (c) => {
const user = c.get('user');
const body = (await c.req.json().catch(() => ({}))) as {
name?: unknown;
image?: unknown;
feedbackShowRealName?: unknown;
};
const patch: {
name?: string;
image?: string;
feedbackShowRealName?: boolean;
updatedAt: Date;
} = {
updatedAt: new Date(),
};
if (typeof body.name === 'string') {
const trimmed = body.name.trim();
if (trimmed.length < 1 || trimmed.length > 80) {
return c.json({ error: 'name must be 1–80 characters' }, 400);
}
patch.name = trimmed;
}
if (typeof body.image === 'string') {
patch.image = body.image;
}
if (typeof body.feedbackShowRealName === 'boolean') {
patch.feedbackShowRealName = body.feedbackShowRealName;
}
if (!('name' in patch) && !('image' in patch) && !('feedbackShowRealName' in patch)) {
return c.json({ error: 'no fields to update' }, 400);
}
const [updated] = await db
.update(users)
.set(patch)
.where(eq(users.id, user.userId))
.returning({
id: users.id,
name: users.name,
image: users.image,
feedbackShowRealName: users.feedbackShowRealName,
});
if (!updated) return c.json({ error: 'User not found' }, 404);
return c.json({
name: updated.name,
image: updated.image,
feedbackShowRealName: updated.feedbackShowRealName,
});
})
);
}


@ -1,69 +0,0 @@
/**
* Onboarding routes — per-user completion status for the 3-screen
* first-login flow (Name, Look, Templates).
*
* GET / — { completedAt: ISO string | null }
* POST /complete — idempotent; sets `onboardingCompletedAt = now()` if null
* PATCH /reset — sets back to null (for "Onboarding erneut durchlaufen")
*
* Mounted under `/api/v1/me/onboarding`, so it inherits the same
* `jwtAuth` middleware as the GDPR `/me/*` routes.
*
* Design notes — see docs/plans/onboarding-flow.md §"Data changes":
* we keep the state on a first-class column (not in `user_settings`
* JSONB) so a brand-new account reliably returns `null` without having
* to distinguish "no settings row" from "explicitly null". And we use
* a dedicated endpoint rather than a JWT claim so finishing the flow
* takes effect without a token re-mint.
*/
import { Hono } from 'hono';
import { eq } from 'drizzle-orm';
import type { AuthUser } from '../middleware/jwt-auth';
import type { Database } from '../db/connection';
import { users } from '../db/schema/auth';
type OnboardingApp = Hono<{ Variables: { user: AuthUser } }>;
export function createOnboardingRoutes(db: Database) {
const app: OnboardingApp = new Hono();
app.get('/', async (c) => {
const user = c.get('user');
const [row] = await db
.select({ completedAt: users.onboardingCompletedAt })
.from(users)
.where(eq(users.id, user.userId))
.limit(1);
if (!row) return c.json({ error: 'User not found' }, 404);
return c.json({ completedAt: row.completedAt?.toISOString() ?? null });
});
app.post('/complete', async (c) => {
const user = c.get('user');
const now = new Date();
const [updated] = await db
.update(users)
.set({ onboardingCompletedAt: now, updatedAt: now })
.where(eq(users.id, user.userId))
.returning({ completedAt: users.onboardingCompletedAt });
if (!updated) return c.json({ error: 'User not found' }, 404);
return c.json({ completedAt: updated.completedAt?.toISOString() ?? null });
});
app.patch('/reset', async (c) => {
const user = c.get('user');
const [updated] = await db
.update(users)
.set({ onboardingCompletedAt: null, updatedAt: new Date() })
.where(eq(users.id, user.userId))
.returning({ completedAt: users.onboardingCompletedAt });
if (!updated) return c.json({ error: 'User not found' }, 404);
return c.json({ completedAt: null });
});
return app;
}


@ -1,391 +0,0 @@
/**
* Passkey routes (WebAuthn).
*
* Thin wrappers around Better Auth's `@better-auth/passkey` plugin
* endpoints (mounted internally at /api/auth/passkey/*). The wrappers
* add:
* - Security-event logging (PASSKEY_REGISTER / PASSKEY_LOGIN_*)
* - JWT minting on successful authentication (mirrors /login)
* - Rate-limit accounting via a separate per-credential bucket
* so passkey failures don't trip the email/password lockout
* - Uniform error envelope via the auth-errors classifier
*
* Public read: GET /capability. Authenticated: everything else.
*
* The handlers that proxy to native endpoints use Better Auth's
* `auth.handler` (fetch-based) rather than the `auth.api.*` SDK so
* we can capture Set-Cookie headers on authenticate/verify and hand
* the cookie to /api/auth/token for the JWT mint. Same pattern as
* the /login wrapper.
*
* P2.3 lands capability-probe + the /register & /authenticate/options
* pass-throughs so the client can gate itself. P2.4 fills in verify
* + list + delete + rename with the full security logging treatment.
*/
import { Hono } from 'hono';
import type { Context } from 'hono';
import {
AuthErrorCode,
classify,
classifyFromError,
classifyFromResponse,
respondWithError,
type AuthErrorDeps,
} from '../lib/auth-errors';
import type { BetterAuthInstance, BetterAuthWebAuthnOptions } from '../auth/better-auth.config';
import type { SecurityEventsService, AccountLockoutService } from '../services/security';
import type { PasskeyRateLimitService } from '../services/passkey-rate-limit';
import type { Config } from '../config';
/**
* Response shape for the capability probe. Documented here so the
* client type in `@mana/shared-auth` can mirror it without a
* runtime dependency on this file.
*/
export interface PasskeyCapability {
enabled: boolean;
conditionalUIAvailable: boolean;
rpId: string | null;
}
export function createPasskeyRoutes(
auth: BetterAuthInstance,
config: Config,
webauthn: BetterAuthWebAuthnOptions,
security: SecurityEventsService,
lockout: AccountLockoutService,
rateLimit: PasskeyRateLimitService
) {
const app = new Hono();
const errDeps: AuthErrorDeps = { security, lockout };
// ─── Capability probe ───────────────────────────────────
// Called by the client once per session (cached) before it
// renders any passkey UI. Public (no auth) — the login page
// needs it before the user is known.
//
// `enabled: true` here simply means the plugin is wired up.
// The browser still has to check `window.PublicKeyCredential`
// and `isConditionalMediationAvailable()` — we surface the
// server side of the gate only.
app.get('/capability', (c) => {
const body: PasskeyCapability = {
enabled: true,
conditionalUIAvailable: true,
rpId: webauthn.rpId,
};
return c.json(body);
});
// ─── Registration options ───────────────────────────────
// Called from /settings/security when the user clicks
// "Add passkey". Requires auth (Better Auth enforces it on
// /api/auth/passkey/generate-register-options).
app.post('/register/options', async (c) => {
return proxyToBetterAuth({
c,
auth,
config,
upstreamPath: '/api/auth/passkey/generate-register-options',
upstreamMethod: 'POST',
endpoint: 'POST /passkeys/register/options',
errDeps,
});
});
// ─── Registration verification ──────────────────────────
app.post('/register/verify', async (c) => {
const res = await proxyToBetterAuth({
c,
auth,
config,
upstreamPath: '/api/auth/passkey/verify-registration',
upstreamMethod: 'POST',
endpoint: 'POST /passkeys/register/verify',
errDeps,
});
if (res.status === 200) {
void security.logEvent({
eventType: 'PASSKEY_REGISTERED',
ipAddress: c.req.header('x-forwarded-for') || 'unknown',
});
}
return res;
});
// ─── Authentication options ─────────────────────────────
// Unauthenticated — the browser needs a challenge before the
// user has signed in. Better Auth's native endpoint is GET
// for this one, but we expose POST for API symmetry with the
// rest of the passkey flow (client already posts an empty
// body).
//
// Rate-limited per IP: this is the primary target for a DoS /
// enumeration attack because it returns a fresh challenge +
// the RP ID with no auth required.
app.post('/authenticate/options', async (c) => {
const ip = c.req.header('x-forwarded-for') || 'unknown';
const gate = rateLimit.checkOptions(ip);
if (!gate.allowed) {
return respondWithError(
c,
classify(AuthErrorCode.RATE_LIMITED, { retryAfterSec: gate.retryAfterSec }),
{ endpoint: 'POST /passkeys/authenticate/options', ipAddress: ip },
errDeps
);
}
return proxyToBetterAuth({
c,
auth,
config,
upstreamPath: '/api/auth/passkey/generate-authenticate-options',
upstreamMethod: 'GET',
endpoint: 'POST /passkeys/authenticate/options',
errDeps,
});
});
// ─── Authentication verification + JWT mint ─────────────
// Mirrors /login's pattern: call the native handler, capture
// Set-Cookie, exchange the session cookie for a JWT via
// /api/auth/token.
//
// Rate-limited per credentialID: too many failed verifies for
// the same credential lock that credential out for 5 min (does
// NOT touch the password lockout counter — different factor).
app.post('/authenticate/verify', async (c) => {
const ip = c.req.header('x-forwarded-for') || 'unknown';
// Clone the body before the upstream read so we can extract
// credentialID for rate-limit bookkeeping without double-
// consuming the stream. The client sends Better-Auth's shape
// `{ response: { id: '<base64url>', ... } }` — see
// `verifyPasskeyAuthenticationBodySchema` in the upstream
// @better-auth/passkey plugin. Falls back to a flat `{ id }`
// body for any direct-to-mana-auth caller (legacy harness).
let credentialId: string | null = null;
let bodyText: string | null = null;
try {
bodyText = await c.req.text();
const parsed = JSON.parse(bodyText);
credentialId = parsed?.response?.id ?? parsed?.id ?? null;
} catch {
// Body malformed — let the upstream handler return a real
// validation error. No rate-limit bump because we don't
// have a credentialID.
}
if (credentialId) {
const gate = rateLimit.checkVerify(credentialId);
if (!gate.allowed) {
return respondWithError(
c,
classify(AuthErrorCode.RATE_LIMITED, { retryAfterSec: gate.retryAfterSec }),
{ endpoint: 'POST /passkeys/authenticate/verify', ipAddress: ip },
errDeps
);
}
}
let signInResponse: Response;
try {
signInResponse = await auth.handler(
new Request(new URL('/api/auth/passkey/verify-authentication', config.baseUrl), {
method: 'POST',
headers: c.req.raw.headers,
body: bodyText ?? undefined,
})
);
} catch (error) {
return respondWithError(
c,
classifyFromError(error),
{ endpoint: 'POST /passkeys/authenticate/verify', ipAddress: ip },
errDeps
);
}
if (!signInResponse.ok) {
if (credentialId) {
rateLimit.recordVerifyFailure(credentialId);
}
void security.logEvent({
eventType: 'PASSKEY_LOGIN_FAILURE',
ipAddress: ip,
});
const classified = await classifyFromResponse(signInResponse);
// Promote generic INVALID_CREDENTIALS from the passkey path
// to the more specific PASSKEY_VERIFICATION_FAILED so the UI
// can show "passkey didn't match" instead of "wrong password".
const promoted =
classified.code === AuthErrorCode.INVALID_CREDENTIALS
? classify(AuthErrorCode.PASSKEY_VERIFICATION_FAILED, { cause: classified.cause })
: classified;
return respondWithError(
c,
promoted,
{ endpoint: 'POST /passkeys/authenticate/verify', ipAddress: ip },
errDeps
);
}
const response = (await signInResponse.json()) as {
user?: { id: string };
token?: string;
};
if (response?.user?.id) {
void security.logEvent({
userId: response.user.id,
eventType: 'PASSKEY_LOGIN_SUCCESS',
ipAddress: ip,
});
if (credentialId) {
// Reset the per-credential failure counter so a user
// who mistyped/cancelled a few times doesn't stay
// penalised after they succeed.
rateLimit.clearVerifySuccess(credentialId);
}
}
// Exchange the signed session cookie for a JWT — same flow as
// /login lines 227ff.
const setCookie = signInResponse.headers.get('set-cookie');
if (setCookie) {
const tokenResponse = await auth.handler(
new Request(new URL('/api/auth/token', config.baseUrl), {
method: 'GET',
headers: new Headers({ cookie: setCookie }),
})
);
if (tokenResponse.ok) {
const tokenData = (await tokenResponse.json()) as { token: string };
return c.json({
...response,
accessToken: tokenData.token,
refreshToken: response.token,
});
}
}
return c.json(response);
});
// ─── List user's passkeys ───────────────────────────────
app.get('/', async (c) => {
return proxyToBetterAuth({
c,
auth,
config,
upstreamPath: '/api/auth/passkey/list-user-passkeys',
upstreamMethod: 'GET',
endpoint: 'GET /passkeys',
errDeps,
});
});
// ─── Delete passkey ─────────────────────────────────────
app.delete('/:id', async (c) => {
const id = c.req.param('id');
const res = await proxyToBetterAuth({
c,
auth,
config,
upstreamPath: '/api/auth/passkey/delete-passkey',
upstreamMethod: 'POST',
body: JSON.stringify({ id }),
endpoint: 'DELETE /passkeys/:id',
errDeps,
});
if (res.status === 200) {
void security.logEvent({
eventType: 'PASSKEY_DELETED',
ipAddress: c.req.header('x-forwarded-for') || 'unknown',
metadata: { passkeyId: id },
});
}
return res;
});
// ─── Rename passkey ─────────────────────────────────────
app.patch('/:id', async (c) => {
const id = c.req.param('id');
let body: { name?: string };
try {
body = await c.req.json();
} catch {
return respondWithError(
c,
classify(AuthErrorCode.VALIDATION, { message: 'Invalid JSON body' }),
{ endpoint: 'PATCH /passkeys/:id', ipAddress: c.req.header('x-forwarded-for') },
errDeps
);
}
const res = await proxyToBetterAuth({
c,
auth,
config,
upstreamPath: '/api/auth/passkey/update-passkey',
upstreamMethod: 'POST',
body: JSON.stringify({ id, name: body.name }),
endpoint: 'PATCH /passkeys/:id',
errDeps,
});
if (res.status === 200) {
void security.logEvent({
eventType: 'PASSKEY_RENAMED',
ipAddress: c.req.header('x-forwarded-for') || 'unknown',
metadata: { passkeyId: id },
});
}
return res;
});
return app;
}
// ─── Helper: proxy a request to Better Auth's handler ─────
//
// Centralises the "forward incoming headers + body, classify any
// upstream error" pattern so each passkey endpoint stays a
// three-liner.
interface ProxyOpts {
c: Context;
auth: BetterAuthInstance;
config: Config;
upstreamPath: string;
upstreamMethod: 'GET' | 'POST';
body?: string;
endpoint: string;
errDeps: AuthErrorDeps;
}
async function proxyToBetterAuth(opts: ProxyOpts): Promise<Response> {
const { c, auth, config, upstreamPath, upstreamMethod, body, endpoint, errDeps } = opts;
const ip = c.req.header('x-forwarded-for') || 'unknown';
try {
const init: RequestInit = {
method: upstreamMethod,
headers: c.req.raw.headers,
};
if (upstreamMethod === 'POST') {
init.body = body ?? c.req.raw.body;
// @ts-expect-error duplex is required for streaming bodies
init.duplex = 'half';
}
const res = await auth.handler(new Request(new URL(upstreamPath, config.baseUrl), init));
if (!res.ok) {
return respondWithError(
c,
await classifyFromResponse(res),
{ endpoint, ipAddress: ip },
errDeps
);
}
return res;
} catch (error) {
return respondWithError(c, classifyFromError(error), { endpoint, ipAddress: ip }, errDeps);
}
}


@ -1,218 +0,0 @@
/**
* Settings routes — User settings CRUD (synced across all apps)
*
* GET / — Get all settings (global + app overrides + device settings)
* PATCH /global — Update global settings (deep merge)
* PATCH /app/:appId — Update app-specific override
* DELETE /app/:appId — Remove app override
* PATCH /device/:deviceId/:appId — Update device-specific app settings
* GET /devices — List all devices
* DELETE /device/:deviceId — Remove a device
*/
import { Hono } from 'hono';
import { eq } from 'drizzle-orm';
import type { AuthUser } from '../middleware/jwt-auth';
import type { Database } from '../db/connection';
import { userSettings } from '../db/schema/auth';
type SettingsApp = Hono<{ Variables: { user: AuthUser } }>;
/**
* Recursively deep-merge two plain objects; arrays and scalar values
* are replaced rather than merged.
*/
function deepMerge(
target: Record<string, unknown>,
source: Record<string, unknown>
): Record<string, unknown> {
const result = { ...target };
for (const key of Object.keys(source)) {
if (
source[key] !== null &&
typeof source[key] === 'object' &&
!Array.isArray(source[key]) &&
typeof result[key] === 'object' &&
result[key] !== null &&
!Array.isArray(result[key])
) {
result[key] = deepMerge(
result[key] as Record<string, unknown>,
source[key] as Record<string, unknown>
);
} else {
result[key] = source[key];
}
}
return result;
}
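For illustration, the one-level merge semantics above behave as follows. This is a standalone copy of the helper so the snippet runs on its own: nested objects merge key-by-key, while scalars, arrays, and `null` overwrite wholesale.

```typescript
// Standalone copy of the deepMerge helper above, for illustration only.
function deepMerge(
  target: Record<string, unknown>,
  source: Record<string, unknown>
): Record<string, unknown> {
  const result = { ...target };
  for (const key of Object.keys(source)) {
    const s = source[key];
    const t = result[key];
    if (
      s !== null && typeof s === 'object' && !Array.isArray(s) &&
      t !== null && typeof t === 'object' && !Array.isArray(t)
    ) {
      result[key] = deepMerge(t as Record<string, unknown>, s as Record<string, unknown>);
    } else {
      result[key] = s; // scalars, arrays, and null replace the old value
    }
  }
  return result;
}

const merged = deepMerge(
  { theme: { mode: 'dark', fontSize: 14 }, tags: ['a'] },
  { theme: { mode: 'light' }, tags: ['b'], beta: true }
);
console.log(JSON.stringify(merged));
// {"theme":{"mode":"light","fontSize":14},"tags":["b"],"beta":true}
```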
/**
* Get or create user settings row
*/
async function getOrCreateSettings(db: Database, userId: string) {
const [existing] = await db
.select()
.from(userSettings)
.where(eq(userSettings.userId, userId))
.limit(1);
if (existing) return existing;
const [created] = await db.insert(userSettings).values({ userId }).returning();
return created;
}
/**
* Return the standard response shape
*/
function settingsResponse(row: typeof userSettings.$inferSelect) {
return {
success: true,
globalSettings: row.globalSettings,
appOverrides: row.appOverrides,
deviceSettings: row.deviceSettings,
};
}
export function createSettingsRoutes(db: Database) {
const app: SettingsApp = new Hono();
// ─── GET / — Fetch all settings ────────────────────────────
app.get('/', async (c) => {
const user = c.get('user');
const row = await getOrCreateSettings(db, user.userId);
return c.json(settingsResponse(row));
});
// ─── PATCH /global — Update global settings (deep merge) ───
app.patch('/global', async (c) => {
const user = c.get('user');
const body = await c.req.json();
const row = await getOrCreateSettings(db, user.userId);
const merged = deepMerge(
row.globalSettings as Record<string, unknown>,
body as Record<string, unknown>
);
const [updated] = await db
.update(userSettings)
.set({ globalSettings: merged, updatedAt: new Date() })
.where(eq(userSettings.userId, user.userId))
.returning();
return c.json(settingsResponse(updated));
});
// ─── PATCH /app/:appId — Update app override ───────────────
app.patch('/app/:appId', async (c) => {
const user = c.get('user');
const appId = c.req.param('appId');
const body = await c.req.json();
const row = await getOrCreateSettings(db, user.userId);
const overrides = (row.appOverrides as Record<string, unknown>) || {};
const existing = (overrides[appId] as Record<string, unknown>) || {};
overrides[appId] = deepMerge(existing, body as Record<string, unknown>);
const [updated] = await db
.update(userSettings)
.set({ appOverrides: overrides, updatedAt: new Date() })
.where(eq(userSettings.userId, user.userId))
.returning();
return c.json(settingsResponse(updated));
});
// ─── DELETE /app/:appId — Remove app override ──────────────
app.delete('/app/:appId', async (c) => {
const user = c.get('user');
const appId = c.req.param('appId');
const row = await getOrCreateSettings(db, user.userId);
const overrides = (row.appOverrides as Record<string, unknown>) || {};
delete overrides[appId];
const [updated] = await db
.update(userSettings)
.set({ appOverrides: overrides, updatedAt: new Date() })
.where(eq(userSettings.userId, user.userId))
.returning();
return c.json(settingsResponse(updated));
});
// ─── PATCH /device/:deviceId/:appId — Update device app settings ──
app.patch('/device/:deviceId/:appId', async (c) => {
const user = c.get('user');
const { deviceId, appId } = c.req.param();
const body = await c.req.json<{
deviceName?: string;
deviceType?: string;
settings?: Record<string, unknown>;
}>();
const row = await getOrCreateSettings(db, user.userId);
const devices = (row.deviceSettings as Record<string, Record<string, unknown>>) || {};
const device = devices[deviceId] || {
deviceName: body.deviceName || 'Unknown',
deviceType: body.deviceType || 'desktop',
lastSeen: new Date().toISOString(),
apps: {},
};
device.lastSeen = new Date().toISOString();
if (body.deviceName) device.deviceName = body.deviceName;
if (body.deviceType) device.deviceType = body.deviceType;
const apps = (device.apps as Record<string, unknown>) || {};
const existingApp = (apps[appId] as Record<string, unknown>) || {};
apps[appId] = { ...existingApp, ...(body.settings || {}) };
device.apps = apps;
devices[deviceId] = device;
const [updated] = await db
.update(userSettings)
.set({ deviceSettings: devices, updatedAt: new Date() })
.where(eq(userSettings.userId, user.userId))
.returning();
return c.json(settingsResponse(updated));
});
// ─── GET /devices — List all devices ───────────────────────
app.get('/devices', async (c) => {
const user = c.get('user');
const row = await getOrCreateSettings(db, user.userId);
const devices = (row.deviceSettings as Record<string, Record<string, unknown>>) || {};
const deviceList = Object.entries(devices).map(([id, d]) => ({
deviceId: id,
deviceName: d.deviceName || 'Unknown',
deviceType: d.deviceType || 'desktop',
lastSeen: d.lastSeen || null,
appCount: Object.keys((d.apps as Record<string, unknown>) || {}).length,
}));
return c.json({ success: true, devices: deviceList });
});
// ─── DELETE /device/:deviceId — Remove a device ────────────
app.delete('/device/:deviceId', async (c) => {
const user = c.get('user');
const deviceId = c.req.param('deviceId');
const row = await getOrCreateSettings(db, user.userId);
const devices = (row.deviceSettings as Record<string, unknown>) || {};
delete devices[deviceId];
const [updated] = await db
.update(userSettings)
.set({ deviceSettings: devices, updatedAt: new Date() })
.where(eq(userSettings.userId, user.userId))
.returning();
return c.json(settingsResponse(updated));
});
return app;
}


@@ -1,103 +0,0 @@
/**
 * API Keys Service: generate, validate, and revoke service API keys
*/
import { eq, and, isNull } from 'drizzle-orm';
import { randomBytes, createHash } from 'crypto';
import type { Database } from '../db/connection';
import { NotFoundError } from '../lib/errors';
// Schema imported inline to avoid circular deps
import { apiKeys } from '../db/schema/api-keys';
export class ApiKeysService {
constructor(private db: Database) {}
private generateKey(): string {
return `sk_live_${randomBytes(32).toString('hex')}`;
}
private hashKey(key: string): string {
return createHash('sha256').update(key).digest('hex');
}
private getKeyPrefix(key: string): string {
return key.replace('sk_live_', '').slice(0, 8);
}
async listUserApiKeys(userId: string) {
return this.db
.select({
id: apiKeys.id,
name: apiKeys.name,
keyPrefix: apiKeys.keyPrefix,
scopes: apiKeys.scopes,
createdAt: apiKeys.createdAt,
lastUsedAt: apiKeys.lastUsedAt,
revokedAt: apiKeys.revokedAt,
})
.from(apiKeys)
.where(eq(apiKeys.userId, userId));
}
async createApiKey(userId: string, data: { name: string; scopes?: string[] }) {
const key = this.generateKey();
const hash = this.hashKey(key);
const prefix = this.getKeyPrefix(key);
const [created] = await this.db
.insert(apiKeys)
.values({
userId,
name: data.name,
keyHash: hash,
keyPrefix: prefix,
scopes: data.scopes || ['stt', 'tts'],
})
.returning();
return { ...created, key }; // Full key returned ONLY on creation
}
async revokeApiKey(userId: string, keyId: string) {
const [revoked] = await this.db
.update(apiKeys)
.set({ revokedAt: new Date() })
.where(and(eq(apiKeys.id, keyId), eq(apiKeys.userId, userId)))
.returning();
if (!revoked) throw new NotFoundError('API key not found');
return { success: true };
}
async validateApiKey(apiKey: string, scope?: string) {
const hash = this.hashKey(apiKey);
const [key] = await this.db
.select()
.from(apiKeys)
.where(and(eq(apiKeys.keyHash, hash), isNull(apiKeys.revokedAt)))
.limit(1);
if (!key) return { valid: false };
// Check scope if provided
if (scope && key.scopes && !(key.scopes as string[]).includes(scope)) {
return { valid: false, reason: 'scope_denied' };
}
// Update lastUsedAt (fire-and-forget)
this.db
.update(apiKeys)
.set({ lastUsedAt: new Date() })
.where(eq(apiKeys.id, key.id))
.catch(() => {});
return {
valid: true,
userId: key.userId,
scopes: key.scopes,
rateLimit: { requests: 60, window: 60 },
};
}
}
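The key lifecycle above (generate, hash for storage, keep only a display prefix) can be sketched standalone. Only the SHA-256 digest ever reaches the database; validation re-hashes the presented key and compares digests.

```typescript
import { randomBytes, createHash } from 'crypto';

// Mirrors generateKey / hashKey / getKeyPrefix above, runnable standalone.
const key = `sk_live_${randomBytes(32).toString('hex')}`;       // shown to the user once
const keyHash = createHash('sha256').update(key).digest('hex'); // what the DB stores
const keyPrefix = key.replace('sk_live_', '').slice(0, 8);      // for list views

// Validation later re-hashes the presented key and compares digests.
const presentedHash = createHash('sha256').update(key).digest('hex');
console.log(presentedHash === keyHash); // true
```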


@@ -1,137 +0,0 @@
/**
* Server-side singleton bootstrap.
*
* On first user-creation, write the singleton records that the webapp
* would otherwise create on demand via `ensureDoc()` /
* `getOrCreateLocalDoc()`. This makes the bootstrap deterministic
* every fresh client pulls the singleton from mana-sync instead of
* racing on a local insert that the next pull would clobber.
*
* Currently bootstrapped:
 * - `userContext` (per-user): the structured profile + freeform markdown
* blob keyed by `id='singleton'`. Default shape mirrors the webapp's
* `emptyUserContext()` factory in `profile/types.ts`.
*
 * (The per-Space `kontextDoc` singleton was retired; the
* notes.isSpaceContext flag now carries the same role, and a flagged
* Note is created on demand by the user, not bootstrapped empty.)
*
* Idempotency: the function performs an existence-check on
 * `sync_changes` before inserting; if a row matching the singleton's
* scope already exists, the call is a no-op. This makes the bootstrap
* safe to run from multiple sources without producing duplicate rows:
* - signup-time hook (databaseHooks.user.create.after) fires on the
* happy path
* - boot-time endpoint (POST /api/v1/me/bootstrap-singletons) fires
* on every webapp boot as a reconciliation belt-and-suspenders
*
* The TOCTOU race between two concurrent callers can theoretically
* still produce a duplicate insert, but field-LWW collapses duplicates
* harmlessly on the client (latest `at` wins). The check is a
* waste-reduction, not a correctness mechanism.
*/
import postgres from 'postgres';
interface Actor {
kind: 'system';
principalId: string;
displayName: string;
}
const BOOTSTRAP_ACTOR: Actor = {
kind: 'system',
principalId: 'system:bootstrap',
displayName: 'Bootstrap',
};
const BOOTSTRAP_CLIENT_ID = 'system:bootstrap';
const BOOTSTRAP_ORIGIN = 'system';
/**
* Build a `field_meta` object for the bootstrap insert: every key in
* `data` (except `id`) gets the same `at` timestamp. The receiving client
* reads this column and populates `__fieldMeta[k] = { at, actor:
* changeActor, origin: 'server-replay' }` — never surfaces as a conflict.
*/
function buildFieldMeta(data: Record<string, unknown>, at: string): Record<string, string> {
const meta: Record<string, string> = {};
for (const key of Object.keys(data)) {
if (key === 'id') continue;
meta[key] = at;
}
return meta;
}
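For illustration, the helper's output for a small record (standalone copy so the snippet runs on its own): every key except `id` maps to the shared timestamp.

```typescript
// Standalone copy of buildFieldMeta above, for illustration only.
function buildFieldMeta(data: Record<string, unknown>, at: string): Record<string, string> {
  const meta: Record<string, string> = {};
  for (const key of Object.keys(data)) {
    if (key === 'id') continue; // the id itself carries no field metadata
    meta[key] = at;
  }
  return meta;
}

const meta = buildFieldMeta(
  { id: 'singleton', about: {}, freeform: '', userId: 'u1' },
  '2026-05-08T00:00:00.000Z'
);
console.log(JSON.stringify(meta));
// {"about":"2026-05-08T00:00:00.000Z","freeform":"2026-05-08T00:00:00.000Z","userId":"2026-05-08T00:00:00.000Z"}
```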
/**
* Default content for a new user's `userContext` singleton. Keep in sync
* with `apps/mana/apps/web/src/lib/modules/profile/types.ts:emptyUserContext()`.
* If the shape ever drifts, the receiving client will merge whatever
 * fields the server emits via field-LWW: extra fields stay at their
 * default (`undefined`, i.e. no override), missing fields default to the
* client's local TypeScript shape on read.
*/
function emptyUserContextData(userId: string): Record<string, unknown> {
return {
id: 'singleton',
about: {},
interests: [],
routine: {},
nutrition: {},
leisure: {},
goals: [],
social: {},
freeform: '',
interview: { answeredIds: [], skippedIds: [] },
userId,
};
}
/**
 * Insert the per-user singletons into mana_sync.sync_changes. Idempotent:
* skips the insert if a row for `(userContext, 'singleton', userId)`
* already exists. Called from the post-signUp hook in routes/auth.ts and
* from the boot-time `/me/bootstrap-singletons` endpoint; both are
* fire-and-forget at the caller, but the caller can also `await` it
* (the boot endpoint does) and report failure to the client without
* causing a write conflict.
*
* Returns true if an insert was actually written, false if the
* idempotency check skipped it.
*/
export async function bootstrapUserSingletons(
userId: string,
syncSql: ReturnType<typeof postgres>
): Promise<boolean> {
if (!userId) throw new Error('bootstrapUserSingletons: empty userId');
const existing = await syncSql<Array<{ exists: number }>>`
SELECT 1 AS exists
FROM sync_changes
WHERE table_name = 'userContext'
AND record_id = 'singleton'
AND user_id = ${userId}
LIMIT 1
`;
if (existing.length > 0) return false;
const now = new Date().toISOString();
const data = emptyUserContextData(userId);
const fieldMeta = buildFieldMeta(data, now);
await syncSql`
INSERT INTO sync_changes (
app_id, table_name, record_id, user_id, space_id, op, data,
field_meta, client_id, schema_version, actor, origin
)
VALUES (
'profile', 'userContext', 'singleton', ${userId}, NULL, 'insert',
${syncSql.json(data as never)},
${syncSql.json(fieldMeta as never)},
${BOOTSTRAP_CLIENT_ID}, 1,
${syncSql.json(BOOTSTRAP_ACTOR as never)},
${BOOTSTRAP_ORIGIN}
)
`;
return true;
}


@@ -1,497 +0,0 @@
/**
* EncryptionVaultService integration tests (Phase 9 backlog #1).
*
* Exercises the full service surface against a real Postgres so the
* row-level-security policies, CHECK constraints and audit-row writes
* are tested as the production app actually sees them. Pure-crypto
* tests live in `kek.test.ts` and don't need this scaffolding.
*
* Test database
* -------------
* Reads `TEST_DATABASE_URL` from the environment. The whole suite is
* SKIPPED if the variable is not set, so unrelated CI runs (and the
* default `bun test` from a fresh checkout) don't fail with "no
 * connection"; only the encryption-vault sub-job has to provision a
* Postgres.
*
 * Schema is assumed to already exist (run `pnpm db:push` against the
 * test DB before invoking the suite). Each test case starts from its
 * own freshly seeded user, so the cases stay independent.
 *
 * Note on the user FK: encryption_vaults.user_id references auth.users
 * via ON DELETE CASCADE. A fresh user row is inserted in beforeEach and
 * the last one is dropped in afterAll; this avoids cross-test pollution
 * from a single shared user_id while still respecting the FK.
 */
import { describe, it, expect, beforeAll, beforeEach, afterAll } from 'bun:test';
import { drizzle } from 'drizzle-orm/postgres-js';
import postgres from 'postgres';
import { eq, sql } from 'drizzle-orm';
import { nanoid } from 'nanoid';
import * as schema from '../../db/schema';
import { users } from '../../db/schema/auth';
import { encryptionVaults, encryptionVaultAudit } from '../../db/schema/encryption-vaults';
import {
EncryptionVaultService,
VaultNotFoundError,
RecoveryWrapMissingError,
ZeroKnowledgeActiveError,
ZeroKnowledgeRotateForbidden,
} from './index';
import { loadKek, _resetForTesting as resetKek } from './kek';
const TEST_KEK_BASE64 = 'AQIDBAUGBwgJCgsMDQ4PEBESExQVFhcYGRobHB0eHyA=';
const TEST_DATABASE_URL = process.env.TEST_DATABASE_URL ?? '';
// Skip the entire suite if no test DB is configured. The describe.skip
// pattern keeps the file importable so type-checking still runs against
// production code.
const maybeDescribe = TEST_DATABASE_URL ? describe : describe.skip;
maybeDescribe('EncryptionVaultService (integration)', () => {
let client: ReturnType<typeof postgres>;
let db: ReturnType<typeof drizzle<typeof schema>>;
let service: EncryptionVaultService;
let testUserId: string;
beforeAll(async () => {
// Connect to the test database. `max: 5` keeps the connection
// pool small — we don't run anything in parallel inside one test
// suite, and CI runners are usually limited.
client = postgres(TEST_DATABASE_URL, { max: 5 });
db = drizzle(client, { schema });
resetKek();
await loadKek(TEST_KEK_BASE64);
service = new EncryptionVaultService(db);
});
afterAll(async () => {
// Drop the test user (CASCADE wipes the vault row + audit
// entries via FK). Then close the pool so bun test exits cleanly.
if (testUserId) {
await db.delete(users).where(eq(users.id, testUserId));
}
await client.end();
});
beforeEach(async () => {
// Fresh user per test so the unique-email constraint doesn't bite
// and so each test starts from a clean vault state.
testUserId = `test-user-${nanoid(8)}`;
await db.insert(users).values({
id: testUserId,
name: 'Vault Integration Test',
email: `${testUserId}@test.local`,
emailVerified: true,
});
});
// ─── init() ────────────────────────────────────────────────
describe('init', () => {
it('mints a fresh vault when none exists', async () => {
const result = await service.init(testUserId);
expect(result.masterKey).toBeInstanceOf(Uint8Array);
expect(result.masterKey!.length).toBe(32);
expect(result.formatVersion).toBe(1);
expect(result.kekId).toBe('env-v1');
expect(result.requiresRecoveryCode).toBeUndefined();
// Verify the row was actually inserted
const rows = await db
.select()
.from(encryptionVaults)
.where(eq(encryptionVaults.userId, testUserId));
expect(rows).toHaveLength(1);
expect(rows[0].wrappedMk).not.toBeNull();
expect(rows[0].wrapIv).not.toBeNull();
expect(rows[0].zeroKnowledge).toBe(false);
expect(rows[0].recoveryWrappedMk).toBeNull();
});
it('is idempotent — second call returns the same key', async () => {
const a = await service.init(testUserId);
const b = await service.init(testUserId);
expect(Buffer.from(a.masterKey!).toString('hex')).toBe(
Buffer.from(b.masterKey!).toString('hex')
);
});
it('writes init audit rows', async () => {
await service.init(testUserId);
await service.init(testUserId);
const audit = await db
.select()
.from(encryptionVaultAudit)
.where(eq(encryptionVaultAudit.userId, testUserId));
expect(audit.length).toBeGreaterThanOrEqual(2);
const actions = audit.map((a) => a.action);
expect(actions).toContain('init');
});
});
// ─── getStatus() ───────────────────────────────────────────
describe('getStatus', () => {
it('returns vaultExists=false for a user with no vault', async () => {
const status = await service.getStatus(testUserId);
expect(status.vaultExists).toBe(false);
expect(status.hasRecoveryWrap).toBe(false);
expect(status.zeroKnowledge).toBe(false);
expect(status.recoverySetAt).toBeNull();
});
it('reports vaultExists=true after init, no recovery yet', async () => {
await service.init(testUserId);
const status = await service.getStatus(testUserId);
expect(status.vaultExists).toBe(true);
expect(status.hasRecoveryWrap).toBe(false);
expect(status.zeroKnowledge).toBe(false);
});
it('reports hasRecoveryWrap=true after setRecoveryWrap', async () => {
await service.init(testUserId);
await service.setRecoveryWrap(testUserId, {
recoveryWrappedMk: 'AAAA',
recoveryIv: 'BBBB',
});
const status = await service.getStatus(testUserId);
expect(status.hasRecoveryWrap).toBe(true);
expect(status.zeroKnowledge).toBe(false);
expect(status.recoverySetAt).not.toBeNull();
});
it('reports zeroKnowledge=true after enableZeroKnowledge', async () => {
await service.init(testUserId);
await service.setRecoveryWrap(testUserId, {
recoveryWrappedMk: 'AAAA',
recoveryIv: 'BBBB',
});
await service.enableZeroKnowledge(testUserId);
const status = await service.getStatus(testUserId);
expect(status.zeroKnowledge).toBe(true);
expect(status.hasRecoveryWrap).toBe(true);
});
it('does NOT write an audit row (cheap metadata read)', async () => {
await service.init(testUserId);
// Clear audit rows from init
await db.execute(sql`DELETE FROM auth.encryption_vault_audit WHERE user_id = ${testUserId}`);
await service.getStatus(testUserId);
const audit = await db
.select()
.from(encryptionVaultAudit)
.where(eq(encryptionVaultAudit.userId, testUserId));
expect(audit).toHaveLength(0);
});
});
// ─── setRecoveryWrap() ─────────────────────────────────────
describe('setRecoveryWrap', () => {
it('stores the recovery wrap on an existing vault', async () => {
await service.init(testUserId);
await service.setRecoveryWrap(testUserId, {
recoveryWrappedMk: 'AAAA',
recoveryIv: 'BBBB',
});
const rows = await db
.select()
.from(encryptionVaults)
.where(eq(encryptionVaults.userId, testUserId));
expect(rows[0].recoveryWrappedMk).toBe('AAAA');
expect(rows[0].recoveryIv).toBe('BBBB');
expect(rows[0].recoverySetAt).not.toBeNull();
});
it('throws VaultNotFoundError when no vault exists', async () => {
await expect(
service.setRecoveryWrap(testUserId, {
recoveryWrappedMk: 'AAAA',
recoveryIv: 'BBBB',
})
).rejects.toThrow(VaultNotFoundError);
});
it('is idempotent — replaces the previous wrap', async () => {
await service.init(testUserId);
await service.setRecoveryWrap(testUserId, {
recoveryWrappedMk: 'AAAA',
recoveryIv: 'BBBB',
});
await service.setRecoveryWrap(testUserId, {
recoveryWrappedMk: 'CCCC',
recoveryIv: 'DDDD',
});
const rows = await db
.select()
.from(encryptionVaults)
.where(eq(encryptionVaults.userId, testUserId));
expect(rows[0].recoveryWrappedMk).toBe('CCCC');
expect(rows[0].recoveryIv).toBe('DDDD');
});
it('writes a recovery_set audit row', async () => {
await service.init(testUserId);
await service.setRecoveryWrap(testUserId, {
recoveryWrappedMk: 'AAAA',
recoveryIv: 'BBBB',
});
const audit = await db
.select()
.from(encryptionVaultAudit)
.where(eq(encryptionVaultAudit.userId, testUserId));
const actions = audit.map((a) => a.action);
expect(actions).toContain('recovery_set');
});
});
// ─── clearRecoveryWrap() ───────────────────────────────────
describe('clearRecoveryWrap', () => {
it('removes the recovery wrap', async () => {
await service.init(testUserId);
await service.setRecoveryWrap(testUserId, {
recoveryWrappedMk: 'AAAA',
recoveryIv: 'BBBB',
});
await service.clearRecoveryWrap(testUserId);
const rows = await db
.select()
.from(encryptionVaults)
.where(eq(encryptionVaults.userId, testUserId));
expect(rows[0].recoveryWrappedMk).toBeNull();
expect(rows[0].recoveryIv).toBeNull();
expect(rows[0].recoverySetAt).toBeNull();
});
it('throws ZeroKnowledgeActiveError when ZK is on', async () => {
await service.init(testUserId);
await service.setRecoveryWrap(testUserId, {
recoveryWrappedMk: 'AAAA',
recoveryIv: 'BBBB',
});
await service.enableZeroKnowledge(testUserId);
await expect(service.clearRecoveryWrap(testUserId)).rejects.toThrow(ZeroKnowledgeActiveError);
});
it('throws VaultNotFoundError when no vault exists', async () => {
await expect(service.clearRecoveryWrap(testUserId)).rejects.toThrow(VaultNotFoundError);
});
});
// ─── enableZeroKnowledge() ─────────────────────────────────
describe('enableZeroKnowledge', () => {
it('flips zero_knowledge=true and NULLs out wrapped_mk', async () => {
await service.init(testUserId);
await service.setRecoveryWrap(testUserId, {
recoveryWrappedMk: 'AAAA',
recoveryIv: 'BBBB',
});
await service.enableZeroKnowledge(testUserId);
const rows = await db
.select()
.from(encryptionVaults)
.where(eq(encryptionVaults.userId, testUserId));
expect(rows[0].zeroKnowledge).toBe(true);
expect(rows[0].wrappedMk).toBeNull();
expect(rows[0].wrapIv).toBeNull();
expect(rows[0].recoveryWrappedMk).not.toBeNull();
});
it('throws RecoveryWrapMissingError if no recovery wrap is set', async () => {
await service.init(testUserId);
await expect(service.enableZeroKnowledge(testUserId)).rejects.toThrow(
RecoveryWrapMissingError
);
});
it('is idempotent — second call is a no-op', async () => {
await service.init(testUserId);
await service.setRecoveryWrap(testUserId, {
recoveryWrappedMk: 'AAAA',
recoveryIv: 'BBBB',
});
await service.enableZeroKnowledge(testUserId);
// Should not throw
await service.enableZeroKnowledge(testUserId);
});
it('throws VaultNotFoundError when no vault exists', async () => {
await expect(service.enableZeroKnowledge(testUserId)).rejects.toThrow(VaultNotFoundError);
});
});
// ─── disableZeroKnowledge() ────────────────────────────────
describe('disableZeroKnowledge', () => {
it('restores wrapped_mk from a client-supplied master key', async () => {
await service.init(testUserId);
await service.setRecoveryWrap(testUserId, {
recoveryWrappedMk: 'AAAA',
recoveryIv: 'BBBB',
});
await service.enableZeroKnowledge(testUserId);
// Verify wrapped_mk is gone
let rows = await db
.select()
.from(encryptionVaults)
.where(eq(encryptionVaults.userId, testUserId));
expect(rows[0].wrappedMk).toBeNull();
// Hand back a fresh 32-byte MK and disable
const freshMk = new Uint8Array(32).fill(0x42);
await service.disableZeroKnowledge(testUserId, freshMk);
rows = await db
.select()
.from(encryptionVaults)
.where(eq(encryptionVaults.userId, testUserId));
expect(rows[0].zeroKnowledge).toBe(false);
expect(rows[0].wrappedMk).not.toBeNull();
expect(rows[0].wrapIv).not.toBeNull();
// Verify the round-trip: getMasterKey should now unwrap to
// the same 32 bytes we handed in
const fetched = await service.getMasterKey(testUserId);
expect(fetched.masterKey).not.toBeNull();
expect(Buffer.from(fetched.masterKey!).toString('hex')).toBe(
Buffer.from(freshMk).toString('hex')
);
});
it('is a no-op when ZK is already off', async () => {
await service.init(testUserId);
const fresh = new Uint8Array(32).fill(0x99);
// Should not throw
await service.disableZeroKnowledge(testUserId, fresh);
});
});
// ─── getMasterKey() ────────────────────────────────────────
describe('getMasterKey', () => {
it('returns the unwrapped MK in standard mode', async () => {
const init = await service.init(testUserId);
const fetch = await service.getMasterKey(testUserId);
expect(fetch.masterKey).not.toBeNull();
expect(Buffer.from(fetch.masterKey!).toString('hex')).toBe(
Buffer.from(init.masterKey!).toString('hex')
);
expect(fetch.requiresRecoveryCode).toBeUndefined();
});
it('returns recovery blob with requiresRecoveryCode=true in ZK mode', async () => {
await service.init(testUserId);
await service.setRecoveryWrap(testUserId, {
recoveryWrappedMk: 'WRAPPED-CT',
recoveryIv: 'WRAPPED-IV',
});
await service.enableZeroKnowledge(testUserId);
const result = await service.getMasterKey(testUserId);
expect(result.masterKey).toBeNull();
expect(result.requiresRecoveryCode).toBe(true);
expect(result.recoveryWrappedMk).toBe('WRAPPED-CT');
expect(result.recoveryIv).toBe('WRAPPED-IV');
});
it('throws VaultNotFoundError when uninitialised', async () => {
await expect(service.getMasterKey(testUserId)).rejects.toThrow(VaultNotFoundError);
});
});
// ─── rotate() ──────────────────────────────────────────────
describe('rotate', () => {
it('mints a fresh master key and wipes any existing recovery wrap', async () => {
const init = await service.init(testUserId);
await service.setRecoveryWrap(testUserId, {
recoveryWrappedMk: 'OLD-WRAP',
recoveryIv: 'OLD-IV',
});
const rotated = await service.rotate(testUserId);
expect(Buffer.from(rotated.masterKey!).toString('hex')).not.toBe(
Buffer.from(init.masterKey!).toString('hex')
);
// The old recovery wrap was for the old MK and is now invalid —
// the service wipes it on rotate to prevent confusion.
const rows = await db
.select()
.from(encryptionVaults)
.where(eq(encryptionVaults.userId, testUserId));
expect(rows[0].recoveryWrappedMk).toBeNull();
expect(rows[0].recoveryIv).toBeNull();
});
it('throws ZeroKnowledgeRotateForbidden in ZK mode', async () => {
await service.init(testUserId);
await service.setRecoveryWrap(testUserId, {
recoveryWrappedMk: 'AAAA',
recoveryIv: 'BBBB',
});
await service.enableZeroKnowledge(testUserId);
await expect(service.rotate(testUserId)).rejects.toThrow(ZeroKnowledgeRotateForbidden);
});
});
// ─── DB CHECK constraint enforcement ───────────────────────
describe('DB-level invariants', () => {
// Drizzle's chainable update() object isn't a real Promise — it
// only executes when you await it (or call .then). For these
// constraint-violation tests we wrap the call in an arrow so
// expect(...).rejects.toThrow() sees a real Promise.
it('enforces zk_consistency: setting wrapped_mk back while ZK active is rejected', async () => {
await service.init(testUserId);
await service.setRecoveryWrap(testUserId, {
recoveryWrappedMk: 'AAAA',
recoveryIv: 'BBBB',
});
await service.enableZeroKnowledge(testUserId);
// Try to set wrapped_mk back manually — should be rejected by
// the encryption_vaults_zk_consistency constraint.
await expect(
(async () => {
await db
.update(encryptionVaults)
.set({ wrappedMk: 'BAD', wrapIv: 'BAD' })
.where(eq(encryptionVaults.userId, testUserId));
})()
).rejects.toThrow(/encryption_vaults_zk_consistency/);
});
it('enforces wrap_iv_pair: setting wrap_iv to NULL while wrapped_mk is set is rejected', async () => {
await service.init(testUserId);
await expect(
(async () => {
await db
.update(encryptionVaults)
.set({ wrapIv: null })
.where(eq(encryptionVaults.userId, testUserId));
})()
).rejects.toThrow(/encryption_vaults_wrap_iv_pair/);
});
});
});


@@ -1,606 +0,0 @@
/**
 * EncryptionVaultService: server-side master key custody.
*
* Responsibilities:
* - init(userId): mint a fresh per-user master key, wrap it with the
* KEK, and store it. Idempotent: returns the existing vault if one
* already exists for this user.
* - getMasterKey(userId): unwrap the stored MK and return the raw 32
* bytes ready for HTTPS transit to the browser.
* - rotate(userId): mint a fresh MK, replace the existing wrap. The
 *   old MK is GONE; the caller must ensure all encrypted data is
* re-encrypted (or accepted as lost) before invoking rotate.
*
* All reads and writes go through `withUserScope(userId, fn)` so the
* row-level-security policy on `auth.encryption_vaults` and
* `auth.encryption_vault_audit` is satisfied. The transaction sets
* `app.current_user_id` via `set_config(..., true)` (LOCAL scope) so
* even if a future bug forgets the WHERE clause, the database refuses
* to expose another user's vault entry.
*
 * The audit table records every action, successful and failed, with
* IP, user-agent, and HTTP status. Routes pass these in via the
* AuditContext shape.
*/
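The `kek.ts` helpers (`wrapMasterKey` / `unwrapMasterKey`) are not shown in this diff; as a hedged sketch only, KEK-wrapping a 32-byte master key with AES-256-GCM could look like the following. The KEK loading, wire format, and tag placement here are assumptions for illustration, not the actual implementation.

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from 'crypto';

// ASSUMPTION: a 32-byte KEK normally loaded once from the environment;
// generated inline here only so the sketch is runnable.
const KEK = randomBytes(32);

function wrapMasterKey(mk: Buffer): { wrappedMk: string; wrapIv: string } {
  const iv = randomBytes(12); // fresh IV per wrap, as GCM requires
  const cipher = createCipheriv('aes-256-gcm', KEK, iv);
  // Sketch wire format: ciphertext with the 16-byte GCM auth tag appended.
  const ct = Buffer.concat([cipher.update(mk), cipher.final(), cipher.getAuthTag()]);
  return { wrappedMk: ct.toString('base64'), wrapIv: iv.toString('base64') };
}

function unwrapMasterKey(wrappedMk: string, wrapIv: string): Buffer {
  const blob = Buffer.from(wrappedMk, 'base64');
  const tag = blob.subarray(blob.length - 16); // split the appended auth tag
  const ct = blob.subarray(0, blob.length - 16);
  const decipher = createDecipheriv('aes-256-gcm', KEK, Buffer.from(wrapIv, 'base64'));
  decipher.setAuthTag(tag); // throws on tamper, so a bad wrap never unwraps
  return Buffer.concat([decipher.update(ct), decipher.final()]);
}

const mk = randomBytes(32); // stand-in for generateMasterKey()
const { wrappedMk, wrapIv } = wrapMasterKey(mk);
console.log(unwrapMasterKey(wrappedMk, wrapIv).equals(mk)); // true
```

The roundtrip property (unwrap(wrap(mk)) === mk) is what the integration tests above exercise end-to-end through `init()` and `getMasterKey()`.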
import { eq, sql } from 'drizzle-orm';
import { nanoid } from 'nanoid';
import type { Database } from '../../db/connection';
import {
encryptionVaults,
encryptionVaultAudit,
type EncryptionVault,
} from '../../db/schema/encryption-vaults';
import { wrapMasterKey, unwrapMasterKey, generateMasterKey, activeKekId } from './kek';
/** Per-request metadata used for audit log entries. */
export interface AuditContext {
ipAddress?: string;
userAgent?: string;
}
export interface VaultFetchResult {
/** Raw 32 bytes of the unwrapped master key. Caller must base64-encode
* before placing in the JSON response body.
*
* null in zero-knowledge mode the server cannot unwrap the MK
* itself and must return the recovery-wrapped blob instead. The
* route handler reads `requiresRecoveryCode` to know which branch
* to send to the client. */
masterKey: Uint8Array | null;
  /** Format version of the wrap currently in storage; bumps if we ever
* migrate the wire format. The client doesn't usually care, but the
* rotate flow uses it to know whether a re-wrap is needed. */
formatVersion: number;
/** Which KEK produced the wrapped value. Empty string in zero-knowledge
* mode (no KEK wrap exists). */
kekId: string;
/** True if the vault is in zero-knowledge mode and the client must
* provide a recovery code to unwrap. When set, masterKey is null
* and the recovery* fields are populated instead. */
requiresRecoveryCode?: boolean;
/** Recovery wrap ciphertext (only set when requiresRecoveryCode). */
recoveryWrappedMk?: string;
/** Recovery wrap IV (only set when requiresRecoveryCode). */
recoveryIv?: string;
}
/** Input for setting (or replacing) the recovery wrap. The client wraps
* the master key locally with a key derived from the recovery secret
* and sends only the resulting ciphertext + IV. The recovery secret
* itself NEVER touches the wire. */
export interface RecoveryWrapInput {
recoveryWrappedMk: string;
recoveryIv: string;
}
/** Snapshot of the vault row's high-level state, exposed via
* GET /api/v1/me/encryption-vault/status. The settings page reads
* this on mount to render the right UI section without having to
* trigger a full unwrap of the master key. Cheap (single SELECT,
* no decrypt). */
export interface VaultStatus {
/** True iff a vault row exists for this user. */
vaultExists: boolean;
/** True iff the user has a recovery wrap stored. Independent of
* whether zero-knowledge is currently active. */
hasRecoveryWrap: boolean;
/** True iff zero-knowledge mode is active (server has no usable
* KEK wrap, recovery wrap is the only way to unlock). */
zeroKnowledge: boolean;
/** ISO timestamp of when the recovery wrap was last set, or null
* if never set. Useful for "last backup" hints in the UI. */
recoverySetAt: string | null;
}
export class EncryptionVaultService {
constructor(private db: Database) {}
// ─── Public API ──────────────────────────────────────────
/**
* Idempotent vault initialisation. Returns the existing vault row if
* one already exists for this user, otherwise mints a fresh master
* key, wraps it with the KEK, and inserts.
*
* Returns the unwrapped master key bytes either way so the client
* can stash them immediately after the call.
*/
async init(userId: string, ctx: AuditContext = {}): Promise<VaultFetchResult> {
return this.withUserScope(userId, async (tx) => {
const existing = await tx
.select()
.from(encryptionVaults)
.where(eq(encryptionVaults.userId, userId))
.limit(1);
if (existing.length > 0) {
// Already initialised. If the user is in zero-knowledge mode,
// the server can no longer hand out the plaintext master key
// — the route handler will return the recovery blob instead.
const row = existing[0];
if (row.zeroKnowledge) {
await this.writeAudit(tx, userId, 'init', ctx, 200, 'already-exists-zk');
return {
masterKey: null,
formatVersion: row.recoveryFormatVersion,
kekId: '',
requiresRecoveryCode: true,
recoveryWrappedMk: row.recoveryWrappedMk!,
recoveryIv: row.recoveryIv!,
};
}
const masterKey = await unwrapMasterKey(row.wrappedMk!, row.wrapIv!);
await this.writeAudit(tx, userId, 'init', ctx, 200, 'already-exists');
return {
masterKey,
formatVersion: row.formatVersion,
kekId: row.kekId,
};
}
const mkBytes = generateMasterKey();
const { wrappedMk, wrapIv } = await wrapMasterKey(mkBytes);
await tx.insert(encryptionVaults).values({
userId,
wrappedMk,
wrapIv,
formatVersion: 1,
kekId: activeKekId(),
});
await this.writeAudit(tx, userId, 'init', ctx, 201, 'created');
return { masterKey: mkBytes, formatVersion: 1, kekId: activeKekId() };
});
}
/**
* Fetches the current master key for a user. Throws if no vault has
* been initialised yet; the route handler converts that to a 404 so
* the client can call init() to bootstrap.
*/
async getMasterKey(userId: string, ctx: AuditContext = {}): Promise<VaultFetchResult> {
return this.withUserScope(userId, async (tx) => {
const rows = await tx
.select()
.from(encryptionVaults)
.where(eq(encryptionVaults.userId, userId))
.limit(1);
if (rows.length === 0) {
await this.writeAudit(tx, userId, 'failed_fetch', ctx, 404, 'not-initialised');
throw new VaultNotFoundError(userId);
}
const row = rows[0];
// Zero-knowledge fork: the server CANNOT decrypt the MK and
// must return the recovery blob for the client to unwrap.
// `requiresRecoveryCode` flips the route handler's response
// shape — it sends the recovery wrap instead of a base64 MK.
if (row.zeroKnowledge) {
await this.writeAudit(tx, userId, 'fetch', ctx, 200, 'zk-recovery-blob');
return {
masterKey: null,
formatVersion: row.recoveryFormatVersion,
kekId: '',
requiresRecoveryCode: true,
recoveryWrappedMk: row.recoveryWrappedMk!,
recoveryIv: row.recoveryIv!,
};
}
let masterKey: Uint8Array;
try {
masterKey = await unwrapMasterKey(row.wrappedMk!, row.wrapIv!);
} catch (err) {
// Auth-tag mismatch, wrong KEK, malformed row — all the same
// to the caller (500), but we want a clear audit trail.
await this.writeAudit(
tx,
userId,
'failed_fetch',
ctx,
500,
`unwrap-failed: ${(err as Error).message}`
);
throw err;
}
await this.writeAudit(tx, userId, 'fetch', ctx, 200, null);
return { masterKey, formatVersion: row.formatVersion, kekId: row.kekId };
});
}
/**
* Rotates a user's master key. The old MK is permanently lost; the
* caller is responsible for re-encrypting any data that was sealed
* with it BEFORE calling this method, or accepting the loss.
*
* Use cases:
* - Suspected device compromise: rotate + force logout all
* sessions + tell user "your old data needs re-syncing"
* - Periodic best-practice rotation (rare in this design; the
* KEK can rotate without touching the MK)
*/
async rotate(userId: string, ctx: AuditContext = {}): Promise<VaultFetchResult> {
return this.withUserScope(userId, async (tx) => {
// Rotate is forbidden in zero-knowledge mode — the server can't
// re-wrap a key it can't read. The client has to disable
// zero-knowledge first (which restores a server-side wrap),
// then call rotate, then re-enable if desired.
const existing = await tx
.select()
.from(encryptionVaults)
.where(eq(encryptionVaults.userId, userId))
.limit(1);
if (existing.length > 0 && existing[0].zeroKnowledge) {
await this.writeAudit(tx, userId, 'rotate', ctx, 409, 'zk-rotate-forbidden');
throw new ZeroKnowledgeRotateForbidden(userId);
}
const mkBytes = generateMasterKey();
const { wrappedMk, wrapIv } = await wrapMasterKey(mkBytes);
const updated = await tx
.update(encryptionVaults)
.set({
wrappedMk,
wrapIv,
kekId: activeKekId(),
rotatedAt: new Date(),
// Rotation also wipes any existing recovery wrap — the
// new MK has nothing to do with the old one, so the old
// recovery code would unwrap into garbage. The user has
// to set up a fresh recovery code after rotating.
recoveryWrappedMk: null,
recoveryIv: null,
recoverySetAt: null,
})
.where(eq(encryptionVaults.userId, userId))
.returning();
if (updated.length === 0) {
// No existing vault — treat rotate as init.
await tx.insert(encryptionVaults).values({
userId,
wrappedMk,
wrapIv,
formatVersion: 1,
kekId: activeKekId(),
});
await this.writeAudit(tx, userId, 'rotate', ctx, 201, 'init-on-rotate');
} else {
await this.writeAudit(tx, userId, 'rotate', ctx, 200, null);
}
return { masterKey: mkBytes, formatVersion: 1, kekId: activeKekId() };
});
}
/**
* Cheap status read for UI rendering. No decryption, no audit
* row (this gets called on every settings page mount and we don't
* want to flood the audit log with read-only metadata fetches).
*
* Returns sane defaults when the vault row doesn't exist yet, so
* the page can render "vault not initialised" without needing a
* separate code path.
*/
async getStatus(userId: string): Promise<VaultStatus> {
return this.withUserScope(userId, async (tx) => {
const rows = await tx
.select({
recoveryWrappedMk: encryptionVaults.recoveryWrappedMk,
recoverySetAt: encryptionVaults.recoverySetAt,
zeroKnowledge: encryptionVaults.zeroKnowledge,
})
.from(encryptionVaults)
.where(eq(encryptionVaults.userId, userId))
.limit(1);
if (rows.length === 0) {
return {
vaultExists: false,
hasRecoveryWrap: false,
zeroKnowledge: false,
recoverySetAt: null,
};
}
const row = rows[0];
return {
vaultExists: true,
hasRecoveryWrap: row.recoveryWrappedMk !== null,
zeroKnowledge: row.zeroKnowledge,
recoverySetAt: row.recoverySetAt ? row.recoverySetAt.toISOString() : null,
};
});
}
// ─── Phase 9: Recovery Wrap + Zero-Knowledge ─────────────
/**
* Stores (or replaces) the user's recovery wrap. The client builds
* the wrap locally: it derives a key from the recovery secret, uses
* AES-GCM to encrypt the master key, and sends only the ciphertext + IV.
* The recovery secret itself NEVER touches the wire.
*
* Storing a recovery wrap does NOT enable zero-knowledge mode by
* itself; the user has to follow up with `enableZeroKnowledge` to
* actually delete the server-side wrap. This two-step setup gives
* the UI room to confirm the recovery code is backed up before
* the switch becomes irreversible.
*
* Idempotent: calling twice replaces the previous recovery wrap.
* Use case: user re-prints the recovery code with a fresh secret.
*/
async setRecoveryWrap(
userId: string,
input: RecoveryWrapInput,
ctx: AuditContext = {}
): Promise<void> {
return this.withUserScope(userId, async (tx) => {
const updated = await tx
.update(encryptionVaults)
.set({
recoveryWrappedMk: input.recoveryWrappedMk,
recoveryIv: input.recoveryIv,
recoveryFormatVersion: 1,
recoverySetAt: new Date(),
})
.where(eq(encryptionVaults.userId, userId))
.returning();
if (updated.length === 0) {
await this.writeAudit(tx, userId, 'recovery_set', ctx, 404, 'no-vault');
throw new VaultNotFoundError(userId);
}
await this.writeAudit(tx, userId, 'recovery_set', ctx, 200, null);
});
}
/**
* Removes the recovery wrap. Forbidden in zero-knowledge mode (would
* leave the user with no usable wrap and no way to unlock).
*/
async clearRecoveryWrap(userId: string, ctx: AuditContext = {}): Promise<void> {
return this.withUserScope(userId, async (tx) => {
const existing = await tx
.select()
.from(encryptionVaults)
.where(eq(encryptionVaults.userId, userId))
.limit(1);
if (existing.length === 0) {
await this.writeAudit(tx, userId, 'recovery_clear', ctx, 404, 'no-vault');
throw new VaultNotFoundError(userId);
}
if (existing[0].zeroKnowledge) {
await this.writeAudit(tx, userId, 'recovery_clear', ctx, 409, 'zk-active');
throw new ZeroKnowledgeActiveError(userId);
}
await tx
.update(encryptionVaults)
.set({
recoveryWrappedMk: null,
recoveryIv: null,
recoverySetAt: null,
})
.where(eq(encryptionVaults.userId, userId));
await this.writeAudit(tx, userId, 'recovery_clear', ctx, 200, null);
});
}
/**
* Enables zero-knowledge mode. NULLs out wrapped_mk + wrap_iv,
* sets zero_knowledge=true. After this, the server is computationally
* incapable of decrypting the user's data even with full DB +
* KEK access, until the user provides the recovery code on the
* next unlock.
*
* Precondition: a recovery wrap MUST already be stored. Without it,
* enabling zero-knowledge would lock the user out forever (the CHECK
* constraint enforces this at the DB level too).
*
* This is the destructive step. The UI should require an explicit
* confirmation modal; there is no undo without first calling
* `disableZeroKnowledge`, which itself requires a freshly-unwrapped
* MK from the client side.
*/
async enableZeroKnowledge(userId: string, ctx: AuditContext = {}): Promise<void> {
return this.withUserScope(userId, async (tx) => {
const rows = await tx
.select()
.from(encryptionVaults)
.where(eq(encryptionVaults.userId, userId))
.limit(1);
if (rows.length === 0) {
await this.writeAudit(tx, userId, 'zk_enable', ctx, 404, 'no-vault');
throw new VaultNotFoundError(userId);
}
if (rows[0].zeroKnowledge) {
// Already enabled — idempotent no-op so retried calls don't
// look like errors.
await this.writeAudit(tx, userId, 'zk_enable', ctx, 200, 'already-enabled');
return;
}
if (!rows[0].recoveryWrappedMk || !rows[0].recoveryIv) {
await this.writeAudit(tx, userId, 'zk_enable', ctx, 400, 'no-recovery-wrap');
throw new RecoveryWrapMissingError(userId);
}
await tx
.update(encryptionVaults)
.set({
zeroKnowledge: true,
wrappedMk: null,
wrapIv: null,
})
.where(eq(encryptionVaults.userId, userId));
await this.writeAudit(tx, userId, 'zk_enable', ctx, 200, null);
});
}
/**
* Disables zero-knowledge mode. The client must hand back a fresh
* KEK-friendly master key (i.e. the same MK it just unwrapped with
* the recovery code, re-supplied so the server can KEK-wrap it).
*
* Why doesn't the server generate a new MK? Because that would
* orphan all existing encrypted data. The user-side workflow is:
* 1. Unlock with recovery code (client now has the plaintext MK)
* 2. POST /zero-knowledge/disable with `{ masterKey: base64(MK) }`
* 3. Server KEK-wraps the supplied MK and stores it as wrapped_mk
* 4. zero_knowledge flips back to false
*
* The client SHOULD memzero its copy of the MK bytes after the call.
*/
async disableZeroKnowledge(
userId: string,
mkBytes: Uint8Array,
ctx: AuditContext = {}
): Promise<void> {
return this.withUserScope(userId, async (tx) => {
const rows = await tx
.select()
.from(encryptionVaults)
.where(eq(encryptionVaults.userId, userId))
.limit(1);
if (rows.length === 0) {
await this.writeAudit(tx, userId, 'zk_disable', ctx, 404, 'no-vault');
throw new VaultNotFoundError(userId);
}
if (!rows[0].zeroKnowledge) {
await this.writeAudit(tx, userId, 'zk_disable', ctx, 200, 'already-disabled');
return;
}
const { wrappedMk, wrapIv } = await wrapMasterKey(mkBytes);
await tx
.update(encryptionVaults)
.set({
zeroKnowledge: false,
wrappedMk,
wrapIv,
kekId: activeKekId(),
})
.where(eq(encryptionVaults.userId, userId));
await this.writeAudit(tx, userId, 'zk_disable', ctx, 200, null);
});
}
// ─── Internals ───────────────────────────────────────────
/**
* Wraps `fn` in a transaction with `app.current_user_id` set to the
* given userId via `set_config(..., true)`. RLS policies on
* encryption_vaults and encryption_vault_audit then admit only rows
* matching that userId: defense in depth on top of the explicit
* WHERE clauses.
*
* `set_config(name, value, true)` is the parameterised equivalent of
* `SET LOCAL` (which can't take bind parameters). The `true` flag
* scopes the setting to the current transaction.
*/
private async withUserScope<T>(
userId: string,
fn: (tx: Parameters<Parameters<Database['transaction']>[0]>[0]) => Promise<T>
): Promise<T> {
if (!userId) {
throw new Error('mana-auth/vault: userId is required for vault operations');
}
return this.db.transaction(async (tx) => {
await tx.execute(sql`SELECT set_config('app.current_user_id', ${userId}, true)`);
return fn(tx);
});
}
private async writeAudit(
tx: Parameters<Parameters<Database['transaction']>[0]>[0],
userId: string,
action:
| 'init'
| 'fetch'
| 'rotate'
| 'failed_fetch'
| 'recovery_set'
| 'recovery_clear'
| 'zk_enable'
| 'zk_disable',
ctx: AuditContext,
status: number,
context: string | null
): Promise<void> {
await tx.insert(encryptionVaultAudit).values({
id: nanoid(),
userId,
action,
ipAddress: ctx.ipAddress ?? null,
userAgent: ctx.userAgent ?? null,
context,
status,
});
}
}
/**
* Thrown when a fetch is attempted against a user who hasn't called
* init() yet. Routes catch this specifically to convert it to a 404
* (so the client can react with init() instead of treating it as a
* server error).
*/
export class VaultNotFoundError extends Error {
constructor(public userId: string) {
super(`encryption vault not initialised for user ${userId}`);
this.name = 'VaultNotFoundError';
}
}
/**
* Thrown when the client tries to enable zero-knowledge mode without
* first storing a recovery wrap. Routes convert to 400.
*/
export class RecoveryWrapMissingError extends Error {
constructor(public userId: string) {
super(`cannot enable zero-knowledge mode: no recovery wrap stored for user ${userId}`);
this.name = 'RecoveryWrapMissingError';
}
}
/**
* Thrown when the client tries to clear the recovery wrap while
* zero-knowledge mode is active (would lock the user out). Routes
* convert to 409.
*/
export class ZeroKnowledgeActiveError extends Error {
constructor(public userId: string) {
super(`cannot clear recovery wrap while zero-knowledge mode is active for user ${userId}`);
this.name = 'ZeroKnowledgeActiveError';
}
}
/**
* Thrown when rotate() is called on a vault in zero-knowledge mode.
* Routes convert to 409.
*/
export class ZeroKnowledgeRotateForbidden extends Error {
constructor(public userId: string) {
super(`cannot rotate master key in zero-knowledge mode for user ${userId}`);
this.name = 'ZeroKnowledgeRotateForbidden';
}
}
/** Re-export the type for route handlers. */
export type { EncryptionVault };


@ -1,133 +0,0 @@
/**
* KEK (Key Encryption Key) helper tests.
*
* Pure crypto; no Postgres or Drizzle dependency. Run with `bun test`.
*
* Service-level tests for EncryptionVaultService live in `index.test.ts`
* and require a real Postgres (RLS + CHECK constraints can't be
* faithfully reproduced with pg-mem). They auto-skip when
* TEST_DATABASE_URL is unset, so this kek.test.ts always runs.
*/
import { describe, it, expect, beforeEach } from 'bun:test';
import {
loadKek,
wrapMasterKey,
unwrapMasterKey,
generateMasterKey,
activeKekId,
_resetForTesting,
} from './kek';
// Deterministic 32-byte test KEK (NOT the dev fallback — that's all
// zeros, which would trigger the warning every test run).
const TEST_KEK_BASE64 = 'AQIDBAUGBwgJCgsMDQ4PEBESExQVFhcYGRobHB0eHyA=';
beforeEach(async () => {
_resetForTesting();
await loadKek(TEST_KEK_BASE64);
});
describe('loadKek', () => {
it('imports a valid 32-byte base64 KEK', () => {
expect(activeKekId()).toBe('env-v1');
});
it('rejects a base64 string that decodes to the wrong length', async () => {
_resetForTesting();
// 16 bytes — half the size of an AES-256 KEK
await expect(loadKek('AAAAAAAAAAAAAAAAAAAAAA==')).rejects.toThrow(/expected 32 bytes/);
});
it('is idempotent — second call is a no-op', async () => {
// Already loaded in beforeEach. A second call should not throw.
await loadKek(TEST_KEK_BASE64);
expect(activeKekId()).toBe('env-v1');
});
it('refuses to expose kekId before loadKek is called', () => {
_resetForTesting();
expect(() => activeKekId()).toThrow(/loadKek\(\) not called/);
});
});
describe('generateMasterKey', () => {
it('returns 32 bytes of cryptographic randomness', () => {
const a = generateMasterKey();
const b = generateMasterKey();
expect(a).toBeInstanceOf(Uint8Array);
expect(a.length).toBe(32);
expect(b.length).toBe(32);
// Two consecutive calls should virtually never collide
expect(Buffer.from(a).toString('hex')).not.toBe(Buffer.from(b).toString('hex'));
});
});
describe('wrapMasterKey / unwrapMasterKey roundtrip', () => {
it('roundtrips a freshly generated master key', async () => {
const mk = generateMasterKey();
const { wrappedMk, wrapIv } = await wrapMasterKey(mk);
expect(typeof wrappedMk).toBe('string');
expect(typeof wrapIv).toBe('string');
expect(wrappedMk.length).toBeGreaterThan(0);
expect(wrapIv.length).toBeGreaterThan(0);
const recovered = await unwrapMasterKey(wrappedMk, wrapIv);
expect(Buffer.from(recovered).toString('hex')).toBe(Buffer.from(mk).toString('hex'));
});
it('produces a different ciphertext for the same MK on each call', async () => {
const mk = generateMasterKey();
const a = await wrapMasterKey(mk);
const b = await wrapMasterKey(mk);
const c = await wrapMasterKey(mk);
expect(a.wrappedMk).not.toBe(b.wrappedMk);
expect(b.wrappedMk).not.toBe(c.wrappedMk);
expect(a.wrapIv).not.toBe(b.wrapIv);
// All three still unwrap correctly
expect(Buffer.from(await unwrapMasterKey(a.wrappedMk, a.wrapIv)).toString('hex')).toBe(
Buffer.from(mk).toString('hex')
);
expect(Buffer.from(await unwrapMasterKey(b.wrappedMk, b.wrapIv)).toString('hex')).toBe(
Buffer.from(mk).toString('hex')
);
});
it('rejects a master key of the wrong length', async () => {
await expect(wrapMasterKey(new Uint8Array(16))).rejects.toThrow(/32-byte master key/);
await expect(wrapMasterKey(new Uint8Array(64))).rejects.toThrow(/32-byte master key/);
});
});
describe('unwrapMasterKey error paths', () => {
it('throws on tampered ciphertext (auth tag mismatch)', async () => {
const mk = generateMasterKey();
const { wrappedMk, wrapIv } = await wrapMasterKey(mk);
// Flip the last base64 character to corrupt the auth tag
const lastChar = wrappedMk.charAt(wrappedMk.length - 1);
const swapped = lastChar === 'A' ? 'B' : 'A';
const tampered = wrappedMk.slice(0, -1) + swapped;
await expect(unwrapMasterKey(tampered, wrapIv)).rejects.toThrow();
});
it('throws on a wrong-length IV', async () => {
const mk = generateMasterKey();
const { wrappedMk } = await wrapMasterKey(mk);
const badIv = 'AAAAAAAA'; // 6 bytes after base64 decode
await expect(unwrapMasterKey(wrappedMk, badIv)).rejects.toThrow(/12-byte IV/);
});
it('throws when a different KEK was used to wrap', async () => {
// Wrap with the test KEK
const mk = generateMasterKey();
const { wrappedMk, wrapIv } = await wrapMasterKey(mk);
// Reload with a different KEK
_resetForTesting();
const otherKek = 'IB8eHRwbGhkYFxYVFBMSERAPDg0MCwoJCAcGBQQDAgE=';
await loadKek(otherKek);
await expect(unwrapMasterKey(wrappedMk, wrapIv)).rejects.toThrow();
});
});


@ -1,179 +0,0 @@
/**
* Key Encryption Key (KEK) loader and AES-GCM wrap/unwrap helpers.
*
* The KEK is a 32-byte AES-256 key loaded from the MANA_AUTH_KEK env
* var (base64). It wraps each user's master key (MK) before storage in
* `auth.encryption_vaults.wrapped_mk`. The KEK itself NEVER touches the
* database; it lives only in process memory and is sourced from a
* single environment variable that must be provisioned out of band
* (Docker secret, KMS-injected, etc.).
*
* Why a separate AES-GCM wrap instead of e.g. libsodium SecretBox?
* - Both Bun and the browser ship native Web Crypto AES-GCM, so the
* wire format is portable across the future "client-side wrap"
* scenario without bundling extra crypto deps.
* - The encryption-vault rows live behind row-level security and
* are never exposed; the threat model here is "what if an
* attacker dumps the auth DB?", which AES-GCM-256 with a 256-bit
* KEK fully addresses.
*
* Future migration to KMS / Vault:
* The KEK loader is a single function. When we move to AWS KMS or
* Hashicorp Vault, only `loadKek()` changes. The `wrapMasterKey` /
* `unwrapMasterKey` callers stay identical, and the wrapped_mk
* column gets a new `kek_id` value to mark which KEK produced it.
*/
import { logger } from '@mana/shared-hono';
const KEK_LENGTH_BYTES = 32; // AES-256
const IV_LENGTH_BYTES = 12; // AES-GCM standard
const MK_LENGTH_BYTES = 32; // user master key is also AES-256
let _kek: CryptoKey | null = null;
let _kekId: string | null = null;
/**
* Loads the KEK from a base64 string and prepares it for use as an
* AES-GCM key. Idempotent: subsequent calls with the same string are
* no-ops. Throws if the input is not exactly 32 bytes after decoding.
*
* Call this once at boot from `index.ts` after `loadConfig()` has run.
*/
export async function loadKek(base64: string): Promise<void> {
if (_kek) return;
const raw = base64ToBytes(base64);
if (raw.length !== KEK_LENGTH_BYTES) {
throw new Error(
`mana-auth/kek: expected ${KEK_LENGTH_BYTES} bytes after base64 decode, got ${raw.length}. ` +
'Generate a fresh key with `openssl rand -base64 32`.'
);
}
// Loud warning if the dev fallback KEK (32 zero bytes) is in use —
// catches accidental production deploys without a real secret.
if (raw.every((b) => b === 0)) {
logger.warn('mana-auth: USING DEV KEK (32 zero bytes). Set MANA_AUTH_KEK before production.');
}
_kek = await crypto.subtle.importKey(
'raw',
toBufferSource(raw),
{ name: 'AES-GCM', length: 256 },
false,
['encrypt', 'decrypt']
);
// kek_id format lets us distinguish env-loaded keys from future
// KMS-loaded ones at unwrap time. The `v1` suffix gives us a path
// for in-place rotation: a new KEK gets `env-v2`, old vault rows
// keep working until a background rotator re-wraps them.
_kekId = 'env-v1';
}
/** Returns the kek_id stamp written to encryption_vaults.kek_id. */
export function activeKekId(): string {
if (!_kekId) throw new Error('mana-auth/kek: loadKek() not called yet');
return _kekId;
}
/**
* Wraps a 32-byte master key with the KEK. Returns the base64 IV and
* base64 ciphertext (which includes the 16-byte AES-GCM auth tag at
* the tail). Both pieces are written to `encryption_vaults`.
*/
export async function wrapMasterKey(
mkBytes: Uint8Array
): Promise<{ wrappedMk: string; wrapIv: string }> {
if (mkBytes.length !== MK_LENGTH_BYTES) {
throw new Error(
`mana-auth/kek: expected ${MK_LENGTH_BYTES}-byte master key, got ${mkBytes.length}`
);
}
const kek = requireKek();
const iv = crypto.getRandomValues(new Uint8Array(IV_LENGTH_BYTES));
const ct = await crypto.subtle.encrypt(
{ name: 'AES-GCM', iv: toBufferSource(iv) },
kek,
toBufferSource(mkBytes)
);
return {
wrappedMk: bytesToBase64(new Uint8Array(ct)),
wrapIv: bytesToBase64(iv),
};
}
/**
* Unwraps a stored master key. Returns the raw 32 bytes ready to be
* sent to the client (over HTTPS) and re-imported as a CryptoKey by
* the browser.
*
* Throws on tampered ciphertext (auth tag mismatch), wrong IV length,
* wrong KEK, or any AES-GCM failure. The caller (vault service)
* surfaces these as 500s and writes a `failed_fetch` audit row.
*/
export async function unwrapMasterKey(wrappedMk: string, wrapIv: string): Promise<Uint8Array> {
const kek = requireKek();
const iv = base64ToBytes(wrapIv);
if (iv.length !== IV_LENGTH_BYTES) {
throw new Error(`mana-auth/kek: expected ${IV_LENGTH_BYTES}-byte IV, got ${iv.length}`);
}
const ct = base64ToBytes(wrappedMk);
const plain = await crypto.subtle.decrypt(
{ name: 'AES-GCM', iv: toBufferSource(iv) },
kek,
toBufferSource(ct)
);
const out = new Uint8Array(plain);
if (out.length !== MK_LENGTH_BYTES) {
throw new Error(
`mana-auth/kek: unwrapped key has wrong length ${out.length} (expected ${MK_LENGTH_BYTES})`
);
}
return out;
}
/**
* Generates a fresh 32-byte master key. Used by the vault service at
* vault initialisation time and during rotation.
*/
export function generateMasterKey(): Uint8Array {
return crypto.getRandomValues(new Uint8Array(MK_LENGTH_BYTES));
}
// ─── Internals ────────────────────────────────────────────────
function requireKek(): CryptoKey {
if (!_kek) {
throw new Error('mana-auth/kek: loadKek() must be called before any wrap/unwrap operation');
}
return _kek;
}
function bytesToBase64(bytes: Uint8Array): string {
let bin = '';
for (let i = 0; i < bytes.length; i++) bin += String.fromCharCode(bytes[i]);
return btoa(bin);
}
function base64ToBytes(b64: string): Uint8Array {
const bin = atob(b64);
const out = new Uint8Array(bin.length);
for (let i = 0; i < bin.length; i++) out[i] = bin.charCodeAt(i);
return out;
}
/** TS 5.7 compat — Uint8Array<ArrayBufferLike> isn't assignable to BufferSource. */
function toBufferSource(bytes: Uint8Array): ArrayBuffer {
const buf = new ArrayBuffer(bytes.length);
new Uint8Array(buf).set(bytes);
return buf;
}
// Test-only reset hook so `bun test` can reload the KEK between tests
// without re-running the whole module. Not exported from any barrel.
export function _resetForTesting(): void {
_kek = null;
_kekId = null;
}


@ -1,207 +0,0 @@
/**
* MissionGrantService unit tests.
*
* Crypto-only: stubs the EncryptionVaultService so we don't need a
* real Postgres. Generates a fresh RSA-OAEP-2048 keypair per-test,
* exports the public key as SPKI PEM, feeds it into the service, then
* unwraps the returned grant with the private key and checks it matches
* the expected HKDF output.
*/
import { describe, it, expect } from 'bun:test';
import { deriveMissionDataKeyRaw, GRANT_DERIVATION_VERSION } from '@mana/shared-ai';
import {
MissionGrantService,
MissionGrantNotConfigured,
ZeroKnowledgeGrantForbidden,
} from './mission-grant';
import type { EncryptionVaultService, VaultFetchResult } from './index';
const fixedMasterKey = new Uint8Array(32).map((_, i) => i + 1);
/** The service zero-fills the returned masterKey after use, so each
* getMasterKey() call must return a fresh copy; otherwise a second
* call in the same test would derive from all-zero bytes. */
function stubVault(result: VaultFetchResult): EncryptionVaultService {
return {
getMasterKey: async () => ({
...result,
masterKey: result.masterKey ? new Uint8Array(result.masterKey) : null,
}),
} as unknown as EncryptionVaultService;
}
async function genKeypair() {
const kp = await crypto.subtle.generateKey(
{
name: 'RSA-OAEP',
modulusLength: 2048,
publicExponent: new Uint8Array([1, 0, 1]),
hash: 'SHA-256',
},
true,
['encrypt', 'decrypt']
);
const spki = new Uint8Array(await crypto.subtle.exportKey('spki', kp.publicKey));
const pem =
'-----BEGIN PUBLIC KEY-----\n' +
chunk(bytesToBase64(spki), 64).join('\n') +
'\n-----END PUBLIC KEY-----';
return { pem, privateKey: kp.privateKey };
}
describe('MissionGrantService', () => {
it('mints a grant whose wrappedKey unwraps to the derived MDK', async () => {
const { pem, privateKey } = await genKeypair();
const service = new MissionGrantService(
stubVault({ masterKey: new Uint8Array(fixedMasterKey), formatVersion: 1, kekId: 'env-v1' }),
pem
);
const grant = await service.createGrant('user-1', {
missionId: 'mission-abc',
tables: ['notes', 'tasks'],
recordIds: ['notes:n1', 'tasks:t1'],
});
expect(grant.derivation.version).toBe(GRANT_DERIVATION_VERSION);
expect(grant.derivation.missionId).toBe('mission-abc');
expect(grant.derivation.tables).toEqual(['notes', 'tasks']);
expect(grant.derivation.recordIds).toEqual(['notes:n1', 'tasks:t1']);
const wrappedBytes = base64ToBytes(grant.wrappedKey);
const plain = new Uint8Array(
await crypto.subtle.decrypt({ name: 'RSA-OAEP' }, privateKey, toBufferSource(wrappedBytes))
);
const expectedMdk = await deriveMissionDataKeyRaw(
new Uint8Array(fixedMasterKey),
grant.derivation
);
expect(Array.from(plain)).toEqual(Array.from(expectedMdk));
});
it('sorts tables and recordIds before binding into the key', async () => {
const { pem, privateKey } = await genKeypair();
const service = new MissionGrantService(
stubVault({ masterKey: new Uint8Array(fixedMasterKey), formatVersion: 1, kekId: 'env-v1' }),
pem
);
const a = await service.createGrant('u', {
missionId: 'm',
tables: ['tasks', 'notes'],
recordIds: ['tasks:t1', 'notes:n1'],
});
const b = await service.createGrant('u', {
missionId: 'm',
tables: ['notes', 'tasks'],
recordIds: ['notes:n1', 'tasks:t1'],
});
const keyA = new Uint8Array(
await crypto.subtle.decrypt(
{ name: 'RSA-OAEP' },
privateKey,
toBufferSource(base64ToBytes(a.wrappedKey))
)
);
const keyB = new Uint8Array(
await crypto.subtle.decrypt(
{ name: 'RSA-OAEP' },
privateKey,
toBufferSource(base64ToBytes(b.wrappedKey))
)
);
expect(Array.from(keyA)).toEqual(Array.from(keyB));
});
it('rejects zero-knowledge users', async () => {
const { pem } = await genKeypair();
const service = new MissionGrantService(
stubVault({
masterKey: null,
formatVersion: 1,
kekId: '',
requiresRecoveryCode: true,
recoveryWrappedMk: 'x',
recoveryIv: 'y',
}),
pem
);
await expect(
service.createGrant('u', { missionId: 'm', tables: ['notes'], recordIds: ['notes:n1'] })
).rejects.toBeInstanceOf(ZeroKnowledgeGrantForbidden);
});
it('throws MissionGrantNotConfigured when no public key is set', async () => {
const service = new MissionGrantService(
stubVault({ masterKey: new Uint8Array(fixedMasterKey), formatVersion: 1, kekId: 'env-v1' }),
undefined
);
await expect(
service.createGrant('u', { missionId: 'm', tables: ['notes'], recordIds: ['notes:n1'] })
).rejects.toBeInstanceOf(MissionGrantNotConfigured);
});
it('rejects missing tables / recordIds', async () => {
const { pem } = await genKeypair();
const service = new MissionGrantService(
stubVault({ masterKey: new Uint8Array(fixedMasterKey), formatVersion: 1, kekId: 'env-v1' }),
pem
);
await expect(
service.createGrant('u', { missionId: 'm', tables: [], recordIds: ['a'] })
).rejects.toThrow(/tables/);
await expect(
service.createGrant('u', { missionId: 'm', tables: ['notes'], recordIds: [] })
).rejects.toThrow(/recordIds/);
});
it('clamps ttl to the upper bound', async () => {
const { pem } = await genKeypair();
const service = new MissionGrantService(
stubVault({ masterKey: new Uint8Array(fixedMasterKey), formatVersion: 1, kekId: 'env-v1' }),
pem
);
const grant = await service.createGrant('u', {
missionId: 'm',
tables: ['notes'],
recordIds: ['notes:n1'],
ttlMs: 365 * 24 * 60 * 60 * 1000, // 1 year → clamped to 30d
});
const ttlMs = new Date(grant.expiresAt).getTime() - new Date(grant.issuedAt).getTime();
expect(ttlMs).toBe(30 * 24 * 60 * 60 * 1000);
});
});
// ─── helpers ─────────────────────────────────────────────
function bytesToBase64(bytes: Uint8Array): string {
let bin = '';
for (let i = 0; i < bytes.length; i++) bin += String.fromCharCode(bytes[i]);
return btoa(bin);
}
function base64ToBytes(b64: string): Uint8Array {
const bin = atob(b64);
const out = new Uint8Array(bin.length);
for (let i = 0; i < bin.length; i++) out[i] = bin.charCodeAt(i);
return out;
}
function toBufferSource(bytes: Uint8Array): ArrayBuffer {
const buf = new ArrayBuffer(bytes.length);
new Uint8Array(buf).set(bytes);
return buf;
}
function chunk(s: string, n: number): string[] {
const out: string[] = [];
for (let i = 0; i < s.length; i += n) out.push(s.slice(i, i + n));
return out;
}


@ -1,207 +0,0 @@
/**
* MissionGrantService issues Key-Grants that let the `mana-ai`
* background runner decrypt scoped encrypted records without the
* user's browser being open.
*
* Flow:
* 1. Fetch the user's master key via the existing vault service.
* Zero-knowledge users return null, so the grant is refused.
* 2. Derive a Mission Data Key (MDK) with the canonical HKDF from
* `@mana/shared-ai`. Scope (tables + recordIds) is cryptographically
* bound, so any scope change invalidates the grant automatically.
* 3. RSA-OAEP-2048-wrap the raw MDK bytes with the mana-ai public
* key. Only the paired private key (held in mana-ai's process
* memory) can unwrap.
* 4. Return the grant blob `{ wrappedKey, derivation, issuedAt,
* expiresAt }`. The route attaches it to `Mission.grant` via the
* webapp's normal sync write path.
*
* Why here and not in mana-ai?
* Only mana-auth has the KEK, the vault rows, and therefore the
* unwrapped master key. Everyone else either doesn't get the key
* at all (services) or gets it transiently on first login (webapp).
* Centralising the grant mint means one audit-logged path, not two.
*/
import {
deriveMissionDataKeyRaw,
GRANT_DERIVATION_VERSION,
type GrantDerivation,
type MissionGrant,
} from '@mana/shared-ai';
import { EncryptionVaultService, VaultNotFoundError, type AuditContext } from './index';
/** Default lifetime of a freshly-minted grant. User keeps a mission
 * editing / ticking within this window → grant stays fresh; long
 * idle → grant expires and the runner falls back to foreground. */
const DEFAULT_TTL_MS = 7 * 24 * 60 * 60 * 1000; // 7 days
export interface CreateGrantInput {
missionId: string;
tables: string[];
recordIds: string[];
/** Override the default 7-day TTL. Upper-bounded by the service to
* stay below the rotation horizon. */
ttlMs?: number;
}
/** Thrown when the user is in zero-knowledge mode. The server has no
 * usable master key → cannot derive an MDK → grant is impossible.
* Routes convert to 409 ZK_ACTIVE so the UI can fall back to the
* foreground runner without treating this as an error. */
export class ZeroKnowledgeGrantForbidden extends Error {
constructor(public userId: string) {
super(`cannot issue mission grant: user ${userId} is in zero-knowledge mode`);
this.name = 'ZeroKnowledgeGrantForbidden';
}
}
/** Thrown when the service boots without a configured mission-grant
* public key. Routes convert to 503 so the UI degrades cleanly to
* foreground-only execution. */
export class MissionGrantNotConfigured extends Error {
constructor() {
super('mana-auth: MANA_AI_PUBLIC_KEY_PEM is not set — grants are disabled');
this.name = 'MissionGrantNotConfigured';
}
}
export class MissionGrantService {
private pubKeyPromise: Promise<CryptoKey> | null = null;
constructor(
private vaultService: EncryptionVaultService,
private publicKeyPem: string | undefined
) {}
/** Mints a fresh grant for the given mission + scope. Idempotent in
 * the sense that callers can invoke repeatedly to refresh the TTL:
* each call produces a new `wrappedKey` with the same MDK (HKDF is
* deterministic) but fresh `issuedAt`/`expiresAt`. */
async createGrant(
userId: string,
input: CreateGrantInput,
ctx: AuditContext = {}
): Promise<MissionGrant> {
if (!this.publicKeyPem) {
throw new MissionGrantNotConfigured();
}
validateInput(input);
// VaultFetchResult with null masterKey means the user is in
// zero-knowledge mode. The server simply has no way to help — the
// user has to disable ZK first or stick to the foreground runner.
const vault = await this.vaultService.getMasterKey(userId, ctx);
if (!vault.masterKey) {
throw new ZeroKnowledgeGrantForbidden(userId);
}
const derivation: GrantDerivation = {
version: GRANT_DERIVATION_VERSION,
missionId: input.missionId,
tables: [...input.tables].sort(),
recordIds: [...input.recordIds].sort(),
};
let mdkBytes: Uint8Array | null = null;
try {
mdkBytes = await deriveMissionDataKeyRaw(vault.masterKey, derivation);
const pubKey = await this.loadPublicKey();
const ct = await crypto.subtle.encrypt(
{ name: 'RSA-OAEP' },
pubKey,
toBufferSource(mdkBytes)
);
const now = Date.now();
const ttl = clampTtl(input.ttlMs ?? DEFAULT_TTL_MS);
return {
wrappedKey: bytesToBase64(new Uint8Array(ct)),
derivation,
issuedAt: new Date(now).toISOString(),
expiresAt: new Date(now + ttl).toISOString(),
};
} finally {
if (mdkBytes) mdkBytes.fill(0);
vault.masterKey.fill(0);
}
}
/** Lazily parse the PEM once per process. Web Crypto doesn't speak PEM
 * directly, so we strip the header/footer and decode the base64 DER. */
private loadPublicKey(): Promise<CryptoKey> {
if (!this.pubKeyPromise) {
this.pubKeyPromise = importRsaPublicKey(this.publicKeyPem!);
}
return this.pubKeyPromise;
}
}
// ─── Helpers ─────────────────────────────────────────────────
function validateInput(input: CreateGrantInput): void {
if (!input.missionId) throw new Error('missionId is required');
if (!Array.isArray(input.tables) || input.tables.length === 0) {
throw new Error('tables must be a non-empty array');
}
if (!Array.isArray(input.recordIds) || input.recordIds.length === 0) {
throw new Error('recordIds must be a non-empty array');
}
if (input.recordIds.length > 1000) {
// Hard cap so a pathological client can't blow up the HKDF info
// string. 1000 is ~50KB of info bytes which Web Crypto handles
// fine but we don't need more than that for any real mission.
throw new Error('recordIds must not exceed 1000 entries');
}
}
/** Clamp the requested TTL to [1h, 30d]. Below 1h is probably a bug;
* above 30d forces a re-consent eventually even for long-running
* missions. */
function clampTtl(ms: number): number {
const MIN = 60 * 60 * 1000;
const MAX = 30 * 24 * 60 * 60 * 1000;
if (ms < MIN) return MIN;
if (ms > MAX) return MAX;
return ms;
}
async function importRsaPublicKey(pem: string): Promise<CryptoKey> {
const body = pem
.replace(/-----BEGIN [^-]+-----/g, '')
.replace(/-----END [^-]+-----/g, '')
.replace(/\s+/g, '');
const der = base64ToBytes(body);
return crypto.subtle.importKey(
'spki',
toBufferSource(der),
{ name: 'RSA-OAEP', hash: 'SHA-256' },
false,
['encrypt']
);
}
function bytesToBase64(bytes: Uint8Array): string {
let bin = '';
for (let i = 0; i < bytes.length; i++) bin += String.fromCharCode(bytes[i]);
return btoa(bin);
}
function base64ToBytes(b64: string): Uint8Array {
const bin = atob(b64);
const out = new Uint8Array(bin.length);
for (let i = 0; i < bin.length; i++) out[i] = bin.charCodeAt(i);
return out;
}
function toBufferSource(bytes: Uint8Array): ArrayBuffer {
const buf = new ArrayBuffer(bytes.length);
new Uint8Array(buf).set(bytes);
return buf;
}
// Re-export VaultNotFoundError so the route can catch it from one import.
export { VaultNotFoundError };


@@ -1,118 +0,0 @@
/**
* Unit tests for PasskeyRateLimitService.
*
 * Isolated from DB + network. Asserts the four invariants:
* - IP bucket on /authenticate/options blocks after 20 req / min
* - Credential bucket blocks after 10 failures / min for 5 min
* - Successful verify clears the credential bucket
* - sweep() removes expired buckets without affecting blocked ones
*/
import { describe, it, expect } from 'bun:test';
import { PasskeyRateLimitService } from './passkey-rate-limit';
describe('PasskeyRateLimitService.checkOptions (IP bucket)', () => {
it('allows up to 20 requests per minute per IP', () => {
const svc = new PasskeyRateLimitService();
for (let i = 0; i < 20; i++) {
expect(svc.checkOptions('1.2.3.4').allowed).toBe(true);
}
});
it('blocks the 21st request in the same minute', () => {
const svc = new PasskeyRateLimitService();
for (let i = 0; i < 20; i++) svc.checkOptions('1.2.3.4');
const res = svc.checkOptions('1.2.3.4');
expect(res.allowed).toBe(false);
if (!res.allowed) {
expect(res.retryAfterSec).toBeGreaterThan(0);
expect(res.retryAfterSec).toBeLessThanOrEqual(60);
}
});
it('buckets are per-IP (one IP blocked does not affect another)', () => {
const svc = new PasskeyRateLimitService();
for (let i = 0; i < 25; i++) svc.checkOptions('1.2.3.4');
expect(svc.checkOptions('1.2.3.4').allowed).toBe(false);
expect(svc.checkOptions('5.6.7.8').allowed).toBe(true);
});
});
describe('PasskeyRateLimitService.checkVerify / recordVerifyFailure', () => {
it('allows a fresh credential without any recorded failures', () => {
const svc = new PasskeyRateLimitService();
expect(svc.checkVerify('cred-A').allowed).toBe(true);
});
it('blocks a credential on the 11th failure (limit=10 allows 10, blocks 11th)', () => {
const svc = new PasskeyRateLimitService();
// Standard rate-limit semantics: limit N means N allowed, N+1
// triggers the block. Spec tracks the contract, not an off-by-one.
for (let i = 0; i < 11; i++) svc.recordVerifyFailure('cred-A');
const res = svc.checkVerify('cred-A');
expect(res.allowed).toBe(false);
if (!res.allowed) {
// 5-minute block window.
expect(res.retryAfterSec).toBeGreaterThan(60);
expect(res.retryAfterSec).toBeLessThanOrEqual(5 * 60);
}
});
it('clearVerifySuccess wipes the failure bucket', () => {
const svc = new PasskeyRateLimitService();
for (let i = 0; i < 11; i++) svc.recordVerifyFailure('cred-A');
expect(svc.checkVerify('cred-A').allowed).toBe(false);
svc.clearVerifySuccess('cred-A');
expect(svc.checkVerify('cred-A').allowed).toBe(true);
});
it('does not cross-contaminate different credentials', () => {
const svc = new PasskeyRateLimitService();
for (let i = 0; i < 15; i++) svc.recordVerifyFailure('cred-A');
expect(svc.checkVerify('cred-A').allowed).toBe(false);
expect(svc.checkVerify('cred-B').allowed).toBe(true);
});
});
describe('PasskeyRateLimitService lockout isolation', () => {
it('passkey rate limit and password lockout are independent stores', () => {
// There's nothing to assert here beyond "these services don't
// share state" — but the regression this guards against is
// real: the original bug had the password lockout counter
// tripping on passkey failures. This file's mere existence
// (and the separation at the service level) codifies the
// invariant.
const svc = new PasskeyRateLimitService();
for (let i = 0; i < 100; i++) svc.recordVerifyFailure('cred-A');
// Importantly: the AccountLockoutService DB is untouched
// because it's never reached via this code path. The
// integration test in auth-routes.spec.ts covers the HTTP
// layer.
expect(svc.checkVerify('cred-A').allowed).toBe(false);
});
});
describe('PasskeyRateLimitService.sweep', () => {
it('removes idle buckets but preserves blocked ones', async () => {
const svc = new PasskeyRateLimitService();
// Put IP A over the limit → blocked.
for (let i = 0; i < 21; i++) svc.checkOptions('A');
// Put IP B at a moderate count, then age it by fast-forwarding
// the window artificially — sweep should kill idle B.
svc.checkOptions('B');
// Hack: sweep won't touch B until its resetAt < now. That
// requires waiting a full minute, which would slow the suite
// to a crawl. Instead, we test the logical contract: a fresh
// sweep should NOT evict a still-blocked bucket.
const before = (svc as unknown as { ipBuckets: Map<string, unknown> }).ipBuckets.size;
svc.sweep();
const after = (svc as unknown as { ipBuckets: Map<string, unknown> }).ipBuckets.size;
// A should still be there (blocked); B may or may not be (depending
// on timing; just verify we didn't lose the blocked one).
expect(after).toBeGreaterThanOrEqual(1);
expect(before).toBeGreaterThanOrEqual(1);
});
});


@@ -1,203 +0,0 @@
/**
* Passkey-specific rate limiter.
*
* Kept deliberately separate from the password lockout
* (AccountLockoutService) because:
*
* 1. A compromised passkey implies physical access to the
* authenticator different threat model than a guessed
* password. Spamming failed passkey verifies is a DoS/enum
* attempt, not a credential-guessing attempt.
* 2. The lockout buckets by email, but passkey
* /authenticate/options runs BEFORE the user is known
* (conditional UI gives the browser a challenge, then the
* authenticator picks a credential). There's no email to
 * bucket by at that point, only an IP.
* 3. We don't want a passkey DoS to lock a user out of password
* login. Separate counters = separate blast radius.
*
* Two distinct buckets:
*
* - IP-based on `/authenticate/options` (unauthenticated
* endpoint, amplification target): N requests per minute.
* - CredentialID-based on `/authenticate/verify` failures:
* after M failures in a minute, reject for K minutes. Protects
* against counter-replay + credential-harvesting.
*
 * In-memory and per-process: sufficient for single-instance dev +
* small-scale prod. Swap to Redis once mana-auth runs multi-
* replica. The existing `mana-redis` container is already in the
* compose; wiring it is a straight substitution of the `Map` with
* a Redis-backed store.
*/
import { logger } from '@mana/shared-hono';
interface Bucket {
count: number;
/** Epoch ms when this bucket resets */
resetAt: number;
/** Epoch ms until which requests are rejected (set when count exceeded) */
blockedUntil?: number;
}
/** Config for each limiter. */
interface LimiterOptions {
/** How many events to allow in the window. */
limit: number;
/** Window size in milliseconds. */
windowMs: number;
/** How long to block for after the limit is hit. Defaults to windowMs. */
blockMs?: number;
}
/**
* Two separate limiters with their own key namespaces. Exposed as a
* single service so the passkey routes don't reach for two distinct
* dependencies.
*/
export class PasskeyRateLimitService {
private ipBuckets = new Map<string, Bucket>();
private credentialBuckets = new Map<string, Bucket>();
// Defaults chosen to be noticeable on real attacks but invisible
// to legitimate users. Conditional UI only fires once per login
// page mount; 20/min per IP accommodates a busy multi-user IP
// (corporate NAT) while stopping a script looping the endpoint.
private readonly optionsOpts: LimiterOptions = {
limit: 20,
windowMs: 60 * 1000,
blockMs: 60 * 1000,
};
// Verify: 10 failures / min per credential → block that credential
// for 5 min. Successful verifies reset the bucket.
private readonly verifyOpts: LimiterOptions = {
limit: 10,
windowMs: 60 * 1000,
blockMs: 5 * 60 * 1000,
};
/**
* Check + increment the IP bucket for `/authenticate/options`.
* Returns `{ allowed: true }` when under limit, `{ allowed: false,
* retryAfterSec }` when blocked.
*
 * Always counts toward the limit, even when returning allowed;
* that's the whole point of rate limiting.
*/
checkOptions(ip: string): { allowed: true } | { allowed: false; retryAfterSec: number } {
return this.bump(this.ipBuckets, ip, this.optionsOpts);
}
/**
* Record a failed `/authenticate/verify` for a given credential
* ID. Call this AFTER the verification upstream returned a failure
 * (i.e. not for every verify call, only the ones that didn't
* authenticate). Returns the same shape as checkOptions so the
* caller can decide whether to still return the real error or
* downgrade to a rate-limit error for subsequent attempts.
*/
recordVerifyFailure(
credentialId: string
): { allowed: true } | { allowed: false; retryAfterSec: number } {
return this.bump(this.credentialBuckets, credentialId, this.verifyOpts);
}
/**
* Check whether a credential is currently blocked WITHOUT bumping
* the counter. Called at the TOP of /authenticate/verify before we
 * hit the upstream; a blocked credential should not even get its
* verification attempted.
*/
checkVerify(credentialId: string): { allowed: true } | { allowed: false; retryAfterSec: number } {
const bucket = this.credentialBuckets.get(credentialId);
if (!bucket) return { allowed: true };
const now = Date.now();
if (bucket.blockedUntil && bucket.blockedUntil > now) {
return { allowed: false, retryAfterSec: Math.ceil((bucket.blockedUntil - now) / 1000) };
}
return { allowed: true };
}
/**
* Reset a credential's failure counter on successful verify so a
* user who mistypes their PIN a few times doesn't stay penalised
* after they succeed.
*/
clearVerifySuccess(credentialId: string): void {
this.credentialBuckets.delete(credentialId);
}
private bump(
store: Map<string, Bucket>,
key: string,
opts: LimiterOptions
): { allowed: true } | { allowed: false; retryAfterSec: number } {
const now = Date.now();
const existing = store.get(key);
// Reject immediately if currently blocked.
if (existing?.blockedUntil && existing.blockedUntil > now) {
return {
allowed: false,
retryAfterSec: Math.ceil((existing.blockedUntil - now) / 1000),
};
}
// Start or continue a bucket.
const bucket: Bucket =
existing && existing.resetAt > now ? existing : { count: 0, resetAt: now + opts.windowMs };
bucket.count += 1;
if (bucket.count > opts.limit) {
bucket.blockedUntil = now + (opts.blockMs ?? opts.windowMs);
store.set(key, bucket);
logger.warn('passkey rate limit exceeded', {
key: hashForLog(key),
count: bucket.count,
limit: opts.limit,
blockedForSec: Math.ceil((opts.blockMs ?? opts.windowMs) / 1000),
});
return {
allowed: false,
retryAfterSec: Math.ceil((opts.blockMs ?? opts.windowMs) / 1000),
};
}
store.set(key, bucket);
return { allowed: true };
}
/**
* Sweep expired buckets. The process is long-lived and buckets
* never leave unless someone calls this; a user churn rate of
 * ~1 new IP/second implies ~86k entries/day, which is noticeable.
* Call periodically from index.ts via setInterval.
*/
sweep(): void {
const now = Date.now();
for (const [k, v] of this.ipBuckets) {
if (v.resetAt < now && (!v.blockedUntil || v.blockedUntil < now)) {
this.ipBuckets.delete(k);
}
}
for (const [k, v] of this.credentialBuckets) {
if (v.resetAt < now && (!v.blockedUntil || v.blockedUntil < now)) {
this.credentialBuckets.delete(k);
}
}
}
}
/**
* Hash bucket keys for logs so IPs + credential IDs don't land in
 * JSON logs verbatim. Non-cryptographic, just obfuscation.
*/
function hashForLog(key: string): string {
let h = 0;
for (let i = 0; i < key.length; i++) {
h = ((h << 5) - h + key.charCodeAt(i)) | 0;
}
return Math.abs(h).toString(36).padStart(8, '0').slice(0, 8);
}
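The fixed-window contract of `bump()` — limit N allows N events per window, event N+1 sets the block — can be mirrored in a minimal self-contained sketch (not the service itself; no blocking/reset nuances beyond the core contract):

```typescript
interface Bucket {
  count: number;
  resetAt: number;
  blockedUntil?: number;
}

// Minimal fixed-window counter: returns true while under the limit,
// false once the limit is exceeded (and for the rest of the block).
function bump(store: Map<string, Bucket>, key: string, limit: number, windowMs: number, now: number): boolean {
  const existing = store.get(key);
  if (existing?.blockedUntil && existing.blockedUntil > now) return false;
  const bucket = existing && existing.resetAt > now ? existing : { count: 0, resetAt: now + windowMs };
  bucket.count += 1;
  if (bucket.count > limit) {
    bucket.blockedUntil = now + windowMs;
    store.set(key, bucket);
    return false;
  }
  store.set(key, bucket);
  return true;
}

const store = new Map<string, Bucket>();
// 21 calls from one IP against limit=20: the first 20 pass, the 21st blocks.
const results = Array.from({ length: 21 }, () => bump(store, '1.2.3.4', 20, 60_000, 0));
console.log(results.filter(Boolean).length); // 20
console.log(results[20]); // false
```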


@@ -1,152 +0,0 @@
/**
* Security Services Audit logging + Account lockout
*/
import { sql } from 'drizzle-orm';
import { logger } from '@mana/shared-hono';
import type { Database } from '../db/connection';
// Security events — fire-and-forget, never throw
const EVENT_TYPES = [
'LOGIN_SUCCESS',
'LOGIN_FAILURE',
'REGISTER',
'LOGOUT',
'PASSWORD_CHANGED',
'PASSWORD_RESET_REQUESTED',
'PASSWORD_RESET_COMPLETED',
'EMAIL_VERIFIED',
'ACCOUNT_DELETED',
'ACCOUNT_LOCKED',
'PROFILE_UPDATED',
'API_KEY_CREATED',
'API_KEY_REVOKED',
'PASSKEY_REGISTERED',
'PASSKEY_LOGIN_SUCCESS',
'TWO_FACTOR_ENABLED',
'TWO_FACTOR_DISABLED',
'ORG_CREATED',
'ORG_DELETED',
] as const;
export class SecurityEventsService {
constructor(private db: Database) {}
async logEvent(params: {
userId?: string;
eventType: string;
ipAddress?: string;
userAgent?: string;
metadata?: Record<string, unknown>;
}) {
// postgres-js renders `undefined` as literal nothing in tagged-template
// SQL — `${undefined}` collapses the parameter slot, producing
// `VALUES (..., , , ...)` and a syntax error. Explicitly fall back to
// `null` so optional fields go in as NULL.
const userId = params.userId ?? null;
const ipAddress = params.ipAddress ?? null;
const userAgent = params.userAgent ?? null;
const metadata = JSON.stringify(params.metadata ?? {});
try {
// Use raw SQL since securityEvents table may be in auth schema
await this.db.execute(
sql`INSERT INTO auth.security_events (id, user_id, event_type, ip_address, user_agent, metadata, created_at)
VALUES (gen_random_uuid(), ${userId}, ${params.eventType}, ${ipAddress}, ${userAgent}, ${metadata}::jsonb, NOW())`
);
} catch (error) {
// Audit logging is non-critical, so we never throw — but actually
// surface the error message so the failure mode is debuggable
// instead of a silent warn that hides the real cause.
logger.warn('security.logEvent failed (non-critical)', {
eventType: params.eventType,
userId: params.userId,
error: error instanceof Error ? error.message : String(error),
});
}
}
async getUserEvents(userId: string, limit = 50) {
try {
const result = await this.db.execute(
sql`SELECT * FROM auth.security_events WHERE user_id = ${userId} ORDER BY created_at DESC LIMIT ${limit}`
);
return result;
} catch {
return [];
}
}
}
// Lockout policy: 5 failures in 15 min → locked 30 min
const MAX_ATTEMPTS = 5;
const WINDOW_MINUTES = 15;
const LOCKOUT_MINUTES = 30;
export class AccountLockoutService {
constructor(private db: Database) {}
async checkLockout(email: string): Promise<{ locked: boolean; remainingSeconds?: number }> {
try {
// postgres-js can't bind a JS Date directly via the drizzle sql
// template — it tries to byteLength() the parameter and crashes
// with `Received an instance of Date`. Pass an ISO string instead.
const windowStart = new Date(Date.now() - WINDOW_MINUTES * 60 * 1000).toISOString();
const result = await this.db.execute(
sql`SELECT COUNT(*) as count, MAX(attempted_at) as last_attempt
FROM auth.login_attempts
WHERE email = ${email} AND successful = false AND attempted_at > ${windowStart}`
);
const row = (result as any)[0];
if (!row || Number(row.count) < MAX_ATTEMPTS) return { locked: false };
const lastAttempt = new Date(row.last_attempt);
const lockoutEnd = new Date(lastAttempt.getTime() + LOCKOUT_MINUTES * 60 * 1000);
if (Date.now() > lockoutEnd.getTime()) return { locked: false };
return {
locked: true,
remainingSeconds: Math.ceil((lockoutEnd.getTime() - Date.now()) / 1000),
};
} catch (error) {
// Fail open on lockout-check errors (we'd rather let a legit
// user log in than block them on a transient DB hiccup), but
// surface the cause so the next bug doesn't take 4 hours to
// find like this one did.
logger.warn('lockout.checkLockout failed (fail-open)', {
email,
error: error instanceof Error ? error.message : String(error),
});
return { locked: false };
}
}
async recordAttempt(email: string, successful: boolean, ipAddress?: string) {
try {
// Don't INSERT id — auth.login_attempts.id is a serial integer
// (`nextval('auth.login_attempts_id_seq')` default), not a UUID.
// The previous code passed `gen_random_uuid()` into it and the
// resulting type-cast error was silently eaten by the catch
// below — meaning lockout's "5 failures in 15 min" check ran on
// an empty table forever and the lockout never actually triggered.
await this.db.execute(
sql`INSERT INTO auth.login_attempts (email, successful, ip_address, attempted_at)
VALUES (${email}, ${successful}, ${ipAddress ?? null}, NOW())`
);
} catch (error) {
logger.warn('lockout.recordAttempt failed (non-critical)', {
email,
successful,
error: error instanceof Error ? error.message : String(error),
});
}
}
async clearAttempts(email: string) {
try {
await this.db.execute(sql`DELETE FROM auth.login_attempts WHERE email = ${email}`);
} catch {
// Non-critical
}
}
}
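The lockout-window arithmetic in `checkLockout` reduces to: the last failure at T locks until T + LOCKOUT_MINUTES, and `remainingSeconds` is the ceiling of the time left. A small standalone illustration with fixed timestamps:

```typescript
const LOCKOUT_MINUTES = 30;

// 5th failure recorded at 12:00 UTC; we check again at 12:10 UTC.
const lastAttempt = Date.parse('2026-05-08T12:00:00Z');
const now = Date.parse('2026-05-08T12:10:00Z');

const lockoutEnd = lastAttempt + LOCKOUT_MINUTES * 60 * 1000;
const remainingSeconds = Math.ceil((lockoutEnd - now) / 1000);
console.log(remainingSeconds); // 1200 (20 minutes still locked)
```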


@@ -1,93 +0,0 @@
/**
* Signup Limit Daily registration cap ("Organic Growth Gate")
*
* Limits new registrations per day to protect hardware and
* enable organic growth. Uses PostgreSQL security_events table
* (no Redis dependency needed).
*
* Configure via MAX_DAILY_SIGNUPS env var (default: 0 = unlimited).
*/
import { sql } from 'drizzle-orm';
import type { Database } from '../db/connection';
export class SignupLimitService {
private maxDaily: number;
constructor(private db: Database) {
this.maxDaily = parseInt(process.env.MAX_DAILY_SIGNUPS || '0', 10);
}
/** Check if registration is allowed right now */
async checkLimit(): Promise<{
allowed: boolean;
current: number;
limit: number;
resetsAt: string;
}> {
// 0 = unlimited (feature disabled)
if (this.maxDaily <= 0) {
return { allowed: true, current: 0, limit: 0, resetsAt: '' };
}
const todayCount = await this.getTodayCount();
const midnight = new Date();
midnight.setHours(24, 0, 0, 0);
return {
allowed: todayCount < this.maxDaily,
current: todayCount,
limit: this.maxDaily,
resetsAt: midnight.toISOString(),
};
}
/** Count registrations today (UTC) */
private async getTodayCount(): Promise<number> {
try {
const result = await this.db.execute(
sql`SELECT COUNT(*) as count
FROM auth.security_events
WHERE event_type = 'REGISTER'
AND created_at >= CURRENT_DATE
AND created_at < CURRENT_DATE + INTERVAL '1 day'`
);
const row = (result as any)[0];
return row ? Number(row.count) : 0;
} catch {
// On error, allow registration (fail open)
return 0;
}
}
/** Public status for the signup page */
async getStatus(): Promise<{
registrationOpen: boolean;
spotsRemaining: number | null;
totalToday: number;
limit: number;
resetsAt: string;
}> {
if (this.maxDaily <= 0) {
return {
registrationOpen: true,
spotsRemaining: null,
totalToday: 0,
limit: 0,
resetsAt: '',
};
}
const todayCount = await this.getTodayCount();
const midnight = new Date();
midnight.setHours(24, 0, 0, 0);
return {
registrationOpen: todayCount < this.maxDaily,
spotsRemaining: Math.max(0, this.maxDaily - todayCount),
totalToday: todayCount,
limit: this.maxDaily,
resetsAt: midnight.toISOString(),
};
}
}
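The `resetsAt` computation in `checkLimit`/`getStatus` leans on a Date quirk worth making explicit: `setHours(24, 0, 0, 0)` overflows the hours field and rolls the Date forward to 00:00 of the next local day. A standalone illustration with a fixed local timestamp:

```typescript
// No timezone suffix → parsed as local time, matching setHours semantics.
const midnight = new Date('2026-05-08T18:53:56');
midnight.setHours(24, 0, 0, 0);
console.log(midnight.getDate(), midnight.getHours()); // 9 0  (May 9th, 00:00 local)
```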


@@ -1,582 +0,0 @@
/**
* User Data Aggregation Service
*
* Aggregates user data from auth DB, mana-credits, and mana-sync
* for GDPR self-service (/me) and admin endpoints.
*/
import { eq, sql, and, count, isNull, desc, ilike, or } from 'drizzle-orm';
import type { Database } from '../db/connection';
import type { Config } from '../config';
import {
users,
sessions,
accounts,
twoFactorAuth,
passkeys,
securityEvents,
} from '../db/schema/auth';
import { apiKeys } from '../db/schema/api-keys';
import postgres from 'postgres';
// ─── Types ─────────────────────────────────────────────────
export interface UserInfo {
id: string;
email: string;
name: string;
role: string;
createdAt: string;
emailVerified: boolean;
}
export interface AuthDataSummary {
sessionsCount: number;
accountsCount: number;
has2FA: boolean;
lastLoginAt: string | null;
}
export interface CreditsDataSummary {
balance: number;
totalEarned: number;
totalSpent: number;
transactionsCount: number;
}
export interface EntityCount {
entity: string;
count: number;
label: string;
}
export interface ProjectDataSummary {
projectId: string;
projectName: string;
icon: string;
available: boolean;
error?: string;
entities: EntityCount[];
totalCount: number;
lastActivityAt?: string;
}
export interface UserDataSummary {
user: UserInfo;
auth: AuthDataSummary;
credits: CreditsDataSummary;
projects: ProjectDataSummary[];
totals: {
totalEntities: number;
projectsWithData: number;
};
}
export interface ProjectDeleteResult {
projectId: string;
projectName: string;
success: boolean;
deletedCount?: number;
error?: string;
}
export interface DeleteUserDataResponse {
success: boolean;
deletedFromProjects: ProjectDeleteResult[];
deletedFromAuth: {
sessions: number;
accounts: number;
credits: number;
user: boolean;
};
totalDeleted: number;
}
export interface UserListItem {
id: string;
email: string;
name: string;
role: string;
createdAt: string;
lastActiveAt?: string;
}
export interface UserListResponse {
users: UserListItem[];
total: number;
page: number;
limit: number;
}
// ─── Project Metadata ──────────────────────────────────────
const PROJECT_META: Record<string, { name: string; icon: string }> = {
todo: { name: 'Todo', icon: '✅' },
chat: { name: 'ManaChat', icon: '💬' },
calendar: { name: 'Kalender', icon: '📅' },
clock: { name: 'Clock', icon: '⏰' },
contacts: { name: 'Kontakte', icon: '👤' },
cards: { name: 'Cards', icon: '🃏' },
picture: { name: 'ManaPicture', icon: '🎨' },
quotes: { name: 'Quotes', icon: '✨' },
presi: { name: 'Presi', icon: '📊' },
inventory: { name: 'Inventar', icon: '📦' },
food: { name: 'Food', icon: '🥗' },
plants: { name: 'Plants', icon: '🌱' },
storage: { name: 'Storage', icon: '☁️' },
questions: { name: 'Questions', icon: '❓' },
music: { name: 'Music', icon: '🎵' },
context: { name: 'Context', icon: '📄' },
photos: { name: 'Photos', icon: '📷' },
skilltree: { name: 'SkillTree', icon: '🌳' },
citycorners: { name: 'CityCorners', icon: '🏙️' },
times: { name: 'Taktik', icon: '⏱️' },
uload: { name: 'uLoad', icon: '🔗' },
calc: { name: 'Calc', icon: '🧮' },
mana: { name: 'Mana', icon: '💎' },
};
/** Convert camelCase/snake_case table name to readable label */
function tableNameToLabel(name: string): string {
return name
.replace(/([A-Z])/g, ' $1')
.replace(/_/g, ' ')
.replace(/^\w/, (c) => c.toUpperCase())
.trim();
}
// ─── Service ───────────────────────────────────────────────
export class UserDataService {
private syncSql: ReturnType<typeof postgres> | null = null;
constructor(
private db: Database,
private config: Config
) {}
private getSyncSql() {
if (!this.syncSql) {
this.syncSql = postgres(this.config.syncDatabaseUrl, { max: 5 });
}
return this.syncSql;
}
// ─── User Info ───────────────────────────────────────────
async getUserInfo(userId: string): Promise<UserInfo | null> {
const [user] = await this.db
.select({
id: users.id,
email: users.email,
name: users.name,
role: users.role,
createdAt: users.createdAt,
emailVerified: users.emailVerified,
})
.from(users)
.where(eq(users.id, userId))
.limit(1);
if (!user) return null;
return {
...user,
createdAt: user.createdAt.toISOString(),
};
}
// ─── Auth Data ───────────────────────────────────────────
async getAuthData(userId: string): Promise<AuthDataSummary> {
const [sessionsResult, accountsResult, twoFaResult, lastSession] = await Promise.all([
this.db
.select({ count: count() })
.from(sessions)
.where(and(eq(sessions.userId, userId), isNull(sessions.revokedAt))),
this.db.select({ count: count() }).from(accounts).where(eq(accounts.userId, userId)),
this.db
.select({ enabled: twoFactorAuth.enabled })
.from(twoFactorAuth)
.where(eq(twoFactorAuth.userId, userId))
.limit(1),
this.db
.select({ lastActivity: sessions.lastActivityAt })
.from(sessions)
.where(eq(sessions.userId, userId))
.orderBy(desc(sessions.lastActivityAt))
.limit(1),
]);
return {
sessionsCount: sessionsResult[0]?.count ?? 0,
accountsCount: accountsResult[0]?.count ?? 0,
has2FA: twoFaResult[0]?.enabled ?? false,
lastLoginAt: lastSession[0]?.lastActivity?.toISOString() ?? null,
};
}
// ─── Credits Data ────────────────────────────────────────
async getCreditsData(userId: string): Promise<CreditsDataSummary> {
try {
const res = await fetch(
`${this.config.manaCreditsUrl}/api/v1/internal/credits/balance/${userId}`,
{ headers: { 'X-Service-Key': this.config.serviceKey } }
);
if (!res.ok) {
return { balance: 0, totalEarned: 0, totalSpent: 0, transactionsCount: 0 };
}
const data = (await res.json()) as {
balance?: number;
totalEarned?: number;
totalSpent?: number;
transactionsCount?: number;
};
return {
balance: data.balance ?? 0,
totalEarned: data.totalEarned ?? 0,
totalSpent: data.totalSpent ?? 0,
transactionsCount: data.transactionsCount ?? 0,
};
} catch {
return { balance: 0, totalEarned: 0, totalSpent: 0, transactionsCount: 0 };
}
}
// ─── Project Data (from mana-sync) ───────────────────────
async getProjectData(userId: string): Promise<ProjectDataSummary[]> {
try {
const syncSql = this.getSyncSql();
// Get entity counts per app/table (latest state, excluding deleted)
const entityCounts = await syncSql`
SELECT app_id, table_name, COUNT(*) as count
FROM (
SELECT DISTINCT ON (app_id, table_name, record_id)
app_id, table_name, record_id, op
FROM sync_changes
WHERE user_id = ${userId}
ORDER BY app_id, table_name, record_id, created_at DESC
) latest
WHERE op != 'delete'
GROUP BY app_id, table_name
ORDER BY app_id, table_name
`;
// Get last activity per app
const lastActivity = await syncSql`
SELECT app_id, MAX(created_at) as last_activity
FROM sync_changes
WHERE user_id = ${userId}
GROUP BY app_id
`;
const lastActivityMap = new Map<string, string>();
for (const row of lastActivity) {
lastActivityMap.set(row.app_id, new Date(row.last_activity).toISOString());
}
// Group by app
const appEntities = new Map<string, EntityCount[]>();
for (const row of entityCounts) {
const appId = row.app_id;
if (!appEntities.has(appId)) {
appEntities.set(appId, []);
}
appEntities.get(appId)!.push({
entity: row.table_name,
count: Number(row.count),
label: tableNameToLabel(row.table_name),
});
}
// Build project summaries for all known projects
const projects: ProjectDataSummary[] = [];
for (const [projectId, meta] of Object.entries(PROJECT_META)) {
const entities = appEntities.get(projectId) || [];
const totalCount = entities.reduce((sum, e) => sum + e.count, 0);
projects.push({
projectId,
projectName: meta.name,
icon: meta.icon,
available: true,
entities,
totalCount,
lastActivityAt: lastActivityMap.get(projectId),
});
}
// Add any unknown apps from sync data
for (const [appId, entities] of appEntities) {
if (!PROJECT_META[appId]) {
const totalCount = entities.reduce((sum, e) => sum + e.count, 0);
projects.push({
projectId: appId,
projectName: appId,
icon: '📁',
available: true,
entities,
totalCount,
lastActivityAt: lastActivityMap.get(appId),
});
}
}
return projects;
} catch (err) {
// If sync DB is unavailable, return all projects as unavailable
return Object.entries(PROJECT_META).map(([projectId, meta]) => ({
projectId,
projectName: meta.name,
icon: meta.icon,
available: false,
error: 'Sync-Datenbank nicht erreichbar',
entities: [],
totalCount: 0,
}));
}
}
// ─── Full Summary ────────────────────────────────────────
async getUserDataSummary(userId: string): Promise<UserDataSummary | null> {
const userInfo = await this.getUserInfo(userId);
if (!userInfo) return null;
const [auth, credits, projects] = await Promise.all([
this.getAuthData(userId),
this.getCreditsData(userId),
this.getProjectData(userId),
]);
const totalEntities = projects.reduce((sum, p) => sum + p.totalCount, 0);
const projectsWithData = projects.filter((p) => p.totalCount > 0).length;
return {
user: userInfo,
auth,
credits,
projects,
totals: { totalEntities, projectsWithData },
};
}
// ─── Export ──────────────────────────────────────────────
async exportUserData(userId: string) {
const summary = await this.getUserDataSummary(userId);
if (!summary) return null;
// Also fetch detailed auth data for export
const [userSessions, userPasskeys, userApiKeys, userSecurityEvents] = await Promise.all([
this.db
.select({
id: sessions.id,
createdAt: sessions.createdAt,
expiresAt: sessions.expiresAt,
ipAddress: sessions.ipAddress,
deviceName: sessions.deviceName,
lastActivityAt: sessions.lastActivityAt,
revokedAt: sessions.revokedAt,
})
.from(sessions)
.where(eq(sessions.userId, userId)),
this.db
.select({
id: passkeys.id,
// Renamed from friendlyName in the passkey-bootstrap migration.
// Alias back to `friendlyName` here so the GDPR export contract
// with the client stays stable.
friendlyName: passkeys.name,
deviceType: passkeys.deviceType,
createdAt: passkeys.createdAt,
lastUsedAt: passkeys.lastUsedAt,
})
.from(passkeys)
.where(eq(passkeys.userId, userId)),
this.db
.select({
id: apiKeys.id,
name: apiKeys.name,
keyPrefix: apiKeys.keyPrefix,
scopes: apiKeys.scopes,
createdAt: apiKeys.createdAt,
lastUsedAt: apiKeys.lastUsedAt,
revokedAt: apiKeys.revokedAt,
})
.from(apiKeys)
.where(eq(apiKeys.userId, userId)),
this.db
.select({
eventType: securityEvents.eventType,
ipAddress: securityEvents.ipAddress,
createdAt: securityEvents.createdAt,
})
.from(securityEvents)
.where(eq(securityEvents.userId, userId))
.orderBy(desc(securityEvents.createdAt))
.limit(200),
]);
return {
exportedAt: new Date().toISOString(),
exportVersion: '2.0',
data: summary,
details: {
sessions: userSessions,
passkeys: userPasskeys,
apiKeys: userApiKeys,
securityEvents: userSecurityEvents,
},
};
}
// ─── Delete ──────────────────────────────────────────────
async deleteUserData(userId: string, userEmail: string): Promise<DeleteUserDataResponse> {
const deletedFromProjects: ProjectDeleteResult[] = [];
let totalDeleted = 0;
// 1. Delete sync data
try {
const syncSql = this.getSyncSql();
const result = await syncSql`
DELETE FROM sync_changes WHERE user_id = ${userId}
`;
const deletedCount = result.count;
totalDeleted += deletedCount;
deletedFromProjects.push({
projectId: 'sync',
projectName: 'Sync-Daten',
success: true,
deletedCount,
});
} catch (err) {
deletedFromProjects.push({
projectId: 'sync',
projectName: 'Sync-Daten',
success: false,
error: 'Sync-Datenbank nicht erreichbar',
});
}
// 2. Delete credits data
let creditsDeleted = 0;
try {
const res = await fetch(
`${this.config.manaCreditsUrl}/api/v1/internal/credits/balance/${userId}`,
{
method: 'DELETE',
headers: { 'X-Service-Key': this.config.serviceKey },
}
);
if (res.ok) {
const data = (await res.json()) as { deletedCount?: number };
creditsDeleted = data.deletedCount ?? 0;
}
} catch {
// Credits deletion is best-effort
}
// 3. Count auth records before deletion
const [sessionsCount, accountsCount] = await Promise.all([
this.db.select({ count: count() }).from(sessions).where(eq(sessions.userId, userId)),
this.db.select({ count: count() }).from(accounts).where(eq(accounts.userId, userId)),
]);
const deletedSessions = sessionsCount[0]?.count ?? 0;
const deletedAccounts = accountsCount[0]?.count ?? 0;
totalDeleted += deletedSessions + deletedAccounts + creditsDeleted;
// 4. Delete user (cascades sessions, accounts, passkeys, api keys, etc.)
await this.db.delete(users).where(eq(users.id, userId));
totalDeleted += 1; // the user record itself
return {
success: true,
deletedFromProjects,
deletedFromAuth: {
sessions: deletedSessions,
accounts: deletedAccounts,
credits: creditsDeleted,
user: true,
},
totalDeleted,
};
}
// ─── User List (Admin) ───────────────────────────────────
async listUsers(
page: number = 1,
limit: number = 20,
search?: string
): Promise<UserListResponse> {
const offset = (page - 1) * limit;
// Count total
let totalQuery = this.db.select({ count: count() }).from(users);
if (search) {
totalQuery = totalQuery.where(
or(ilike(users.email, `%${search}%`), ilike(users.name, `%${search}%`))
) as typeof totalQuery;
}
const [{ count: total }] = await totalQuery;
// Fetch page with last activity
let query = this.db
.select({
id: users.id,
email: users.email,
name: users.name,
role: users.role,
createdAt: users.createdAt,
})
.from(users);
if (search) {
query = query.where(
or(ilike(users.email, `%${search}%`), ilike(users.name, `%${search}%`))
) as typeof query;
}
const rows = await query.orderBy(desc(users.createdAt)).limit(limit).offset(offset);
// Get last activity for these users
const userIds = rows.map((r) => r.id);
    const lastActivities =
      userIds.length > 0
        ? await this.db
            .select({
              userId: sessions.userId,
              lastActivity: sql<Date>`MAX(${sessions.lastActivityAt})`.as('last_activity'),
            })
            .from(sessions)
            // Raw `IN ${array}` is not expanded by the sql template tag —
            // inArray (from drizzle-orm, imported alongside eq/desc) builds
            // the parameterized IN list correctly.
            .where(inArray(sessions.userId, userIds))
            .groupBy(sessions.userId)
        : [];
const activityMap = new Map(lastActivities.map((a) => [a.userId, a.lastActivity]));
return {
users: rows.map((r) => ({
id: r.id,
email: r.email,
name: r.name,
role: r.role,
createdAt: r.createdAt.toISOString(),
lastActiveAt: activityMap.get(r.id)?.toISOString(),
})),
total,
page,
limit,
};
}
}
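
The `DISTINCT ON` query near the top of this service reduces the `sync_changes` changelog to one row per (app, table, record) and drops records whose latest op is a delete. A dependency-free sketch of that same reduction — field names here are illustrative, not the real row shape:

```typescript
// One change-log row, mirroring the columns the entityCounts SQL selects.
type Change = { appId: string; table: string; recordId: string; op: 'put' | 'delete'; at: number };

// Keep only the newest op per (appId, table, recordId), drop deletes,
// then count surviving records per (appId, table) — the same shape the
// GROUP BY in the SQL produces.
function countLiveEntities(changes: Change[]): Map<string, number> {
  const latest = new Map<string, Change>();
  for (const c of changes) {
    const key = `${c.appId}/${c.table}/${c.recordId}`;
    const prev = latest.get(key);
    if (!prev || c.at > prev.at) latest.set(key, c);
  }
  const counts = new Map<string, number>();
  for (const c of latest.values()) {
    if (c.op === 'delete') continue;
    const key = `${c.appId}/${c.table}`;
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return counts;
}
```

A record that was created and later deleted contributes nothing; repeated puts to the same record count once.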

@ -1,23 +0,0 @@
/**
* Spaces multi-tenancy helpers for mana-auth.
*
* The canonical SpaceType + allowlist lives in @mana/shared-types. This
* barrel adds auth-side concerns: Better Auth hook helpers for validating
* organization metadata, and (future) slug generation for personal spaces.
*
* See docs/plans/spaces-foundation.md.
*/
export {
assertValidSpaceMetadataForCreate,
assertSpaceIsDeletable,
buildSpaceMetadata,
} from './metadata';
export {
createPersonalSpaceFor,
candidateSlugFromEmail,
resolveUniqueSlug,
dbSlugTaken,
type SlugTakenLookup,
} from './personal-space';

@ -1,84 +0,0 @@
/**
* Tests for Space-metadata validation used by Better Auth organization hooks.
*/
import { describe, it, expect } from 'bun:test';
import {
assertValidSpaceMetadataForCreate,
assertSpaceIsDeletable,
buildSpaceMetadata,
} from './metadata';
describe('assertValidSpaceMetadataForCreate', () => {
it('accepts metadata with every valid SpaceType', () => {
for (const type of ['personal', 'brand', 'club', 'family', 'team', 'practice'] as const) {
const parsed = assertValidSpaceMetadataForCreate({ type });
expect(parsed.type).toBe(type);
}
});
it('preserves extra metadata fields', () => {
const parsed = assertValidSpaceMetadataForCreate({
type: 'brand',
voiceDoc: 'hello',
uid: 'CH-123',
});
expect(parsed.voiceDoc).toBe('hello');
expect(parsed.uid).toBe('CH-123');
});
it('rejects missing metadata', () => {
expect(() => assertValidSpaceMetadataForCreate(null)).toThrow(/type/i);
expect(() => assertValidSpaceMetadataForCreate(undefined)).toThrow(/type/i);
});
it('rejects missing type field', () => {
expect(() => assertValidSpaceMetadataForCreate({})).toThrow(/type/i);
expect(() => assertValidSpaceMetadataForCreate({ name: 'Edisconet' })).toThrow(/type/i);
});
it('rejects unknown SpaceType values', () => {
expect(() => assertValidSpaceMetadataForCreate({ type: 'corporate' })).toThrow(/type/i);
expect(() => assertValidSpaceMetadataForCreate({ type: 'PERSONAL' })).toThrow(/type/i);
});
});
describe('assertSpaceIsDeletable', () => {
it('blocks deletion of personal spaces', () => {
expect(() => assertSpaceIsDeletable({ type: 'personal' })).toThrow(
/personal space cannot be deleted/i
);
});
it('allows deletion of other space types', () => {
for (const type of ['brand', 'club', 'family', 'team', 'practice'] as const) {
expect(() => assertSpaceIsDeletable({ type })).not.toThrow();
}
});
it('allows deletion when metadata is malformed (fail-open by design)', () => {
// If metadata is missing or invalid, we don't block — the delete endpoint
// enforces other permission checks (owner role, etc.) and we only want to
// guard the personal-space special case.
expect(() => assertSpaceIsDeletable(null)).not.toThrow();
expect(() => assertSpaceIsDeletable({})).not.toThrow();
expect(() => assertSpaceIsDeletable({ type: 'unknown' })).not.toThrow();
});
});
describe('buildSpaceMetadata', () => {
it('returns a metadata blob with the given type', () => {
expect(buildSpaceMetadata('club').type).toBe('club');
});
it('merges extra fields after the type', () => {
const meta = buildSpaceMetadata('brand', { voiceDoc: 'X', uid: 'Y' });
expect(meta).toEqual({ type: 'brand', voiceDoc: 'X', uid: 'Y' });
});
it('lets explicit type win even if extras try to override', () => {
// Extra is typed to exclude `type`, but at runtime someone could try.
const meta = buildSpaceMetadata('brand', { voiceDoc: 'X' } as Record<string, unknown>);
expect(meta.type).toBe('brand');
});
});

@ -1,68 +0,0 @@
/**
* Space metadata validation for Better Auth organization hooks.
*
* Every Better Auth organization in Mana must carry a `metadata.type` field
* that identifies the Space type (personal/brand/club/family/team/practice).
* This module enforces that contract at the plugin-hook layer.
*
* See docs/plans/spaces-foundation.md.
*/
import { APIError } from 'better-auth/api';
import {
SPACE_TYPES,
isSpaceType,
parseSpaceMetadata,
type SpaceMetadata,
type SpaceType,
} from '@mana/shared-types';
/**
* Validate the metadata blob that will be persisted for a new organization.
* Throws a Better Auth `APIError` (BAD_REQUEST) if the shape is invalid.
*
* Intended for `organizationHooks.beforeCreateOrganization`.
*/
export function assertValidSpaceMetadataForCreate(raw: unknown): SpaceMetadata {
const parsed = parseSpaceMetadata(raw);
if (!parsed) {
throw new APIError('BAD_REQUEST', {
message: `Organization metadata must include a valid "type" field. Expected one of: ${SPACE_TYPES.join(', ')}.`,
code: 'SPACE_METADATA_INVALID',
});
}
return parsed;
}
/**
* Guard a delete call against removing the user's personal space.
 * Better Auth will still allow admins/owners to delete other spaces; we only
* protect the auto-created personal one, because losing it would orphan all
* the user's private data.
*
* Intended for `organizationHooks.beforeDeleteOrganization`.
*/
export function assertSpaceIsDeletable(metadata: unknown): void {
const parsed = parseSpaceMetadata(metadata);
if (parsed?.type === 'personal') {
throw new APIError('FORBIDDEN', {
message: 'The personal space cannot be deleted. Delete the user account instead.',
code: 'SPACE_PERSONAL_UNDELETABLE',
});
}
}
/**
* Build a metadata blob for a freshly-created space of a given type. Used by
* the signup-time personal-space auto-creator and by any future UI that
* creates spaces of other types.
*/
export function buildSpaceMetadata(
type: SpaceType,
extra: Omit<SpaceMetadata, 'type'> = {}
): SpaceMetadata {
if (!isSpaceType(type)) {
throw new Error(`Invalid SpaceType: ${String(type)}`);
}
return { ...extra, type };
}
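
The two guards above are meant to be registered on the organization plugin; the doc comments name the hook slots. A sketch of that wiring — the callback payload shape is an assumption, and the real config lives in `better-auth.config.ts`:

```typescript
// Sketch only — hook slot names come from the doc comments above;
// the exact callback signature is assumed, not confirmed here.
organization({
  organizationHooks: {
    beforeCreateOrganization: async ({ organization }) => {
      assertValidSpaceMetadataForCreate(organization.metadata);
    },
    beforeDeleteOrganization: async ({ organization }) => {
      assertSpaceIsDeletable(organization.metadata);
    },
  },
});
```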

@ -1,74 +0,0 @@
/**
* Tests for personal-space slug derivation and uniqueness resolution.
*
* createPersonalSpaceFor is covered by an integration test (DB-backed)
 * once that harness exists; here we pin down the pure string logic and
* the slug-collision loop.
*/
import { describe, it, expect } from 'bun:test';
import { candidateSlugFromEmail, resolveUniqueSlug, type SlugTakenLookup } from './personal-space';
describe('candidateSlugFromEmail', () => {
it('takes the local part and lowercases it', () => {
expect(candidateSlugFromEmail('Till@memoro.ai')).toBe('till');
expect(candidateSlugFromEmail('Foo.Bar@X.de')).toBe('foo-bar');
});
it('strips non-alphanumerics and collapses dashes', () => {
expect(candidateSlugFromEmail('a..b+c@x.de')).toBe('a-b-c');
});
it('trims leading/trailing dashes', () => {
expect(candidateSlugFromEmail('--till--@x.de')).toBe('till');
});
it('caps at 30 characters', () => {
const long = 'a'.repeat(60) + '@x.de';
const slug = candidateSlugFromEmail(long);
expect(slug.length).toBeLessThanOrEqual(30);
});
it('falls back to a random slug when stripping empties the string', () => {
expect(candidateSlugFromEmail('_____@x.de')).toMatch(/^user-[a-z0-9]{6}$/);
});
it('falls back when local-part contains only whitespace', () => {
expect(candidateSlugFromEmail(' @x.de')).toMatch(/^user-[a-z0-9]{6}$/);
});
it('preserves digits', () => {
expect(candidateSlugFromEmail('user42@x.de')).toBe('user42');
});
});
function lookupFor(taken: string[]): SlugTakenLookup {
const set = new Set(taken);
return async (slug) => set.has(slug);
}
describe('resolveUniqueSlug', () => {
it('returns the base slug when free', async () => {
expect(await resolveUniqueSlug('till', lookupFor([]))).toBe('till');
});
it('appends -2 on single collision', async () => {
expect(await resolveUniqueSlug('till', lookupFor(['till']))).toBe('till-2');
});
it('walks through suffixes until free', async () => {
expect(await resolveUniqueSlug('till', lookupFor(['till', 'till-2', 'till-3']))).toBe('till-4');
});
it('skips reserved slugs even when DB says free', async () => {
expect(await resolveUniqueSlug('admin', lookupFor([]))).toBe('admin-2');
expect(await resolveUniqueSlug('api', lookupFor([]))).toBe('api-2');
expect(await resolveUniqueSlug('me', lookupFor([]))).toBe('me-2');
});
it('does NOT skip non-reserved slugs that happen to contain reserved words', async () => {
// We only match the exact reserved set; `admins`, `apikey`, `myself` are fine.
expect(await resolveUniqueSlug('admins', lookupFor([]))).toBe('admins');
expect(await resolveUniqueSlug('myself', lookupFor([]))).toBe('myself');
});
});

@ -1,161 +0,0 @@
/**
* Personal-Space auto-creation.
*
 * Every user gets a Space of type `personal` at signup: their private
* default context for modules like mood, dreams, sleep, etc. This module
* implements the creation logic and the slug-collision handling it needs.
*
* Called from `databaseHooks.user.create.after` in better-auth.config.ts.
* If creation fails (e.g. a DB error), the hook propagates the error and
 * the signup fails; orphan users without a personal space would be a
* worse failure mode than a retry-able signup error.
*/
import { and, eq } from 'drizzle-orm';
import { nanoid } from 'nanoid';
import { isSpaceTier, type SpaceTier } from '@mana/shared-types';
import { organizations, members } from '../db/schema/organizations';
import type { Database } from '../db/connection';
import { buildSpaceMetadata } from './metadata';
/** Max suffix we try before giving up on collision resolution. */
const MAX_SLUG_SUFFIX = 999;
/** Slugs we never hand out — reserved for system routes or future use. */
const RESERVED_SLUGS = new Set([
'me',
'admin',
'api',
'auth',
'login',
'logout',
'signup',
'signin',
'register',
'settings',
'new',
'app',
'www',
'support',
'help',
'billing',
'invite',
]);
/**
* Turn an email local-part (or any free-form input) into a slug candidate.
* Lowercase, alphanumerics + hyphens only, max 30 chars.
*/
export function candidateSlugFromEmail(email: string): string {
  const localPart = email.split('@', 1)[0] ?? '';
  const slug = localPart
    .toLowerCase()
    .replace(/[^a-z0-9-]+/g, '-')
    .replace(/-+/g, '-')
    .replace(/^-|-$/g, '')
    .slice(0, 30);
  if (slug) return slug;
  // If stripping left nothing, fall back to a short random string so the
  // caller always gets a non-empty base to work from. nanoid's default
  // alphabet also contains '-' and '_', which would violate the documented
  // slug charset, so keep only lowercase alphanumerics and pad to 6 chars.
  const random = nanoid(16).toLowerCase().replace(/[^a-z0-9]/g, '').padEnd(6, '0').slice(0, 6);
  return `user-${random}`;
}
/** Lookup function: returns true iff the given slug is already taken. */
export type SlugTakenLookup = (slug: string) => Promise<boolean>;
/**
 * Find a free slug by appending `-2`, `-3`, and so on, if the base is taken or
* reserved. Gives up after MAX_SLUG_SUFFIX attempts and falls back to a
 * random suffix; in practice, collision at that scale means something's
* wrong with the base generator, not real contention.
*
* Takes an injectable `isSlugTaken` function so unit tests don't need a
* DB. Production code uses `dbSlugTaken(db)` (below) as the adapter.
*/
export async function resolveUniqueSlug(
base: string,
isSlugTaken: SlugTakenLookup
): Promise<string> {
const isFree = async (slug: string): Promise<boolean> => {
if (RESERVED_SLUGS.has(slug)) return false;
return !(await isSlugTaken(slug));
};
if (await isFree(base)) return base;
for (let i = 2; i <= MAX_SLUG_SUFFIX; i++) {
const candidate = `${base}-${i}`;
if (await isFree(candidate)) return candidate;
}
  // Defensive fallback — should never be reached under realistic load.
  // Strip nanoid's '-'/'_' characters so the suffix stays within the
  // slug charset (lowercase alphanumerics + hyphen separator).
  return `${base}-${nanoid(12).toLowerCase().replace(/[^a-z0-9]/g, '').slice(0, 6)}`;
}
/** Adapter: turns a Drizzle db into a SlugTakenLookup. */
export function dbSlugTaken(db: Database): SlugTakenLookup {
return async (slug) => {
const existing = await db
.select({ id: organizations.id })
.from(organizations)
.where(eq(organizations.slug, slug))
.limit(1);
return existing.length > 0;
};
}
/**
* Create the personal space for a freshly-registered user.
*
* Idempotent: if the user already owns a space of type `personal`, returns
* its id without creating a second one. Protects against accidental retry
* in the auth signup flow.
*/
export async function createPersonalSpaceFor(
db: Database,
user: { id: string; email: string; name?: string | null; accessTier?: string | null }
): Promise<{ organizationId: string; slug: string; created: boolean }> {
// Idempotency guard — check for existing personal space via member join.
const existing = await db
.select({ orgId: organizations.id, slug: organizations.slug, metadata: organizations.metadata })
.from(organizations)
.innerJoin(members, eq(members.organizationId, organizations.id))
.where(eq(members.userId, user.id));
for (const row of existing) {
const meta = row.metadata as { type?: string } | null;
if (meta?.type === 'personal') {
return { organizationId: row.orgId, slug: row.slug ?? '', created: false };
}
}
const base = candidateSlugFromEmail(user.email);
const slug = await resolveUniqueSlug(base, dbSlugTaken(db));
const orgId = nanoid();
const memberId = nanoid();
const displayName = user.name?.trim() || user.email.split('@', 1)[0] || 'Personal';
// Carry the user's existing access tier onto the personal Space so
// the user→space tier migration doesn't downgrade anyone. A founder
// account setting up their first space stays at founder in that
// space. Invalid or missing values default to 'public' — matches the
// Better Auth user.accessTier default.
const inheritedTier: SpaceTier = isSpaceTier(user.accessTier) ? user.accessTier : 'public';
await db.transaction(async (tx) => {
await tx.insert(organizations).values({
id: orgId,
name: displayName,
slug,
metadata: buildSpaceMetadata('personal', { tier: inheritedTier }),
logo: null,
});
await tx.insert(members).values({
id: memberId,
organizationId: orgId,
userId: user.id,
role: 'owner',
});
});
return { organizationId: orgId, slug, created: true };
}
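
Taken together, `candidateSlugFromEmail` and `resolveUniqueSlug` give the end-to-end behavior below. This is a condensed, dependency-free restatement for illustration only — it uses a trimmed reserved set and a fixed fallback string instead of nanoid:

```typescript
// Condensed restatement of the slug pipeline above, for illustration.
const RESERVED = new Set(['me', 'admin', 'api']); // subset of the real list

function slugFromEmail(email: string): string {
  const local = email.split('@', 1)[0] ?? '';
  const slug = local
    .toLowerCase()
    .replace(/[^a-z0-9-]+/g, '-')
    .replace(/-+/g, '-')
    .replace(/^-|-$/g, '')
    .slice(0, 30);
  return slug || 'user-fallback'; // real code uses a random suffix
}

async function uniqueSlug(
  base: string,
  taken: (s: string) => Promise<boolean>
): Promise<string> {
  const free = async (s: string) => !RESERVED.has(s) && !(await taken(s));
  if (await free(base)) return base;
  for (let i = 2; i <= 999; i++) {
    if (await free(`${base}-${i}`)) return `${base}-${i}`;
  }
  return `${base}-fallback`; // real code appends a random suffix
}
```

So `Foo.Bar@X.de` normalizes to `foo-bar`, a collision on `till` resolves to `till-2`, and a reserved base like `me` starts at `me-2` even when the DB reports it free.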

@ -1,13 +0,0 @@
{
"compilerOptions": {
"target": "ESNext",
"module": "ESNext",
"moduleResolution": "bundler",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"outDir": "dist",
"rootDir": "src"
},
"include": ["src/**/*.ts"]
}