mirror of
https://github.com/Memo-2023/mana-monorepo.git
synced 2026-05-14 22:21:10 +02:00
2619 commits
| Author | SHA1 | Message | Date | |
|---|---|---|---|---|
|
|
b3523f8bdc |
chore: cleanup leftover dirs from ManaCore→Mana rename + document apps/api
Removed:
- apps/manacore/ — three Svelte files were byte-identical duplicates of the apps/mana/ versions, leftover from the 2025 rename. Untracked .env files in the same dir were also cleared.
- 21 empty apps/*/apps/web-archived/ directories — leftover from the unification move, never tracked in git.
- services/it-landing/ — empty directory, picked up by the services/* workspace glob for no reason.
- apps/news/apps/server-archived/ — empty.
Fixed:
- scripts/mac-mini/status.sh: the COMPOSE_PROJECT_NAME fallback was still manacore-monorepo from before the rename.
Documented:
- Root CLAUDE.md now describes apps/api/ (the @mana/api unified backend) as a top-level peer to apps/mana/. It was completely missing from the trimmed CLAUDE.md, which made the layout look frontend-only. |
||
|
|
ed8ab44832 |
feat(sync): conflict visualization with restore-my-version toast
Closes backlog C from the Phase 9 audit. The data layer has had
real field-level LWW since Sprint 1, but when the server's value
beat a local edit, the user had no way to know. This commit adds
the missing UI piece: a toast that appears whenever applyServerChanges
overwrites a non-empty local field with a strictly newer server
value, with a one-click "restore my version" path.
sync.ts — detection
-------------------
Two new exports:
- SyncConflictPayload: per-field overwrite event shape
(tableName, recordId, field, wasLocal, nowServer, localTime,
serverTime).
- subscribeSyncConflicts(listener): in-module pub/sub. Returns
an unsubscribe function.
Both LWW branches in applyServerChanges (insert-as-update and the
canonical update-with-fields path) now call notifyConflict() when:
1. The server time is STRICTLY greater than (not equal to) the
   local field time → there's actually an edit window to lose
2. The local field value is non-null/undefined → user actually
typed something to overwrite
3. The values are not equal (cheap JSON-string compare for objects,
=== for primitives) → there's a real change, not an idempotent
server replay
Why a custom registry instead of CustomEvent + window.dispatchEvent?
The existing sync-telemetry + quota-detect helpers use
window.dispatchEvent which doesn't work in node-based vitest envs
(no DOM EventTarget). The conflict bus is small enough that a plain
Set<listener> is simpler than polyfilling EventTarget — and the
node test path matters because we need automated coverage of the
detection logic.
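The registry plus the three-condition gate described above can be sketched as follows. This is a minimal illustration, not the actual sync.ts source: the payload field names follow the commit text, while the listener-set shape and the compare logic are assumptions drawn from the description.

```typescript
// Sketch of the conflict bus: a plain Set of listeners instead of
// CustomEvent/window.dispatchEvent, so it runs in node-based vitest envs.
// Field names follow the commit text; the rest is illustrative.
interface SyncConflictPayload {
  tableName: string;
  recordId: string;
  field: string;
  wasLocal: unknown;
  nowServer: unknown;
  localTime: number;
  serverTime: number;
}

type ConflictListener = (c: SyncConflictPayload) => void;
const conflictListeners = new Set<ConflictListener>();

function subscribeSyncConflicts(listener: ConflictListener): () => void {
  conflictListeners.add(listener);
  return () => {
    conflictListeners.delete(listener); // unsubscribe function
  };
}

// Called from both LWW branches in applyServerChanges.
function notifyConflict(c: SyncConflictPayload): void {
  const changed =
    typeof c.wasLocal === "object" && c.wasLocal !== null
      ? JSON.stringify(c.wasLocal) !== JSON.stringify(c.nowServer) // cheap object compare
      : c.wasLocal !== c.nowServer;                                 // === for primitives
  if (
    c.serverTime > c.localTime && // 1. strictly newer, not a tie
    c.wasLocal != null &&         // 2. there was a local edit to lose
    changed                       // 3. not an idempotent server replay
  ) {
    for (const l of conflictListeners) l(c);
  }
}
```

A subscriber that pushes payloads into an array is all the test helper needs, which is why no DOM polyfill is required.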
conflict-store.svelte.ts — UI state
-----------------------------------
Svelte 5 $state-backed store with three responsibilities:
1. Coalescing: a SyncConflict is keyed by `${tableName}|${recordId}`,
so a burst of N field-overwrites on the same record collapses
into ONE toast with all affected fields underneath. The original
wasLocal value is preserved across coalescing (we don't clobber
the user's first typed value if a later field event arrives).
2. Auto-dismiss: each conflict has a 30s TTL after which it
evicts itself. Manual dismiss trumps the timer.
3. Restore: writes wasLocal back to Dexie with a fresh updatedAt
that beats the server's serverTime, plus a __fieldTimestamps
patch so the field-LWW pass on the next sync round will let
our value win. Deferred via setTimeout(0) so it lands AFTER
applyServerChanges releases its per-table apply lock — running
before the lock release would silently drop the restore (the
hook suppression is per-table-set, not per-record).
FIFO eviction at MAX_VISIBLE=8 keeps a bursty server from growing
the visible array unbounded.
SyncConflictToast.svelte — the UI
---------------------------------
Mounts globally in +layout.svelte. Stacks bottom-right above the
OfflineIndicator. Each toast shows:
- Module label ("Aufgabe", "Notiz", "Termin", …) derived from a
table-name → German label map. Unknown tables fall through to
the bare table name.
- Field count summary ("Feld »title«" / "3 Felder") — we
deliberately do NOT render the actual values because some are
encrypted blobs and decrypting them in the toast would be
significant complexity for marginal UX gain. The user knows
what they were just editing.
- Two buttons: "Wiederherstellen" (calls conflictStore.restore)
and "Behalten" (calls dismiss).
Slide-in animation, dark-mode-aware styling, role="alertdialog"
for accessibility.
Wiring
------
data-layer-listeners.ts:
- Imports installConflictListener from conflict-store
- Calls it from installDataLayerListeners() right after the
quota + telemetry handlers
- Adds the disposeConflict() call to the cleanup return
+layout.svelte:
- Imports SyncConflictToast and mounts it next to SuggestionToast
so it inherits the same global-overlay positioning context
Tests
-----
Five new integration tests in sync.test.ts cover:
- Fires when server overwrites a non-empty local field with a
strictly newer value
- Does NOT fire when local field is null/undefined (no edit to lose)
- Does NOT fire when values are equal (idempotent replay)
- Fires once per overwritten field on a multi-field update
- Does NOT fire on a timestamp tie (LWW lets server win silently
when there's no real edit window)
All 25 sync tests + 138 total data-layer tests pass. The new
captureConflicts() helper subscribes via subscribeSyncConflicts()
which works in the node-vitest env without needing a DOM polyfill.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
fe3fc9e7e2 |
docs: trim CLAUDE.md files — remove stale + duplicated guidance
Root CLAUDE.md: 1138 → 169 lines.
- Removed ghost apps-archived list, Supabase env examples, duplicate mana-auth row, and the contradictory "Code Quality TODO" block.
- Pushed the search/storage/database/landing/manascore howtos out to docs/ + .claude/guidelines/ pointers.
apps/mana/CLAUDE.md: 259 → 175 lines.
- Dropped the non-existent workbench/ route from the routing diagram.
- Folded the auth section into a pointer to root + the mana-specific current-user stamping pattern.
- Merged the two module-system sections.
- Kept the data-flow ASCII diagram and the encryption 3-step workflow (the part you actually need while writing stores). |
||
|
|
b6486a8a46 |
fix(mana-video-gen): typo in get_model_info — total_mem → total_memory
PyTorch's `torch.cuda.get_device_properties(0)` returns a `_CudaDeviceProperties` object whose memory attribute is `total_memory` (bytes), not `total_mem`. The typo crashed the service immediately at startup because `get_model_info()` is called from the FastAPI lifespan handler, not lazily — uvicorn logged "Application startup failed" before any request could land.
Found while installing mana-video-gen on the Windows GPU box (192.168.178.11:3026) for the gpu-video.mana.how Cloudflare route. After the fix the service starts cleanly under the ManaVideoGen scheduled task and responds 200 on /health both on the LAN and via the Cloudflare tunnel. status.mana.how now reports 42/42 — first time ever.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
||
|
|
142a65a22f |
docs: Phase 9 documentation roundup — close encryption-shaped doc gaps
Five documentation surfaces gained encryption awareness in this
sweep. Before this commit, the only place anyone could learn about
the at-rest encryption layer or the zero-knowledge opt-in was the
internal DATA_LAYER_AUDIT.md. New contributors and self-hosters
would never discover one of the most important features of the
product just by reading the standard onboarding docs.
apps/docs/src/content/docs/architecture/security.mdx (NEW)
----------------------------------------------------------
First-class user-facing security page in the Starlight site,
slotted into the Architecture sidebar between Authentication and
Backend.
Sections:
- What's encrypted (overview table of 27 modules + the
intentional plaintext carve-outs)
- Standard mode flow with ASCII diagram
- "What Mana CAN see" trust statements per mode
- Zero-knowledge mode setup walkthrough (Steps component)
- Unlock flow on a new device
- Recovery code rotation
- Deployment requirements (the loud MANA_AUTH_KEK warning)
- Audit trail action vocabulary
- Threat model summary table
- Implementation file references with paths
services/mana-auth/CLAUDE.md
----------------------------
New "Encryption Vault" section under Key Endpoints, listing all 7
routes (status, init, key, rotate, recovery-wrap GET+DELETE,
zero-knowledge) with their HTTP method, path, error codes, and a
description. Mentions the three CHECK constraints + RLS + audit
table. Points readers at DATA_LAYER_AUDIT.md and the new
security.mdx for the deep dive.
Environment Variables block gains MANA_AUTH_KEK with a multi-line
comment explaining the openssl rand command + dev fallback warning.
apps/mana/CLAUDE.md
-------------------
Full rewrite. The existing file was from the Supabase era and
described things like @supabase/ssr, safeGetSession(), and a
five-table schema with users + organizations + teams that doesn't
exist any more. Replaced with the unified-app architecture:
- Module system layout (collections.ts / queries.ts / stores/)
- Mana Auth (Better Auth + EdDSA JWT) instead of Supabase
- Local-first data layer with the full pipeline diagram
- At-rest encryption section with the "when writing module code
that touches sensitive fields" 4-step guide
- Updated routing structure (no more separate /organizations,
/teams routes)
- Module store pattern code example
- Reference document table at the bottom pointing at the audit,
the new security.mdx, and the auth doc
Root CLAUDE.md
--------------
New "At-Rest Encryption (Phase 1–9)" subsection under the
Local-First Architecture section. Two-mode trust summary table,
production requirement for MANA_AUTH_KEK with the openssl command,
the "when writing module code" 4-step guide, and a reference
table. New contributors reading the root CLAUDE.md from top to
bottom now hit encryption naturally as part of the data layer
discussion.
.env.macmini.example
--------------------
MANA_AUTH_KEK was missing from the production env example
entirely — the macmini deployment would silently boot on the
32-zero-byte dev fallback if you copied this file. Added with a
multi-paragraph comment covering: how to generate, why it's
required, how to store securely (Docker secrets / KMS / Vault),
and the rotation caveat.
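The resulting env-example entry might look roughly like this. This is a sketch of the shape, not the actual file contents; the key encoding (`openssl rand -hex 32`) and placeholder value are assumptions, since the commit only says an openssl rand command is documented.

```sh
# MANA_AUTH_KEK — key-encryption key for the encryption vault.
# REQUIRED in production: without it the service silently falls back
# to the 32-zero-byte dev key. Generate your own value, e.g.:
#   openssl rand -hex 32
# Store it in Docker secrets / KMS / Vault, never in plain git.
# Rotation caveat: existing wraps must be re-wrapped after a change.
MANA_AUTH_KEK=replace-me-with-your-own-generated-value
```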
apps/docs/src/content/docs/deployment/self-hosting.mdx
------------------------------------------------------
Two changes:
1. Added MANA_AUTH_KEK to the mana-auth service block in the
Compose example with an inline comment pointing at the new
section below.
2. New "Encryption Vault Setup" H2 section with subsections:
- Generating a KEK (with a fake example value labelled DO NOT
USE — generate your own)
- Securing the KEK (Docker secrets, KMS, systemd
LoadCredential, anti-patterns)
- "What if I lose the KEK?" — explains the data is
unrecoverable by design and mitigation via zero-knowledge
mode opt-in
- KEK rotation — calls out the missing background re-wrap
job as a known limitation
apps/docs/astro.config.mjs
--------------------------
Added "Security & Encryption" entry to the Architecture sidebar
between Authentication and Backend so the new page is reachable
from the docs nav.
Astro check: 0 errors, 0 warnings, 0 hints across 4 .astro files.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
b961453244 |
docs(audit): roll up Phase 9 backlog sweep
Marks the four backlog items closed in this session — vault service
integration tests, recovery code rotation, pre-wired insert helpers
for future server-pushed records, and boards/boardItems encryption.
Updates the encrypted-tables list to 27 tables.
Updates
-------
1. Sprint table grows by 4 rows (BL1, BL2, BL3+4, BL5) with the
four backlog commits.
2. Test-Status line bumped:
21 web test files → 21 web + 2 mana-auth
78 vitest crypto tests + 39 bun mana-auth tests
"25+ tables" → "27 tables" (boards + boardItems added)
3. Section 5 encrypted-tables list grows by:
- boards (name, description)
- boardItems (textContent, only when itemType === 'text')
Both labelled "9 BL" in the Phase column to mark them as
backlog-sweep additions.
4. "Tabellen ohne Encryption (bewusst)" subsection: removed the
stale "boards/boardItems are a candidate for later" entry —
they're encrypted now. Added a redirect note pointing readers
at Section 6 where the actual decision is recorded.
5. Section 6 ("Backlog") completely restructured. The flat
"in priority order" list became two subsections:
"Abgeschlossen (Phase 9 Follow-Up Sweep)" — table with the four
commits + a one-line "what" notice each. Item 3+4 is explicitly
marked as a re-frame: the original "server pushes plaintext"
risk turned out to overstate the problem because the
generate/upload UIs are TODO stubs. The fix was pre-wired
insert() helpers, not a server-side rewrite.
"Offen" — five remaining items, reordered:
1. File-Bytes-Encryption (NEW: surfaced as "#4b" while
documenting that filesStore.insert() only protects metadata)
2. Image-Generation / File-Upload Wire-Up (NEW: ensures the
future UIs go through the helpers from #3+4)
3. Conflict Visualization UI (unchanged)
4. Composite Indexes für Multi-Account (unchanged)
5. V3 Migration Tests (unchanged)
6. Eckdaten line bumped from "25+ Tabellen aktiv" to "27 Tabellen
aktiv". Best Practices line for ZK gets the "+ rotate im
Active-State-Support" suffix.
7. Last-update header bumped to today.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
a7e5b39ad0 |
feat(picture): encrypt boards + boardItems
Closes backlog #5 from the Phase 9 audit. Adds two new registry entries (boards, boardItems) and wraps the boards store + queries + search provider so the moodboard names, descriptions and text-item content are sealed at rest like every other user-typed field.
Registry
--------
- boards: ['name', 'description']
- boardItems: ['textContent']
Inline comments explain that textContent is only set when itemType === 'text' (image-type items have it null; encryptRecord is a pass-through). Coordinates / dimensions / z-index / opacity stay plaintext for the canvas renderer.
Boards store
------------
- createBoard: snapshots plaintext for the return value before encryptRecord mutates the row in place
- updateBoard: encrypts the diff before update, then re-fetches + decrypts for the return value (so the caller gets plaintext, not the ciphertext we just wrote)
- duplicateBoard: NEW behaviour — explicitly decrypts the original board first because the duplicate concatenates "(Kopie)" onto the name string. Concatenating onto an "enc:1:..." prefix would produce a malformed blob that fails to decrypt later. The board items are spread directly because the duplicate uses the SAME master key, so the existing ciphertext stays valid; encryptRecord is idempotent on already-encrypted strings, so it's a no-op safety check.
Reads
-----
- useAllBoards: decrypts the visible board set before mapping. The item-count map only reads structural fields (deletedAt + boardId), so it doesn't need a decrypt pass for boardItems.
- allBoards$ raw observable: same pattern
- search/providers/picture: decrypts before substring scoring against the user query
The unified mana app currently has no UI that renders boardItems.textContent (the seed data in collections.ts is exported as PICTURE_GUEST_SEED but never imported anywhere — dead code), so no item-side reader needs touching for this commit. When a future canvas editor lands, it'll go through the existing decryptRecord helpers naturally.
78/78 crypto tests still pass (registry shape unchanged at the API level).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
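The decrypt-before-duplicate rule from the commit above can be shown in isolation. Here `sealName`/`openName` are stand-ins for the real encryptRecord/decryptRecord helpers (base64 instead of real crypto); only the "enc:1:" prefix and the "(Kopie)" suffix come from the commit text.

```typescript
// Why duplicateBoard must decrypt first: appending "(Kopie)" onto a
// ciphertext blob corrupts it. sealName/openName are illustrative
// stand-ins for encryptRecord/decryptRecord.
const sealName = (plain: string): string =>
  "enc:1:" + Buffer.from(plain, "utf8").toString("base64");
const openName = (stored: string): string =>
  stored.startsWith("enc:1:")
    ? Buffer.from(stored.slice("enc:1:".length), "base64").toString("utf8")
    : stored; // plaintext passes through (mixed-state tolerance)

function duplicateBoardName(storedName: string): string {
  // WRONG: `storedName + " (Kopie)"` would concatenate onto the
  // "enc:1:..." blob and fail to decrypt later.
  const plain = openName(storedName); // decrypt the original first
  return sealName(`${plain} (Kopie)`); // then encrypt the new name
}
```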
||
|
|
109de61e21 |
feat(picture,storage): pre-wired insert helpers for future generate/upload flows
Closes backlog #3+4 from the Phase 9 audit. The original framing —
"server-pushed records bypass client-side encryption" — turned out
to overstate the problem after a code audit:
- apps/mana/apps/web/src/routes/(app)/picture/generate/+page.svelte
is currently a TODO stub. The handleGenerate() function returns
"requires connection to Picture-Server (port 3006)" without
inserting anything.
- There is no fileTable.add() call site anywhere in the unified
mana app. File uploads still happen via the standalone storage
server in apps/storage and arrive via legacy mana-sync push.
So the production code path that would write plaintext images or
files to the user's IndexedDB doesn't yet exist. The risk only
materialises when someone wires up the in-app generate / upload
UI in the unified app.
The right action is to leave behind a clearly-labelled, encryption-
aware insert() helper on each store so the future implementation
has an obvious "do the right thing" path to call. This commit does
exactly that.
picture/stores/images.svelte.ts
-------------------------------
New imagesStore.insert(image: LocalImage) method:
- Calls encryptRecord('images', image) to seal `prompt` +
`negativePrompt` (the two registered encrypted fields)
- Calls imageTable().add(image)
- Fires the PictureEvents.imageCreated analytic (replaces the
old plain-table-add path)
A long doc comment on the method explains the architectural
reasoning: the server cannot encrypt under the user's master key
(the key only lives in the browser), so the generation flow MUST
round-trip through the client store even if the AI call itself
happens server-side. The pattern is documented as:
1. Client posts { prompt, negativePrompt, ... } to image-gen API
2. Server returns { storagePath, generationId, dimensions, ... }
3. Client calls imagesStore.insert(...) with both halves
4. encryptRecord seals the prompt fields before the IndexedDB write
The mixed-state guarantee from picture/queries.ts already covers
the migration window where some images came in via legacy
server-side push and others through this path — decryptRecord
passes plaintext through and unwraps ciphertext blobs.
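An in-memory sketch of the pre-wired helper, under stated assumptions: encryptRecord and the Dexie table are stubbed here (base64 sealing, a plain array), and only the sealed field names (prompt, negativePrompt) and the insert-then-add order come from the commit.

```typescript
// Sketch of imagesStore.insert(): seal registered fields, then write.
interface LocalImage {
  id: string;
  prompt: string;
  negativePrompt: string;
  storagePath: string; // structural metadata stays plaintext
}

const sealField = (s: string): string =>
  s.startsWith("enc:1:") ? s : "enc:1:" + Buffer.from(s, "utf8").toString("base64");

// Stand-in for encryptRecord('images', image): mutates the row in place.
function encryptImageRecord(image: LocalImage): LocalImage {
  image.prompt = sealField(image.prompt);
  image.negativePrompt = sealField(image.negativePrompt);
  return image;
}

const imageTable: LocalImage[] = []; // stand-in for the Dexie table

const imagesStore = {
  insert(image: LocalImage): void {
    encryptImageRecord(image); // seal prompt fields BEFORE the write
    imageTable.push(image);    // real code: imageTable().add(image)
    // real code also fires PictureEvents.imageCreated here
  },
};
```

The key property is that the server half of the flow only ever supplies storagePath and dimensions; the prompt never leaves the client unencrypted at rest.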
storage/stores/files.svelte.ts
------------------------------
New filesStore.insert(file: LocalFile) method:
- Calls encryptRecord('files', file) to seal `name` +
`originalName`
- Calls fileTable.add(file)
Same architectural reasoning applies. The doc comment also flags a
SEPARATE concern that this commit does NOT address: encrypting the
actual file *bytes* on S3 (so the storage provider can't read the
content) needs streaming AES-GCM and is a much bigger lift. Tracked
as "backlog #4b" in the comment for whoever picks it up next.
(No analytic call yet on the storage side because StorageEvents
doesn't have a fileUploaded() event — the upload UI is unbuilt, so
adding the analytic event is up to whoever lands the UI.)
Pre-existing TS error on line 46 of images.svelte.ts (the
`toggleField(imageTable(), ...)` Drizzle/Dexie type variance bug)
is unchanged — it predates Phase 9 and is not introduced by this
commit.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
05ae348b12 |
fix(macmini): blackbox-exporter uses 1.1.1.1/8.8.8.8 directly for DNS
Docker's embedded DNS resolver (127.0.0.11) forwards to the host resolver, which on the Mac Mini forwards to the home router's FRITZ!Box DNS. The router keeps a stale negative cache for hours after a hostname first fails, so any newly added Cloudflare CNAME (e.g. the GPU public hostnames recreated via the Cloudflare dashboard during the 2026-04-07 cleanup) appears as "no such host" to the blackbox probes for the entire negative-cache TTL — even though the hostname resolves fine via 1.1.1.1 directly the entire time.
Symptom before the fix:
health-check.sh (uses dig @1.1.1.1) → All services healthy ✅
status.mana.how (via blackbox/VM) → 4 GPU services down ❌
The two views contradicted each other in opposite directions — the public-facing status page reported four healthy services as down while the operator runbook reported them as up. Confusing, and exactly the kind of monitoring discrepancy a launch should not ship with.
Fix: pin the blackbox container to public DNS (Cloudflare + Google) in compose. Blackbox now resolves directly against 1.1.1.1, bypassing the home-router negative cache entirely. After the recreate, the four GPU probes flipped from probe_success=0 to probe_success=1 within one scrape interval, and status.mana.how went from 38/42 to 41/42 (only gpu-video remains down — LTX Video Gen is intentionally not deployed on the Windows GPU box yet).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
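The fix amounts to a small Compose change. A minimal sketch (the service name and image are assumptions; the top-level `dns:` key is standard Compose and sets the container's /etc/resolv.conf nameservers):

```yaml
services:
  blackbox-exporter:            # service name assumed
    image: prom/blackbox-exporter
    dns:                        # pin the container's resolver directly,
      - 1.1.1.1                 # bypassing Docker's 127.0.0.11 forwarder
      - 8.8.8.8                 # and the FRITZ!Box negative cache behind it
```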
||
|
|
24001e9545 |
feat(vault): rotate recovery code while zero-knowledge is active
Closes backlog #2 from the Phase 9 audit. Lets a user replace their recovery code without going through the disable→generate→re-enable dance. Works in BOTH standard and zero-knowledge modes.
vault-client
------------
New rotateRecoveryCode() method on the VaultClient interface. Returns RecoveryCodeSetupResult, identical in shape to setupRecoveryCode. Branches on the current vault state via getStatus():
Standard mode: re-fetches the plaintext MK from the server (same path as the initial setupRecoveryCode), generates a fresh 32-byte recovery secret, derives the new wrap key via HKDF, seals the MK, posts the wrap to /recovery-wrap (idempotent server-side, replaces the existing row in place).
Zero-knowledge mode: the server can't hand out the plaintext MK any more, so we use the cachedUnwrappedMkBytes that unlockWithRecoveryCode stashed when the user typed in their old recovery code earlier this session. Throws with a clear message if the cache is empty (e.g. the user landed on the page via init rather than recovery-unlock): "sign out and back in with your current recovery code first" so the cache gets repopulated.
Both branches:
- Wipe the raw MK reference after sealing
- Wipe the recovery secret after formatting
- Return the formatted code for the UI to display
The OLD recovery code is now permanently invalid. Using it on a future unlock attempt will fail with the standard generic "wrong recovery code" error.
Settings UI
-----------
New rotateStep state machine ('idle' / 'rotated') runs alongside the existing zkSetupStep so the user can rotate without leaving the active-state UI.
In the active-mode card (zkSetupStep === 'enabled'):
- Two side-by-side buttons: "🔁 Recovery-Code rotieren" + "Zero-Knowledge-Modus wieder deaktivieren …"
- When the user clicks rotate, handleRotateRecoveryCode() runs the flow and renders an inline "Neuer Recovery-Code" subsection (same .recovery-code monospace block + Copy button as the initial setup) with an explicit warning that the old code is now invalid.
- The "Ich habe den neuen Code gesichert" button wipes the displayed code and drops back to idle.
- The disable flow stays available (the rotate UI hides itself when the user has clicked into the disable confirmation path).
The 28 vault integration tests still pass (39 total in encryption-vault/, including the existing 11 KEK tests). The new rotateRecoveryCode method reuses the already-tested setRecoveryWrap server endpoint, so no new server-side tests are needed for this commit.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
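The crypto steps shared by both rotate branches (fresh secret → HKDF wrap key → seal MK) can be sketched with node:crypto for illustration; the real client uses WebCrypto, and the HKDF salt/info values and function names here are assumptions, not the production parameters.

```typescript
import { randomBytes, hkdfSync, createCipheriv, createDecipheriv } from "node:crypto";

// Sketch: seal the master key under a wrap key derived from a fresh
// 32-byte recovery secret. HKDF info/salt are illustrative values.
function sealMasterKey(masterKey: Buffer) {
  const recoverySecret = randomBytes(32); // fresh 32-byte recovery secret
  const wrapKey = Buffer.from(
    hkdfSync("sha256", recoverySecret, Buffer.alloc(0), "recovery-wrap", 32)
  );
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", wrapKey, iv);
  const wrapped = Buffer.concat([cipher.update(masterKey), cipher.final()]);
  // { iv, wrapped, tag } is what would be POSTed to /recovery-wrap;
  // recoverySecret is formatted into the code shown to the user.
  return { recoverySecret, iv, wrapped, tag: cipher.getAuthTag() };
}

// The matching unlock-side unwrap (what unlockWithRecoveryCode performs).
function openMasterKey(recoverySecret: Buffer, iv: Buffer, wrapped: Buffer, tag: Buffer): Buffer {
  const wrapKey = Buffer.from(
    hkdfSync("sha256", recoverySecret, Buffer.alloc(0), "recovery-wrap", 32)
  );
  const decipher = createDecipheriv("aes-256-gcm", wrapKey, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(wrapped), decipher.final()]);
}
```

Because GCM authenticates the ciphertext, an old recovery secret simply fails the tag check, which is why a rotated-away code surfaces as the generic "wrong recovery code" error.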
||
|
|
c2c960121e |
test(mana-auth): vault service integration tests against real postgres
Closes backlog #1 from the Phase 9 audit. Adds 28 integration tests for the EncryptionVaultService against a real Postgres so the RLS policies, CHECK constraints and audit-row writes are exercised as the production app actually sees them. The pure-crypto KEK tests in kek.test.ts already covered the wrap/unwrap primitives — this new file fills in the service-shaped gaps that need a real DB.
Test infrastructure
-------------------
- Reads TEST_DATABASE_URL from env. The whole suite is SKIPPED via describe.skip if it's unset, so unrelated CI runs and `bun test` from a fresh checkout don't fail on a missing connection. The encryption-vault sub-job has to provision a Postgres explicitly.
- The schema is assumed already migrated (run `pnpm db:push` or apply sql/002 + sql/003 manually before invoking the suite). Tests insert a fresh test user per case via beforeEach, so cross-test pollution is impossible despite the FK to auth.users.
- afterAll cleans up the user (CASCADE wipes vault + audit) and closes the postgres pool so bun test exits cleanly.
Coverage
--------
init (3):
- Mints a fresh vault, wrapped_mk + wrap_iv populated, ZK off
- Idempotent (returns same key)
- Audit rows are written
getStatus (5):
- vaultExists=false for unconfigured user
- vaultExists=true after init, no recovery wrap
- hasRecoveryWrap=true after setRecoveryWrap
- zeroKnowledge=true after enableZK
- Does NOT write an audit row (cheap metadata read)
setRecoveryWrap (4):
- Stores wrap on existing vault
- VaultNotFoundError on missing vault
- Idempotent (replaces previous wrap)
- Writes recovery_set audit row
clearRecoveryWrap (3):
- Removes the wrap
- ZeroKnowledgeActiveError when ZK is on
- VaultNotFoundError on missing vault
enableZeroKnowledge (4):
- Flips zero_knowledge=true and NULLs out wrapped_mk + wrap_iv
- RecoveryWrapMissingError if no recovery wrap is set
- Idempotent (already-on is a no-op)
- VaultNotFoundError on missing vault
disableZeroKnowledge (2):
- Restores wrapped_mk from a client-supplied master key; verifies the round-trip via getMasterKey returns the same bytes
- No-op when ZK is already off
getMasterKey (3):
- Returns unwrapped MK in standard mode
- Returns recovery blob with requiresRecoveryCode=true in ZK mode
- VaultNotFoundError on missing vault
rotate (2):
- Mints a fresh MK and wipes any existing recovery wrap
- ZeroKnowledgeRotateForbidden in ZK mode
DB-level invariants (2):
- Setting wrapped_mk back while ZK is active is rejected by encryption_vaults_zk_consistency
- Setting wrap_iv to NULL while wrapped_mk is set is rejected by encryption_vaults_wrap_iv_pair
Both invariant tests wrap the Drizzle update in an arrow IIFE so expect(...).rejects.toThrow() sees a real Promise (Drizzle's chainable update() only executes on await/then).
Run results
-----------
With TEST_DATABASE_URL set + schema migrated: 28 pass, 0 fail, 64 expect() calls
Without TEST_DATABASE_URL set (default): 0 pass, 30 skip (full suite cleanly skipped)
KEK tests in kek.test.ts still run unaffected.
Drive-by: the kek.test.ts header comment was updated to point at the new sibling file instead of saying "tests will live alongside mana-sync" (which was outdated speculation from Phase 2).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
||
|
|
ea165c8b46 |
docs(audit): roll up Phase 9 in DATA_LAYER_AUDIT.md
Marks the Zero-Knowledge opt-in as live and documents the new
architecture surface so future readers can understand the trust
model without spelunking through six commits.
Updates
-------
1. Sprint table grows from Phase 1–8 to Phase 1–9, adds the six new
commits (4 milestones + 2 follow-ups: status endpoint + lock-screen
modal). Test count bumped from 262 to 284 (22 new in recovery.test.ts).
2. Section 5 "Encryption Pipeline" reworked:
- "Wer hält was?" now has TWO tables — Standard-Modus and
Zero-Knowledge-Modus — making the trust model difference explicit
- New "Recovery-Code-Pipeline" subsection with two ASCII flow
diagrams (setup + unlock) showing every step from "user clicks
button" to "MK in MemoryKeyProvider"
- New "Schlüssel- + Datei-Kette für Phase 9" table mapping each
code path to its file
3. "Was Mana technisch (nicht) sehen kann" rewritten to compare both
modes side by side. Standard mode keeps the existing
"theoretically decryptable by KEK operator" disclosure;
zero-knowledge mode is upgraded to a hard "computationally
incapable" guarantee — and the trade-off ("Recovery-Code lost =
data lost") is called out explicitly. The DB CHECK constraint
that enforces "ZK active ⇒ recovery wrap exists" is mentioned as
the schema-level safety net.
4. Backlog reordered. Phase 9 is no longer listed as an open item;
the only true-zero-knowledge follow-up is now item #1 (service
tests against real Postgres for the four new vault methods,
analogous to the existing kek.test.ts pattern but needing a
container DB). Items 2–8 are unchanged from the previous
roundup.
5. Eckdaten + Best Practices + final production-grade summary all
reflect the new ZK opt-in. Schwachstelle #4 row updated to
"Phase 1–9".
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
a48b2d5841 |
feat(layout): lock-screen recovery code unlock modal
Closes the second Phase 9 follow-up. When a user has zero-knowledge
mode active and signs in on a new device (or after a session expiry),
the layout's vault-unlock effect lands in the new
'awaiting-recovery-code' state. Previously this was a dead end —
the layout just logged a warning and the rest of the app sat with a
locked vault.
This commit adds the missing UI piece: a non-dismissable modal that
mounts whenever the unlock effect signals 'awaiting-recovery-code'.
RecoveryCodeUnlockModal component
---------------------------------
- Reads the singleton vault client via getVaultClient()
- Single text input + submit button
- On submit:
1. Calls vaultClient.unlockWithRecoveryCode(input)
2. On success: clears input, calls onUnlocked() prop → parent
hides the modal, app boots normally
3. On RecoveryCodeFormatError: shows a format hint
4. On any other error (wrong code OR corrupted blob — surfaced
uniformly so an attacker can't distinguish): shows
"Recovery-Code falsch, prüfe deine Eingabe"
- Non-dismissable: there's no Cancel button. Without the recovery
code the app cannot read encrypted data and would just sit in a
half-broken state. The user can sign out from the header (the
auth flow runs above the encryption layer) if they need to bail.
- Help text at the bottom is honest about the irreversible nature
of losing the recovery code.
Layout integration
------------------
+layout.svelte:
- Imports the modal
- New `needsRecoveryCode = $state(false)` flag
- The vault-unlock effect now switches on three branches instead
of just success/failure:
'unlocked' → needsRecoveryCode = false
'awaiting-recovery-code' → needsRecoveryCode = true (mount modal)
anything else → console.warn (unchanged)
- Logout path also resets needsRecoveryCode so the modal doesn't
leak across sessions
- {#if needsRecoveryCode} mounts the component at the bottom of
the markup (above the existing global toasts and banners)
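The three-branch switch described above, sketched as plain TypeScript (the real code lives in a Svelte 5 effect with `$state`; only the state names and branch behaviour follow the commit):

```typescript
// Sketch of the layout's vault-unlock branching.
type VaultUnlockState = "unlocked" | "awaiting-recovery-code" | string;

let needsRecoveryCode = false;

function handleUnlockResult(state: VaultUnlockState): void {
  switch (state) {
    case "unlocked":
      needsRecoveryCode = false;            // app boots normally
      break;
    case "awaiting-recovery-code":
      needsRecoveryCode = true;             // mounts RecoveryCodeUnlockModal
      break;
    default:
      console.warn("vault unlock:", state); // unchanged fallback
  }
}

function handleLogout(): void {
  needsRecoveryCode = false; // don't leak the modal across sessions
}
```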
The autofocus warning is suppressed via svelte-ignore — the input
needs immediate focus because it's the only thing the user can
interact with on this surface, and screen-reader users will hear
the modal's accessible name from the role="dialog" + aria-labelledby
binding.
End-to-end smoke flow that now works:
1. User goes to /settings/security on Device A, enables ZK
2. User signs out, signs back in on Device B
3. Layout effect calls vaultClient.unlock() → server returns
recovery blob → vaultClient state goes to awaiting-recovery-code
4. Modal mounts, user pastes their recovery code from password
manager
5. unlockWithRecoveryCode runs the inline AES-GCM unwrap, imports
the MK as non-extractable, caches the bytes for a future
disable, transitions to 'unlocked'
6. Modal calls onUnlocked → layout dismisses modal → rest of the
app boots and renders decrypted data
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
78d949d051 |
feat(crypto): vault status endpoint + settings page hydration
Closes the Phase 9 Milestone 4 known limitation where the settings
page always started in 'idle' state regardless of whether the user
had already enabled zero-knowledge mode. Adds a cheap server-side
status read + hydrates the page on mount.
Server side
-----------
New VaultStatus interface and getStatus(userId) method on
EncryptionVaultService — single SELECT against encryption_vaults,
no decryption, no audit logging (this gets called on every settings
page mount and we don't want to flood the audit log with read-only
metadata fetches). Returns sane defaults when the vault row doesn't
exist yet so the client can avoid a 404 dance.
GET /api/v1/me/encryption-vault/status →
{
vaultExists: boolean,
hasRecoveryWrap: boolean,
zeroKnowledge: boolean,
recoverySetAt: string | null
}
Client side
-----------
vault-client.ts gains a `getStatus()` method that bypasses the
fetchVault retry helper (status reads should be cheap and one-shot;
if they fail we let the caller fall back to defaults). Re-exports
VaultStatus + RecoveryCodeSetupResult from the crypto barrel.
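A sketch of the one-shot read with safe defaults — the endpoint path and response shape match the commit, while the fetch-parameter indirection is illustrative:

```typescript
// Status response shape from GET /api/v1/me/encryption-vault/status.
interface VaultStatus {
  vaultExists: boolean;
  hasRecoveryWrap: boolean;
  zeroKnowledge: boolean;
  recoverySetAt: string | null;
}

const DEFAULT_STATUS: VaultStatus = {
  vaultExists: false,
  hasRecoveryWrap: false,
  zeroKnowledge: false,
  recoverySetAt: null,
};

// One shot, no retry: on any failure the caller gets the idle defaults.
async function getStatus(fetchFn: typeof fetch = fetch): Promise<VaultStatus> {
  try {
    const res = await fetchFn('/api/v1/me/encryption-vault/status');
    if (!res.ok) return DEFAULT_STATUS;
    return (await res.json()) as VaultStatus;
  } catch {
    return DEFAULT_STATUS;
  }
}
```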
settings/security/+page.svelte
------------------------------
onMount kicks off a getStatus() call. Two things change based on
the response:
1. If the server says zero_knowledge=true, jump zkSetupStep to
'enabled' so the page renders the active-state UI directly
instead of the setup flow.
2. New `hasRecoveryWrap` state tracks whether a wrap is stored,
even if ZK isn't active yet. The idle branch now has TWO
variants:
- hasRecoveryWrap=false: original "Recovery-Code einrichten"
single button (unchanged from milestone 4)
- hasRecoveryWrap=true: amber notice "you have a code stored
but ZK isn't active" with three buttons:
* "Zero-Knowledge jetzt aktivieren" (jumps straight to the
enable call)
* "Neuen Recovery-Code generieren" (rotates the wrap)
* "Recovery-Code entfernen" (with two-click confirmation,
calls DELETE /recovery-wrap)
This handles the previously-orphaned state where a user generated a
code, copied it to their password manager, but never confirmed the
final activation step. Without this branch, the settings page showed
the "Setup" flow again after a reload even though a recovery wrap
was already stored: the vault wasn't actually in zero-knowledge mode
yet, so nothing technically failed, but the page and the stored
state disagreed and the user had no way to tell. Either way the
state was confusing.
handleSetupRecoveryCode + handleClearRecoveryCode now keep
hasRecoveryWrap in sync after the round trip.
Fail-quiet on getStatus error: if the network/auth/server-side fetch
fails, the page stays at the idle default. The user can still run
the setup flow, and any inconsistencies surface via the usual
server-side error responses.
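The mount-time decision reduces to a pure mapping from the status response onto the two pieces of page state; a sketch (function name hypothetical, field names per the /status shape):

```typescript
interface VaultStatus {
  vaultExists: boolean;
  hasRecoveryWrap: boolean;
  zeroKnowledge: boolean;
  recoverySetAt: string | null;
}

type ZkSetupStep = 'idle' | 'generated' | 'confirming' | 'enabling' | 'enabled';

// Maps the server status onto the page's initial state.
function hydrateFromStatus(status: VaultStatus): {
  zkSetupStep: ZkSetupStep;
  hasRecoveryWrap: boolean;
} {
  return {
    // ZK already active: render the active-state UI, skip the setup flow.
    zkSetupStep: status.zeroKnowledge ? 'enabled' : 'idle',
    // Tracked separately so the idle branch can offer the
    // "code stored but ZK not active" variant.
    hasRecoveryWrap: status.hasRecoveryWrap,
  };
}
```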
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
56312ff579 |
feat(settings): phase 9 milestone 4 — zero-knowledge UI section
Adds the user-facing setup + management surface for the Phase 9 recovery code + zero-knowledge opt-in. Lives in /settings/security between the Rotate and Honest-disclosure cards. Three-step setup flow --------------------- Step 1 — Generate Single button "Recovery-Code einrichten". Disabled unless the vault is currently unlocked. Clicks call vaultClient.setupRecoveryCode() which mints a fresh 32-byte secret, derives the wrap key, posts the sealed wrap to /recovery-wrap, and returns the formatted code. Step 2 — Display + copy Shows the formatted code (1A2B-3C4D-...) in a monospace, user- selectable block with a 📋 Copy button. Explicit warning: "Wir zeigen ihn dir nur ein einziges Mal." User clicks "Ich habe den Code gesichert" to advance. Step 3 — Confirm User has to type (or paste) the code back into a verification input. Comparison is case-insensitive and ignores dashes/whitespace on both sides so format jitter doesn't punish them. Mismatch shows a clear inline error and stays in the same step. Step 4 — Activate Final danger confirmation: "Wenn du jetzt aktivierst, löscht der Server seine Kopie deines Schlüssels." Click → vaultClient. enableZeroKnowledge() → server NULLs out wrapped_mk + wrap_iv, state flips to 'enabled', generatedCode is wiped from the closure. Active state ------------ After enable, the section shows a green "✅ Zero-Knowledge-Modus aktiv" panel with a "Disable" button. Disabling needs an unlocked vault (the cached MK bytes from the recovery-code unlock get sent back to the server for KEK re-wrapping). Two-click confirmation guards the destructive call. State machine ------------- zkSetupStep: 'idle' → 'generated' → 'confirming' → 'enabling' → 'enabled' plus a `handleResetSetup` escape that clears the in-flight code + input + error and drops back to 'idle' from any step. 
Known limitation: the page state doesn't survive a reload — there is no GET /encryption-vault/status endpoint yet to query the server's current zero_knowledge flag, so on a fresh page load we always start at 'idle' regardless of whether ZK is actually on. A future commit will add the status endpoint + an onMount call to hydrate zkSetupStep correctly. For now, the existing 'awaiting-recovery-code' badge from milestone 3 covers the lock- screen path, and the dashboard sets the right initial state at unlock time. Status badge fix from milestone 3 (statusBadge() handling the new 'awaiting-recovery-code' variant) is reused here. Styles ------ .zk-error — light red bordered alert for inline errors .zk-actions — flex row of buttons (wraps on mobile) .zk-step — bordered group with the step heading .recovery-code — monospace, user-select:all so click+copy works .recovery-input — monospace input for the confirm step .btn-ghost — transparent border-less variant for "Abbrechen" Dark-mode handling for the new surfaces is in the existing media query block. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
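The state machine and the tolerant confirm-step comparison can be sketched as pure helpers (names illustrative, not the actual component code):

```typescript
type ZkSetupStep = 'idle' | 'generated' | 'confirming' | 'enabling' | 'enabled';

// Forward transitions of the setup flow; 'enabled' is terminal
// until the separate disable path runs.
const NEXT: Record<ZkSetupStep, ZkSetupStep | null> = {
  idle: 'generated',       // code minted and displayed
  generated: 'confirming', // user confirmed they saved it
  confirming: 'enabling',  // typed-back code matched
  enabling: 'enabled',     // server wiped its wrap
  enabled: null,
};

function advance(step: ZkSetupStep): ZkSetupStep {
  const next = NEXT[step];
  if (next === null) throw new Error(`no forward transition from '${step}'`);
  return next;
}

// Case-insensitive, dash/whitespace-tolerant comparison so format
// jitter in the pasted code doesn't fail the confirm step.
function codesMatch(typed: string, generated: string): boolean {
  const norm = (s: string) => s.replace(/[\s-]/g, '').toUpperCase();
  return norm(typed) === norm(generated);
}
```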
||
|
|
6de01937cf |
feat(vault-client): phase 9 milestone 3 — recovery + zero-knowledge flows
Extends the browser-side vault client with five new methods that
mirror the server-side Phase 9 routes, plus a new
`awaiting-recovery-code` state that pauses the unlock mid-flow
when the server is in zero-knowledge mode.
VaultUnlockState gains a fourth variant
---------------------------------------
| { status: 'awaiting-recovery-code' }
This is the state the client sits in between calling unlock()
(which received a recovery blob from GET /key) and the user typing
their recovery code into the UI. The settings page status badge
got updated to render this case as "🔑 Recovery-Code erforderlich".
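A sketch of the extended union and the badge handling — the 'awaiting-recovery-code' variant and its badge text are from this commit, while the other three variant names and the fall-through are assumptions about the pre-existing code:

```typescript
type VaultUnlockState =
  | { status: 'locked' }
  | { status: 'unlocking' }
  | { status: 'unlocked' }
  | { status: 'awaiting-recovery-code' };

// Badge text for the new variant; the pre-existing badge strings are
// not reproduced here, so other states fall through to a placeholder.
function statusBadge(state: VaultUnlockState): string {
  if (state.status === 'awaiting-recovery-code') {
    return '🔑 Recovery-Code erforderlich';
  }
  return state.status; // stand-in for the existing badge texts
}
```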
New closure state inside createVaultClient
------------------------------------------
- pendingRecoveryBlob: stash for the recovery wrap returned by
GET /key in zero-knowledge mode. unlockWithRecoveryCode reads
from here so the second round of input doesn't need a re-fetch.
- cachedUnwrappedMkBytes: kept ONLY when the vault was unlocked
via the recovery code path AND the user might want to disable
zero-knowledge later (which needs to hand the MK back to the
server for KEK re-wrapping). The standard unlock path leaves
this null because the server already has the KEK wrap. Wiped
on lock(), on disable success, and on any state transition
that destroys the master key.
Modified existing methods
-------------------------
- unlock(): branches on the response shape. If the server returns
a recovery blob (zero-knowledge mode), stash it via
awaitRecoveryCode() and return state='awaiting-recovery-code'.
Otherwise unwrap as before. Same fork applies to the /init
fallback path.
- rotate(): if the server somehow returned a ZK shape (it should
never — rotate is forbidden in ZK mode server-side), bail with
a server error instead of silently misinterpreting bytes.
- lock(): also clears pendingRecoveryBlob + wipes
cachedUnwrappedMkBytes.
New methods (all wired into the returned VaultClient)
-----------------------------------------------------
- setupRecoveryCode(): generates a fresh 32-byte recovery secret,
derives the wrap key, re-fetches the active master key in
extractable form, seals it, posts to /recovery-wrap, returns
the formatted recovery code for the UI to display. Wipes both
raw byte references after the seal. Caller is responsible for
clearing the formatted string from memory once the user has
confirmed they backed it up.
- clearRecoveryCode(): DELETE /recovery-wrap. Server enforces the
"not while ZK is active" rule.
- enableZeroKnowledge(): POST /zero-knowledge { enable: true }.
Maps RECOVERY_WRAP_MISSING server response to a clear "set up
a recovery code first" client error.
- disableZeroKnowledge(): POST /zero-knowledge { enable: false,
masterKey: base64 }. Reads the cached MK bytes, base64-encodes,
sends. Wipes the cache after success.
- unlockWithRecoveryCode(code): completes the flow that started
in unlock(). Parses the user-typed code (RecoveryCodeFormatError
bubbles up if the shape is wrong), derives the wrap key, runs a
single inline AES-GCM decrypt on the stashed blob (yields both
the raw bytes for the cache AND a non-extractable runtime key
for the provider), wipes raw bytes, transitions to 'unlocked'.
Generic error message on failure ("wrong recovery code or
corrupted vault") so an attacker can't distinguish wrong-code
from tampered-blob. Stays in 'awaiting-recovery-code' on
failure so the user can retry without a re-fetch.
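The uniform failure path can be sketched in isolation — every unwrap failure collapses into one generic outcome so wrong-code and tampered-blob are indistinguishable, and the state stays retryable (in the real method, RecoveryCodeFormatError is re-thrown before this point; names here are hypothetical):

```typescript
type UnlockOutcome =
  | { status: 'unlocked' }
  | { status: 'awaiting-recovery-code'; error: string };

async function tryUnlockWithCode(
  unwrap: () => Promise<void> // stands in for the AES-GCM unwrap of the stashed blob
): Promise<UnlockOutcome> {
  try {
    await unwrap();
    return { status: 'unlocked' };
  } catch {
    // Deliberately generic: no side channel on which check failed.
    return {
      status: 'awaiting-recovery-code',
      error: 'wrong recovery code or corrupted vault',
    };
  }
}
```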
Drive-by stale test fix
-----------------------
aes.test.ts still carried a Phase 1 assertion that `tasks` and
`events` resolve to null because both tables were on enabled:false
at the time. Phase 7.1 flipped both tables on, so the assertion had
been failing since that commit. Replaced the test with a stable
negative case (a non-existent table name) that doesn't shift with
each rollout phase.
Test results: 78/78 crypto tests pass after the fix.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
a55aae6cb5 |
chore(macmini): infra cleanup — compose env, blackbox mem, prometheus gpu probes
Three Mac Mini infrastructure follow-ups bundled:
1. docker-compose.macmini.yml — drop ghost backend env vars from
the mana-app-web service (todo, calendar, contacts, chat, storage,
cards, music, nutriphi `PUBLIC_*_API_URL{,_CLIENT}` plus the memoro
server URLs). The matching consumers were removed in the earlier
ghost-API cleanup commits, so these env entries had been wiring
nothing into the running container for several deploys. Force-
recreating mana-app-web after pulling this commit will pick up
the slimmer env automatically.
2. docker-compose.macmini.yml — bump `mana-mon-blackbox` mem_limit
from 32m to 128m. blackbox-exporter v0.25 sits north of 32m
under load and was OOM-restart-looping every ~90 seconds, which
in turn made `status.mana.how` and the prometheus probe metrics
stale (since the scraper was missing every other window).
3. docker/prometheus/prometheus.yml — split `blackbox-gpu` into two
jobs:
- `blackbox-gpu` now probes `/health` via the http_health
module, because the GPU services (whisper STT, FLUX image
gen, Coqui TTS) return 401/404 on `/` by design (auth or
API-only). The previous http_2xx-on-`/` probe was reporting
all four as down even though they answered `/health` with
200, which inflated the down count on status.mana.how.
- `blackbox-gpu-root` keeps the http_2xx-on-`/` probe for
Ollama, which has no `/health` endpoint but does answer
2xx on its root.
Both jobs share the same blackbox-exporter relabel rewrite so
the targets are routed through the exporter container, not
scraped directly by VictoriaMetrics.
Verified post-fix: status.mana.how reports 41/42 services up (only
`gpu-video` remains down — LTX Video Gen is intentionally not
deployed yet on the Windows GPU box).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
4cfa869f33 |
docs: PRE_LAUNCH_CLEANUP.md — what we removed before launch and why
Companion document to the pre-launch cleanup commits. Describes every
piece of legacy/dead/deprecated scaffolding that was removed while the
system still has no live users — the cheapest moment to do it.
Each entry follows a fixed shape:
- What was there
- Why it had to happen pre-launch (the user-facing risk if done later)
- What concretely changed
- LOC / size impact
Thirteen entries land with this commit:
1. Schema v1–v10 collapsed into a single db.version(1)
2. setApplyingServerChanges() deprecated shim removed
3. LocalLabel @deprecated alias renamed to TaskTag
4. labelsStore backward-compat alias removed
5. $lib/stores/tags.svelte.ts re-export shim removed
6. EMOJI_TO_ICON_MAP legacy data-migration fallback removed
7. useAllEvents() unused calendar query removed
8. Cross-app search providers lazy-loaded
9. Bundle analysis findings (web-llm route-isolated, no further work)
10. Production restoration — 2026-04-07 outage postmortem
11. Eighteen broken subdomains triaged — 16 fixed, 2 follow-ups
12. Memoro server detached from mana.how stack
13. Ghost backend API hostnames removed (12 hostnames + clients)
Plus a "How to add an entry" template for future cleanups.
The two open follow-ups are documented with concrete manual-fix
instructions:
- stt-api / tts-api 502 — needs Cloudflare Zero Trust dashboard
cleanup of stale Public Hostname mappings on an old tunnel.
- gpu-video.mana.how — LTX video generation, planned but not yet
deployed on the Windows GPU box.
Once the system has launched this document becomes historical and
should not be edited further — new pre-launch cleanups won't be a
thing anymore by definition.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
85e38176d8 |
chore(macmini/scripts): runbook hardening — status diff + ingress walk
Two failures during the 2026-04-07 production outage triage were
caused not by the underlying outage but by `status.sh` and
`health-check.sh` hiding the broken state. Both scripts were
hardened so the same outage shape can't recur invisibly.

status.sh — compose-vs-running diff
-----------------------------------
The old script printed "X containers running / Y total" without
noticing that some compose-defined containers were never started in
the first place. The Mac Mini was running 37 of 42 declared
containers and the script reported "37 running" with no indication
of the gap — `mana-core-sync` and `mana-api-gateway` were silently
missing for hours.

New behaviour: read every service from `docker compose config`,
diff its `container_name` against `docker ps`, and report each
declared service whose container is not currently up. The same
outage state would have been flagged on the very first run.

health-check.sh — public-hostname walk via Cloudflare DNS
---------------------------------------------------------
The old script probed ~50 hardcoded `localhost:<port>/health`
endpoints across Chat, Todo, Calendar, etc. — but the per-app HTTP
backends those endpoints expected don't exist anymore (the
ghost-API cleanup removed them entirely). Every probe returned HTTP
000 / connection refused, generating a wall of false-positive
alerts that drowned out the real signal.

The block was replaced with a dynamic walk of every `hostname:`
entry in `~/.cloudflared/config.yml`. Each hostname is probed via
the public Cloudflare tunnel, so DNS gaps, missing tunnel routes,
502/530 origin failures and timeouts surface as failures the same
way real users would experience them. On its first run after the
cleanup it surfaced eighteen previously-invisible hostname failures
(no DNS, 502, or 530) — every one of them a real production issue.

DNS resolution intentionally goes through `dig +short HOST @1.1.1.1`
instead of the local resolver. The Mac Mini's home-router DNS keeps
a negative cache for hours after the first failed lookup, so newly
added CNAMEs (like the post-outage sync/media records) appeared as
"no response" from inside the script for hours even though external
users saw them resolve immediately. Asking Cloudflare's DNS directly
gives the script the same view the public internet has.

The Matrix, Element, GPU-LAN-redundant and monitoring port-by-port
blocks were removed — the public-hostname walk covers all of them
via their `*.mana.how` hostnames going through the actual tunnel.

The "stuck container" detector now ignores `*-init` containers
(one-shot init pods, Exit 0 = success, intentionally never re-run).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
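The declared-vs-running diff at the heart of the status.sh change reduces to a set difference; a minimal sketch (in TypeScript for illustration — the real script is shell):

```typescript
// Every compose-declared container name that is not in the
// `docker ps` output gets reported as missing.
function missingContainers(declared: string[], running: string[]): string[] {
  const up = new Set(running);
  return declared.filter((name) => !up.has(name));
}
```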
||
|
|
a94abd37e0 |
chore(macmini): pin COMPOSE_PROJECT_NAME=manacore-monorepo
The Mac Mini's existing containers were originally created under
the project name `manacore-monorepo` (from the historical directory
name) but the current checkout lives in `mana-monorepo`. Without an
explicit pin, every `docker compose up` from this directory spawned
a SECOND project, creating duplicate containers and silent volume
conflicts. The 2026-04-07 outage recovery had to pass
`-p manacore-monorepo` manually for exactly this reason.

Pinning the name in `.env.macmini.example` (which is checked in)
means any fresh checkout that copies it to `.env.macmini` inherits
the right project name automatically. The pin is also live on the
production Mac Mini in `.env` and `.env.macmini` (untracked).
Removing this line WILL break the next deployment — the comment in
the file says so explicitly.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
||
|
|
171fbd18be |
chore(mana/web): pre-launch module cleanup — schema collapse, dead code, lazy search
Seven independent pre-launch tidy-ups bundled because they all
touch the same module-layer surface and the larger commit reads
more clearly than seven adjacent two-line PRs.
1. database.ts schema v1–v10 collapsed into a single canonical
db.version(1). The system has no live users yet, so dropping the
versioned migration history is the cheapest moment to do it.
The post-collapse Dexie table set is provably identical to the
pre-collapse state (asserted by module-registry.test.ts).
Removed: EMOJI_TO_ICON map + v2 upgrade, v3 timeBlocks data
migration (~250 LOC of one-shot code), versions 4-10.
Also dropped the @deprecated `setApplyingServerChanges()` shim
(replaced by `beginApplyingTables()` weeks ago, no callers).
2. LocalLabel @deprecated alias renamed to TaskTag in the todo
module and all 11 consumers (board-views, ListView, DetailView,
QuickAddTask, +page.svelte). The alias was annotated @deprecated
but had eleven live consumers — exactly the worst kind of dead
code, the one that grows accidental new consumers via autocomplete
the longer it stays. Renamed to TaskTag rather than `Tag` to
avoid colliding with the `Tag` icon from `@mana/shared-icons`.
3. labelsStore backward-compat alias deleted from todo/stores —
pure dead code with zero consumers.
4. EMOJI_TO_ICON_MAP fallback in habits/queries removed. The
constant only existed as the in-memory equivalent of the v2
schema migration that was just deleted; once no record can have
the old `emoji` field, the fallback can never fire.
5. useAllEvents() in calendar/queries removed. JSDoc itself called
it out as "for backward compatibility with calendar-specific
views" — zero external consumers, only the barrel referenced it.
6. $lib/stores/tags.svelte.ts re-export shim deleted. It was a
20-line pure re-export from @mana/shared-stores with the explicit
header "for backward compatibility with existing imports".
Thirteen importers (todo/calendar/contacts/places/zitare ListView
+ DetailView, plus +layout.svelte and the calendar/contacts/tags
route +page.svelte files) rewritten to import directly.
7. SearchRegistry got `registerLazy(appId, loader)` and the eleven
per-app providers now register via dynamic `import()`. Spotlight
search is opened on demand, so the eleven provider chunks stay
out of the initial JS bundle until the user actually searches.
Sister benefit: a search filtered to a single appId only loads
that one provider.
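The lazy-registration pattern can be sketched as follows — interface and class shape are simplified assumptions, not the real SearchRegistry surface; in the app the loader closures would be dynamic `import()` calls:

```typescript
interface SearchProvider {
  appId: string;
  search(query: string): Promise<string[]>;
}

type ProviderLoader = () => Promise<SearchProvider>;

class SearchRegistry {
  private loaders = new Map<string, ProviderLoader>();
  private loaded = new Map<string, SearchProvider>();

  // Registration is cheap: only the loader closure is kept, so the
  // provider chunk stays out of the initial bundle.
  registerLazy(appId: string, loader: ProviderLoader): void {
    this.loaders.set(appId, loader);
  }

  // A search filtered to one appId loads only that provider,
  // and at most once.
  private async resolve(appId: string): Promise<SearchProvider | undefined> {
    if (!this.loaded.has(appId)) {
      const loader = this.loaders.get(appId);
      if (!loader) return undefined;
      this.loaded.set(appId, await loader());
    }
    return this.loaded.get(appId);
  }

  async search(query: string, appId: string): Promise<string[]> {
    const provider = await this.resolve(appId);
    return provider ? provider.search(query) : [];
  }
}
```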
The structural backbone for all of this — the per-module
`module.config.ts` files plus `module-registry.{ts,test.ts}` — was
committed earlier in
|
||
|
|
3a473897ec |
chore(mana/web): pre-launch cleanup — remove ghost backend API clients
Twelve `*-api.mana.how` Cloudflare hostnames (todo, calendar,
contacts, chat, storage, cards, music, picture, presi, zitare,
clock, context) plus their matching `lib/api/services/*.ts` clients
still existed in the unified web app even though the per-app HTTP
backends had been gone since the local-first migration. Their
tunnel routes pointed at ports nothing listened on, so every
consumer call returned 502 — and the corresponding
`__PUBLIC_*_API_URL__` runtime variables were silently injected
into every page render.

The only live consumer was `qrExportService` (committed separately
as part of the rewrite to read directly from Dexie). Two admin /
data-management pages also imported the types but were already
migrated to the unified `adminService` / `myDataService` clients.

Removed:
- Twenty-four files deleted: the twelve `lib/api/services/*.ts`
  clients plus their `*.test.ts` siblings.
- `services/index.ts` collapsed from a thirteen-symbol re-export to
  just the four genuinely server-bound services (`adminService`,
  `landing`, `myDataService`, `qrExportService`).
- `hooks.server.ts` no longer reads or injects any of the twelve
  `__PUBLIC_*_API_URL__` runtime variables, and the CSP
  `connect-src` list shrank by the same amount. The Memoro server
  URL was also removed since the unified `memoro` module is fully
  local-first and never hit the standalone server (the
  docker-compose service stays defined for the mobile app).
- `routes/status/+page.server.ts` stops probing the dead per-app
  health endpoints — only `auth`, `sync`, `uload-server`, `media`
  and `llm` remain in the public status page.

The cloudflared tunnel ingress entries for these hostnames were
also removed in `~/.cloudflared/config.yml` on the Mac Mini (not in
this repo) so the formerly-502 responses now return 404 from the
edge.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
||
|
|
c27cb84f28 |
fix(mana/web): bundle rrule into SSR build to fix /calendar 500
`rrule@2.8.1` ships dual CJS/ESM builds but its `package.json` has no
`exports` field, so the SvelteKit Node adapter resolves it to the CJS
bundle at runtime. The named import `import { RRule } from 'rrule'`
then throws `SyntaxError: Named export 'RRule' not found` whenever
`/calendar` SSRs, which crashed every render of the route in production.
Adding `'rrule'` to `ssr.noExternal` forces Vite to bundle rrule into
the server output, where its CJS↔ESM interop layer handles the named
import correctly. The source files using rrule (`time-blocks/recurrence.ts`
and `calendar/components/CustomRecurrenceBuilder.svelte`) need no change.
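The fix as it would appear in vite.config.ts — a sketch with the surrounding config elided (and the usual defineConfig wrapper omitted for brevity):

```typescript
// vite.config.ts — only the relevant ssr block is shown.
export default {
  ssr: {
    // Bundle rrule into the server output so Vite's CJS↔ESM interop
    // resolves the named `RRule` export instead of Node's CJS loader.
    noExternal: ['rrule'],
  },
};
```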
Surfaced via the rebuilt `health-check.sh` ingress walk after a
postgres restart cycle pushed mana-app-web into a 500 state.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
f46d1328d8 |
feat(mana-auth): phase 9 milestone 2 — vault recovery wrap + zero-knowledge
Server-side support for the Phase 9 zero-knowledge opt-in. Adds the
recovery-wrap columns + four new vault operations + the routes that
expose them.
Schema (sql/003_recovery_wrap.sql)
----------------------------------
Adds to auth.encryption_vaults:
- recovery_wrapped_mk text (NULL until set)
- recovery_iv text (NULL until set)
- recovery_format_version smallint NOT NULL DEFAULT 1
- recovery_set_at timestamptz
- zero_knowledge boolean NOT NULL DEFAULT false
Drops NOT NULL from wrapped_mk + wrap_iv (a vault in zero-knowledge
mode has no server-side wrap at all).
Three CHECK constraints enforce the invariant at the DB level so no
service bug can leave a vault in an inconsistent state:
- encryption_vaults_has_wrap — at least one of (wrapped_mk,
  recovery_wrapped_mk) is set
- encryption_vaults_wrap_iv_pair — ciphertext + IV are paired
  (both NULL or both set) on each wrap form
- encryption_vaults_zk_consistency — zero_knowledge=true implies
  wrapped_mk IS NULL AND recovery_wrapped_mk IS NOT NULL
If a code-level bug ever tried to enable ZK without a recovery wrap,
or to leave both wraps empty, Postgres would reject the UPDATE.
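The three constraints can be read as one invariant over a vault row; a sketch of that predicate in TypeScript (the real enforcement lives in Postgres — this is only a restatement for readability):

```typescript
interface VaultRow {
  wrappedMk: string | null;
  wrapIv: string | null;
  recoveryWrappedMk: string | null;
  recoveryIv: string | null;
  zeroKnowledge: boolean;
}

function vaultRowIsConsistent(row: VaultRow): boolean {
  // encryption_vaults_has_wrap: at least one wrap form present.
  const hasWrap = row.wrappedMk !== null || row.recoveryWrappedMk !== null;
  // encryption_vaults_wrap_iv_pair: ciphertext + IV both NULL or both set.
  const kekPaired = (row.wrappedMk === null) === (row.wrapIv === null);
  const recoveryPaired =
    (row.recoveryWrappedMk === null) === (row.recoveryIv === null);
  // encryption_vaults_zk_consistency: ZK implies recovery-only wrap.
  const zkConsistent =
    !row.zeroKnowledge ||
    (row.wrappedMk === null && row.recoveryWrappedMk !== null);
  return hasWrap && kekPaired && recoveryPaired && zkConsistent;
}
```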
Drizzle schema (db/schema/encryption-vaults.ts)
-----------------------------------------------
Mirrors the migration: wrappedMk + wrapIv become nullable, the four
new columns added with the right defaults. Inline doc comment explains
the zero-knowledge fork.
Service (services/encryption-vault/index.ts)
--------------------------------------------
VaultFetchResult gains optional `requiresRecoveryCode` /
`recoveryWrappedMk` / `recoveryIv` so the route handler can serialize
the right shape. masterKey becomes Uint8Array | null (null in ZK mode).
Existing methods updated:
- init: branches on row.zeroKnowledge — returns the recovery blob
instead of an unwrapped MK if the user is already in ZK mode
- getMasterKey: same fork, with audit context "zk-recovery-blob"
- rotate: throws ZeroKnowledgeRotateForbidden in ZK mode (the server
can't re-wrap a key it can't read). Also wipes any stale recovery
wrap on rotation — the new MK has nothing to do with the old one,
so the old recovery code would unwrap into garbage.
New methods:
- setRecoveryWrap(userId, { recoveryWrappedMk, recoveryIv }, ctx)
Stores (or replaces) the user's recovery wrap. Idempotent.
- clearRecoveryWrap(userId, ctx)
Removes the recovery wrap. Forbidden if ZK is active (would lock
the user out) — throws ZeroKnowledgeActiveError → 409.
- enableZeroKnowledge(userId, ctx)
NULLs out wrapped_mk + wrap_iv, sets zero_knowledge=true. Requires
a recovery wrap to already be present — throws
RecoveryWrapMissingError → 400 otherwise. Idempotent on already-on.
- disableZeroKnowledge(userId, mkBytes, ctx)
Inverse: takes a freshly-unwrapped MK from the client, KEK-wraps
it, stores as wrapped_mk, flips zero_knowledge=false. The client
is the only entity that can supply the MK at this point, since
the server can't decrypt the recovery wrap.
Three new error classes:
- RecoveryWrapMissingError → 400 RECOVERY_WRAP_MISSING
- ZeroKnowledgeActiveError → 409 ZK_ACTIVE
- ZeroKnowledgeRotateForbidden → 409 ZK_ROTATE_FORBIDDEN
Audit action union extended with:
- 'recovery_set' | 'recovery_clear' | 'zk_enable' | 'zk_disable'
Routes (routes/encryption-vault.ts)
-----------------------------------
GET /key + POST /init now share a serializeFetchResult helper that
returns either:
- { masterKey, formatVersion, kekId } (standard)
- { requiresRecoveryCode: true, recoveryWrappedMk, recoveryIv,
  formatVersion } (ZK mode)
Three new routes:
- POST /recovery-wrap — body: { recoveryWrappedMk, recoveryIv }.
  Stores the wrap. Validates both fields are non-empty strings.
- DELETE /recovery-wrap — removes the wrap. 409 if ZK active.
- POST /zero-knowledge — body: { enable: boolean, masterKey?: base64 }.
  enable=true: flips ZK on (no MK in the body needed).
  enable=false: flips ZK off (MK required). Validates the MK decodes
  to exactly 32 bytes and wipes the bytes after handing them to the
  service.
POST /rotate now catches ZeroKnowledgeRotateForbidden → 409
ZK_ROTATE_FORBIDDEN so the client can show "disable zero-knowledge
first".
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
2f48f867f1 |
feat(crypto): phase 9 milestone 1 — recovery code primitives
Foundation for the zero-knowledge opt-in. New crypto/recovery.ts
provides the user-held secret half of the Phase 9 design:
- generateRecoverySecret() — 32 random bytes (256 bits) from Web
Crypto CSPRNG
- formatRecoveryCode() — renders raw bytes as 16 dash-separated
groups of 4 uppercase hex chars: "1A2B-3C4D-5E6F-..." (79 chars
total). Copy-pasteable, password-manager-friendly, no language
dependency.
- parseRecoveryCode() — tolerant inverse: strips whitespace + any
dash placement, accepts mixed case, throws RecoveryCodeFormatError
on wrong length / non-hex (no position-leaking errors)
- deriveRecoveryWrapKey() — HKDF-SHA256 with empty salt + versioned
info "mana-recovery-v1" → non-extractable AES-GCM-256 wrap key.
HKDF (not PBKDF2/scrypt) because the input already has full 256
bits of entropy — no slow KDF needed.
- wrapMasterKeyWithRecovery() — exports the master key bytes,
AES-GCM-encrypts with the recovery wrap key, returns base64
ciphertext + IV ready for the server. Wipes the raw MK reference
immediately after sealing.
- unwrapMasterKeyWithRecovery() — inverse, returns a non-extractable
CryptoKey. Throws uniformly on wrong code / tampered ciphertext —
the UI maps both to "wrong recovery code" so an attacker gets no
side-channel signal about which check failed.
Why hex over BIP-39?
- No 2048-word wordlist to bundle (~17 KB even gzipped)
- 32 random bytes have full 256 bits of entropy on their own — no
checksum word needed because there's nothing to "validate"
- Trivially copy-pasteable into any password manager, no language
dependency, no autocomplete-confusing dictionary words
- Survives autocorrect (no spaces)
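A sketch of the format/parse pair under the documented shape (32 bytes → 64 uppercase hex chars → 16 dash-separated groups of 4, 79 chars total) — the real module also derives the wrap key and seals the MK, which this sketch leaves out:

```typescript
class RecoveryCodeFormatError extends Error {}

function formatRecoveryCode(secret: Uint8Array): string {
  if (secret.length !== 32) throw new RecoveryCodeFormatError('need 32 bytes');
  const hex = Array.from(secret, (b) =>
    b.toString(16).padStart(2, '0').toUpperCase()
  ).join('');
  return hex.match(/.{4}/g)!.join('-'); // 16 groups of 4 → 79 chars
}

function parseRecoveryCode(code: string): Uint8Array {
  // Tolerant inverse: strip whitespace and dashes anywhere, accept
  // mixed case.
  const hex = code.replace(/[\s-]/g, '').toUpperCase();
  if (!/^[0-9A-F]{64}$/.test(hex)) {
    // Deliberately position-free error message.
    throw new RecoveryCodeFormatError('invalid recovery code');
  }
  const bytes = new Uint8Array(32);
  for (let i = 0; i < 32; i++) {
    bytes[i] = parseInt(hex.slice(i * 2, i * 2 + 2), 16);
  }
  return bytes;
}
```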
22 tests in recovery.test.ts cover:
- generation (length, randomness)
- format (16 groups, uppercase, total 79 chars, wrong-length input)
- parse (roundtrip, lowercase, whitespace, missing dashes, extra
dashes, error cases, no position leakage)
- key derivation (non-extractable, deterministic, wrong-length input)
- wrap/unwrap roundtrip (with and without format/parse trip)
- failure modes (wrong code, tampered ciphertext)
- IV uniqueness (no reuse on repeated wraps)
This is the self-contained foundation. Server-side schema, vault
service extensions, vault-client wire-up and the settings UI all
build on these primitives in subsequent commits.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
25aabc3f49 |
docs(audit): roll up Phase 7 + 8 in DATA_LAYER_AUDIT.md
The encryption rollout is complete. Updates the audit doc to reflect
the final state:
- Encryption-Sprints table grows to Phase 1–8 with the four new
commits (status roundup, 7.1 timeBlocks-coupled, 7.2 storeless,
8 storage/picture/music/events)
- Section 5 encrypted-tables list bumped from 14 to 25+ tables —
adds tasks, calendar.events, timeBlocks, questions, answers,
links, documents, meals, files, images, songs, mukkePlaylists,
socialEvents, eventGuests
- New "Bewusste Plaintext-Carve-Outs" subsection documents the
structural fields kept plaintext on purpose (songs.artist for
browsing aggregations, links.originalUrl for the public redirect
handler, socialEvents decrypt-before-publish, files/images
indexed columns where the index is now a no-op, etc.)
- New "Tabellen ohne Encryption (bewusst)" subsection explains why
manaLinks, boards, boardItems and the sync/system tables stay
out of the registry
- Backlog reordered: the three Phase 7 items are now done, only
Phase 9 (recovery-code opt-in for true zero-knowledge),
server-side image/file wrapping, and the boards edge case remain
- "Test-Status" line + "Best Practices" line + "Eckdaten" line all
bumped from 22 to 25+ tables
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
be611cd1ee |
feat(crypto): phase 8 — encrypt remaining tables (storage, picture, music, events, guests)
Closes the last sweep of registry entries that were stuck on
enabled:false. Each table is corrected to match the actual schema
fields, then flipped on with writers + readers wrapped.
Registry corrections + flips
----------------------------
- files: was ['name','originalName','notes'] → ['name','originalName']
LocalFile has no `notes` column. `name` IS indexed but no
.where('name') call site exists in the app, so encryption is safe
— the index just becomes a no-op for content lookups.
- images: was ['prompt','negativePrompt','revisedPrompt','notes']
→ ['prompt','negativePrompt']. Neither revisedPrompt nor notes
exists on LocalImage. `prompt` is indexed, same caveat as
files.name.
- songs: was ['title','artist','album','lyrics','notes']
→ ['title']. lyrics + notes don't exist; artist / album /
albumArtist / genre stay PLAINTEXT so the album / artist / genre
browsing views (which aggregate by those fields) don't have to
decrypt the entire library on every render.
- mukkePlaylists: kept ['name','description'], now flipped on
- socialEvents: was ['title','description','notes']
→ ['title','description','location'] (no notes column; location
is the actually sensitive third field)
- eventGuests: was ['name','email','phone','notes']
→ ['name','email','phone','note'] (singular `note`, matching the
schema)
- manaLinks: REMOVED from registry entirely. Despite the name it's
the cross-app foreign-key table — sourceAppId / sourceRecordId /
targetAppId / targetRecordId — with zero user-typed content. The
Phase 1 placeholder listed label/url/notes which don't exist.
Storage (files)
---------------
- storage/stores/files.svelte.ts: renameFile encrypts diff before
fileTable.update. Other store ops touch only metadata (favorite /
isDeleted / parent) so they stay unwrapped.
- storage/queries.ts: useAllFiles decrypts before sort
- storage/ListView.svelte (Workbench): same decrypt-before-render
- storage/views/DetailView.svelte (inline editor binds to plaintext)
- cross-app-queries.useStorageStats: decrypts only the recent slice
(totalSize stays cheap because it reads plaintext .size)
- search/providers/storage: decrypts before substring scoring
- storage/trash/+page.svelte: decrypts the visible deleted set
Picture (images)
----------------
- No client-side .add for images — they arrive purely via sync, so
no store-level encryption to add. Reads are wrapped:
- picture/queries.ts: useAllImages, useArchivedImages, allImages$
- picture/ListView.svelte (uses prompt as alt text)
- cross-app-queries.useRecentImages (dashboard widget renders prompt)
- search/providers/picture: decrypts before substring scoring
Sync-applied plaintext rows coexist with locally-edited ciphertext
rows without issue — decryptRecord is per-row idempotent on
non-encrypted strings.
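That idempotence hinges on a prefix check; a hedged sketch (the `enc:1:` prefix matches the blob format documented in the Phase 4 commit, the helper names are illustrative):

```typescript
// Values are only treated as ciphertext when they carry the blob prefix
// (`enc:1:<iv-b64>.<ct-b64>`); plaintext strings pass through untouched,
// so sync-applied plaintext rows and locally-encrypted rows can coexist.
const ENC_PREFIX = 'enc:1:';

function isEncryptedBlob(value: unknown): value is string {
  return typeof value === 'string' && value.startsWith(ENC_PREFIX);
}

// Illustrative stand-in for the per-value decrypt path: running it on a
// plaintext value (or running it twice) is harmless.
function decryptValueSketch(value: string, decrypt: (blob: string) => string): string {
  return isEncryptedBlob(value) ? decrypt(value) : value;
}
```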
Music (songs + playlists)
-------------------------
- music/stores/library.svelte.ts: updateMetadata + insert encrypt
diffs before write
- music/stores/playlists.svelte.ts: create snapshots plaintext for
the return value before encryptRecord mutates the row, update
encrypts diff
- music/queries.ts: useAllSongs decrypts before title sort,
useAllPlaylists decrypts before name sort
- music/ListView.svelte (Workbench)
- music/views/DetailView.svelte (inline editor)
- cross-app-queries.useMusicStats decrypts only the recent slice
- search/providers/music decrypts songs + playlists before scoring
Events (social gatherings + guests)
-----------------------------------
This one needed careful handling because publishEvent is the
exception to the local-only confidentiality model — it intentionally
pushes the event content to a public RSVP page anyone with the link
can read.
- events/stores/events.svelte.ts:
- createEvent encrypts before .add
- updateEvent encrypts the diff before .update
- publishEvent + syncSnapshotIfPublished now DECRYPT the local row
before forwarding to eventsApi.publish / .updateSnapshot — the
server-side public snapshot needs plaintext, by design. The
privacy contract is: drafts and unpublished events are
encrypted at rest; the moment you publish, you accept that the
content becomes readable via the share link.
- events/stores/guests.svelte.ts: addGuest + updateGuest encrypt
diff before write. Guests are NEVER pushed to the public
snapshot, so no decrypt-before-publish path.
- events/queries.ts: useAllEvents, useUpcomingEvents, usePastEvents,
useEvent all decrypt the visible socialEvents rows before joining
with timeBlocks. useGuestsByEvent + useEventGuests decrypt the
eventGuests rows.
Phase 8 is the last big sweep. The registry is now ~25 tables on,
~3 left intentionally off (manaLinks because no user content;
boards / boardItems / dreamSymbols partially handled in earlier
phases). The "what's encrypted?" surface should look complete on
the settings/security page.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
40b7069eb0 |
feat(crypto): phase 7.2 — encrypt storeless modules (questions, links, documents, meals)
Five storeless modules whose writes happen directly from view files
(no central store yet) get the same encryption treatment by wrapping
each .add/.update call site with encryptRecord and each read site
with decryptRecord(s). Registry entries are also corrected to match
the actual schemas — the previous Phase 1 placeholder names guessed
the wrong field names.
Registry corrections + flips
----------------------------
- meals: was ['description', 'notes', 'aiAnalysis'] → now
['description', 'portionSize'] (LocalMeal has neither notes nor
aiAnalysis on the schema; portionSize is a short user label with the
same sensitivity as description)
- documents: was ['title', 'content', 'body'] → now
['title', 'content'] (LocalDocument uses content, no body column)
- links: was ['title', 'description', 'targetUrl'] → now
['title', 'description']. originalUrl STAYS PLAINTEXT — the
public redirect handler resolves shortCode → originalUrl on every
click; encrypting it would force the redirect path to do an async
decrypt before issuing the 302
- questions: was ['title', 'body', 'notes'] → now
['title', 'description'] (LocalQuestion uses description)
- answers: was ['body'] → now ['content'] (LocalAnswer uses content)
All five tables flipped to enabled:true.
Write sites wrapped
-------------------
Each call site builds the row/diff as a typed object, runs
encryptRecord on it, then calls table.add / table.update:
- questions/views/DetailView.svelte (saveField)
- questions/[id]/+page.svelte (saveEdit + answer.add)
- questions/new/+page.svelte (initial create)
- uload/+page.svelte (createLink + saveEdit)
- uload/views/DetailView.svelte (saveField)
- context/documents/+page.svelte (handleCreateDocument)
- context/documents/[id]/+page.svelte (handleSave with encrypted diff)
- context/spaces/[id]/+page.svelte (handleCreateDocument)
- nutriphi/add/+page.svelte (handleSubmit)
Pure metadata writes (toggle pinned, toggle isActive, soft-delete via
deletedAt) are intentionally NOT wrapped — they touch zero encrypted
fields so encryptRecord would be a no-op anyway.
Read sites decrypted
--------------------
- questions/queries.ts: useAllQuestions, useAnswersByQuestion
- questions/views/DetailView.svelte (liveQuery clone)
- questions/ListView.svelte (Workbench)
- uload/queries.ts: allLinks$, useAllLinks, useLinkById
- uload/views/DetailView.svelte (liveQuery clone)
- uload/ListView.svelte
- uload/settings/+page.svelte (decrypts before serializing the
JSON export — otherwise the user would download ciphertext)
- context/queries.ts: useAllDocuments, useSpaceDocuments
- context/ListView.svelte
- cross-app-queries.useRecentDocuments (dashboard widget)
- nutriphi/queries.ts: useAllMeals
- nutriphi/ListView.svelte
The cards/dashboard widget for nutrition only reads m.nutrition (the
plaintext numeric breakdown), so it stays untouched. nutriphi/history
benefits transparently because it consumes useAllMeals which now
decrypts.
Why
---
Closes the second-tier plaintext gaps. The five tables flipped here
were on the registry from day one but stuck behind enabled:false
because no central store existed to hook into. Phase 7.2 takes the
pragmatic approach of wrapping at each call site rather than blocking
on a store extraction refactor — same end result for security, much
smaller diff. A future store consolidation pass can collapse the
duplication without changing the encryption surface.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
c875b4e966 |
feat(crypto): phase 7.1 — encrypt timeBlocks-coupled tasks + calendar events
Flips three coordinated registry entries to enabled:true at once:
- tasks: title, description, subtasks, metadata
- events (calendar): title, description, location
- timeBlocks: title, description (NEW entry)
These three tables have to move together because the consumer modules
(todo, calendar) denormalize their title/description into a TimeBlock
for cheap calendar rendering. Encrypting only the source records would
still leak the same fields through the timeBlocks hub. Indexed columns
(startDate, endDate, kind, type, sourceModule/sourceId, parentBlockId,
recurrenceDate, isLive, isCompleted, dueDate, priority) all stay
plaintext — the calendar query layer needs them for range scans.
Service layer
-------------
- time-blocks/service.ts: createBlock + updateBlock now route through
encryptRecord before the Dexie write. startFromScheduled decrypts the
scheduled block first so the new logged block carries plaintext
forward instead of an already-encrypted blob (encryptRecord is
idempotent so this is also defence-in-depth). New decryptBlock helper
for callers that need plaintext outside a liveQuery.
- todo/stores/tasks.svelte.ts: createTask snapshots the plaintext task
before encryptRecord mutates it, returns the snapshot to the UI.
updateTask decrypts the existing row before forwarding task.title as
a fallback into updateBlock (would otherwise leak ciphertext to the
linked TimeBlock). updateLabels + updateSubtasks decrypt-merge-encrypt
so structured fields don't get spliced into a ciphertext blob.
- calendar/stores/events.svelte.ts: encryptRecord wrapped around all
four event-write paths (create, update, updateSingleInstance,
updateAllFuture).
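The decrypt-merge-encrypt step for structured fields (updateLabels / updateSubtasks) can be sketched as follows; the function and helper names are stand-ins, not the real store API, and the real record helpers are async:

```typescript
// Without the decrypt step, the merge would splice new subtasks into a
// ciphertext blob instead of into the plaintext array.
type Subtask = { id: string; title: string; done: boolean };

function updateSubtasksSketch(
  stored: string,                          // ciphertext blob on disk
  patch: Subtask[],
  decrypt: (blob: string) => Subtask[],    // stand-ins for the record
  encrypt: (plain: Subtask[]) => string,   // helpers (sync for brevity)
): string {
  const current = decrypt(stored);         // 1. back to plaintext structure
  const merged = [...current, ...patch];   // 2. merge on plaintext
  return encrypt(merged);                  // 3. re-encrypt the result
}
```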
Read paths
----------
Every liveQuery / one-shot read that surfaces title/description/
location through the UI now decrypts after the plaintext-metadata
filter:
- time-blocks/queries.ts: useAllTimeBlocks, timeBlocksInRange$,
timeBlocksBySource$, useLiveTimeBlock
- todo/queries.ts: useAllTasks
- calendar/queries.ts: useAllCalendarItems (decrypts both the blocks
and the joined events)
- cross-app-queries.ts: useOpenTasks, useTodayTasks, useUpcomingTasks,
useUpcomingEvents
- dashboard widgets: DayTimelineWidget, ActivityFeedWidget,
TasksTodayWidget, UpcomingEventsWidget
- search providers: todo + calendar (substring scoring needs
plaintext)
- quick-input adapters: todo + calendar (search-as-you-type)
- calendar/components/ConflictWarning, CalendarHeader (iCal export
embeds title in the file)
- calendar/views/DetailView, todo/views/DetailView (inline editor)
- api/services/qr-export (the QR snapshot would otherwise ship
ciphertext)
- triggers/suggestions (cross-matches habit titles against task /
event titles)
- todo/reminder-source (notification body uses task title)
Habits is implicitly covered: it only writes through createBlock /
updateBlock and only reads block.startDate from the timeBlock side, so
no per-store changes were needed for habits to participate.
Why
---
This closes the last big plaintext gap on the dashboard. tasks +
events + the timeBlocks hub were the highest-value targets after chat
+ contacts because they're the surfaces a casual observer of an
unlocked DB would scan first ("what's this person doing today?"). With
Phase 7.1, the answer to that query is opaque without the master key.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
4bdf4238ce |
docs(mana/web): roundup data layer audit through encryption phase 6
Updates DATA_LAYER_AUDIT.md to reflect everything that landed since
the last refresh (which stopped at Sprint 4). The doc is now the
authoritative status surface for both audit-sprint and encryption-
sprint progress.
What's new in the doc:
Status table (Section 0)
Adds the missing post-Sprint 4 work and the full encryption phase
table:
- Sprint 4+ Listeners (
|
||
|
|
28395b313d |
docs: GPU tunnel setup, STT env wiring, and 2026-04-07 postmortem
Three docs updates landing the institutional knowledge from today's
Memoro voice recording deploy:
- docs/MAC_MINI_SERVER.md: architecture diagram updated to show the
two-tunnel setup (cloudflared on the Mac Mini for *.mana.how
except gpu-*, plus a separate cloudflared running as a Windows
Service on the GPU box for gpu-*.mana.how). New "GPU Tunnel
(mana-gpu-server)" section explains how to add hostnames in the
Cloudflare dashboard, the standard 502 debug ladder (DNS misroute,
service stopped, scheduled task crashed, missing public hostname),
and how the API key flows from the Windows .env through Mac Mini
.env to the mana-web container.
- docs/ENVIRONMENT_VARIABLES.md: STT section updated to reflect that
MANA_STT_URL/API_KEY are now wired into the mana-web container via
docker-compose.macmini.yml (committed in
|
||
|
|
6b8e2c7176 |
feat(mana/web): encryption phase 6.2/6.3 — settings page + onboarding banner
Two user-facing surfaces for the encryption pipeline that's been
running invisibly since Phase 4. Closes the loop on "we encrypt
your data" by making the claim concrete, verifiable, and rotatable.
vault-instance.ts (new)
Lazy-singleton wrapper around createVaultClient. The root layout
was holding a private vault client reference; the settings page
needs the same instance to call rotate() and read state.
getVaultClient() builds it on first call from authStore +
getManaAuthUrl(), reuses it forever after. Phase 3's
setKeyProvider/getActiveKey wiring means the rest of the data
layer doesn't need to know about the singleton at all — only
callers that want to drive lock/unlock/rotate explicitly do.
+layout.svelte and the new settings/security page both call
getVaultClient() — the underlying MemoryKeyProvider is shared
via setKeyProvider, so an unlock from either surface immediately
reflects in both.
routes/(app)/settings/security/+page.svelte (new)
Surface for the encryption vault state. Four sections:
1. STATUS card with a coloured badge:
- 🔒 Verschlüsselt (green) when unlocked
- 🔓 Gesperrt (amber) when locked, plus a "Schlüssel jetzt
laden" button that calls vaultClient.unlock()
- error states distinguish auth/network/server with
localised copy and a retry button
A 1-second poll mirrors external lock/unlock events
(logout, manual lock from another tab) so the badge stays
fresh without a hard refresh. Disposed on unmount.
2. ENCRYPTED FIELDS list — derived from the registry:
Object.entries(ENCRYPTION_REGISTRY).filter(enabled).map(...)
Renders one row per table with the field allowlist visible
in monospace, plus a count summary at the top. The list is
always honest: if a registry entry is enabled:false (Phase 7
targets, server-pushed tables, etc.), it does not appear.
3. ROTATE card (danger styling):
Two-step confirm before mutating. Calls vaultClient.rotate()
which the existing Phase 3 wire already routes through
/api/v1/me/encryption-vault/rotate. Toast on success/failure.
Explicitly documents that the old MK is GONE and current
data is NOT auto-re-encrypted — the user accepts that risk.
4. HONEST DISCLOSURE section: lists what Mana CAN'T see
(encrypted blobs), what Mana COULD technically see
(the wrapped MK if a hosting employee actively reaches for
the KEK), and what's structurally visible (counts,
timestamps, relationships). Reads better than any policy
page because it's anchored in the actual data layout.
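The registry-derived list in section 2 reduces to a filter-and-map; a sketch under the assumption that registry entries carry `enabled` plus a field allowlist:

```typescript
// "Always honest" by construction: the list is derived straight from
// the registry, so an enabled:false entry can never appear in the UI.
type Entry = { enabled: boolean; fields: string[] };

function encryptedTableRows(
  registry: Record<string, Entry>,
): Array<{ table: string; fields: string[] }> {
  return Object.entries(registry)
    .filter(([, e]) => e.enabled)
    .map(([table, e]) => ({ table, fields: e.fields }));
}
```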
EncryptionIntroBanner.svelte (new)
One-time onboarding banner that fires on the first vault unlock
ever on a given device. Uses localStorage('mana-encryption-intro-
dismissed') as the persistent flag. Shows a green-bordered card
bottom-centre explaining at-rest encryption in three sentences,
with a "Mehr erfahren →" link to /settings/security and an X
dismiss button.
Why a banner instead of a toast?
- Toasts disappear after 3s; a privacy claim deserves longer
attention.
- The banner has room for a learn-more link; toasts don't.
- Dismissing it is an explicit user action, which matches the
"you understand and accept" social contract.
Polls vault state every 500ms for up to 30s after mount so it
fires even if the unlock happens asynchronously after the layout
finishes rendering. Auto-clears the timer once it shows or after
the 30s window. SSR-safe: localStorage access is guarded.
Mounted globally in the root layout next to the existing
SuggestionToast, OfflineIndicator, PwaUpdatePrompt.
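A minimal sketch of the banner's gating and poll loop; the localStorage key matches the commit, while the function names and exact conditions are illustrative:

```typescript
// SSR-safe gate: no storage means no banner. Key name from the commit.
const DISMISS_KEY = 'mana-encryption-intro-dismissed';
type StorageLike = { getItem(key: string): string | null };
const ls = (globalThis as { localStorage?: StorageLike }).localStorage;

function shouldOfferBanner(isUnlocked: boolean): boolean {
  if (!ls) return false;                              // SSR / no storage
  if (ls.getItem(DISMISS_KEY) !== null) return false; // already dismissed
  return isUnlocked;
}

// Poll every 500ms for up to 30s after mount; returns a disposer for
// unmount so the timer never leaks.
function startBannerPoll(check: () => boolean, show: () => void): () => void {
  const started = Date.now();
  const timer = setInterval(() => {
    if (check()) { show(); clearInterval(timer); }
    else if (Date.now() - started >= 30_000) clearInterval(timer); // give up
  }, 500);
  return () => clearInterval(timer);
}
```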
Layout integration
routes/+layout.svelte:
- Drops the inline createVaultClient + getManaAuthUrl import
in favour of getVaultClient() — single source of truth.
- <EncryptionIntroBanner /> mounted alongside the other
global UI elements.
Verified: 20 test files, 262/262 tests passing. Pre-existing
TS error in src/routes/(app)/settings/+page.svelte:338
(getSecurityEvents on authStore) is unrelated parallel drift.
Encryption pipeline status: Phase 1-6 complete.
- 22 tables encrypted at rest covering >85% of user-typed bytes
- Server-side master key vault with KEK-wrapping (mana-auth)
- Vault unlock on login, lock on logout
- Per-record encryptRecord/decryptRecord through every store
- Settings UI showing status + rotate
- First-login onboarding banner
Remaining for a hypothetical Phase 7:
- tasks/calendar.events/habits — title leakage via timeBlocks
- picture/storage/music — server-pushed, needs API encryption
- nutriphi/uload/context.documents/questions — store extraction
needed before they can flow through encryptRecord
- Recovery code opt-in for true zero-knowledge users (server
can't even technically decrypt)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
de33ed8687 |
fix(mana/web): disable prerender on /offline (FIXME)
The SvelteKit prerender worker throws "Error: 500 /offline" with no
usable stack trace, blocking the production build. Suspected cause: a
module-level side-effect on the shared layout that fails when no
`window` is available — likely from one of the new vault-client or
data-layer-listeners imports that landed in the encryption phase 4-6
sprints.
SSR'ing /offline at request time is harmless — it's just a static
"you're offline" message — so this is a safe workaround that unblocks
the deploy. The real fix is to bisect which import on the offline
codepath throws on the bare server and add a `typeof window` guard or
move it to onMount. Without this, the unified mana-web image cannot be
rebuilt.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
5d4123d2b0 |
fix(mana/web): commit module-registry + module.config.ts files (build-critical)
These files have been sitting untracked in working trees on multiple
machines since the unified module-registry refactor. database.ts
imports from $lib/data/module-registry but the file itself was never
git-add'd, so the production build crashes on any clean clone with:
Could not resolve "./module-registry" from "src/lib/data/database.ts"
Discovered today during the first deploy of the Memoro recording
pipeline: pulling onto the Mac Mini (which had its own untracked copies
of these files in a stash) revealed that origin/main has been silently
broken for clean builds. Fixed by committing the canonical versions:
- apps/mana/apps/web/src/lib/data/module-registry.ts
- apps/mana/apps/web/src/lib/data/module-registry.test.ts
- apps/mana/apps/web/src/lib/modules/{31 modules}/module.config.ts
The events module already had its module.config.ts committed in
|
||
|
|
42bd2a3a04 |
chore(deploy): wire MANA_STT_URL/API_KEY into mana-web container
The unified mana-web container needs MANA_STT_URL + MANA_STT_API_KEY at runtime so its server-side proxies (/api/v1/memoro/transcribe and /api/v1/dreams/transcribe) can reach mana-stt with the right credentials. The browser never holds the key. URL points at the public tunnel (https://gpu-stt.mana.how → Cloudflare tunnel mana-gpu-server → Windows GPU box localhost:3020) so the resolver works regardless of where the container runs. The API key is sourced from the Mac Mini .env, which is gitignored. Without this, the proxies short-circuit with HTTP 503 "mana-stt is not configured" — observed today on first deploy of the recording pipeline. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
||
|
|
73f294b298 |
feat(mana/web): encryption phase 6.1 — cards, presi, inventar, planta
Four more modules join the encrypted-at-rest path. Tables flipped:
- cards.cards front + back (no `notes` column on LocalCard)
- cards.cardDecks name + description (schema uses `name` not `title`)
- presi.presiDecks title + description
- presi.slides content (LocalSlide has only the SlideContent object —
  no separate `notes`. The JSON-stringify in wrapValue handles
  nested-object content cleanly)
- inventar.invItems description (only — `name` is in the schema index
  used by where()/sortBy queries, and `notes` is an array of
  {id, content, createdAt} that addNote/deleteNote splice in place;
  encrypting either would force per-mutation decrypt + re-encrypt of
  the whole array. Phase 7 concern.)
- planta.plants name + careNotes + temperature + soilType (`name` is
  NOT indexed for plants — the schema only indexes
  id/isActive/healthStatus, so it's safe to encrypt unlike
  inventar/dreamSymbols)
Per-module mutations
Each store now follows the established Phase 4/5 pattern:
- createX: build LocalRecord, snapshot via toX() for the optimistic
return, encryptRecord, then table.add
- updateX: build diff, encryptRecord on the diff, then table.update
- The Sprint 1 atomic-cascade deleteDeck (cards + presi) is unchanged
because deletes only touch plaintext deletedAt/updatedAt fields.
planta.update() reads the row back after the write to return a Plant
to its caller; that read goes through decryptRecord because the
raw row is now encrypted on disk.
Per-module queries
useAllDecks / useDeck / useCardsByDeck (cards)
useAllDecks / useDeck / useDeckSlides (presi)
useAllItems (inventar)
useAllPlants (planta)
All filter on plaintext metadata first, then decryptRecords on the
visible set.
cross-app-queries dashboard widgets
- useRecentDecks (presi) decrypts the title/description before the
dashboard widget renders the deck name
- useCardsProgress decrypts the deck name list — counts continue to
work on plaintext fields
Skipped intentionally
- tasks / calendar.events / habits — title is duplicated to the
cross-module timeBlocks table. Encrypting only the task copy
would still leak the title via the timeBlock. Needs a coordinated
timeBlocks encryption pass (Phase 6.1.5).
- picture.images / storage.files / music.songs — records are
server-pushed (image generation, file uploads, library imports).
Client-side encryptRecord can't help; needs the API service to
encrypt before pushing, or a sync-time wrap step. Documented as
a Phase 7 concern.
- nutriphi.meals / uload.links / context.documents / questions /
answers — write directly from views, no store. Need a store
extraction first.
Verified: 20 test files, 262/262 tests passing. Pre-existing TS
errors in context/index.ts, picture/images.svelte.ts, planta/
quick-input-adapter.ts and questions/index.ts are unrelated parallel
refactor drift.
Phase 6.2 next: settings/security UI showing vault status, encrypted-
table list, manual rotate button.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
b2bddfefab |
docs(events): roadmap of remaining Phase 2 work + tech debt
Drops a ROADMAP.md inside the module so the next session has a single place to look. Lists what's shipped, the remaining feature ideas (iCal, per-guest tokens, recurring, websockets, email invites, reminders, capacity waitlist), and the tech-debt residue. |
||
|
|
6a60e22a31 |
feat(events): bring list (wer bringt was?) — Phase 2
Add an "eventItems" mini-collection attached to each social event so
hosts can track what each guest is bringing, and so public visitors on
the share-link page can claim an item without an account.
Local-first side
- New eventItems table (Dexie v11), module config update for sync.
- LocalEventItem type + EventItem domain type, useEventItems query.
- eventItemsStore: addItem / updateItem / toggleDone / assign /
  deleteItem. Every mutation pushes the full list to the server
  snapshot via eventsStore.syncItems if the event is published.
- BringListEditor component on the host DetailView with
  assign-to-guest dropdown, quantity, and done-checkbox.
- eventsStore.syncItems + a syncItems call in publishEvent so the
  public page sees pre-existing items as soon as the event ships.
Server side
- New event_items_published table (FK cascade from events_published so
  unpublishing wipes the bring list along with the snapshot).
- Host endpoints PUT/GET /events/:eventId/items: full-replace upsert
  that preserves any existing claimed_by_name across host edits, max
  100 items, ownership check.
- Public POST /rsvp/:token/items/:itemId/claim: name-only claim, 1×
  per item (first write wins), shares the per-token hourly rate bucket
  with RSVP submissions to keep the abuse surface uniform.
- GET /rsvp/:token now also returns the bring list (sorted) so the
  public page renders in a single round-trip.
Public RSVP page
- Renders the bring list with claim buttons; clicking prompts for a
  name and POSTs the claim, then optimistically updates the UI.
- New bring-list i18n keys for all five locales (de/en/it/fr/es).
Tests
- 15 new server tests covering host PUT/GET (insert / update / prune /
  ownership / claimed-name preservation / cascade), GET /rsvp item
  exposure, and POST /claim (success / double-claim / cross-token /
  cancelled / validation). 50 server tests total, all green.
- E2E spec scoped to .guest-editor where the new BringListEditor
  introduced a duplicate "Hinzufügen" button label.
|
||
|
|
af92720a62 |
feat(mana/web): encryption phase 5 — rollout to chat/dreams/memoro/contacts/cycles/finance
Six modules join the notes pilot (Phase 4) on the encrypted-at-rest path.
Every user-typed text and PII field listed below is now wrapped via
AES-GCM-256 with the per-user master key before any write hits Dexie,
and decrypted on every liveQuery read coming back through the public
queries module.
Tables flipped to enabled:true in the registry
- chat.messages messageText
- chat.conversations title
- chat.chatTemplates name + description + systemPrompt + initialQuestion
- dreams.dreams title + content + transcript + interpretation +
  aiInterpretation + location
- dreams.dreamSymbols meaning (name stays plaintext — used as indexed
  lookup key in touchSymbols / updateSymbol via where('name'))
- memoro.memos title + intro + transcript
- memoro.memories title + content
- contacts.contacts firstName + lastName + email + phone + mobile +
  birthday + street + city + postalCode + country + notes + website +
  linkedin + twitter + instagram + github
- cycles.cycles notes
- cycles.cycleDayLogs notes + mood (symptoms stays plaintext —
  standardised label array consumed by symptomsStore.touchSymptoms
  via Set diffs in dayLogsStore.logDay)
- finance.transactions description + note (the schema uses `note`
  singular, not `notes` or `merchant` as my earlier draft had it)
Tables intentionally left disabled
- questions / answers — direct db.table().update() call sites in
DetailView.svelte instead of going through a store. Need a store
extraction first; registry entry stays in place so the flip is a
one-line change once the store exists.
- tasks, events, calendar.events, plants, meals, slides, presiDecks,
cards, links, etc. — fall through to a future Phase 6 once the
chat/dreams/memoro/contacts pilots are validated in real use.
Per-module changes
Each store now follows the same pattern the notes pilot established:
1. Build the LocalRecord with plaintext fields
2. Snapshot it via toX() for the optimistic UI return value
3. await encryptRecord(tableName, record) // mutates in place
4. await table.add(record) // ciphertext lands on disk
For updates the diff is encrypted in place before the update() call
so partial updates only encrypt the modified fields.
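Steps 1-4 above as a sketch; encryptRecord mutates in place, which is why the snapshot must be taken before step 3 (the table name, record type, and id generation here are illustrative):

```typescript
// The ordering matters: encryptRecord mutates `record` in place, so the
// optimistic-UI return value has to be snapshotted first.
type LocalDream = { id: string; title: string; content: string; createdAt: number };

async function createDreamSketch(
  plain: Omit<LocalDream, 'id'>,
  encryptRecord: (table: string, rec: object) => Promise<void>, // stand-in
  add: (rec: LocalDream) => Promise<void>,                      // stand-in for table.add
): Promise<LocalDream> {
  const record: LocalDream = { id: Math.random().toString(36).slice(2), ...plain }; // 1. build
  const snapshot = { ...record };            // 2. plaintext copy for the optimistic UI
  await encryptRecord('dreams', record);     // 3. mutates fields to ciphertext in place
  await add(record);                         // 4. ciphertext lands on disk
  return snapshot;
}
```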
The transcribeBlob flows in dreams + memoro decrypt the existing
record first (to read the user-typed `content`), then build a
diff and re-encrypt it. Same for contactsStore.ensureSelfContact
which compares against decrypted-existing values to decide whether
the profile-sync needs an update.
Per-module query changes
Each public liveQuery now filters on plaintext metadata (deletedAt,
isArchived, etc.) FIRST, then runs decryptRecords on the visible
set, then maps to the public type. Cost stays bounded by what the
view actually renders, not the total table size.
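The filter-first ordering can be sketched as follows (sync stand-in for the async decryptRecords helper, illustrative row shape):

```typescript
// Filter on plaintext metadata first so decryption cost is bounded by
// the visible set, not the table size.
type Row = { deletedAt: number | null; isArchived: boolean; title: string };

function visibleRowsSketch(
  all: Row[],
  decryptRecords: (rows: Row[]) => Row[], // the only expensive step
): Row[] {
  const visible = all.filter((r) => r.deletedAt === null && !r.isArchived); // cheap
  return decryptRecords(visible); // sees only what the view will render
}
```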
cross-app-queries.ts useFavoriteContacts decrypts firstName before
the localeCompare sort.
Test fixes
- aes.test.ts: the "registry returns null for disabled tables"
assertion now picks tasks + events as the disabled examples
(messages + contacts both flipped on in this commit).
- cycles.integration.test.ts:
1. beforeEach installs a fresh MemoryKeyProvider with a real
Web Crypto key so dayLogsStore.logDay can encrypt mood/notes
2. The "no duplicate" upsert test decrypts the raw rows it reads
directly from the table before asserting on the mood field
- module-registry.test.ts (drive-by, unrelated): adds eventItems
to the events appId snapshot to match the parallel module-registry
refactor.
Verified: 20 test files, 262/262 tests passing.
Phase 6 will roll out to the remaining tables (tasks, events, plants,
meals, slides, etc.) and finally light up the settings/security UI
(lock state, manual rotate, recovery code opt-in).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
3eabbc5e53 |
i18n(events): RSVP page in it/fr/es + extract e2e helper
- strings.ts: add Italian, French, Spanish dictionaries (≈25 keys
  each) and widen Lang to the full DE/EN/IT/FR/ES set.
- +page.server.ts: pickLang now matches any of the five supported
  locales from Accept-Language; SSR error messages localised the same
  way.
- e2e/helpers.ts: extract the dismissWelcomeModal helper out of
  events.spec.ts so future module specs can reuse it without
  duplicating the locale-agnostic dialog locator.
|
||
|
|
897256c985 |
test(mana-events): 35 server tests covering routes + sweeper
Add bun:test integration suite that exercises every public and host
endpoint plus the rate-bucket sweeper against a real Postgres. The
Hono app factory was extracted from index.ts into app.ts so tests can
build their own instance with a header-based auth mock instead of
spinning up mana-auth + JWKS.
Coverage:
- health route smoke
- public RSVP: snapshot fetch (incl. 404, cancelled, summary privacy),
  submit, validation (name, status, email, plus-ones, cancelled),
  upsert dedup (incl. null/missing email parity), summary aggregation
  across yes/no/maybe + plus-ones, rate-limit cap (5/h), absolute
  per-token cap (20)
- host events: publish (auth, idempotent token reuse, ownership),
  snapshot update (partial, ownership, 404), delete (cascade FK to
  rsvps + buckets, ownership, idempotent), get rsvps (ownership)
- sweeper: removes >2h-old buckets, keeps fresh ones, no-op on empty
Mock auth lives in a small helper that injects an X-Test-User header
into a fake middleware, so the same createApp() factory powers both
production (real jwtAuth) and tests (header mock).
|
||
|
|
bed08a1aa6 |
feat(mana/web): encryption phase 4 — notes pilot live
First module with at-rest encryption flipped on. The notes table's
title + content are now encrypted with AES-GCM-256 before any write
hits Dexie, decrypted on every read coming back through liveQuery,
and travel as opaque ciphertext through the sync wire (pending
changes, server push, applyServerChanges, the lot).
What changes for the user
- Nothing visible. Optimistic UI render still uses the plaintext
snapshot returned by createNote(). Edits look identical to the
old Phase 3 behaviour. The difference is invisible until you
crack open DevTools → Application → IndexedDB → mana → notes,
where you'll see ciphertext instead of "Buy milk".
What changes on disk
- notes.title and notes.content store ciphertext blobs
(`enc:1:<iv-b64>.<ct-b64>`)
- All other columns (id, color, isPinned, isArchived, createdAt,
updatedAt, deletedAt, userId, __fieldTimestamps) stay plaintext
so liveQuery filtering, sorting, and Field-Level LWW continue to
work without changes.
- _pendingChanges.data carries the same ciphertext blobs — server
receives opaque values, never plaintext.
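A sketch of just the envelope described above; the real iv/ct bytes come from AES-GCM-256 via Web Crypto, and only the `enc:1:<iv-b64>.<ct-b64>` framing is modelled here:

```typescript
// Envelope-only model of the on-disk blob format. Parsing returns null
// for plaintext or malformed values, mirroring the pass-through
// behaviour sync relies on.
interface EncBlob { iv: string; ct: string } // both base64 strings

function serializeBlob(b: EncBlob): string {
  return `enc:1:${b.iv}.${b.ct}`;
}

function parseBlob(s: string): EncBlob | null {
  if (!s.startsWith('enc:1:')) return null;   // plaintext value
  const dot = s.indexOf('.', 'enc:1:'.length);
  if (dot === -1) return null;                // malformed blob
  return { iv: s.slice('enc:1:'.length, dot), ct: s.slice(dot + 1) };
}
```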
Files
registry.ts
notes flipped to enabled:true with the corrected field list
['title', 'content'] (the schema has no 'body' column).
aes.test.ts
Existing assertion that "Phase 1 has no encrypted tables" is
rewritten as "notes is enabled in Phase 4" so the registry flip
doesn't break the foundation suite.
record-helpers.ts
encryptRecord/decryptRecord/decryptRecords loosen the generic
constraint from `T extends Record<string, unknown>` to
`T extends object`. Domain types like LocalNote work as direct
arguments without an `as Record<string, unknown>` cast at every
call site. Internal field reads/writes go through a sealed
Record-shaped view.
notes/stores/notes.svelte.ts
createNote: snapshots the plaintext for the optimistic return
value, then encryptRecord('notes', record) before noteTable.add.
updateNote: encrypts the diff in place; non-encrypted fields
(color, isPinned, isArchived) pass through untouched.
togglePin / archiveNote / deleteNote: untouched — they only
update plaintext columns.
notes/queries.ts
useAllNotes: filter on plaintext metadata first (deletedAt,
isArchived) so the decrypt workload is bounded by the visible
set, not the whole table. Then decryptRecords across what's
left, then map+sort.
useNote(id): new helper for detail views.
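The filter-first ordering in useAllNotes can be sketched as a pure function (the row type is a stand-in for LocalNote):

```typescript
// Sketch of the filter-then-decrypt ordering. Types are illustrative.
interface NoteRow {
  title: string;
  deletedAt: string | null;
  isArchived: boolean;
}

async function visibleNotes(
  rows: NoteRow[],
  decryptRecords: (table: string, rows: NoteRow[]) => Promise<NoteRow[]>,
): Promise<NoteRow[]> {
  // Filter on plaintext metadata first: the decrypt workload is bounded
  // by the visible set, not the whole table.
  const live = rows.filter((r) => r.deletedAt === null && !r.isArchived);
  return decryptRecords("notes", live);
}
```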
notes-encryption.test.ts (new — 8 cases)
End-to-end against fake-indexeddb with a real Web Crypto master
key in MemoryKeyProvider:
1. Title + content land as ciphertext on disk
2. Structural fields stay plaintext on disk
3. updateNote re-encrypts modified content but leaves flags
4. togglePin / archiveNote produce byte-identical title blobs
(i.e. no spurious re-encryption)
5. _pendingChanges.data carries ciphertext + plaintext metadata
6. Wrong-key decrypt fails closed (returns blobs, not garbage)
7. Locked vault refuses new writes with VaultLockedError
8. Locked vault still serves blobs without crashing on read
Test tally: 4 crypto-related test files, 64/64 passing
(31 AES + 12 record-helpers + 12 vault-client + 8 notes E2E + 1 misc).
Full mana/web suite: 20 files, 262/262 tests passing.
Status of the encryption pipeline:
Phase 1 ✅ Foundation (
|
||
|
|
640242500e |
fix(events): production wiring + polling resilience (quick wins)
Five small follow-ups on Phase 1b:
- docker-compose.macmini.yml: add the mana-events container with the
  same shape as mana-credits, expose port 3065, add a Traefik route
  for events.mana.how, and inject PUBLIC_MANA_EVENTS_URL into the
  mana-web container so the SvelteKit SSR + browser both reach it.
- mana-events: background sweeper that deletes rsvp_rate_buckets rows
  older than 2h every hour. Without it, long-published events
  accumulate one row per traffic-hour forever (FK cascade only fires
  on snapshot delete).
- PublicRsvpList: track consecutiveFailures and only show the error
  banner after two failures in a row, so a single mid-poll network
  hiccup doesn't flash a 30s error the user can't act on.
- apps/mana/apps/web: declare postgres as a devDep (already imported
  by the e2e spec via pnpm hoisting, now explicit).
|
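The two-strikes banner logic in PublicRsvpList reduces to a small state machine. A minimal sketch, assuming a counter-based shape (the class name is hypothetical):

```typescript
// Sketch of the consecutiveFailures tracking: one mid-poll hiccup stays
// silent; two failures in a row show the error banner.
class PollHealth {
  private consecutiveFailures = 0;

  recordSuccess(): void {
    this.consecutiveFailures = 0;
  }

  recordFailure(): void {
    this.consecutiveFailures += 1;
  }

  get showErrorBanner(): boolean {
    return this.consecutiveFailures >= 2;
  }
}
```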
||
|
|
354cbcb176 |
feat(mana/web): encryption phase 3 — vault client + record helpers + layout wire-up
Adds the client-side wire-up that lets browsers fetch their master key
from the mana-auth server vault and use it to encrypt/decrypt configured
record fields. Still a no-op at the user-visible level until Phase 4
flips registry entries to enabled:true on a per-table basis.
vault-client.ts
Browser HTTP client for the three Phase 2 endpoints. Built around a
factory that takes (authUrl, getToken) and returns { unlock, lock,
refetch, rotate, getState }. Reuses the active MemoryKeyProvider if
one is already installed, otherwise registers a fresh one.
unlock() flow:
1. Short-circuits if already unlocked.
2. GET /api/v1/me/encryption-vault/key with Bearer token.
3. On 404 + code:'VAULT_NOT_INITIALISED', auto-fires POST /init so
the user is bootstrapped on first login per device.
4. Imports the returned base64 bytes via importMasterKey() into a
non-extractable CryptoKey, pushes it into MemoryKeyProvider.
5. Zeroes the raw byte buffer once imported (best-effort heap hygiene).
Network layer: 3-attempt retry loop with full-jitter exponential
backoff (500ms→8s), retries only on 0/408/429/5xx. 4xx surfaces
immediately so auth/permission errors don't stall the UI for seconds.
Error categorisation: 401/403→auth, network→network, 5xx→server,
rest→unknown. Returned as VaultUnlockState so callers can render
intent ("please re-login" vs "we're trying again" vs "the server
is having a moment").
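The retry policy above can be sketched as two pure functions — names are illustrative, but the numbers (3 attempts, 500ms→8s, retry on 0/408/429/5xx) come from the commit:

```typescript
// Sketch of the retry policy: full-jitter exponential backoff 500ms→8s,
// retrying only on statuses where a retry can plausibly help.
const BASE_MS = 500;
const CAP_MS = 8000;

function isRetryableStatus(status: number): boolean {
  // 0 = network-level failure (fetch threw). Other 4xx surface
  // immediately so auth/permission errors don't stall the UI.
  return status === 0 || status === 408 || status === 429 || status >= 500;
}

function fullJitterDelay(attempt: number): number {
  // Full jitter: uniform over [0, min(cap, base * 2^attempt)].
  const ceiling = Math.min(CAP_MS, BASE_MS * 2 ** attempt);
  return Math.random() * ceiling;
}
```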
record-helpers.ts
encryptRecord(tableName, record):
- Looks up the registry, returns unchanged if the table is not
configured or registry entry is disabled.
- Builds a work list of fields that need encryption (skipping
null/undefined and already-encrypted blobs — the latter makes
the helper idempotent on a re-emit from liveQuery).
- Throws VaultLockedError on the first call that needs the key
  but finds the vault locked. Module stores let it bubble; the
  UI surfaces a "you need to unlock" toast.
decryptRecord(tableName, record):
- Mirror of encryptRecord. Locked-vault behaviour is to LEAVE the
blobs in place (rather than throw) so views can still render
structural fields and show a "🔒" placeholder where content
used to be.
- Per-field decrypt failure (corrupt blob, wrong key) is caught,
logged, and the field stays encrypted. The rest of the record
decrypts normally — one bad blob doesn't kill the whole read.
decryptRecords: array variant that skips null/undefined entries.
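The per-field error isolation in decryptRecord can be sketched as follows. This is a hypothetical reduction — the real helper consults the registry for the field list and uses the vault key:

```typescript
// Sketch of per-field decrypt isolation: a corrupt blob is logged and
// left in place; the rest of the record decrypts normally.
type Rec = Record<string, unknown>;

async function decryptFields(
  record: Rec,
  fields: string[],
  decryptValue: (blob: string) => Promise<string>,
): Promise<Rec> {
  const out: Rec = { ...record };
  for (const field of fields) {
    const value = out[field];
    // Skip null/undefined and anything that isn't an encrypted blob.
    if (typeof value !== "string" || !value.startsWith("enc:1:")) continue;
    try {
      out[field] = await decryptValue(value);
    } catch {
      // One bad blob doesn't kill the whole read: the field stays encrypted.
    }
  }
  return out;
}
```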
Layout integration (+layout.svelte)
- createVaultClient is constructed once at module init, reused
across all auth-state changes.
- The existing $effect on authStore.user gets a new branch:
- userId set + hasAnyEncryption() → vaultClient.unlock()
- userId cleared → vaultClient.lock()
- hasAnyEncryption() guards the network round-trip: while every
table is enabled:false (Phase 3 default), no fetch happens at all.
Phase 4 enables tables one by one and the unlock kicks in
automatically.
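The branch added to the auth $effect boils down to a three-way decision. A sketch as a pure function (the name is hypothetical; the real code lives inline in the effect):

```typescript
// Sketch of the unlock/lock decision made on each auth-state change.
type VaultAction = "unlock" | "lock" | "noop";

function vaultActionFor(userId: string | null, hasAnyEncryption: boolean): VaultAction {
  if (userId === null) return "lock";   // sign-out always locks
  if (!hasAnyEncryption) return "noop"; // every table disabled → no fetch at all
  return "unlock";
}
```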
Tests
- record-helpers.test.ts: 12 cases — encrypt skips non-listed fields,
null/undefined pass-through, idempotent on already-encrypted,
table-not-in-registry no-op, VaultLockedError on missing key,
decrypt roundtrip, locked-vault returns blobs unchanged, per-field
failure logged + others continue, JSON.stringify/parse roundtrip
survives the sync wire.
- vault-client.test.ts: 12 cases — happy path GET /key, idempotent
second unlock, 404 → auto /init, generic 404 does NOT trigger
/init, 401/403 → auth error, fetch throw → network error, no
token → auth error without network call, lock() clears key,
refetch() re-pulls, rotate() POSTs and installs.
Verified: 7 test files, 110/110 src/lib/data/ tests passing
(31 AES + 12 record-helpers + 12 vault-client + 20 sync + 6 activity
+ 19 recurrence + 10 misc helpers).
Phase 4 (next): pilot the notes module — flip its registry entry to
enabled:true, wrap the notes store add/update to call encryptRecord,
wrap the notes queries to call decryptRecord, add a settings page
showing lock state and a manual rotate button.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
c5aeaf5e7f |
feat(memoro): voice recording → mana-stt transcription pipeline
Adds end-to-end browser voice capture for the Memoro module, mirroring
the existing dreams pattern: MediaRecorder → SvelteKit server proxy →
mana-stt on the Windows GPU box via Cloudflare tunnel.
Recording UI lives in /memoro page header (mic button + live timer +
cancel + sticky-permission retry). Server proxy at
/api/v1/memoro/transcribe forwards the blob with the server-held
X-API-Key. memosStore.createFromVoice creates a placeholder memo with
processingStatus='processing' and fires transcribeBlob in the
background, which writes the transcript and flips status on completion
(or 'failed' with error in metadata).
Also corrects the mana-stt hostname across the repo: stt-api.mana.how
(which never existed in DNS) → gpu-stt.mana.how (the actual Cloudflare
tunnel route to the Windows GPU box). Adds an ENVIRONMENT_VARIABLES.md
section explaining how to obtain MANA_STT_API_KEY and where the tunnel
terminates. Adds tunnel health probes to the mac-mini health-check
script so we catch tunnel-side breakage in addition to LAN-side.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
4d9bf78f41 |
docs(cycles): add ROADMAP with future feature ideas
Consolidates all the "could still do" ideas from the initial design
sessions into a single roadmap document next to the module:
- Short-term quality-of-life polish (keyboard shortcuts, date picker,
  orphan symptom IDs, plural forms)
- Mid-term features (BBT chart, history page, pattern recognition,
  cycle notes panel, per-day detail page)
- Testing gaps (component tests, Playwright E2E, migration tests)
- Long-term production-readiness (notifications, memoro audio notes,
  PDF export, privacy mode with app-lock, mobile port)
- Initial ManaScore estimate and ecosystem health indicators
- Explicit non-goals and a recommended next-steps ordering
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
e9915428cb |
feat(mana-auth): encryption vault — phase 2 (server-side master key custody)
Adds the server side of the per-user encryption vault. Phase 1 shipped
the client foundation (no-op while every table is enabled:false). This
commit lets the client actually fetch a master key when Phase 3 flips
the registry switches.
Schema (Drizzle + raw SQL migration)
- auth.encryption_vaults: per-user wrapped MK + IV + format version +
kek_id stamp + created/rotated timestamps. PK = user_id, ON DELETE
CASCADE so account deletion wipes the vault.
- auth.encryption_vault_audit: append-only trail of init/fetch/rotate
actions with IP, user-agent, HTTP status, free-form context.
- sql/002_encryption_vaults.sql: idempotent CREATE TABLE + ENABLE +
FORCE row-level security with a `current_setting('app.current_user_id')`
policy on both tables. FORCE makes the policy apply to the table
owner too — no bypass via grants.
KEK loader (services/encryption-vault/kek.ts)
- Loads a 32-byte AES-256 KEK from the MANA_AUTH_KEK env var (base64).
- Production: missing or wrong-length input is fatal at boot.
- Development: 32-zero-byte fallback so contributors can run the
service without provisioning a secret. Logs a loud warning.
- wrapMasterKey / unwrapMasterKey use Web Crypto AES-GCM-256 over the
raw 32-byte MK with a fresh 12-byte IV per wrap. Returns base64
pair for storage.
- generateMasterKey + activeKekId helpers used by the service.
- Future migration to KMS / Vault: only loadKek() changes; the
kek_id stamp on each row tracks which KEK produced it.
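The wrap/unwrap shape described above — AES-GCM-256 over the raw 32-byte MK with a fresh 12-byte IV per wrap, returned as a base64 pair — can be sketched with Web Crypto. Function names mirror the commit; the bodies here are illustrative, not the service's actual code:

```typescript
import { webcrypto } from "node:crypto";

// Sketch: wrap the master key under the KEK with a fresh IV per call.
async function wrapMasterKey(kek: CryptoKey, mk: Uint8Array) {
  if (mk.length !== 32) throw new Error("master key must be 32 bytes");
  const iv = webcrypto.getRandomValues(new Uint8Array(12));
  const ct = await webcrypto.subtle.encrypt({ name: "AES-GCM", iv }, kek, mk);
  return {
    iv: Buffer.from(iv).toString("base64"),
    wrapped: Buffer.from(new Uint8Array(ct)).toString("base64"),
  };
}

// Sketch: reverse the wrap. GCM authentication makes tampered
// ciphertext or a wrong KEK fail closed with an exception.
async function unwrapMasterKey(kek: CryptoKey, ivB64: string, wrappedB64: string) {
  const mk = await webcrypto.subtle.decrypt(
    { name: "AES-GCM", iv: Buffer.from(ivB64, "base64") },
    kek,
    Buffer.from(wrappedB64, "base64"),
  );
  return new Uint8Array(mk);
}
```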
EncryptionVaultService (services/encryption-vault/index.ts)
- init(userId): idempotent — returns existing MK or mints a new one.
- getMasterKey(userId): unwraps the stored MK; throws VaultNotFoundError
on no-row so the route can return 404 cleanly.
- rotate(userId): mints fresh MK, replaces wrap. Caller is on the
hook for re-encryption — destructive by design.
- withUserScope(userId, fn): wraps every read/write in a Drizzle
transaction with set_config('app.current_user_id', userId, true)
so the RLS policy admits only the matching row. Empty userId is
rejected up-front.
- writeAudit() appends a row to encryption_vault_audit on every
action including failures, so probing attempts leave a trail.
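The withUserScope pattern can be sketched as below. The Tx/Db types are stand-ins for the real Drizzle handles; only the transaction-local set_config shape is taken from the commit:

```typescript
// Sketch: every vault read/write runs inside a transaction whose first
// statement scopes the RLS policy to the calling user.
type Tx = { execute: (sql: string, params: unknown[]) => Promise<void> };
type Db = { transaction: <R>(fn: (tx: Tx) => Promise<R>) => Promise<R> };

async function withUserScope<T>(db: Db, userId: string, fn: (tx: Tx) => Promise<T>): Promise<T> {
  if (!userId) throw new Error("empty userId rejected up-front");
  return db.transaction(async (tx) => {
    // `true` = transaction-local: the setting dies with the transaction,
    // so pooled connections never leak another user's scope.
    await tx.execute("SELECT set_config('app.current_user_id', $1, true)", [userId]);
    return fn(tx);
  });
}
```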
Routes (routes/encryption-vault.ts)
- POST /api/v1/me/encryption-vault/init — idempotent bootstrap
- GET /api/v1/me/encryption-vault/key — fetch the active MK
- POST /api/v1/me/encryption-vault/rotate — destructive rotation
- All return base64-encoded master key bytes plus formatVersion +
kekId. JWT-protected via the existing /api/v1/me/* middleware.
- readAuditContext() pulls X-Forwarded-For + User-Agent off the
request for the audit row.
Bootstrap (index.ts)
- loadKek() runs at top-level await before any route can fire so a
misconfigured KEK fails closed at boot, never at request time.
- encryptionVaultService is mounted under /api/v1/me/encryption-vault
so it inherits the existing JWT middleware and shows up next to the
GDPR self-service endpoints.
Tests (services/encryption-vault/kek.test.ts)
- 11 Bun-test cases covering: KEK load (happy path, wrong length,
idempotent, before-load guard), generateMasterKey randomness,
wrap/unwrap roundtrip, IV uniqueness across repeated wraps,
wrong-MK-length rejection, tampered-ciphertext rejection,
wrong-length IV rejection, wrong-KEK rejection.
- Service-level integration tests deferred — they need a real
Postgres for the RLS behaviour, set up via existing mana-sync
test pattern in CI.
Config + env
- .env.development gains MANA_AUTH_KEK= (empty → dev fallback)
with a comment explaining the production requirement.
- services/mana-auth/package.json gains "test": "bun test".
Verified: 11/11 KEK tests passing, 31/31 Phase 1 client tests still
passing, only pre-existing TS errors remain in mana-auth (auth.ts:281
forgetPassword + api-keys.ts:50 insert overload — both unrelated).
Phase 3: client wires the MemoryKeyProvider to GET /encryption-vault/key
on login, flips registry entries to enabled:true table by table, and
extends the Dexie hooks to call wrapValue/unwrapValue on configured
fields.
Phase 4: settings UI for lock state, key rotation, recovery code opt-in.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
3a4c6654b5 |
test(events): playwright e2e specs + flake-resistant config
Restore the events Playwright suite (lost in a rebase) and harden it
against Vite cold-start HMR flakes. Six tests cover the local-first
host flow (create, edit guests, RSVP totals, delete) and the public
RSVP page (snapshot render, submit, upsert, 404). The host flow runs
in guest mode and dismisses the welcome modal via a small helper.
playwright.config.ts boots mana-auth, the Vite dev server, and
mana-events as separate webServers with reuseExistingServer=true so
running tests against an already-up dev environment is a no-op. Bumps
the per-test timeout to 60s and the expect timeout to 10s, and tells
goto() to wait for networkidle so locator clicks don't race a Vite
recompile.
|
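The multi-server config shape can be sketched as a Playwright config fragment. The commands and ports below are placeholders except 3065 (the mana-events port from the compose file); the timeouts match the commit:

```typescript
// Sketch of the three-webServer layout. Commands/URLs are illustrative
// placeholders, not the repo's actual scripts.
import { defineConfig } from "@playwright/test";

export default defineConfig({
  timeout: 60_000,            // per-test
  expect: { timeout: 10_000 },
  webServer: [
    { command: "pnpm --filter mana-auth dev", url: "http://localhost:3001", reuseExistingServer: true },
    { command: "pnpm --filter web dev", url: "http://localhost:5173", reuseExistingServer: true },
    { command: "pnpm --filter mana-events dev", url: "http://localhost:3065", reuseExistingServer: true },
  ],
});
```

With reuseExistingServer=true, Playwright probes each url first and only runs the command if nothing answers, which is what makes the already-up dev environment a no-op.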
||
|
|
4d46cbb676 |
i18n(cycles): real translations for it/fr/es
Replace the English-copy stubs in it.json, fr.json, and es.json with
actual Italian, French, and Spanish translations covering the full
cycles namespace — phase labels, flow/mood levels, section headers,
actions, placeholders, stats, relative dates, symptom manager, and
the calendar. Key structure remains identical across all 5 locales
so the parity test still passes.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
||
|
|
343804b25c |
refactor(cycles): make date formatting locale-aware
Replace hardcoded 'de-DE' toLocaleDateString calls across ListView,
CyclesWidget, and pure helpers with the active svelte-i18n locale.
Pure helpers in queries.ts now take their locale (and for relative
dates, their labels) as parameters so they stay pure and testable:
- formatLogDate(iso, labels, dateLocale)
- groupLogsByMonth(logs, dateLocale)
- New RelativeDateLabels type, exported from the module barrel
ListView builds relativeLabels from $_ and threads dateLocale through;
CyclesWidget does the same using a tiny $locale-derived helper.
New i18n keys cycles.relativeDate.{today,yesterday,daysAgo} across
all five locales (real de/en translations, stubs for it/fr/es).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|