Wraps all `var(--color-X)` references with `hsl()` and routes the muted
backgrounds + borders through `--color-card` / `--color-border` instead
of the rgba-on-white fallbacks. The brand violet (#8b5cf6) automations
accent and the deliberate when/filter/then flow-step palette
(blue/amber/green) stay literal — they encode trigger/condition/action
semantics, not theme intent.
Last file from the original P5 ListView migration list.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The pre-launch consolidation collapsed 17+ per-product backends into
the single Hono/Bun process at apps/api. That makes apps/api the
single point of failure for every authenticated module call the
unified Mana web app makes — a missing index, a hot-path allocation
in auth middleware, or rate-limiter contention degrades all 16
modules at once. The other scripts in load-tests/ already cover
mana-auth, mana-sync, mana-llm and the SvelteKit frontends, but
apps/api itself was unmeasured. This is that missing piece.
What it tests
-------------
A weighted mixed workload that walks the full middleware stack
(CORS → request logger → rate limit → auth → router → handler)
plus a representative range of handler shapes:
25% GET /health (no auth, baseline)
20% GET /api/v1/moodlit/presets (auth + in-memory return)
15% GET /api/v1/chat/models (auth + DB read)
20% POST /api/v1/calendar/events/expand (auth + Zod + RRULE compute)
12% POST /api/v1/todo/compute/next-occurrence
(auth + Zod + rrule lib)
8% POST /api/v1/todo/compute/validate (auth + Zod + validation)
Deliberately no write endpoints — those would conflate write
amplification with API-server cost. The compute routes here all run
in <50ms warm; what we're measuring is the overhead the unified
server adds on top of pure handler work.
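The weighted mix above can be sketched as a cumulative-weight draw — routes and weights copied from the list, but the helper itself is illustrative, not the shipped k6 code:

```typescript
// Illustrative route table mirroring the workload mix above.
const ROUTES: Array<{ weight: number; name: string }> = [
  { weight: 25, name: 'GET /health' },
  { weight: 20, name: 'GET /api/v1/moodlit/presets' },
  { weight: 15, name: 'GET /api/v1/chat/models' },
  { weight: 20, name: 'POST /api/v1/calendar/events/expand' },
  { weight: 12, name: 'POST /api/v1/todo/compute/next-occurrence' },
  { weight: 8, name: 'POST /api/v1/todo/compute/validate' },
];

// Pick a route from a uniform draw in [0, 1) by walking cumulative weights.
function pickRoute(r: number): string {
  let t = r * ROUTES.reduce((sum, route) => sum + route.weight, 0);
  for (const route of ROUTES) {
    t -= route.weight;
    if (t < 0) return route.name;
  }
  return ROUTES[ROUTES.length - 1].name; // r ≈ 1 edge case
}
```

Each VU iteration draws once (k6 would use Math.random()) and fires the selected request.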
Per-route-class p95 budgets via tags:
health < 100ms
authed_get < 300ms
authed_post < 500ms
global p95 < 500ms, p99 < 2s
Application-level error rate (4xx + 5xx + check failures) must stay
under 1% — exit code 1 otherwise, so it's CI-gateable.
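In k6, budgets like these map onto tag-scoped thresholds. A sketch of the options block — the `route_class` tag name is an assumption, and the shipped script's exact expressions may differ:

```typescript
// Sketch only: k6 evaluates per-tag sub-metric thresholds via 'metric{tag:value}'.
const options = {
  thresholds: {
    'http_req_duration{route_class:health}': ['p(95)<100'],
    'http_req_duration{route_class:authed_get}': ['p(95)<300'],
    'http_req_duration{route_class:authed_post}': ['p(95)<500'],
    http_req_duration: ['p(95)<500', 'p(99)<2000'], // global budgets
    http_req_failed: ['rate<0.01'], // application-level error budget
  },
};
```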
Auth setup
----------
apps/api requires JWT on every /api/* route. setup() acquires a
token once before VUs start hammering and shares it for the run.
Three sources tried in order:
1. $MANA_API_TOKEN (CI passes a pre-minted token)
2. login at $TEST_EMAIL / $TEST_PASSWORD
3. register a fresh account on the fly
Bails with a clear error message if all three fail.
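The fallback chain can be sketched like this (the source functions are stand-ins for the real env lookup, login call, and registration call):

```typescript
// Each source tries one acquisition strategy and returns null on failure.
type TokenSource = { name: string; get: () => string | null };

function acquireToken(sources: TokenSource[]): string {
  for (const source of sources) {
    const token = source.get();
    if (token) return token; // first source that yields a token wins
  }
  throw new Error(
    'auth setup failed: set MANA_API_TOKEN, provide TEST_EMAIL/TEST_PASSWORD, ' +
      'or allow on-the-fly registration',
  );
}
```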
Load profile
------------
4 minutes total: 30s warmup → 2m sustained @ 50 VUs → 1m peak @
100 VUs → 30s cooldown. Override with --vus / --duration as usual.
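As a k6 `stages` array, that profile would look roughly like this (a sketch; the ramp shapes are assumed, not copied from the script):

```typescript
// Sketch of the ramping-VU profile described above.
const stageOptions = {
  stages: [
    { duration: '30s', target: 50 },  // warmup ramp to 50 VUs
    { duration: '2m', target: 50 },   // sustained plateau
    { duration: '1m', target: 100 },  // peak
    { duration: '30s', target: 0 },   // cooldown
  ],
};
```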
Closes item #23 in docs/REFACTORING_AUDIT_2026_04.md.
Follow-ups not in this commit:
- Wire into .github/workflows/daily-tests.yml (requires standing
up the apps/api stack in the runner — bigger lift, separate PR)
- Per-module thresholds once we have a few real runs and know
where the natural baseline sits
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Six chrome-level UI components — modals, toasts and prompts that float
above the workbench — moved off hand-rolled #1e293b/#e5e7eb/#6366f1/etc.
literals onto theme tokens.
Files migrated:
- RecoveryCodeUnlockModal — backdrop overlay (literal black/60),
danger-state background → color-error
- SessionWarning — warning toast bg → color-warning, dark text on the
bright warning bg stays literal (intentional contrast pair)
- SuggestionToast — primary CTA → color-primary, muted/error text →
tokens. The toast itself keeps its dark literal bg by design (it's
a floating notification, not a theme-aware surface)
- SyncConflictToast — hover background → color-surface-hover
- PwaUpdatePrompt — primary CTA was hardcoded indigo (#6366f1), now
follows the active theme variant
- auth/AuthRequiredModal — backdrop overlay literal, primary button
text → color-primary-foreground
Backdrop overlays use literal `hsl(0 0% 0% / 0.6)` rather than a theme
token because semi-transparent black is the deliberate UI affordance
for "modal screen dimmer", not a theme-aware surface.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds a "Local Login & Dev Users" section to docs/LOCAL_DEVELOPMENT.md
and a short pointer in services/mana-auth/CLAUDE.md so the next dev
finds the script without first hitting the "why can't I log in?" wall:
- Why it exists (no admin seed, requireEmailVerification + no SMTP)
- The 3 default accounts + password
- Single-account form + env overrides (TIER, AUTH_URL, …)
- Idempotency promise
- Prereqs (Postgres + mana-auth on :3001)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Three more removed packages had stale COPY entries in the base
Dockerfile, blocking the build the moment is_base_image_stale tried
to rebuild the image:
- packages/credit-operations (deleted in NestJS→Hono migration)
- packages/shared-api-client (same)
- packages/shared-splitscreen (separate cleanup)
Same shape as the shared-subscription-types/-ui removal earlier
today (commit a9178ec2f): the package deletions land in cleanup
commits while the Dockerfile lines stay behind, because nobody runs
--base manually anymore — until is_base_image_stale picks up a
packages/ change and tries to rebuild, at which point a COPY of a
non-existent path bricks the build.
Removed the COPY lines AND the corresponding
`cd /app/packages/{credit-operations,shared-api-client} && pnpm build`
lines from the post-install build chain so the stale references
can't accidentally be re-introduced.
Verified by `grep '^COPY packages/' Dockerfile.sveltekit-base | awk
'{print $2}' | while read -r pkg; do [ -d "$pkg" ] || echo
"MISSING: $pkg"; done` returning empty.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
skilltree/types.ts has had
`var(--color-branch-{intellect,body,creativity,social,practical,mindset})`
references for as long as I can grep, but those CSS variables were
never defined anywhere. Every skill in the gamified tree was
rendering with inherited color (effectively an invisible accent),
making the 6 branches visually indistinguishable.
Add the 6 colors as a new "domain accent" section in shared-tailwind/themes.css,
defined once at :root and never overridden by .dark or variant blocks
(they're brand-internal accents, not theme-aware — the same way cycles
keeps its brand pink literal).
- intellect → blue (217 91% 60%) — knowledge, thinking
- body → red (0 84% 60%) — physical, energy
- creativity → violet (271 91% 65%) — art, expression
- social → amber (38 92% 50%) — warmth, relationships
- practical → teal (173 80% 40%) — craft, tools
- mindset → green (142 71% 45%) — calm, growth
Also update skilltree/types.ts to wrap the var() calls with hsl() per
the canonical convention (the values are now raw HSL channels).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Local mana-auth ships with no built-in admin seed and with
`requireEmailVerification` turned on but no real SMTP — so every
developer ends up hand-writing the same "register + UPDATE
auth.users" SQL incantation. This bundles it into one idempotent
script plus a pnpm alias.
pnpm setup:dev-user # creates 3 default accounts
./scripts/dev/setup-dev-user.sh foo bar # creates / repairs one
What it does per user:
1. POST /api/v1/auth/register on mana-auth (so Better Auth's
signUpEmail handles password hashing the way the runtime
expects — no hand-rolled scrypt)
2. UPDATE auth.users SET email_verified = true, access_tier = 'founder'
so the new user can immediately log in AND exercise every
tier-gated module without a tier upgrade dance
Idempotent: existing users get tier + verification re-applied without
touching the password. Re-running after a partial setup is safe.
Defaults to three accounts (tills95 / tilljkb / rajiehq @gmail.com,
all with password "Aa-123456789") so the next dev doesn't have to
remember anything. Override via `TIER=alpha` / `DB_HOST=...` env
vars when needed.
Two preflight gates fail loudly: psql on PATH and mana-auth
reachable on :3001. ON_ERROR_STOP=1 is set in psql so a bad SQL run
doesn't get silently swallowed.
Replaces the dangling `seed:dev-user` package.json alias that pointed
at a `pnpm --filter @mana/auth db:seed:dev` script that was never
created — clean rename to `setup:dev-user` to match the existing
`setup:env` / `setup:db` family.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two pre-existing bugs in the memoro module that became visible after
the Phase 5 LLM auto-title work landed. Both are independent of the
Phase 5 framework — neither was introduced by it — but the auto-title
was the first feature to systematically write to memo.title, which is
when the broken read path stopped hiding behind always-null titles.
Bug 1: DetailView shows ciphertext instead of plaintext
apps/mana/apps/web/src/lib/modules/memoro/views/DetailView.svelte
passed `useDetailEntity({ table: 'memos', ... })` WITHOUT setting
`decrypt: true`. The crypto registry has memos.{title, intro,
transcript} marked as encrypted, so the inputs were binding to
raw `enc:1:Ghj1eJV0zz4PgfRL...` ciphertext strings instead of
plaintext. Nobody noticed before because:
- title was always null (no UI path to set it until Phase 5)
- intro is rarely used
- transcript was the only visible encrypted field, and the
garbled `enc:1:...` string in the transcript area was apparently
attributed to "broken transcription" rather than "broken read"
Add `decrypt: true` to the useDetailEntity options. Same flag the
other Mana modules already use for their encrypted DetailViews.
Bug 2: createdAt and updatedAt never set on memo records
apps/mana/apps/web/src/lib/modules/memoro/stores/memos.svelte.ts
create() built a LocalMemo object without populating createdAt or
updatedAt. The LocalMemo type declares both as required strings,
but TypeScript didn't catch the omission because the store relied
on a type-assertion / partial-shape pattern.
The Dexie creating hook in apps/mana/apps/web/src/lib/data/database.ts
only auto-stamps userId + __fieldTimestamps — it does NOT auto-stamp
createdAt. Module stores are expected to set their own timestamps
(consistent with the todo, calendar, contacts, notes stores etc.).
So every memoro memo had `createdAt === undefined`, and the
DetailView's `new Date(memo.createdAt ?? '').toLocaleDateString('de')`
rendered as "Erstellt: Invalid Date" for every single memo.
Fix: set both timestamps explicitly in create() before the Dexie
add. Existing memos in the wild are still broken — they'd need a
one-shot migration to backfill createdAt from the
__fieldTimestamps map, but that's a bigger commit.
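A minimal sketch of why the omission type-checked — field set reduced for illustration; the real LocalMemo has more fields:

```typescript
interface LocalMemo {
  id: string;
  title: string | null;
  createdAt: string; // required by the type…
  updatedAt: string;
}

// …but a type assertion waives the completeness check, so this compiles:
const broken = { id: 'm1' } as LocalMemo;
// broken.createdAt is undefined at runtime → "Invalid Date" in the UI.

// Annotating instead of asserting restores the compile-time error:
const now = new Date().toISOString();
const fixed: LocalMemo = { id: 'm1', title: null, createdAt: now, updatedAt: now };
```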
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The /admin route in the unified Mana web app was rendering hardcoded
mock data (42 users, 156 successful logins, 3 failed) for every
admin who opened it. The previous code had a TODO comment to wire
up a real endpoint and the backend half had been waiting for the
frontend half ever since the consolidation landed.
Backend (mana-auth):
Add GET /api/v1/admin/stats — admin-only, returns the seven counts
the dashboard needs in a single response. Each count is its own
Drizzle query against auth.users / auth.sessions /
auth.login_attempts; they run in parallel via Promise.all, so total
latency is dominated by the round-trip to Postgres, not the
per-query work.
Stats:
- totalUsers → users where deleted_at IS NULL
- newUsers7d → users created in the last 7 days
- newUsers30d → users created in the last 30 days
- activeSessions → sessions where expires_at > now() AND not revoked
- uniqueUsers24h → distinct user_id from sessions with last_activity
in the last 24h (and not revoked)
- loginSuccess7d → login_attempts where successful=true, last 7d
- loginFailed7d → login_attempts where successful=false, last 7d
Plus a generatedAt ISO timestamp so the client can show staleness
if it ever caches the response.
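The parallel-count shape can be sketched generically (the helper and counter names are illustrative; the real route issues Drizzle count queries):

```typescript
// Each counter stands in for one async Drizzle count query.
type Counter = () => Promise<number>;

async function collectStats(
  counters: Record<string, Counter>,
): Promise<Record<string, number | string>> {
  const names = Object.keys(counters);
  // All counts run concurrently: total latency ≈ the slowest single query.
  const values = await Promise.all(names.map((name) => counters[name]()));
  const stats: Record<string, number | string> = {};
  names.forEach((name, i) => { stats[name] = values[i]; });
  stats.generatedAt = new Date().toISOString(); // staleness marker for caches
  return stats;
}
```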
Frontend (apps/mana/apps/web):
- Add adminService.getStats() in the existing admin API service
(sits next to getUsers / getUserData / deleteUserData; uses the
same authenticated base-client and ApiResult envelope).
- Replace the onMount mock-data block in admin/+page.svelte with
a single adminService.getStats() call. Drop the local Stats
interface in favor of the AdminStats type exported from the
service.
- Guard the Success Rate calculation against division by zero on
fresh deployments — when there have been no login attempts in
the last 7 days, render '—%' instead of NaN%.
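The guard amounts to something like this (rounding is an assumed detail; the zero-total branch is the point):

```typescript
// No login attempts in the window → '—%' instead of NaN%.
function successRate(loginSuccess7d: number, loginFailed7d: number): string {
  const total = loginSuccess7d + loginFailed7d;
  if (total === 0) return '—%'; // fresh deployment, nothing to divide by
  return `${Math.round((loginSuccess7d / total) * 100)}%`;
}
```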
Verification:
- mana-auth type-check unchanged (baseline errors only)
- mana-auth runtime tests still 19/19 passing
- svelte-check on the two changed web files: zero errors
Closes item #12 in docs/REFACTORING_AUDIT_2026_04.md.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
15 files across 11 modules — the consumer-specific style overrides on
top of DetailViewShell, plus a few module-internal sub-views and pages
(SymbolDetailView, SymbolsView, ContactPage, PageEditBar, TodoPage,
CycleCalendar, SymptomManager).
All :global(.dark) duplicates removed (theme system handles light/dark
via .dark class on <html>) and the hand-rolled #374151/#9ca3af palette
+ indigo/violet/red/green brand accents replaced with hsl(var(--color-X)).
Color handling in this sweep:
- cycles brand pink (#ec4899) stays literal — the menstrual cycle
  tracker accent should NOT track theme primary
- dreams indigo accents and skilltree violet star colors → color-primary
  (these were arbitrary indigo brand accents; they now follow the variant)
- semantic income/success green → color-success
- semantic error/danger red → color-error
- favorite/star amber/gold → color-warning
Final P5 batch closing out the visual track consolidation. Combined with
earlier P5 commits (foundation shells, page-shell, picker, list views),
mana-web now has a single coherent theme-token convention across the
workbench surface.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Temporarily flips every MANA_APPS entry from public/beta/alpha/founder
to 'guest' so the tier-gated workbench picker, openApps soft-filter,
and (app)/+layout per-route gate can be exercised end-to-end without
needing a tier upgrade. The hasAppAccess hierarchy is unchanged —
guests are still tier 0; this just makes every app's threshold also 0.
Revert before any release. Only the 36 in-app entries are touched;
function signatures and type definitions stay intact.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Guests and under-tier users could see and use every module in the
workbench because tier-filtering only existed in @mana/shared-branding's
MANA_APPS list — never in the workbench app-registry that the picker
and the page-level routes actually consume. Three leaks closed:
──── 1. Workbench AppPagePicker ────
The picker was calling getAllApps() and only filtering by "already
open in this scene". Result: a guest opening "Add page" saw all 32
modules including founder-only ones like dreams, finance, memoro.
Fix: new getAccessibleApps(userTier) helper in app-registry/registry.ts
joins the workbench in-memory map with MANA_APPS by id, looks up
each app's requiredTier, and filters via hasAppAccess. Apps that
exist in the workbench registry but NOT in MANA_APPS (`automations`,
`playground`, the `inventar` ↔ `inventory` id mismatch) default to
visible — hiding them would silently break internal tools for
founders/devs.
AppPagePicker now takes a `userTier` prop and calls
getAccessibleApps(userTier) instead of getAllApps(). (app)/+page.svelte
threads authStore.user?.tier into it.
──── 2. openApps soft-filter ────
The default Home scene seeds [todo, calendar, notes] — `notes` is
founder-tier, so a brand-new guest device would still try to render
the notes view in their workbench tab strip even though they can't
actually use it. Same risk for any cross-device synced scene that
contains gated apps (e.g. founder logs in on a public-tier secondary
account).
Fix: (app)/+page.svelte derives `openApps` through a soft filter
(isAppAccessible) instead of using workbenchScenesStore.openApps
directly. The store keeps the full list — we don't destructively
delete on tier downgrades — so the tabs reappear when the user
upgrades or signs in. Internal-only apps (no MANA_APPS entry)
stay visible by the same default-visible rule.
──── 3. Per-route tier gate in (app)/+layout.svelte ────
The wrapping <AuthGate> in (app)/+layout.svelte:
- only runs onMount, so it doesn't react to client-side navigation
- skips the tier check entirely when !authStore.isAuthenticated
- has no per-route requiredTier — it's set once on the outer wrapper
So a guest typing /dreams or /cycles in the URL bar slipped past
silently and rendered the gated module. Same for a public-tier user
clicking through to /finance.
Fix: reactive `routeBlocked` derivation in the (app) layout:
- Extract first path segment from $page.url.pathname
- Look it up in MANA_APPS by id
- If found and user (or 'guest') doesn't satisfy requiredTier,
render an inline tier-denied panel instead of {@render children()}
The panel mirrors AuthGate's tier-denied design (same locked icon +
tier comparison + "Zur Übersicht" / "Anmelden" buttons) but works
reactively for any subsequent navigation. Routes that don't map to
a MANA_APPS id (settings, profile, admin, help, observatory, …)
fall through with routeAppId=null and are never blocked.
──── New helpers in app-registry ────
getAccessibleApps(userTier?) — filtered AppDescriptor[]
isAppAccessible(appId, userTier?) — boolean for single-app lookup
Both treat `userTier === undefined | null` as 'guest' (the lowest
tier in @mana/shared-branding). Both default-visible for apps not
in MANA_APPS so the workbench-internal tools keep working.
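The shared access rule can be sketched like this (signatures simplified — the real helpers look each app up in MANA_APPS by id, and the tier ladder lives in @mana/shared-branding):

```typescript
// Illustrative tier ladder; 'guest' is the lowest tier.
const TIER_ORDER = ['guest', 'public', 'beta', 'alpha', 'founder'] as const;
type Tier = (typeof TIER_ORDER)[number];

function isAppAccessible(
  requiredTier: Tier | undefined, // undefined → app has no MANA_APPS entry
  userTier: Tier | null | undefined,
): boolean {
  // Default-visible rule: workbench-internal apps not in MANA_APPS
  // are never hidden.
  if (requiredTier === undefined) return true;
  const tier = userTier ?? 'guest'; // undefined | null → lowest tier
  return TIER_ORDER.indexOf(tier) >= TIER_ORDER.indexOf(requiredTier);
}
```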
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Sweep across the workbench-foundation components that wrap or chrome
every app surface in mana-web. All :global(.dark) duplicates removed
(theme system handles light/dark via .dark class on <html>) and the
hand-rolled #374151/#9ca3af palette + indigo/violet brand accents
replaced with hsl(var(--color-X)).
Files migrated:
- DetailViewShell — the inline-edit detail card introduced in my P2 work.
Had compound :global(.dark .detail-view .X) selectors duplicating
every input/border for dark mode. Now single source.
- PickerOverlay — the workbench app picker / page picker.
- page-carousel/PageCarousel — drop-target hover (was hardcoded violet)
becomes color-primary.
- workbench/AppPage — drop-target hover (was hardcoded green) becomes
color-success.
- workbench/scenes/{ConfirmDialog,SceneRenameDialog,SceneTabs} —
modal overlays. The black-50% backdrop stays literal (hsl(0 0% 0% / 0.5))
since it's a deliberate semi-transparent overlay, not theme-aware.
The .danger button background switches from #dc2626 to color-error.
- voice/VoiceCaptureBar — primary indigo accent + red error state
both follow theme tokens now.
- links/LinkedItems — text + border tokens.
These are foundational — every app rendered in the workbench inherits
its chrome and detail view from these files, so any visual tweak now
propagates everywhere consistently.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Pre-launch audit of the entire mana-monorepo. 29 items prioritized
across 4 phases (Critical → Low) plus a Bonus section. Each item
is annotated with its current status (✅ done / ❌ false / ⚠️
overstated / ☐ open) and concrete file paths.
Key findings:
- ~70% of the original LLM-generated audit claims were either
factually wrong, substantially overstated, or already
implemented. The doc records both the original claim and the
verified reality so future audits don't re-investigate the
same false leads.
- The genuine launch-relevant items (Phase 1 Critical) are all
addressed: recursive turbo dev scripts removed (#2),
structured logging via shared-hono + shared-logger (#3),
sso-config consistency spec for the auth↔CORS drift (#4),
apps/api response shape helpers (#5).
- Bonus discoveries during the sweep: typed Hono context for
apps/api modules (#28, 69 → 0 type errors), Dead-Code-Sweep
of 4 zero-consumer packages + abandoned game stubs +
redundant lockfiles (#29, ~21000 LOC removed across the full
sweep).
- Items closed as false/won't-fix: per-product landing pages
kept (#1), service duplication myth (#6), store pattern drift
overstated (#7), package count goal unrealistic (#8),
PRE_LAUNCH_CLEANUP inverted (#15), encryption test parity
category error (#22), secrets/CI/CD docs already exist
(#24, #25), shared-errors salvage skipped (#27).
Status at commit time: 23/29 items processed, 6 remaining
(#12 admin mock data, #13 .env hygiene, #14 cleanup nearly done,
#23 apps/api k6 script, #26 apps/context lockfile decision, #27
already closed).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Three pnpm artifacts that were either pre-consolidation leftovers or
unintentional drift:
- apps/context/pnpm-lock.yaml + apps/context/pnpm-workspace.yaml
apps/context used to be its own nested workspace declaring
apps/* and packages/*. After consolidation only apps/context/
apps/mobile remains, and the root pnpm-workspace.yaml already
matches it via 'apps/*/apps/*'. The nested lockfile (242 KB)
was a separate dependency graph drifting independently from
the root.
- services/mana-media/packages/client/pnpm-lock.yaml
Anomalous lockfile in a workspace sub-package. The root
workspace already covers services/*/packages/* — no reason
for client/ to maintain its own resolution.
Verified after deletion:
- pnpm install completes cleanly (~16s) and now resolves
apps/context/apps/mobile from the root lockfile (pnpm list
confirms the workspace registration)
- apps/api type-check still 0 errors
- mana-auth tests still 19/19 passing
Tracked as item #26 in docs/REFACTORING_AUDIT_2026_04.md.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Three independent dead-code cleanups bundled together because they
all touch dev scripts in the root package.json:
1. games/voxelava/ + games/worldream/ — orphaned game stubs
~5886 LOC of Svelte components, route handlers, and types with
no root package.json in either directory, no CI references, no
docker-compose entry, no mana-apps registry presence. The
matching root scripts dev:worldream:web + worldream:dev pointed
to a @worldream/web filter that doesn't exist as a workspace
member. games/arcade and games/whopixels remain untouched.
2. apps/memoro/* — clean stale @memoro/web references
apps/memoro/apps/web/ was removed during the consolidation; the
memoro frontend now lives in apps/mana/apps/web/src/lib/modules/
memoro/. But several scripts still pointed at the deleted
filter:
- root: dev:memoro:web (deleted), dev:memoro:app + :full
rewritten to drop the :web piece (server + audio-server
only)
- apps/memoro/package.json: dev:web removed, top-level dev
script removed (filtered @memoro/* which would have hit
the dead web filter)
3. apps/memoro/apps/server: declare @mana/notify-client dep
src/lib/notify.ts:6 has been importing @mana/notify-client
without declaring it in package.json — works by accident via
hoisted node_modules in the workspace. Add the dep so the
import is properly tracked. Found while verifying that
notify-client (which has 0 declared consumers) was actually
safe to keep.
Tracked as items #18, #19, #29 in
docs/REFACTORING_AUDIT_2026_04.md.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Pre-launch audit found 4 packages with zero workspace consumers
that were leftover from before the consolidation:
- @mana/cards-database (1475 LOC)
Pre-consolidation flashcard backend with its own Docker Compose
and Drizzle config. Replaced by the cards module in the unified
Mana app: apps/mana/apps/web/src/lib/modules/cards/. Now uses
Dexie + mana-sync against mana_platform.
- @mana/shared-api-client (1110 LOC)
Generic Go-style {data, error} REST client. Only reference left
was a string entry in shared-vite-config's noExternal list (not
a real import).
- @mana/shared-errors (1791 LOC)
NestJS-coupled exception filter package from before the Hono
migration. The Hono replacement (serviceErrorHandler in
@mana/shared-hono) ships in a separate commit. Result<T,E> +
ErrorCode enum bits had no consumers and weren't worth saving
standalone — if a need emerges they can grow organically.
- @mana/shared-splitscreen (694 LOC)
Side-by-side panel layout components. No code consumers; only
referenced from shared-vite-config noExternal and an old design
doc. The unified Mana app uses its own workbench scenes for
multi-pane layouts.
Verified zero code consumers via grep across .ts/.svelte/.json
before deletion. apps/api type-check stays at 0 errors after the
sweep, mana-auth tests still 19/19 passing.
Also clean packages/shared-vite-config/src/index.ts noExternal
list while we're here: drop the two deleted entries plus 8 ghost
packages (shared-feedback-ui/-service/-types, shared-help-ui/
-types/-content, shared-profile-ui, shared-subscription-ui) that
were referenced by name but never existed in packages/. List goes
from 22 → 12 entries.
Net: ~5070 LOC + workspace declarations removed.
Tracked as item #29 in docs/REFACTORING_AUDIT_2026_04.md.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Locks in the relationship between the places that must agree about
SSO origin configuration:
1. TRUSTED_ORIGINS in better-auth.config.ts (Better Auth allow-list)
2. CORS_ORIGINS env var on mana-auth in docker-compose.macmini.yml
Invariant: the HTTPS subset of (1) must be a subset of (2) — every
origin Better Auth trusts must also pass CORS preflight.
Background: root CLAUDE.md references this spec file as the canonical
"Adding an app to SSO" verification step (line 116) but the file
itself never existed. The first run of this spec immediately caught
two real bugs:
- 3 origins in TRUSTED_ORIGINS were missing from CORS_ORIGINS
(https://auth.mana.how, https://arcade.mana.how, https://whopxl.mana.how)
- 22 zombie subdomain entries in CORS_ORIGINS left over from before
the consolidation (calendar, chat, todo, ...) that no app actually
routes to anymore
Both fixes shipped together with the TRUSTED_ORIGINS extraction in
the broader pre-launch sweep (commit 919fcca4b). This spec is the
guard against the same drift creeping back in.
Eight tests:
- canonical mana.how + auth subdomain present
- localhost dev origins (3001, 5173) present
- all production origins HTTPS
- all production origins on *.mana.how
- no duplicates
- every HTTPS trusted origin appears in mana-auth CORS_ORIGINS
- soft warning for CORS_ORIGINS entries not in trustedOrigins
(catches drift in the other direction)
8/8 pass.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
First real-world consumer of the @mana/shared-llm tier framework.
After STT transcription completes for a voice memo, the memos store
fire-and-forgets a generateTitleTask into the persistent task queue
with refType:'memo' + refId:memoId. A module-side watcher subscribed
via Dexie liveQuery to completed task rows writes the result back
into memo.title and deletes the queue row to mark it consumed.
What this commit ships:
apps/mana/apps/web/src/lib/llm-tasks/generate-title.ts
- generateTitleTask: minTier='none', contentClass='personal'
- runLlm: sends a German system prompt asking for a 3-7 word
title, defensive cleanup of any quotes/markdown the model
might leak through despite the prompt
- runRules: takes the first sentence (split on .!?\n), caps
at maxWords/60-chars, returns a non-empty fallback string.
Predictable and free, works on every device including the
ones where the user has opted out of all LLM tiers.
apps/mana/apps/web/src/lib/llm-task-registry.ts
- Register generateTitleTask alongside extractDate + summarize
so the queue processor can resolve the name back to the
task object after a row is pulled from the persistent table.
apps/mana/apps/web/src/lib/modules/memoro/stores/memos.svelte.ts
- After transcribeMemo successfully writes the transcript +
processingStatus:'completed', enqueue a generateTitleTask
tagged with refType:'memo' + refId + priority:1. Skips the
enqueue if the memo already has a non-empty title (so
manually-titled memos aren't overwritten on re-transcription)
or if the transcript came back empty.
- Wrapped in try/catch — queue failures must NEVER break the
transcription happy path.
apps/mana/apps/web/src/lib/modules/memoro/llm-watcher.svelte.ts
- startMemoroLlmWatcher() / stopMemoroLlmWatcher()
- Subscribes via Dexie liveQuery to llmQueueDb.tasks rows
where state='done', taskName='common.generateTitle',
refType='memo'. For each row:
1. Skip + delete row if result isn't a string (defensive)
2. Skip + delete row if memo no longer exists (deleted
between enqueue and result)
3. Skip + delete row if memo already has a manual title
(user typed one during the LLM round-trip)
4. Otherwise: encryptRecord + memoTable.update with
{ title: result, updatedAt: now }, then delete the
queue row to mark it consumed.
- Module-scope subscription handle, idempotent start/stop.
apps/mana/apps/web/src/routes/(app)/+layout.svelte
- startMemoroLlmWatcher() in handleAuthReady's Phase A right
after startLlmQueue(). The watcher needs to run regardless
of whether the user is currently on /memoro — a memo
transcribing in the background should auto-title even
while the user is doing something else.
- stopMemoroLlmWatcher() in onDestroy alongside stopLlmQueue().
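The rules tier described for generate-title.ts amounts to roughly this heuristic (the word cap and the fallback string here are illustrative values, not the shipped constants):

```typescript
// First-sentence heuristic: predictable, free, runs on every device,
// including those where the user has opted out of all LLM tiers.
function rulesTitle(text: string, maxWords = 7, maxChars = 60): string {
  const firstSentence = (text.split(/[.!?\n]/)[0] ?? '').trim();
  let title = firstSentence
    .split(/\s+/)
    .filter(Boolean)
    .slice(0, maxWords)
    .join(' ');
  if (title.length > maxChars) title = title.slice(0, maxChars).trimEnd();
  return title || 'Neues Memo'; // non-empty fallback for empty transcripts
}
```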
End-to-end flow with a Tier 0 user (no AI enabled):
1. User records a memo via voice capture
2. memos.svelte.ts createWithTranscription() inserts the memo
with processingStatus:'processing'
3. transcribeMemo POSTs the audio to mana-stt, awaits the
transcript
4. Successful transcript → memos.svelte.ts writes
{ transcript, processingStatus:'completed' } to memoTable
5. Same function enqueues generateTitleTask with the transcript
6. LlmTaskQueue processor picks it up (the queue is running in
the background since layout init), calls
orchestrator.run(generateTitleTask, { text: transcript })
7. Orchestrator: Tier 0 user → no LLM tier → falls through to
runRules() which returns the first-sentence heuristic
8. Queue marks the row done with the rules-tier title string
9. Memoro watcher's liveQuery fires with the new completed row
10. Watcher writes title + deletes the queue row
11. ListView's existing useLiveQuery on memoTable picks up the
title change automatically
End-to-end flow with a Browser-tier user:
Steps 1-6 identical, then:
7. Orchestrator: browser tier ready → calls
generateTitleTask.runLlm with the BrowserBackend
8. Web Worker (Phase 3) runs Gemma 4 E2B against a 32-token
budget, returns a 3-7 word German title
9-11. Same as Tier 0 — the title lands in memo.title without
the user clicking anything
This is the validation the entire 4-phase architecture was built
for: a module-side auto-feature that's completely tier-agnostic,
fire-and-forget, persistent across reloads, and that gracefully
degrades from Gemma 4 down to a regex when the user has opted out.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The workbench paper card containing every app was hardcoded cream
(#fffef5) in light mode and dark brown (#252220) in dark mode via
:global(.dark). Now it uses hsl(var(--color-card)) so it follows the
active theme variant.
The drag-handle bar, move buttons, window buttons, resize handle and
title all switch from hand-rolled gray scale to color-foreground /
color-muted-foreground / color-surface-hover. The close button hover
becomes color-error. The resize purple glow becomes color-primary.
This is the foundational shell — every app rendered in the workbench
inherits its background from this file, so the migration here unblocks
visual consistency across the whole app surface.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
31 hand-rolled rules + their :global(.dark) duplicates → hsl(var(--color-X)).
The largest scoped-CSS file in the P5 sweep. Indigo accents (#6366f1) for
the inline-editor borders, status badges, view-tab active state, filter-tab
active state, search-input focus, and ins-symbol active state all become
hsl(var(--color-primary)) so they follow the active theme variant. Lucid
star ratings (previously hardcoded amber) become hsl(var(--color-warning)).
Danger reds for transcription failure / delete become hsl(var(--color-error)).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
11 hand-rolled rules + their :global(.dark) duplicates → hsl(var(--color-X)).
The brand pink (#ec4899) stays literal — it's the menstrual cycle tracker
brand color and should not track theme variants. Danger reds switch to
hsl(var(--color-error)) so they follow the theme palette.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
21 hand-rolled rules + their :global(.dark) duplicates → hsl(var(--color-X)).
Income/expense semantic colors switch from literal #22c55e/#ef4444 to
hsl(var(--color-success))/hsl(var(--color-error)) — they keep their meaning
(green for money in, red for money out) but now follow the theme palette.
.add-btn primary action and .cat-chip.selected state move from hardcoded
indigo to color-primary so they follow the active theme variant.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
22 hand-rolled rules + their :global(.dark) duplicates → hsl(var(--color-X)).
The inline-editor border/background (was hardcoded indigo rgba) now uses
hsl(var(--color-primary) / alpha). The .ed-btn.primary save button (was
hardcoded #6366f1) becomes hsl(var(--color-primary)) so it follows the
active theme variant.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
13 hand-rolled rules + their :global(.dark) duplicates → hsl(var(--color-X)).
The hardcoded #3b82f6 today-marker becomes hsl(var(--color-primary)) so it
follows the active theme variant. Drag-target hover outline (was hardcoded
blue-500) also becomes primary. Tag-pill background keeps its --tag-color
custom property logic.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The mechanical var()→hsl(var()) sweep in commit 6e20c298a left 14
lines across spiral/+page.svelte and KeyboardShortcutsModal.svelte
with three closing parens instead of two:
background: hsl(var(--color-card)));   <-- extra closing paren
Each of those originals had a hex fallback like
`var(--color-card, #fff)` and the replacer kept the trailing close
paren after dropping the fallback. The styles never showed up wrong
in the browser because the parser silently dropped the entire
declaration, but Tailwind 4's CSS validator catches it now and the
production build fails with "Missing opening (".
Mechanical fix via `sed -E 's/hsl\(var\(--color-([a-z-]+)\)\)\);/hsl(var(--color-\1));/g'`
on the two files. No semantic change.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
First file in the visual track consolidation, retried after the theme
system was rewritten in 919fcca4b. The earlier attempt failed because
themes.css was inconsistent (--color-X resolved to wrapped string from
@theme inline, then to raw channels after runtime store ran, with raw
channels being invalid as a CSS color). With the new single-layer
themes.css all hsl(var(--color-X)) usages resolve correctly in light
and dark mode regardless of hydration state.
Substitution applied (the canonical pattern):
color: #374151 + :global(.dark) #e5e7eb → color: hsl(var(--color-foreground))
color: #9ca3af → color: hsl(var(--color-muted-foreground))
border: 1px rgba(0,0,0,0.08) + dark dup → border: 1px solid hsl(var(--color-border))
background: rgba(0,0,0,0.04) + dark dup → background: hsl(var(--color-surface-hover))
Net: 260 → 245 LOC, 7 hand-rolled palette rules eliminated, all 6
:global(.dark) selectors removed (theme system handles both modes via
.dark class on <html>).
Brand colors that should NOT track the theme primary stay literal:
- violet category badge (#8b5cf6) — kept hardcoded with color-mix for
the 12% alpha background, since this is a deliberate accent
indicating quote category, not a theme color
- favorite-active red — switched from literal to var(--color-error)
so it follows the theme's error color (consistent with other
delete/danger affordances in the app)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Until now, modules wanting to use the orchestrator had to await each
LLM call inline in their store code. That's fine for foreground tasks
("user clicked summarize") but a non-starter for background work
("auto-tag every new note", "generate a title for every voice memo
after STT finishes"). Background tasks need to:
- Queue up while no LLM tier is ready, then drain when one becomes
available (e.g. user just enabled the browser tier from settings)
- Survive page reloads, browser restarts, and the user navigating
away mid-execution
- Run one at a time without blocking the foreground UI
- Allow modules to subscribe to results reactively without polling
- Retry transient failures (network, model loading) but not
semantic ones (tier-too-low, content blocked)
Phase 4 ships exactly that.
Architecture:
packages/shared-llm/src/queue.ts — LlmTaskQueue class
+ QueuedTask interface (the persistent row shape)
+ EnqueueOptions (refType/refId/priority/maxAttempts)
+ TaskRegistry type (name → LlmTask map)
+ LlmTaskQueueOptions (table + orchestrator + registry +
retryBackoffMs + idleWakeupMs)
Public API:
- enqueue(task, input, opts) → string (returns the queued id)
- get(id), list(filter)
- retry(id), cancel(id), purge(olderThanMs)
- start(), stop() (idempotent processor lifecycle)
apps/mana/apps/web/src/lib/llm-queue.ts — web app singleton
- Dedicated `mana-llm-queue` Dexie database (separate from the
main `mana` IDB; see comment for the rationale: ephemeral
per-device state, no encryption needed, no sync needed, doesn't
belong in the long-frozen `mana` schema)
- Wires up the queue with llmOrchestrator + taskRegistry
- Exposes startLlmQueue() / stopLlmQueue() for the layout hook
apps/mana/apps/web/src/lib/llm-task-registry.ts
- Maps task names → task objects so the queue processor can
look up the implementation when pulling rows off the table.
Closures can't be persisted, so we round-trip via name.
- Currently registers extractDateTask + summarizeTextTask;
module-side tasks land here as we add them.
apps/mana/apps/web/src/routes/(app)/+layout.svelte
- startLlmQueue() in handleAuthReady's Phase A (auth-independent)
so guests + authenticated users both get the queue
- stopLlmQueue() in onDestroy as a fire-and-forget cleanup
Processor loop semantics (the heart of the implementation):
1. On start(), reclaim any 'running' rows from a crashed previous
session — reset them to 'pending'. The orphan recovery is the
reason a crash mid-task doesn't leave the queue stuck.
2. findNextRunnable() picks the highest-priority pending task whose
`notBefore` (retry-backoff timestamp) is in the past. Sort key:
priority desc, then enqueuedAt asc (FIFO within priority).
3. Mark the task running, increment attempts, look up the LlmTask
in the registry, hand it to orchestrator.run().
4. On success: mark done, store result + source + finishedAt.
5. On error:
- TierTooLowError or ProviderBlockedError → fail immediately,
no retry. These are not transient — the user's settings or
the content itself need to change.
- Anything else → if attempts < maxAttempts, reset to pending
with notBefore = now + retryBackoffMs (default 60s). Else
mark failed.
6. When no work is pending, sleep on a Promise that resolves when
either (a) someone calls enqueue() (which fires notifyWakeup),
or (b) idleWakeupMs elapses (default 30s, safety net for any
missed wakeup signal).
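Steps 2 and 5 can be sketched as pure functions (field and error names follow the commit text; the concrete row shape is an assumption):

```typescript
// Hypothetical sketch of the queue's pick-next and retry decisions.
// Field names (priority, enqueuedAt, notBefore, attempts) follow the
// commit text; the real persisted row shape may differ.
interface QueuedTask {
  id: string;
  status: 'pending' | 'running' | 'done' | 'failed';
  priority: number;
  enqueuedAt: number;
  notBefore: number; // retry-backoff timestamp
  attempts: number;
  maxAttempts: number;
}

// Step 2: highest priority first, FIFO within the same priority,
// skipping rows whose backoff window has not elapsed yet.
function findNextRunnable(rows: QueuedTask[], now: number): QueuedTask | undefined {
  return rows
    .filter((r) => r.status === 'pending' && r.notBefore <= now)
    .sort((a, b) => b.priority - a.priority || a.enqueuedAt - b.enqueuedAt)[0];
}

// Step 5: semantic errors fail immediately; transient ones retry with backoff.
function onTaskError(
  task: QueuedTask,
  err: Error,
  now: number,
  retryBackoffMs = 60_000,
): QueuedTask {
  const semantic = err.name === 'TierTooLowError' || err.name === 'ProviderBlockedError';
  if (semantic || task.attempts >= task.maxAttempts) {
    return { ...task, status: 'failed' };
  }
  return { ...task, status: 'pending', notBefore: now + retryBackoffMs };
}
```

The same-priority FIFO ordering is what keeps a burst of background enqueues from starving the oldest task.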
Module-side reactive reads use Dexie liveQuery directly on the queue
table — no special subscription API on the queue itself. This is
consistent with how every other Mana module reads its data, so the
mental model stays uniform:
    const tags = useLiveQuery(
      () => llmQueueDb.tasks
        .where({ refType: 'note', refId, taskName: 'common.extractTags' })
        .reverse().first(),
      [refId]
    );
Smoke test: a new "Queue" tab in /llm-test lets you enqueue the
existing extractDate / summarize tasks and watch the live state of
the queue table via liveQuery. The display includes per-row state
badge (pending/running/done/failed), tier source, attempt count,
input/output, and a "Done/failed löschen" ("delete done/failed") button
that exercises purge().
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
After the themes.css rewrite (commit 919fcca4b), --color-X holds raw HSL
channels instead of full hsl() strings. Files using `var(--color-X)` standalone
in scoped CSS were already broken — the value is not a valid CSS color, so
the browser fell back to either the literal hex fallback (`var(--color-X, #...)`)
or to inherited (white in light mode). The Phase 1 rewrite is neutral for
those files (same broken behavior as before), but the now-canonical convention
is to wrap with hsl().
Sweep 11 components/routes that are NOT in the Phase 6 visual rewrite scope:
- Breadcrumbs, KeyboardShortcutsModal
- dashboard/TilePanel, dashboard/TileResizeHandle
- page-carousel/PageCarousel
- workbench/scenes/{ConfirmDialog, SceneRenameDialog, SceneTabs}
- routes/(app)/{llm-test, observatory, spiral}
Mechanical replacement: `var(--color-X[, fallback])` → `hsl(var(--color-X))`.
The hex fallbacks are dropped because :root in themes.css now defines all
--color-X values statically.
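An illustrative TypeScript version of the replacement (a sketch of the transform, not the exact script that was run):

```typescript
// Wrap standalone var(--color-X) refs in hsl() and drop any hex fallback.
// The lookbehind skips refs that are already hsl()-wrapped so the sweep
// is idempotent. Illustrative only; not the script actually used.
function wrapColorVars(css: string): string {
  return css.replace(
    /(?<!hsl\()var\((--color-[a-z-]+)(?:,\s*[^)]+)?\)/g,
    (_match, token) => `hsl(var(${token}))`,
  );
}
```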
TilePanel had two unknown-token references that don't exist in the new
themes.css schema and were silently rendering their hex fallbacks:
- `--color-text` → `--color-foreground` (semantic synonym)
- `--color-destructive` → `--color-error` (shadcn name for the same concept)
Skipped from this sweep:
- Files in Phase 6 modules (places, habits, contacts, todo, dreams, finance,
calendar, notes, photos, automations, cycles, events, zitare) — they will
be migrated together with their hand-rolled palettes in Phase 6.
- skilltree/types.ts uses --color-branch-{intellect,body,…} tokens that have
never been defined anywhere — long-standing bug, needs actual brand colors
added to themes.css to fix properly.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The Phase 3 build failed at the worker bundling step with:
"Invalid value 'iife' for option 'worker.format' - UMD and IIFE
output formats are not supported for code-splitting builds."
Vite defaults workers to IIFE format which can't handle code-split
imports. @mana/local-llm's new worker.ts imports @huggingface/
transformers, which itself is internally code-split into many
chunks (the ONNX runtime, the model classes, the tokenizer
families, the lazy backend selectors). IIFE has no way to load
those at runtime.
Switch the web app's vite.config.ts to `worker: { format: 'es' }`.
Module workers landed in Chrome 80, Edge 80, and Safari 15, all far
older than any WebGPU-capable release, so every browser that can run
the local LLM at all also supports ES-module workers. No users lose
support.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The browser tier of @mana/local-llm was running entirely in the main
JS thread. With Gemma 4 E2B that meant ~50-200 ms of synchronous
tensor work per forward pass × ~150 forward passes per generation =
the UI froze for 10-30 seconds during a single chat reply. Scrolling,
clicks, animations all stopped.
Move the actual inference into a Dedicated Web Worker. The main
thread keeps a thin LocalLLMEngine proxy with the same public API
(load / unload / generate / prompt / extractJson / classify /
onStatusChange / isSupported), so existing callers — the /llm-test
page, the playground module, @mana/shared-llm's BrowserBackend, the
Svelte 5 reactive bindings — need NO changes.
File layout after the split:
src/engine.ts — main-thread proxy (lazy worker init,
postMessage protocol, pending request map,
status broadcast handling, convenience
wrappers for prompt/extractJson/classify)
src/worker.ts — Web Worker entry point (typed message
protocol, single LocalLLMEngineImpl instance,
forwards status changes back to main thread)
src/engine-impl.ts — the actual transformers.js engine (renamed
from the previous engine.ts contents). NOT
exported from index.ts — only the worker
imports it. Same two-step tokenization,
aggregated progress reporting, streaming
token handling as before; just running in
a different thread now.
Worker construction uses Vite's documented `new Worker(new URL(
'./worker.ts', import.meta.url), { type: 'module' })` pattern, which
makes Vite split worker.ts (and its transformers.js dep) into its
own bundle chunk at build time. The proxy is lazy-init: the Worker
constructor is never touched at module-import time, so SSR stays
clean (Worker doesn't exist on Node).
Message protocol (typed end-to-end):
Main → Worker:
{ id, type: 'load', modelKey: ModelKey }
{ id, type: 'unload' }
{ id, type: 'generate', opts: SerializableGenerateOptions }
{ id, type: 'isReady' }
Worker → Main:
{ id, type: 'result', data?: unknown }
{ id, type: 'error', message: string }
{ id, type: 'token', token: string } — streaming chunk
{ type: 'status', status: LoadingStatus } — broadcast
The proxy assigns a unique id per request, stores the resolve/reject
+ optional onToken callback in a Map<id, PendingRequest>, and routes
incoming responses by id. Status messages have no id and fire every
registered status listener — same UX as before, just one extra hop.
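The routing described above can be reduced to a discriminated union plus a pending-request map (type and field names follow the commit; internals are assumptions):

```typescript
// Reduced sketch of the proxy's response routing. Message shapes follow
// the commit text; class and field names are assumptions.
type WorkerToMain =
  | { id: number; type: 'result'; data?: unknown }
  | { id: number; type: 'error'; message: string }
  | { id: number; type: 'token'; token: string }
  | { type: 'status'; status: string };

interface PendingRequest {
  resolve: (data: unknown) => void;
  reject: (err: Error) => void;
  onToken?: (token: string) => void;
}

class ProxyRouter {
  private pending = new Map<number, PendingRequest>();
  private statusListeners: Array<(s: string) => void> = [];

  register(id: number, req: PendingRequest) { this.pending.set(id, req); }
  onStatus(fn: (s: string) => void) { this.statusListeners.push(fn); }

  route(msg: WorkerToMain): void {
    if (msg.type === 'status') {
      // Broadcast: no id, fan out to every listener.
      for (const fn of this.statusListeners) fn(msg.status);
      return;
    }
    const req = this.pending.get(msg.id);
    if (!req) return;
    if (msg.type === 'token') { req.onToken?.(msg.token); return; }
    this.pending.delete(msg.id); // result/error are terminal
    if (msg.type === 'result') req.resolve(msg.data);
    else req.reject(new Error(msg.message));
  }
}
```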
Streaming: the worker re-attaches the streaming callback on its
side. Each emitted token gets posted back as `{ id, type: 'token',
token }` and the proxy invokes the original `onToken` callback. The
final `result` arrives as a normal response and resolves the
Promise. From the caller's perspective generate() still feels
identical — same async iterable feel via onToken, same return value.
Worker termination on unload: transformers.js doesn't expose a
dispose API, so we terminate the worker after unload and create a
fresh one on the next load. This is the only reliable way to
release VRAM between model swaps.
CSP: no header changes needed. The worker is loaded from a
same-origin URL (Vite emits it as
/_app/immutable/workers/worker.[hash].js), so 'self' in script-src
already covers it. The blob: + cdn.jsdelivr.net + wasm-unsafe-eval
allowlists we added during the original WebLLM/transformers.js
bring-up still apply because the worker still runs the same ONNX
runtime that needed them.
DistributiveOmit type helper: TS's plain `Omit<Union, K>` is
non-distributive: applied to a discriminated union it produces a single
type with only the keys common to all members, which broke the type
narrowing at the postRequest call sites for each request variant.
Adding a tiny `DistributiveOmit<T, K>` helper fixes the type-check
without restructuring the protocol.
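The helper is the standard conditional-type trick; a sketch with a simplified protocol (the two-variant union here is an illustration, not the real message set):

```typescript
// Plain Omit over a union collapses it; distributing via a conditional
// type applies Omit to each member separately and preserves the union.
type DistributiveOmit<T, K extends keyof any> = T extends unknown
  ? Omit<T, K>
  : never;

// Simplified stand-in for the main->worker request union.
type WorkerRequest =
  | { id: number; type: 'load'; modelKey: string }
  | { id: number; type: 'unload' };

// Callers supply everything but the id; the proxy assigns it.
type RequestBody = DistributiveOmit<WorkerRequest, 'id'>;

function withId(id: number, body: RequestBody): WorkerRequest {
  return { id, ...body } as WorkerRequest;
}
```

With plain `Omit<WorkerRequest, 'id'>` the `modelKey` field would be dropped entirely, since only `type` is common to both variants.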
What this commit deliberately does NOT do:
- Change the public API surface. The whole point is that callers
remain untouched.
- Add multi-tab worker coordination via SharedWorker or
BroadcastChannel. Each tab still spawns its own dedicated worker
with its own copy of the model in VRAM. Multi-tab dedup is
Phase 2.5/Phase 4 work — see the design doc summary in the
previous Phase 1 commit message.
- Add a persistent task queue. Fire-and-forget background tasks
are Phase 4.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Pre-launch theme system audit found multiple parallel layers in themes.css
(--theme-X full hsl strings, --X partial shadcn aliases, --color-X populated
by runtime store with raw channels) plus dead-code companion files. The
inconsistency caused light-mode regressions when scoped-CSS consumers
wrote `var(--color-X)` standalone: the variable holds raw HSL channels,
which is invalid as a color value, so the browser fell back to the
inherited color (white in light mode).
Rewrite to one consistent layer:
- Source of truth: --color-X defined as raw HSL channels (e.g.
`0 0% 17%`) in :root, .dark, and all variant [data-theme="..."]
blocks. Matches the format the runtime store
(@mana/shared-theme/src/utils.ts) writes, eliminating the
static-fallback-vs-runtime mismatch and the corresponding flash
of unstyled content on hydration.
- @theme inline uses self-reference + Tailwind v4 <alpha-value>
placeholder so utility classes generate correctly AND opacity
modifiers work: `text-foreground/50` → `hsl(var(--color-foreground) / 0.5)`.
- @layer components (.btn-primary, .card, .badge, etc.) wraps
var(--color-X) refs with hsl() — they were broken in light mode
too for the same reason.
Convention going forward (also documented in the file header):
1. Markup: use Tailwind utility classes (text-foreground, bg-card, …)
2. Scoped CSS: hsl(var(--color-X)) — always wrap with hsl()
3. NEVER raw var(--color-X) in CSS — that's the bug pattern
Net file: 692 → 580 LOC. Single source layer, no indirection.
Also delete dead companion files (zero imports anywhere):
- tailwind-v4.css (had broken self-reference, never imported)
- theme-variables.css (legacy hex-based palette)
- components.css (legacy component utilities)
- index.js / preset.js / colors.js (Tailwind v3 preset format,
irrelevant under Tailwind v4)
package.json exports map shrinks accordingly to just `./themes.css`.
Consumers using `hsl(var(--color-X))` (~379 files across mana-web,
manavoxel-web, arcade-web) keep working unchanged — the public API
name `--color-X` is preserved. Only the broken pattern `var(--color-X)`
(~61 files) needs a follow-up sweep, handled in a separate commit.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds a single, deduped notification mechanism for the moments when the
app needs to nudge a signed-out user toward registration / login —
replacing the silent failures that used to happen when a guest hit a
server-only feature.
──── New: lib/stores/guest-prompt.svelte.ts ────
A mana-web-local singleton with a tiny API:
guestPrompt.requireAccount(featureKey, message?, actionLabel?)
guestPrompt.dismiss(id)
guestPrompt.clear()
Notifications are deduped by featureKey, so three failed LLM calls in
a row don't stack three identical "you need an account" stripes. The
"Anmelden" ("sign in") button uses an injectable navigator (via setGuestPromptNavigator)
so the layout can wire SvelteKit's goto for client-side routing
instead of the default window.location.assign fallback.
──── Hooked into api/base-client.ts ────
fetchWithRetry now short-circuits BEFORE the network call when the
user is unauthenticated — surfaces guestPrompt.requireAccount('api')
and returns "Anmeldung erforderlich" ("login required") instead of
burning the round-trip to a server that's just going to 401. This saves
latency and log noise, and gives a better German error message than
"Sitzung abgelaufen" ("session expired").
When a 401 does come back from the server (token expired mid-session),
fetchWithRetry calls guestPrompt.requireAccount('api:401', ..., 'Neu anmelden')
('sign in again') in addition to the existing return-with-error path.
Both hooks live in one central place so every feature that uses
createApiClient(...) — LLM, subscriptions, profile, credits, events,
gifts, etc. — gets the prompt for free without per-module changes.
──── Rendered in routes/(app)/+layout.svelte ────
The bottom-stack already had a NotificationBar slot for the time-based
guest nudge from createGuestMode (3-min trigger). The new event-driven
prompts merge into the SAME bar via array spread — one stripe, no
visual stacking — so the user only ever sees one band even when both
sources have something to say.
handleAuthReady() also calls guestPrompt.clear() when the user signs
in, so leftover guest prompts don't carry into the authenticated session.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two related issues in the encryption pipeline that were both surfacing
as silent failures when a user tried to log a mood / write to any
encrypted field shortly after page load or while signed out:
1. Boot-time race
The layout fires authStore.initialize() and vaultClient.unlock()
in the same tick. The very first user mutation can land before the
network round-trip that fetches the master key returns. encryptRecord
then synchronously sees a null key and throws VaultLockedError —
surfacing in the UI as "click does nothing" because nothing in the
call chain catches it.
Fix: KeyProvider gets a waitForKey(timeoutMs) method.
MemoryKeyProvider implements it via the existing onChange listener,
so callers resume as soon as setKey lands. encryptRecord now waits
up to 2 s before throwing, which converts a near-miss race into a
transparent millisecond delay.
2. Guest plaintext fallback (Option A in the chat thread)
Guests have no auth token, so the server vault is unreachable by
definition. Refusing every encrypted-field write would hide the
bulk of the app behind a sign-up wall — undesirable for the
try-before-you-buy local-first flow.
Fix: encryptRecord now silently no-ops when getCurrentUserId() is
null, writing plaintext to the local Dexie. guest-migration.ts
waits for the vault (10 s budget) and then encrypts the registry
fields per-table BEFORE the re-insert, so the on-disk state after
sign-in matches "user signed up first, then typed everything".
If the vault never opens (auth/network failure on /me/encryption-vault),
migration aborts cleanly — guest data stays put rather than being
re-inserted as plaintext under the real user id.
UI side: cycles/ListView.svelte wraps every dayLogsStore.logDay call
in a safeLogDay helper that catches VaultLockedError and surfaces a
toast pointing the user at Settings → Sicherheit (Security). Previously the
unhandled rejection from a click handler vanished into the console.
Tests:
- record-helpers.test.ts now stamps a fake current user in beforeEach
so the new guest-skip doesn't no-op the encryption asserts. The
"throws when locked" test uses fake timers to flush the 2 s wait
without sitting on it.
- aes.test.ts: anonymous-class KeyProvider stub gains the new
waitForKey method to satisfy the interface.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Bundles two unrelated changes that landed together due to a concurrent
lint-staged race in a multi-session edit. Splitting after the fact would
churn the parallel session's working tree, so the message is amended to
honestly describe both pieces.
──── 1. feat(shared-llm): tiered LLM orchestrator (Phase 1) ────
Replaces the unused NestJS @mana/shared-llm package with a tiered LLM
orchestrator that routes Mana tasks across four user-controlled privacy
tiers:
none — deterministic parsers / heuristics, no LLM
browser — Gemma 4 E2B via @mana/local-llm (WebGPU, on-device)
mana-server — services/mana-llm with Ollama (gemma3:4b on Mac Mini)
cloud — services/mana-llm with google/* model (Gemini API)
The user picks which tiers Mana is allowed to use. The orchestrator
walks the user's tier list in order, picks the first one that's
available + ready + permitted for the input's content class, and runs
the task. If everything fails, it falls through to a per-task
deterministic runRules() implementation when one is provided.
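The tier walk can be sketched as follows (interface names here are assumptions, not the package's real API):

```typescript
// Hedged sketch of the orchestrator's tier selection: walk the user's
// ordered tier list and take the first backend that is ready and
// permitted for the input's content class.
type Tier = 'none' | 'browser' | 'mana-server' | 'cloud';

interface TierBackend {
  tier: Tier;
  isReady(): boolean;
  permits(contentClass: string): boolean;
}

function pickTier(
  userOrder: Tier[],
  backends: Map<Tier, TierBackend>,
  contentClass: string,
): TierBackend | undefined {
  for (const tier of userOrder) {
    const backend = backends.get(tier);
    if (backend && backend.isReady() && backend.permits(contentClass)) {
      return backend;
    }
  }
  return undefined; // caller falls through to runRules() when provided
}
```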
Package shape moved from NestJS-style (Module/Service/__tests__/) to a
flat browser-package layout (deps are @mana/local-llm + svelte peer).
All NestJS legacy files deleted: __tests__/, interfaces/, types/,
utils/, llm-client*.ts, llm.module.ts, standalone.ts, etc.
Phase 2 (UI work — settings page section, onboarding step, source
badge component, cloud-consent dialog) is a follow-up and does not
block this commit. The orchestrator is fully functional from the
Router tab right now.
──── 2. fix(mana/web): unwrap $state proxy in workbench-scenes ────
Adding an app to a workbench scene threw DataCloneError. scenesState
is a $state array, so current.openApps was a Svelte 5 proxy and
spreading it into a new array left proxy entries inside; IndexedDB's
structured clone refuses to serialise those. Snapshot before handing
the array to patchScene / createScene so Dexie sees plain objects.
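The failure reproduces outside Svelte with a raw Proxy, since structuredClone (the algorithm IndexedDB uses) rejects proxy objects outright:

```typescript
// Minimal reproduction of the DataCloneError class of bug, using a raw
// Proxy in place of Svelte's $state proxy.
const state = { openApps: new Proxy(['notes', 'todo'], {}) };

// A shallow spread copies the proxy reference, so the clone still fails.
const shallow = { ...state };
let failed = false;
try {
  structuredClone(shallow);
} catch {
  failed = true; // DataCloneError: proxies are not structured-cloneable
}

// A deep plain copy (what $state.snapshot produces) clones fine.
const plain = { openApps: [...state.openApps] };
const cloned = structuredClone(plain);
```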
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Creating a new question crashed in mana-sync's pending-change hook
with DataCloneError because the tags array is a Svelte 5 $state proxy
and IndexedDB / structured-clone can't serialize proxies.
Surfaced while clicking through the deep-research feature end-to-end
in the browser — the form would silently fail to submit with the error
buried in the console.
Use $state.snapshot() to deep-clone tags into a plain array before
persisting. Other fields are primitives so they're already plain.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
First file in the P5 visual-track consolidation. Replaces hand-rolled
#374151/#9ca3af/#e5e7eb palette + paired :global(.dark) duplications
with var(--color-foreground), var(--color-muted-foreground),
var(--color-border), var(--color-surface-hover) etc. — the same
theme tokens already used by shared-ui Card / PageHeader / FormModal.
Net: 260 → 245 LOC (-15), 7 hand-rolled rules eliminated, all
:global(.dark) selectors gone (theme system handles light/dark via
.dark class on <html>).
Brand colors that should NOT track the theme primary stay hardcoded:
the violet category badge (#8b5cf6) keeps its literal value, the
favorite-active red gets var(--color-error, #ef4444) with fallback.
This is the smallest of the 8 files identified by the P5 audit:
zitare DetailView (7 rules) → cycles → calendar → contacts → todo →
finance → notes → dreams (31 rules). Migration runs file-by-file
with one commit each so visual diffs are easy to review.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add packages/local-llm/CLAUDE.md as the package-level reference for
browser-local LLM inference. The package went through a non-trivial
engine swap from WebLLM/Qwen to transformers.js/Gemma 4 E2B on
2026-04-08, and the bring-up surfaced enough sharp edges that the
next person (or AI agent) touching this code will save real time
having them written down in one place rather than re-discovering
them error by error.
Captured topics:
- What the package is, what library/model is currently used, and
the deliberate engine-agnostic API surface that lets future swaps
stay contained to this package.
- Why we chose transformers.js + Gemma 4 over staying on WebLLM
(MLC compilation lag for new model architectures) and what the
return path looks like once MLC ships Gemma 4 builds.
- The seven CSP directives that browser-local inference needs and
WHY each one is required:
* script-src: 'wasm-unsafe-eval', cdn.jsdelivr.net, blob:
* connect-src: huggingface.co + *.huggingface.co + cdn-lfs-*,
*.hf.co + cas-bridge.xethub.hf.co (XET CDN),
cdn.jsdelivr.net (for the WASM preload fetch)
Including the subtle "jsDelivr is needed in BOTH script-src and
connect-src" trap that produces identical-looking error messages
for two distinct underlying causes.
- The Vite SSR module-cache gotcha: CSP additions made in
packages/shared-utils/security-headers.ts do NOT hot-reload across
the workspace package boundary, while additions made directly in
apps/mana/apps/web/src/hooks.server.ts do. Includes the diagnostic
pattern (compare which additions show up in the next CSP error
vs which don't) and the workaround (move them into hooks.server.ts
via setSecurityHeaders options).
- The two-step tokenization pattern that's mandatory for
Gemma4Processor: apply_chat_template(tokenize:false) → string, then
processor.tokenizer(text, return_tensors:'pt'). The collapsed
apply_chat_template(return_dict:true) path looks shorter but
produces a malformed input shape and crashes model.generate() deep
inside the forward pass with "Cannot read properties of null
(reading 'dims')" — opaque from the call site.
- The transformers.js v4 quirk that model.generate() returns null
(not a tensor) when a TextStreamer is attached. The streamer is
the only stable text channel; the engine always attaches one and
uses the streamer's collected text as the canonical output, with
a chars/4 fallback for token counts.
- API surface (Svelte 5 example), how to add a new model to the
registry, deploy notes (no base image rebuild needed for local-llm
changes alone, but IS needed if shared-utils CSP defaults change),
browser cache semantics, and hard browser support requirements
(WebGPU, ~1.5–2 GB VRAM for E2B q4f16, no CPU/WASM fallback).
Also link to the new doc from the root CLAUDE.md Shared Packages
table so people land on it from the standard discovery path.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The previous attempt to fix the "Cannot read properties of null
(reading 'dims')" chat error was incomplete: I only stopped passing
the bogus return_tensor:'pt' option to apply_chat_template. The
underlying issue was that apply_chat_template's all-in-one mode
(return_dict:true) does not produce a proper Tensor-backed
{ input_ids, attention_mask } pair for multimodal-capable processors
like Gemma4Processor — it returns a shape that has no .dims on
input_ids, so model.generate() crashes deep inside the forward pass
the moment it tries to read the sequence length.
Switch to the documented two-step pattern from the Gemma 4 model
card: call apply_chat_template with tokenize:false to get the
formatted prompt as a plain string, then run that string through
processor.tokenizer with return_tensors:'pt' to get a proper Tensor
pair. The tokenizer's return_tensors option is the *Python*
convention and IS supported by transformers.js's Tokenizer class
(the API name collision between apply_chat_template's return_tensor
boolean and Tokenizer's return_tensors string is one of those nasty
spots where the JS port intentionally diverges from Python).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
First end-to-end Gemma 4 inference attempt threw "Cannot read
properties of null (reading 'dims')" the moment a chat message was
sent. Two bugs piled on top of each other:
1. apply_chat_template() was being called with `return_tensor: 'pt'`,
which is the Python `transformers` convention. transformers.js's
equivalent option is just a boolean (the default), and the string
'pt' is unrecognized — older versions silently ignored it, but the
v4 code path now produces a less predictable input shape when it
sees the unknown value. Drop it.
2. model.generate() in transformers.js v4 returns null (not a tensor)
when a streamer is attached. The previous engine code only attached
a streamer if the caller passed an `onToken` callback, then
unconditionally tried to slice the tensor return for token counting
— which crashed because the chat tab DOES pass onToken for live
streaming. The streamer collected the text fine, but generate()
returned null and our tensor read blew up.
Restructure so the streamer is always attached and is the canonical
text channel. The tensor return is now only used for token counting
when present, and falls back to a chars/4 estimate when it isn't, so
the /llm-test UI still shows roughly meaningful prompt/completion
counts on either v3 (returns tensor) or v4 (returns null with
streamer). The user-facing GenerateResult.content now always comes
from the streamer's accumulated string instead of decoding the
tensor's sliced suffix, which is more robust across versions.
Also wrap the model.generate() call in try/catch so that versions
of transformers.js that throw at end-of-streaming (after the
streamer has already delivered all tokens) don't lose the answer.
We only re-throw if the streamer collected nothing.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
After jsDelivr was allowlisted, transformers.js progressed one step
further: it successfully fetched the ort-wasm-simd-threaded.asyncify.mjs
loader, then tried to wrap it in a `URL.createObjectURL(new Blob([...]))`
and instantiate the result as a multi-threaded Web Worker. The blob:
URL scheme is treated as its own CSP source by browsers, so the
existing script-src — which only allows 'self', specific HTTPS hosts,
and 'wasm-unsafe-eval' — blocked it.
Add `blob:` to the mana-web scriptSrc list. The blob: scheme always
inherits the document origin (you can't `URL.createObjectURL` a Blob
from another origin), so this allowlists nothing more than our own
runtime-generated worker scripts. It does NOT loosen the protection
against remote-script injection.
Worth knowing for future debugging: when transformers.js or any
WebGPU/onnxruntime-web stack hits "Failed to fetch dynamically
imported module: blob:..." after a successful dynamic import from
a CDN, the next CSP layer to check is blob: in script-src.
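A toy model of the matching rule, to make the "blob: is its own source" point concrete (this deliberately ignores the many other rules of real CSP matching; the worker URL and host are hypothetical):

```typescript
// blob: is a scheme-source keyword: a blob: URL matches script-src only if
// the scheme itself is listed, no matter which origin created the Blob.
function schemeAllowed(scriptSrc: string[], url: string): boolean {
  const scheme = url.split(":")[0] + ":";
  return scriptSrc.includes(scheme);
}

const before = ["'self'", "https://cdn.jsdelivr.net", "'wasm-unsafe-eval'"];
const after = [...before, "blob:"];

// Runtime-generated worker script, as produced by URL.createObjectURL().
const workerUrl = "blob:https://mana.example/3f1c...";
```

Under this model the worker script is blocked with `before` and allowed with `after`, which is exactly the failure/fix pair described above.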
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The previous two attempts at allowlisting cdn.jsdelivr.net for
transformers.js's onnxruntime-web loader landed in shared-utils
security-headers.ts. The actual file change was correct (verified by
grep), the commits got pushed, the live security-headers.ts on disk
had the additions — but Vite's SSR module cache for cross-workspace-
package imports kept serving the OLD compiled shared-utils to
hooks.server.ts. Net effect: edits to hooks.server.ts hot-reloaded
fine (proven by the *.hf.co connect-src additions showing up
immediately) while edits to shared-utils/security-headers.ts did not.
A dev server restart should clear it, but I'd rather not depend on
manual intervention every time we touch the shared CSP.
Move the jsdelivr allowlist out of the shared default and into
mana-web's hooks.server.ts via the existing scriptSrc + connectSrc
options. hooks.server.ts is in the SvelteKit app's own source tree so
it HMRs reliably, no SSR cache to fight. As a bonus this is also
architecturally cleaner: cdn.jsdelivr.net is only needed by mana-web
because mana-web is the only Mana app that bundles @mana/local-llm —
other apps get a slightly tighter CSP for free.
The pattern to remember: changes to packages/shared-utils that affect
SSR (response headers, server hooks) require either a dev server
restart OR a manual `rm -rf apps/.../node_modules/.vite` to take
effect. Client-side changes hot-reload fine.
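The per-app override pattern looks roughly like this (the option names follow the commit's description of shared-utils' scriptSrc/connectSrc options, but the exact API is an assumption of this sketch):

```typescript
type CspOverrides = { scriptSrc?: string[]; connectSrc?: string[] };

// Shared default stays tight; each app layers on only what it needs.
function mergeCsp(
  defaults: Required<CspOverrides>,
  extra: CspOverrides,
): Required<CspOverrides> {
  return {
    scriptSrc: [...defaults.scriptSrc, ...(extra.scriptSrc ?? [])],
    connectSrc: [...defaults.connectSrc, ...(extra.connectSrc ?? [])],
  };
}

const shared = {
  scriptSrc: ["'self'", "'wasm-unsafe-eval'"],
  connectSrc: ["'self'"],
};

// mana-web's hooks.server.ts: the only app bundling @mana/local-llm,
// so the only app that pays for the CDN allowlist.
const manaWeb = mergeCsp(shared, {
  scriptSrc: ["https://cdn.jsdelivr.net", "blob:"],
  connectSrc: ["https://cdn.jsdelivr.net"],
});
```

Because the override lives in the app's own source tree, it hot-reloads like any other hooks.server.ts edit, and every other app keeps the tighter shared default.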
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two QuickActionsWidget files lived in parallel under different widget
systems — `dashboard/widgets/` (the user-customizable dashboard, i18n
keys, 3 actions: credits/feedback/profile) and `core/widgets/` (the
mana home screen, hardcoded German strings, 5 actions: todo/calendar/
contacts/context/times). The two rendered the same shape character-
for-character: optional emoji-prefixed title + a list of rounded-card
links each with icon + label + description. Only the data and the
padding/icon sizing differed slightly.
Extract <QuickActionsList> in $lib/components that takes the actions
array directly (consumers resolve i18n before passing in). Both widget
files become thin wrappers — the dashboard one resolves $_(...) keys
and passes the result, the core one passes its hardcoded data with
`compact` set.
LOC: 110 → 102 across the 3 files (-8 net, plus the shared 70-LOC
molecule). Small numerically, but the bigger win is that future
changes to the link layout (hover state, padding, icon style) happen
once instead of twice — and the two widget files no longer accidentally
drift in sizing/spacing.
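The component contract described above could look like this (prop and field names are illustrative, not the real molecule's API; the i18n resolver is a stand-in for $_(...)):

```typescript
interface QuickAction {
  href: string;
  icon: string;        // icon name or emoji
  label: string;       // already-resolved display string
  description: string;
}

interface QuickActionsListProps {
  title?: string;      // optional emoji-prefixed heading
  actions: QuickAction[];
  compact?: boolean;   // tighter padding/icon sizing for the core home screen
}

// The dashboard wrapper resolves i18n keys first and passes plain strings,
// so the shared list stays translation-agnostic; key scheme is hypothetical.
function resolveActions(t: (key: string) => string, keys: string[]): QuickAction[] {
  return keys.map((k) => ({
    href: `/${k}`,
    icon: k,
    label: t(`quickActions.${k}.label`),
    description: t(`quickActions.${k}.description`),
  }));
}
```

The core wrapper skips resolveActions entirely and passes its hardcoded German strings with `compact: true`.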
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The earlier fix added cdn.jsdelivr.net to script-src so the dynamic
import() of onnxruntime-web's loader .mjs would resolve. But that's
only half the story: transformers.js also issues plain fetch() calls
to PRE-LOAD the .wasm binary and the .mjs factory before the backend
selection code path is even reached. fetch() is governed by
connect-src, not script-src, so the wasm preload was still blocked
with "Failed to pre-load WASM binary: TypeError: Failed to fetch".
The visible downstream symptom was identical to the previous bug
("no available backend found. ERR: [webgpu] TypeError: Failed to
fetch dynamically imported module"), which made it look like the
script-src fix hadn't taken effect.
Add cdn.jsdelivr.net to the default connect-src too, alongside the
existing script-src entry, with a comment explaining why both are
required.
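The two-directive requirement in one place (directive names are real CSP; the builder itself is a simplified illustration, not the shared-utils implementation):

```typescript
// Serialize a directive map into a Content-Security-Policy header value.
function buildCsp(directives: Record<string, string[]>): string {
  return Object.entries(directives)
    .map(([name, sources]) => `${name} ${sources.join(" ")}`)
    .join("; ");
}

const cdn = "https://cdn.jsdelivr.net";
const csp = buildCsp({
  // governs the dynamic import() of the ort-wasm loader .mjs
  "script-src": ["'self'", "'wasm-unsafe-eval'", cdn],
  // governs the fetch() pre-load of the .wasm binary and .mjs factory
  "connect-src": ["'self'", cdn],
});
```

Dropping the host from either directive reproduces one of the two failure modes: script-src only blocks the pre-load fetch, connect-src only blocks the dynamic import.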
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two issues hit while loading Gemma 4 E2B in /llm-test for the first
time on a local dev server.
1. CSP script-src blocked cdn.jsdelivr.net.
@huggingface/transformers v4 lazy-loads the onnxruntime-web WASM
loader shim via a runtime dynamic `import()` from
cdn.jsdelivr.net/npm/onnxruntime-web@... at backend selection time
(the package itself is bundled, but the WASM-loader is fetched on
demand so the static bundle stays small). With the previous CSP the
import was blocked and "no available backend found" was the only
downstream error. Allowlist cdn.jsdelivr.net in the shared CSP
script-src so every Mana web app picks this up automatically.
2. Loading bar oscillated wildly during the model download.
transformers.js downloads many shards in parallel (config.json,
tokenizer.json, generation_config.json, model.onnx, model_data.bin,
…) and fires the progress callback per file. The previous engine
code reported the latest event verbatim, so the bar bounced
between whichever file happened to be progressing fastest.
Replace per-file reporting with a Map<file, {loaded, total}>
accumulator and emit an aggregated total on every event. The
denominator can grow as new files are discovered (causing brief
small dips), but both numerator and denominator are individually
monotonic, so the aggregate is much smoother. Also include a
human-readable byte count and file count in the status text:
Downloading model (47%, 240 MB / 510 MB, 8 files)
Pin completed files to 100% on the 'done' event so the final
aggregate visibly hits 100% before the loading→ready transition.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Nine files (8 ListViews + todo's TaskList) reimplemented the same
context-menu state machinery character-for-character: a typed
$state object with visible/x/y/<itemKey>, a handleItemContextMenu
function that calls preventDefault and stuffs the click position
in, and a close handler that resets the entity field.
Extract `useItemContextMenu<T>()` in $lib/data/item-context-menu.svelte
that returns a reactive handle with `.state` (visible/x/y/target),
`.open(e, target)`, and `.close()`. Consumers derive their menu
items from `ctxMenu.state.target` and pass `ctxMenu.close` directly
to <ContextMenu onClose>.
Per file: ~10 LOC of state declaration + handler removed; consumer
items array switches from `ctxMenu.<entity>` to `ctxMenu.state.target`.
Across the 9 files this is ~−90 LOC of pure boilerplate; helper itself
is 50 LOC. Net small (~−40 LOC) but the boilerplate is gone and the
shape is one helper away from being adjustable globally.
Note: shared-ui already exports a `createContextMenuState` factory,
but it's a plain default-value object — not a Svelte 5 reactive
helper. This new wrapper composes with the existing `ContextMenuState<T>`
type from shared-ui rather than replacing it.
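A framework-agnostic sketch of the helper's contract (the real implementation wraps the state in a Svelte 5 $state rune so consumers get reactivity; a plain object stands in here, which is enough to show the shape):

```typescript
interface ContextMenuState<T> {
  visible: boolean;
  x: number;
  y: number;
  target: T | null;
}

// Minimal structural stand-in for the DOM MouseEvent fields we use.
interface ContextMenuEventLike {
  clientX: number;
  clientY: number;
  preventDefault(): void;
}

function useItemContextMenu<T>() {
  const state: ContextMenuState<T> = { visible: false, x: 0, y: 0, target: null };
  return {
    state,
    open(e: ContextMenuEventLike, target: T) {
      e.preventDefault(); // suppress the browser's native context menu
      state.visible = true;
      state.x = e.clientX;
      state.y = e.clientY;
      state.target = target;
    },
    close() {
      state.visible = false;
      state.target = null;
    },
  };
}
```

A consumer derives its menu items from `menu.state.target` and passes `menu.close` straight to the ContextMenu's onClose, which is the ~10 LOC per file this replaces.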
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>