Commit graph

534 commits

Till JS
c2a75bb8e1 feat(shared-types): add Zod schemas for AI structured outputs
Introduces packages/shared-types/src/ai-schemas.ts as the single source
of truth for the wire format between mana-api and the unified Mana app.

Two schemas:
  - MealAnalysisSchema (foods, totalNutrition, description, confidence,
    warnings, suggestions) — consumed by nutriphi /analysis/photo and
    /analysis/text routes
  - PlantIdentificationSchema (scientificName, commonNames, confidence,
    health/watering/light advice, generalTips) — consumed by planta
    /analysis/identify

Both schemas include .describe() annotations on every field. The Vercel
AI SDK passes these through to the model as part of the structured-output
prompt, which materially improves accuracy on Gemini Vision (the model
sees both the field name AND the German-language hint about what to put
there).

Schemas use plain .optional() rather than .nullable() because
generateObject() guides the model with strict schema adherence — it
won't emit JSON null for missing fields, just omit them.
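As a sketch of what that means for consumers (hypothetical payload in plain TypeScript, not the real Zod schemas):

```typescript
// With .optional(), a missing field is simply absent from the JSON —
// consumers read it with `??` instead of handling an explicit null.
const payload = JSON.parse('{"description":"Pasta","confidence":0.9}') as {
  description: string;
  confidence: number;
  warnings?: string[]; // .optional(): may be omitted entirely, never null
};
const warnings = payload.warnings ?? []; // absent → safe default
```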

Deps wired up:
  - apps/api: + ai@6, + @ai-sdk/openai-compatible@2, + @mana/shared-types
  - apps/mana/apps/web: + zod (for z.infer of the shared schemas)
  - packages/shared-types: + zod (for the schema definitions themselves)

All three on zod ^3.23 to stay in lockstep with the existing
apps/api zod usage.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 16:59:28 +02:00
Till JS
92f8221bfd docs(shared-llm): correct the mana-server tier topology in code + CLAUDE.md
In commit c9e16243c (the gemma3:4b → gemma4:e4b switch) I sloppily
wrote in the ManaServerBackend docstring that mana-llm "routes them
to the local Ollama instance on the Mac Mini (running on the M4's
Metal GPU)". That is wrong AND it's the exact misconception I had
to debug my way out of earlier the same day.

The actual topology was already documented correctly in
docs/MAC_MINI_SERVER.md and docs/WINDOWS_GPU_SERVER_SETUP.md; I
just didn't read those before writing the docstring:

  mana-llm container's OLLAMA_URL points at host.docker.internal:13434
  → ~/gpu-proxy.py (Python TCP forwarder, LaunchAgent on Mac Mini)
  → 192.168.178.11:11434 (LAN)
  → Ollama on the Windows GPU server (RTX 3090, 24 GB VRAM)
  → Inference

The Mac Mini's brew-installed Ollama binary is NOT on the inference
path. It's just a CLI for inspecting the proxied daemon. Today's
"why does the Mac Mini still have Ollama 0.15.4" puzzle has the
answer "because nothing on the Mac Mini actually runs inference, the
binary version was never load-bearing".

Two doc fixes:

1. packages/shared-llm/src/backends/mana-server.ts
   Replace the lying docstring with the real topology, including a
   pointer to the two MAC_MINI_SERVER.md / WINDOWS_GPU_SERVER_SETUP.md
   sections that document it. Also note that gemma4:e4b is a
   reasoning model that emits message.reasoning when given enough
   tokens (cross-reference to remote.ts's fallback parser).

2. packages/local-llm/CLAUDE.md
   Add a paragraph at the top explaining the difference between
   "@mana/local-llm" (browser tier, on-device) and the @mana/shared-llm
   "mana-server" / "cloud" tiers (services/mana-llm proxy → gpu-proxy.py
   → RTX 3090). This was implicit before — "not related to
   services/mana-llm" — but didn't say where mana-server actually
   goes. Future me reading the doc would still have to dig through
   the docker-compose env to find out.

No code changes — only docstring + markdown.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 16:40:34 +02:00
Till JS
8adef1b39c fix(shared-llm): fall back to message.reasoning when content is empty
Reasoning-style models (Gemma 4 E4B is the first one we use, but
DeepSeek R1, Gemini 2.5 thinking, etc. behave the same way) split
their output into two fields:
  - message.content   — the final answer
  - message.reasoning — the chain-of-thought leading up to it

When the model is given too few max_tokens to finish reasoning AND
emit content, the response comes back with content="" and reasoning
populated with the half-finished thought. Verified empirically with
gemma4:e4b and `max_tokens: 10` on a "Sage Hi auf Deutsch in einem
Wort" prompt — content was "" while reasoning had "Here's a
thinking process to..." (cut off mid-thought).

For the title task this rarely matters because the system prompt is
directive enough to skip the thinking phase (verified: the same
gemma4:e4b returns clean 7-token titles like "Sonnenstrahlen
genießen heute" with the standard system prompt + max_tokens 32). But it's
a real failure mode for any future task that uses a less-directive
prompt or hits a longer reasoning chain.

Defensive fix: prefer message.content first, fall back to
message.reasoning if content is empty. The fallback is a string-or-
nothing operation, no semantic interpretation — if the reasoning
field happens to contain a usable answer fragment, the caller's
cleanup chain (e.g. generateTitleTask's strip-quotes-and-dots
pipeline) will normalize it. If it's truly half-finished thought,
the caller's runRules fallback still kicks in via the existing
empty-result detection.
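A minimal sketch of the fallback, assuming the two-field response shape described above (the real code lives in packages/shared-llm and is not reproduced here):

```typescript
// Prefer the final answer; fall back to the chain-of-thought only
// when content came back empty (token budget exhausted mid-reasoning).
interface ReasoningMessage {
  content?: string;
  reasoning?: string;
}

function messageText(message: ReasoningMessage): string {
  const content = message.content?.trim() ?? "";
  if (content.length > 0) return content; // normal case: answer present
  return message.reasoning?.trim() ?? ""; // string-or-nothing fallback
}
```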

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 16:29:22 +02:00
Till JS
a412ccc6fb feat(mana/web/body): new module — combined fitness training + body comp tracking
Adds the unified Body module that merges what would otherwise be two
separate apps (fitness + bodylog) into one. The value lives in their
intersection: tracking lifts alongside bodyweight is what enables
real progressive-overload + recomp insights, and shared primitives
(charts, time series, units, photos) avoid duplicating UI surface.

This commit lands only the data layer + module registration so the
follow-up UI / route / dashboard widget can build on a stable
foundation.

Tables (db.version(2), already in place):
  bodyExercises    — exercise library (Squat, Bench, Deadlift, OHP,
                     Row, Pull-Up seeded as presets)
  bodyRoutines     — saved workout templates
  bodyWorkouts     — one logged training session
  bodySets         — set rows inside a workout, indexed [workoutId+order]
  bodyMeasurements — weight + measurements over time, indexed [type+date]
  bodyChecks       — daily energy/sleep/soreness/mood self-rating,
                     upserted per day
  bodyPhases       — cut/bulk/maintenance/recomp phase markers, with
                     auto-close on phase change so the "active phase"
                     view always has at most one open row
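The bodyPhases auto-close behavior can be sketched like this (hypothetical in-memory version; the real implementation writes through Dexie):

```typescript
// Starting a new phase closes any still-open one, so at most one row
// ever has endDate === undefined ("the active phase").
interface BodyPhase {
  kind: string;       // cut | bulk | maintenance | recomp
  startDate: string;  // ISO date
  endDate?: string;   // undefined while the phase is active
}

function startPhase(phases: BodyPhase[], next: BodyPhase): BodyPhase[] {
  const closed = phases.map((p) =>
    p.endDate === undefined ? { ...p, endDate: next.startDate } : p,
  );
  return [...closed, next];
}
```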

Encryption (registry.ts): all 7 tables flipped to enabled. Health
data is GDPR Art. 9 special-category, so user-typed text + the
sensitive numeric fields (weight, reps, value, startWeight,
targetWeight, energy/sleep/soreness/mood) are wrapped. Indexed
columns (ids, FKs, ordering, dates, kind/type/equipment enums)
stay plaintext so the existing query layer keeps working without
decrypt-on-every-row.
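The wrap step, sketched as a pure function — `encrypt` stands in for the real crypto layer, and the field names are illustrative:

```typescript
// Field-level wrap: sensitive values encrypted, indexed columns untouched
// so Dexie queries keep working without decrypt-on-every-row.
function wrapRow(
  row: Record<string, unknown>,
  sensitive: readonly string[],
  encrypt: (plaintext: string) => string,
): Record<string, unknown> {
  const out = { ...row };
  for (const key of sensitive) {
    if (out[key] !== undefined) out[key] = encrypt(JSON.stringify(out[key]));
  }
  return out;
}
```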

Module wiring:
  - bodyModuleConfig added to module-registry.ts
  - Body app entry registered in shared-branding mana-apps.ts
    (red→orange icon to set it apart from the green health-adjacent
    modules and the pink cycles icon)
  - APP_ICONS.body added (dumbbell + heart-pulse hybrid SVG)

Also captures the broader module-ideas brainstorm in
docs/future/MODULE_IDEAS.md and marks fitness + bodylog as merged
into the new body module.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 16:28:19 +02:00
Till JS
c9e16243c8 feat(shared-llm): bump mana-server default model to gemma4:e4b
Two surprises came out of "why do we still use Gemma 3 instead of 4":

1. The hardcoded default in ManaServerBackend was `gemma3:4b`, which
   was even smaller than mana-llm's actual server-side default of
   `gemma3:12b`. My initial guess from docs/LOCAL_LLM_MODELS.md was
   conservative.

2. The mana-llm OLLAMA_URL points at host.docker.internal:13434,
   which is NOT the Mac Mini's local Ollama — it's a Python TCP
   forwarder (~/gpu-proxy.py) that proxies to 192.168.178.11:11434
   on the Windows GPU server. So title generation has been running
   on the RTX 3090 the whole time, not on the M4 Metal GPU. The
   Mac Mini's brew-installed ollama 0.15.4 wasn't even being used
   for inference — only as a CLI to inspect the proxied Ollama.

To get to Gemma 4, both Ollama instances needed an upgrade:
  - Mac Mini brew  : 0.15.4 → 0.20.4 (cosmetic, the binary isn't on
                     the inference path; upgraded for consistency)
  - GPU server     : 0.18.2 → 0.20.4 via winget. Required restarting
                     the daemon via the OllamaServe scheduled task
                     that was already configured.

Then `ollama pull gemma4:e4b` on the GPU server (9.6 GB, ~10 min on
the LAN). Verified end-to-end via the proxy with a real chat
completion request to mana-llm — gemma4:e4b answered with a clean
4-word German title for a sample voice memo prompt:

  prompt: "Erstelle einen kurzen 3-Wort Titel für: Es ist ein
           schöner Tag heute am 9. April"
  → "Schöner Tag, neuntes April"

Changes in this commit:

  packages/shared-llm/src/backends/mana-server.ts
    - defaultModel: 'gemma3:4b' → 'gemma4:e4b'
    - Updated docstring to explain why E4B is the right Mana-Server
      tier default: 9.6 GB on disk, 128K context, "Effective 4B"
      arch punches above its weight class for German prompts, and
      the family stays consistent with the browser tier (Gemma 4
      E2B is the smaller sibling) so the source label and prompt
      behavior remain coherent across tiers.

  apps/mana/apps/web/src/lib/modules/memoro/views/DetailView.svelte
    - TITLE_SOURCE_LABELS map updated:
        browser     → "Auf deinem Gerät (Gemma 4 E2B)" (was "(Gemma 4)")
        mana-server → "Mana-Server (Gemma 4 E4B)" (was "(gemma3:4b)")
    - The label now reflects that BOTH the browser and the mana-server
      tier are running Gemma 4 variants, which is more honest than
      the previous mix.

Did NOT change:
  - The Ollama OLLAMA_DEFAULT_MODEL env var in docker-compose.macmini.yml
    (still gemma3:12b). That's the fallback for callers who don't
    specify a model in their request. Our generate-title task always
    sends an explicit model string, so it's unaffected. Bumping the
    global default is a separate decision — it would change behavior
    for the playground module and any other consumer that relies on
    the implicit fallback.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 16:06:33 +02:00
Till JS
45790ffbb8 refactor(mana): rename inventar → inventory across the codebase
The workbench-registry app id 'inventar' did not match its
@mana/shared-branding MANA_APPS counterpart 'inventory', so the tier-
gating join in apps/web/src/lib/app-registry/registry.ts silently
failed for the inventory module — it fell into the "no MANA_APPS
entry, default visible" fallback and was effectively un-gated. The
codebase had also voted overwhelmingly for 'inventar' (53 files) vs
'inventory' (3 files in shared-branding), so the long-standing
mismatch was just bookkeeping debt waiting to bite.

Pre-release, no live data, so the cleanest fix is to align everything
on the English 'inventory':

- Workbench-registry id, module.config.ts appId, module folder, route
  folder and i18n locale folder all renamed via git mv
- Standalone apps/inventar/ workspace package renamed
- All imports, store identifiers (InventarEvents → InventoryEvents,
  INVENTAR_GUEST_SEED, inventarModuleConfig), i18n keys and href/goto
  paths follow the rename
- The German display label "Inventar" is preserved everywhere it is a
  user-visible string (page titles, i18n values, toast labels)
- Dexie table prefixes (invCollections, invItems, …) are unchanged
- Drive-by fix: ListView.svelte was querying non-existent
  inventarCollections/inventarItems tables — corrected to the actual
  invCollections/invItems names from module.config
- The "inventar ↔ inventory id mismatch" workaround comment in
  registry.ts is removed since the mismatch no longer exists

module-registry.ts also picks up the user's parallel newsModuleConfig
addition because both edits land in the same import block — keeping
them split would have left the build in an inconsistent state.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 15:50:24 +02:00
Till JS
e00e6f5a08 refactor(shared-branding): derive APP_URLS from APP_ICONS
The hand-maintained APP_URLS map kept silently drifting from the
AppIconId union — most recently the new 'who' entry was missing,
which crashed getPillAppItems at runtime with "Cannot read properties
of undefined (reading 'prod')". Drift was already flagged by the type
system but the error was lost in the existing svelte-check noise.

APP_URLS is now generated at module load by walking Object.keys of
APP_ICONS (the source of AppIconId), so every id is guaranteed a URL.
A small APP_URL_OVERRIDES map carries the handful of apps that don't
follow the unified mana.how/{id} pattern (root path for the unified
shell, subdomains for standalone apps like arcade).
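The derivation pattern, sketched with hypothetical ids and URLs (not the real shared-branding data):

```typescript
// Derive the URL map from the icon map so the two can never drift:
// every AppIconId is guaranteed an entry by construction.
const APP_ICONS = { who: "<svg…>", arcade: "<svg…>", memoro: "<svg…>" } as const;
type AppIconId = keyof typeof APP_ICONS;

interface AppUrl { dev: string; prod: string }

// Only apps that break the unified mana.how/{id} pattern are listed here.
const APP_URL_OVERRIDES: Partial<Record<AppIconId, AppUrl>> = {
  arcade: { dev: "http://localhost:5174/", prod: "https://arcade.mana.how/" },
};

const APP_URLS = Object.fromEntries(
  (Object.keys(APP_ICONS) as AppIconId[]).map((id): [string, AppUrl] => [
    id,
    APP_URL_OVERRIDES[id] ?? {
      dev: `http://localhost:5173/${id}`,
      prod: `https://mana.how/${id}`,
    },
  ]),
) as Record<AppIconId, AppUrl>;
```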

Adds two integrity tests as defense-in-depth: one asserts every
MANA_APPS id has a matching APP_ICONS icon, the other asserts every
AppIconId resolves to a non-empty dev+prod URL. Both would have caught
the 'who' regression on its own without needing svelte-check.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 15:14:52 +02:00
Till JS
07b130ff9e fix(shared-branding): add missing 'who' entry to APP_URLS
The 'who' app was registered in MANA_APPS but never added to APP_URLS,
so getPillAppItems crashed at runtime when mapping over apps with
"Cannot read properties of undefined (reading 'prod')". This was also
flagged by svelte-check as a missing key in the Record<AppIconId, ...>
type but had been ignored.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 15:08:02 +02:00
Till JS
a130f8e4c0 feat(mana/web): clickable page titles open route in new tab
PageShell gains an optional titleHref prop — when set, the header title
renders as an <a target="_blank"> with hover underline. Also wires this
into the homepage app gallery (shared-ui/AppsPage): the grid card title
is now an anchor to /{app.id}, while the rest of the card still opens
the existing detail modal. Card converted from <button> to role=button
so the nested anchor is valid HTML.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 14:27:51 +02:00
Till JS
b8bfc4d775 fix(branding): drop who module's required tier from beta to public
The initial requiredTier='beta' was an arbitrary RFC default — when I
first wired it up I was matching the status='beta' badge. But the
beta tier in this app means "early access via founder invite", not
"the feature is in beta". A signed-in standard user landing on /who
hit the AuthGate lock screen with "Standard < Beta required" instead
of being able to play the game.

Drop to 'public', which means "any signed-in user". The module is
still labeled status='beta' in the launcher (so it's flagged as new
+ unfinished), and the LLM calls behind it are credit-gated by the
existing chat-style consume flow — those are the actual gates that
matter for cost control.
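For illustration, a sketch of the tier comparison. The numeric ranks are an assumption; only the relative order (guest is tier 0, standard < beta, with alpha and founder above) is implied by this log:

```typescript
// Hypothetical rank table — higher rank unlocks everything below it.
const TIER_RANK: Record<string, number> = {
  guest: 0, public: 1, standard: 2, beta: 3, alpha: 4, founder: 5,
};

function hasAppAccess(userTier: string, requiredTier: string): boolean {
  return (TIER_RANK[userTier] ?? 0) >= (TIER_RANK[requiredTier] ?? 0);
}
```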

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 14:27:18 +02:00
Till JS
233cf28cf2 fix(shared-llm): switch remote backend to non-streaming, drop credentials
Diagnosis from the user's last test pinpointed the bug: mana-llm
returns totalFrames=0 (no SSE frames at all) when called from the
browser, but works perfectly when called via curl from the same host
with the same payload. Two compounding causes:

  1. credentials: 'include' in our fetch combined with mana-llm's
     CORS headers silently breaks the response body. This is the
     classic "Access-Control-Allow-Origin: * + Allow-Credentials: true"
     mismatch — browsers reject the response per spec but report it
     as a 0-byte success rather than an error.

  2. Streaming over CORS adds a second layer of fragility. Even if
     credentials weren't an issue, the browser fetch API's response
     body for SSE under CORS depends on a specific combination of
     server headers we evidently don't have.

Fix: drop both the streaming AND the credentials.

  - stream: false in the request body. Single JSON response per call,
    much friendlier to the browser fetch API.
  - No `credentials` field at all. The fetch default mode is
    'same-origin', which for a cross-origin request means no cookies
    are sent. mana-llm's API key middleware accepts anonymous
    requests, so we don't need to send any auth context.
  - Parse the response as `await res.json()` instead of streaming
    SSE chunks. Pull `choice.message.content` (or fall back to
    `choice.text` for legacy completions API responses).
  - Backwards-compatibility shim for `req.onToken`: if a caller
    registered a token callback (legacy chat-style streaming UX),
    fire it ONCE with the full content at the end. The current
    orchestrator + queue model never consumes per-token streams for
    remote tiers, so this is a degraded-but-equivalent path. The
    playground module uses its own client and isn't affected.
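The response-parsing half, as a pure helper — a sketch of the shape described above, not the actual remote.ts code:

```typescript
// Prefer the chat-style message.content, fall back to the legacy
// completions-API `text` field, else empty string.
interface CompletionJson {
  choices?: Array<{ message?: { content?: string }; text?: string }>;
}

function parseCompletion(json: CompletionJson): string {
  const choice = json.choices?.[0];
  return choice?.message?.content ?? choice?.text ?? "";
}
```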

Verified manually with curl:

  $ curl -X POST https://llm.mana.how/v1/chat/completions \
      -H 'Content-Type: application/json' \
      -d '{"model":"gemma3:4b","messages":[{"role":"user","content":"Hi"}],"max_tokens":50,"stream":false}'
  → returns clean JSON with `choices[0].message.content` populated.

  Same call with `stream: true` from the same host also works (full
  SSE frames come back). The bug really is browser+credentials
  specific, not a service bug.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 14:07:06 +02:00
Till JS
0450c86527 fix(shared-llm): SSE shape diagnostics + simpler title prompt + fragment detection
User test on the mana-server tier showed Ollama gemma3:4b returning
LITERALLY empty content for the title task, which is much weirder
than the small browser model misbehaving. Three layered fixes plus
diagnostics that will tell us what's actually happening over the
wire next time.

1. remote.ts: SSE diagnostics + liberal field shape

   The mana-llm /v1/chat/completions endpoint claims OpenAI
   compatibility, but different upstream providers (Ollama, OpenAI,
   Gemini) wrap their token text in different field paths inside
   the SSE delta. Be liberal in what we accept:
     - choice.delta.content   (canonical OpenAI)
     - choice.delta.text      (some Ollama-compat shims)
     - choice.message.content (non-streaming response embedded in stream)
     - choice.text            (legacy completion API)

   Plus: count totalFrames + dataFrames + capture firstFrameRaw +
   firstFrameParsed during the stream. When `collected` is empty at
   the end of the stream, dump all of that to console.warn so the
   next test session shows us exactly what mana-llm is sending. This
   is the only reliable way to debug "empty completion" without a
   network sniffer in the user's browser.
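   The liberal field-shape acceptance can be sketched as one extraction helper (field paths as listed above; an illustrative sketch, not the actual remote.ts code):

```typescript
// One element of an SSE frame's `choices` array — different upstream
// providers populate different paths, so try all four in order.
interface StreamChoice {
  delta?: { content?: string; text?: string };
  message?: { content?: string };
  text?: string;
}

function extractDeltaText(choice: StreamChoice): string {
  return (
    choice.delta?.content ??   // canonical OpenAI
    choice.delta?.text ??      // some Ollama-compat shims
    choice.message?.content ?? // non-streaming response embedded in stream
    choice.text ??             // legacy completion API
    ""
  );
}
```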

2. generate-title.ts: drop few-shot, use simple system+user prompt

   The previous few-shot prompt with three `Aufnahme: "..."\nTitel: ...`
   examples was apparently too much for Ollama gemma3:4b on the
   mana-server tier — it returned literal "" for reasons we don't
   fully understand (chat-template confusion with the embedded
   quotes? multi-section format? some quirk of how mana-llm formats
   the messages for Ollama?). Either way, the failure mode is clear.

   Replace with a minimal two-message format:
     - system: "Du erzeugst einen kurzen Titel (3-5 Wörter)..."
     - user: <transcript>
   Same instruction, much simpler shape. Bumped maxTokens 24 → 32
   to give the model breathing room.

3. generate-title.ts: rules fallback detects sentence fragments

   Even when the LLM fails and we fall through to runRules, the
   previous heuristic for medium-length transcripts (10-20 words)
   would extract the first 7 words verbatim — which for a typical
   "Eine kleine Testaufnahme um zu sehen ob alles funktioniert" memo
   produces "Eine kleine Testaufnahme, um zu sehen, ob" as the
   "title". That's a sentence fragment ending mid-thought, not a
   title. Worse than "Memo vom 9. April 2026".

   Add a "looks like a sentence fragment" heuristic: if the last
   word of the extracted slice is a German stop-word or article
   (und/oder/wenn/ob/zu/um/der/die/das/ein/...) the result is
   clearly mid-clause. In that case fall through to dateLabel()
   instead of writing the fragment.

   Stop-word list is curated to 30 entries — common conjunctions,
   articles, prepositions, auxiliaries. Not exhaustive but catches
   the typical "first 7 words of a German sentence" failure mode.
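   A sketch of the heuristic with an abbreviated stop-word list (the real list is curated to ~30 entries):

```typescript
// "Looks like a sentence fragment": the extracted slice ends on a German
// stop-word or article, i.e. clearly mid-clause. List abbreviated here.
const STOP_WORDS = new Set([
  "und", "oder", "wenn", "ob", "zu", "um", "der", "die", "das", "ein", "eine",
]);

function looksLikeFragment(candidate: string): boolean {
  const words = candidate
    .toLowerCase()
    .replace(/[.,!?;:]/g, "")
    .trim()
    .split(/\s+/);
  const last = words[words.length - 1] ?? "";
  return STOP_WORDS.has(last);
}
```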

After this commit lands, the next test will surface in the console
EITHER:
  - the actual delta shape mana-llm is using (so we know if our
    parser is wrong or if the model is genuinely silent)
  - a real LLM-generated title (if the simpler prompt worked)
  - "Memo vom <date>" via the rules fallback (if the LLM still
    fails but the rules fragment detection caught the bad slice)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 13:12:13 +02:00
Till JS
65c4d935d5 feat(shared-branding): register the who module in mana-apps + new icon
Two changes:

  app-icons.ts
    Add APP_ICONS.who — purple gradient theatre-mask silhouette with
    a question mark, references the "guess who's behind the disguise"
    mechanic. Stays in the same hand-rolled SVG-data-URL style as the
    other module icons (no external assets, no font dependencies).

  mana-apps.ts
    New ManaApp entry: id 'who', name 'Who', purple #a855f7,
    requiredTier 'beta', status 'beta'. Description in DE + EN
    explains the mechanic and lists the four shipping decks.
    Slotted at the end of MANA_APPS so the existing app order is
    preserved.

These are the last pieces needed for the unified Mana app launcher
to surface the new module. With this commit + the previous two, the
module is end-to-end visible: launcher → /(app)/who route → ListView
with deck picker → PlayView chat loop.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 13:10:55 +02:00
Till JS
43d19b5900 docs(shared-tailwind/themes): document hsl() wrap rule + token allowlist
Extends the top-of-file comment with the lessons learned from the P5
visual-track migration:

- Why bare var(--color-X) silently fails (browser falls back to inherit;
  the zitare white-on-white regression that triggered the rewrite).
- Concrete examples for the three rewrap patterns (plain ref, ref
  with fallback, color-mix opacity).
- The brand-literal carve-out — which palettes deliberately stay as
  literal colors and why they should not be migrated.
- The stable token allowlist + the removed names that future code
  should not reference (--color-info / --color-text / --color-destructive
  / --color-surface / --color-input) and the right replacements.
- A note on the runtime story: createThemeStore writes the same names so
  static defaults handle first paint and hydration takes over after.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 13:05:16 +02:00
Till JS
d2e44c8b65 chore(packages): remove 2 zero-consumer config packages
Item #21 in the pre-launch audit suggested merging the four
config-y packages (shared-config, shared-tsconfig, shared-vite-config,
shared-drizzle-config) into a single @mana/build-config with
conditional exports. The first reality-check of the item counted
package.json declarations and reported 5 total consumer relationships.
A second reality-check while implementing — grep over actual .ts /
.svelte / .json imports — showed two of the four packages are dead:

  - packages/shared-config/ (598 LOC, 4 TS files)
    Declared in apps/mana/apps/web/package.json but never imported
    anywhere. Stale dep from before the consolidation.

  - packages/shared-tsconfig/ (5 JSON tsconfig presets)
    Zero references anywhere. Not extended by any tsconfig.json,
    not declared in any package.json. Pure pre-consolidation
    leftover.

The remaining two packages were left intact:
  - shared-vite-config (3 real consumers in vite.config.ts files)
  - shared-drizzle-config (1 real consumer in mana-media)

They cover different toolchains (Vite SSR config vs drizzle-kit
generator config) — merging them into a single build-config would
be cosmetic, not a real reduction in complexity. Audit's "merge to
1" goal was based on the inflated consumer count and is no longer
worth doing.

Verification:
  - pnpm install completes cleanly
  - apps/api type-check still 0 errors
  - packages/shared-hono type-check still 0 errors

Net: 4 → 2 config packages, ~700 LOC dead code removed.

Also closes item #26 (non-root pnpm-lock.yaml status) — already
done in commit 034a07d16, doc was just out of date. Audit is now
29/29 items fully processed.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 12:35:40 +02:00
Till JS
c8da2267b7 fix(skilltree): define the missing branch accent colors
skilltree/types.ts has had `var(--color-branch-{intellect,body,creativity,
social,practical,mindset})` references for as long as I can grep, but those
CSS variables were never defined anywhere. Every skill in the gamified
tree was rendering inherited color (effectively invisible accent), making
the 6 branches visually indistinguishable.

Add the 6 colors as a new "domain accent" section in shared-tailwind/themes.css,
defined once at :root and never overridden by .dark or variant blocks
(they're brand-internal accents, not theme-aware — the same way cycles
keeps its brand pink literal).

  - intellect → blue (217 91% 60%) — knowledge, thinking
  - body      → red (0 84% 60%) — physical, energy
  - creativity → violet (271 91% 65%) — art, expression
  - social    → amber (38 92% 50%) — warmth, relationships
  - practical → teal (173 80% 40%) — craft, tools
  - mindset   → green (142 71% 45%) — calm, growth

Also update skilltree/types.ts to wrap the var() calls with hsl() per
the canonical convention (the values are now raw HSL channels).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 12:23:55 +02:00
Till JS
e52b6e29fe chore(shared-branding): set all apps to requiredTier 'guest' (testing only)
Temporarily flips every MANA_APPS entry from public/beta/alpha/founder
to 'guest' so the tier-gated workbench picker, openApps soft-filter,
and (app)/+layout per-route gate can be exercised end-to-end without
needing a tier upgrade. The hasAppAccess hierarchy is unchanged —
guests are still tier 0; this just makes every app's threshold also 0.

Revert before any release. Only the 36 in-app entries are touched;
function signatures and type definitions stay intact.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 12:14:02 +02:00
Till JS
488489944d chore(packages): remove 4 dead zero-consumer packages
Pre-launch audit found 4 packages with zero workspace consumers
that were leftover from before the consolidation:

  - @mana/cards-database (1475 LOC)
    Pre-consolidation flashcard backend with its own Docker Compose
    and Drizzle config. Replaced by the cards module in the unified
    Mana app: apps/mana/apps/web/src/lib/modules/cards/. Now uses
    Dexie + mana-sync against mana_platform.

  - @mana/shared-api-client (1110 LOC)
    Generic Go-style {data, error} REST client. Only reference left
    was a string entry in shared-vite-config's noExternal list (not
    a real import).

  - @mana/shared-errors (1791 LOC)
    NestJS-coupled exception filter package from before the Hono
    migration. The Hono replacement (serviceErrorHandler in
    @mana/shared-hono) ships in a separate commit. Result<T,E> +
    ErrorCode enum bits had no consumers and weren't worth saving
    standalone — if a need emerges they can grow organically.

  - @mana/shared-splitscreen (694 LOC)
    Side-by-side panel layout components. No code consumers; only
    referenced from shared-vite-config noExternal and an old design
    doc. The unified Mana app uses its own workbench scenes for
    multi-pane layouts.

Verified zero code consumers via grep across .ts/.svelte/.json
before deletion. apps/api type-check stays at 0 errors after the
sweep, mana-auth tests still 19/19 passing.

Also clean packages/shared-vite-config/src/index.ts noExternal
list while we're here: drop the two deleted entries plus 8 ghost
packages (shared-feedback-ui/-service/-types, shared-help-ui/
-types/-content, shared-profile-ui, shared-subscription-ui) that
were referenced by name but never existed in packages/. List goes
from 22 → 12 entries.

Net: ~5070 LOC + workspace declarations removed.

Tracked as item #29 in docs/REFACTORING_AUDIT_2026_04.md.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 11:56:25 +02:00
Till JS
3b5d58ecbe feat(shared-llm): Phase 4 — persistent LLM task queue
Until now, modules wanting to use the orchestrator had to await each
LLM call inline in their store code. That's fine for foreground tasks
("user clicked summarize") but a non-starter for background work
("auto-tag every new note", "generate a title for every voice memo
after STT finishes"). Background tasks need to:

  - Queue up while no LLM tier is ready, then drain when one becomes
    available (e.g. user just enabled the browser tier from settings)
  - Survive page reloads, browser restarts, and the user navigating
    away mid-execution
  - Run one at a time without blocking the foreground UI
  - Allow modules to subscribe to results reactively without polling
  - Retry transient failures (network, model loading) but not
    semantic ones (tier-too-low, content blocked)

Phase 4 ships exactly that.

Architecture:

  packages/shared-llm/src/queue.ts — LlmTaskQueue class
    + QueuedTask interface (the persistent row shape)
    + EnqueueOptions (refType/refId/priority/maxAttempts)
    + TaskRegistry type (name → LlmTask map)
    + LlmTaskQueueOptions (table + orchestrator + registry +
                           retryBackoffMs + idleWakeupMs)

  Public API:
    - enqueue(task, input, opts) → string  (returns the queued id)
    - get(id), list(filter)
    - retry(id), cancel(id), purge(olderThanMs)
    - start(), stop()  (idempotent processor lifecycle)

  apps/mana/apps/web/src/lib/llm-queue.ts — web app singleton
    - Dedicated `mana-llm-queue` Dexie database (separate from the
      main `mana` IDB; see comment for the rationale: ephemeral
      per-device state, no encryption needed, no sync needed, doesn't
      belong in the long-frozen `mana` schema)
    - Wires up the queue with llmOrchestrator + taskRegistry
    - Exposes startLlmQueue() / stopLlmQueue() for the layout hook

  apps/mana/apps/web/src/lib/llm-task-registry.ts
    - Maps task names → task objects so the queue processor can
      look up the implementation when pulling rows off the table.
      Closures can't be persisted, so we round-trip via name.
    - Currently registers extractDateTask + summarizeTextTask;
      module-side tasks land here as we add them.

  apps/mana/apps/web/src/routes/(app)/+layout.svelte
    - startLlmQueue() in handleAuthReady's Phase A (auth-independent)
      so guests + authenticated users both get the queue
    - stopLlmQueue() in onDestroy as a fire-and-forget cleanup

Processor loop semantics (the heart of the implementation):

  1. On start(), reclaim any 'running' rows from a crashed previous
     session — reset them to 'pending'. The orphan recovery is the
     reason a crash mid-task doesn't leave the queue stuck.
  2. findNextRunnable() picks the highest-priority pending task whose
     `notBefore` (retry-backoff timestamp) is in the past. Sort key:
     priority desc, then enqueuedAt asc (FIFO within priority).
  3. Mark the task running, increment attempts, look up the LlmTask
     in the registry, hand it to orchestrator.run().
  4. On success: mark done, store result + source + finishedAt.
  5. On error:
       - TierTooLowError or ProviderBlockedError → fail immediately,
         no retry. These are not transient — the user's settings or
         the content itself need to change.
       - Anything else → if attempts < maxAttempts, reset to pending
         with notBefore = now + retryBackoffMs (default 60s). Else
         mark failed.
  6. When no work is pending, sleep on a Promise that resolves when
     either (a) someone calls enqueue() (which fires notifyWakeup),
     or (b) idleWakeupMs elapses (default 30s, safety net for any
     missed wakeup signal).
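
The scheduling and retry rules in steps 2 and 5 can be sketched as pure functions. This is a minimal sketch with illustrative names — `Row`, `findNextRunnable`, and `afterError` are simplifications, not the actual queue.ts shapes:

```typescript
// Simplified row shape: only the fields the scheduler reads.
interface Row {
  id: string;
  state: 'pending' | 'running' | 'done' | 'failed';
  priority: number;      // higher runs first
  enqueuedAt: number;    // ms epoch; FIFO tiebreaker within a priority
  notBefore?: number;    // retry-backoff gate; absent means runnable now
  attempts: number;      // already incremented when the task was marked running
  maxAttempts: number;
}

// Step 2: highest priority first, FIFO within a priority, skip backed-off rows.
function findNextRunnable(rows: Row[], now: number): Row | undefined {
  return rows
    .filter((r) => r.state === 'pending' && (r.notBefore ?? 0) <= now)
    .sort((a, b) => b.priority - a.priority || a.enqueuedAt - b.enqueuedAt)[0];
}

// Step 5: permanent errors (TierTooLowError / ProviderBlockedError) fail
// immediately; transient errors retry with backoff until maxAttempts.
function afterError(row: Row, permanent: boolean, now: number, backoffMs = 60_000): Row {
  if (permanent || row.attempts >= row.maxAttempts) {
    return { ...row, state: 'failed' };
  }
  return { ...row, state: 'pending', notBefore: now + backoffMs };
}
```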

Module-side reactive reads use Dexie liveQuery directly on the queue
table — no special subscription API on the queue itself. This is
consistent with how every other Mana module reads its data, so the
mental model stays uniform:

  const tags = useLiveQuery(
    () => llmQueueDb.tasks
      .where({ refType: 'note', refId, taskName: 'common.extractTags' })
      .reverse().first(),
    [refId]
  );

Smoke test: a new "Queue" tab in /llm-test lets you enqueue the
existing extractDate / summarize tasks and watch the live state of
the queue table via liveQuery. The display includes a per-row state
badge (pending/running/done/failed), the tier source, the attempt
count, input/output, and a "Done/failed löschen" button that
exercises purge().

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 01:51:20 +02:00
Till JS
45f368f471 feat(local-llm): Phase 3 — move inference into a Web Worker
The browser tier of @mana/local-llm was running entirely in the main
JS thread. With Gemma 4 E2B that meant ~50-200 ms of synchronous
tensor work per forward pass × ~150 forward passes per generation, so
the UI froze for 10-30 seconds during a single chat reply. Scrolling,
clicks, and animations all stopped.

Move the actual inference into a Dedicated Web Worker. The main
thread keeps a thin LocalLLMEngine proxy with the same public API
(load / unload / generate / prompt / extractJson / classify /
onStatusChange / isSupported), so existing callers — the /llm-test
page, the playground module, @mana/shared-llm's BrowserBackend, the
Svelte 5 reactive bindings — need NO changes.

File layout after the split:

  src/engine.ts       — main-thread proxy (lazy worker init,
                        postMessage protocol, pending request map,
                        status broadcast handling, convenience
                        wrappers for prompt/extractJson/classify)
  src/worker.ts       — Web Worker entry point (typed message
                        protocol, single LocalLLMEngineImpl instance,
                        forwards status changes back to main thread)
  src/engine-impl.ts  — the actual transformers.js engine (renamed
                        from the previous engine.ts contents). NOT
                        exported from index.ts — only the worker
                        imports it. Same two-step tokenization,
                        aggregated progress reporting, streaming
                        token handling as before; just running in
                        a different thread now.

Worker construction uses Vite's documented `new Worker(new URL(
'./worker.ts', import.meta.url), { type: 'module' })` pattern, which
makes Vite split worker.ts (and its transformers.js dep) into its
own bundle chunk at build time. The proxy is lazy-init: the Worker
constructor is never touched at module-import time, so SSR stays
clean (Worker doesn't exist on Node).

Message protocol (typed end-to-end):

  Main → Worker:
    { id, type: 'load',     modelKey: ModelKey }
    { id, type: 'unload' }
    { id, type: 'generate', opts: SerializableGenerateOptions }
    { id, type: 'isReady' }

  Worker → Main:
    { id, type: 'result',  data?: unknown }
    { id, type: 'error',   message: string }
    { id, type: 'token',   token: string }       — streaming chunk
    {     type: 'status',  status: LoadingStatus }  — broadcast

The proxy assigns a unique id per request, stores the resolve/reject
+ optional onToken callback in a Map<id, PendingRequest>, and routes
incoming responses by id. Status messages have no id and fire every
registered status listener — same UX as before, just one extra hop.
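
The id-based routing can be sketched as follows. This is a minimal illustration, not the actual engine.ts proxy — `Port` stands in for the real Worker so the routing logic is shown in isolation, and the message shapes mirror the protocol above:

```typescript
type Pending = {
  resolve: (v: unknown) => void;
  reject: (e: Error) => void;
  onToken?: (t: string) => void;
};

// Minimal Worker-shaped dependency so the routing is testable without a real Worker.
interface Port {
  postMessage(msg: unknown): void;
  onmessage: ((ev: { data: any }) => void) | null;
}

class RequestRouter {
  private nextId = 0;
  private pending = new Map<number, Pending>();
  private statusListeners = new Set<(s: unknown) => void>();

  constructor(private port: Port) {
    port.onmessage = (ev) => this.route(ev.data);
  }

  // Assign a unique id, remember the callbacks, post the request.
  request(msg: object, onToken?: (t: string) => void): Promise<unknown> {
    const id = this.nextId++;
    return new Promise((resolve, reject) => {
      this.pending.set(id, { resolve, reject, onToken });
      this.port.postMessage({ id, ...msg });
    });
  }

  onStatus(fn: (s: unknown) => void) { this.statusListeners.add(fn); }

  private route(m: any) {
    // Status broadcasts carry no id and fan out to every listener.
    if (m.type === 'status') { for (const fn of this.statusListeners) fn(m.status); return; }
    const p = this.pending.get(m.id);
    if (!p) return;
    // Streaming chunks keep the request pending; result/error settle it.
    if (m.type === 'token') { p.onToken?.(m.token); return; }
    this.pending.delete(m.id);
    if (m.type === 'error') p.reject(new Error(m.message));
    else p.resolve(m.data);
  }
}
```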

Streaming: the worker re-attaches the streaming callback on its
side. Each emitted token gets posted back as `{ id, type: 'token',
token }` and the proxy invokes the original `onToken` callback. The
final `result` arrives as a normal response and resolves the
Promise. From the caller's perspective generate() still feels
identical — same async iterable feel via onToken, same return value.

Worker termination on unload: transformers.js doesn't expose a
dispose API, so we terminate the worker after unload and create a
fresh one on the next load. This is the only reliable way to
release VRAM between model swaps.

CSP: no header changes needed. The worker is loaded from a
same-origin URL (Vite emits it as
/_app/immutable/workers/worker.[hash].js), so 'self' in script-src
already covers it. The blob: + cdn.jsdelivr.net + wasm-unsafe-eval
allowlists we added during the original WebLLM/transformers.js
bring-up still apply because the worker still runs the same ONNX
runtime that needed them.

DistributiveOmit type helper: TS's plain `Omit<Union, K>` collapses
discriminated unions to an intersection in some configurations,
which broke the type narrowing at the postRequest call sites for
each request variant. Adding a tiny `DistributiveOmit<T, K>` helper
fixes the type-check without restructuring the protocol.
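
The helper is the standard distributive-conditional-type trick; a minimal sketch (the `Request` union here is illustrative, not the full protocol):

```typescript
// Plain Omit<Union, K> can collapse a discriminated union into one wide type;
// distributing over each member via a conditional type keeps the discriminant
// so narrowing on `type` still works at the call site.
type DistributiveOmit<T, K extends PropertyKey> = T extends unknown ? Omit<T, K> : never;

// Illustrative request union shaped like the Main → Worker messages above.
type Request =
  | { id: number; type: 'load'; modelKey: string }
  | { id: number; type: 'generate'; opts: { prompt: string } };

// The proxy assigns the id itself, so callers hand over the id-less variant.
type OutgoingRequest = DistributiveOmit<Request, 'id'>;

const msg: OutgoingRequest = { type: 'load', modelKey: 'gemma-4-e2b' };
```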

What this commit deliberately does NOT do:

- Change the public API surface. The whole point is that callers
  remain untouched.
- Add multi-tab worker coordination via SharedWorker or
  BroadcastChannel. Each tab still spawns its own dedicated worker
  with its own copy of the model in VRAM. Multi-tab dedup is
  Phase 2.5/Phase 4 work — see the design doc summary in the
  previous Phase 1 commit message.
- Add a persistent task queue. Fire-and-forget background tasks
  are Phase 4.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 01:27:10 +02:00
Till JS
919fcca4b7 refactor(shared-tailwind): rewrite themes.css to single-layer shadcn convention
Pre-launch theme system audit found multiple parallel layers in themes.css
(--theme-X full hsl strings, --X partial shadcn aliases, --color-X populated
by runtime store with raw channels) plus dead-code companion files. The
inconsistency caused light-mode regressions when scoped-CSS consumers
wrote `var(--color-X)` standalone — the variable holds raw HSL channels,
which are invalid as a color value, so the browser fell back to the
inherited color (white).

Rewrite to one consistent layer:

  - Source of truth: --color-X defined as raw HSL channels (e.g.
    `0 0% 17%`) in :root, .dark, and all variant [data-theme="..."]
    blocks. Matches the format the runtime store
    (@mana/shared-theme/src/utils.ts) writes, eliminating the
    static-fallback-vs-runtime mismatch and the corresponding flash
    of unstyled content on hydration.

  - @theme inline uses self-reference + Tailwind v4 <alpha-value>
    placeholder so utility classes generate correctly AND opacity
    modifiers work: `text-foreground/50` → `hsl(var(--color-foreground) / 0.5)`.

  - @layer components (.btn-primary, .card, .badge, etc.) wraps
    var(--color-X) refs with hsl() — they were broken in light mode
    too for the same reason.

Convention going forward (also documented in the file header):

  1. Markup: use Tailwind utility classes (text-foreground, bg-card, …)
  2. Scoped CSS: hsl(var(--color-X)) — always wrap with hsl()
  3. NEVER raw var(--color-X) in CSS — that's the bug pattern

Net file: 692 → 580 LOC. Single source layer, no indirection.

Also delete dead companion files (zero imports anywhere):
  - tailwind-v4.css (had broken self-reference, never imported)
  - theme-variables.css (legacy hex-based palette)
  - components.css (legacy component utilities)
  - index.js / preset.js / colors.js (Tailwind v3 preset format,
    irrelevant under Tailwind v4)

package.json exports map shrinks accordingly to just `./themes.css`.

Consumers using `hsl(var(--color-X))` (~379 files across mana-web,
manavoxel-web, arcade-web) keep working unchanged — the public API
name `--color-X` is preserved. Only the broken pattern `var(--color-X)`
(~61 files) needs a follow-up sweep, handled in a separate commit.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 01:13:06 +02:00
Till JS
56065c8537 fix(mana/web): unwrap $state proxy in workbench-scenes Dexie writes
Adding an app to a workbench scene threw DataCloneError. scenesState
is a $state array, so current.openApps was a Svelte 5 proxy and
spreading it into a new array left proxy entries inside; IndexedDB's
structured clone refuses to serialise those. Snapshot before handing
the array to patchScene / createScene so Dexie sees plain objects.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 00:44:00 +02:00
Till JS
7901a5df2f docs(local-llm): document browser-local LLM stack and CSP requirements
Add packages/local-llm/CLAUDE.md as the package-level reference for
browser-local LLM inference. The package went through a non-trivial
engine swap from WebLLM/Qwen to transformers.js/Gemma 4 E2B on
2026-04-08, and the bring-up surfaced enough sharp edges that the
next person (or AI agent) touching this code will save real time
having them written down in one place rather than re-discovering
them error by error.

Captured topics:

- What the package is, what library/model is currently used, and
  the deliberate engine-agnostic API surface that lets future swaps
  stay contained to this package.

- Why we chose transformers.js + Gemma 4 over staying on WebLLM
  (MLC compilation lag for new model architectures) and what the
  return path looks like once MLC ships Gemma 4 builds.

- The seven CSP directives that browser-local inference needs and
  WHY each one is required:
    * script-src: 'wasm-unsafe-eval', cdn.jsdelivr.net, blob:
    * connect-src: huggingface.co + *.huggingface.co + cdn-lfs-*,
                   *.hf.co + cas-bridge.xethub.hf.co (XET CDN),
                   cdn.jsdelivr.net (for the WASM preload fetch)
  Including the subtle "jsDelivr is needed in BOTH script-src and
  connect-src" trap that produces identical-looking error messages
  for two distinct underlying causes.

- The Vite SSR module-cache gotcha: CSP additions made in
  packages/shared-utils/security-headers.ts do NOT hot-reload across
  the workspace package boundary, while additions made directly in
  apps/mana/apps/web/src/hooks.server.ts do. Includes the diagnostic
  pattern (compare which additions show up in the next CSP error
  vs which don't) and the workaround (move them into hooks.server.ts
  via setSecurityHeaders options).

- The two-step tokenization pattern that's mandatory for
  Gemma4Processor: apply_chat_template(tokenize:false) → string, then
  processor.tokenizer(text, return_tensors:'pt'). The collapsed
  apply_chat_template(return_dict:true) path looks shorter but
  produces a malformed input shape and crashes model.generate() deep
  inside the forward pass with "Cannot read properties of null
  (reading 'dims')" — opaque from the call site.

- The transformers.js v4 quirk that model.generate() returns null
  (not a tensor) when a TextStreamer is attached. The streamer is
  the only stable text channel; the engine always attaches one and
  uses the streamer's collected text as the canonical output, with
  a chars/4 fallback for token counts.

- API surface (Svelte 5 example), how to add a new model to the
  registry, deploy notes (no base image rebuild needed for local-llm
  changes alone, but IS needed if shared-utils CSP defaults change),
  browser cache semantics, and hard browser support requirements
  (WebGPU, ~1.5–2 GB VRAM for E2B q4f16, no CPU/WASM fallback).

Also link to the new doc from the root CLAUDE.md Shared Packages
table so people land on it from the standard discovery path.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 23:27:50 +02:00
Till JS
e8423e7551 fix(local-llm): use two-step tokenization to fix Gemma 4 generate crash
The previous attempt to fix the "Cannot read properties of null
(reading 'dims')" chat error was incomplete: I only stopped passing
the bogus return_tensor:'pt' option to apply_chat_template. The
underlying issue was that apply_chat_template's all-in-one mode
(return_dict:true) does not produce a proper Tensor-backed
{ input_ids, attention_mask } pair for multimodal-capable processors
like Gemma4Processor — it returns a shape that has no .dims on
input_ids, so model.generate() crashes deep inside the forward pass
the moment it tries to read the sequence length.

Switch to the documented two-step pattern from the Gemma 4 model
card: call apply_chat_template with tokenize:false to get the
formatted prompt as a plain string, then run that string through
processor.tokenizer with return_tensors:'pt' to get a proper Tensor
pair. The tokenizer's return_tensors option is the *Python*
convention and IS supported by transformers.js's Tokenizer class
(the API name collision between apply_chat_template's return_tensor
boolean and Tokenizer's return_tensors string is one of those nasty
spots where the JS port intentionally diverges from Python).
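
The two-step pattern can be sketched like this — `ProcessorLike` is a stub interface standing in for Gemma4Processor so the flow is shown in isolation; the two calls and their options (`tokenize: false`, `return_tensors: 'pt'`) are the ones described above, everything else is illustrative:

```typescript
interface Message { role: string; content: string }
interface TensorPair { input_ids: unknown; attention_mask: unknown }

// Stub of the two processor entry points the pattern uses.
interface ProcessorLike {
  apply_chat_template(msgs: Message[], opts: { tokenize: false }): string;
  tokenizer(text: string, opts: { return_tensors: 'pt' }): TensorPair;
}

// Step 1: chat template → plain prompt string.
// Step 2: string → proper Tensor-backed { input_ids, attention_mask } pair.
// Collapsing both into apply_chat_template(return_dict: true) is what produced
// the malformed shape (no .dims on input_ids) that crashed model.generate().
function tokenizeChat(processor: ProcessorLike, messages: Message[]): TensorPair {
  const text = processor.apply_chat_template(messages, { tokenize: false });
  return processor.tokenizer(text, { return_tensors: 'pt' });
}
```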

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 23:19:24 +02:00
Till JS
7f1513b5a3 fix(local-llm): handle null model.generate() return + bogus return_tensor
First end-to-end Gemma 4 inference attempt threw "Cannot read
properties of null (reading 'dims')" the moment a chat message was
sent. Two bugs piled on top of each other:

1. apply_chat_template() was being called with `return_tensor: 'pt'`,
   which is the Python `transformers` convention. transformers.js's
   equivalent option is just a boolean (the default), and the string
   'pt' is unrecognized — older versions silently ignored it, but the
   v4 code path now produces a less predictable input shape when it
   sees the unknown value. Drop it.

2. model.generate() in transformers.js v4 returns null (not a tensor)
   when a streamer is attached. The previous engine code only attached
   a streamer if the caller passed an `onToken` callback, then
   unconditionally tried to slice the tensor return for token counting
   — which crashed because the chat tab DOES pass onToken for live
   streaming. The streamer collected the text fine, but generate()
   returned null and our tensor read blew up.

Restructure so the streamer is always attached and is the canonical
text channel. The tensor return is now only used for token counting
when present, and falls back to a chars/4 estimate when it isn't, so
the /llm-test UI still shows roughly meaningful prompt/completion
counts on either v3 (returns tensor) or v4 (returns null with
streamer). The user-facing GenerateResult.content now always comes
from the streamer's accumulated string instead of decoding the
tensor's sliced suffix, which is more robust across versions.

Also wrap the model.generate() call in try/catch so that versions
of transformers.js that throw at end-of-streaming (after the
streamer has already delivered all tokens) don't lose the answer.
We only re-throw if the streamer collected nothing.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 23:15:33 +02:00
Till JS
e4e3360ca8 fix(csp): move jsdelivr allowlist to mana-web hooks (Vite SSR cache workaround)
The previous two attempts at allowlisting cdn.jsdelivr.net for
transformers.js's onnxruntime-web loader landed in shared-utils
security-headers.ts. The actual file change was correct (verified by
grep), the commits got pushed, the live security-headers.ts on disk
had the additions — but Vite's SSR module cache for cross-workspace-
package imports kept serving the OLD compiled shared-utils to
hooks.server.ts. Net effect: edits to hooks.server.ts hot-reloaded
fine (proven by the *.hf.co connect-src additions showing up
immediately) while edits to shared-utils/security-headers.ts did not.
A dev server restart should clear it but I'd rather not depend on
manual intervention every time we touch the shared CSP.

Move the jsdelivr allowlist out of the shared default and into
mana-web's hooks.server.ts via the existing scriptSrc + connectSrc
options. hooks.server.ts is in the SvelteKit app's own source tree so
it HMRs reliably, no SSR cache to fight. As a bonus this is also
architecturally cleaner: cdn.jsdelivr.net is only needed by mana-web
because mana-web is the only Mana app that bundles @mana/local-llm —
other apps get a slightly tighter CSP for free.

The pattern to remember: changes to packages/shared-utils that affect
SSR (response headers, server hooks) require either a dev server
restart OR a manual `rm -rf apps/.../node_modules/.vite` to take
effect. Client-side changes hot-reload fine.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 23:03:38 +02:00
Till JS
8772a9b9e1 fix(csp): also allow jsdelivr in connect-src for transformers.js WASM preload
The earlier fix added cdn.jsdelivr.net to script-src so the dynamic
import() of onnxruntime-web's loader .mjs would resolve. But that's
only half the story: transformers.js also issues plain fetch() calls
to PRE-LOAD the .wasm binary and the .mjs factory before the backend
selection code path is even reached. fetch() is governed by
connect-src, not script-src, so the wasm preload was still blocked
with "Failed to pre-load WASM binary: TypeError: Failed to fetch".
The visible downstream symptom was identical to the previous bug
("no available backend found. ERR: [webgpu] TypeError: Failed to
fetch dynamically imported module"), which made it look like the
script-src fix hadn't taken effect.

Add cdn.jsdelivr.net to the default connect-src too, alongside the
existing script-src entry, with a comment explaining why both are
required.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 22:59:52 +02:00
Till JS
b50a5c9ac7 fix(local-llm): allow jsdelivr in CSP + aggregate transformers.js progress
Two issues hit while loading Gemma 4 E2B in /llm-test for the first
time on a local dev server.

1. CSP script-src blocked cdn.jsdelivr.net.
   @huggingface/transformers v4 lazy-loads the onnxruntime-web WASM
   loader shim via a runtime dynamic `import()` from
   cdn.jsdelivr.net/npm/onnxruntime-web@... at backend selection time
   (the package itself is bundled, but the WASM-loader is fetched on
   demand so the static bundle stays small). With the previous CSP the
   import was blocked and "no available backend found" was the only
   downstream error. Allowlist cdn.jsdelivr.net in the shared CSP
   script-src so every Mana web app picks this up automatically.

2. Loading bar oscillated wildly during the model download.
   transformers.js downloads many shards in parallel (config.json,
   tokenizer.json, generation_config.json, model.onnx, model_data.bin,
   …) and fires the progress callback per file. The previous engine
   code reported the latest event verbatim, so the bar bounced
   between whichever file happened to be progressing fastest.

   Replace per-file reporting with a Map<file, {loaded, total}>
   accumulator and emit an aggregated total on every event. The
   denominator can grow as new files are discovered (causing brief
   small dips), but both numerator and denominator are individually
   monotonic, so the aggregate is much smoother. Also include a
   human-readable byte count and file count in the status text:
       Downloading model (47%, 240 MB / 510 MB, 8 files)

   Pin completed files to 100% on the 'done' event so the final
   aggregate visibly hits 100% before the loading→ready transition.
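
The accumulator can be sketched as follows — a minimal version with illustrative names; the real engine also formats the human-readable status text:

```typescript
interface FileProgress { loaded: number; total: number }

// Accumulate per-file progress events into one smooth aggregate.
class ProgressAggregator {
  private files = new Map<string, FileProgress>();

  // 'progress' carries loaded/total bytes for one file; 'done' marks it
  // complete and pins it to 100% so the final aggregate visibly reaches 100%.
  onEvent(file: string, status: 'progress' | 'done', loaded = 0, total = 0) {
    if (status === 'done') {
      const prev = this.files.get(file);
      const t = prev?.total || total || loaded;
      this.files.set(file, { loaded: t, total: t });
    } else {
      this.files.set(file, { loaded, total });
    }
  }

  // Numerator and denominator are each monotonic per file; the denominator can
  // still grow when a new file is discovered, causing the brief small dips.
  aggregate(): { percent: number; loaded: number; total: number; fileCount: number } {
    let loaded = 0, total = 0;
    for (const f of this.files.values()) { loaded += f.loaded; total += f.total; }
    return {
      percent: total ? Math.round((loaded / total) * 100) : 0,
      loaded, total, fileCount: this.files.size,
    };
  }
}
```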

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 22:56:52 +02:00
Till JS
1f26aa4f2f feat(local-llm): swap WebLLM/Qwen for transformers.js + Gemma 4 E2B
Replace the entire @mana/local-llm engine with a transformers.js-based
implementation backed by Google's Gemma 4 E2B (released 2026-04-02).
The external API of LocalLLMEngine — load(), generate(), prompt(),
extractJson(), classify(), onStatusChange(), isSupported() — is
preserved 1:1, so the /llm-test page, the playground module, and the
Svelte 5 reactive bindings in svelte.svelte.ts need no changes
beyond updating the default model key.

Why the engine swap: MLC has not (and as of today still hasn't)
published Gemma 4 builds for WebLLM. The webml-community team and
HuggingFace's onnx-community already have Gemma 4 E2B running in
the browser via transformers.js + WebGPU, with a documented
Gemma4ForConditionalGeneration class shipped in @huggingface/transformers
v4.0.0. Going through the ONNX route gets us the latest Google model
six days after release instead of waiting on MLC compilation.

Trade-offs accepted (discussed before this commit):
- transformers.js is a more generic ONNX runtime, so per-token
  throughput will be ~20-40% lower than WebLLM would deliver for the
  same model size. For a 2B model on a modern WebGPU device that's
  still well above interactive latency.
- The JS bundle gains ~2-3 MB (the ONNX runtime). Negligible compared
  to the 500 MB model download.
- transformers.js v4 is brand new (released alongside Gemma 4) so the
  Gemma4ForConditionalGeneration code path has very little battle
  testing yet. The risk is partially offset by webml-community's
  reference implementation.

What changed file by file:

- packages/local-llm/package.json: drop @mlc-ai/web-llm, add
  @huggingface/transformers ^4.0.0; bump version 0.1.0 → 0.2.0; rewrite
  description.

- packages/local-llm/src/types.ts: add `dtype` field to ModelConfig
  ('fp32' | 'fp16' | 'q8' | 'q4' | 'q4f16') so each model can request
  the quantization that matches its uploaded ONNX shards.

- packages/local-llm/src/models.ts: replace the old Qwen 2.5 + Gemma 2
  registry with a single `gemma-4-e2b` entry pointing at
  onnx-community/gemma-4-E2B-it-ONNX with q4f16 quantization. Future
  models can be added by appending entries — the /llm-test picker
  reads MODELS dynamically and picks them up automatically.

- packages/local-llm/src/cache.ts: replace the WebLLM-specific
  hasModelInCache helper with a generic Cache API probe that looks for
  `https://huggingface.co/{model_id}/resolve/main/tokenizer.json` in
  any open cache. tokenizer.json is small, downloaded first, and
  always present, so its presence is a reliable proxy for "model has
  been loaded before".
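
The probe reduces to one URL check; a minimal sketch with the cache injected as a dependency so it is shown outside a browser (the real helper uses the Cache API's `caches` global):

```typescript
// Cache-shaped dependency; in the browser this is backed by caches.open()/match().
interface CacheLike { match(url: string): Promise<unknown | undefined> }

// tokenizer.json is small, downloaded first, and always present, so a cache
// hit on it is a reliable proxy for "this model has been loaded before".
function probeUrl(modelId: string): string {
  return `https://huggingface.co/${modelId}/resolve/main/tokenizer.json`;
}

async function hasModelInCache(modelId: string, cache: CacheLike): Promise<boolean> {
  return (await cache.match(probeUrl(modelId))) !== undefined;
}
```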

- packages/local-llm/src/engine.ts: full rewrite. Internally we now
  hold a transformers.js model + processor pair (created via
  AutoProcessor.from_pretrained + Gemma4ForConditionalGeneration.from_pretrained
  with `device: 'webgpu'`), and translate our LoadingStatus union from
  the library's `progress_callback` shape. generate() applies Gemma's
  chat template via the processor, runs model.generate() with optional
  TextStreamer for streaming, then slices the prompt tokens off the
  output tensor to compute per-call usage. The convenience methods
  (prompt, extractJson, classify) are unchanged because they only call
  generate() under the hood.

- packages/local-llm/src/generate.ts and status.svelte.ts: deleted.
  These were orphaned from a much earlier engine API (referenced
  `getEngine()` / `subscribe()` / `LlmState` symbols that haven't
  existed for a while) and were never re-exported from index.ts —
  they only showed up because `tsc --noEmit` was crawling the src
  tree. Their functionality lives in engine.ts + svelte.svelte.ts now.

- apps/mana/apps/web/package.json: swap the direct dep from
  @mlc-ai/web-llm to @huggingface/transformers. This is the same
  trick we used for the previous adapter-node externals warning —
  having it as a direct dep makes adapter-node's Rollup pass treat
  it as external automatically.

- apps/mana/apps/web/vite.config.ts: swap ssr.external entry from
  @mlc-ai/web-llm to @huggingface/transformers. Add a comment
  explaining the why so the next person doesn't wonder.

- apps/mana/apps/web/src/routes/(app)/llm-test/+page.svelte: change
  the default selectedModel from 'qwen-2.5-1.5b' to 'gemma-4-e2b'.
  All other model display strings come from the MODELS registry, so
  this is the single hard-coded reference that needed updating.

- pnpm-lock.yaml: regenerated. Confirmed @mlc-ai/web-llm is gone (0
  references) and @huggingface/transformers is in (4 references).

CSP: no header changes needed. We already opened connect-src for
huggingface.co + cdn-lfs.huggingface.co + raw.githubusercontent.com
when fixing the WebLLM blockers earlier today, and 'wasm-unsafe-eval'
is already in script-src — both transformers.js (ONNX runtime) and
WebLLM (MLC runtime) need that. If transformers.js spawns its
inference into a Web Worker via a blob URL we may need to add
`worker-src 'self' blob:` once we hit the first runtime test, but
the existing CSP should be enough for the synchronous path.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 22:22:32 +02:00
Till JS
c3cb9dd533 refactor(mana/web): consolidate ListView scaffolding into BaseListView
Every workbench-style module ListView reimplemented the same
liveQuery + filter + scroll-area + empty-state shell. Extract a
shared <BaseListView> in @mana/shared-ui (with toolbar/header/
listHeader/item/empty snippets) and migrate the 17 modules whose
list templates fit the workbench tailwind track.

While here:
- migrate DeckCard onto the existing (previously unused) shared
  Card atom from shared-ui/atoms.
- fix a latent type bug in times/ListView: it was reading .date /
  .startTime / .isRunning off LocalTimeEntry, which doesn't define
  them. Now uses the proper joined TimeEntry via toTimeEntry() like
  the rest of the times module.

Modules with their own scoped-CSS layout track (calendar, finance,
contacts, notes, places, todo, photos, habits, automations, dreams,
cycles) and outliers (calc, events, playground, zitare) are left
alone — migrating them would be a visual rewrite, not a structural
shell swap.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 18:40:47 +02:00
Till JS
624f5ce00b fix(csp): allow wasm-unsafe-eval so @mana/local-llm can instantiate WebLLM
WebAssembly.instantiate() was blocked by script-src on every app using
shared security headers. 'wasm-unsafe-eval' is the narrow CSP source
that whitelists WASM compilation only — it does NOT re-enable eval() or
new Function(). Required by the MLC WebGPU runtime that powers the
in-browser Qwen models on /llm-test.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 18:05:30 +02:00
Till JS
4fd5ff3199 feat(local-llm): add Gemma 2 + allow HF/MLC hosts in CSP
WebLLM was blocked by connect-src — model config and weight shards live
on huggingface.co (+ cdn-lfs.* for LFS), and the WebGPU model_lib WASM
comes from raw.githubusercontent.com (binary-mlc-llm-libs). Also wires
Gemma 2 2B/9B into the model registry so /llm-test picks them up.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 18:00:57 +02:00
Till JS
bfeeef7819 chore(matrix): final scrub of stale matrix references
A grep audit after the previous matrix removal commits found a handful
of stragglers in non-runtime files that the earlier sweeps missed:

- services/mana-llm/CLAUDE.md: removed matrix-ollama-bot from the
  consumer-apps diagram and from the related-services table
- services/mana-video-gen/CLAUDE.md: removed "Matrix Bots" integration
  bullet
- packages/notify-client/README.md: removed sendMatrix() doc entry
  (the method itself was already gone in the prior cleanup)
- docker/grafana/dashboards/logs-explorer.json: dropped the "Matrix
  Stack" log row that queried tier="matrix" (would show no data forever)
- docker/grafana/dashboards/master-overview.json: dropped the "Matrix
  Bots" stat panel that counted up{job=~"matrix-.*-bot"}
- apps/mana/apps/landing/src/data/ecosystem-health.json: regenerated via
  scripts/ecosystem-audit.mjs to drop matrix from the app list, icon
  counts, file analytics, top offenders and authGuard missing list
- .gitignore: removed services/matrix-stt-bot/data/ pattern (the
  service itself was deleted long ago)

Production-side stragglers also addressed (not in this commit):
- DROP USER synapse on prod Postgres (the parallel cleanup commit
  2514831a3 dropped DATABASE matrix + DATABASE synapse but left the
  role behind)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 16:47:54 +02:00
Till JS
2514831a3b chore(matrix): scrub final matrix references after subsystem removal
The matrix subsystem was removed in a prior commit. This commit cleans
up the small leftovers that grep found:

- docker-compose.macmini.yml: dropped the "Matrix Stack" port-range
  comment, the "matrix" category from the naming convention, and a
  stale watchtower comment about Matrix notifications.
- packages/credits/src/operations.ts: removed AI_BOT_CHAT credit
  operation type and its definition. It was the billing entry for "Chat
  with AI via Matrix bot" — no callers left.
- services/mana-credits gifts schema + service + validation: removed the
  targetMatrixId column / param / Zod field. The corresponding
  PostgreSQL column was dropped manually with
  `ALTER TABLE gifts.gift_codes DROP COLUMN target_matrix_id` on prod.
- docker/grafana/dashboards/{master,system}-overview.json: removed the
  `up{job="synapse"}` panel queries — they would have shown No Data
  forever now that Synapse is gone.

Production-side cleanup performed in parallel (not in this commit):
- Stopped + removed mana-matrix-{synapse,element,web,bot} containers
- Removed mana-matrix-bot:local, matrix-web:latest,
  matrixdotorg/synapse:latest, vectorim/element-web:latest images (~3 GB)
- Removed mana-matrix-bots-data Docker volume
- Removed /Volumes/ManaData/matrix/ media store (4.3 MB)
- DROP DATABASE matrix; DROP DATABASE synapse; on Postgres

Cosmetic leftovers intentionally untouched:
- Eisenhower matrix in todo (LayoutMode 'matrix') — productivity concept
- ${{ matrix.service }} in .github/workflows — GitHub Actions strategy
- services/mana-media/apps/api/dist/.../matrix/* — stale build output
  (not in git, regenerated next mana-media build)
2026-04-08 16:39:42 +02:00
Till JS
8e8b6ac65f fix(mana-auth) + chore: rewrite /api/v1/auth/login JWT mint, remove Matrix stack
This commit bundles two unrelated changes that were swept together by an
accidental `git add -A` in another working session. Documented here so the
history reflects what's actually inside.

═══════════════════════════════════════════════════════════════════════
1. fix(mana-auth): /api/v1/auth/login mints JWT via auth.handler instead
   of api.signInEmail
═══════════════════════════════════════════════════════════════════════

Previous attempt (commit 55cc75e7d) tried to fix the broken JWT mint in
/api/v1/auth/login by switching the cookie name from `mana.session_token`
to `__Secure-mana.session_token` for production. That was necessary but
not sufficient: Better Auth's session cookie value isn't just the raw
session token, it's `<token>.<HMAC>` where the HMAC is derived from the
better-auth secret. Reconstructing the cookie from auth.api.signInEmail's
JSON response only gave us the raw token, so /api/auth/token's
get-session middleware still couldn't validate it and the JWT mint kept
silently failing.

Real fix: do the sign-in via auth.handler (the HTTP path) rather than
auth.api.signInEmail (the SDK path). The handler returns a real fetch
Response with a Set-Cookie header containing the fully signed cookie
envelope. We capture that header verbatim and forward it as the cookie
on the /api/auth/token request, which now passes validation and mints
the JWT correctly.
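
The capture-and-forward pattern can be sketched like this (a simplified stand-in; the real route calls Better Auth's `auth.handler`, but any fetch-style handler behaves the same way, and the handler/cookie values here are illustrative):

```typescript
type FetchHandler = (req: Request) => Promise<Response>;

// Stand-in for auth.handler: a real sign-in sets the full cookie envelope
// `<token>.<HMAC>` — the part that signInEmail's JSON response does not contain.
const signInHandler: FetchHandler = async () =>
  new Response(JSON.stringify({ token: 'raw-token' }), {
    headers: { 'Set-Cookie': '__Secure-mana.session_token=raw-token.hmac; HttpOnly' },
  });

export async function captureSessionCookie(handler: FetchHandler): Promise<string | null> {
  const signInResponse = await handler(
    new Request('https://auth.example/api/auth/sign-in/email')
  );
  // Capture the Set-Cookie header verbatim — never reconstruct it from the JSON body.
  const setCookie = signInResponse.headers.get('set-cookie');
  if (!setCookie) return null;
  // Forward only the name=value pair as the Cookie header on the JWT-mint request.
  return setCookie.split(';')[0];
}
```

The key point is that the signed envelope only ever exists in the Set-Cookie header, so the SDK-path JSON response can never be enough on its own.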

Verified end-to-end on auth.mana.how:

  $ curl -X POST https://auth.mana.how/api/v1/auth/login \
      -d '{"email":"...","password":"..."}'
  {
    "user": {...},
    "token": "<session token>",
    "accessToken": "eyJhbGciOiJFZERTQSI...",   ← real JWT now
    "refreshToken": "<session token>"
  }

Side benefits:
- Email-not-verified path is now handled by checking
  signInResponse.status === 403 directly, no more catching APIError
  with the comment-noted async-stream footgun.
- X-Forwarded-For is forwarded explicitly so Better Auth's rate limiter
  and our security log see the real client IP.
- The leftover catch block now only handles unexpected exceptions
  (network errors etc); the FORBIDDEN-checking logic in it is dead but
  harmless and left in for defense in depth.

═══════════════════════════════════════════════════════════════════════
2. chore: remove the entire self-hosted Matrix stack (Synapse, Element,
   Manalink, mana-matrix-bot)
═══════════════════════════════════════════════════════════════════════

The Matrix subsystem ran parallel to the main Mana product without any
load-bearing integration: the unified web app never imported matrix-js-sdk,
the chat module uses mana-sync (local-first), and mana-matrix-bot's
plugins duplicated features the unified app already ships natively.
Keeping it alive cost a Synapse + Element + matrix-web + bot container
quartet, three Cloudflare routes, an OIDC provider plugin in mana-auth,
and a steady drip of devlog/dependency churn.

Removed:
- apps/matrix (Manalink web + mobile, ~150 files)
- services/mana-matrix-bot (Go bot with ~20 plugins)
- docker/matrix configs (Synapse + Element)
- synapse/element-web/matrix-web/mana-matrix-bot services in
  docker-compose.macmini.yml
- matrix.mana.how/element.mana.how/link.mana.how Cloudflare tunnel routes
- OIDC provider plugin + matrix-synapse trustedClient + matrixUserLinks
  table from mana-auth (oauth_* schema definitions also removed)
- MatrixService import path in mana-media (importFromMatrix endpoint)
- Matrix notification channel in mana-notify (worker, metrics, config,
  channel_type enum, MatrixOptions handler)
- Matrix entries from shared-branding (mana-apps + app-icons),
  notify-client, the i18n bundle, the observatory map, the credits
  app-label list, the landing footer/apps page, the prometheus + alerts
  + promtail tier mappings, and the matrix-related deploy paths in
  cd-macmini.yml + ci.yml

Devlog/manascore/blueprint entries that mention Matrix are left intact
as historical record. The oauth_* + matrix_user_links Postgres tables
stay on existing prod databases — code can no longer write to them, drop
them in a follow-up migration if you want them gone for real.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 16:32:13 +02:00
Till JS
45958ad885 feat(mana/web): global requireAuth() gate for guest-blocked features
The unified Mana app runs most modules in a "guest mode": you can
open a module, look around, type a quick note, etc. without an
account. But anything that touches an *encrypted* table (dreams
voice capture, memoro recordings, notes, todo, calendar events, …)
needs the user to be logged in — the encryption vault only unlocks
against a Mana Auth session, and writing to those tables without
it throws `VaultLockedError` at the very last step of the action.

Before this commit, every entry point into an encryption-required
action would silently let the guest go through the whole flow
(record audio, wait for transcription, open the dexie write) and
then explode with a stack-trace error. The user lost work and
didn't know why. The dreams voice capture flow surfaced this
during the 2026-04-08 STT debugging session.

The fix is a global imperative gate: `requireAuth({ feature, reason })`.
Call sites await it before the action; it returns immediately if the
user is already authenticated, otherwise pops a global modal that
asks the guest to log in or cancel. Promise-based, so callers
decide what to do with `false` (silent abort, restore state, own
toast).

  $lib/auth/require-auth.svelte.ts          new — store + helper
  $lib/components/auth/AuthRequiredModal.svelte  new — global modal
  routes/+layout.svelte                     mount the modal once
  packages/shared-utils/src/analytics.ts    new ManaEvents.featureBlockedByAuth
                                            event for conversion tracking

Wired into the two voice-capture entry points that actually exhibited
the bug:

  modules/dreams/ListView.svelte  → feature: 'dreams-voice-capture'
  routes/(app)/memoro/+page.svelte → feature: 'memoro-voice-capture'

Both gate on `requireAuth()` BEFORE the mic permission request, so
guests see the friendly "Konto erforderlich" ("account required") modal instead of
recording → transcribing → crashing.

Design choices documented in detail in the require-auth.svelte.ts
header comment:
  - Imperative function (not a button wrapper component) so it
    works in event handlers, store actions, keyboard shortcuts,
    drag-drop handlers — anywhere async code runs.
  - Single global modal mounted once in the root layout, no
    portal/z-index gymnastics; two simultaneous prompts replace
    each other (the most recent one wins).
  - Checks `authStore.isAuthenticated`, not vault-unlocked state —
    the user-facing concept is "I need an account", not "I need
    a working encryption vault". Vault-unlock failures (network
    error etc.) are a separate bug class with their own UX.
  - The modal navigates to `/login?next=<current path>` so the
    user lands back on the same page after logging in. The
    Promise resolves `false` on navigation; the user re-clicks
    the original button after coming back, and the second click
    sees `isAuthenticated === true` and proceeds without a modal.
    Re-triggering the original action across a navigation cycle
    would require restoring half-recorded mic state — not worth
    the complexity, and the second click is a clean UX.

How to wire a new entry point (4 lines):

    import { requireAuth } from '$lib/auth/require-auth.svelte';

    async function handleCreateThing() {
      const ok = await requireAuth({
        feature: 'create-thing',
        reason: 'Things werden verschlüsselt gespeichert. Dafür brauchst du ein Mana-Konto.',
      });
      if (!ok) return;
      // ...existing logic
    }
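
Internally, the gate can be sketched as a promise-backed module store (a simplified plain-TS sketch; the real `require-auth.svelte.ts` uses Svelte 5 runes and the actual `authStore`, so the internals here are assumed):

```typescript
interface AuthPrompt {
  feature: string;
  reason: string;
  resolve: (ok: boolean) => void;
}

let isAuthenticated = false;                // stands in for authStore.isAuthenticated
let activePrompt: AuthPrompt | null = null; // state behind the single global modal

export function setAuthenticated(v: boolean): void {
  isAuthenticated = v;
}

export function requireAuth(opts: { feature: string; reason: string }): Promise<boolean> {
  if (isAuthenticated) return Promise.resolve(true); // fast path: no modal at all
  return new Promise((resolve) => {
    // A newer prompt replaces an older one; the replaced caller resolves false.
    activePrompt?.resolve(false);
    activePrompt = { ...opts, resolve };
  });
}

// Called by the modal's buttons: true on "log in", false on cancel/navigation.
export function dismissPrompt(ok: boolean): void {
  activePrompt?.resolve(ok);
  activePrompt = null;
}
```

Because the caller just awaits a boolean, the same gate works from event handlers, store actions, and keyboard shortcuts alike.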

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 15:36:38 +02:00
Till JS
2b4494628e fix(mana/web): unblock voice capture — permissions policy, notification mount, dev SW
Three independent bugs conspired to make the dreams + memoro mic
buttons completely unusable in production AND in dev. Any one of them
alone would have fully blocked the feature; stacked, fixing the top
one just exposed the next.

1. Permissions-Policy header blocked the microphone API entirely.
   `packages/shared-utils/src/security-headers.ts` set
   `microphone=()` which means "no origin, including self, may use
   the microphone". `getUserMedia()` throws a `Permissions policy
   violation` and the browser never even shows the permission
   dialog — no amount of OS / browser / site settings can override
   it because the policy blocks the API at the document level.
   Fix: change to `microphone=(self)` so mana.how itself can use
   the API. Camera stays disallowed (no module needs it).

2. Notification permission was requested at layout mount time.
   `(app)/+layout.svelte` called
   `notificationService.requestPermission()` from `onMount()`. Modern
   browsers require permission requests to come from a user gesture
   — calling it without one queues the prompt until the next click.
   That meant the user's FIRST click on any button (in this case the
   dreams "Traum sprechen" ("speak your dream") mic button) showed the queued notifications
   prompt instead of the action they actually clicked. Worse,
   `getUserMedia()` was then silently dropped because Chrome only
   shows one permission dialog at a time.
   Fix: remove the mount-time call entirely. Notification permission
   must be requested from a button the user explicitly clicks
   (the "Benachrichtigungen aktivieren"/"enable notifications" toggle in Settings or first time
   a reminder is created) — the reminder scheduler still runs without
   permission, it just won't fire OS notifications until granted.

3. vite-plugin-pwa registered a service worker in dev that cached
   the old layout chunks across reloads, so the fix for #2 was
   invisible until the user manually unregistered the SW in DevTools.
   `vite-plugin-pwa` defaults `devEnabled: true`, which is a
   well-known footgun for fast iteration. Production still gets the
   full SW (this only flips dev). The 2026-04-08 mic-button hunt
   took an extra hour for exactly this reason.
   Fix: pass `devEnabled: false` to createPWAConfig in vite.config.ts.
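
The Permissions-Policy change in (1) boils down to one directive; a minimal sketch (the helper name and the other directives are assumptions — the real list lives in security-headers.ts):

```typescript
export function permissionsPolicy(): string {
  // `microphone=()` is an empty allowlist: no origin, not even self, may call
  // getUserMedia({ audio: true }) — the browser rejects before any prompt.
  // `microphone=(self)` restores the API for the app's own origin only.
  return 'camera=(), microphone=(self)';
}
```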

Verified: in a fresh incognito tab on `localhost:5173/`, opening the
Dreams app in the workbench and clicking the mic button now shows the
microphone permission dialog directly (no notifications hijack), and
recording → transcription works end-to-end against the production
mana-stt service on the GPU box.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 15:36:03 +02:00
Till JS
abe0a21966 refactor(auth-ui): tighten LoginPage UX, a11y, and dead code
LoginPage cleanup:
- Drop dev pre-fill credentials and the secret logo-as-button trick
- Remove duplicate in-component theme toggle; accept isDark as a prop and let the (auth) layout's global theme toggle drive it
- Move passkey CTA below the password form so the primary flow stays primary
- Remove the dead "Angemeldet bleiben" checkbox (was bound but never forwarded to onSignIn)
- Fix the skip-to-form link to use sr-only/focus:not-sr-only so it only appears on keyboard focus
- Fix the "oder" divider to render its before/after hairlines by setting an explicit color on the parent
- Wire focus-visible outlines on all interactive controls
- Bump 0.6 → 0.75 opacity on subtitle text for AA contrast
- Drop opacity-60 from the headerControls wrapper

Robustness:
- Track all setTimeout IDs in a Set and clear them in an effect cleanup so navigation away doesn't fire stale callbacks (success redirects, error shake, focus restore)
- Replace (result as any) casts with the new typed AuthResult fields
- New resolveErrorCode() helper prefers result.errorCode and falls back to legacy string matching, so rate-limit / account-lock detection survives i18n
- WebAuthn Conditional UI: on mount, if PublicKeyCredential.isConditionalMediationAvailable(), call onSignInWithPasskey({ conditional: true }) so passkeys appear inline in the email autofill dropdown
- Extract the dismissible success-banner markup into a {#snippet successBanner} and reuse it for the verified / verification-sent / magic-link-sent cases (~50 lines of duplicate JSX out)
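
The resolveErrorCode() fallback chain can be sketched like this (the code values beyond rate-limit/account-lock and the legacy string patterns are illustrative, not the real list):

```typescript
type AuthErrorCode = 'RATE_LIMITED' | 'ACCOUNT_LOCKED' | 'INVALID_CREDENTIALS' | 'UNKNOWN';

interface AuthResultLike {
  errorCode?: AuthErrorCode;
  error?: string;
}

export function resolveErrorCode(result: AuthResultLike): AuthErrorCode {
  // Prefer the stable structured code — it survives i18n of the error string.
  if (result.errorCode) return result.errorCode;
  // Legacy fallback: substring-match the (English) error text.
  const msg = result.error ?? '';
  if (/too many/i.test(msg)) return 'RATE_LIMITED';
  if (/locked/i.test(msg)) return 'ACCOUNT_LOCKED';
  return 'UNKNOWN';
}
```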

Page wrappers:
- login/+page.svelte passes isDark={theme.isDark} so the in-app theme store drives both layouts
- register/+page.svelte wraps trackGuestConversion() in queueMicrotask + try/catch so analytics can never block the success redirect
- Drop the dead baseSignupCredits={25} prop from register/+page.svelte (RegisterPage never accepted it)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 12:41:19 +02:00
Till JS
ff7dc5d875 feat(auth): structured error codes + conditional passkey UI
- Add AuthErrorCode union and typed twoFactorRedirect/retryAfter fields on AuthResult so the frontend can branch on stable codes instead of locale-dependent error strings.
- Extend signInWithPasskey with an optional { conditional } flag, threaded through to @simplewebauthn/browser via useBrowserAutofill, so hosts can opt into WebAuthn Conditional UI (passkey suggestions inline in the email autofill dropdown).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 12:40:51 +02:00
Till JS
c8ed58b7d1 fix(mana,ui): integrate guest nudge into bottom stack + theme it
The "Gefällt es dir?" ("Do you like it?") guest nudge was a free-floating fixed element at
bottom: 10rem, so it didn't follow the bottom-stack when the PillNav was
collapsed. Move it inside .bottom-stack as the first child so it shares
the stack's reflow.

NotificationBar now uses the elevation system (--color-surface-elevated,
--color-border-strong, --color-foreground) instead of hardcoded rgba so
it adapts to all themes. Bumped the CTA button (shadow + hover lift) and
container (stronger border, layered shadow) to be more visible.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 12:13:05 +02:00
Till JS
fbab96c74b feat(cycles): add menstrual cycle tracking module
New unified-app module under apps/mana/apps/web/src/lib/modules/cycles.
Adds three Dexie tables (cycles, cycleDayLogs, cycleSymptoms) in db v7,
SYNC_APP_MAP entry, app-registry registration, branding (icon + entry +
APP_URLS), and a /cycles route.

Includes phase derivation (menstruation/follicular/ovulation/luteal),
heuristic next-period and fertile-window prediction (rolling mean over
last 6 cycles), 10 default symptoms, and 33 unit tests covering the
pure utilities.
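
The heuristic prediction described above can be sketched as follows (function names and shapes are assumptions — the module's real pure utilities may differ):

```typescript
const DAY_MS = 24 * 60 * 60 * 1000;

// Rolling mean cycle length over (at most) the last 6 completed cycles.
export function meanCycleLength(startDates: Date[], fallback = 28): number {
  const sorted = [...startDates].sort((a, b) => a.getTime() - b.getTime());
  const lengths: number[] = [];
  for (let i = 1; i < sorted.length; i++) {
    lengths.push(Math.round((sorted[i].getTime() - sorted[i - 1].getTime()) / DAY_MS));
  }
  const window = lengths.slice(-6);
  if (window.length === 0) return fallback; // not enough history yet
  return window.reduce((a, b) => a + b, 0) / window.length;
}

// Next period = most recent start + rolling mean length.
export function predictNextPeriod(startDates: Date[]): Date | null {
  if (startDates.length === 0) return null;
  const last = startDates.reduce((a, b) => (a > b ? a : b));
  return new Date(last.getTime() + Math.round(meanCycleLength(startDates)) * DAY_MS);
}
```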

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-07 14:35:33 +02:00
Till JS
30022e82e1 feat(events): scaffold social events module (Phase 1a, local-only)
New 'events' module for planning gatherings with guest lists and RSVPs,
distinct from the personal calendar. Events surface in the calendar via
TimeBlock with sourceModule='events'. Guests, RSVPs and a publish stub
work fully local-first; the public RSVP server lands in Phase 1b.
2026-04-07 14:12:41 +02:00
Till JS
8e71096a61 feat(dreams): scaffold Traumtagebuch module
Adds a new Dreams module to the unified Mana app for capturing dream
journal entries with mood, lucid status, recurring symbols, and
timeline insights. Founder-tier gated for now.

- Dexie schema v5 with dreams, dreamSymbols, dreamTags
- Mutation store with auto symbol counting on create/update/delete
- ListView with quick capture, inline editor, mood picker, lucid
  toggle, monthly grouping, insights ribbon, context menu
- Workbench registration with note → dream drop transform
- New 'dream' DragType, dreams app icon, mana-apps catalog entry

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-07 14:07:12 +02:00
Till JS
b9fdf0802f fix(cards-database): add .js extensions to relative imports for NodeNext
Package uses moduleResolution: NodeNext which requires explicit .js extensions on relative ESM imports. Without these, prepare/build failed and broke pnpm install for the whole monorepo.

The implicit-any errors on (table) callbacks were cascading from the broken imports — they resolve once the modules import correctly.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-07 14:01:44 +02:00
Till JS
e974761e8a chore(workspace): unify vitest to ^4.1.2 across all packages
The lockfile had grown five (!) different vitest versions over time:
1.6.1, 2.1.9, 3.2.4, 4.1.2 and 4.1.3 — pulled in by various
packages that pinned outdated majors. The mismatch produced the
classic "createDOMElementFilter not found" startup crash because
hoisted @vitest/utils@3.x was loaded by the nested @vitest/runner@4.x.

Bumped every package.json that pinned an old vitest:
- apps/manavoxel/apps/web      (^4.1.0 → ^4.1.2)
- apps/matrix/apps/web         (^4.1.0 → ^4.1.2)
- apps/memoro/apps/server      (^3.0.0 → ^4.1.2)
- apps/nutriphi/packages/shared (^2.1.8 → ^4.1.2)
- packages/qr-export           (^3.0.5 → ^4.1.2)
- packages/shared-llm          (^2.0.0 → ^4.1.2)
- packages/shared-storage      (^4.1.0 → ^4.1.2)
- packages/spiral-db           (^1.6.1 → ^4.1.2)
- packages/test-config         (^3.0.0 → ^4.1.2)
- packages/wallpaper-generator (^3.0.5 → ^4.1.2)

After a clean pnpm-lock.yaml regenerate, every @vitest/* sub-package
resolves to a single version (4.1.3, picked by semver) — no more
duplicates between hoisted and nested node_modules.

Verified by running:
  pnpm --filter @mana/web vitest run src/lib/data/sync.test.ts
  → 20/20 tests passing in 217ms
  pnpm --filter @mana/web vitest run src/lib/data/time-blocks/recurrence.test.ts
  → 19/19 tests passing in 198ms

Pre-existing test failures in base-client.test.ts (German error
strings vs english assertions), dashboard.test.ts (widget count
drift), and content/help/index.test.ts (svelte-i18n locale not
initialised in test env) are unrelated and tracked separately.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-07 13:58:29 +02:00
Till JS
440f6507f1 fix: extract types from .svelte files for proper named re-exports
Svelte 5 .svelte modules only expose a default export, so 'export type { X } from "./X.svelte"' fails type-check. Move shared interfaces into adjacent .ts type files.

- shared-ui/navigation: SpotlightAction, ContentSearcher, ContentSearch{Result,Group} → types.ts
- shared-auth-ui: PasskeyManagerTranslations, TwoFactorSetupTranslations, SessionManagerTranslations → types.ts
- mana/web/page-carousel: CarouselPage → new types.ts
- mana/web: bump @vitest/* to 4.1.2 (matches lockfile)
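
The extraction pattern looks like this (the SpotlightAction shape here is invented for illustration — only the name comes from the commit):

```typescript
// navigation/types.ts — interfaces live in a plain .ts module next to the
// component, because `export type { X } from './X.svelte'` fails type-check
// (a .svelte module only surfaces its default export).
export interface SpotlightAction {
  id: string;
  label: string;
  run: () => void;
}

// Consumers then split the two imports:
//   import Spotlight from './Spotlight.svelte';
//   import type { SpotlightAction } from './types';
export const example: SpotlightAction = {
  id: 'open-settings',
  label: 'Settings',
  run: () => {},
};
```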

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-07 13:53:13 +02:00
Till JS
fc743a494b fix: type errors from ManaCore→Mana rename and stale templates
- shared-branding/mana-apps: drop duplicate `mana` and obsolete `inventar` URL entries
- web/app.d.ts: move __BUILD_HASH__/__BUILD_TIME__ ambient declarations into declare global so they survive module-scoping
- web: remove dead supabase template (routes/api/example, lib/server/middleware) — locals.session no longer exists post auth migration
- habits/queries: drop stale Record<string,string> cast on LocalHabit (legacy emoji field)
- shared-stores/toggle-field: cast to Dexie UpdateSpec instead of Partial<T> for newer dexie types

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-07 13:42:17 +02:00
Till JS
3ffbf37ee7 fix(shared-branding): dedupe duplicate manaSvg from rename collision
The ManaCore→Mana rename converted both `manaCoreSvg` and the existing
`manaSvg` to the same identifier, leaving two `const manaSvg = ...`
declarations and two `mana:` keys in APP_ICONS. This broke any consumer
of the package with a duplicate-symbol error at SSR build time.

Removed the legacy ManaCore icon (4-circle quartet) and kept the
current Mana brand icon (single droplet). Removed the duplicate
APP_ICONS entry as well.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-07 13:39:40 +02:00
Till JS
9e0ade4c0a fix(mana/web): sprint 3 — type-safe sync protocol + tests
- New SyncChange / FieldChange / SyncOp types replace `any[]` in
  applyServerChanges. The wire format is now self-documenting and
  TypeScript catches malformed callsites at compile time.
- isValidSyncChange() validates incoming server payloads at the boundary:
  malformed entries are dropped with a single warn log, valid ones are
  applied. A bad row from the server can no longer corrupt IndexedDB.
  Hand-rolled type guards keep us free of a runtime-validation dep.
- applyServerChanges() and readFieldTimestamps() are now top-level
  exports (extracted out of createUnifiedSync's closure) so they can be
  imported directly by tests. Behaviour is unchanged — the closure
  variant inside the sync manager just resolves the module-level
  symbol now.
- New sync.test.ts covers:
  * pure isValidSyncChange and readFieldTimestamps cases
  * field-level LWW: server-newer wins, split outcome when local-newer
    on one field and server-newer on another
  * insert with __fieldTimestamps stamping
  * soft-delete LWW guard
  * malformed-entry drop with valid entries surviving
  * sync-loop guard: server-applied writes don't generate _pendingChanges
- fake-indexeddb added as devDependency for the integration tests.
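
A hand-rolled guard of this kind can be sketched as follows (the wire-format field names here are assumptions inferred from the commit, not the real SyncChange shape):

```typescript
type SyncOp = 'insert' | 'update' | 'delete';

interface SyncChange {
  table: string;
  id: string;
  op: SyncOp;
  fields: Record<string, unknown>;
  fieldTimestamps: Record<string, number>;
}

export function isValidSyncChange(value: unknown): value is SyncChange {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.table === 'string' &&
    typeof v.id === 'string' &&
    (v.op === 'insert' || v.op === 'update' || v.op === 'delete') &&
    typeof v.fields === 'object' && v.fields !== null &&
    typeof v.fieldTimestamps === 'object' && v.fieldTimestamps !== null
  );
}

// Boundary behaviour: drop malformed entries with a single warn, apply the rest.
export function filterChanges(batch: unknown[]): SyncChange[] {
  const valid = batch.filter(isValidSyncChange);
  if (valid.length !== batch.length) {
    console.warn(`sync: dropped ${batch.length - valid.length} malformed change(s)`);
  }
  return valid;
}
```

Because the guard narrows `unknown` to `SyncChange`, everything downstream of the boundary stays fully typed without a runtime-validation dependency.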

Note: the monorepo's vitest install is currently tangled across mixed
@vitest/* package versions in the lockfile, so `pnpm test` fails before
reaching this file. The tests are written to pass on any vitest 4.x once
that's untangled — needs its own dedicated cleanup pass.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-07 13:38:23 +02:00
Till JS
ce04f43248 fix(timeblocks): type errors from recurrence migration
- calendar/types: replace duplicate recurrenceRule with recurrenceDate on CalendarEvent; map it in timeBlockToCalendarEvent
- recurrence: drop stale Record casts now that LocalTimeBlock types isRecurrenceException and recurrenceDate
- todo: route recurrenceRule through TimeBlock in createTask/updateTask, load it from block in useTaskForm; accept labelIds via metadata; remove stale projectId casts
- calendar/events: include linkedBlockId/parentBlockId/recurrenceDate in createDraftEvent
- habits: drop unused db / LocalTimeBlock imports
- eslint-config: disable consistent-type-imports (parser conflict with .svelte.ts files)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-07 13:22:59 +02:00