Vite/Rollup in the Docker build failed to resolve the import via transitive
shared-auth hoisting — the container stage uses a different pnpm layout.
Locally it only worked because of incidental hoisting order.
`settings-client.ts` imports `@simplewebauthn/browser` for passkey
registration. Hence the explicit dependency in the affected web package.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
-1023 / +113 lines: lockfile shrinks dramatically as the workspace
loses 8 service packages plus their transitive devDeps. @mana/media-client
now resolved from Verdaccio (^0.1.0) instead of as a workspace member.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Phase A — Cards joins the unified theme system:
- Drop placeholder --color-cards-* palette; app.css imports
@mana/shared-tailwind/themes.css + sources.css.
- Remove hardcoded class="dark" from app.html; body uses
bg-background text-foreground.
- New $lib/stores/theme.ts: createThemeStore({ appId: 'cards' }).
ThemeToggle from @mana/shared-theme-ui in the header next to
the streak chip.
- Sweep all neutral / red / emerald / amber / indigo utilities in
apps/cards/apps/web/src to semantic tokens (560 substitutions
across 19 files): bg-neutral-900 → bg-card, text-neutral-400 →
text-muted-foreground, bg-red-500 → bg-error, etc. Domain
literals kept (FSRS grade colors red/orange/green/blue, GitHub-
violet PR-merged badge, marketplace-amber Buy button, admin-
inbox category palette).
- Cards added to validate-theme-utilities scope so future drift
fails CI.
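A minimal sketch of what a `createThemeStore({ appId })` factory can look like; the real `$lib/stores/theme.ts` is only named in this commit, so everything below (the `Theme` union, the injected storage, the key format) is an assumption:

```typescript
// Hypothetical sketch; only the factory name and the { appId } option
// are taken from the commit text.
type Theme = "light" | "dark";

interface ThemeStore {
  get(): Theme;
  toggle(): Theme;
}

// storage is injected so the store is testable without a browser;
// in the app this would be localStorage, keyed per app.
function createThemeStore(
  opts: { appId: string },
  storage: Map<string, string> = new Map(),
): ThemeStore {
  const key = `${opts.appId}:theme`;
  const get = (): Theme => (storage.get(key) === "dark" ? "dark" : "light");
  return {
    get,
    toggle() {
      const next: Theme = get() === "dark" ? "light" : "dark";
      storage.set(key, next);
      return next;
    },
  };
}
```

In the app the store would also swap the theme class/tokens on `<body>`; the sketch keeps only the persistence logic.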
Phase C — per-app accent token:
- New --color-app-accent in shared-tailwind/themes.css. Theme-
agnostic (registered in validate-theme-parity's THEME_AGNOSTIC
regex), so it stays the same across light/dark/lume/etc. Defaults
to Mana indigo at :root.
- Cards layout writes 258 90% 66% (= #8b5cf6 violet, from
MANA_APPS.cards.color) onto documentElement at boot via
applyCardsAccent(). All Cards CTAs (Lernen, Abonnieren, Senden,
links inside cloze cards) flow through bg-app-accent /
text-app-accent now.
Net effect: Cards gets light/dark + 4 palette variants + a11y
toggles for free, and any future app can drop in by setting its
own --color-app-accent without touching shared-tailwind.
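The boot-time accent write can be sketched in a few lines. The token name and the `258 90% 66%` value come from the commit; the minimal `StyleTarget` interface stands in for `document.documentElement.style` so the sketch runs outside a browser:

```typescript
// Minimal stand-in for document.documentElement.style; in the app
// this would be the real element's style object.
interface StyleTarget {
  setProperty(name: string, value: string): void;
}

// Writes the per-app accent token at boot. The HSL triple matches
// MANA_APPS.cards.color (#8b5cf6 violet) per the commit.
function applyCardsAccent(style: StyleTarget): void {
  style.setProperty("--color-app-accent", "258 90% 66%");
}
```

Any future app repeats the same one-liner with its own triple, which is what makes the drop-in claim above hold.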
PDF input:
• lib/ai/pdf.ts wraps pdfjs-dist (Apache-2.0). Worker is bound via
Vite's `?worker` suffix so the heavy parsing runs off-main-thread.
• AiCardGen gains a "📄 PDF laden" (load PDF) button that pipes extracted text
into the same textarea — the user can review/trim before
generation. Reading state shows file name + page count + chars.
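A sketch of the per-page text assembly; the `TextItem` shape mirrors pdfjs-dist's `getTextContent()` output, but `joinTextItems` is a hypothetical helper, not the actual export of `lib/ai/pdf.ts`:

```typescript
// In the app the worker binding is one Vite-specific import, shown as a
// comment because it won't run under plain Node:
//   import PdfWorker from "pdfjs-dist/build/pdf.worker.min.mjs?worker";
//   GlobalWorkerOptions.workerPort = new PdfWorker();

// pdfjs text items carry the string plus an end-of-line flag.
interface TextItem {
  str: string;
  hasEOL?: boolean;
}

// Joins one page's items into readable text: newline where pdfjs
// marked a line break, a space otherwise.
function joinTextItems(items: TextItem[]): string {
  return items
    .map((i) => i.str + (i.hasEOL ? "\n" : " "))
    .join("")
    .trim();
}
```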
Heatmap:
• queries.useStudyHeatmap(weeks=12) fills gaps with count=0 so the
grid renders without holes.
• StudyHeatmap.svelte: 7 rows × N columns (Monday-anchored), 5
intensity buckets (neutral → emerald-300), tooltip per cell with
date + count, legend strip.
• Mounted on the dashboard between the deck list and the Anki import
so the user lands on a quick visual progress receipt every visit.
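The gap-filling and bucketing reduce to two pure helpers. Function names and the bucket thresholds are illustrative; only the fill-with-zero behaviour and the 5-bucket count come from the commit:

```typescript
// Fills every day in [start, start + days) with a count, defaulting
// missing days to 0 so the grid renders without holes.
function fillGaps(
  startIso: string,
  days: number,
  counts: Map<string, number>,
): { date: string; count: number }[] {
  const start = new Date(`${startIso}T00:00:00Z`);
  return Array.from({ length: days }, (_, i) => {
    const d = new Date(start.getTime() + i * 86_400_000);
    const date = d.toISOString().slice(0, 10);
    return { date, count: counts.get(date) ?? 0 };
  });
}

// Maps a count onto one of 5 intensity buckets (0 = empty cell).
// Linear thresholds are an assumption; the commit only fixes the count.
function intensityBucket(count: number, max: number): number {
  if (count <= 0 || max <= 0) return 0;
  return Math.min(4, Math.ceil((count / max) * 4));
}
```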
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Phase 2e cleanup. status-page-gen + a dedicated nginx now run on the
GPU-Box (sparse repo clone provides the generator script + mana-apps.ts,
hourly git-pull via systemd timer). Container queries VictoriaMetrics
locally over docker-network ('http://victoriametrics:9090'), no public
vm.mana.how endpoint required — that hostname is also gone from the
GPU tunnel config (v25 → v26 effectively, removed in same PUT that
added status.mana.how).
DNS for status.mana.how now points at the mana-gpu-server tunnel.
Mini-tunnel ingress for it is removed; the previous 'mana-status-gen'
container on the Mini was stopped + rm'd.
Side benefit: closes the inode-stale-bind-mount bug that took status.
mana.how down for a few hours — single-file bind mounts on the Mini
break whenever the CD git-checkout rewrites the source file. The
GPU-Box mounts the same files but the systemd timer git-pulls in-
place, preserving the inode.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Anki users have decks they hate the UI for but can't migrate. This
gives them a one-drop path: drop a .apkg on the homepage, see a
preview, confirm, the cards land in our DB and start syncing.
Pipeline (lib/anki/):
• parse.ts — JSZip → sql.js (WASM SQLite) → walk Anki's three core
tables (col, notes, cards). Models (col.models JSON) classify each
note: type=0 → basic / basic-reverse, type=1 → cloze. Anki cards
table has one row per generated learnable unit (basic-reverse = 2,
cloze = N) — we dedupe at the note level since our model
regenerates those automatically via reviewStore.ensureReviewsForCard.
• import.ts — every Anki deck becomes one of ours (1:1, "::" → " / ");
fields go through sanitizeAnkiHtml (drops <img>, [sound:], maps
<b>/<i> to Markdown). Orphans land in a fallback "Anki-Import"
deck.
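Some of the pure steps above can be sketched directly. The type=0/type=1 classification and the "::" mapping come from the commit; deriving basic-reverse from the template count, and all function names, are assumptions:

```typescript
// Anki model type 0 = standard note (treated as basic-reverse when the
// model defines two templates, an assumption here), type 1 = cloze.
type CardType = "basic" | "basic-reverse" | "cloze";

function classifyNote(modelType: number, templateCount: number): CardType {
  if (modelType === 1) return "cloze";
  return templateCount > 1 ? "basic-reverse" : "basic";
}

// Anki nests decks with "::"; we map that onto a " / " hierarchy 1:1.
function ankiDeckToPath(name: string): string {
  return name.split("::").join(" / ");
}

// Anki's cards table has one row per learnable unit; keep one entry per
// note id and let the review store regenerate the units.
function dedupeByNote<T extends { noteId: number }>(cards: T[]): T[] {
  const seen = new Set<number>();
  return cards.filter((c) => {
    if (seen.has(c.noteId)) return false;
    seen.add(c.noteId);
    return true;
  });
}
```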
UI: AnkiImport.svelte on the decks list — drag-drop or click,
parse → preview ("X Decks, Y Karten"), confirm → import. No images,
no audio, no review history (cards are FSRS-new on import) — those
are Phase 2.
Deps: sql.js 1.14, jszip 3.10, @types/sql.js. WASM blob copied into
static/ so SvelteKit serves it at /sql-wasm.wasm.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The Phase-1 crypto wrapper is a no-op stub; it never actually imports
@mana/shared-crypto. The dep was forward-looking for the Phase-2 vault
wiring, but it broke `pnpm install` inside the cards-web Dockerfile
because the sveltekit-base image only ships a curated subset of
@mana/* packages and shared-crypto isn't one of them.
The wiring will come back when the vault roundtrip is on, together
with a base-image bump.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Builds out the Cards spinoff end-to-end so the standalone app at
cards.mana.how shares its data layer with the in-mana cards module
through a single pure-utility package.
Why a spinoff and not just a deeper module: per the GUIDELINES, Cards
gets its own brand + URL while reusing mana-auth, mana-sync, and the
mana-credits/billing stack. The in-mana module under mana.how/cards
stays untouched as the integrated experience.
Phase 0 — mana-modul foundation
• New tables cardReviews + cardStudyBlocks (Dexie v61) + plaintext
classification in the crypto registry.
• LocalCard learns a {type, fields} shape; legacy front/back columns
kept as a back-compat mirror so older builds keep rendering.
• FSRS v6 scheduler + Cloze parser + Markdown render pipeline.
• UI in apps/mana/.../routes/(app)/cards/ gets a learn session
(learn/[deckId]), 4-type card editor, due-counter, markdown lists.
Phase 1 — standalone (apps/cards/apps/web)
• SvelteKit 2 + Svelte 5 + Tailwind 4, port 5180.
• Own Dexie 'cards' DB with a slim 5-table schema.
• Own sync engine: pending-changes hooks, 1 s push / 5 s pull against
POST /sync/cards, server-apply with suppression to avoid ping-pong.
• Auth-Gate via @mana/shared-auth-ui (LoginPage / RegisterPage).
• Encryption hooks at every write/read/apply path, currently no-op
stubs — flipping to real vault-backed AES-GCM is a single-file
change in src/lib/data/crypto.ts.
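A plausible shape for that single file (sketched synchronously for brevity; the real hooks are presumably async, and the interface name is invented):

```typescript
// Hypothetical shape of the Phase-1 stub in src/lib/data/crypto.ts.
// Every write/read/apply path calls through this interface, which is
// why flipping to vault-backed AES-GCM is a single-file change.
interface CardsCrypto {
  encryptFields(record: Record<string, unknown>): Record<string, unknown>;
  decryptFields(record: Record<string, unknown>): Record<string, unknown>;
}

// Phase 1: identity pass-through. Phase 2 replaces both bodies with
// real AES-GCM without touching any caller.
const noopCrypto: CardsCrypto = {
  encryptFields: (r) => r,
  decryptFields: (r) => r,
};
```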
Shared package — @mana/cards-core
• Pulls types, cloze, card-reviews, FSRS wrapper, and Markdown
renderer out of the mana module so both frontends import from one
source. mana-modul keeps thin re-export shims so consumers don't
need to change imports.
• 19 vitest tests carried over from the mana module.
Server-side wiring
• cards.mana.how added to mana-auth PRODUCTION_TRUSTED_ORIGINS and
its CORS_ORIGINS env (sso-config.spec.ts stays green).
• New cards-web container in docker-compose.macmini.yml (mirrors
manavoxel-web pattern, 128m, depends on mana-auth healthy).
• cloudflared-config.yml repoints cards.mana.how from :5000 (the
unified mana-web container) to :5180. mana.how/cards is unchanged.
Cleanup
• Removed an unrelated 2026-03/04 NestJS+Supabase+Expo experiment
that was lingering under apps/cards/ (apps/landing, supabase/,
.github/workflows, MANA_CORE_*.md, etc.). It predated this plan
and would have confused future readers.
Validation
• svelte-check on mana-web: 0 errors over 7697 files
• svelte-check on cards-web: 0 errors over 3481 files
• vitest on cards-core: 19/19 pass
• pnpm check:crypto: 214 tables classified
• bun test sso-config.spec.ts: 8/8 pass
• vite build on cards-web: green
Not done in this commit (deliberate)
• Real encryption (vault roundtrip) — Phase 2.
• WebSocket-driven pull (5 s polling for now).
• Mobile/landing standalone surfaces — Phase 2/3.
• The actual production cutover on the Mac mini (build, deploy,
cloudflared sync) — config is staged, deploy is a user action.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Arcade lives as its own pnpm workspace at ~/Documents/Code/arcade
now, with no @mana/* coupling. This drops every reference and the
games/ directory from the monorepo.
Removes:
- games/ directory (89 files: web + server + 22 HTML games + screenshots)
- @arcade/web, @arcade/server pnpm workspace entries (games/* globs)
- arcade scripts in root package.json (4 scripts)
- arcade.mana.how from mana-auth trusted origins + CORS_ORIGINS
- arcade entries in mana-apps registry, app-icons, URL overrides
- arcade.mana.how from cloudflared tunnel + prometheus blackbox probes
- arcade-web service block in docker-compose.macmini.yml
- generate-env.mjs entries for arcade server + web
- BRANDING_ONLY 'arcade' entry in registry consistency spec
- dead arcade translation keys in GuestWelcomeModal (DE+EN)
- arcade mention in CLAUDE.md, authentication guideline, MODULE_REGISTRY
Verified:
- services/mana-auth/src/auth/sso-config.spec.ts: 8/8 pass
- pnpm install regenerates lockfile cleanly (-536 lines)
- no remaining 'arcade' refs outside historical snapshot docs
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
iPhone HEIC photos uploaded through Chrome on macOS landed as
`mimeType: application/octet-stream` because Chrome doesn't recognise
the HEIC MIME and `file.type` was empty. The transform endpoint then
refused with `Transform only supported for images` (HTTP 400) and
the wardrobe Try-On flow surfaced this as `mana-media transform
failed for <id>: HTTP 400`. Even fixing the MIME wouldn't have been
enough — sharp's prebuilt binary ships the heif container format
without a HEVC decoder plugin (libde265 is omitted for patent
reasons), so the actual decode would still throw.
Three-part fix at the upload edge:
1. New `services/sniff.ts` — magic-byte sniffer for image MIMEs.
Reads the first ~16 bytes and recognises JPEG, PNG, GIF, WebP,
BMP, TIFF, HEIC, HEIF, AVIF. Returns `null` for everything else
so the caller can fall back to whatever the browser claimed.
2. Upload route — sniffs every upload before passing the buffer to
`uploadService.upload`. Trusts magic bytes over `file.type` so
Chrome's empty-type HEIC still lands with `image/heic`. Removes
the entire class of `application/octet-stream` rows for files
that are obviously images.
3. HEIC/HEIF transcoded to JPEG at upload via the new
`heic-convert` dependency (pure-JS WASM, no system libs needed).
The original buffer is replaced with the JPEG bytes, the MIME
becomes `image/jpeg`, and the filename's `.HEIC` extension is
rewritten to `.jpg`. Downstream code (process pipeline, transform
endpoint, sharp) then deals exclusively with formats sharp can
actually decode. Failure path returns HTTP 500 with a clear
`HEIC conversion failed` error so the client knows it wasn't a
generic crash.
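A sniffer of the kind described can be sketched as below. The magic bytes are the published signatures for each format; the function body is a reconstruction, not the actual `services/sniff.ts`:

```typescript
// Returns an image MIME from magic bytes, or null so the caller can
// fall back to whatever the browser claimed.
function sniffImageMime(bytes: Uint8Array): string | null {
  const ascii = (off: number, len: number) =>
    String.fromCharCode(...Array.from(bytes.subarray(off, off + len)));

  if (bytes.length < 12) return null;
  if (bytes[0] === 0xff && bytes[1] === 0xd8 && bytes[2] === 0xff) return "image/jpeg";
  if (bytes[0] === 0x89 && ascii(1, 3) === "PNG") return "image/png";
  if (ascii(0, 4) === "GIF8") return "image/gif";
  if (ascii(0, 4) === "RIFF" && ascii(8, 4) === "WEBP") return "image/webp";
  if (ascii(0, 2) === "BM") return "image/bmp";
  if (ascii(0, 4) === "II*\0" || ascii(0, 4) === "MM\0*") return "image/tiff";
  // ISO-BMFF: 4-byte box size, then "ftyp", then the brand at offset 8.
  if (ascii(4, 4) === "ftyp") {
    const brand = ascii(8, 4);
    if (brand === "heic" || brand === "heix") return "image/heic";
    if (brand === "mif1" || brand === "msf1") return "image/heif";
    if (brand === "avif") return "image/avif";
  }
  return null;
}
```

This is exactly the "trust magic bytes over `file.type`" behaviour: Chrome's empty-type HEIC upload sniffs as `image/heic` from the `ftyp` box alone.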
Bonus: transform-endpoint hardening. The `mimeType.startsWith('image/')`
gate now also accepts a row whose stored MIME is wrong (legacy
`application/octet-stream` from before this fix) when the actual
bytes sniff as an image. Lets old broken rows still serve where
the format itself is decodable; the upload-side fix prevents new
ones from existing.
Sharp 0.33 on this machine reports `heif: 1.18.2` for the container
but rejects the actual HEVC compressed bitstream — confirmed by the
exact error string `No decoding plugin installed for this
compression format (11.6003)`. Going through `heic-convert` first
sidesteps that entirely.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
SharedLinkControls now renders a lazy QR code (qrcode npm) and a
datetime-local "Läuft ab" (expires at) picker. Both stay in sync with the active
URL — regenerating the link rebuilds the QR; clearing the expiry
re-publishes with no `expiresAt`.
Wired across all three unlisted collections:
- Calendar: LocalEvent.unlistedExpiresAt + setUnlistedExpiry +
preserve-on-refresh + clear-on-flip; both Workbench DetailView and
EventDetailModal pass expiresAt+onExpiryChange to SharedLinkControls.
- Library: same pattern in libraryEntriesStore + DetailView.
- Places: same pattern in placesStore + DetailView.
setVisibility clears any prior expiry so a flip-away-flip-back gets
a fresh "never expires" link. refreshUnlistedSnapshot and
regenerateUnlistedToken preserve the existing expiry so a content
edit or token rotation never silently extends a link's lifetime.
The qrcode dep ships as a regular `dependencies` entry on
@mana/shared-privacy so any consuming app picks it up via the
workspace.
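The preserve-vs-clear rule condenses into one pure decision function plus the input conversion; `nextExpiry`/`expiryFromInput` are illustrative names, and the QR side is just qrcode's `toDataURL` over the active URL:

```typescript
// Which expiry the next publish should carry, per operation:
//  - a visibility flip-away-flip-back mints a fresh "never expires" link
//  - snapshot refresh and token rotation preserve the existing expiry,
//    so a content edit never silently extends a link's lifetime
type LinkOp = "flip" | "refresh" | "rotate";

function nextExpiry(op: LinkOp, current?: number): number | undefined {
  return op === "flip" ? undefined : current;
}

// datetime-local inputs have no zone; an empty string means "no expiry".
function expiryFromInput(value: string): number | undefined {
  return value === "" ? undefined : new Date(value).getTime();
}
```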
Note: an unrelated svelte-check error in writing/components/DraftCard
("draft" not assignable to DragType) exists from a parallel session
and is not introduced by this commit.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
First consumer of @mana/shared-privacy. Library entries now carry an
explicit VisibilityLevel the owner can flip from the detail view via
<VisibilityPicker>; embed resolver gates hard on canEmbedOnWebsite so
only entries the user marked 'public' appear on published websites.
Replaces the M1/old flow — the library embed used to pass-through
`filter.isFavorite` as a weak proxy for "show on my site". That filter
still works as an additional user-facing filter, but it can no longer
override the visibility gate (fixes a real leak: a favourited private
book would have ended up on the public snapshot).
Changes:
- @mana/shared-privacy added to the web-app's dependency list
- LocalLibraryEntry + LibraryEntry gain visibility / unlistedToken /
visibilityChangedAt / visibilityChangedBy fields. Legacy rows
(pre-migration) fall back to 'space' via the toLibraryEntry
converter — matches the Dexie hook's existing structural default
and maps to the space-foundation semantics unchanged
- libraryEntriesStore.createEntry stamps defaultVisibilityFor(active
space.type) explicitly so personal-space entries default to
'private' instead of the generic 'space' fallback
- libraryEntriesStore.setVisibility(id, level): flips the field,
mints/clears the unlisted token on the transition boundary, emits
the cross-module VisibilityChanged domain event
- Event catalog registers VisibilityChanged with the payload type
re-exported from @mana/shared-privacy (kept under a dedicated
"Visibility (Cross-Module)" section — this is the first of many
modules that will emit it)
- Library DetailView header gains the <VisibilityPicker> next to the
kind-pill, so "who sees this?" is visible at a glance
- embeds.ts resolveLibraryEntries replaces its favourite-proxy gate
with canEmbedOnWebsite. User filters (kind/status/favorite) still
stack on top but cannot relax the visibility requirement
- ListView's inline-create EntryForm seed ships with
visibility: 'private' so the type asserts cleanly and the preview
entry matches the safe default
No schema migration needed — the visibility column already exists on
every space-scoped Dexie record (Spaces-Foundation v28). The Dexie
hook's 'space' default still fires for rows the library store doesn't
pre-populate (e.g. legacy paths); setVisibility and createEntry now
own the intent.
What's verified:
- pnpm check (web): 7450 files, 0 errors, 0 warnings
- pnpm test library + website: 23/23 passing
- @mana/shared-privacy: 15/15 passing (re-ran after the dep pull)
- pnpm run validate:all: theme-tokens, theme-parity, crypto-registry,
encrypted-tools all green
Next in the rollout: M3 Picture (swap the picture.board isPublic
flag for visibility and update the board embed to use
canEmbedOnWebsite). See docs/plans/visibility-system.md.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Scaffold the unified visibility/privacy layer introduced by docs/plans/
visibility-system.md. No module adopts it yet — this is the foundation
PR (M1). Module rollout lands in follow-ups starting with Library (M2).
What ships:
- @mana/shared-privacy package
- VisibilityLevel enum ('private' | 'space' | 'unlisted' | 'public')
- VisibilityLevelSchema + UnlistedTokenSchema (zod)
- defaultVisibilityFor(spaceType): personal → private, else → space
- predicates: canEmbedOnWebsite, isReachableByLink,
isVisibleToSpaceMember, canAiAccessCrossUser (always false in P1)
- generateUnlistedToken() — 32-char base64url, CSPRNG, ~192 bits
- VISIBILITY_METADATA: German labels + descriptions + phosphor icon
names so non-UI surfaces (audit logs, CLI) label levels consistently
- <VisibilityPicker> svelte component: compact lock/globe trigger with
4-option menu, full descriptions, optional compact + disabledLevels
- VisibilityChangedPayload type for the domain-event catalog (consumer
registers it when the first module adopts the system)
- .claude/guidelines/visibility.md — step-by-step for module authors
(schema migrations + store wiring + picker placement + embed resolver +
legacy isPublic migration), with a pre-PR checklist
- Plan-doc "Offene Fragen" (open questions) section rewritten as
"Designentscheidungen" (design decisions), with the seven resolutions
the user approved
- CLAUDE.md: shared-privacy listed in the packages table; visibility.md
listed in the guidelines table
- 15 unit tests covering predicates (one-and-only-one 'public' for
embed; phase-1 AI always-deny), defaults (personal vs multi-member,
null fallback), token uniqueness + schema round-trip
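Three of the primitives above are small enough to sketch. 24 CSPRNG bytes encode to exactly 32 base64url characters (192 bits), matching the stated token shape; the set of multi-member space types is borrowed from elsewhere in this log and is an assumption here, as is the Buffer-based encoding:

```typescript
import { webcrypto } from "node:crypto";

type VisibilityLevel = "private" | "space" | "unlisted" | "public";

// 24 CSPRNG bytes -> exactly 32 base64url chars -> 192 bits of entropy.
function generateUnlistedToken(): string {
  const bytes = webcrypto.getRandomValues(new Uint8Array(24));
  return Buffer.from(bytes).toString("base64url");
}

// Space types assumed from the wardrobe commit's allowlists.
const MULTI_MEMBER = new Set(["family", "team", "club", "brand", "practice"]);

// Deny-by-default: personal, null, and unknown types all start private.
function defaultVisibilityFor(spaceType: string | null): VisibilityLevel {
  if (spaceType !== null && MULTI_MEMBER.has(spaceType)) return "space";
  return "private";
}

// One and only one level may reach a published website.
const canEmbedOnWebsite = (v: VisibilityLevel): boolean => v === "public";
```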
Key constraints honored:
- `visibility` stays plaintext (NOT added to the encryption registry)
so RLS predicates and publish resolvers can read it without the user's
master key
- Publish flow remains "decrypt client-side, inline plaintext into
snapshot" — the pattern picture.board already uses in embeds.ts
- Deny-by-default everywhere (personal default = private; unknown space
type defaults to private; cross-user AI always false)
Not in this PR (per plan):
- No schema migrations in any module (M2–M6)
- No RLS predicate updates (arrives with M2)
- No /settings/privacy overview (M7)
- No unlisted share routes (M8)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two interlocking fixes driven by a production lockout incident.
## Bug that motivated this
A fresh schema-drift column (auth.users.onboarding_completed_at) made
every Better Auth query crash with Postgres 42703. The /login wrapper
swallowed the non-2xx and mapped it onto a generic "401 Invalid
credentials" AND bumped the password lockout counter — so 5 legit
login attempts against a broken DB would have locked every real user
out of their own account. Same wrapper pattern on /register, /refresh,
/reset-password etc. The 30-minute hunt ended in a one-off repro
script that finally surfaced the real Postgres error.
The user-facing passkey button additionally returned generic 404s on
every login-page mount because the route wasn't registered (the DB
schema existed, the Better Auth plugin wasn't wired).
## Phase 1 — Error classification (services/mana-auth/src/lib/auth-errors)
- 19-code AuthErrorCode taxonomy (INVALID_CREDENTIALS, EMAIL_NOT_VERIFIED,
ACCOUNT_LOCKED, SERVICE_UNAVAILABLE, PASSKEY_VERIFICATION_FAILED, …)
- classifyFromResponse/classifyFromError handle: Better Auth APIError
(duck-typed on `name === 'APIError'`), Postgres errors (23505 unique,
42703/08xxx → infra), ZodError, fetch/ECONNREFUSED network errors,
bare Error, unknown.
- respondWithError routes the structured response, logs at the right
level, fires the correct security event, and CRITICALLY only bumps
the lockout counter for actual credential failures — SERVICE_UNAVAILABLE
and INTERNAL never touch lockout.
- All 12 endpoints in routes/auth.ts refactored (/login, /register,
/logout, /session-to-token, /refresh, /validate, /forgot-password,
/reset-password, /resend-verification, /profile GET+POST,
/change-email, /change-password, /account DELETE).
- Fixed pre-existing auth.api.forgetPassword typo (→ requestPasswordReset).
- shared-logger + requestLogger middleware wired in index.ts; all
console.* calls in the service removed.
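The lockout-safety invariant at the heart of Phase 1 can be sketched as two tiny functions; the error codes are quoted from the commit, the classification bodies are a reconstruction:

```typescript
type AuthErrorCode =
  | "INVALID_CREDENTIALS"
  | "EMAIL_NOT_VERIFIED"
  | "ACCOUNT_LOCKED"
  | "SERVICE_UNAVAILABLE"
  | "INTERNAL";

// Sketch of the Postgres branch: schema drift (42703 undefined_column)
// and connection-class errors (08xxx) are infrastructure, never the
// user's fault.
function classifyPostgres(code: string): AuthErrorCode {
  if (code === "42703" || code.startsWith("08")) return "SERVICE_UNAVAILABLE";
  return "INTERNAL";
}

// The invariant the regression test pins down: only genuine credential
// failures may touch the lockout counter.
function bumpsLockout(code: AuthErrorCode): boolean {
  return code === "INVALID_CREDENTIALS";
}
```

With this split, five login attempts against a broken DB classify as SERVICE_UNAVAILABLE and leave the counter at zero, which is exactly the incident scenario above.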
## Phase 2 — Passkey end-to-end (@better-auth/passkey 1.6+)
- sql/007_passkey_bootstrap.sql: idempotent schema alignment —
friendly_name→name, +aaguid, transports jsonb→text, +method column
on login_attempts.
- better-auth.config.ts: passkey plugin wired with rpID/rpName/origin
from new webauthn config section. rpID defaults to mana.how in prod
(from COOKIE_DOMAIN), localhost in dev.
- routes/passkeys.ts: 7 wrapper endpoints (capability probe,
register/options+verify, authenticate/options+verify with JWT mint,
list, delete, rename). Each routes errors through the classifier;
authenticate/verify promotes generic INVALID_CREDENTIALS to
PASSKEY_VERIFICATION_FAILED.
- PasskeyRateLimitService: in-memory per-IP (options: 20/min) and
per-credential (verify: 10 failures/min → 5 min cooldown) buckets.
Deliberately separate from the password lockout — different factor,
different blast radius.
- Client: authService.getPasskeyCapability() async probe, memoised per
session. authStore.passkeyAvailable reactive state. LoginPage gates
on === true so a slow probe doesn't flash the button in.
- AuthResult grew a code: AuthErrorCode field; handleAuthError in
shared-auth prefers the server envelope over the legacy message
heuristics.
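The in-memory buckets can be sketched with an injected clock for testability. The class name and the 20/min options limit come from the commit; the fixed-window strategy is an assumption (a sliding window would also fit the description):

```typescript
// Fixed-window in-memory limiter; now() is injected for testability.
class PasskeyRateLimitService {
  private hits = new Map<string, { windowStart: number; count: number }>();

  constructor(
    private limit: number,
    private windowMs: number,
    private now: () => number = Date.now,
  ) {}

  // Returns true when the call is allowed, false when the bucket is full.
  allow(key: string): boolean {
    const t = this.now();
    const bucket = this.hits.get(key);
    if (!bucket || t - bucket.windowStart >= this.windowMs) {
      this.hits.set(key, { windowStart: t, count: 1 });
      return true;
    }
    bucket.count += 1;
    return bucket.count <= this.limit;
  }
}
```

The per-credential verify bucket would be a second instance with its own limit and cooldown, keeping the two factors' blast radii separate as described.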
## Tests
- 30 unit tests for the classifier covering every branch (including
the exact Postgres 42703 shape that started this).
- 9 unit tests for the rate limiter.
- 14 integration tests for the auth routes — the regression test
explicitly asserts "upstream 500 → 503 + zero lockout bumps".
- 101 tests pass, 0 fail, 30 pre-existing skips unchanged.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
M1 of docs/plans/wardrobe-module.md — pure data layer + backend plumbing,
zero UI (that's M2). A user can now hold a digital wardrobe per space:
brand merch, club jerseys, the family wardrobe, team costumes, a practice
dress code, and the personal closet all live as separate pools under the same
Dexie tables, space-scoped like tags/scenes/agents after Phase 2c.
Data model — two tables, no join:
- wardrobeGarments (Dexie v41): single clothing items / accessories.
Indexed on `category` + `createdAt` + `isArchived`. Encrypted:
name/brand/color/size/material/tags/notes. Plaintext: category,
mediaIds, counters, timestamps — all indexed or structural.
`mediaIds[0]` is the primary photo used for try-on; additional
ids are alternate views (back, detail) for M7.
- wardrobeOutfits (Dexie v41): named compositions referencing
garment ids. Encrypted: name/description/tags. Plaintext:
garmentIds (FK array), occasion (closed enum — useful for
undecrypted filtering), season, booleans, lastTryOn snapshot.
- picture.images gains `wardrobeOutfitId?: string | null` as a
plaintext back-reference. Try-on results land in the Picture
gallery like any other generation; the outfit detail view
queries them via this id rather than maintaining a third table.
Space scope:
- `wardrobe` added to all five explicit allowlists in shared-types/
spaces.ts (personal is wildcard, no edit needed). Each space type
gets a one-line comment explaining the real-world use case.
- App registry: `wardrobe` entry in shared-branding/mana-apps.ts
with a rose→fuchsia gradient icon (T-shirt on hanger silhouette),
color #e11d48, tier 'beta', status 'beta'.
- Module registry: wardrobeModuleConfig imported + appended to
MODULE_CONFIGS so SYNC_APP_MAP picks it up automatically.
Backend:
- MAX_REFERENCE_IMAGES bumped 4 → 8 in picture/generate-with-
reference (plus the client-side default in ReferenceImagePicker).
Justified with a comment: face + body + top + bottom + shoes +
outerwear + 2 accessories = 8. Cost doesn't scale with ref count
(OpenAI bills per output), so the bump is a pure capability
expansion with no credit-side risk.
- New POST /api/v1/wardrobe/garments/upload wraps uploadImageToMedia
with app='wardrobe'. Registered under /api/v1/wardrobe in index.ts.
Pattern 1:1 with the profile/me-images/upload endpoint; tier-gating
falls out of wardrobe NOT being in RESOURCE_MODULES (tier='guest'
works — consistent with picture's plain CRUD).
Stores emit domain events (WardrobeGarmentAdded, WardrobeOutfitCreated,
WardrobeOutfitTryOn, etc.) so later mana-ai missions can observe
activity without polling.
No UI in this commit. M2 (Garments-Grundlayer) wires the route + grid
+ upload-zone; M3 the Outfit composer; M4 the Try-On integration.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Enables the M1 parallel-reads optimisation on the webapp side. Both
consumers of runPlannerLoop pass an isParallelSafe predicate derived
from the tool catalog:
isParallelSafe: (name) =>
AI_TOOL_CATALOG_BY_NAME.get(name)?.defaultPolicy === 'auto'
Auto-policy tools (list_tasks, get_habits, nutrition_summary, …) run
via Promise.all in batches of 10 when the LLM fans them out in one
round. Propose-policy tools — which surface to the user as Proposal
cards — stay sequential so intent ordering in the inbox is preserved
and pre-execute guardrails can reason about prior-step state.
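The dispatch rule reduces to a pure partition plus fixed-size chunking for the Promise.all batches; only `isParallelSafe` and the batch size of 10 come from the commit, the helper names are illustrative:

```typescript
interface ToolCall { name: string }

// Splits one round of tool calls: auto-policy tools may fan out in
// parallel, propose-policy tools stay sequential and ordered.
function partitionCalls(
  calls: ToolCall[],
  isParallelSafe: (name: string) => boolean,
): { parallel: ToolCall[]; sequential: ToolCall[] } {
  const parallel: ToolCall[] = [];
  const sequential: ToolCall[] = [];
  for (const c of calls) (isParallelSafe(c.name) ? parallel : sequential).push(c);
  return { parallel, sequential };
}

// Fixed-size chunks; each chunk would go through one Promise.all.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}
```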
Tests: 31 existing companion + mission tests pass unchanged; the
parallel path is exercised via the new loop.test.ts cases shipped
with the M1 commit.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Foundation for autonomous Claude-driven testing. Plan:
docs/plans/mana-mcp-and-personas.md.
New packages
- @mana/tool-registry — schema-first ToolSpec<InputSchema, OutputSchema>
with zod generics, scope ('user-space' | 'admin') and policyHint
('read' | 'write' | 'destructive'). sync-client helpers speak the
mana-sync push/pull protocol directly so RLS and field-level LWW are
preserved. MasterKeyClient fetches per-user MKs via the existing
mana-auth GET /api/v1/me/encryption-vault/key endpoint (JWT-gated,
ZK-aware, already audited) — no new service-key endpoint built.
ZeroKnowledgeUserError surfaced as a typed throw.
- @mana/shared-crypto — AES-GCM-256 primitives extracted from the web
app's $lib/data/crypto/aes.ts so the server-side tool handlers and the
browser produce byte-for-byte identical wire format
(enc:1:{b64(iv)}.{b64(ct)}). Web app aes.ts now re-exports from
shared-crypto — 5 existing importers unchanged, svelte-check stays
green.
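The wire format can be pinned down with a serializer/parser pair. The `enc:1` framing is quoted from the commit; the choice of standard base64 and these function names are assumptions, and the AES-GCM calls themselves (via `crypto.subtle` / node webcrypto) are elided:

```typescript
// Serializes iv + ciphertext into the shared wire format
// enc:1:{b64(iv)}.{b64(ct)} that browser and server must both produce.
function serializeWire(iv: Uint8Array, ct: Uint8Array): string {
  const b64 = (b: Uint8Array) => Buffer.from(b).toString("base64");
  return `enc:1:${b64(iv)}.${b64(ct)}`;
}

// "." is a safe separator because base64 never contains it.
function parseWire(wire: string): { iv: Uint8Array; ct: Uint8Array } {
  const m = /^enc:1:([A-Za-z0-9+/=]+)\.([A-Za-z0-9+/=]+)$/.exec(wire);
  if (!m) throw new Error("not an enc:1 payload");
  return {
    iv: new Uint8Array(Buffer.from(m[1], "base64")),
    ct: new Uint8Array(Buffer.from(m[2], "base64")),
  };
}
```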
New service
- services/mana-mcp (:3069, Bun/Hono) — MCP Streamable HTTP gateway.
JWKS auth against mana-auth, per-user session isolation (session-id
belongs to the user who opened it — cross-user access returns 403),
admin-scoped tools filtered out before registration. MasterKeyClient
cached per process with a 5-minute TTL.
11 tools registered
- habits.{create,list,update,archive}, spaces.list (plaintext, M1)
- todo.{create,list,complete}, notes.{create,search}, journal.add
(encrypted — field lists match
apps/mana/apps/web/src/lib/data/crypto/registry.ts verbatim)
Infra
- Port 3069 added to docs/PORT_SCHEMA.md
- services/mana-mcp/CLAUDE.md with architecture, auth model,
tool-authoring recipe, local smoke-test steps
- Root CLAUDE.md services list updated
Type-check green across shared-crypto, mana-tool-registry, mana-mcp.
svelte-check on apps/mana/apps/web stays at 0 errors / 0 warnings.
Boot smoke verified: /health returns registry.loaded=true, unauthed
/mcp → 401, invalid-JWT /mcp → 401 with descriptive message.
Decisions locked in for later milestones (per plan D1–D10):
- Personas will be real mana-auth users (users.kind='persona'), no
service-key bypass (D1, D2)
- Tool-registry is the SSOT; mana-ai and the legacy
apps/api/src/mcp/server.ts get merged into it in M4 (three current
parallel tool catalogs collapse to one)
- Persona-runner (:3070) will be a separate service using the Claude
Agent SDK + MCP client (D5)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Opt-in path for missions that want Gemini Deep Research Max (up to 60 min
per task) instead of the shallow RSS pre-research. Because Max runs well
past a single 60-second tick, the state is carried across ticks:
tick N: submit → INSERT mission_research_jobs row → skip planner
tick N+k: poll → still running → skip planner (metric pending_skips)
tick N+m: poll → completed → inject as ResolvedInput, DELETE row, plan
- ManaResearchClient talks to mana-research's new internal
/v1/internal/research/async endpoints with X-Service-Key +
X-User-Id. Graceful-null on transport errors so a flaky
mana-research never crashes the tick loop.
- New table mana_ai.mission_research_jobs with PK (user_id, mission_id)
— presence is the "pending" flag; delete-on-terminal keeps queries
trivial.
- handleDeepResearch() encapsulates the state machine; planOneMission
now returns a discriminated union (planned | skipped | failed) so
"research pending" isn't miscounted as a parse failure.
- Opt-in at TWO gates to keep cost in check ($3–7/task, 1500 credits
per run):
1. MANA_AI_DEEP_RESEARCH_ENABLED=true server-side (default off)
2. DEEP_RESEARCH_TRIGGER regex matches the mission objective
(strict: "deep research", "tiefe recherche", "umfassende
recherche", "hintergrundrecherche", "deep dive")
Falls back to shallow RSS when either gate fails or the submit
errors upstream.
- Prom metrics: mana_ai_research_jobs_{submitted,completed,failed}_total
labelled by provider, plus _pending_skips_total.
- docker-compose wires MANA_RESEARCH_URL + the opt-in flag and adds
mana-research to depends_on.
- Full write-up with real API response shape (outputs plural, not
OpenAI-style), step-3 MCP-server plan (security-gated, not built),
ops + kill-switch: docs/reports/gemini-deep-research.md.
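The tick-spanning state machine reduces to a small pure function; the trigger phrases are quoted from the commit, while the outcome names and function shape are a reconstruction of the described discriminated union:

```typescript
// Strict opt-in trigger (phrases from the commit).
const DEEP_RESEARCH_TRIGGER =
  /deep research|tiefe recherche|umfassende recherche|hintergrundrecherche|deep dive/i;

type JobState = "none" | "running" | "completed";
type TickOutcome =
  | { kind: "submit" }                       // tick N: insert job row, skip planner
  | { kind: "skip"; reason: "pending" }      // tick N+k: still running
  | { kind: "plan"; withResearch: boolean }  // tick N+m: inject result, delete row
  | { kind: "plan-shallow" };                // gates closed: RSS fallback

function deepResearchTick(
  enabled: boolean,
  objective: string,
  job: JobState,
): TickOutcome {
  if (job === "running") return { kind: "skip", reason: "pending" };
  if (job === "completed") return { kind: "plan", withResearch: true };
  if (enabled && DEEP_RESEARCH_TRIGGER.test(objective)) return { kind: "submit" };
  return { kind: "plan-shallow" };
}
```

Returning a union instead of a boolean is what keeps "research pending" from being miscounted as a parse failure.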
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
End-to-end send path lives: click "Jetzt senden" (send now) in step 4 → client
resolves recipients → POST /v1/mail/bulk-send → mana-mail loops through
JMAP with per-recipient signed URLs → status flips draft → sent.
mana-mail (backend)
- New Postgres schema `broadcast.{campaigns,sends,events}` in Drizzle.
Campaigns + sends keyed on the webapp's local ids so joins are free;
events append-only with send_id FK, dedup at query-time not write-time
so tracking pixel hits don't contend on a transaction.
- tracking-token.ts: HMAC-SHA256 over JSON({campaignId, sendId, nonce}),
base64url encoded. JSON inner payload instead of delimiter
splits so IDs can contain any character. timingSafeEqual for the HMAC
comparison. 9 unit tests covering roundtrip / tamper / malformed.
- broadcast-orchestrator.ts: takes pre-resolved recipient list, inlines
CSS once via juice (webResources.images=false so no external fetches
slow the loop), per-recipient substitutes `{{unsubscribe_url}}` /
`{{web_view_url}}` + injects open pixel, submits each mail through
the user's own JMAP account. Writes sends rows first (status=queued)
so a crash mid-loop leaves truthful DB state. Returns aggregate
stats + per-email errors.
- Routes: POST /v1/mail/bulk-send (JWT, cap at 5000 recipients via
zod + config), GET /v1/mail/campaigns/:id/events (JWT, aggregates
opens + clicks + unsubscribes with COUNT DISTINCT for the "unique"
metric), GET/POST /v1/track/{open,click,unsubscribe}/:token (public,
no auth, signed URL is the only gate).
- Track routes mounted OUTSIDE /api/v1/mail/* because the JWT
middleware guards that subtree — recipients aren't logged in.
- Config: BROADCAST_TRACKING_SECRET (separate from SERVICE_KEY so the
blast radius of a leak stays narrow),
BROADCAST_MAX_RECIPIENTS_PER_CAMPAIGN (default 5000),
BROADCAST_MAX_RECIPIENTS_PER_HOUR (default 500, not yet enforced).
- Added juice@^11 dependency.
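The token scheme can be sketched with node:crypto. JSON inner payload, base64url encoding, and timingSafeEqual are all stated in the commit; key handling, the nonce width, and the exact `body.mac` token layout are assumptions:

```typescript
import { createHmac, randomBytes, timingSafeEqual } from "node:crypto";

interface TrackingPayload { campaignId: string; sendId: string; nonce: string }

// token = base64url(json) + "." + base64url(hmac-sha256(json))
// JSON framing (not delimiter splits) lets IDs contain any character.
function signTrackingToken(p: Omit<TrackingPayload, "nonce">, secret: string): string {
  const body = JSON.stringify({ ...p, nonce: randomBytes(8).toString("hex") });
  const mac = createHmac("sha256", secret).update(body).digest();
  return `${Buffer.from(body).toString("base64url")}.${mac.toString("base64url")}`;
}

function verifyTrackingToken(token: string, secret: string): TrackingPayload | null {
  const [b, m] = token.split(".");
  if (!b || !m) return null;
  const body = Buffer.from(b, "base64url").toString("utf8");
  const expect = createHmac("sha256", secret).update(body).digest();
  const got = Buffer.from(m, "base64url");
  // Length check first: timingSafeEqual throws on unequal lengths.
  if (got.length !== expect.length || !timingSafeEqual(got, expect)) return null;
  try { return JSON.parse(body) as TrackingPayload; } catch { return null; }
}
```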
Webapp (client)
- api.ts: sendCampaign() resolves the audience from Dexie contacts,
renders the full email HTML + plaintext with placeholders, POSTs to
mana-mail. Contacts NEVER leave the client decrypted — the server
only sees the flat recipient list the user's client produced.
- fetchCampaignStats() for M7 dashboard/detail polling.
- ComposeView step 4 replaced: confirmation modal with "sicher?"
question, sending state with spinner, done state with delivered
count + expandable per-email error list + "Zur Übersicht" button.
- Status transitions to 'sent' with cached stats after successful
send via applyServerStatus.
Known M4 gaps (to be filled in M5)
- Open/click/unsubscribe track endpoints return valid responses but
event dedup is rough — one insert per hit, dedup at query time
only. M5 adds windowed IP-hash dedup.
- Synchronous send loop. 100 recipients ≈ 15s blocking. M5/M6 moves
this to an async job queue with SSE progress.
- Each recipient generates a "Sent" folder entry in the user's
Stalwart mailbox. Fine for 50-recipient newsletters, silly for
5000. Phase 2 carves out a dedicated broadcast mailbox.
Plan: docs/plans/broadcast-module.md §M4.
Next: M5 open/click tracking with dedup + rate-limits.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Core authoring loop works end-to-end: create a draft, filter an audience
from contacts, write content in a rich-text editor, save. Send is still
stubbed (M4 gets mana-mail's bulk endpoint).
Dependencies
- @tiptap/core + starter-kit + image + link + placeholder (3.22.4)
- shared-auth/tsconfig: allowImportingTsExtensions +
rewriteRelativeImportExtensions so tsc accepts shared-types' explicit
.ts imports. The broken config was blocking EVERY pnpm install postinstall hook in the
repo — fixing it here unblocks everyone, not just broadcast.
Module
- queries.ts: useAllCampaigns / useAllTemplates with scoped-db + crypto,
computeStats (counts + open/click rates per year), formatRate helper
- stores/settings.svelte.ts: singleton with ensure/get/update, same
pattern as invoices settings
- stores/campaigns.svelte.ts: createCampaign (pulls sender defaults from
settings), updateCampaign / updateContent / updateAudience (draft-only
edit guard), schedule / cancel / duplicate / deleteCampaign, plus an
applyServerStatus hook for M4's orchestrator to write back progress
Audience
- audience/segment-builder.ts: pure matchContact / filterAudience /
countAudience / describeAudience. AND semantics across filters. Drops
contacts without a usable email so estimatedCount never inflates.
- audience/AudienceBuilder.svelte: tag-chip UI with live count, dedup
(same tag twice toggles op instead of stacking), greys out already-
referenced tags in the picker
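The segment-builder semantics can be sketched as pure functions. The Contact and Filter types here are simplified stand-ins for the real shapes:

```typescript
// Simplified sketch of matchContact / filterAudience: AND semantics across
// filters, case-insensitive email matching, no-email contacts dropped.
interface Contact { email?: string; tags: string[] }
type Filter =
  | { kind: 'tag'; op: 'has' | 'not-has'; tag: string }
  | { kind: 'email'; op: 'eq' | 'contains'; value: string };

function matchContact(contact: Contact, filters: Filter[]): boolean {
  return filters.every((f) => {
    if (f.kind === 'tag') {
      const has = contact.tags.includes(f.tag);
      return f.op === 'has' ? has : !has;
    }
    const email = (contact.email ?? '').toLowerCase();
    const value = f.value.toLowerCase(); // case-insensitive per the tests
    return f.op === 'eq' ? email === value : email.includes(value);
  });
}

function filterAudience(contacts: Contact[], filters: Filter[]): Contact[] {
  // contacts without a usable email are excluded so estimatedCount never inflates
  return contacts.filter((c) => !!c.email?.trim() && matchContact(c, filters));
}
```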
Editor
- editor/Editor.svelte: Tiptap wrapper with onMount / onDestroy, toolbar
(bold/italic/H1/H2/lists/link/image), bind on content (Tiptap JSON +
derived HTML/plaintext). Image upload reuses invoices' mana-media
uploader pragmatically; extract to @mana/shared-upload later.
Compose wizard
- views/ComposeView.svelte: 4-step stepper (Audience → Content →
Preflight → Send). Steps 3+4 stubbed pragmatically. Autosave on step
change so content survives navigation. Step 3/4 gated on earlier
readiness so the user can't skip.
Routes
- /broadcasts/new: bootstraps a draft + redirects to edit
- /broadcasts/[id]/edit: guarded on status=='draft'
- ListView: working "+ Neue Kampagne" button, rows open edit
Tests
- 17 unit tests for segment-builder covering tag has/not-has/AND,
email eq/contains case-insensitivity, no-email filtering, no-mutation,
describeAudience resolver + fallback
Plan: docs/plans/broadcast-module.md §M2.
Next: M3 HTML-render with email-safe inlining + preview.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Moves the canonical SpaceType + SPACE_MODULE_ALLOWLIST to @mana/shared-types
(framework-free) so the Bun services can consume them without pulling in
Svelte. shared-branding keeps only the UI-facing labels and descriptions
and re-exports the canonical types for frontend convenience.
Wires two Better Auth organization hooks in mana-auth:
- beforeCreateOrganization asserts metadata.type is a valid SpaceType,
rejecting the create with a BAD_REQUEST otherwise.
- beforeDeleteOrganization rejects deletion of the personal space.
Covered by bun tests (11 assertions) for the helper module.
No migration and no schema change — type lives in the existing
organization.metadata jsonb column.
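The beforeCreateOrganization guard boils down to a metadata.type check. A minimal sketch, assuming the canonical union lives in @mana/shared-types — the SpaceType members here are illustrative ('personal' appears in the commit; 'shared' is hypothetical):

```typescript
// Hypothetical members; the real SpaceType union lives in @mana/shared-types.
const SPACE_TYPES = ['personal', 'shared'] as const;
type SpaceType = (typeof SPACE_TYPES)[number];

function isSpaceType(value: unknown): value is SpaceType {
  return typeof value === 'string' && (SPACE_TYPES as readonly string[]).includes(value);
}

// The real hook rejects with BAD_REQUEST; here we only signal validity.
function validateOrganizationMetadata(metadata: { type?: unknown }): boolean {
  return isSpaceType(metadata.type);
}
```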
Plan: docs/plans/spaces-foundation.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds client-side PDF generation via pdf-lib (Helvetica standard fonts,
~7KB output, no font bytes shipped).
Renderer (pdf/renderer.ts)
- renderInvoicePdf(invoice, settings) → Uint8Array
- renderInvoicePdfBlob(...) → Blob for iframe / download / email attach
- Layout sections: header (sender + meta), recipient, subject, lines
table with wrapping + description row, totals with per-rate VAT
breakdown, notes, terms, footer
- Pagination: lines table opens a continuation page if content would
overflow into the QR-Bill reserved area; continuation pages redraw
the table header
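The overflow decision driving pagination can be reduced to one predicate. A sketch under the assumption that the layout cursor is measured in mm from the page bottom (the real renderer tracks a richer state):

```typescript
// 105mm at the page bottom stays free for the Swiss QR-Bill (see template).
const QR_BILL_RESERVED_MM = 105;

// Hypothetical helper: would drawing the next row enter the reserved region?
function needsContinuationPage(cursorY: number, nextRowHeight: number): boolean {
  return cursorY - nextRowHeight < QR_BILL_RESERVED_MM;
}
```

When this returns true, the renderer opens a continuation page and redraws the table header before placing the row.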
Template (pdf/templates/default.ts)
- A4, margins in mm, emerald accent matching app icon
- Reserves 105mm at page bottom for the Swiss QR-Bill (M5) so the
body never collides with that region
DetailView integration
- Live PDF preview in an iframe — re-renders when invoice.updatedAt
changes (mutations bump the timestamp)
- Blob URLs revoked on render / unmount to avoid memory leaks
- "PDF herunterladen" button produces a Rechnung-{number}.pdf download
- Structured-data view moved behind <details> so the PDF is the primary
surface; raw data still accessible for debugging
pdf-lib dep added to @mana/web.
Plan: docs/plans/invoices-module.md §M4.
Next: M5 swissqrbill (Zahlteil in the reserved region).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Six Expo mobile apps lagged behind their web counterparts and hadn't
shipped updates. Keeping them in the repo kept CI noisy (the context/
mobile type errors were only unmasked after yesterday's postinstall
fix), and they blocked other cleanup (parallel lockfile entries, dead
scripts). Removing them since the web surface under mana.how is the
active product.
Deleted (~175 MB, ~700 files):
- apps/cards/apps/mobile
- apps/chat/apps/mobile
- apps/context/apps/mobile (the one still failing type-check)
- apps/mana/apps/mobile
- apps/picture/apps/mobile
- apps/traces/apps/mobile
Kept: apps/memoro/apps/mobile (the only actively-developed mobile app,
tied to the audio-recording native module).
Cleanup:
- Dropped 6 `dev:*:mobile` scripts from root package.json that pointed
at the deleted apps. Other `dev:*:mobile` entries (quotes, contacts,
calendar, mail, moodlit, finance, figgos) already pointed at
non-existent apps before this change — out of scope, a separate
dead-script sweep.
- Root CLAUDE.md: updated the "per-product mobile apps exist" prose
and the repo-layout diagram to reflect the memoro-only reality.
- apps/mana/CLAUDE.md: removed the `mobile/` entry from the apps/
layout box, noted the deletion date, and updated the tech-stack
table to point at the memoro mobile app as the sole Expo surface.
No CI workflow or turbo.json references touched — none existed.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
CI previously ran `pnpm run test || true` — test failures were silently
swallowed with no artifact, so we had no visibility into what was actually
passing across 1,296 test files.
- New `test:coverage` turbo pipeline task + root script; packages that opt
in by declaring their own `test:coverage` get picked up automatically.
- Wired up three high-value Vitest targets: apps/mana/apps/web (main
frontend, ~590 tests), shared-ui (Svelte component library), and
shared-storage (S3 client). Each emits lcov.info + coverage-summary.json
+ browsable HTML.
- apps/mana/apps/web `"test"` was running in watch mode (just `vitest`),
which hangs under turbo orchestration — changed to `vitest run` and
added `test:watch` for the interactive case.
- CI uploads coverage artifacts (14-day retention) regardless of whether
tests passed. `continue-on-error: true` replaces `|| true` so a failed
suite shows up as a warning annotation on the PR rather than being
invisible. Flip to a hard gate once main is green for a full week.
- Testing guideline documents the pattern + the template vitest config
+ the planned 80% threshold.
- ESLint flat-config `vitest.config.ts` ignore only matched at the root;
widened to `**/vitest.config.{ts,js,mjs}` so nested configs don't trip
the project-service parser.
Coverage baseline produced locally:
shared-storage: 91.37% lines (6 files, 123 tests)
shared-ui: 2.87% lines (mostly Svelte components, untested)
apps/mana/web: 9/59 test files fail — pre-existing, not a regression
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The JWT already carried a `tier` claim but nothing on the server read it
— AuthGate enforcement was client-only, so a valid JWT could hit paid
LLM/research endpoints regardless of the user's access tier.
- shared-hono authMiddleware now extracts `tier` into `c.userTier`,
defaulting unknown/missing claims to `public` (never silently grants
higher access).
- New `requireTier(minTier)` middleware + `hasTier`/`getTierLevel`
helpers. Tier hierarchy (guest < public < beta < alpha < founder) is
mirrored locally to avoid pulling the Svelte-facing shared-branding
package into Bun services.
- Applied `requireTier('beta')` as defense-in-depth on resource-heavy
apps/api modules (chat, context, food, guides, news-research, picture,
plants, research, traces, who) and the MCP endpoint. Pure CRUD modules
stay auth-only — access there is gated by ownership, not tier.
- DEV_BYPASS_AUTH now injects `userTier` (defaults to founder, override
via DEV_USER_TIER).
- Authentication guideline documents the pattern + test suite covers
hierarchy, passes-at-minimum, and rejection paths.
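The tier hierarchy helpers can be sketched as pure functions; exact signatures in shared-hono may differ, and the Hono middleware wrapper is omitted here:

```typescript
// Tier hierarchy from the commit: guest < public < beta < alpha < founder.
const TIER_ORDER = ['guest', 'public', 'beta', 'alpha', 'founder'] as const;
type Tier = (typeof TIER_ORDER)[number];

function getTierLevel(tier: string): number {
  const idx = TIER_ORDER.indexOf(tier as Tier);
  // unknown/missing claims default to 'public' — never silently grant higher
  return idx === -1 ? TIER_ORDER.indexOf('public') : idx;
}

function hasTier(userTier: string, minTier: Tier): boolean {
  return getTierLevel(userTier) >= getTierLevel(minTier);
}
```

`requireTier(minTier)` then just reads `c.userTier`, calls `hasTier`, and returns 403 on failure.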
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
New Bun/Hono service on port 3068 that bundles many web-research providers
behind a unified interface for side-by-side comparison. All eval runs
persist in research.* (mana_platform) so quality can be reviewed later.
Providers (Phase 1+2):
search: searxng, duckduckgo, brave, tavily, exa, serper
extract: readability (via mana-search), jina-reader, firecrawl
Endpoints:
POST /v1/search, /v1/search/compare — single + fan-out
POST /v1/extract, /v1/extract/compare — single + fan-out
GET /v1/runs, /v1/runs/:id — history
POST /v1/runs/:run/results/:id/rate — manual eval
GET /v1/providers, /v1/providers/health — catalog + readiness
Auto-routing: when `provider` is omitted, queries are classified via regex
(fast path, 0ms) with optional mana-llm fallback, then routed to the first
available provider for that query type (news → tavily, academic → exa,
semantic → exa, etc.).
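The regex fast path plus routing table can be sketched as follows. The patterns and the provider order are illustrative, not the service's actual rules:

```typescript
type QueryType = 'news' | 'academic' | 'semantic' | 'general';

// Illustrative fast-path classifier (0ms, no LLM call).
function classifyQuery(q: string): QueryType {
  if (/\b(news|today|breaking|latest)\b/i.test(q)) return 'news';
  if (/\b(paper|study|arxiv|doi)\b/i.test(q)) return 'academic';
  if (/\b(like|similar to|related to)\b/i.test(q)) return 'semantic';
  return 'general';
}

// Hypothetical routing table; order encodes preference per query type.
const ROUTE: Record<QueryType, string[]> = {
  news: ['tavily', 'brave'],
  academic: ['exa', 'serper'],
  semantic: ['exa'],
  general: ['searxng', 'duckduckgo'],
};

function pickProvider(q: string, available: Set<string>): string | undefined {
  return ROUTE[classifyQuery(q)].find((p) => available.has(p));
}
```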
Credits: server-key calls go through mana-credits reserve → commit/refund
so failed provider calls don't charge the user. BYO-keys supported via
research.provider_configs (UI arrives in Phase 4).
Cache: Redis with graceful degradation (1h TTL for search, 24h for
extract). Pay-per-use APIs only — no subscription-gated providers.
Docs: docs/plans/mana-research-service.md + docs/reports/web-research-capabilities.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Mount an MCP (Model Context Protocol) server at /api/v1/mcp in the
unified Hono API. External clients like Claude Desktop, Cursor, and
VS Code Copilot can discover and call all 29 Mana tools via the
standard MCP protocol.
Architecture:
- WebStandardStreamableHTTPServerTransport for Bun/Hono compatibility
- AI_TOOL_CATALOG → MCP tool definitions with JSON Schema (via Zod)
- Stateful sessions with Mcp-Session-Id header
- Auth via existing authMiddleware (JWT or API key)
Phase 1 scope: tools/list returns all 29 tools with schemas,
tools/call acknowledges with descriptive messages. Phase 2 will add
actual DB reads/writes via sync_changes.
Usage:
Claude Desktop config:
{"mcpServers": {"mana": {"url": "http://localhost:3060/api/v1/mcp"}}}
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
news-ingester and apps/api both shipped their own copy of rss-parser
+ jsdom + Readability glue. Single source now in packages/shared-rss.
Adds discoverFeeds (rel=alternate + common-paths probe) and validateFeed
which News Research will use. JSDOM virtualConsole is silenced once,
in the package, instead of in two parallel call sites.
- packages/shared-rss: parse, extract, discover, validate
- services/news-ingester: drop local parsers, depend on @mana/shared-rss
- apps/api: drop @mozilla/readability + jsdom direct deps, use shared
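The rel=alternate half of discoverFeeds amounts to scanning document `<link>` tags for feed declarations. A rough sketch — the real package uses a proper parser; a regex pass is only for illustration:

```typescript
// Collect rel=alternate RSS/Atom feed URLs from an HTML document, resolved
// against the page URL. Regex-based, so only a sketch of the idea.
function findAlternateFeeds(html: string, baseUrl: string): string[] {
  const links = html.match(/<link\b[^>]*>/gi) ?? [];
  const feeds: string[] = [];
  for (const tag of links) {
    if (!/rel=["']alternate["']/i.test(tag)) continue;
    if (!/type=["']application\/(rss|atom)\+xml["']/i.test(tag)) continue;
    const href = tag.match(/href=["']([^"']+)["']/i)?.[1];
    if (href) feeds.push(new URL(href, baseUrl).toString());
  }
  return feeds;
}
```

The common-paths probe (`/feed`, `/rss.xml`, …) would then run validateFeed against each candidate.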
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
vite-plugin-pwa's `virtual:pwa-register/svelte` imports workbox-window
at build time. The package was resolved transitively via @vite-pwa/
sveltekit but not installed in the webapp's own node_modules when
pnpm fetches only the workspace deps in a restricted Docker context.
Result: Rollup 'failed to resolve import "workbox-window"' at build.
Pinning the direct dep so the Docker build picks it up in the
filtered pnpm install.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Phase 1 of the Mission Key-Grant rollout. Webapp can now request a
wrapped per-mission data key; mana-ai can unwrap and (Phase 2) use it.
mana-auth:
- POST /api/v1/me/ai-mission-grant — HKDF-derives MDK from the user
master key, RSA-OAEP-2048-wraps with the mana-ai public key, returns
{ wrappedKey, derivation, issuedAt, expiresAt }
- MissionGrantService refuses zero-knowledge users (409 ZK_ACTIVE) and
returns 503 GRANT_NOT_CONFIGURED when MANA_AI_PUBLIC_KEY_PEM is unset
- TTL clamped to [1h, 30d]
mana-ai:
- configureMissionGrantKey + unwrapMissionGrant with structured failure
reasons (not-configured / expired / malformed / wrap-rejected)
- mana_ai.decrypt_audit table + RLS policy scoped to
app.current_user_id — append-only row per server-side decrypt attempt
- MANA_AI_PRIVATE_KEY_PEM env slot; absent = grants silently disabled
No existing behaviour changes: missions without a grant run exactly as
before. Grant flow is wired end-to-end but unused until Phase 2 lands
the encrypted resolver.
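The derive/wrap/unwrap mechanics map directly onto node:crypto primitives. A sketch under the commit's stated parameters (HKDF-derived MDK, RSA-OAEP-2048 wrap); the salt/info strings here are placeholders, not mana-auth's actual derivation parameters:

```typescript
import { constants, generateKeyPairSync, hkdfSync, privateDecrypt, publicEncrypt } from 'node:crypto';

// HKDF-derive a 32-byte mission data key from the user master key.
// The info string is a placeholder for the real derivation context.
function deriveMissionDataKey(masterKey: Buffer, missionId: string): Buffer {
  return Buffer.from(hkdfSync('sha256', masterKey, Buffer.alloc(0), `mission-grant:${missionId}`, 32));
}

// Wrap the MDK with the mana-ai public key (RSA-OAEP, SHA-256).
function wrapKey(mdk: Buffer, recipientPublicKeyPem: string): Buffer {
  return publicEncrypt(
    { key: recipientPublicKeyPem, padding: constants.RSA_PKCS1_OAEP_PADDING, oaepHash: 'sha256' },
    mdk,
  );
}

// mana-ai side: unwrap with MANA_AI_PRIVATE_KEY_PEM.
function unwrapKey(wrapped: Buffer, privateKeyPem: string): Buffer {
  return privateDecrypt(
    { key: privateKeyPem, padding: constants.RSA_PKCS1_OAEP_PADDING, oaepHash: 'sha256' },
    wrapped,
  );
}
```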
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Wires mana-ai into the existing observability stack so tick throughput,
plan-failure rates, planner latencies, and snapshot refresh health are
visible in Grafana + Prometheus, and the service's uptime surfaces on
status.mana.how under the "Internal" section.
- `src/metrics.ts` — prom-client Registry with `mana_ai_` prefix.
Counters: ticks_total, plans_produced_total, plans_written_back_total,
parse_failures_total, mission_errors_total, snapshots_new/updated,
snapshot_rows_applied_total, http_requests_total.
Histograms: tick_duration_seconds (0.1–120s), planner_request_
duration_seconds (0.25–60s), http_request_duration_seconds (0.005–10s).
- `src/index.ts` — HTTP middleware labels every request by
method/path/status; `/metrics` serves the Prometheus text format.
- `src/cron/tick.ts` — increments counters + wraps the tick with
`tickDuration.startTimer()`. Snapshot stats fold through.
- `src/planner/client.ts` — wraps `complete()` in a latency histogram
timer so planner tail latency shows up separately from tick duration.
- `docker/prometheus/prometheus.yml` —
1. New `mana-ai` scrape job against `mana-ai:3066/metrics` (30s).
2. `/health` added to the `blackbox-internal` job so uptime shows on
status.mana.how alongside mana-geocoding.
- `scripts/generate-status-page.sh` — friendly label for the new probe:
`mana-ai:3066/health` → "Mana AI Runner" (generator already iterates
`blackbox-internal`, no other changes needed).
- `package.json` — prom-client ^15.1.3
All 17 Bun tests still pass; tsc clean.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Service now produces plans end-to-end for due missions. Takes the
shared prompt/parser from @mana/shared-ai, calls mana-llm's
OpenAI-compatible endpoint, parses + validates the response against a
server-side tool allow-list.
- `src/planner/tools.ts` — hardcoded subset of webapp tools where
policy === 'propose'. Mirror of `DEFAULT_AI_POLICY` in the webapp;
drift just means the server doesn't suggest newly-added tools
(graceful degradation). Contract test between the two lists is a
sensible follow-up.
- `src/cron/tick.ts`
- Iterates due missions, builds the shared Planner prompt per mission,
parses the LLM response, logs the resulting plan
- Per-mission try/catch so one flaky LLM response doesn't abort the
queue; stats now track `plansProduced` + `parseFailures`
- `serverMissionToSharedMission()` converts the projection shape to
the shared-ai Mission type at the boundary
- `resolvedInputs: []` today — the Planner sees concept + objective +
iteration history only. Full resolvers (notes/kontext/goals via
Postgres replay) land alongside write-back in the next PR.
- No write-back yet: the plan is logged but not persisted to
`sync_changes`. Write-back needs an RLS-scoped helper mirroring
mana-sync's `withUser` pattern — tracked explicitly as the remaining
open piece in CLAUDE.md.
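The per-mission isolation in the tick loop can be sketched like this, with `planMission` standing in for the prompt-build → mana-llm → parse pipeline (the real loop is async and logs the failure):

```typescript
interface TickStats { plansProduced: number; parseFailures: number }

// One flaky LLM response must not abort the queue: catch per mission,
// count the failure, keep iterating.
function runTick(missionIds: string[], planMission: (id: string) => void): TickStats {
  const stats: TickStats = { plansProduced: 0, parseFailures: 0 };
  for (const id of missionIds) {
    try {
      planMission(id);
      stats.plansProduced += 1;
    } catch {
      stats.parseFailures += 1; // logged in the real service; loop continues
    }
  }
  return stats;
}
```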
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Single source of truth for AI Workbench types shared between the webapp
(Vite/SvelteKit) and the server-side mana-ai Bun service. Prevents the
two runtimes from drifting on prompt shape or mission structure.
- `@mana/shared-ai` package:
- `actor.ts` — Actor union (user | ai | system) + helpers, mirrors the
webapp's runtime type so server-side consumers parse incoming actors
without re-declaring
- `missions/types.ts` — Mission, MissionCadence, MissionInputRef,
MissionIteration, PlanStep, MissionState. Adds optional
`iteration.source: 'browser' | 'server'` to distinguish foreground
vs server-produced iterations (groundwork for proposal write-back)
- `planner/prompt.ts` — `buildPlannerPrompt` pure function
- `planner/parser.ts` — `parsePlannerResponse` strict JSON validator
- Vitest smoke tests (2) cover prompt → parse round-trip + unknown-
tool rejection
- Webapp:
- `missions/types.ts` re-exports from shared-ai, keeps webapp-local
`MISSIONS_TABLE` constant + `planStepStatusFromProposal` bridge
- `missions/planner/{types,prompt,parser}.ts` become re-export stubs
so existing imports keep working unchanged
- Existing webapp tests (60) continue to pass — the wire code didn't
move, just its home
Next: mana-ai service imports buildPlannerPrompt/parsePlannerResponse
from shared-ai + wires mana-llm + writes iteration back as a
'source=server' row (tracked in services/mana-ai/CLAUDE.md).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The .mana backup parser in src/lib/data/backup/format.ts imports
inflateRaw from pako but the package was never declared. The
production Vite build fails to resolve it — pnpm let it through
locally only because some other workspace dep hoists pako into
node_modules.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Reserves port 3042 in PORT_SCHEMA.md, adds mail pgSchema to
setup-databases.sh and init-db scripts, installs mana-mail workspace
dependencies.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
PillNav overhaul:
- Dropdown-as-bar: theme/AI/sync/user menus render as horizontal
bars in the bottom stack (PillDropdownBar) instead of floating
popovers. New onOpenBar/activeBarId props on PillNavigation.
- iconOnly pills: tags/search/workbench-tabs pills show only icons.
Home pill removed. New iconOnly flag on PillNavItem.
- Segmented toggle groups: items sharing a `group` id render as a
single segmented pill (e.g. Light/Dark/System triple).
- Fullscreen mode: press "f" to hide all bottom chrome, Esc to exit.
- QuickInputBar + bottom bar visibility toggles via new pills.
- Progress ring on AI trigger pill during model download
(conic-gradient ::after, follows pill border-radius).
@mana/local-stt — new package for browser-local speech-to-text:
- Whisper models via transformers.js v4 (WebGPU + WASM fallback)
- Same Web Worker architecture as @mana/local-llm
- Two models: Whisper Tiny (150 MB) and Whisper Small (950 MB)
- Reactive Svelte 5 bindings (getLocalSttStatus, loadLocalStt, transcribe)
Voice-to-text integration:
- useLocalStt() composable: mic capture via AudioContext +
ScriptProcessor, resample to 16kHz mono, feed into Whisper worker
- Mic button in QuickInputBar (leftAction slot) with
recording/loading/transcribing states + pulse animation
- Transcribed text injected into InputBar via new injectedText prop
- STT model selector in AI bar alongside LLM tier controls
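The capture post-processing reduces to "downmix to mono, resample to 16 kHz". A naive nearest-sample sketch — the function name is hypothetical, and the real composable captures via AudioContext + ScriptProcessor first:

```typescript
// Downmix planar channel buffers to mono, then nearest-sample resample to
// the 16 kHz Float32 stream Whisper expects. Naive on purpose; a production
// resampler would low-pass filter first.
function toWhisperInput(channels: Float32Array[], inputRate: number, targetRate = 16000): Float32Array {
  const mono = new Float32Array(channels[0].length);
  for (const ch of channels) {
    for (let i = 0; i < ch.length; i++) mono[i] += ch[i] / channels.length;
  }
  if (inputRate === targetRate) return mono;
  const ratio = inputRate / targetRate;
  const out = new Float32Array(Math.floor(mono.length / ratio));
  for (let i = 0; i < out.length; i++) out[i] = mono[Math.floor(i * ratio)];
  return out;
}
```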
Also: vite.config.ts server.fs.allow expanded to monorepo root
so workspace package workers resolve in dev.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New mana-geocoding service (port 3018) wraps a self-hosted Pelias
instance with LRU caching and OSM→PlaceCategory auto-mapping.
All geocoding queries stay within our infrastructure — no user
location data leaves the network.
Places module integration:
- Address autocomplete search in ListView (creates place with
name, coords, address, category in one step)
- Address search + reverse geocoding button in DetailView
- Auto-fill address via reverse geocoding during tracking
- OSM category mapping (amenity:restaurant→food, shop:*→shopping, etc.)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Introduces packages/shared-types/src/ai-schemas.ts as the single source
of truth for the wire format between mana-api and the unified Mana app.
Two schemas:
- MealAnalysisSchema (foods, totalNutrition, description, confidence,
warnings, suggestions) — consumed by nutriphi /analysis/photo and
/analysis/text routes
- PlantIdentificationSchema (scientificName, commonNames, confidence,
health/watering/light advice, generalTips) — consumed by planta
/analysis/identify
Both schemas include .describe() annotations on every field. The Vercel
AI SDK passes these through to the model as part of the structured-output
prompt, which materially improves accuracy on Gemini Vision (the model
sees both the field name AND the German-language hint about what to put
there).
Schemas use plain .optional() rather than .nullable() because
generateObject() guides the model with strict schema adherence — it
won't emit JSON null for missing fields, just omit them.
Deps wired up:
- apps/api: + ai@6, + @ai-sdk/openai-compatible@2, + @mana/shared-types
- apps/mana/apps/web: + zod (for z.infer of the shared schemas)
- packages/shared-types: + zod (for the schema definitions themselves)
All three on zod ^3.23 to stay in lockstep with the existing
apps/api zod usage.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The package was a leftover from the per-product Planta backend that
got consolidated into apps/api a while back. Repo-wide grep for
@planta/shared returns zero matches — it had no consumers.
- Delete the four files (package.json, src/index.ts, src/types/index.ts,
tsconfig.json)
- Update apps/planta/CLAUDE.md to reflect the cleanup (the previous
note pointed at a refactoring audit doc that already tracked it)
- Refresh pnpm-lock.yaml so the workspace member is no longer pinned
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Until now, modules wanting to use the orchestrator had to await each
LLM call inline in their store code. That's fine for foreground tasks
("user clicked summarize") but a non-starter for background work
("auto-tag every new note", "generate a title for every voice memo
after STT finishes"). Background tasks need to:
- Queue up while no LLM tier is ready, then drain when one becomes
available (e.g. user just enabled the browser tier from settings)
- Survive page reloads, browser restarts, and the user navigating
away mid-execution
- Run one at a time without blocking the foreground UI
- Allow modules to subscribe to results reactively without polling
- Retry transient failures (network, model loading) but not
semantic ones (tier-too-low, content blocked)
Phase 4 ships exactly that.
Architecture:
packages/shared-llm/src/queue.ts — LlmTaskQueue class
+ QueuedTask interface (the persistent row shape)
+ EnqueueOptions (refType/refId/priority/maxAttempts)
+ TaskRegistry type (name → LlmTask map)
+ LlmTaskQueueOptions (table + orchestrator + registry +
retryBackoffMs + idleWakeupMs)
Public API:
- enqueue(task, input, opts) → string (returns the queued id)
- get(id), list(filter)
- retry(id), cancel(id), purge(olderThanMs)
- start(), stop() (idempotent processor lifecycle)
apps/mana/apps/web/src/lib/llm-queue.ts — web app singleton
- Dedicated `mana-llm-queue` Dexie database (separate from the
main `mana` IDB; see comment for the rationale: ephemeral
per-device state, no encryption needed, no sync needed, doesn't
belong in the long-frozen `mana` schema)
- Wires up the queue with llmOrchestrator + taskRegistry
- Exposes startLlmQueue() / stopLlmQueue() for the layout hook
apps/mana/apps/web/src/lib/llm-task-registry.ts
- Maps task names → task objects so the queue processor can
look up the implementation when pulling rows off the table.
Closures can't be persisted, so we round-trip via name.
- Currently registers extractDateTask + summarizeTextTask;
module-side tasks land here as we add them.
apps/mana/apps/web/src/routes/(app)/+layout.svelte
- startLlmQueue() in handleAuthReady's Phase A (auth-independent)
so guests + authenticated users both get the queue
- stopLlmQueue() in onDestroy as a fire-and-forget cleanup
Processor loop semantics (the heart of the implementation):
1. On start(), reclaim any 'running' rows from a crashed previous
session — reset them to 'pending'. The orphan recovery is the
reason a crash mid-task doesn't leave the queue stuck.
2. findNextRunnable() picks the highest-priority pending task whose
`notBefore` (retry-backoff timestamp) is in the past. Sort key:
priority desc, then enqueuedAt asc (FIFO within priority).
3. Mark the task running, increment attempts, look up the LlmTask
in the registry, hand it to orchestrator.run().
4. On success: mark done, store result + source + finishedAt.
5. On error:
- TierTooLowError or ProviderBlockedError → fail immediately,
no retry. These are not transient — the user's settings or
the content itself need to change.
- Anything else → if attempts < maxAttempts, reset to pending
with notBefore = now + retryBackoffMs (default 60s). Else
mark failed.
6. When no work is pending, sleep on a Promise that resolves when
either (a) someone calls enqueue() (which fires notifyWakeup),
or (b) idleWakeupMs elapses (default 30s, safety net for any
missed wakeup signal).
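The scheduling rule in step 2 can be sketched as a pure function over pending rows (the real implementation queries the Dexie table; field names follow the QueuedTask shape described above):

```typescript
// Simplified pending-row shape; notBefore is the retry-backoff timestamp.
interface PendingRow { id: string; priority: number; enqueuedAt: number; notBefore?: number }

// Highest priority first, FIFO within a priority, rows still backing off
// (notBefore in the future) skipped.
function findNextRunnable(rows: PendingRow[], now: number): PendingRow | undefined {
  return rows
    .filter((r) => (r.notBefore ?? 0) <= now)
    .sort((a, b) => b.priority - a.priority || a.enqueuedAt - b.enqueuedAt)[0];
}
```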
Module-side reactive reads use Dexie liveQuery directly on the queue
table — no special subscription API on the queue itself. This is
consistent with how every other Mana module reads its data, so the
mental model stays uniform:
const tags = useLiveQuery(
() => llmQueueDb.tasks
.where({ refType: 'note', refId, taskName: 'common.extractTags' })
.reverse().first(),
[refId]
);
Smoke test: a new "Queue" tab in /llm-test lets you enqueue the
existing extractDate / summarize tasks and watch the live state of
the queue table via liveQuery. The display includes per-row state
badge (pending/running/done/failed), tier source, attempt count,
input/output, and a "Done/failed löschen" button that exercises
purge().
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adding an app to a workbench scene threw DataCloneError. scenesState
is a $state array, so current.openApps was a Svelte 5 proxy and
spreading it into a new array left proxy entries inside; IndexedDB's
structured clone refuses to serialise those. Snapshot before handing
the array to patchScene / createScene so Dexie sees plain objects.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace the entire @mana/local-llm engine with a transformers.js-based
implementation backed by Google's Gemma 4 E2B (released 2026-04-02).
The external API of LocalLLMEngine — load(), generate(), prompt(),
extractJson(), classify(), onStatusChange(), isSupported() — is
preserved 1:1, so the /llm-test page, the playground module, and the
Svelte 5 reactive bindings in svelte.svelte.ts need no changes
beyond updating the default model key.
Why the engine swap: as of today, MLC still hasn't
published Gemma 4 builds for WebLLM. The webml-community team and
HuggingFace's onnx-community already have Gemma 4 E2B running in
the browser via transformers.js + WebGPU, with a documented
Gemma4ForConditionalGeneration class shipped in @huggingface/transformers
v4.0.0. Going through the ONNX route gets us the latest Google model
six days after release instead of waiting on MLC compilation.
Trade-offs accepted (discussed before this commit):
- transformers.js is a more generic ONNX runtime, so per-token
throughput will be ~20-40% lower than WebLLM would deliver for the
same model size. For a 2B model on a modern WebGPU device that's
still well above interactive latency.
- The JS bundle gains ~2-3 MB (the ONNX runtime). Negligible compared
to the 500 MB model download.
- transformers.js v4 is brand new (released alongside Gemma 4) so the
Gemma4ForConditionalGeneration code path has very little battle
testing yet. The risk is partially offset by webml-community's
reference implementation.
What changed file by file:
- packages/local-llm/package.json: drop @mlc-ai/web-llm, add
@huggingface/transformers ^4.0.0; bump version 0.1.0 → 0.2.0; rewrite
description.
- packages/local-llm/src/types.ts: add `dtype` field to ModelConfig
('fp32' | 'fp16' | 'q8' | 'q4' | 'q4f16') so each model can request
the quantization that matches its uploaded ONNX shards.
- packages/local-llm/src/models.ts: replace the old Qwen 2.5 + Gemma 2
registry with a single `gemma-4-e2b` entry pointing at
onnx-community/gemma-4-E2B-it-ONNX with q4f16 quantization. Future
models can be added by appending entries — the /llm-test picker
reads MODELS dynamically and picks them up automatically.
- packages/local-llm/src/cache.ts: replace the WebLLM-specific
hasModelInCache helper with a generic Cache API probe that looks for
`https://huggingface.co/{model_id}/resolve/main/tokenizer.json` in
any open cache. tokenizer.json is small, downloaded first, and
always present, so its presence is a reliable proxy for "model has
been loaded before".
- packages/local-llm/src/engine.ts: full rewrite. Internally we now
hold a transformers.js model + processor pair (created via
AutoProcessor.from_pretrained + Gemma4ForConditionalGeneration.from_pretrained
with `device: 'webgpu'`), and translate our LoadingStatus union from
the library's `progress_callback` shape. generate() applies Gemma's
chat template via the processor, runs model.generate() with optional
TextStreamer for streaming, then slices the prompt tokens off the
output tensor to compute per-call usage. The convenience methods
(prompt, extractJson, classify) are unchanged because they only call
generate() under the hood.
- packages/local-llm/src/generate.ts and status.svelte.ts: deleted.
These were orphaned from a much earlier engine API (referenced
`getEngine()` / `subscribe()` / `LlmState` symbols that haven't
existed for a while) and were never re-exported from index.ts —
they only showed up because `tsc --noEmit` was crawling the src
tree. Their functionality lives in engine.ts + svelte.svelte.ts now.
- apps/mana/apps/web/package.json: swap the direct dep from
@mlc-ai/web-llm to @huggingface/transformers. This is the same
trick we used for the previous adapter-node externals warning —
having it as a direct dep makes adapter-node's Rollup pass treat
it as external automatically.
- apps/mana/apps/web/vite.config.ts: swap ssr.external entry from
@mlc-ai/web-llm to @huggingface/transformers. Add a comment
explaining the why so the next person doesn't wonder.
- apps/mana/apps/web/src/routes/(app)/llm-test/+page.svelte: change
the default selectedModel from 'qwen-2.5-1.5b' to 'gemma-4-e2b'.
All other model display strings come from the MODELS registry, so
this is the single hard-coded reference that needed updating.
- pnpm-lock.yaml: regenerated. Confirmed @mlc-ai/web-llm is gone (0
references) and @huggingface/transformers is in (4 references).
CSP: no header changes needed. We already opened connect-src for
huggingface.co + cdn-lfs.huggingface.co + raw.githubusercontent.com
when fixing the WebLLM blockers earlier today, and 'wasm-unsafe-eval'
is already in script-src — both transformers.js (ONNX runtime) and
WebLLM (MLC runtime) need that. If transformers.js moves its
inference into a Web Worker created from a blob URL, we may need to
add `worker-src 'self' blob:` once we hit the first runtime test, but
the existing CSP should be enough for the synchronous path.
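For reference, the CSP state described above looks roughly like this (directive values reconstructed from the prose, not copied from the deployed config; other directives omitted):

```
Content-Security-Policy:
  script-src 'self' 'wasm-unsafe-eval';
  connect-src 'self' https://huggingface.co https://cdn-lfs.huggingface.co
              https://raw.githubusercontent.com;
```

If the blob-URL worker case materializes, `worker-src 'self' blob:` would be appended as a further directive.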
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two unrelated bugs in scripts/mac-mini/ensure-containers-running.sh,
both caught while debugging a mana-auth crash loop on 2026-04-08:
1. The recovery path passed --env-file "$PROJECT_ROOT/.env.macmini" to
docker compose, but that file has never existed on the server — only
.env does, and compose auto-loads it from the working directory. The
explicit --env-file silently caused recovered containers to start with
empty secrets (e.g. blank MANA_AUTH_KEK), which made mana-auth crash
the moment it came back up. The auto-recovery loop was therefore
self-defeating: it kept "fixing" auth into the same broken state
every 5 minutes for hours, with no notification because compose
exited 0. Drop --env-file entirely and cd into PROJECT_ROOT so
compose's standard .env discovery applies.
2. mana-infra-minio-init is a one-shot job container that legitimately
sits in "exited" state after running once. The script flagged it as
"stuck" every cycle, tried to "recover" it, and spammed the log with
ERROR lines. Add an explicit ONESHOT_INIT_CONTAINERS allowlist and
skip those names in both the initial scan and the post-recovery
verification.
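The allowlist logic from fix 2 can be sketched as follows (`ONESHOT_INIT_CONTAINERS` and the container name are from this message; `is_oneshot` is an illustrative helper, not necessarily the script's actual function name):

```shell
#!/usr/bin/env bash
# Containers that legitimately sit in "exited" state after a one-shot
# init run and must never be flagged as stuck.
ONESHOT_INIT_CONTAINERS=("mana-infra-minio-init")

# Return 0 (true) if the container name is on the one-shot allowlist.
is_oneshot() {
  local name="$1" c
  for c in "${ONESHOT_INIT_CONTAINERS[@]}"; do
    [[ "$c" == "$name" ]] && return 0
  done
  return 1
}

# In both the initial scan and the post-recovery verification:
#   is_oneshot "$container" && continue
```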
Also tee compose output into the log so future failures actually leave
a breadcrumb instead of disappearing into the void.
Also: bump @mlc-ai/web-llm from a transitive dep (via @mana/local-llm)
to a direct dep of @mana/web. SvelteKit's adapter-node post-build
Rollup pass uses the web app's direct deps as its externals heuristic;
without this entry it warns "@mlc-ai/web-llm ... could not be resolved
- treating it as an external dependency" on every build. The warning is
functionally harmless (the dynamic import in LocalLLMEngine only fires
in the browser), but its noise would have masked a real adapter-node
misconfiguration if we'd ever tried to SSR /llm-test.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Reflects the removal of apps/matrix and services/mana-matrix-bot from
the workspace plus the dropped @matrix-org/matrix-sdk-crypto-nodejs
override in package.json. Net -365 lines.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>