Phase 2f-2. RSS/Atom ingester (15-min tick → mana_platform.news.curated_articles)
moved to the GPU-Box. The service has zero hot-path coupling; all writes go
cross-LAN to the Mini's Postgres, analogous to the Glitchtip pattern.
Two implementation gotchas worth recording:
1. Cross-arch image transfer doesn't work. Saved news-ingester:local
from the Mini (Apple M4 → linux/arm64), tried `docker load` on the
GPU-Box (linux/amd64), and got 'exec format error' on every restart.
A native build on the GPU-Box was the only path forward.
2. The original services/news-ingester/Dockerfile assumes
pnpm-workspace state from prior builds (no COPY for packages/shared-rss
in the build context). Fresh builds error with
ERR_PNPM_WORKSPACE_PKG_NOT_FOUND.
Workaround: a GPU-Box-specific Dockerfile at infrastructure/news-ingester/
that vendors shared-rss into the build via a `workspace:*` → `file:`
reference sed swap. Build context is the repo root (the sparse clone
provides packages/shared-rss + services/news-ingester). The Mini-side
Dockerfile stays untouched so existing CD builds aren't disturbed.
Mini-side: container stopped + removed from docker-compose.macmini.yml,
running container count 44 → 39.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Wires cards-server into the Mac-mini stack so we can deploy alongside
the rest of the Mana services.
- Dockerfile mirrors the mana-credits 2-stage pattern (node+pnpm
installer → bun runtime), exposes :3072, includes a /health
healthcheck.
- docker-compose.macmini.yml: new cards-server block right after
mana-credits — depends on postgres + mana-auth, 128m mem, all the
env knobs from the Phase-α config (author payout BPS, community-
verified thresholds, sibling-service URLs).
- cloudflared-config.yml: cards-api.mana.how → :3072. Distinct from
cards.mana.how (the user-facing PWA) so the API surface is clearly
separated.
- sso-origins.ts: cards-api.mana.how added to PRODUCTION_TRUSTED_ORIGINS.
- mana-auth CORS_ORIGINS in compose: cards-api.mana.how added.
Also restored whopxl.mana.how, which had drifted out — sso-config.spec.ts
had been flagging it, and the missing entry surfaced when I added
cards-api. The spec is back to 8/8 green.
Deploy plan (next steps, not in this commit):
1. ./scripts/mac-mini/build-app.sh cards-server
2. docker exec mana-app-cards-server bun run db:push (creates the
`cards` schema + 16 tables in mana_platform)
3. ./scripts/mac-mini/sync-tunnel-config.sh
4. Smoke: curl https://cards-api.mana.how/health → 200
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Phase 2f-1 cutover. npm.mana.how DNS now CNAMEs to the mana-gpu-server
tunnel (config v27); the Mini-side route entry is no longer needed.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Phase 2f-1: verdaccio (npm.mana.how) was the heaviest non-hot-path
service still left on the Mini after Phase 2 — a read-mostly registry
that CI/local pnpm installs hit, and latency-insensitive. Moved into
infrastructure/docker-compose.gpu-box.yml. Storage volume content
(@mana/* packages + htpasswd) migrated via tar-stream.
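The tar-stream migration is the usual volume-to-volume pipe. A minimal local demo of the pattern (volume and host names in the comment are illustrative):

```shell
# Local demo of the tar-stream pattern: pack one tree, pipe, unpack into
# another. Cross-host it is the same pipe with docker + ssh on each end,
# roughly:
#   docker run --rm -v verdaccio_storage:/d alpine tar -C /d -cf - . \
#     | ssh gpu-box 'docker run --rm -i -v verdaccio_storage:/d alpine tar -C /d -xf -'
mkdir -p /tmp/src /tmp/dst
echo 'htpasswd-contents' > /tmp/src/htpasswd
tar -C /tmp/src -cf - . | tar -C /tmp/dst -xf -
ls /tmp/dst
```

The pipe never materializes an archive on disk, which is why it works for multi-GB package stores.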
Config came from the mana-platform repo's
infrastructure/verdaccio/config.yaml. Copied into mana-monorepo so the
GPU-Box's sparse-clone (already pulling scripts/ +
packages/shared-branding) can also bind-mount it without needing a
second repo on the box.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
photon was the last 'health: none' container on the GPU-Box —
pre-existing user setup created via raw docker-run before Phase 2.
Adopted into infrastructure/docker-compose.gpu-box.yml with the
exact same image / volumes / cmd / port mapping so the OSM index in
/opt/photon-data survives untouched, plus a curl-based healthcheck
against /api?q=Berlin&limit=1 (Photon has no /health endpoint —
this is the canonical liveness probe).
start_period 120s gives Java the warmup window without false-flagging.
Recreate took ~10s including healthy state, no perceptible downtime
on photon.mana.how.
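The adopted healthcheck corresponds to a compose fragment along these lines (port 2322 and start_period 120s per the description; interval/timeout/retries values are assumptions, not taken from the live file):

```yaml
healthcheck:
  test: ["CMD", "curl", "-fsS", "http://localhost:2322/api?q=Berlin&limit=1"]
  interval: 30s
  timeout: 10s
  retries: 3
  start_period: 120s
```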
After this, all 20 GPU-Box containers report healthy. Mac Mini still
has 2 long-standing 'unhealthy' (mana-verdaccio's wget probe is
broken but npm.mana.how serves 200; mana-mail/Stalwart in bootstrap
mode, never configured) — both pre-existing, neither user-impacting.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Three containers were running with no healthcheck — Docker showed them
as 'none', so an actual crash inside the container would only surface
once the process itself exited (and got restarted by restart-policy).
Added container-internal probes that don't depend on tools the image
doesn't ship:
- glitchtip-worker: bash + /dev/tcp/glitchtip-redis/6379 — confirms the
Celery broker is reachable. Bare-metal probe, no extra deps.
- gpu-promtail: bash + /dev/tcp/loki/3100 — confirms the loki sink the
worker is shipping to is reachable. Replaces the wget-based check
that errored 'executable file not found' on every tick.
- status-page-gen: stat + date — confirms /output/status.json was
rewritten in the last 3 min (script writes it every 60s). Catches
the case where the apk-install loop wedges or the generator
silently dies.
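The status-page-gen freshness probe can be approximated as follows (the 180 s threshold comes from the description above; /tmp/status.json stands in for /output/status.json, and the probe wording is a sketch):

```shell
# Freshness probe sketch: fail if the target file is older than 180 s.
f=/tmp/status.json
touch "$f"                        # stand-in for the generator's write
now=$(date +%s)
mtime=$(stat -c %Y "$f")          # GNU/busybox stat; macOS would use stat -f %m
age=$(( now - mtime ))
[ "$age" -lt 180 ] && echo fresh || echo stale
# prints: fresh
```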
CMD-SHELL runs via /bin/sh, which is dash on Debian-based images, and
dash doesn't support /dev/tcp — so the two TCP probes use the CMD form
with an explicit bash invocation.
photon stays without a healthcheck — pre-existing user container, not
in this compose file. Adding it would require a recreate which loses
the warm OSM cache.
After rollout: 17/20 GPU-Box containers healthy + 3 'none' (status-nginx,
glitchtip-redis, gpu-node-exporter — all standard upstream images
without built-in /health endpoints; their service is checked indirectly
via downstream consumers' healthchecks).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Promtail v3.0.0 ships a minimal alpine-ish image with only the
promtail binary. The original Mini compose's wget-based healthcheck
errored out with 'executable file not found' on every tick, marking
the container as 'unhealthy' for hours despite Loki actively
receiving logs from it. Restart-policy unless-stopped catches real
crashes anyway, so the healthcheck adds noise without value.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
mana-sync's billing middleware short-circuited every push/pull with
402 for users without a sync subscription. Cards promises free Sync
in its Phase-1 GUIDELINES, so it shouldn't gate its own users on a
mana-credits subscription it never sells.
Implementation:
• billing.NewChecker now takes an exemptApps slice. The middleware
extracts {appId} from the URL path and short-circuits before the
user lookup if the app is in the set.
• Configurable via the BILLING_EXEMPT_APPS env var (comma-separated).
• Set BILLING_EXEMPT_APPS=cards on the mana-sync container so the
cards.mana.how Sync loop stops 402-ing.
• Tests cover the exemption + the empty/whitespace edge cases. All
other apps keep the original behaviour (fail-open if mana-credits
is unreachable, 402 if it explicitly says inactive).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Phase 2c had 3 cross-LAN-routing pain points; Phase 2e + the photon
fix solved 2 of them, so the doc was misleading. Refactored the
"Bekannte Limits" block in PLAN_OPTION_C.md into a proper
cross-LAN-pattern table that lists each known case + its current
status. Phase-2c-original gpu-* and Mini-Promtail entries kept as
the remaining open items, with the same Cloudflare-Tunnel-as-LAN-bridge
workaround spelled out (Loki-HTTP-Push via loki.mana.how would be the
next obvious move).
Plus infrastructure/README.md now lists every active public-hostname
the mana-gpu-server tunnel exposes (v26).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two 404s reported from prod:
1) sql.js's production-bundled loader requested /sql-wasm-browser.wasm
but only sql-wasm.wasm was in static/. The browser bundle is its own
file; copy both so the loader hits whichever variant the runtime
picks. Without this the .apkg import dies before reading SQLite.
2) shared-pwa's manifest references pwa-192x192.png, pwa-512x512.png
and apple-touch-icon.png. None existed → Chrome's manifest-icon
validator failed and there was no usable icon for A2HS. Generated
minimal indigo-stacked-cards PNGs at the three required sizes.
Note: the sync 402 reports in the same console output are a separate,
intentional behaviour — mana-sync's billing middleware blocks pull/
push when the user has no active sync subscription. No code change
needed; handled at the mana-credits layer.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two cleanups against the status-page DOWN list:
photon-self (photon.mana.how route):
mana-geocoding's /health/photon-self pings the photon backend, which
lives as a Docker container on the GPU-Box (port 2322). PHOTON_SELF_API_URL
was http://192.168.178.11:2322 — the Mini host can hit that fine, but
Mini Docker containers can't (a Colima NAT quirk we keep running into).
Routed photon through the mana-gpu-server tunnel (config v26) and
flipped the env var to https://photon.mana.how. Probe goes UP, geocoding
for sensitive queries (privacy:'local' provider tier) actually works
now too — was effectively orphaned before.
whopxl removed everywhere it still lingered:
Container hasn't existed on the Mini in months (no compose service,
no source dir under apps/, no listener on :5100 — only the dead
cloudflared route + a stale CORS_ORIGINS entry on mana-auth). Cleaned
cloudflared-config.yml, prometheus.yml blackbox-web target, and the
mana-auth CORS list. Old DNS CNAME for whopxl.mana.how stays for now;
no harm.
Plus, while we were here: who-api.mana.how/api/decks is the right probe
for who-server's deck catalogue (the /api/decks route lives on who-api,
not on who.mana.how, which is the SSR shell).
Live: status.mana.how shows 58/59 UP; the last 'whopxl' entry will
fall off after VM's TSDB rolls past the probe_success staleness window.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the gap from the first Anki-import pass: media files are now
uploaded alongside the cards instead of stripped.
Pipeline:
• parse.ts: read the .apkg's `media` JSON manifest, build a
filename → ZIP-entry map (Anki names files numerically; the
manifest is the original-name lookup table). Returned alongside
decks/cards as parsed.mediaByFilename.
• import.ts: collectMediaRefs() walks every card field, gathers
distinct <img src=…> and [sound:…] references — orphan media
bundled in the .apkg are ignored. Referenced files upload to
mana-media in 4 parallel workers, returning a filename → URL map.
• parse.sanitizeAnkiHtml() now takes that map: <img src="X"> →
<img src="<url>" alt="" />, [sound:Y] → <audio controls
preload="metadata" src="<url>"/>. The remaining-tag stripper has
a negative lookahead for img/audio/video/source so the new tags
survive.
• CardFace already renders <img>/<audio> via @mana/cards-core's
DOMPurify config (the image/audio attachments commit added the
allowlist), so the freshly-imported cards just work in the
learn session.
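The [sound:…] → `<audio>` rewrite can be approximated with a single fixed URL standing in for the filename → URL map (the real code is TypeScript in parse.sanitizeAnkiHtml; the URL here is illustrative):

```shell
# Map Anki's [sound:filename] markup to an <audio> tag, substituting the
# uploaded-media URL. One fixed URL here; the real code looks the
# filename up in the upload result map.
printf 'Front text [sound:voice.mp3]' \
  | sed -E 's|\[sound:([^]]+)\]|<audio controls preload="metadata" src="https://media.mana.how/demo/\1"></audio>|'
```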
UI:
• AnkiImport gains an "uploading-media" stage with X / N progress
bar between preview and card creation.
• Preview now shows the media count, copy promise updated from
"Bilder/Audio bleiben raus" to "Bilder + Audio werden mit
übernommen".
• Result block reports `N Medien übernommen · M fehlgeschlagen`.
Phase-2 ideas: per-user media scoping in mana-media; verify-then-
upload via /media/hash/:sha256 to skip duplicates from re-imports.
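The verify-then-upload idea sketched in shell (/media/hash/:sha256 and the upload endpoint are the ones named in these commits; the curl calls are shown as comments since there is no live service in this sketch):

```shell
# Compute the content hash first; only upload when the CAS doesn't
# already know it.
f=/tmp/media-demo.png
printf 'fake-png-bytes' > "$f"    # stand-in media file
sha=$(sha256sum "$f" | cut -d' ' -f1)
echo "$sha"
# if ! curl -fsS "https://media.mana.how/media/hash/$sha" >/dev/null; then
#   curl -fsS -F "file=@$f" -F app=cards \
#     https://media.mana.how/api/v1/media/upload
# fi
```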
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Audit revealed status.mana.how was probing only the unified mana-app
path-routes (mana.how/{module}) plus a couple of GPU services. None
of the standalone deployments were monitored, and three probe targets
were stale.
Changes:
- prometheus.yml blackbox-web: drop mana.how/{context,who} (context
module was dropped 2026-04-29; mana.how/who never existed —
/who is a standalone stack on its own subdomain). Add the eight
hosts that DO have separate deployments today: whopxl, manavoxel,
memoro (landing), cards (Phase-1 spinoff), who.mana.how/cantina,
npm (Verdaccio).
- prometheus.yml blackbox-api: add memoro-api/health,
memoro-audio/health, who-api.mana.how/api/decks,
admin.mana.how/health (admin's root is auth-walled, only /health
returns 200).
- prometheus.yml blackbox-gpu: add gpu-llm.mana.how/health (was
missing; gpu-stt/tts/img/video were in, gpu-llm was somehow not).
- cloudflared-config.yml: restore who.mana.how → :5092 +
who-api.mana.how → :3092. The DNS CNAME points at the Mini tunnel
but the route entries had been lost during a previous compose
cleanup, so every who.* request was hitting the catch-all 404 and
the standalone Bun stack was effectively orphaned at the edge
(PM2 + LaunchAgent all healthy on Mini, just no public route).
Live state after rollout: status.mana.how shows 57/59 services UP,
the two remaining DOWN are pre-existing — photon-self (Phase-2c
cross-LAN routing limitation, documented in PLAN_OPTION_C.md) and
whopxl-web (container not running on the Mini, separate issue).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Cards can now carry image, audio, and video attachments uploaded to
mana-media (the existing CAS service that already powers picture,
photos, wardrobe, etc.).
Pipeline:
• lib/media/upload.ts wraps POST /api/v1/media/upload (multipart,
app=cards). Returns { id, url, kind } with the right variant URL
per kind (medium for images, full file for audio/video). 25 MB
cap matches the website-upload pattern.
• mediaToFieldSnippet(): emits Markdown ![]() for images and raw
<audio>/<video controls> tags for the others — the user can later
tweak attributes by hand.
• Deck-detail card editor gains a "📎 Anhang" button next to every
text field (front/back/cloze). Pick → upload → snippet appended
to the field's content. Loading + error states surfaced inline.
Render:
• @mana/cards-core/render.ts whitelists `audio`, `source`, `video`
plus the `controls`/`preload`/`src`/`type` attrs in DOMPurify so
inline media survives sanitization. Markdown's <img> already
passed through the default policy.
Wiring:
• hooks.server.ts injects __PUBLIC_MANA_MEDIA_URL__.
• compose adds PUBLIC_MANA_MEDIA_URL_CLIENT=https://media.mana.how
to cards-web.
Phase 2 ideas: drag-drop directly into the textarea, paste-from-
clipboard for screenshots, mana-media auth scoping per user, Anki
import bringing media files along.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
PDF input:
• lib/ai/pdf.ts wraps pdfjs-dist (Apache-2.0). Worker is bound via
Vite's `?worker` suffix so the heavy parsing runs off-main-thread.
• AiCardGen gains a "📄 PDF laden" button that pipes extracted text
into the same textarea — the user can review/trim before
generation. Reading state shows file name + page count + chars.
Heatmap:
• queries.useStudyHeatmap(weeks=12) fills gaps with count=0 so the
grid renders without holes.
• StudyHeatmap.svelte: 7 rows × N columns (Monday-anchored), 5
intensity buckets (neutral → emerald-300), tooltip per cell with
date + count, legend strip.
• Mounted on the dashboard between the deck list and the Anki import
so the user lands on a quick visual progress receipt every visit.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The GPU-Box stack has been carrying real production workload since
Phase 2c (monitoring) but only existed as a /srv/mana/docker-compose.gpu-box.yml
on the box itself. If the WSL filesystem dies, none of it is
reproducible. Bring the file into infrastructure/ as the source of
truth (the live file on the box must be kept in sync; manual rsync
for now, since there's no CD into the GPU-Box).
Plus:
- infrastructure/.env.gpu-box.example as the secrets template
- infrastructure/README.md describing what runs there + how the
Cloudflare-tunnel ingress is API-managed (not config.yml)
- .gitignore for the live infrastructure/.env.gpu-box copy
- MAC_MINI_SERVER.md status-page section now points at the GPU-Box
setup instead of the long-stopped Mini container
- PLAN_OPTION_C.md: Phase 2e row + GPU-Box service tree update
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
PWA:
• SvelteKitPWA in vite.config via @mana/shared-pwa preset (name +
theme color). Layout injects pwaInfo.webManifest.linkTag so
Chrome's manifest pickup works → install icon + A2HS.
• Service worker registers automatically (workbox auto-update); the
cards-already-cached path now keeps working offline as long as the
user has visited a deck once.
AI generation:
• lib/ai/generate.ts — direct fetch to mana-llm /v1/chat/completions
(OpenAI-compatible, mirrors playground module). System prompt
constrains the model to a JSON array of {front, back}. Code-fence
stripping handles models that wrap JSON in ```json blocks despite
the prompt.
• AiCardGen.svelte — text in, list of generated cards out, per-card
checkbox preview, "X übernehmen" creates them via cardStore.
Phase-1 lands them as basic cards.
• Mounted on the deck-detail page next to "Neue Karte".
Wiring:
• hooks.server.ts injects __PUBLIC_MANA_LLM_URL__.
• compose adds PUBLIC_MANA_LLM_URL_CLIENT=https://llm.mana.how to the
cards-web service.
• app.d.ts gets vite-plugin-pwa virtual-module references so
svelte-check can resolve `virtual:pwa-info`.
Phase 2: PDF/image input, mana-credits gating, model selector,
streaming preview as cards arrive.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Phase 2e cleanup. status-page-gen + a dedicated nginx now run on the
GPU-Box (sparse repo clone provides the generator script + mana-apps.ts,
hourly git-pull via systemd timer). Container queries VictoriaMetrics
locally over the docker network ('http://victoriametrics:9090'); no
public vm.mana.how endpoint is required — that hostname is also gone
from the GPU tunnel config (v25 → v26 effectively, removed in the same
PUT that added status.mana.how).
DNS for status.mana.how now points at the mana-gpu-server tunnel.
Mini-tunnel ingress for it is removed; the previous 'mana-status-gen'
container on the Mini was stopped + rm'd.
Side benefit: closes the stale-inode bind-mount bug that took
status.mana.how down for a few hours — single-file bind mounts on the
Mini break whenever the CD git checkout rewrites the source file. The
GPU-Box mounts the same files, but the systemd timer git-pulls in
place, preserving the inode.
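The inode behaviour behind that bug is easy to demonstrate — a single-file bind mount tracks an inode, not a path, so an in-place write keeps the mount live while a replace-by-rename (what a checkout typically does) leaves it pointing at the old, orphaned inode:

```shell
f=/tmp/mounted.conf
echo v1 > "$f"
i1=$(stat -c %i "$f")
echo v2 > "$f"                          # in-place rewrite: same inode
i2=$(stat -c %i "$f")
echo v3 > "$f.new" && mv "$f.new" "$f"  # replace-by-rename: new inode
i3=$(stat -c %i "$f")
[ "$i1" = "$i2" ] && echo "in-place kept inode"
[ "$i1" != "$i3" ] && echo "rename changed inode"
```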
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Anki users have decks they hate the UI for but can't migrate. This
gives them a one-drop path: drop a .apkg on the homepage, see a
preview, confirm, the cards land in our DB and start syncing.
Pipeline (lib/anki/):
• parse.ts — JSZip → sql.js (WASM SQLite) → walk Anki's three core
tables (col, notes, cards). Models (col.models JSON) classify each
note: type=0 → basic / basic-reverse, type=1 → cloze. Anki cards
table has one row per generated learnable unit (basic-reverse = 2,
cloze = N) — we dedupe at the note level since our model
regenerates those automatically via reviewStore.ensureReviewsForCard.
• import.ts — every Anki deck becomes one of ours (1:1, "::" → " / ");
fields go through sanitizeAnkiHtml (drops <img>, [sound:], maps
<b>/<i> to Markdown). Orphans land in a fallback "Anki-Import"
deck.
UI: AnkiImport.svelte on the decks list — drag-drop or click,
parse → preview ("X Decks, Y Karten"), confirm → import. No images,
no audio, no review history (cards are FSRS-new on import) — those
are Phase 2.
Deps: sql.js 1.14, jszip 3.10, @types/sql.js. WASM blob copied into
static/ so SvelteKit serves it at /sql-wasm.wasm.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
After Phase 2c, VM moved off the Mini, but the status-page generator
still queried localhost:9090 — and Colima containers can't reach the
GPU-Box's LAN IP through the Mini's bridge. Result: status.mana.how
showed 0/0 services UP across the board.
Routed VM through a new vm.mana.how Public Hostname on the
mana-gpu-server tunnel (config v24) so the Mini-side container reaches
it the same way browsers do. /api/v1/query path is identical, no
script changes required. Network-mode no longer needs to be host now
that the URL is public.
Verified live: status.json reports 49/53 services UP.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two small UI surfaces over data the backend already computes:
• Header shows current streak (🔥 N) — useStreak() walks back through
cardStudyBlocks until it finds a gap.
• Decks list shows a "fällig"-pill per deck and a total in the header
subline — useDueCountByDeck() joins cardReviews→cards once and
groups by deckId.
Both queries live in lib/queries.ts and use Dexie liveQuery, so the
header refreshes automatically the moment a learn session ticks the
study block forward.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
53→45 running, 9.6 GiB nominal → ~6 GiB nominal, build-app.sh's
monitoring-stop trigger no longer fires. Cross-link to PLAN_OPTION_C.md
for the GPU-box-side picture (grafana/git/stats/glitchtip live there
via the mana-gpu-server tunnel).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Phase 2c+2d cleanup. The 14 services that moved to the GPU-Box stack
(grafana, victoriametrics, loki, tempo, promtail, alertmanager,
vmalert, pushgateway, blackbox-exporter, alert-notifier, umami,
glitchtip + worker, forgejo) are now stopped on the Mini and stable
on the GPU box, so the rollback insurance can come out:
- docker-compose.macmini.yml: drop 14 service blocks (-369 lines) +
the now-orphan named volumes (victoriametrics_data, loki_data,
alertmanager_data, grafana_data, tempo_data).
- cloudflared-config.yml: drop the four hostnames whose DNS already
points at the mana-gpu-server tunnel — Mini-tunnel ingress for them
has been dead routing since 2026-05-06, removing the rules just makes
the file match reality. The hostnames now live in the GPU tunnel's
dashboard config (token-managed).
Containers + volumes stay on the Mini for now; running
`docker compose -f docker-compose.macmini.yml --env-file .env.macmini up -d --remove-orphans`
on the box drops them in one go when ready.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two real-world fixes from wiring mana-auth to Glitchtip:
1. The compiled dist/ folder was excluded from Docker builds via
.dockerignore's '**/dist' rule, so any container that pnpm-installed
the package found node_modules/@mana/shared-error-tracking but no
loadable entry point ('Cannot find module' at startup). Match the
pattern shared-hono uses — point main + types + exports straight at
src/*.ts. Bun runs TS natively and the type-only consumers don't
care.
2. Glitchtip projects expose UUID-format public keys (`556fbd2e-a720-…`)
in their generated DSNs. @sentry/node v9 tightened its DSN regex to
alphanumeric-only, so it silently rejects the DSN with "Invalid
Sentry Dsn" and never sends events. Strip the dashes from the
user/key portion before handing it to Sentry — the Glitchtip ingest
endpoint accepts both forms over the wire, so no server change.
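The key normalization, sketched in shell (the actual fix lives in shared-error-tracking's TypeScript; the DSN value is a made-up example in the UUID format Glitchtip generates):

```shell
# Strip dashes from the public-key (user) portion of a Glitchtip DSN so
# @sentry/node v9's alphanumeric-only DSN regex accepts it.
dsn='https://556fbd2e-a720-4f6e-9d01-aabbccddeeff@glitchtip.example.com/4'
proto="${dsn%%//*}"                 # "https:"
rest="${dsn#*//}"                   # "key@host/project"
key="${rest%%@*}"
hostpath="${rest#*@}"
clean_key=$(printf '%s' "$key" | tr -d '-')
echo "${proto}//${clean_key}@${hostpath}"
```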
Plus the missing Dockerfile COPY lines for shared-error-tracking and
eslint-config (root package.json devDeps reference the latter, which
breaks pnpm-filter installs that don't include it in the build context).
Verified end-to-end: 4 issues now in Glitchtip from mana-auth
(2 manual probes + 1 captureException + 1 401 from a
real /api/v1/me/data request without auth).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Mirror the same fix as cards-core (dd2e60954): the new
shared-error-tracking workspace dep needs an explicit COPY line
in the installer stage, otherwise pnpm-install-with-filter can't
find the package and the runtime container is missing it under
node_modules/@mana/.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
createManaAuthStore from @mana/shared-auth-ui reads the auth backend
URL from window.__PUBLIC_MANA_AUTH_URL__ at runtime. Without the
injection it falls back to a relative URL, so signIn POSTs land at
cards.mana.how/api/v1/auth/login (SvelteKit 404, HTML body) instead
of auth.mana.how/api/v1/auth/login.
Adds a hooks.server.ts modeled after the mana-web one, but trimmed
to the two URLs the standalone app actually consumes today (auth +
sync). The values come from PUBLIC_MANA_AUTH_URL_CLIENT and
PUBLIC_MANA_SYNC_URL_CLIENT in docker-compose.macmini.yml.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
First server-side error-tracking integration. Pattern mirrors the
client-side one in apps/mana/apps/web/src/hooks.client.ts:
- pull @mana/shared-error-tracking into mana-auth's deps (workspace pkg
with @sentry/node + a no-op fallback when GLITCHTIP_DSN is unset)
- call initErrorTracking() at the top of services/mana-auth/src/index.ts
before the rest of the module body executes — this lets Sentry hook
uncaughtException / unhandledRejection before any Hono handlers register
- wrap app.onError so non-HTTPException throws also flow into
captureException with path/method/query context. HTTPExceptions are
intentional 4xx/422 and stay out of the issue list (otherwise every
401 from a stale session would page somebody at 3am)
- compose: pass GLITCHTIP_DSN_MANA_PLATFORM through as GLITCHTIP_DSN per
service so each container's events get tagged with serverName='mana-auth'
The DSN itself isn't in the repo; it lives in .env.macmini on the Mac
Mini and is referenced from the Glitchtip credentials doc.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The cards-spinoff commit (0a544ac41) added @mana/cards-core as a
workspace dependency for apps/mana/apps/web but didn't update the
two Dockerfiles that COPY-and-pnpm-install the workspace into the
image. CD's --no-cache build for mana-web therefore failed at
`pnpm install` with ERR_PNPM_WORKSPACE_PKG_NOT_FOUND, leaving the
container on a stale pre-cleanup image whose ListView28 chunk still
referenced the dropped contextSpaces Dexie table — every mana.how
route 500'd.
Adding the COPY line to both files (the shared sveltekit-base layer
and the per-app layer that does a second pnpm install) makes the
package available to the workspace resolver and lets the build go
through.
Plus the Phase 2c-d doc updates that piled up today (Glitchtip
on dedicated GPU-box stack, gitignore for *_CREDENTIALS.md files).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The Phase-1 crypto wrapper is a no-op stub; it never actually imports
@mana/shared-crypto. The dep was forward-looking for the Phase-2 vault
wiring, but it broke `pnpm install` inside the cards-web Dockerfile
because the sveltekit-base image only ships a curated subset of
@mana/* packages and shared-crypto isn't one of them.
The wiring will come back when the vault roundtrip is on, together
with a base-image bump.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Pre-push hook runs svelte-check with --fail-on-warnings; nine
long-standing warnings in unrelated files (forms / website-blocks)
were blocking otherwise-clean pushes.
Each <label> here is a visual label whose control follows on the next
line — accessible to a screen reader through proximity but not through
a `for=`/`id` association. The state_referenced_locally cases capture
a prop on first render by design (re-running the hook on prop change
would be a different feature). The <nav role=tablist> is the existing
tab-strip semantic.
All seven sites get scoped svelte-ignore comments rather than functional
rewrites — the goal is to unblock CI, not redesign these components.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Builds out the Cards spinoff end-to-end so the standalone app at
cards.mana.how shares its data layer with the in-mana cards module
through a single pure-utility package.
Why a spinoff and not just a deeper module: per the GUIDELINES, Cards
gets its own brand + URL while reusing mana-auth, mana-sync, and the
mana-credits/billing stack. The in-mana module under mana.how/cards
stays untouched as the integrated experience.
Phase 0 — mana-modul foundation
• New tables cardReviews + cardStudyBlocks (Dexie v61) + plaintext
classification in the crypto registry.
• LocalCard learns a {type, fields} shape; legacy front/back columns
kept as a back-compat mirror so older builds keep rendering.
• FSRS v6 scheduler + Cloze parser + Markdown render pipeline.
• UI in apps/mana/.../routes/(app)/cards/ gets a learn session
(learn/[deckId]), 4-type card editor, due-counter, markdown lists.
Phase 1 — standalone (apps/cards/apps/web)
• SvelteKit 2 + Svelte 5 + Tailwind 4, port 5180.
• Own Dexie 'cards' DB with a slim 5-table schema.
• Own sync engine: pending-changes hooks, 1 s push / 5 s pull against
POST /sync/cards, server-apply with suppression to avoid ping-pong.
• Auth-Gate via @mana/shared-auth-ui (LoginPage / RegisterPage).
• Encryption hooks at every write/read/apply path, currently no-op
stubs — flipping to real vault-backed AES-GCM is a single-file
change in src/lib/data/crypto.ts.
Shared package — @mana/cards-core
• Pulls types, cloze, card-reviews, FSRS wrapper, and Markdown
renderer out of the mana module so both frontends import from one
source. mana-modul keeps thin re-export shims so consumers don't
need to change imports.
• 19 vitest tests carried over from the mana module.
Server-side wiring
• cards.mana.how added to mana-auth PRODUCTION_TRUSTED_ORIGINS and
its CORS_ORIGINS env (sso-config.spec.ts stays green).
• New cards-web container in docker-compose.macmini.yml (mirrors
manavoxel-web pattern, 128m, depends on mana-auth healthy).
• cloudflared-config.yml repoints cards.mana.how from :5000 (the
unified mana-web container) to :5180. mana.how/cards is unchanged.
Cleanup
• Removed an unrelated 2026-03/04 NestJS+Supabase+Expo experiment
that was lingering under apps/cards/ (apps/landing, supabase/,
.github/workflows, MANA_CORE_*.md, etc.). It predated this plan
and would have confused future readers.
Validation
• svelte-check on mana-web: 0 errors over 7697 files
• svelte-check on cards-web: 0 errors over 3481 files
• vitest on cards-core: 19/19 pass
• pnpm check:crypto: 214 tables classified
• bun test sso-config.spec.ts: 8/8 pass
• vite build on cards-web: green
Not done in this commit (deliberate)
• Real encryption (vault roundtrip) — Phase 2.
• WebSocket-driven pull (5 s polling for now).
• Mobile/landing standalone surfaces — Phase 2/3.
• The actual production cutover on the Mac mini (build, deploy,
cloudflared sync) — config is staged, deploy is a user action.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Binding guidelines for the Cards spinoff (a flashcard app with spaced
repetition). Status: planning phase, no code yet. The doc serves as
non-negotiable context for PRs once building starts.
Key decisions:
- Game-dev principle: Phase 1 builds ONLY the core game loop (the learn
  session). AI generation, voice, sharing, Stripe, mobile, dashboards =
  Phase 2+.
- Open-source only: every dep needs an OSI-conformant license.
- The central Mana building blocks are mandatory — no home-grown
  auth/sync/analytics.
- Data contract with the existing mana module: same Postgres tables
  (cardDecks/cards + new cardReviews/cardStudyBlocks), appId='cards'.
  Schema changes roll out jointly, never unilaterally.
- FSRS v6 via ts-fsrs for the spaced-repetition algorithm.
- Phase 1 has no service of its own — the read/write path goes
  exclusively through IndexedDB → mana-sync → Postgres.
The Definition of Done in §7 is the acceptance list for the MVP.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
memoro has been its own repo for a while now (Code/memoro/) with its
own Compose stack on the Mini (~/projects/memoro-deploy/). The tunnel
nevertheless still pointed at the unified mana web app (port 5000) —
i.e. memoro.mana.how rendered only the Mana dashboard, not the actual
Memoro marketing landing page.
Four hostnames in a dedicated Memoro block:
memoro.mana.how → :3120 (Astro landing, marketing site)
memoro-app.mana.how → :3130 (SvelteKit SPA, web app)
memoro-api.mana.how → :3110 (API)
memoro-audio.mana.how → :3101 (audio service)
memoro-app vs memoro kept at first subdomain depth so Cloudflare
Universal SSL applies without a wildcard config.
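For reference, the Memoro block in the tunnel config would have this shape (rule layout per cloudflared's standard ingress config; only the hostnames and ports are from this change, the catch-all rule is the usual boilerplate):

```yaml
# cloudflared-config.yml — dedicated Memoro ingress block (sketch)
ingress:
  - hostname: memoro.mana.how
    service: http://localhost:3120   # Astro landing, marketing site
  - hostname: memoro-app.mana.how
    service: http://localhost:3130   # SvelteKit SPA, web app
  - hostname: memoro-api.mana.how
    service: http://localhost:3110   # API
  - hostname: memoro-audio.mana.how
    service: http://localhost:3101   # audio service
  - service: http_status:404         # catch-all
```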
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Auxiliary services (monitoring, Forgejo, Glitchtip, Umami) move from
the load-critical Mac Mini box to the Windows GPU box, which sits at
95% idle system RAM anyway. The production hot path stays on the
Mini, no money spent, and the single point of failure at the site is
reduced.
Status 2026-05-06: Phases 0–2b shipped (WSL2 Docker, Grafana
cross-box, Forgejo, Umami healthy). Phase 2c (Loki + VM + alerts) and
Phase 4 (Cloudflare cutover for grafana.mana.how) need their own
sessions — both are cleanup of pre-existing misconfiguration, no
architectural risk.
Hardware inventory added to WINDOWS_GPU_SERVER_SETUP.md: Ryzen 9
5950X, 64 GB DDR4, RTX 3090, 660 GB free on C:. WSL2 capped at
24 GB / 12 vCPUs so AI scheduled tasks keep a > 30 GB reserve.
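The 24 GB / 12 vCPU cap is set host-side. Assuming the standard WSL2 mechanism, the fragment in %UserProfile%\.wslconfig reads:

```ini
; %UserProfile%\.wslconfig on the GPU box (section/key names per the
; documented WSL2 config format; values from this change)
[wsl2]
memory=24GB
processors=12
```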
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The platform repo (Code/mana/) reserves 3065 for mana-media; to avoid
a double allocation, mana-events (public RSVP / event sharing) moves
to 3115. The new 311x port block is unused and structurally belongs
next to mana-mail (3042) and the other 30xx service ports.
Touches every hard-coded 3065 default — server config, webapp config,
SSR routes (rsvp/[token], status), Playwright webserver setup, e2e
spec. PUBLIC_MANA_EVENTS_URL in .env.development pulls both variables
along.
PORT_SCHEMA.md now records the change with date + rationale — future
me shouldn't have to guess why this port breaks out of the 30xx
range.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the platform/product-split gap: HEAD's apps/api/src/index.ts
has referenced personasInternalRoutes / personasAdminRoutes since the
Forms M10d commit — but the implementation was not in the repo yet.
The build was structurally broken up to this point.
What moves from mana-auth to apps/api:
apps/api/src/modules/personas/
├── schema.ts — pgSchema('personas') with personas /
│                persona_actions / persona_feedback;
│                userId is plain text (a cross-DB FK to
│                mana-auth's auth.users no longer works post-split).
├── internal-routes.ts — service-key-gated GET /due, POST /:id/actions
│                and POST /:id/feedback. Append-only +
│                idempotent via deterministic row ids
│                (tickId-i-tool / tickId-module).
└── admin-routes.ts — admin-JWT-gated CRUD; calls mana-auth via
                /api/v1/admin/users + /api/v1/auth/register
                + /api/v1/internal/users/:id/persona-stamp
                for the user lifecycle.
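The deterministic row ids are what make the append-only routes replay-safe: posting the same tick twice produces the same primary keys, so duplicate inserts can simply be dropped. A sketch (helper names hypothetical):

```typescript
// Deterministic ids per the tickId-i-tool / tickId-module scheme:
// replaying a tick payload yields identical primary keys.
function actionRowId(tickId: string, index: number, tool: string): string {
  return `${tickId}-${index}-${tool}`;
}

function feedbackRowId(tickId: string, module: string): string {
  return `${tickId}-${module}`;
}
```

With ids like these, the insert can rely on ON CONFLICT DO NOTHING instead of a read-before-write existence check.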
The persona-runner client now points at apps/api:
- config.ts: new apiUrl field (default http://localhost:3060,
  env MANA_API_URL); authUrl stays for /api/v1/auth/login + spaces.
- clients/mana-auth-internal.ts: three calls now hit
  /api/v1/personas/internal/* instead of mana-auth's
  /api/v1/internal/personas/* — the file name stays to keep the
  call-site diff small.
- index.ts: ManaAuthInternalClient gets config.apiUrl instead of authUrl.
Seed/cleanup scripts:
- --api= as the preferred flag, --auth= as a legacy alias (cached
  shell history would otherwise break hard).
- default http://localhost:3060, env MANA_API_URL.
- endpoint paths rewritten:
  POST /api/v1/admin/personas → /api/v1/personas/admin
  DELETE /api/v1/admin/personas/:id → /api/v1/personas/admin/:id
drizzle.config.ts: schema array + schemaFilter extended with
'personas'. A DB push is a mandatory step before first boot,
otherwise 42P01 on /due.
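The drizzle.config.ts change presumably looks like this (a sketch of the two touched fields only; the paths and the rest of the config are assumptions, not the shipped file):

```ts
// drizzle.config.ts (sketch; only schema/schemaFilter reflect this commit)
import { defineConfig } from 'drizzle-kit';

export default defineConfig({
  dialect: 'postgresql',
  // 'personas' entries added alongside the existing ones
  schema: ['./src/modules/**/schema.ts'],
  schemaFilter: ['public', 'personas'],
  dbCredentials: { url: process.env.DATABASE_URL! },
});
```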
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Real server-side cron for recurring forms — wave sends now run
independently of the owner's tab state. The previous M10c webapp-side
scheduler stays active as belt and suspenders (idempotent).
Architecture:
1. **Owner-private internal_meta on unlisted snapshots**
   - Drizzle: new jsonb column `internal_meta` (Drizzle migration
     0001_internal_meta.sql).
   - public-routes.ts strips it structurally — the explicit select()
     projection does not include it (recipients + sender would
     otherwise leak via the share link).
   - The publish route accepts it in the body and persists it on
     insert + update.
   - ALLOWED_COLLECTIONS extended with 'lasts' and 'forms' (a latent
     bug — formsStore.setVisibility('unlisted') would have gotten a
     400 without this addition; M4b presumably never ran end-to-end).
2. **shared-privacy publishUnlistedSnapshot**
   - PublishUnlistedOptions extended with an optional `internalMeta`.
     Forwarded to the /api/v1/unlisted/:collection/:recordId body.
3. **Webapp formsStore**
   - lib/wave-mail.ts: buildFormInternalMeta(form, broadcastSettings)
     builds the owner-private blob: { kind, recurrence: {frequency,
     recipientEmails, lastSentAt}, sender: {fromEmail, fromName,
     replyTo, legalAddress}, formMeta: {title, description} }.
     Returns null when preconditions are missing (no recurrence, no
     recipients, missing broadcast settings).
   - stores/forms.svelte.ts: setVisibility / regenerateUnlistedToken /
     setUnlistedExpiry load broadcastSettings via Dexie + decrypt,
     build internalMeta, and pass it to publishUnlistedSnapshot. The
     form is decrypted before the buildFormInternalMeta call.
4. **mana-mail internal bulk-send route**
   - createInternalRoutes(accountService, broadcastOrchestrator,
     maxRecipients) — signature extended.
   - New POST /api/v1/internal/mail/bulk-send: same payload shape as
     the user-facing /v1/mail/bulk-send, but userId comes from the
     body instead of the JWT. The X-Service-Key gate sits on the
     /api/v1/internal/* prefix. The audit trail carries principalId
     from the body. Cap = 5000 (same value as user-facing).
5. **apps/api forms wave worker**
   - 5-min setInterval, advisory-lock-gated (key 0x464f5257 'FORW').
   - Tick: select snapshots WHERE collection='forms' AND
     internal_meta IS NOT NULL AND revoked_at IS NULL. Filter on
     kind='forms-recurrence' + isWaveDue (lastSentAt + period <= now,
     never-sent fires immediately). For each due snapshot: build the
     HTML/text mail body (mirrors the webapp wave-mail render), POST
     to the mana-mail internal bulk-send with X-Service-Key + userId,
     then jsonb_set on internal_meta.recurrence.lastSentAt.
     Per-snapshot errors are logged as console.warn; the tick keeps
     running.
   - Disable via FORMS_WAVE_WORKER_DISABLED=true (tests /
     multi-replica deployments).
   - Wired into apps/api/src/index.ts next to startArticleImportWorker().
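The worker's due check and lock key can be pinned down in a few lines. A minimal sketch: the isWaveDue semantics are stated in this commit, while the fixed 30-day month and the lockKey helper are assumptions, not the shipped code:

```typescript
// Due check per this commit: lastSentAt + period <= now,
// and a never-sent wave fires on the first tick.
const PERIOD_MS: Record<'weekly' | 'monthly', number> = {
  weekly: 7 * 86_400_000,
  monthly: 30 * 86_400_000, // assumption: fixed 30-day month
};

function isWaveDue(
  lastSentAt: number | null,
  frequency: 'weekly' | 'monthly',
  now: number,
): boolean {
  if (lastSentAt === null) return true; // never sent → fire immediately
  return lastSentAt + PERIOD_MS[frequency] <= now;
}

// The advisory-lock key is just the ASCII bytes of 'FORW' packed big-endian.
function lockKey(tag: string): number {
  return [...tag].reduce((acc, ch) => acc * 256 + ch.charCodeAt(0), 0);
}
// lockKey('FORW') === 0x464f5257
```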
Trade-offs:
- internal_meta is rebuilt from broadcast settings on
  setVisibility/regenerate/setExpiry — if the user later changes
  broadcast settings (e.g. a new fromEmail), they must re-publish the
  form so the snapshot's internal_meta is updated. Documented: a
  future patch could surface a "settings drift" warning in the UI.
- The worker's lastSentAt update does NOT flow back into the webapp
  form (settings.recurrence.lastSentAt is encrypted; the server
  cannot write it). The owner UI shows the older lastSentAt from
  manual sends; auto-cron sends are visible in the server logs.
  Future patch: GET /api/v1/forms/:id/recurrence-status (auth)
  returns the snapshot.internal_meta and the UI renders the
  auto-cron state.
- The webapp-side wave scheduler (M10c) keeps running in parallel —
  with the owner's tab open, both can fire. Idempotent via the
  lastSentAt check (weekly/monthly buckets), but in theory a double
  fire could happen if the calls land within 1 ms of each other.
  Ignorable in the real world; future patch: the scheduler reads
  internal_meta.lastSentAt from the server-side state.
apps/api builds (1776 modules). mana-mail builds (523 modules).
svelte-check 0 errors in forms/. Forms tests 70/70 unchanged.
DB migration 0001_internal_meta.sql must be applied manually (see
feedback memory: hand-authored SQL migrations are not part of
pnpm setup:db).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Completes M7: in addition to contacts (M7a) and event RSVPs (M7b),
form responses now also create library entries and space
invitations. feedback deliberately stays out of the UI —
architecture mismatch.
- types.ts:
  - AutoSyncConfig extended with optional `libraryKind`
    ('book'|'movie'|'series'|'comic') and `spaceMemberRole`
    ('member'|'admin', default 'member').
  - The Form domain type gets `spaceId: string` (existed internally
    on LocalForm, now exposed through toForm). Needed because the
    space_member invite has to send the organizationId of the form
    owner's space.
- queries.ts toForm: map spaceId from LocalForm.spaceId, fallback ''.
- lib/auto-sync.ts:
  - buildLibraryEntryFromAnswers (pure): maps title / creators /
    year / review. creators strings are split on , ; \n
    (multi-author mapping). year bounds-checked to 1900..2100.
  - buildSpaceInviteFromAnswers (pure): finds the first form field
    with mapping='email', validates it with a loose regex, returns
    an {email} payload.
  - dispatchTarget('library'): throws when libraryKind is missing;
    calls libraryEntriesStore.createEntry with
    kind+title+creators+year+review.
  - dispatchTarget('space_member'): throws when form.spaceId is
    missing; POSTs to /api/auth/organization/invite-member via
    authFetch with the role from cfg.spaceMemberRole. Returns
    invitation.id or the fallback `invite:<email>` (the better-auth
    response shape can vary by version).
  - dispatchTarget('feedback') now throws with a clear comment:
    architecture mismatch — feedback is a central public hub, not
    per-owner data. The UI filters the option out.
  - applyAutoSync passes `form` through to dispatchTarget (instead
    of only cfg/answers) so the space invite has the spaceId.
- lib/auto-sync.spec.ts: 9 more tests (4 library: title/creators/
  year-bounds/empty, 5 space: extract/malformed/non-mapped/
  no-mapping/non-string). Total forms tests now 70/70.
- SettingsPanel:
  - SUPPORTED_TARGETS extended to [contacts, events, library,
    space_member]. feedback does NOT appear — the type is kept for
    legacy data, but the UI no longer offers it.
  - Library block: kind picker (book/movie/series/comic) +
    LIBRARY_KEYS mapping (title, creators, year, review).
  - Space-member block: role picker (member/admin) + SPACE_KEYS
    mapping (only 'email'). Hint: map exactly one field.
  - setMappingFor now preserves all target-specific fields
    (targetId, libraryKind, spaceMemberRole) so a mapping edit does
    not drop the rest.
- 25 new i18n keys × 5 locales (autoSync.targetLibrary/SpaceMember,
  libraryKindPicker/libraryKind.*, libraryKey.*, libraryHint,
  spaceMemberRolePicker/RoleMember/RoleAdmin/Hint/MappingHint).
  Parity 6515 keys aligned.
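The two pure builders are easy to pin down in isolation. A sketch of the splitting and bounds rules (the commit only states the delimiters and the 1900..2100 bounds; the trimming and reject-vs-clamp behavior here is an assumption):

```typescript
// Split a creators string on comma, semicolon, or newline
// (multi-author mapping), dropping empty fragments.
function splitCreators(raw: string): string[] {
  return raw
    .split(/[,;\n]/)
    .map((s) => s.trim())
    .filter((s) => s.length > 0);
}

// Parse a year answer; values outside 1900..2100 are rejected
// (assumed: reject rather than clamp).
function parseYear(raw: string): number | null {
  const y = Number.parseInt(raw, 10);
  if (Number.isNaN(y) || y < 1900 || y > 2100) return null;
  return y;
}
```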
Trade-offs:
- Library auto-sync creates one entry per response. Deduplication
  (the same title already exists) remains a manual user workflow —
  auto-sync has no knowledge of the existing library.
- The space-invite flow is asynchronous: the submitter gets a Mana
  mail with an invite link, clicks it → becomes a member. Non-Mana
  identities have to register first. The owner sees the pending
  state under /spaces.
- feedback: deliberately not implemented. Dumping form responses
  into public feedback would be semantically wrong (the owner
  collects for themselves, not for publication).
Forms tests 70/70. svelte-check 0 errors. apps/api unchanged.
i18n parity 6515 keys × 5 locales aligned.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Killer feature for conversation mode (M9): the user can answer
choice/yes_no/rating fields in their own words ("I'll take the
second suggestion" / "sure, count me in" / "about 4 out of 5"), and
an LLM maps that onto the strict option ID / boolean / integer.
- apps/api/modules/forms/public-routes.ts: new
  POST /api/v1/forms/public/:token/conversation/extract endpoint.
  Rate-limited (30/min/token + 60/min/IP — owner-side costs for
  haiku despite the unauthenticated path). freeText hard-capped at
  1000 characters. Token resolved via unlistedSnapshots; fieldId
  must exist in the published schema. Dispatch:
  - text/email/number/date: passthrough (the free text IS the answer)
  - single_choice/multi_choice/yes_no/rating: mana-llm haiku call
    with a field-specific system prompt + JSON-only output; the
    parser validates option IDs against the schema (hallucination
    guard).
  Response { extracted, confidence: 'high' | 'low', alternatives? }.
  confidence='low' when the LLM is unsure → the client shows a
  warning in the preview block and the user can pick manually.
- ConversationFormView: collapsible <details> "Lieber in eigenen
  Worten antworten?" block below the quick-reply buttons of all
  choice/yes_no/rating fields. The user types free text →
  "Verstehen" calls the endpoint → preview card with the recognized
  answer (teal = high confidence, amber = low confidence) →
  "Übernehmen" or "Abbrechen". commitExtract triggers
  setAnswerAndAdvance and runs through the same path as a
  quick-reply click.
Schema validation in the parser:
- single_choice: optionId must be in field.options, otherwise null
- multi_choice: keeps only valid IDs, the array may be empty
- yes_no: only true/false/null allowed
- rating: round(value), bounds-checked to 1..ratingScale
LLM call:
- model claude-haiku-4-5 (cheapest)
- temperature 0 (deterministic)
- maxTokens 200 (the JSON output is small)
- markdown code-fence strip for robust JSON parsing
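The fence strip plus the schema check can be sketched in a few lines (function names hypothetical; only the behavior described above is from this commit):

```typescript
// Strip an optional markdown code fence before JSON.parse: haiku
// sometimes wraps JSON in ```json ... ``` despite a JSON-only prompt.
function stripCodeFence(raw: string): string {
  const m = raw.trim().match(/^```(?:json)?\s*([\s\S]*?)\s*```$/);
  return m ? m[1] : raw.trim();
}

// Hallucination guard for single_choice: the extracted optionId must
// exist in the published field schema, otherwise fall back to null.
function validateSingleChoice(
  optionId: unknown,
  options: { id: string }[],
): string | null {
  return typeof optionId === 'string' && options.some((o) => o.id === optionId)
    ? optionId
    : null;
}
```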
Trade-offs:
- Public endpoint = ungated LLM spend for the form owner. Rate
  limits + the freeText cap mitigate spam, but 30 calls/min ×
  200 tokens = moderate cost per form. The owner should keep that
  in mind.
- confidence='low' escalates to user visibility but does not break
  the flow — the user can apply or cancel.
Forms tests 61/61 unchanged (extract needs a live LLM for E2E,
deliberately no vitest mock). svelte-check 0 errors. apps/api
builds (1772 modules).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Public form variant as a linear chat flow (M9, KF4 from the plan).
The owner chooses in the builder between "classic" (all fields at
once, the M4b view) and "conversation" (one question at a time,
mobile-friendly).
LLM-backed free-text → typed-answer extraction (e.g. "I'll take the
second suggestion" → option id) remains M9b — the current
implementation uses typed widgets per field type for a deterministic
first pass.
- types.ts: FormSettings.experience: 'classic' | 'conversation'
  (default 'classic'). Travels encrypted inside the settings blob.
- data/unlisted/resolvers.ts: buildFormBlob whitelists experience
  into the public snapshot — just an enum, no PII.
- SharedFormView (M4b) stays the classic renderer.
- ConversationFormView (new, ~600 lines):
  - Linear: stepIndex steps through the visible subset from
    resolveVisibleFields (same branching resolver as classic).
  - Per step: question bubble + a field-type-specific widget:
    short_text/long_text/email/number → free-text input with
    Enter submit, date → datepicker, yes_no → 2 quick-reply
    buttons, rating → scale buttons, single_choice → vertical
    quick-reply list, multi_choice → toggle chips + "Weiter",
    section → a "Verstanden" step, consent → yes (/no optional).
  - Answer bubble after submit; "← Vorherige" drops the last Q/A
    pair and clears the answer so the branching resolver recomputes
    the next step.
  - Final step: submitter name + email (optional) + the existing
    POST /api/v1/forms/public/:token/submit.
  - Progress bar on top, "via Mana Forms" footer.
- routes/share/[token]/+page.svelte: dispatches on the experience
  value for collection='forms' — 'conversation' →
  ConversationFormView, otherwise SharedFormView.
- SettingsPanel: dropdown below the anonymous toggle, de/en/es/
  fr/it (15 new i18n keys × 5 locales = 6498 keys aligned).
Trade-offs:
- Branching reacts per step: if the user goes back from a later
  question and changes the source field of a hide rule, the path
  rendered in the meantime falls away — a new question may surface
  as "next". Documented as a linear "tree walk" rather than a
  WYSIWYG snapshot, the usual approach for Typeform clones.
- Without LLM extraction (M9b) the quick replies aren't fluid; that
  is intent: deterministic > magical for first ship.
Forms tests 61/61. svelte-check 0 errors.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Headless wave send while the Mana tab is open (M10c). A real server
cron (mana-ai or mana-notify) remains M10d.
- lib/wave-mail.ts: sendWaveViaBulkMail POSTs to
  /api/v1/mail/bulk-send with a form-derived payload (subject =
  "{title} — {cohort}", htmlBody with inline CSS + share-link
  button + Impressum + unsubscribe footer, textBody plain).
  campaignId = form-{formId}-{cohort} (idempotent across retries).
  Throws WavePreconditionError when fromEmail/fromName/legalAddress
  are missing or the recipient list is empty — the caller falls
  back to the mailto bridge.
- lib/wave-scheduler.ts: singleton setInterval (5 min,
  page-visibility-aware — pauses while hidden), the tick scans
  formTable, decrypt-aware, filters
  published+token+recurrence+recipients+due, calls
  sendWaveViaBulkMail + markWaveSent. Does not throw — per-form
  errors are logged as console.warn and the schedule keeps running.
  Initial tick 30 s after start so the Monday-morning wave does not
  have to wait 5 minutes. start/stop idempotent.
- BuilderView.sendWave: tries bulk send first (when broadcast
  settings are configured = defaultFromEmail + legalAddress), falls
  back to the mailto bridge (M10b) when a precondition is missing.
  waveError state for non-precondition errors. The confirm dialog
  now has two texts (confirmBulk vs confirmSend).
- (app)/+layout.svelte: startWaveScheduler() next to
  startMissionTick() on auth-ready, stopWaveScheduler() in
  onDestroy.
- 5 new i18n keys × 5 locales
  (forms.builder.recurrence.confirmBulk). Parity 6495.
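The retry idempotency hinges on the cohort string inside campaignId. A sketch, under the assumption that cohorts are ISO-week buckets for weekly waves and YYYY-MM for monthly ones (the exact bucket format is not stated in this commit):

```typescript
// Bucket a date into a cohort string: same wave → same cohort →
// same campaignId, so retries dedupe downstream.
function cohortBucket(d: Date, frequency: 'weekly' | 'monthly'): string {
  if (frequency === 'monthly') {
    return `${d.getUTCFullYear()}-${String(d.getUTCMonth() + 1).padStart(2, '0')}`;
  }
  // ISO 8601 week number: shift to the Thursday of the current week.
  const t = new Date(Date.UTC(d.getUTCFullYear(), d.getUTCMonth(), d.getUTCDate()));
  t.setUTCDate(t.getUTCDate() + 4 - (t.getUTCDay() || 7));
  const week = Math.ceil(
    ((t.getTime() - Date.UTC(t.getUTCFullYear(), 0, 1)) / 86_400_000 + 1) / 7,
  );
  return `${t.getUTCFullYear()}-W${String(week).padStart(2, '0')}`;
}

function campaignId(formId: string, cohort: string): string {
  return `form-${formId}-${cohort}`; // identical on retry → orchestrator dedupes
}
```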
Trade-offs:
- The auto-tick only runs while the tab is open — headless cron via
  a mana-ai mission or a mana-notify worker remains M10d.
- Bulk send bypasses the broadcasts campaign pipeline (no audience
  filter, no rich editor) — intentional for form waves as short
  transactional notifications.
- GDPR: Impressum + unsubscribe footer ({{unsubscribe_url}} is
  replaced per recipient by the orchestrator) are mandatory via
  WavePreconditionError; the mailto fallback lacks that — user
  risk.
Forms tests 61/61 unchanged. svelte-check 0 errors.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
#16 — Help-content entry for articles
The articles module had no entry in `app-registry/help-content.ts`
so the (?) icon in ModuleShell rendered an empty body. Added
description + 9 features + 4 tips covering Reader-View, Highlights,
Bookmarklet, Share-Target, and the new bulk-import flow with the
"Erneut speichern" rescue path for cookie-walled hits.
#10 — Console statements marked intentional
The 5 console.log/warn/error calls in import-worker.ts (boot,
tick errors, GC summary, stale-recovery sweep) were ESLint
warnings. They're intentional operational logs — same pattern as
services/mana-ai/src/cron/tick.ts. Added file-level
`/* eslint-disable no-console */` with a comment explaining the
pattern + that structured signal lives in Prometheus counters.
#17 — Full 5-locale i18n for the bulk-import UI
New `articles.import` namespace with 50 keys covering the
BulkImportForm, JobsList, JobDetailView, and AddUrlForm bulk-link.
All five locales translated by hand:
- de.json (canonical, mirrors the original hardcoded German)
- en.json (English)
- fr.json (French — bookmarklet → "bookmarklet HTML du navigateur")
- it.json (Italian — bookmarklet → "bookmarklet HTML del browser")
- es.json (Spanish — bookmarklet → "bookmarklet HTML del navegador")
Plural-aware `consent_hint_body` uses ICU plural format
(`{n, plural, one {…} other {…}}`) so single-vs-multiple article
counts read naturally in each language.
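As an illustration, the plural-aware key in de.json has this shape (standard ICU MessageFormat plural syntax; the German wording here is illustrative, not the shipped copy):

```json
{
  "articles": {
    "import": {
      "consent_hint_body": "{n, plural, one {# Artikel wird gespeichert} other {# Artikel werden gespeichert}}"
    }
  }
}
```

`#` is replaced with the value of `n` at format time, and each locale file carries its own plural branches, so French/Italian/Spanish can diverge from the German structure where their plural rules require it.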
The consent-hint sentence is split into 3 keys (body/link/after-link)
so the link text appears mid-sentence rather than tacked on after.
Components converted to `$_('articles.import.*')` everywhere — no
remaining hardcoded strings in the bulk-import UI.
i18n parity validator: 76 namespaces × 5 locales — 6477 canonical
keys, all aligned. validate:i18n-hardcoded baseline unchanged for
articles files (broadcasts/notes/timeline failures are user-WIP).
Plan: docs/plans/articles-bulk-import.md.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two new public hostnames pointing at containers that live in the
separate mana-platform repo (Code/mana, ~/projects/mana-platform on
the Mac Mini):
- admin.mana.how → :3071 (mana-admin, Verein backoffice)
- npm.mana.how → :4873 (Verdaccio, private @mana/* npm registry)
Both deployed alongside the legacy stack via
infrastructure/docker-compose.macmini.yml in the mana-platform repo.
No change to existing routes.
Home-dashboard card for the forms module, parallel to
BroadcastsWidget / InvoicesOpenWidget:
- modules/forms/widgets/FormsWidget.svelte: 3-column stats
  (published / draft / total responses + "+N/7T" delta for the last
  7 days), up to 2 most recently updated forms with a status dot
  (green = published, grey = anything else) + response count +
  relative time, and a "+N weitere" link when more than 2 forms
  exist. Empty state with "+ Erstes Formular bauen". Live from
  Dexie via 2 parallel liveQuery subscriptions (forms +
  formResponses).
- types/dashboard.ts: WidgetType union extended with 'forms';
  WIDGET_REGISTRY entry with defaultSize 'medium' + 📋 icon.
- components/dashboard/widget-registry.ts: FormsWidget imported +
  registered in the widgetComponents map.
- 5 locales × 2 dashboard keys (forms.title + forms.description).
The app-registry entry for /forms in app-registry/apps.ts already
exists (parallel session). FormsWidget is the _aggregated_
home-dashboard variant; the app-registry entry mounts the ListView
directly as a module card.
i18n parity 6417 keys aligned. svelte-check 0 errors in
modules/forms/widgets/.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Polish-pass on top of the bulk-import rollout. Five contained items.
#8 + #9 — Dexie v60 schema cleanup
- Drop articleImportJobs.leasedBy + .leasedUntil. They were defined
on the original v57 schema as a soft-lease handshake, but the
worker uses pg_try_advisory_xact_lock and never wrote them.
Local-* type + projection row stripped.
- Drop the standalone `state` index on articleImportItems.
[jobId+state] covers the worker's hot query; the state-solo
index had no call site.
Both changes are lossless — Dexie simply drops the declarations for
new rows; existing rows still carry the dead nulls (zombies) until
the next full row rewrite. Not worth a hard migration for two
never-written columns.
#15 — MAX_URLS_PER_JOB hard cap (200)
articleImportsStore.createJob() throws if the URL list exceeds the
cap. BulkImportForm surfaces the limit in the live counter chip
and disables submit when over. The worker can chew through any N,
but at high counts the UI gets unwieldy (no virtualisation) and
wall-clock duration climbs into the multi-hour range. 200 is a
pragmatic ceiling — Pocket-export dumps average 50–150 URLs.
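The guard itself is a one-liner in the store. A sketch (the constant name is from this commit; the surrounding createJob code and the error wording are assumed):

```typescript
// Hard cap on URLs per import job; createJob() throws when exceeded
// and BulkImportForm disables submit before it gets this far.
const MAX_URLS_PER_JOB = 200;

function assertJobSize(urls: string[]): void {
  if (urls.length > MAX_URLS_PER_JOB) {
    throw new Error(
      `Job exceeds ${MAX_URLS_PER_JOB} URLs (got ${urls.length})`,
    );
  }
}
```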
#13 — Filter-Tabs in JobsList
Pill-style tabs above the list: Alle / Aktiv / Fertig / Mit Fehlern,
each with the row count. Disabled when the bucket is empty so the
user only sees actionable filters. The "Mit Fehlern" filter
(errorCount > 0) is the most valuable for triage.
#18 — apps/mana/CLAUDE.md
- Articles row added to the Tool Coverage table (5 propose +
1 auto, including the new auto-policy import_articles_from_urls).
- New "Articles bulk-import" section after the AI Workbench part:
pipeline diagram, table list, actor + metrics + cap pointers.
#20 — ARTICLES_IMPORT_WORKER_DISABLED env var documented
New row under "Mana API — Articles Bulk-Import Worker" in
docs/ENVIRONMENT_VARIABLES.md.
Plan: docs/plans/articles-bulk-import.md.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>